Item Response Theory provides a number of alternative ways of estimating latent scores. Here we compare six different ways to estimate the latent variable associated with a pattern of responses. Originally developed as a test for scoreIrt, but perhaps useful for demonstration purposes. Items are simulated using sim.irt and then scored using factor scores from factor.scores, using statistics found using irt.fa, and using simple weighted models for the 1PL, 2PL, and 2PN cases. Results show almost perfect agreement with estimates from MIRT and ltm for the dichotomous case, and with MIRT for the polytomous case. (Results from ltm are unstable for the polytomous case, sometimes agreeing with scoreIrt and MIRT, sometimes being much worse.)
test.irt(nvar = 9, n.obs = 1000, mod = "logistic", type = "tetra", low = -3,
  high = 3, seed = NULL)
Number of variables to create (simulate) and score
Number of simulated subjects
"logistic" or "normal": whether the data are generated from a logistic or normal theory model
"tetra" for dichotomous, "poly" for polytomous
The lowest item difficulty (item difficulties range from low to high)
The highest item difficulty (item difficulties range from low to high)
Set the random number seed using some non-null value. Otherwise, use the existing sequence of random numbers.
A dataframe of scores as well as the generating theta true score. A graphic display of the correlations is also shown.
n.obs observations (0/1) on nvar variables are simulated using either a logistic or normal theory model. Then, a number of different scoring algorithms are applied and shown graphically. Requires the ltm package to be installed to compare ltm scores.
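The simulate-then-score pipeline that test.irt exercises can also be run step by step, which makes clear what is being compared. The following is a minimal sketch, assuming the psych package and its documented return structures (sim.irt returning a list with items and theta; scoreIrt returning a data frame whose first column, theta1, holds the estimated latent scores):

```r
library(psych)

set.seed(17)                              # fix the random sequence for reproducibility
sim <- sim.irt(nvar = 9, n = 1000,        # simulate 9 dichotomous items for 1000 subjects
               low = -3, high = -3 + 6,   # item difficulties from -3 to 3
               mod = "logistic")
stats <- irt.fa(sim$items, plot = FALSE)  # item statistics from a (tetrachoric) factor analysis
scores <- scoreIrt(stats, sim$items)      # IRT-based latent score estimates
cor(scores$theta1, sim$theta)             # agreement with the generating true score
```

The correlation in the last line corresponds to the theta column of the graphic display that test.irt produces.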
## Not run:
test.irt(9, 1000)
## End(Not run)