
ifaTools (version 0.23)

uniquenessPrior: Uniqueness prior to assist in item factor analysis

Description

To prevent Heywood cases, Bock, Gibbons, & Muraki (1988) suggested a beta prior on each item's uniqueness (their Equations 43-46). The analytic gradient and Hessian of the prior are included so that it can be optimized quickly with Newton-Raphson.
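
For a single two-factor item with slopes c and d, the penalty takes the form -strength * log(uniqueness) in deviance units (see the Maxima code under Details); it grows without bound as the item's communality approaches 1, which is what discourages Heywood cases. A minimal sketch with arbitrary slope values:

p <- 0.1  # prior strength
penalty <- function(c, d) {
  uniqueness <- 1 - (c^2 + d^2) / (c^2 + d^2 + 1)
  -p * log(uniqueness)  # contribution in deviance units
}
penalty(1, 0.5)   # modest slopes, small penalty (about 0.08)
penalty(10, 0.5)  # extreme slopes, much larger penalty (about 0.46)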

Usage

uniquenessPrior(model, numFactors, strength = 0.1, name = "uniquenessPrior")

Arguments

model

an mxModel

numFactors

the number of factors. All items are assumed to have the same number of factors.

strength

the strength of the prior

name

the name of the mxModel that is returned

Value

an mxModel that evaluates to the prior density in deviance units

Details

To reproduce these derivatives in Maxima for the case of two slopes (c and d), use the following code:

f(c,d) := -p*log(1-(c^2 / (c^2+d^2+1) + (d^2 / (c^2+d^2+1))));

diff(f(c,d), d),radcan;

diff(diff(f(c,d), d),d),radcan;

The general pattern is given in Bock, Gibbons, & Muraki (1988).
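
As a sanity check (mine, not the package's): the expression inside the log simplifies to 1/(c^2 + d^2 + 1), so f reduces to p*log(c^2 + d^2 + 1) and the first derivative with respect to d is 2*p*d/(c^2 + d^2 + 1). This can be verified against a finite difference in R:

f <- function(p, c, d) -p * log(1 - (c^2 + d^2) / (c^2 + d^2 + 1))
p <- 0.1; c <- 1.2; d <- 0.5
analytic <- 2 * p * d / (c^2 + d^2 + 1)
numeric  <- (f(p, c, d + 1e-6) - f(p, c, d - 1e-6)) / 2e-6  # central difference
all.equal(analytic, numeric)  # should be TRUE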

References

Bock, R. D., Gibbons, R., & Muraki, E. (1988). Full-information item factor analysis. Applied Psychological Measurement, 12(3), 261-280.

Examples

library(ifaTools)  # attaches OpenMx and rpf, which the example uses

# Simulate six dichotomous items that each load on two factors
numItems <- 6
spec <- list()
spec[1:numItems] <- list(rpf.drm(factors=2))
names(spec) <- paste0("i", 1:numItems)
# Random starting values; label the two slope rows (in OpenMx, parameters
# that share a label are matched across submodels)
item <- mxMatrix(name="item", free=TRUE,
                 values=mxSimplify2Array(lapply(spec, rpf.rparam)))
item$labels[1:2,] <- paste0('p', 1:(numItems * 2))
data <- rpf.sample(100, spec, item$values)  # tiny sample for speed; use more rows in practice
# Item factor analysis model: BA81 (Bock & Aitkin) expectation with ML fit
m1 <- mxModel(model="m1", item,
              mxData(observed=data, type="raw"),
              mxExpectationBA81(spec),
              mxFitFunctionML())
# Build the prior for 2 factors; the multigroup fit sums m1's deviance
# and the prior's contribution
up <- uniquenessPrior(m1, 2)
container <- mxModel("container", m1, up,
                     mxFitFunctionMultigroup(c("m1", "uniquenessPrior")),
                     # evaluate the fit and gradient once; no optimization here
                     mxComputeSequence(list(
                       mxComputeOnce('fitfunction', c('fit','gradient')),
                       mxComputeReportDeriv())))
container <- mxRun(container)
container$output$fit       # deviance plus the prior's contribution
container$output$gradient  # analytic gradient, including the prior
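The plan above only evaluates the model once. As a follow-up sketch (not part of the original example, and compute-plan details vary across OpenMx versions), a typical way to actually optimize a BA81 model is to wrap Newton-Raphson inside an EM plan:

# Assumed plan: EM with a Newton-Raphson M-step; 'm1.expectation' is the
# path to the BA81 expectation inside the container
fitted <- mxModel(container,
                  mxComputeEM('m1.expectation', 'scores',
                              mxComputeNewtonRaphson()))
fitted <- mxRun(fitted)
summary(fitted)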
