
semTools (version 0.5-3)

modindices.mi: Modification Indices for Multiple Imputations

Description

Modification indices (1-df Lagrange multiplier tests) from a latent variable model fitted to multiply imputed data sets. Statistics for releasing one or more fixed or constrained parameters in the model can be calculated by pooling the gradient and information matrices across imputed data sets using a method proposed by Mansolf, Jorgensen, & Enders (in press), analogous to the "D1" Wald test proposed by Li, Meng, Raghunathan, & Rubin (1991), or by pooling the complete-data score-test statistics across imputed data sets (i.e., "D2"; Li et al., 1991).

Usage

modindices.mi(object, test = c("D2", "D1"), omit.imps = c("no.conv",
  "no.se"), standardized = TRUE, cov.std = TRUE,
  information = "expected", power = FALSE, delta = 0.1, alpha = 0.05,
  high.power = 0.75, sort. = FALSE, minimum.value = 0,
  maximum.number = nrow(LIST), na.remove = TRUE, op = NULL)

modificationIndices.mi(object, test = c("D2", "D1"), omit.imps = c("no.conv",
  "no.se"), standardized = TRUE, cov.std = TRUE,
  information = "expected", power = FALSE, delta = 0.1, alpha = 0.05,
  high.power = 0.75, sort. = FALSE, minimum.value = 0,
  maximum.number = nrow(LIST), na.remove = TRUE, op = NULL)

Arguments

object

An object of class lavaan.mi

test

character indicating which pooling method to use. "D1" requests Mansolf, Jorgensen, & Enders' (in press) proposed Wald-like test, which pools the gradient and information matrices across imputations and uses them to calculate score-test statistics in the usual manner. "D2" (the default, because it is less computationally intensive) requests that complete-data score-test statistics be calculated from each imputed data set and then pooled across imputations, as described by Li et al. (1991) and Enders (2010).
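For example (a minimal sketch, assuming fit is a lavaan.mi object such as the one created in the Examples below), either pooling method can be requested explicitly:

modindices.mi(fit, test = "D2")  # pool complete-data score-test statistics (default)
modindices.mi(fit, test = "D1")  # pool the gradient and information matrices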

omit.imps

character vector specifying criteria for omitting imputations from pooled results. Can include any of c("no.conv", "no.se", "no.npd"); the first two are the default setting, which excludes any imputations that did not converge or for which standard errors could not be computed. The last option ("no.npd") would exclude any imputations that yielded a nonpositive definite covariance matrix for observed or latent variables, which would include any "improper solutions" such as Heywood cases. Specific imputation numbers can also be included in this argument, in case users want to apply their own custom omission criteria (or simulations can use different numbers of imputations without redundantly refitting the model).
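As a sketch (again assuming a fitted lavaan.mi object named fit), improper solutions could additionally be excluded, and specific imputation numbers could be added to the omission criteria (the numbers 11:20 here are purely hypothetical):

## also exclude imputations that yielded nonpositive definite solutions
modindices.mi(fit, omit.imps = c("no.conv", "no.se", "no.npd"))
## additionally omit specific (hypothetical) imputation numbers
modindices.mi(fit, omit.imps = c("no.conv", "no.se", 11:20))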

standardized

logical. If TRUE, two extra columns ($sepc.lv and $sepc.all) will contain standardized values for the EPCs. In the first column ($sepc.lv), standardization is based on the variances of the (continuous) latent variables. In the second column ($sepc.all), standardization is based on the variances of both the (continuous) observed and latent variables. (Residual) covariances are standardized using (residual) variances.
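For instance (a sketch assuming fit is a lavaan.mi object), the standardized EPC columns can be inspected alongside the unstandardized EPCs; the column names follow lavaan's modindices() output, although the exact set of columns may vary:

mi <- modindices.mi(fit, standardized = TRUE)
mi[ , c("lhs", "op", "rhs", "mi", "epc", "sepc.lv", "sepc.all")]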

cov.std

logical. TRUE if test == "D2". If TRUE (default), the (residual) observed covariances are scaled by the square-root of the diagonal elements of the \(\Theta\) matrix, and the (residual) latent covariances are scaled by the square-root of the diagonal elements of the \(\Psi\) matrix. If FALSE, the (residual) observed covariances are scaled by the square-root of the diagonal elements of the model-implied covariance matrix of observed variables (\(\Sigma\)), and the (residual) latent covariances are scaled by the square-root of the diagonal elements of the model-implied covariance matrix of the latent variables.

information

character indicating the type of information matrix to use (check lavInspect for available options). "expected" information is the default, which provides better control of Type I errors.

power

logical. If TRUE, the (post-hoc) power is computed for each modification index, using the values of delta and alpha.

delta

The value of the effect size used in the post-hoc power computation, currently expressed in the unstandardized metric of the $epc column.

alpha

The significance level used for deciding if the modification index is statistically significant or not.

high.power

If the computed power is higher than this cutoff value, the power is considered 'high'. If not, the power is considered 'low'. This affects the values in the $decision column in the output.
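A brief sketch combining the power-related arguments (assuming fit is a lavaan.mi object; the cutoff of 0.80 is just an illustrative choice, above the 0.75 default):

## post-hoc power for each modification index, using an effect size of 0.1,
## and flagging power above .80 as 'high' in the $decision column
modindices.mi(fit, power = TRUE, delta = 0.1, alpha = 0.05, high.power = 0.80)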

sort.

logical. If TRUE, sort the output by the modification index values, so that higher values appear first.

minimum.value

numeric. Filter the output to show only rows with a modification index equal to or greater than this minimum value.

maximum.number

integer. Filter the output to show only the first maximum.number of rows. Most useful when combined with the sort. option.

na.remove

logical. If TRUE (default), filter output by removing all rows with NA values for the modification indices.

op

character string. Filter the output by selecting only those rows with operator op.
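The filtering and sorting arguments can be combined; for example (a sketch, again assuming fit is a lavaan.mi object), to show only the ten largest modification indices among residual covariances that exceed the 1-df critical value of 3.84:

modindices.mi(fit, sort. = TRUE, maximum.number = 10,
              minimum.value = 3.84, op = "~~")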

Value

A data.frame containing modification indices and (S)EPCs.

References

Enders, C. K. (2010). Applied missing data analysis. New York, NY: Guilford.

Li, K.-H., Meng, X.-L., Raghunathan, T. E., & Rubin, D. B. (1991). Significance levels from repeated p-values with multiply-imputed data. Statistica Sinica, 1(1), 65--92. Retrieved from https://www.jstor.org/stable/24303994

Mansolf, M., Jorgensen, T. D., & Enders, C. K. (in press). A multiple imputation score test for model modification in structural equation models. Psychological Methods. doi:10.1037/met0000243

Examples

library(lavaan)     # provides the HolzingerSwineford1939 data
library(semTools)

## impose missing data for example
HSMiss <- HolzingerSwineford1939[ , c(paste("x", 1:9, sep = ""),
                                      "ageyr","agemo","school")]
set.seed(12345)
HSMiss$x5 <- ifelse(HSMiss$x5 <= quantile(HSMiss$x5, .3), NA, HSMiss$x5)
age <- HSMiss$ageyr + HSMiss$agemo/12
HSMiss$x9 <- ifelse(age <= quantile(age, .3), NA, HSMiss$x9)

## impute missing data
library(Amelia)
set.seed(12345)
HS.amelia <- amelia(HSMiss, m = 20, noms = "school", p2s = FALSE)
imps <- HS.amelia$imputations

## specify CFA model from lavaan's ?cfa help page
HS.model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'

out <- cfa.mi(HS.model, data = imps)

modindices.mi(out) # default: Li et al.'s (1991) "D2" method
modindices.mi(out, test = "D1") # Mansolf, Jorgensen, & Enders' (in press) "D1" method
