
semTools (version 0.5-6)

moreFitIndices: Calculate more fit indices

Description

Calculate more fit indices that are not already provided in lavaan.

Usage

moreFitIndices(object, fit.measures = "all", nPrior = 1)

Arguments

object

The lavaan model object provided after running the cfa, sem, growth, or lavaan functions.

fit.measures

Additional fit measures to be calculated. All additional fit measures are calculated by default (fit.measures = "all").

nPrior

The sample size on which the prior is based. This argument is used to compute bic.priorN.

Value

A numeric lavaan.vector including any of the following fit measures requested via the fit.measures= argument (a usage sketch follows this list):

  1. gammaHat: Gamma-Hat

  2. adjGammaHat: Adjusted Gamma-Hat

  3. baseline.rmsea: RMSEA of the default baseline (i.e., independence) model

  4. gammaHat.scaled: Gamma-Hat using scaled \(\chi^2\)

  5. adjGammaHat.scaled: Adjusted Gamma-Hat using scaled \(\chi^2\)

  6. baseline.rmsea.scaled: RMSEA of the default baseline (i.e., independence) model using scaled \(\chi^2\)

  7. aic.smallN: Corrected (for small sample size) AIC

  8. bic.priorN: BIC with specified prior sample size

  9. spbic: Scaled Unit-Information Prior BIC (SPBIC)

  10. hbic: Haughton's BIC (HBIC)

  11. ibic: Information-matrix-based BIC (IBIC)

  12. sic: Stochastic Information Criterion (SIC)

  13. hqc: Hannan-Quinn Information Criterion (HQC)
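For instance, a subset of these measures can be requested by name. The following is a minimal sketch (not taken from the package documentation) using lavaan's built-in HolzingerSwineford1939 data; the two-factor model and the object name fitA are chosen here purely for illustration, and the measure names follow the list above:

library(semTools)  # also attaches lavaan

fitA <- cfa(' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6 ',
            data = HolzingerSwineford1939)
moreFitIndices(fitA, fit.measures = c("gammaHat", "adjGammaHat", "hqc"))
moreFitIndices(fitA, nPrior = 100)  # nPrior affects only bic.priorN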

Details

See nullRMSEA for further details on how the RMSEA of the null (i.e., independence) model is computed.

Gamma-Hat (gammaHat; West, Taylor, & Wu, 2012) is a global goodness-of-fit index which can be computed (assuming equal number of indicators across groups) by

$$ \hat{\Gamma} =\frac{p}{p + 2 \times \frac{\chi^{2}_{k} - df_{k}}{N}},$$

where \(p\) is the number of variables in the model, \(\chi^{2}_{k}\) is the \(\chi^2\) test statistic of the target model, \(df_{k}\) is the degrees of freedom when fitting the target model, and \(N\) is the sample size (or the sample size minus the number of groups if mimic is set to "EQS").

Adjusted Gamma-Hat (adjGammaHat; West, Taylor, & Wu, 2012) is a global fit index which can be computed by

$$ \hat{\Gamma}_\textrm{adj} = 1 - \frac{K \times p \times (p + 1)}{2 \times df_{k}} \times \left( 1 - \hat{\Gamma} \right),$$

where \(K\) is the number of groups (please refer to Dudgeon, 2004, for the multiple-group adjustment for adjGammaHat).
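As an illustration (a hand computation, not the package's internal code), both indices can be reproduced for a single-group model using standard lavaan extractors; the model fitted below is only an example:

## Sketch: Gamma-Hat and adjusted Gamma-Hat by hand (single group, so K = 1)
library(semTools)  # also attaches lavaan
fit <- cfa(' visual  =~ x1 + x2 + x3
             textual =~ x4 + x5 + x6 ',
           data = HolzingerSwineford1939)
chisq <- fitMeasures(fit, "chisq")
df    <- fitMeasures(fit, "df")
N     <- lavInspect(fit, "ntotal")
p     <- length(lavNames(fit, "ov"))   # number of observed variables
K     <- lavInspect(fit, "ngroups")
gammaHat    <- p / (p + 2 * (chisq - df) / N)
adjGammaHat <- 1 - (K * p * (p + 1)) / (2 * df) * (1 - gammaHat)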

The remaining indices are information criteria calculated using the object's \(-2 \times\) log-likelihood, abbreviated \(-2LL\).

Corrected Akaike Information Criterion (aic.smallN; Burnham & Anderson, 2003) is a corrected version of AIC for small sample size, often abbreviated AICc:

$$ \textrm{AIC}_{\textrm{small}-N} = AIC + \frac{2q(q + 1)}{N - q - 1},$$

where \(AIC\) is the original AIC: \(-2LL + 2q\) (where \(q\) = the number of estimated parameters in the target model). Note that AICc is a small-sample correction derived for univariate regression models, so it is probably not appropriate for comparing SEMs.
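Continuing with the fitted model fit and sample size N from the sketch above, AICc can be reproduced by hand; fitMeasures() supplies the AIC and the number of free parameters:

## Sketch: small-sample corrected AIC (AICc)
q    <- fitMeasures(fit, "npar")   # number of estimated parameters
aicc <- fitMeasures(fit, "aic") + 2 * q * (q + 1) / (N - q - 1)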

Corrected Bayesian Information Criterion (bic.priorN; Kuha, 2004) is similar to BIC but explicitly specifies the sample size on which the prior is based (\(N_{prior}\)), supplied via the nPrior argument:

$$ \textrm{BIC}_{\textrm{prior}-N} = -2LL + q\log{( 1 + \frac{N}{N_{prior}} )}.$$
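Under ML estimation, the same quantities give a hand computation of bic.priorN; the prior sample size of 100 below is an arbitrary illustrative choice, not a recommended value:

## Sketch: BIC with a prior sample size, continuing with fit, q, and N from above
m2LL   <- -2 * fitMeasures(fit, "logl")   # -2 x log-likelihood (ML estimation)
nPrior <- 100                             # arbitrary illustrative value
bic.priorN <- m2LL + q * log(1 + N / nPrior)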

Bollen et al. (2012) discussed additional BICs that incorporate more terms from a Taylor series expansion, which the standard BIC drops. The "Scaled Unit-Information Prior" BIC (SPBIC) is calculated differently depending on whether the quadratic form \(\hat{\theta}^{'} \textrm{FIM} \hat{\theta}\), where \(\hat{\theta}\) is the vector of estimated model parameters and FIM is the observed information matrix, exceeds the number of estimated model parameters (Case 1) or not (Case 2), which is checked internally:

$$ \textrm{SPBIC}_{\textrm{Case 1}} = -2LL + q\left(1 - \frac{q}{\hat{\theta}^{'} \textrm{FIM} \hat{\theta}}\right), $$

or

$$ \textrm{SPBIC}_{\textrm{Case 2}} = -2LL + \hat{\theta}^{'} \textrm{FIM} \hat{\theta}.$$
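A rough sketch of the case check follows, continuing with fit, q, N, and m2LL from the sketches above. How semTools scales the information matrix internally is not shown here, so taking FIM as N times lavaan's unit observed information is an assumption of this sketch:

## Sketch: SPBIC case check (the scaling of FIM is an assumption; see above)
theta <- coef(fit)                                    # free-parameter estimates
FIM   <- N * lavInspect(fit, "information.observed")  # assumed full-sample FIM
qform <- as.numeric(t(theta) %*% FIM %*% theta)
spbic <- if (qform > q) {
  m2LL + q * (1 - q / qform)   # Case 1
} else {
  m2LL + qform                 # Case 2
}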

Bollen et al. (2012) credit the HBIC to Haughton (1988):

$$ \textrm{HBIC} = -2LL + q\log{\frac{N}{2 \pi}},$$

and propose the information-matrix-based BIC (IBIC) by adding another term:

$$ \textrm{IBIC} = -2LL - q\log{2 \pi} - \log{\det{\textrm{ACOV}}}.$$

Stochastic information criterion (SIC; see Preacher, 2006, for details) is similar to IBIC but does not subtract the term \(q\log{2 \pi}\) that also appears in HBIC. SIC and IBIC account for model complexity in a model's functional form, not merely the number of free parameters. The SIC can be computed as

$$ \textrm{SIC} = -2LL + \log{\det{\textrm{FIM}^{-1}}} = -2LL - \log{\det{\textrm{ACOV}}},$$

where the inverse of FIM is the asymptotic sampling covariance matrix (ACOV).
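Continuing the same sketch (with the same caveat that the exact scaling of FIM and ACOV used internally is an assumption here), the ACOV can be taken as vcov() of the fitted lavaan model, which gives hand computations of HBIC, IBIC, and SIC:

## Sketch: HBIC, IBIC, and SIC via the asymptotic covariance matrix (ACOV)
ACOV <- vcov(fit)   # asymptotic sampling covariance of the free parameters
hbic <- m2LL + q * log(N / (2 * pi))
ibic <- m2LL - q * log(2 * pi) - log(det(ACOV))
sic  <- m2LL - log(det(ACOV))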

Hannan--Quinn Information Criterion (HQC; Hannan & Quinn, 1979) is used for model selection, similar to AIC or BIC.

$$ \textrm{HQC} = -2LL + 2q\log{(\log{N})}.$$
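In the same notation (continuing with m2LL, q, and N from the sketches above), the hand computation is a single line:

## Sketch: Hannan--Quinn criterion
hqc <- m2LL + 2 * q * log(log(N))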

Note that if Satorra--Bentler's or Yuan--Bentler's method is used, the fit indices using the scaled \(\chi^2\) values are also provided.

References

Bollen, K. A., Ray, S., Zavisca, J., & Harden, J. J. (2012). A comparison of Bayes factor approximation methods including two new methods. Sociological Methods & Research, 41(2), 294--324. 10.1177/0049124112452393

Burnham, K. P., & Anderson, D. R. (2003). Model selection and multimodel inference: A practical information--theoretic approach. New York, NY: Springer--Verlag.

Dudgeon, P. (2004). A note on extending Steiger's (1998) multiple sample RMSEA adjustment to other noncentrality parameter-based statistics. Structural Equation Modeling, 11(3), 305--319. 10.1207/s15328007sem1103_1

Kuha, J. (2004). AIC and BIC: Comparisons of assumptions and performance. Sociological Methods & Research, 33(2), 188--229. 10.1177/0049124103262065

Preacher, K. J. (2006). Quantifying parsimony in structural equation modeling. Multivariate Behavioral Research, 41(3), 227--259. 10.1207/s15327906mbr4103_1

West, S. G., Taylor, A. B., & Wu, W. (2012). Model fit and model selection in structural equation modeling. In R. H. Hoyle (Ed.), Handbook of structural equation modeling (pp. 209--231). New York, NY: Guilford.

See Also

  • miPowerFit For evaluating model fit using modification indices and their power

  • nullRMSEA For RMSEA of the default independence model

Examples

library(semTools)  # provides moreFitIndices(); also attaches lavaan

HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '

fit <- cfa(HS.model, data = HolzingerSwineford1939)
moreFitIndices(fit)  # all additional fit measures, by default

fit2 <- cfa(HS.model, data = HolzingerSwineford1939, estimator = "mlr")
moreFitIndices(fit2)  # robust estimation also yields the *.scaled versions

