semTools (version 0.5-7)

moreFitIndices: Calculate more fit indices

Description

Calculate more fit indices that are not already provided in lavaan.

Usage

moreFitIndices(object, fit.measures = "all", nPrior = 1)

Value

A numeric lavaan.vector including any of the following requested via fit.measures=

  1. gammaHat: Gamma-Hat

  2. adjGammaHat: Adjusted Gamma-Hat

  3. baseline.rmsea: RMSEA of the default baseline (i.e., independence) model

  4. gammaHat.scaled: Gamma-Hat using scaled \(\chi^2\)

  5. adjGammaHat.scaled: Adjusted Gamma-Hat using scaled \(\chi^2\)

  6. baseline.rmsea.scaled: RMSEA of the default baseline (i.e., independence) model using scaled \(\chi^2\)

  7. aic.smallN: Corrected (for small sample size) AIC

  8. bic.priorN: BIC with specified prior sample size

  9. spbic: Scaled Unit-Information Prior BIC (SPBIC)

  10. hbic: Haughton's BIC (HBIC)

  11. ibic: Information-matrix-based BIC (IBIC)

  12. sic: Stochastic Information Criterion (SIC)

  13. hqc: Hannan-Quinn Information Criterion (HQC)

  14. icomp: Bozdogan Information Complexity (ICOMP) Criteria

Arguments

object

The lavaan model object provided after running the cfa, sem, growth, or lavaan functions.

fit.measures

Additional fit measures to be calculated. By default, all additional fit measures are calculated.

nPrior

The sample size on which the prior is based. This argument is used to compute bic.priorN.

Author

Sunthud Pornprasertmanit (psunthud@gmail.com)

Terrence D. Jorgensen (University of Amsterdam; TJorgensen314@gmail.com)

Aaron Boulton (University of Delaware)

Ruben Arslan (Humboldt-University of Berlin, rubenarslan@gmail.com)

Yves Rosseel (Ghent University; Yves.Rosseel@UGent.be)

Mauricio Garnier-Villarreal (Vrije Universiteit Amsterdam; mgv@pm.me)

A great deal of feedback was provided by Kris Preacher regarding Bollen et al.'s (2012, 2014) extensions of BIC.

Details

See nullRMSEA() for further details on the computation of the RMSEA of the null model.

Gamma-Hat (gammaHat; West, Taylor, & Wu, 2012) is a global goodness-of-fit index which can be computed (assuming an equal number of indicators across groups) by

$$ \hat{\Gamma} =\frac{p}{p + 2 \times \frac{\chi^{2}_{k} - df_{k}}{N}},$$

where \(p\) is the number of variables in the model, \(\chi^{2}_{k}\) is the \(\chi^2\) test statistic of the target model, \(df_{k}\) is the degrees of freedom of the target model, and \(N\) is the sample size (or the sample size minus the number of groups if mimic is set to "EQS").

Adjusted Gamma-Hat (adjGammaHat; West, Taylor, & Wu, 2012) is a global fit index which can be computed by

$$ \hat{\Gamma}_\textrm{adj} = 1 - \frac{K \times p \times (p + 1)}{2 \times df_{k}} \times \left( 1 - \hat{\Gamma} \right),$$

where \(K\) is the number of groups (please refer to Dudgeon, 2004, for the multiple-group adjustment for adjGammaHat).
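
As a rough illustration (not the package's internal code), the following sketch recomputes both indices by hand for the single-group CFA used in the Examples section, assuming the default mimic setting so that \(N\) is the total sample size, and compares the results with moreFitIndices():

library(lavaan)
library(semTools)

HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '
fit <- cfa(HS.model, data = HolzingerSwineford1939)   # reused by the later sketches

p     <- length(lavNames(fit, type = "ov"))  # number of observed variables
K     <- lavInspect(fit, "ngroups")          # number of groups (1 here)
N     <- lavInspect(fit, "ntotal")           # total sample size
chisq <- fitMeasures(fit, "chisq")           # ML test statistic
df    <- fitMeasures(fit, "df")              # degrees of freedom

gammaHat    <- unname(p / (p + 2 * (chisq - df) / N))
adjGammaHat <- unname(1 - (K * p * (p + 1)) / (2 * df) * (1 - gammaHat))
c(gammaHat = gammaHat, adjGammaHat = adjGammaHat)
moreFitIndices(fit, fit.measures = c("gammaHat", "adjGammaHat"))  # for comparison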

Note that if Satorra--Bentler's or Yuan--Bentler's method is used, the fit indices using the scaled \(\chi^2\) values are also provided.

The remaining indices are information criteria calculated using the object's \(-2 \times\) log-likelihood, abbreviated \(-2LL\).

Corrected Akaike Information Criterion (aic.smallN; Burnham & Anderson, 2003) is a corrected version of AIC for small sample size, often abbreviated AICc:

$$ \textrm{AIC}_{\textrm{small}-N} = AIC + \frac{2q(q + 1)}{N - q - 1},$$

where \(AIC\) is the original AIC: \(-2LL + 2q\) (where \(q\) = the number of estimated parameters in the target model). Note that AICc is a small-sample correction derived for univariate regression models, so it is probably not appropriate for comparing SEMs.
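
A minimal sketch of this correction, reusing the fit object from the Gamma-Hat sketch above:

q <- fitMeasures(fit, "npar")   # number of estimated parameters
N <- lavInspect(fit, "ntotal")
unname(fitMeasures(fit, "aic") + (2 * q * (q + 1)) / (N - q - 1))   # aic.smallN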

Corrected Bayesian Information Criterion (bic.priorN; Kuha, 2004) is similar to BIC but explicitly specifies the sample size on which the prior is based (\(N_{prior}\)), set using the nPrior argument.

$$ \textrm{BIC}_{\textrm{prior}-N} = -2LL + q\log{( 1 + \frac{N}{N_{prior}} )}.$$
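
A sketch of this computation with the default nPrior = 1, reusing fit from the Gamma-Hat sketch above:

nPrior <- 1                      # default prior sample size
q <- fitMeasures(fit, "npar")
N <- lavInspect(fit, "ntotal")
unname(-2 * as.numeric(logLik(fit)) + q * log(1 + N / nPrior))   # bic.priorN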

Bollen et al. (2012, 2014) discussed additional BICs that incorporate more terms from a Taylor series expansion, which the standard BIC drops. The "Scaled Unit-Information Prior" BIC is calculated depending on whether the quadratic form \(\hat{\theta}^{'} \textrm{FIM} \hat{\theta}\) (where \(\hat{\theta}\) is the vector of estimated model parameters and FIM is the observed information matrix) exceeds the number of estimated model parameters \(q\) (Case 1) or not (Case 2), which is checked internally:

$$ \textrm{SPBIC}_{\textrm{Case 1}} = -2LL + q\left(1 - \frac{q}{\hat{\theta}^{'} \textrm{FIM} \hat{\theta}}\right), \textrm{ or}$$ $$ \textrm{SPBIC}_{\textrm{Case 2}} = -2LL + \hat{\theta}^{'} \textrm{FIM} \hat{\theta}.$$

Note that this implementation of SPBIC assumes that priors for all estimated parameters are centered at zero, which is inappropriate for most SEMs (e.g., variances should not have priors centered at the lowest possible value; Bollen et al., 2014, p. 6).
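
A rough sketch of both cases, reusing fit from the Gamma-Hat sketch above and taking the inverse of vcov(fit) as the information matrix (an assumption; the matrix semTools extracts internally may be obtained differently):

theta <- coef(fit)            # estimated free parameters
FIM   <- solve(vcov(fit))     # information as the inverse of ACOV (assumption)
q     <- length(theta)
quad  <- as.numeric(t(theta) %*% FIM %*% theta)
m2LL  <- -2 * as.numeric(logLik(fit))
if (quad > q) m2LL + q * (1 - q / quad) else m2LL + quad   # spbic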

Bollen et al. (2014, eq. 14) credit the HBIC to Haughton (1988):

$$ \textrm{HBIC} = -2LL + q\log{\frac{N}{2 \pi}}.$$
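
A short sketch, reusing fit from the Gamma-Hat sketch above:

q <- fitMeasures(fit, "npar")
N <- lavInspect(fit, "ntotal")
unname(-2 * as.numeric(logLik(fit)) + q * log(N / (2 * pi)))   # hbic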

Bollen et al. (2012, p. 305) proposed the information matrix (\(I\))-based BIC by adding another term:

$$ \textrm{IBIC} = -2LL + q\log{\frac{N}{2 \pi}} + \log{\det{\textrm{FIM}}},$$

or equivalently, using the inverse information (the asymptotic sampling covariance matrix of estimated parameters: ACOV):

$$ \textrm{IBIC} = -2LL - q\log{2 \pi} - \log{\det{\textrm{ACOV}}}.$$
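
A sketch of the ACOV form, again treating vcov(fit) as the asymptotic covariance matrix of the estimates (reusing fit from the Gamma-Hat sketch above):

q      <- unname(fitMeasures(fit, "npar"))
m2LL   <- -2 * as.numeric(logLik(fit))
logdet <- as.numeric(determinant(vcov(fit), logarithm = TRUE)$modulus)
m2LL - q * log(2 * pi) - logdet   # ibic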

Stochastic information criterion (SIC; see Preacher, 2006, for details) is similar to IBIC but does not include the \(q\log{2 \pi}\) term that is also in HBIC. SIC and IBIC both account for model complexity in a model's functional form, not merely the number of free parameters. The SIC can be computed as:

$$ \textrm{SIC} = -2LL + q\log{N} + \log{\det{\textrm{FIM}}} = -2LL - \log{\det{\textrm{ACOV}}}.$$
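
A sketch of the ACOV form of SIC, reusing fit and the same assumptions as the IBIC sketch above:

m2LL   <- -2 * as.numeric(logLik(fit))
logdet <- as.numeric(determinant(vcov(fit), logarithm = TRUE)$modulus)
m2LL - logdet   # sic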

Hannan--Quinn Information Criterion (HQC; Hannan & Quinn, 1979) is used for model selection, similar to AIC or BIC.

$$ \textrm{HQC} = -2LL + 2q\log{(\log{N})}.$$
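
A short sketch, reusing fit from the Gamma-Hat sketch above:

q <- fitMeasures(fit, "npar")
N <- lavInspect(fit, "ntotal")
unname(-2 * as.numeric(logLik(fit)) + 2 * q * log(log(N)))   # hqc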

The Bozdogan Information Complexity (ICOMP) criterion (icomp; Howe et al., 2011) does not penalize the number of free parameters directly; instead, it penalizes the covariance complexity of the model:

$$ \textrm{ICOMP} = -2LL + s \times \log{\left(\frac{\bar{\lambda}_a}{\bar{\lambda}_g}\right)}. $$
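
A rough sketch of this penalty, assuming \(s\), \(\bar{\lambda}_a\), and \(\bar{\lambda}_g\) are the rank and the arithmetic and geometric means of the eigenvalues of the parameter covariance matrix, taken here as vcov(fit) (the matrix semTools uses internally may differ), and reusing fit from the Gamma-Hat sketch above:

lam   <- eigen(vcov(fit), only.values = TRUE)$values   # eigenvalues of ACOV (assumption)
s     <- length(lam)
lam_a <- mean(lam)              # arithmetic mean
lam_g <- exp(mean(log(lam)))    # geometric mean
-2 * as.numeric(logLik(fit)) + s * log(lam_a / lam_g)   # icomp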

References

Bollen, K. A., Ray, S., Zavisca, J., & Harden, J. J. (2012). A comparison of Bayes factor approximation methods including two new methods. Sociological Methods & Research, 41(2), 294--324. https://doi.org/10.1177/0049124112452393

Bollen, K. A., Harden, J. J., Ray, S., & Zavisca, J. (2014). BIC and alternative Bayesian information criteria in the selection of structural equation models. Structural Equation Modeling, 21(1), 1--19. https://doi.org/10.1080/10705511.2014.856691

Burnham, K., & Anderson, D. (2003). Model selection and multimodel inference: A practical information-theoretic approach. New York, NY: Springer-Verlag.

Dudgeon, P. (2004). A note on extending Steiger's (1998) multiple sample RMSEA adjustment to other noncentrality parameter-based statistics. Structural Equation Modeling, 11(3), 305--319. https://doi.org/10.1207/s15328007sem1103_1

Howe, E. D., Bozdogan, H., & Katragadda, S. (2011). Structural equation modeling (SEM) of categorical and mixed-data using the novel Gifi transformations and information complexity (ICOMP) criterion. Istanbul University Journal of the School of Business Administration, 40(1), 86--123.

Kuha, J. (2004). AIC and BIC: Comparisons of assumptions and performance. Sociological Methods & Research, 33(2), 188--229. https://doi.org/10.1177/0049124103262065

Preacher, K. J. (2006). Quantifying parsimony in structural equation modeling. Multivariate Behavioral Research, 41(3), 227--259. https://doi.org/10.1207/s15327906mbr4103_1

West, S. G., Taylor, A. B., & Wu, W. (2012). Model fit and model selection in structural equation modeling. In R. H. Hoyle (Ed.), Handbook of structural equation modeling (pp. 209--231). New York, NY: Guilford.

See Also

  • miPowerFit() For evaluating model fit using modification indices and their power

  • nullRMSEA() For RMSEA of the default independence model

Examples

library(lavaan)
library(semTools)

## Holzinger & Swineford (1939) three-factor CFA
HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '

## Normal-theory ML estimation
fit <- cfa(HS.model, data = HolzingerSwineford1939)
moreFitIndices(fit)

## Robust (MLR) estimation also returns the *.scaled versions of the indices
fit2 <- cfa(HS.model, data = HolzingerSwineford1939, estimator = "mlr")
moreFitIndices(fit2)
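
## Request only a subset of indices by name (see the Value section), and
## supply a prior sample size for bic.priorN (nPrior = 200 is arbitrary here)
moreFitIndices(fit, fit.measures = c("gammaHat", "adjGammaHat", "hqc"))
moreFitIndices(fit, fit.measures = "bic.priorN", nPrior = 200)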
