Calculate reliability values of factors by coefficient omega
reliability(object, return.total = FALSE, dropSingle = TRUE,
            omit.imps = c("no.conv", "no.se"))
return.total: logical indicating whether to return a final column containing the reliability of a composite of all items. Ignored in 1-factor models, and should only be set TRUE if all factors represent scale dimensions that could nonetheless be collapsed to a single scale composite (scale sum or scale mean).
dropSingle: logical indicating whether to exclude factors defined by a single indicator from the returned results. If TRUE (default), single indicators will still be included in the total column when return.total = TRUE.
omit.imps: character vector specifying criteria for omitting imputations from pooled results. Can include any of c("no.conv", "no.se", "no.npd"); the first two are the default setting, which excludes any imputations that did not converge or for which standard errors could not be computed. The last option ("no.npd") would exclude any imputations that yielded a nonpositive definite covariance matrix for observed or latent variables, which would include any "improper solutions" such as Heywood cases. NPD solutions are not excluded by default because they are likely to occur due to sampling error, especially in small samples. However, gross model misspecification could also cause NPD solutions, so users can compare pooled results with and without this setting as a sensitivity analysis to see whether some imputations warrant further investigation.
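The omit.imps argument only matters when object is a fit pooled across multiple imputations. A minimal sketch, assuming a list of imputed data sets named impList (not created here), the HS.model syntax from the Examples below, and semTools' multiple-imputation wrapper cfa.mi():

library(semTools)
fitMI <- cfa.mi(HS.model, data = impList)
## default: drop imputations that did not converge or lack standard errors
reliability(fitMI)
## sensitivity analysis: additionally drop nonpositive-definite solutions
reliability(fitMI, omit.imps = c("no.conv", "no.se", "no.npd"))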
Reliability values (coefficient alpha, coefficients omega, average
variance extracted) of each factor in each group. If there are multiple
factors, a total
column can optionally be included.
The coefficient alpha (Cronbach, 1951) can be calculated by
$$ \alpha = \frac{k}{k - 1}\left[ 1 - \frac{\sum^{k}_{i = 1} \sigma_{ii}}{\sum^{k}_{i = 1} \sigma_{ii} + 2\sum_{i < j} \sigma_{ij}} \right],$$
where \(k\) is the number of items in a factor, \(\sigma_{ii}\) is the observed variance of item i, and \(\sigma_{ij}\) is the observed covariance of items i and j.
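As a minimal illustration of this formula (not the internal code of this function), coefficient alpha for the three "visual" items of lavaan's HolzingerSwineford1939 data can be computed from their observed covariance matrix:

library(lavaan)
S <- cov(HolzingerSwineford1939[paste0("x", 1:3)])  # observed item (co)variances
k <- ncol(S)
k / (k - 1) * (1 - sum(diag(S)) / sum(S))  # sum(S) = variances + 2 * covariances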
The coefficient omega (Bollen, 1980; see also Raykov, 2001) can be calculated by
$$ \omega_1 =\frac{\left( \sum^{k}_{i = 1} \lambda_i \right)^{2} Var\left( \psi \right)}{\left( \sum^{k}_{i = 1} \lambda_i \right)^{2} Var\left( \psi \right) + \sum^{k}_{i = 1} \theta_{ii} + 2\sum_{i < j} \theta_{ij} }, $$
where \(\lambda_i\) is the factor loading of item i, \(\psi\) is the factor variance, \(\theta_{ii}\) is the variance of the measurement error of item i, and \(\theta_{ij}\) is the covariance of the measurement errors of items i and j.
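A rough sketch of \(\omega_1\) for a one-factor model, assembled from lavaan's estimated parameter matrices (an illustration of the formula, not this function's internal implementation):

library(lavaan)
fit1   <- cfa('visual =~ x1 + x2 + x3', data = HolzingerSwineford1939)
est    <- lavInspect(fit1, "est")
lambda <- est$lambda[, "visual"]        # factor loadings
psi    <- est$psi["visual", "visual"]   # factor variance
theta  <- est$theta                     # residual (co)variance matrix
sum(lambda)^2 * psi / (sum(lambda)^2 * psi + sum(theta))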
The second coefficient omega (Bentler, 1972, 2009) can be calculated by
$$ \omega_2 = \frac{\left( \sum^{k}_{i = 1} \lambda_i \right)^{2} Var\left( \psi \right)}{\bold{1}^\prime \hat{\Sigma} \bold{1}}, $$
where \(\hat{\Sigma}\) is the model-implied covariance matrix, and \(\bold{1}\) is the \(k\)-dimensional vector of 1s. The first and second coefficients omega will have the same value when the model has simple structure, but different values when there are (for example) cross-loadings or method factors. The first coefficient omega can be viewed as the reliability controlling for the other factors (like \(\eta^2_{partial}\) in ANOVA). The second coefficient omega can be viewed as the unconditional reliability (like \(\eta^2\) in ANOVA).
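Continuing the sketch above (and assuming its objects), \(\omega_2\) replaces the denominator with the total of the model-implied covariance matrix:

Sigma.hat <- lavInspect(fit1, "cov.ov")  # model-implied covariance matrix
sum(lambda)^2 * psi / sum(Sigma.hat)     # equals omega_1 here: simple structure, no error covariances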
The third coefficient omega (McDonald, 1999), which is sometimes referred to as hierarchical omega, can be calculated by
$$ \omega_3 =\frac{\left( \sum^{k}_{i = 1} \lambda_i \right)^{2} Var\left( \psi \right)}{\bold{1}^\prime \Sigma \bold{1}}, $$
where \(\Sigma\) is the observed covariance matrix. If the model fits the data well, the third coefficient omega will be similar to \(\omega_2\). Note that if there is a directional effect in the model, all coefficients omega will use the total factor variances, which are calculated by lavInspect(object, "cov.lv").
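Continuing the same sketch, \(\omega_3\) uses the sample covariance matrix in the denominator, and the total factor (co)variances referred to above can be inspected directly:

S.obs <- lavInspect(fit1, "sampstat")$cov  # observed (sample) covariance matrix
sum(lambda)^2 * psi / sum(S.obs)
lavInspect(fit1, "cov.lv")                 # (total) factor variances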
In conclusion, \(\omega_1\), \(\omega_2\), and \(\omega_3\) differ in their denominators. The denominator of the first formula assumes that the model is a congeneric factor model in which measurement errors are not correlated. The second formula accounts for correlated measurement errors. However, both formulas assume that the model-implied covariance matrix explains item relationships perfectly, whereas the residuals are subject to sampling error. The third formula uses the observed covariance matrix instead of the model-implied covariance matrix to calculate the observed total variance, which makes it the most conservative method for calculating coefficient omega.
The average variance extracted (AVE) can be calculated by
$$ AVE = \frac{\bold{1}^\prime \textrm{diag}\left(\Lambda\Psi\Lambda^\prime\right)\bold{1}}{\bold{1}^\prime \textrm{diag}\left(\hat{\Sigma}\right) \bold{1}}. $$
Note that this formula is modified from Fornell & Larcker (1981) for the case in which factor variances are not 1; the formula proposed by Fornell & Larcker (1981) assumes that the factor variances are 1. Note that AVE will not be provided for factors consisting of items with dual loadings. AVE is a property of the items, not of the factors.
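A sketch of the AVE formula, again assuming the objects from the omega sketches above:

Lambda <- est$lambda
Psi    <- est$psi
sum(diag(Lambda %*% Psi %*% t(Lambda))) / sum(diag(Sigma.hat))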
Regarding categorical indicators, coefficient alpha and AVE are calculated based on polychoric correlations. The coefficient alpha from this function may not be the same as the standard alpha calculation for categorical items. Researchers may check the alpha function in the psych package for the standard coefficient alpha calculation.
Item thresholds are not accounted for. Coefficient omega for categorical
items, however, is calculated by accounting for both item covariances and
item thresholds using Green and Yang's (2009, formula 21) approach. Three
types of coefficient omega indicate different methods to calculate item
total variances. The original formula from Green and Yang is equivalent to
\(\omega_3\) in this function. Green and Yang did not propose a method for
calculating reliability with a mixture of categorical and continuous
indicators, and we are currently unaware of an appropriate method.
Therefore, when reliability
detects both categorical and continuous
indicators in the model, an error is returned. If the categorical indicators
load on a different factor(s) than continuous indicators, then reliability
can be calculated separately for those scales by fitting separate models and
submitting each to the reliability
function.
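A hedged sketch of the categorical case: when every indicator is declared ordered, reliability() applies Green and Yang's approach automatically. The data frame myOrdinalData and its item names are placeholders, not data shipped with the package:

library(lavaan)
library(semTools)
catMod <- ' f1 =~ item1 + item2 + item3 + item4 '
fitCat <- cfa(catMod, data = myOrdinalData,
              ordered = c("item1", "item2", "item3", "item4"))
reliability(fitCat)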
Bollen, K. A. (1980). Issues in the comparative measurement of political democracy. American Sociological Review, 45(3), 370--390. doi:10.2307/2095172
Bentler, P. M. (1972). A lower-bound method for the dimension-free measurement of internal consistency. Social Science Research, 1(4), 343--357. doi:10.1016/0049-089X(72)90082-8
Bentler, P. M. (2009). Alpha, dimension-free, and model-based internal consistency reliability. Psychometrika, 74(1), 137--143. doi:10.1007/s11336-008-9100-1
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297--334. doi:10.1007/BF02310555
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement errors. Journal of Marketing Research, 18(1), 39--50. doi:10.2307/3151312
Green, S. B., & Yang, Y. (2009). Reliability of summed item scores using structural equation modeling: An alternative to coefficient alpha. Psychometrika, 74(1), 155--167. doi:10.1007/s11336-008-9099-3
McDonald, R. P. (1999). Test theory: A unified treatment. Mahwah, NJ: Erlbaum.
Raykov, T. (2001). Estimation of congeneric scale reliability using covariance structure analysis with nonlinear constraints. British Journal of Mathematical and Statistical Psychology, 54(2), 315--323. doi:10.1348/000711001159582
reliabilityL2 for the reliability of a desired second-order factor; maximalRelia for the maximal reliability of a weighted composite.
library(semTools)
library(lavaan)
HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '
fit <- cfa(HS.model, data = HolzingerSwineford1939)
reliability(fit)
reliability(fit, return.total = TRUE)