lmomco (version 2.4.14)

headrick.sheng.lalpha: Sample Headrick and Sheng L-alpha

Description

Compute the sample “Headrick and Sheng L-alpha” (\(\alpha_L\)) (Headrick and Sheng, 2013) by $$\alpha_L = \frac{d}{d-1} \biggl(1 - \frac{\sum_j \lambda^{(j)}_2}{\sum_j \lambda^{(j)}_2 + \sum\sum_{j\ne j'} \lambda_2^{(jj')}} \biggr)\mathrm{,}$$ where \(j = 1,\ldots,d\) for dimensions \(d\), the \(\sum_j \lambda^{(j)}_2\) is the summation of all the 2nd order (univariate) L-moments (L-scales, \(\lambda^{(j)}_2\)), and the double summation is the summation of all the 2nd-order L-comoments (\(\lambda_2^{(jj')}\)). In other words, the double summation is the sum total of all entries in both the lower and upper triangles (not the primary diagonal) of the L-comoment matrix (the L-scale and L-coscale [L-covariance] matrix) (Lcomoment.matrix).
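Because the denominator of the formula is simply the sum of all entries of the 2nd-order L-comoment matrix, the equation can be sketched in a few lines of R. The following is an illustration of the equation only (assuming M is such a matrix, for example Lcomoment.matrix(x, k=2)$matrix), not the package implementation:

# Sketch of the alpha_L equation above; M is assumed to be a d-by-d
# 2nd-order L-comoment matrix (L-scales on the diagonal, L-coscales off it).
alphaL_sketch <- function(M) {
  d <- ncol(M)
  (d / (d - 1)) * (1 - sum(diag(M)) / sum(M))  # sum(M) = L-scales + L-coscales
}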

The \(\alpha_L\) is closely related in structural computation to the well-known “Cronbach alpha” (\(\alpha_C\)). These are coefficients of reliability, which commonly range from 0 to 1, that provide what some methodologists portray as an overall assessment of a measure's reliability. If all of the scale items are entirely independent of one another, meaning that they are uncorrelated and share no covariance, then \(\alpha_C\) is 0, and, if all of the items have high covariances, then \(\alpha_C\) will approach 1 as the number of items in the scale approaches infinity. The higher the \(\alpha_C\) coefficient, the more the items share covariance and probably measure the same underlying concept. Theoretically, there is no lower bound for either \(\alpha_C\) or \(\alpha_L\), which can add complicating nuances in bootstrap or simulation studies of both statistics. Negative values are considered a sign of something potentially wrong with the measure, such as items not being positively correlated with each other or a reversed scoring system for a question item. (This paragraph in part paraphrases https://data.library.virginia.edu/using-and-interpreting-cronbachs-alpha/ (accessed May 21, 2023) and other general sources.)
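For structural comparison, the conventional product-moment \(\alpha_C\) takes the same form with the variance-covariance matrix in place of the L-comoment matrix; the following base-R sketch is an illustration only and is not part of lmomco:

# Sketch of the conventional Cronbach alpha from the product-moment
# variance-covariance matrix of a data.frame of item scores X; note the
# same d/(d-1) * (1 - diagonal/total) structure as alpha_L.
cronbach_sketch <- function(X) {
  S <- cov(X)
  d <- ncol(S)
  (d / (d - 1)) * (1 - sum(diag(S)) / sum(S))
}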

Usage

headrick.sheng.lalpha(x, bycovFF=FALSE, a=0.5, digits=8, ...)

lalpha(x, bycovFF=FALSE, a=0.5, digits=8, ...)

Value

An R list is returned.

alpha

The \(\alpha_L\) statistic.

pitems

The number of items (column count) in x.

n

The sample size (row count), if applicable, of the contents of x.

text

Any pertinent messages about the computations.

source

An attribute identifying the computational source of the Headrick and Sheng L-alpha: “headrick.sheng.lalpha” or “lalpha.star()”.

Arguments

x

An R data.frame of the random observations for the \(d\) random variables \(X\), which must be suitable for internal dispatch to the Lcomoment.matrix function for computation of the 2nd-order L-comoment matrix. Alternatively, x can be a precomputed 2nd-order L-comoment matrix (L-scale and L-coscale matrix) as shown by the following usage: lalpha(Lcomoment.matrix(x, k=2)$matrix).

bycovFF

A logical triggering the covariance pathway for the computation, which bypasses the direct call to the L-comoments. When bycovFF is TRUE, the additional arguments can be used to control the pp function that is called internally to estimate the nonexceedance probabilities of the “covariance pathway” (see Details). If bycovFF is FALSE, then the direct L-comoment computation is used.

a

The plotting-position argument a passed to the pp function. It defaults here to a=0.5 (Hazen), in contrast to the default a=0 (Weibull) of pp, for reasoning shown in this documentation (a brief pp illustration follows this argument list).

digits

The number of digits for rounding of the returned value(s).

...

Additional arguments to pass.
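A brief illustration of the plotting-position choice behind argument a, using hypothetical item scores (pp is the lmomco plotting-position function):

xx <- c(2, 5, 3, 6, 7, 5, 2, 4, 3, 4)  # hypothetical item scores with ties
pp(xx, a=0  )  # Weibull positions i/(n+1), the default of pp()
pp(xx, a=0.5)  # Hazen positions (i-0.5)/n, the default used here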

Author

W.H. Asquith

Details

Headrick and Sheng (2013) propose \(\alpha_L\) to be an alternative estimator of reliability based on L-comoments. Those authors describe its context as follows: “Consider [a statistic] alpha (\(\alpha\)) in terms of a model that decomposes an observed score into the sum of two independent components: a true unobservable score \(t_i\) and a random error component \(\epsilon_{ij}\).”

Those authors continue “The model can be summarized as \(X_{ij} = t_i + \epsilon_{ij}\mathrm{,}\) where \(X_{ij}\) is the observed score associated with the \(i\)th examinee on the \(j\)th test item, and where \(i = 1,...,n\) [for sample size \(n\)]; \(j = 1,\ldots,d\); and the error terms (\(\epsilon_{ij}\)) are independent with a mean of zero.” Those authors comment that “inspection of [this model] indicates that this particular model restricts the true score \(t_i\) to be the same across all \(d\) test items.”

Those authors show empirical results for a simulation study, which indicate that \(\alpha_L\) can be “substantially superior” to [a different formulation of \(\alpha_C\) (Cronbach's alpha) based on product moments (the variance-covariance matrix)] in “terms of relative bias and relative standard error when distributions are heavy-tailed and sample sizes are small.”

Those authors show the reader (Headrick and Sheng, 2013, eqs. 4 and 5) that the second L-moments of \(X_j\) and \(X_{j'}\) can alternatively be expressed as \(\lambda_2(X_j) = 2\mathrm{Cov}(X_j, F(X_j))\) and \(\lambda_2(X_{j'}) = 2\mathrm{Cov}(X_{j'}, F(X_{j'}))\). The second L-comoments of \(X_j\) toward (with respect to) \(X_{j'}\) and of \(X_{j'}\) toward (with respect to) \(X_j\) are \(\lambda_2^{(jj')} = 2\mathrm{Cov}(X_j, F(X_{j'}))\) and \(\lambda_2^{(j'j)} = 2\mathrm{Cov}(X_{j'}, F(X_j))\). The respective cumulative distribution functions (nonexceedance probabilities) are denoted \(F(x_j)\). Evidently, those authors present the L-moments and L-comoments this way because their first example (thanks for the detailed numerics!) already contains nonexceedance probabilities.
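A small base-R check of this covariance formulation (a sketch assuming Hazen-type nonexceedance probabilities built from ranks and a sample without ties; not the package's internal code):

set.seed(1)
xj <- rnorm(20)                      # hypothetical scores, no ties
Fj <- (rank(xj) - 0.5) / length(xj)  # Hazen-type estimates of F(xj)
2 * cov(xj, Fj)                      # covariance pathway for lambda_2
lmoms(xj)$lambdas[2]                 # direct sample L-scale; should match the line above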

The apparent numerical difference between the version using estimates of nonexceedance probabilities for the data (the “covariance pathway”) and the “direct to L-comoment” pathway might be more than an academic concern.

The Examples provide a comparison and brief discussion of potential issues involved in the direct L-comoments and the covariance pathway. The discussion leads to interest in the effects of ties and their handling and in the question of \(F(x_j)\) estimation by plotting position (pp). The Note section of this documentation provides expanded information and insights into \(\alpha_L\) computation.
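The role of tie handling in \(F(x_j)\) estimation can be glimpsed with base-R ranks alone (a hypothetical illustration, not the internal handling of pp):

# Tied observations share one Hazen-type position under average ranks but
# receive distinct positions under first-occurrence ranks; such choices can
# shift the off-diagonal 2*Cov(Xj, F(Xj')) terms of the covariance pathway.
xt <- c(2, 5, 3, 6, 7, 5, 2, 4, 3, 4)   # ties among the 2s, 3s, 4s, and 5s
(rank(xt, ties.method="average") - 0.5) / length(xt)
(rank(xt, ties.method="first"  ) - 0.5) / length(xt)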

References

Headrick, T.C., and Sheng, Y., 2013, An alternative to Cronbach's alpha---An L-moment-based measure of internal-consistency reliability: in Millsap, R.E., van der Ark, L.A., Bolt, D.M., and Woods, C.M. (eds.), New Developments in Quantitative Psychology, Springer Proceedings in Mathematics and Statistics, v. 66, doi:10.1007/978-1-4614-9348-8_2.

Headrick, T.C., and Sheng, Y., 2013, A proposed measure of internal consistency reliability---Coefficient L-alpha: Behaviormetrika, v. 40, no. 1, pp. 57--68, doi:10.2333/bhmk.40.57.

Béland, S., Cousineau, D., and Loye, N., 2017, Using the McDonald's omega coefficient instead of Cronbach's alpha [in French]: McGill Journal of Education, v. 52, no. 3, pp. 791--804, doi:10.7202/1050915ar.

See Also

Lcomoment.matrix, pp

Examples

# Table 1 in Headrick and Sheng (2013)
TV1 <- # Observations in cols 1:3, estimated nonexceedance probabilities in cols 4:6
c(2, 4, 3, 0.15, 0.45, 0.15,       5, 7, 7, 0.75, 0.95, 1.00,
  3, 5, 5, 0.35, 0.65, 0.40,       6, 6, 6, 0.90, 0.80, 0.75,
  7, 7, 6, 1.00, 0.95, 0.75,       5, 2, 6, 0.75, 0.10, 0.75,
  2, 3, 3, 0.15, 0.25, 0.15,       4, 3, 6, 0.55, 0.25, 0.75,
  3, 5, 5, 0.35, 0.65, 0.40,       4, 4, 5, 0.55, 0.45, 0.40)
T1 <- matrix(ncol=6, nrow=10)
for(r in seq(1, length(TV1), by=6)) T1[(r %/% 6) + 1, ] <- TV1[r:(r+5)]  # fill row by row
colnames(T1) <- c("X1", "X2", "X3", "FX1", "FX2", "FX3"); T1 <- as.data.frame(T1)

lco2 <- matrix(nrow=3, ncol=3)
lco2[1,1] <- lmoms(T1$X1)$lambdas[2]
lco2[2,2] <- lmoms(T1$X2)$lambdas[2]
lco2[3,3] <- lmoms(T1$X3)$lambdas[2]
lco2[1,2] <- 2*cov(T1$X1, T1$FX2); lco2[1,3] <- 2*cov(T1$X1, T1$FX3)
lco2[2,1] <- 2*cov(T1$X2, T1$FX1); lco2[2,3] <- 2*cov(T1$X2, T1$FX3)
lco2[3,1] <- 2*cov(T1$X3, T1$FX1); lco2[3,2] <- 2*cov(T1$X3, T1$FX2)
headrick.sheng.lalpha(lco2)$alpha     # Headrick and Sheng (2013): alpha = 0.807
# 0.8074766
headrick.sheng.lalpha(Lcomoment.matrix(T1[,1:3], k=2)$matrix)$alpha
# 0.7805825
headrick.sheng.lalpha(T1[,1:3])$alpha #              FXs not used: alpha = 0.781
# 0.7805825
headrick.sheng.lalpha(T1[,1:3], bycovFF=TRUE)$alpha  # a=0.5, Hazen by default
# 0.7805825
headrick.sheng.lalpha(T1[,1:3], bycovFF=TRUE, a=0.5)$alpha
# 0.7805825
