principal(r, nfactors = 1, residuals = FALSE, rotate = "varimax", n.obs = NA, scores = FALSE, missing = FALSE, impute = "median", oblique.scores = TRUE)
There are a number of data reduction techniques, including principal components and factor analysis. Both PC and FA attempt to approximate a given correlation or covariance matrix of rank n with a matrix of lower rank k: $_nR_n \approx {}_{n}F_{k}\,{}_{k}F_{n}' + U^2$ where k is much less than n. For principal components, the item uniquenesses are assumed to be zero and all elements of the correlation matrix are fitted. That is, $_nR_n \approx {}_{n}F_{k}\,{}_{k}F_{n}'$. The primary empirical difference between a components model and a factor model is the treatment of the variances of each item. Philosophically, components are weighted composites of observed variables, while in the factor model variables are weighted composites of the factors.
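To make the distinction concrete, here is a minimal sketch, assuming the psych package is installed, that reconstructs the Harman74.cor correlation matrix from the loadings of each model:

# Minimal sketch: factor versus component reconstruction of R
library(psych)
R <- Harman74.cor$cov                              # 24 x 24 correlation matrix
f4 <- fa(R, nfactors = 4, rotate = "none")         # factor model: R ~ FF' + U^2
p4 <- principal(R, nfactors = 4, rotate = "none")  # component model: R ~ FF'
R.fa <- f4$loadings %*% t(f4$loadings) + diag(f4$uniquenesses)
R.pc <- p4$loadings %*% t(p4$loadings)             # uniquenesses assumed zero
max(abs(R - R.fa))                                 # factor model residuals
max(abs(R - R.pc))                                 # the component model leaves the uniquenesses unfitted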
For an n x n correlation matrix, the n principal components completely reproduce the correlation matrix. However, if just the first k principal components are extracted, this is the best k-dimensional approximation of the matrix.
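This property can be sketched with base R's eigen (again with Harman74.cor; names are illustrative):

# The first k components give the best rank-k approximation of R.
R <- Harman74.cor$cov
e <- eigen(R)
k <- 4
# component loadings: eigenvectors rescaled by the sqrt of the eigenvalues
Fk <- e$vectors[, 1:k] %*% diag(sqrt(e$values[1:k]))
sum((R - Fk %*% t(Fk))^2)                # residual sum of squares of the rank-k fit
Fn <- e$vectors %*% diag(sqrt(e$values))
all.equal(unname(R), Fn %*% t(Fn))       # all n components reproduce R exactly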
It is important to recognize that rotated principal components are not principal components (the axes associated with the eigenvalue decomposition) but are merely components. To point this out, unrotated principal components are labeled as PCi, while rotated PCs are labeled as RCi (for rotated components) and obliquely transformed components as TCi (for transformed components). (Thanks to Ulrike Grömping for this suggestion.)
Rotations and transformations are either part of psych (Promax and cluster), of base R (varimax), or of GPArotation (simplimax, quartimax, oblimin).
Some of the statistics reported are more appropriate for (maximum likelihood) factor analysis than for principal components analysis, and are reported to allow comparisons with these other models. In particular, the $\chi^2$ statistic is found, as in factanal, as $\chi^2 = (n.obs - 1 - (2p + 5)/6 - (2k)/3)\,f$, where $f$ is the value of the objective function, $p$ is the number of variables, and $k$ is the number of factors.
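As a worked example of this formula, here is a small hypothetical helper (not part of psych; the name and arguments are illustrative):

# Hypothetical helper: Bartlett-corrected chi square from the value of
# the objective function f, with p variables and k factors.
chi2 <- function(f, n.obs, p, k) {
  (n.obs - 1 - (2 * p + 5)/6 - (2 * k)/3) * f
}
chi2(f = 0.2, n.obs = 145, p = 24, k = 4)  # illustrative values; returns 26.5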
Although for items it is typical to find component scores by scoring the salient items (using, e.g., score.items), here component scores are found by regression, where the regression weights are $R^{-1} \lambda$ and $\lambda$ is the matrix of component loadings. The regression approach is used to parallel the factor analysis function fa. The regression weights are found from the inverse of the correlation matrix times the component loadings, with the result that the component scores are standard scores (mean = 0, sd = 1) of the standardized input. A comparison with the scores from princomp shows this difference: princomp does not, by default, standardize the data matrix, nor are the components themselves standardized. By default, the regression weights are found from the Structure matrix, not the Pattern matrix.
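A sketch of the regression approach, assuming the psych package and using the built-in attitude data purely for illustration:

# Component scores as standardized data times R^{-1} lambda
library(psych)
X <- scale(attitude)                     # standardize the input
R <- cor(attitude)
pc <- principal(attitude, nfactors = 2, scores = TRUE)
w <- solve(R) %*% pc$loadings            # regression weights R^{-1} lambda
s <- X %*% w                             # hand-computed component scores
round(diag(cor(s, pc$scores)), 2)        # agree with principal's scores
round(apply(s, 2, sd), 2)                # standard scores: sd = 1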
Revelle, W. An Introduction to Psychometric Theory with Applications in R (in prep). Springer. Draft chapters available at https://personality-project.org/r/book/
See also: VSS (to test for the number of components or factors to extract), VSS.scree and fa.parallel (to show a scree plot and compare it with random resamplings of the data), factor2cluster (for coarse coding keys), fa (for factor analysis), and factor.congruence (to compare solutions).

# Four principal components of the Harman 24 variable problem
# compare to a four factor principal axes solution using factor.congruence
pc <- principal(Harman74.cor$cov, 4, rotate = "varimax")
mr <- fa(Harman74.cor$cov, 4, rotate = "varimax")             # minres factor analysis
pa <- fa(Harman74.cor$cov, 4, rotate = "varimax", fm = "pa")  # principal axis factor analysis
round(factor.congruence(list(pc, mr, pa)), 2)