Although it is more common to calculate multiple regression and canonical correlations from the raw data, it is, of course, possible to do so from a matrix of correlations or covariances. In this case, the input to the function is a square covariance or correlation matrix, as well as the column numbers (or names) of the x (predictor) and y (criterion) variables and, if desired, the z variables (covariates). The function will find the correlations if given raw data.
Input is either the set of y variables and the set of x variables; alternatively, the model can be written in the standard formula style of lm (see the last example). In this case, pairwise or higher-order interactions (product terms) may also be specified. By default, when finding product terms, the data are zero centered (Cohen, Cohen, West and Aiken, 2003), although this option can be turned off (zero=FALSE) to match the results of lm or the results discussed in Hayes (2013).
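Why zero centering matters for product terms can be sketched numerically. The following is an illustration of the arithmetic only (in Python, with made-up data), not the R function's interface: the product of centered variables is far less correlated with the main effects than the raw product is.

```python
from math import sqrt

def pearson(a, b):
    """Plain Pearson correlation of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / sqrt(sum((x - ma) ** 2 for x in a) *
                      sum((y - mb) ** 2 for y in b))

# hypothetical predictor scores
x1 = [1, 2, 3, 4, 5]
x2 = [2, 1, 4, 3, 5]

# raw product term vs. product of zero-centered variables
raw_prod = [a * b for a, b in zip(x1, x2)]
m1, m2 = sum(x1) / len(x1), sum(x2) / len(x2)
cen_prod = [(a - m1) * (b - m2) for a, b in zip(x1, x2)]

# the raw product is nearly collinear with x1; the centered product is not
r_raw = pearson(x1, raw_prod)
r_cen = pearson(x1, cen_prod)
```

With these toy data the raw product correlates above .9 with x1, while the centered product correlates below .2, which is why centering is the default when interaction terms are requested.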
The output is a set of multiple correlations, one for each dependent variable in the y set, as well as the set of canonical correlations.
An additional output is the R2 found using Cohen's set correlation (Cohen, 1982). This is a measure of how much variance the x and y sets share.
Cohen (1982) introduced the set correlation, a multivariate generalization of the multiple correlation, to measure the overall relationship between two sets of variables. It is an application of canonical correlation (Hotelling, 1936) and equals \(1 - \prod(1-\rho_i^2)\), where \(\rho_i^2\) is the ith squared canonical correlation. Set correlation is the amount of shared variance (R2) between two sets of variables. With the addition of a third, covariate set, set correlation will find multivariate R2, as well as partial and semi-partial R2. (The semi-partial and bipartial options are not yet implemented.) Details on set correlation may be found in Cohen (1982), Cohen (1988) and Cohen, Cohen, West and Aiken (2003).
R2 between two sets is just $$R^2 = 1- \frac{\left | R \right |}{\left | R_y \right | \left |R_x\right |} = 1 - \prod(1-\rho_i^2) $$ where R is the complete correlation matrix of the x and y variables and Rx and Ry are the correlation matrices of the two sets considered separately.
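A quick numeric check of this determinant identity, written in Python purely to illustrate the arithmetic (not the R implementation): with one predictor and one criterion correlating .6, the set correlation must reduce to the familiar squared correlation of .36.

```python
def det(m):
    """Determinant by recursive Laplace expansion; fine for tiny matrices."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# one x, one y, r = .6: the full matrix and the two (trivial) set matrices
R_full = [[1.0, 0.6],
          [0.6, 1.0]]
Rx = [[1.0]]
Ry = [[1.0]]

# R^2 = 1 - |R| / (|Ry| |Rx|)
R2_set = 1 - det(R_full) / (det(Rx) * det(Ry))  # = 1 - (1 - .36) = .36
```

The single canonical correlation here is just r = .6, so \(1 - \prod(1-\rho_i^2)\) gives the same .36, as the identity requires.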
Unfortunately, the R2 is sensitive to any one of the canonical correlations being very high. An alternative, T2, is the proportion of additive variance and is the average of the squared canonical correlations (Cohen et al., 2003); see also Cramer and Nicewander (1979). This average, because it includes some very small canonical correlations, will tend to be too small. Cohen et al.'s admonition is appropriate: "In the final analysis, however, analysts must be guided by their substantive and methodological conceptions of the problem at hand in their choice of a measure of association." (p. 613).
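The contrast between the two indices is easy to see with hypothetical canonical correlations (a Python illustration of the formulas, not of the function's output): one large canonical correlation dominates R2, while T2 averages it down.

```python
from math import prod

# hypothetical canonical correlations: one large, one trivial
rhos = [0.9, 0.1]

# set correlation: dominated by the .9
R2 = 1 - prod(1 - r ** 2 for r in rhos)      # 1 - (.19)(.99) = .8119

# proportion of additive variance: the average squared canonical
T2 = sum(r ** 2 for r in rhos) / len(rhos)   # (.81 + .01) / 2 = .41
```

Here R2 is roughly .81 while T2 is .41, which is the sensitivity (and the "too small" tendency) the text describes.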
Yet another measure of the association between two sets is just the simple, unweighted correlation between the two sets. That is,
$$R_{uw} =\frac{ 1 R_{xy} 1' }{(1R_{yy}1')^{.5} (1R_{xx}1')^{.5}} $$ where Rxy is the matrix of correlations between the two sets and 1 is a vector of ones. That is, this is just the ratio of the simple (unweighted) sums of the correlations in each matrix. This technique exemplifies the robust beauty of linear models and is particularly appropriate in the case of one dimension in both x and y; it will be a drastic underestimate when the betas of the items differ in sign.
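A sketch of this computation with hypothetical correlation values (Python, for illustration; since 1 is a row vector of ones, each quadratic form 1 M 1' is simply the grand sum of the matrix):

```python
from math import sqrt

# hypothetical within-set and between-set correlation matrices
Rxx = [[1.0, 0.3],
       [0.3, 1.0]]          # two x items
Ryy = [[1.0, 0.4],
       [0.4, 1.0]]          # two y items
Rxy = [[0.2, 0.25],
       [0.3, 0.15]]         # correlations between the sets

def grand_sum(m):
    """1 M 1': the sum of every element of the matrix."""
    return sum(sum(row) for row in m)

# unweighted correlation between the two composites
R_uw = grand_sum(Rxy) / (sqrt(grand_sum(Ryy)) * sqrt(grand_sum(Rxx)))
```

With these values R_uw is about .33; note that if the between-set correlations had mixed signs they would partially cancel in the numerator, which is exactly the underestimation the text warns about.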
When finding the unweighted correlations, as is done in alpha, items are flipped so that they are all positively signed.
A typical use in the SAPA project is to form item composites by clustering or factoring (see fa, ICLUST, principal), extract the clusters from these results (factor2cluster), and then form the composite correlation matrix using cluster.cor. The variables in this reduced matrix may then be used in multiple R procedures using set.cor.
Although the overall matrix can have missing correlations, the correlations in the subset of the matrix used for prediction must exist.
If the number of observations is entered, then the conventional confidence intervals, statistical significance, and shrinkage estimates are reported.
If the input is rectangular (not square), correlations or covariances are found from the data.
The print function reports t and p values for the beta weights; the summary function just reports the beta weights.
The Variance Inflation Factor is reported but should be taken with the normal cautions of interpretation discussed by Guide and Ketokivi. That is to say, VIF > 10 is not a magic cutoff to define collinearity. It is merely 1/(1 - smc(R(x))).
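To make the formula concrete, a hypothetical two-predictor case (Python, illustrating only the arithmetic; for two predictors the smc of one x with the other is just their squared correlation): even predictors correlating .9 stay below the supposed cutoff of 10.

```python
# hypothetical correlation between the two predictors
r = 0.9

# smc: squared multiple correlation of x1 with the remaining x's
smc = r ** 2
vif = 1 / (1 - smc)          # about 5.26

# equivalently, the diagonal of the inverse of R(x) gives the VIFs;
# for a 2 x 2 correlation matrix that diagonal element is 1 / (1 - r^2)
vif_from_inverse = 1 / (1 - r ** 2)
```

A VIF of about 5.3 for predictors sharing 81% of their variance shows why a mechanical VIF > 10 rule can miss quite severe collinearity.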
matReg is primarily a helper function for mediate but is a general multiple regression function given a covariance matrix and the specified x, y and z variables. Its output includes betas, se, t, p and R2. The call includes m for mediation variables, but these are only used to adjust the degrees of freedom.
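The core arithmetic of regression from a correlation matrix can be sketched as follows (a minimal Python illustration with hypothetical values, not matReg's R interface): the standardized betas are the solution of Rx beta = rxy, and R2 is the sum of beta times validity.

```python
# hypothetical predictor intercorrelations and validities
Rx = [[1.0, 0.3],
      [0.3, 1.0]]
rxy = [0.5, 0.4]

# beta = Rx^{-1} rxy, using the closed-form inverse of a 2 x 2 matrix
d = Rx[0][0] * Rx[1][1] - Rx[0][1] * Rx[1][0]   # determinant = .91
beta = [(Rx[1][1] * rxy[0] - Rx[0][1] * rxy[1]) / d,
        (Rx[0][0] * rxy[1] - Rx[1][0] * rxy[0]) / d]

# squared multiple correlation: sum of beta_i * r_{x_i y}
R2 = sum(b * r for b, r in zip(beta, rxy))
```

Given the number of observations, the standard errors (and hence t and p) follow from R2 and the degrees of freedom, which is where the m (mediation) adjustment described above enters.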
matReg does not work on data matrices, nor does it take formula input. It is really just a helper function for mediate.