There are two ways to use partial.r. One is to find the complete partial correlation matrix (that is, to partial all the other variables out of each variable). To do this, just specify the raw data matrix or the correlation matrix. (In the case of raw data, the correlations will be found according to use and method.)
This is useful in the case of multiple regression: if we think of the data as an X matrix and a Y vector (D = X + Y) with correlations R, then the partial correlations of the X predictors are just the last column of R^(-1). See the Tal.Or example below.
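partial.r itself is an R function (in the psych package), but the underlying algebra is language-agnostic. A minimal NumPy sketch of the complete partial correlation matrix, using the standard inverse-correlation identity (the function name partial_corr_matrix is illustrative, not the psych source):

```python
import numpy as np

def partial_corr_matrix(R):
    """Partial correlation of each pair of variables, controlling for all
    the others.  Uses the identity pcor_ij = -P_ij / sqrt(P_ii * P_jj),
    where P = R^(-1)."""
    P = np.linalg.inv(R)
    d = np.sqrt(np.diag(P))
    pcor = -P / np.outer(d, d)   # off-diagonal entries are the partials
    np.fill_diagonal(pcor, 1.0)  # restore unit diagonal
    return pcor
```

Note that the raw entries of R^(-1) must be rescaled by its diagonal to lie in [-1, 1]; the sign flip and rescaling are what the outer-product division does.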
The second usage is to partial a set of variables (y) out of another set (x). It is sometimes convenient to partial the effect of a number of variables (e.g., sex, age, education) out of the correlations of another set of variables. This could be done laboriously by finding the residuals of various multiple regressions and then correlating these residuals. The matrix algebra alternative is to do it directly.
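A hedged NumPy sketch of that direct matrix-algebra route (the name partial_out and the index arguments are illustrative assumptions, not psych's implementation): the covariances of the x set after removing the y set are Rxx - Rxy Ryy^(-1) Ryx, rescaled to a correlation matrix.

```python
import numpy as np

def partial_out(R, x_idx, y_idx):
    """Correlations among the x variables after partialling out the y
    variables, computed directly from the joint correlation matrix R."""
    Rxx = R[np.ix_(x_idx, x_idx)]
    Rxy = R[np.ix_(x_idx, y_idx)]
    Ryy = R[np.ix_(y_idx, y_idx)]
    # residual covariance of x given y
    C = Rxx - Rxy @ np.linalg.inv(Ryy) @ Rxy.T
    d = np.sqrt(np.diag(C))
    return C / np.outer(d, d)    # rescale to unit diagonal
```

This is algebraically equivalent to regressing each x on the y set and correlating the residuals, without ever forming the residuals.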
To find the confidence intervals and "significance" of the correlations, use the corr.p function with n = n - s, where s is the number of covariates.
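The adjustment amounts to reducing the degrees of freedom by the number of covariates; a small illustrative Python sketch of the resulting t test (not the corr.p source; passing n - s as the sample size to corr.p achieves the same correction):

```python
import math

def partial_corr_test(r, n, s):
    """t statistic for a partial correlation r based on n observations
    with s covariates partialled out.  df = n - s - 2, i.e. the usual
    n - 2 with the sample size replaced by n - s."""
    df = n - s - 2
    t = r * math.sqrt(df) / math.sqrt(1 - r * r)
    return t, df
```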
Following a thoughtful request from Fransisco Wilheim, partial.r now finds just the correlations of the variables specified in the call (previously I had found the entire correlation matrix, which was a waste of time and broke if some variables were non-numeric).
In the case of non-positive definite matrices, the Pinv (pseudo-inverse) of the matrix is found.
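A positive definite correlation matrix can be inverted directly; a singular or non-positive definite one cannot, so a Moore-Penrose pseudo-inverse is substituted. A minimal NumPy sketch of that fallback (safe_inverse is a hypothetical name; psych may detect the condition differently):

```python
import numpy as np

def safe_inverse(R):
    """Invert R normally, falling back to the Moore-Penrose
    pseudo-inverse when R is not positive definite."""
    try:
        # Cholesky succeeds only for (numerically) positive definite matrices
        np.linalg.cholesky(R)
        return np.linalg.inv(R)
    except np.linalg.LinAlgError:
        return np.linalg.pinv(R)
```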