Function to perform Partial Least Squares (PLS) regression.
pls(X,
Y,
ncomp = 2,
scale = TRUE,
mode = c("regression", "canonical", "invariant", "classic"),
tol = 1e-06,
max.iter = 100,
near.zero.var = FALSE,
logratio = "none",
multilevel = NULL,
all.outputs = TRUE)
numeric matrix of predictors. NAs are allowed.
numeric vector or matrix of responses (for multi-response models). NAs are allowed.
the number of components to include in the model. Defaults to 2.
boolean. If scale = TRUE, each block is standardized to zero means and unit variances (default: TRUE).
character string. What type of algorithm to use, (partially) matching one of "regression", "canonical", "invariant" or "classic". See Details.
Convergence stopping value.
integer, the maximum number of iterations.
boolean, see the internal nearZeroVar function (should be set to TRUE in particular for data with many zero values). Setting this argument to FALSE (when appropriate) will speed up the computations. Default value is FALSE.
one of 'none' or 'CLR'. Defaults to 'none'.
Design matrix for repeated measurement analysis, where multilevel decomposition is required. For a one-factor decomposition, the repeated measures on each individual, i.e. the individual IDs, are input as the first column. For a two-level factor decomposition, the 2nd AND 3rd columns indicate those factors. See examples in ?spls.
boolean. Computation can be faster when some specific (and non-essential) outputs are not calculated. Default = TRUE.
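As a minimal sketch of how these arguments might be set explicitly (the particular values below are arbitrary illustrations, using the liver.toxicity data shipped with mixOmics, see Examples):
library(mixOmics)
data(liver.toxicity)
# illustrative, non-default settings only
pls.fit <- pls(X = liver.toxicity$gene, Y = liver.toxicity$clinic,
               ncomp = 3, scale = TRUE, mode = "regression",
               tol = 1e-06, max.iter = 500, near.zero.var = TRUE)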
pls returns an object of class "pls", a list that contains the following components:
the centered and standardized original predictor matrix.
the centered and standardized original response vector or matrix.
the number of components included in the model.
the algorithm used to fit the model.
list containing the variates.
list containing the estimated loadings for the \(X\) and \(Y\) variates.
list containing the names to be used for individuals and variables.
the tolerance used in the iterative algorithm, used for subsequent S3 methods.
number of iterations of the algorithm for each component.
the maximum number of iterations, used for subsequent S3 methods.
list containing the zero- or near-zero predictors information.
whether scaling was applied per predictor.
whether log ratio transformation for relative proportion data was applied, and if so, which type of transformation.
amount of variance explained per component (note that contrary to PCA, this amount may not decrease as the aim of the method is not to maximise the variance, but the covariance between data sets).
numeric matrix of predictors in X that was input, before any scaling / logratio / multilevel transformation.
matrix of coefficients from the regression of X / residual matrices X on the X-variates, to be used internally by predict.
residual matrices X for each dimension.
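As a brief sketch of how these components can be inspected (using the liver.toxicity data from the Examples section; the exact set of list elements may vary slightly across mixOmics versions):
library(mixOmics)
data(liver.toxicity)
toxicity.pls <- pls(liver.toxicity$gene, liver.toxicity$clinic, ncomp = 3)
str(toxicity.pls, max.level = 1)      # names of all returned components
head(toxicity.pls$variates$X)         # variates (scores) of the X block
head(toxicity.pls$loadings$X[, 1])    # loading vector of the first X-component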
The pls function fits PLS models with \(1, \ldots,\) ncomp components.
Multi-response models are fully supported. The X and Y datasets can contain missing values.
The type of algorithm to use is specified with the mode argument. Four PLS algorithms are available: PLS regression ("regression"), PLS canonical analysis ("canonical"), redundancy analysis ("invariant") and the classical PLS algorithm ("classic") (see References). The different modes relate to how the Y matrix is deflated across the iterations of the algorithm, i.e. the different components.
- Regression mode: the Y matrix is deflated with respect to the information extracted/modelled from the local regression on X. Here the goal is to predict Y from X (Y and X play an asymmetric role). Consequently the latent variables computed to predict Y from X are different from those computed to predict X from Y.
- Canonical mode: the Y matrix is deflated with respect to the information extracted/modelled from the local regression on Y. Here X and Y play a symmetric role and the goal is similar to a Canonical Correlation type of analysis.
- Invariant mode: the Y matrix is not deflated.
- Classic mode: similar to regression mode. It gives identical results for the variates and loadings associated with the X data set, but differences for the loading vectors associated with the Y data set (different normalisations are used). Classic mode is the PLS2 model as defined by Tenenhaus (1998), Chap 9.
Note that in all cases the results are the same on the first component as deflation only starts after component 1.
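A minimal sketch of this point, using the linnerud data from the Examples section: the first-component X-variates should coincide across modes, while later components typically differ.
library(mixOmics)
data(linnerud)
X <- linnerud$exercise
Y <- linnerud$physiological
pls.reg <- pls(X, Y, ncomp = 2, mode = "regression")
pls.can <- pls(X, Y, ncomp = 2, mode = "canonical")
# identical on component 1, since deflation only starts after component 1
all.equal(pls.reg$variates$X[, 1], pls.can$variates$X[, 1])
# component 2 typically differs between the two modes
all.equal(pls.reg$variates$X[, 2], pls.can$variates$X[, 2])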
The estimation of the missing values can be performed by the reconstitution of the data matrix using the nipals function. Otherwise, missing values are handled by casewise deletion in the pls function without having to delete the rows with missing data.
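A minimal sketch of this casewise handling, assuming the liver.toxicity data from the Examples section: a few cells of X are set to missing at random and the model is still fitted on all rows.
library(mixOmics)
data(liver.toxicity)
X.na <- as.matrix(liver.toxicity$gene)
Y <- liver.toxicity$clinic
set.seed(42)
X.na[sample(length(X.na), 20)] <- NA   # introduce 20 missing cells in X
pls.na <- pls(X.na, Y, ncomp = 2)      # rows with NAs are kept and handled internally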
logratio transform and multilevel analysis are performed sequentially as internal pre-processing steps, through logratio.transfo and withinVariation respectively.
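The sketch below shows how the logratio and multilevel arguments might be supplied; the simulated data and the design column name are illustrative assumptions only.
library(mixOmics)
set.seed(1)
# simulated compositional predictors (strictly positive proportions) and responses
counts <- matrix(rpois(40 * 50, lambda = 10) + 1, nrow = 40, ncol = 50)
X.prop <- counts / rowSums(counts)
Y.sim  <- matrix(rnorm(40 * 3), nrow = 40)
pls.clr <- pls(X.prop, Y.sim, ncomp = 2, logratio = "CLR")

# one-factor multilevel decomposition: the first column of the design holds the
# individual IDs of the repeated measurements (here 20 hypothetical subjects, 2 each)
design <- data.frame(sample = rep(1:20, each = 2))
pls.ml <- pls(X.prop, Y.sim, ncomp = 2, multilevel = design)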
Tenenhaus, M. (1998). La régression PLS: théorie et pratique. Paris: Editions Technip.
Wold H. (1966). Estimation of principal components and related models by iterative least squares. In: Krishnaiah, P. R. (editors), Multivariate Analysis. Academic Press, N.Y., 391-420.
Abdi H (2010). Partial least squares regression and projection on latent structure regression (PLS Regression). Wiley Interdisciplinary Reviews: Computational Statistics, 2(1), 97-106.
spls, summary, plotIndiv, plotVar, predict, perf and http://www.mixOmics.org for more details.
# NOT RUN {
data(linnerud)
X <- linnerud$exercise
Y <- linnerud$physiological
linn.pls <- pls(X, Y, mode = "classic")
data(liver.toxicity)
X <- liver.toxicity$gene
Y <- liver.toxicity$clinic
toxicity.pls <- pls(X, Y, ncomp = 3)
# }
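As a hedged follow-up sketch, the fitted object from the example above can be passed to the downstream methods listed in See Also, for instance:
# sample plot and cross-validated performance for the liver toxicity fit
plotIndiv(toxicity.pls, comp = c(1, 2))
perf.pls <- perf(toxicity.pls, validation = "Mfold", folds = 5)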