
mixOmics (version 6.3.2)

pls: Partial Least Squares (PLS) Regression

Description

Function to perform Partial Least Squares (PLS) regression.

Usage

pls(X,
Y,
ncomp = 2,
scale = TRUE,
mode = c("regression", "canonical", "invariant", "classic"),
tol = 1e-06,
max.iter = 100,
near.zero.var = FALSE,
logratio="none",
multilevel=NULL,
all.outputs = TRUE)

Arguments

X

numeric matrix of predictors. NAs are allowed.

Y

numeric vector or matrix of responses (for multi-response models). NAs are allowed.

ncomp

the number of components to include in the model. Defaults to 2.

scale

boolean. If scale = TRUE, each block is standardized to zero mean and unit variance (default: TRUE).

mode

character string. What type of algorithm to use, (partially) matching one of "regression", "canonical", "invariant" or "classic". See Details.

tol

Positive numeric used as the convergence stopping value of the iterative algorithm.

max.iter

integer, the maximum number of iterations.

near.zero.var

boolean, see the internal nearZeroVar function (should be set to TRUE in particular for data with many zero values). Setting this argument to FALSE (when appropriate) will speed up the computations. Default value is FALSE.

logratio

character string, one of 'none' or 'CLR' (centered log-ratio transformation). Defaults to 'none'.

multilevel

Design matrix for repeated measurement analysis, where multilevel decomposition is required. For a one-factor decomposition, the repeated measures on each individual (i.e. the individual IDs) are input as the first column. For a two-factor decomposition, the 2nd and 3rd columns indicate those factors. See the examples in ?spls.

all.outputs

boolean. Computation can be faster when some specific (and non-essential) outputs are not calculated. Default = TRUE.

Value

pls returns an object of class "pls", a list that contains the following components:

X

the centered and standardized original predictor matrix.

Y

the centered and standardized original response vector or matrix.

ncomp

the number of components included in the model.

mode

the algorithm used to fit the model.

variates

list containing the variates.

loadings

list containing the estimated loadings for the \(X\) and \(Y\) variates.

names

list containing the names to be used for individuals and variables.

tol

the tolerance used in the iterative algorithm; used for subsequent S3 methods.

iter

number of iterations of the algorithm for each component.

max.iter

the maximum number of iterations; used for subsequent S3 methods.

nzv

list containing the zero- or near-zero predictors information.

scale

whether scaling was applied per predictor.

logratio

whether log ratio transformation for relative proportion data was applied, and if so, which type of transformation.

explained_variance

amount of variance explained per component (note that contrary to PCA, this amount may not decrease as the aim of the method is not to maximise the variance, but the covariance between data sets).

input.X

numeric matrix of predictors in X that was input, before any scaling / log-ratio / multilevel transformation.

mat.c

matrix of coefficients from the regression of X / residual matrices X on the X-variates, to be used internally by predict.

defl.matrix

residual matrices X for each dimension.
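
For instance (a minimal sketch based on the linnerud example shown under Examples below; the exact contents of each slot may differ slightly between package versions), these components can be inspected directly on a fitted object:

library(mixOmics)
data(linnerud)
linn.pls <- pls(linnerud$exercise, linnerud$physiological, ncomp = 2)

linn.pls$ncomp                # number of components fitted
linn.pls$mode                 # algorithm/deflation mode used
head(linn.pls$variates$X)     # X-variates (sample scores)
linn.pls$loadings$Y           # loading vectors for the Y block
linn.pls$explained_variance   # variance explained per component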

Details

The pls function fits PLS models with \(1, \ldots ,\) ncomp components. Multi-response models are fully supported. The X and Y datasets can contain missing values.

The type of algorithm to use is specified with the mode argument. Four PLS algorithms are available: PLS regression ("regression"), PLS canonical analysis ("canonical"), redundancy analysis ("invariant") and the classical PLS algorithm ("classic") (see References). The modes differ in how the Y matrix is deflated across the iterations of the algorithm, i.e. across the successive components.

- Regression mode: the Y matrix is deflated with respect to the information extracted/modelled from the local regression on X. Here the goal is to predict Y from X (Y and X play an asymmetric role). Consequently the latent variables computed to predict Y from X are different from those computed to predict X from Y.

- Canonical mode: the Y matrix is deflated with respect to the information extracted/modelled from the local regression on Y. Here X and Y play a symmetric role and the goal is similar to that of a Canonical Correlation type of analysis.

- Invariant mode: the Y matrix is not deflated.

- Classic mode: similar to regression mode. It gives identical results for the variates and loadings associated with the X data set, but different loading vectors associated with the Y data set (different normalisations are used). Classic mode is the PLS2 model as defined by Tenenhaus (1998), Chap 9.

Note that in all cases the results are the same on the first component as deflation only starts after component 1.
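
A small sketch of this property (assuming the linnerud data used in the Examples): the first-component variates agree across modes, while later components generally differ.

library(mixOmics)
data(linnerud)
X <- linnerud$exercise
Y <- linnerud$physiological

pls.reg <- pls(X, Y, ncomp = 2, mode = "regression")
pls.can <- pls(X, Y, ncomp = 2, mode = "canonical")

# identical on component 1, since deflation only starts after component 1
all.equal(pls.reg$variates$X[, 1], pls.can$variates$X[, 1])
# but generally different on component 2
all.equal(pls.reg$variates$X[, 2], pls.can$variates$X[, 2])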

The estimation of the missing values can be performed by reconstituting the data matrix using the nipals function. Otherwise, missing values are handled internally by the pls function through casewise deletion within the algorithm, without having to delete the rows with missing data.
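
As an illustration (a minimal sketch; the NAs below are introduced artificially into the liver.toxicity data), the fit still runs when X contains missing values:

library(mixOmics)
data(liver.toxicity)
X <- as.matrix(liver.toxicity$gene)
Y <- liver.toxicity$clinic

# artificially introduce a few missing values into X
set.seed(42)
X[sample(length(X), 20)] <- NA

toxicity.pls.na <- pls(X, Y, ncomp = 3)   # NAs handled internally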

The logratio transformation and multilevel analysis are performed sequentially as internal pre-processing steps, through logratio.transfo and withinVariation respectively.
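
A sketch of how these arguments are supplied; the design matrix and the X.rep / X.compo objects below are hypothetical placeholders, not datasets shipped with the package:

# hypothetical one-factor design: the first column holds the individual IDs
# of the repeated measurements (here, 16 individuals each measured 4 times)
design <- data.frame(sample = rep(1:16, times = 4))

# multilevel decomposition (performed internally via withinVariation)
# pls.ml <- pls(X.rep, Y.rep, ncomp = 2, multilevel = design)

# centered log-ratio transformation for compositional data
# (performed internally via logratio.transfo)
# pls.clr <- pls(X.compo, Y, ncomp = 2, logratio = "CLR")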

References

Tenenhaus, M. (1998). La régression PLS: théorie et pratique. Paris: Editions Technip.

Wold H. (1966). Estimation of principal components and related models by iterative least squares. In: Krishnaiah, P. R. (editors), Multivariate Analysis. Academic Press, N.Y., 391-420.

Abdi H (2010). Partial least squares regression and projection on latent structure regression (PLS Regression). Wiley Interdisciplinary Reviews: Computational Statistics, 2(1), 97-106.

See Also

spls, summary, plotIndiv, plotVar, predict, perf and http://www.mixOmics.org for more details.

Examples

# PLS in "classic" mode on the linnerud data
data(linnerud)
X <- linnerud$exercise
Y <- linnerud$physiological
linn.pls <- pls(X, Y, mode = "classic")

# PLS regression (default mode) with 3 components on the liver toxicity data
data(liver.toxicity)
X <- liver.toxicity$gene
Y <- liver.toxicity$clinic

toxicity.pls <- pls(X, Y, ncomp = 3)
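
# the fitted object can then be passed to the functions listed under
# 'See Also', for instance (sample plot and correlation circle plot):
plotIndiv(toxicity.pls)
plotVar(toxicity.pls)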
