Fits a PLSR model with the kernel algorithm.
kernelpls.fit(X, Y, ncomp, center = TRUE, stripped = FALSE, ...)

A list containing the following components is returned:
coefficients: an array of regression coefficients for 1, ..., ncomp components. The dimensions of coefficients are c(nvar, npred, ncomp), with nvar the number of X variables and npred the number of variables to be predicted in Y.
scores: a matrix of scores.
loadings: a matrix of loadings.
loading.weights: a matrix of loading weights.
Yscores: a matrix of Y-scores.
Yloadings: a matrix of Y-loadings.
projection: the projection matrix used to convert X to scores.
Xmeans: a vector of means of the X variables.
Ymeans: a vector of means of the Y variables.
fitted.values: an array of fitted values. The dimensions of fitted.values are c(nobj, npred, ncomp), with nobj the number of samples and npred the number of Y variables.
residuals: an array of regression residuals. It has the same dimensions as fitted.values.
Xvar: a vector with the amount of X-variance explained by each component.
Xtotvar: total variance in X.
If stripped is TRUE, only the components coefficients,
Xmeans and Ymeans are returned.
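As an illustration (a minimal sketch on simulated data, not part of the package documentation), the dimensions of the returned arrays can be checked against the description above:

library(pls)

## Simulated data: 20 observations, 5 X variables, 2 responses
set.seed(1)
X <- matrix(rnorm(20 * 5), nrow = 20)
Y <- matrix(rnorm(20 * 2), nrow = 20)

fit <- kernelpls.fit(X, Y, ncomp = 3)

dim(fit$coefficients)   ## c(nvar, npred, ncomp) = 5 2 3
dim(fit$fitted.values)  ## c(nobj, npred, ncomp) = 20 2 3
dim(fit$scores)         ## 20 observations by 3 components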
X: a matrix of observations. NAs and Infs are not allowed.
Y: a vector or matrix of responses. NAs and Infs are not allowed.
ncomp: the number of components to be used in the modelling.
center: logical, determines whether the X and Y matrices are mean centered or not. Default is to perform mean centering.
stripped: logical. If TRUE, the calculations are stripped as much as possible for speed; this is meant for use with cross-validation or simulations when only the coefficients are needed. Defaults to FALSE.
...: other arguments. Currently ignored.
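For example (again a minimal, hedged sketch on simulated data), stripped = TRUE reduces the returned list to the three components needed for computing predictions, which speeds up repeated fits in cross-validation loops:

library(pls)

set.seed(2)
X <- matrix(rnorm(30 * 4), nrow = 30)
Y <- matrix(rnorm(30), ncol = 1)

full <- kernelpls.fit(X, Y, ncomp = 2)
lean <- kernelpls.fit(X, Y, ncomp = 2, stripped = TRUE)

names(full)  ## all components listed under Value
names(lean)  ## "coefficients" "Xmeans" "Ymeans"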
Ron Wehrens and Bjørn-Helge Mevik
This function should not be called directly, but through the generic
functions plsr or mvr with the argument
method="kernelpls" (default). Kernel PLS is particularly efficient
when the number of objects is (much) larger than the number of variables.
The results are equal to those of the NIPALS algorithm. Several different forms of
kernel PLS have been described in the literature, e.g. by de Jong and ter
Braak, and two algorithms by Dayal and MacGregor. This function implements
the fastest of the latter, which does not calculate the crossproduct matrix of X. In
the Dayal and MacGregor paper, this is “algorithm 1”.
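In ordinary use the function is therefore reached through plsr; a minimal sketch (using the yarn data set shipped with the pls package) could look like:

library(pls)
data(yarn)

## method = "kernelpls" is the default and could be omitted
mod <- plsr(density ~ NIR, ncomp = 6, data = yarn, method = "kernelpls")
summary(mod)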
de Jong, S. and ter Braak, C. J. F. (1994) Comments on the PLS kernel algorithm. Journal of Chemometrics, 8, 169--174.
Dayal, B. S. and MacGregor, J. F. (1997) Improved PLS algorithms. Journal of Chemometrics, 11, 73--85.
mvr, plsr, cppls, pcr, widekernelpls.fit, simpls.fit, oscorespls.fit