This function computes the Partial Least Squares fit. The algorithm scales mainly with the number of observations rather than the number of variables.
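The scaling behaviour comes from the kernel representation of PLS (see Kraemer & Braun, 2007, in the References), which operates on the n x n Gram matrix XX'. The following lines are an illustrative sketch of that idea, not the internals of kernel.pls.fit:

n <- 50
p <- 5000                 # many more variables than observations
X <- matrix(rnorm(n * p), ncol = p)
K <- tcrossprod(X)        # Gram matrix K = X %*% t(X), size n x n
dim(K)                    # 50 x 50, independent of p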
Usage
kernel.pls.fit(X, y, m = ncol(X), compute.jacobian = FALSE,
  DoF.max = min(ncol(X) + 1, nrow(X) - 1))
Arguments
X
matrix of predictor observations.
y
vector of response observations. The length of y is the same as the number of rows of X.
m
maximal number of Partial Least Squares components. Default is m=ncol(X).
compute.jacobian
Should the first derivative of the regression coefficients be computed as well? Default is FALSE.
DoF.max
upper bound on the Degrees of Freedom. Default is min(ncol(X)+1,nrow(X)-1).
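For reference, a call with every argument written out at its documented default (a sketch using simulated data; the defaults are those listed above):

X <- matrix(rnorm(30 * 4), ncol = 4)
y <- rnorm(30)
fit <- kernel.pls.fit(X, y, m = ncol(X), compute.jacobian = FALSE,
                      DoF.max = min(ncol(X) + 1, nrow(X) - 1))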
Value
coefficients
matrix of regression coefficients
intercept
vector of regression intercepts
DoF
Degrees of Freedom
sigmahat
vector of estimated model errors
Yhat
matrix of fitted values
yhat
vector of squared lengths of the fitted values
RSS
vector of residual sums of squares
covariance
NULL object.
TT
matrix of normalized PLS components
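A minimal sketch of inspecting the returned list (component names as documented above; the assumption that each matrix has one column, and each vector one entry, per fitted model size should be checked with str):

n <- 20
p <- 4
X <- matrix(rnorm(n * p), ncol = p)
y <- rnorm(n)
fit <- kernel.pls.fit(X, y, m = p)
str(fit$coefficients)   # regression coefficients (assumed: one column per model size)
fit$intercept           # regression intercepts
fit$RSS                 # residual sums of squares
fit$DoF                 # Degrees of Freedom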
Details
We first standardize X to zero mean and unit variance.
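The standardization mentioned above corresponds to the following stand-alone computation (a sketch of the preprocessing step, not the package internals):

X <- matrix(rnorm(100 * 3), ncol = 3)
X.std <- scale(X, center = TRUE, scale = TRUE)  # zero mean, unit variance per column
round(colMeans(X.std), 10)                      # numerically zero
apply(X.std, 2, sd)                             # all equal to one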
References
Kraemer, N., Sugiyama, M. (2010). "The Degrees of Freedom of Partial Least Squares Regression". Preprint, http://arxiv.org/abs/1002.4112
Kraemer, N., Braun, M.L. (2007). "Kernelizing PLS, Degrees of Freedom, and Efficient Model Selection". Proceedings of the 24th International Conference on Machine Learning, Omni Press, 441-448.
Examples
n <- 50  # number of observations
p <- 5   # number of variables
X <- matrix(rnorm(n * p), ncol = p)
y <- rnorm(n)
pls.object <- kernel.pls.fit(X, y, m = 5, compute.jacobian = TRUE)
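A hedged follow-up to the example: the returned DoF and sigmahat (see Value) can be compared across model sizes; the exact mapping of vector positions to numbers of components is an assumption here.

pls.object$DoF                   # Degrees of Freedom per fitted model
pls.object$sigmahat              # estimated model error per fitted model
which.min(pls.object$sigmahat)   # position of the smallest estimated error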