This function computes the test error over several runs for different model selection strategies.
benchmark.pls(
X,
y,
m = ncol(X),
R = 20,
ratio = 0.8,
verbose = TRUE,
k = 10,
ratio.samples = 1,
use.kernel = FALSE,
criterion = "bic",
true.coefficients = NULL
)
data frame of size R x 5. It contains the test error for the five different methods for each of the R runs.
data frame of size R x 5. It contains the optimal number of components for the five different methods for each of the R runs.
data frame of size R x 5. It contains the Degrees of Freedom (corresponding to M) for the five different methods for each of the R runs.
data frame of size R x 4. It contains the runtime for all methods (apart from the zero model) for each of the R runs.
data frame of size R x 2. It contains the number of components for which the Krylov representation and the Lanczos representation return negative Degrees of Freedom, hereby indicating numerical problems.
if true.coefficients is available, this is a data frame of size R x 5. It contains the model error for the five different methods for each of the R runs.
data frame of size R x 5. It contains the estimation of the noise level provided by the five different methods for each of the R runs.
matrix of predictor observations.
vector of response observations. The length of y is the same as the number of rows of X.
maximal number of Partial Least Squares components. Default is m=ncol(X).
number of runs. Default is 20.
ratio of the number of training examples to (number of training examples + number of test examples). Default is 0.8.
If TRUE, the function displays the progress of the computation. Default is TRUE.
number of cross-validation splits. Default is 10.
ratio of (number of training examples + number of test examples)/nrow(X). Default is 1.
Use the kernel representation? Default is use.kernel=FALSE.
Choice of the model selection criterion. One of the three options "aic", "bic", "gmdl". Default is "bic".
The vector of true regression coefficients (without intercept), if available. Default is NULL.
Nicole Kraemer
The function estimates the optimal number of PLS components based on four different criteria: (1) cross-validation, (2) information criteria with the naive Degrees of Freedom DoF(m)=m+1, (3) information criteria with the Degrees of Freedom computed via a Lanczos representation of PLS, and (4) information criteria with the Degrees of Freedom computed via a Krylov representation of PLS. Note that the latter two options only differ with respect to the estimation of the model error.
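The information criteria above share a common form: a fit term plus a Degrees-of-Freedom penalty. The following is an illustrative sketch only, not code from the package; the function name info.criterion, the argument sigma2 (an estimate of the noise variance), and the exact penalty form RSS/n + penalty * DoF * sigma2 / n are assumptions for exposition.

```r
# Hypothetical sketch of a generic information criterion of the form
#   RSS/n + penalty * DoF * sigma2 / n,
# with penalty = 2 for AIC and log(n) for BIC. All names are illustrative,
# not part of the package's API.
info.criterion <- function(RSS, DoF, n, sigma2, type = c("aic", "bic")) {
  type <- match.arg(type)
  penalty <- if (type == "aic") 2 else log(n)
  RSS / n + penalty * DoF * sigma2 / n
}
```

Under this form, the naive choice DoF(m)=m+1 and the Lanczos/Krylov-based Degrees of Freedom plug into the same penalty term; only the DoF (and the noise-level estimate) differ across the methods compared by the benchmark.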
In addition, the function computes the test error of the "zero model", i.e. mean(y) on the training data is used for prediction.
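The zero model can be written down in a few lines. This is a hedged sketch, not code from the package; the function name zero.model.error is hypothetical.

```r
# Sketch of the "zero model" baseline: predict the training-set mean of y
# for every test observation and report the mean squared test error.
# (Illustrative only; not the package's internal implementation.)
zero.model.error <- function(y.train, y.test) {
  mean((y.test - mean(y.train))^2)
}
```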
If true.coefficients is available, the function also computes the model error for the different methods, i.e. the sum of squared differences between the true and the estimated regression coefficients.
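The model error described above is a one-liner. The sketch below is for illustration; the function name model.error and the argument beta.hat (an estimated coefficient vector, without intercept, of the same length as true.coefficients) are assumptions, not package API.

```r
# Sketch: model error as the sum of squared differences between the true
# and the estimated regression coefficients (both without intercept).
model.error <- function(true.coefficients, beta.hat) {
  sum((true.coefficients - beta.hat)^2)
}
```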
Kraemer, N., Sugiyama M. (2011). "The Degrees of Freedom of Partial Least Squares Regression". Journal of the American Statistical Association 106 (494) https://www.tandfonline.com/doi/abs/10.1198/jasa.2011.tm10107
pls.ic
, pls.cv
# generate artificial data
n <- 50 # number of examples
p <- 5  # number of variables
X <- matrix(rnorm(n * p), ncol = p)
true.coefficients <- runif(p, 1, 3)
y <- X %*% true.coefficients + rnorm(n, 0, 5)
my.benchmark <- benchmark.pls(X, y, R = 10, true.coefficients = true.coefficients)