
Description

This function requires the package plsgenomics. For S4 method information, see
pls_ldaCMA-methods.
Usage

pls_ldaCMA(X, y, f, learnind, comp = 2, plot = FALSE, models = FALSE)
Arguments

X: Gene expression data. Can be one of the following:
   - A matrix. Rows correspond to observations, columns to variables.
   - A data.frame, when f is not missing (see below).
   - An object of class ExpressionSet.

y: Class labels. Can be one of the following:
   - A numeric vector.
   - A factor.
   - A character string specifying the phenotype variable, if X is an ExpressionSet.
   - missing, if X is a data.frame and a proper formula f is provided.
   WARNING: The class labels will be re-coded to range from 0 to K-1, where K is the
   total number of different classes in the learning set.

f: A two-sided formula, if X is a data.frame. The left part corresponds to the class
   labels, the right part to the variables.

learnind: An index vector specifying the observations that belong to the learning set.
   May be missing; in that case, the learning set consists of all observations and
   predictions are made on the learning set.

comp: Number of Partial Least Squares components used for classification. Default is 2.
   NOTE: comp can also be optimized using tune.

plot: If comp <= 2, should the classification space of the Partial Least Squares
   components be plotted? Default is FALSE.

models: A logical value indicating whether the fitted model object shall be returned.
   Default is FALSE.
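As a quick illustration of the alternative interfaces described above, a minimal sketch
follows (the matrix interface is demonstrated in the Examples further down). The object
names eset and clinical, and the column name Diagnosis, are hypothetical and not part of
this page; whether the dot shorthand on the right-hand side of the formula is accepted
depends on the formula handling, so an explicit right-hand side may be needed instead.

### Sketch only: 'eset' is an assumed ExpressionSet whose phenotype data contain a
### column named "Diagnosis"; y then names that phenotype variable.
result_eset <- pls_ldaCMA(X = eset, y = "Diagnosis", learnind = learnind, comp = 2)

### Sketch only: 'clinical' is an assumed data.frame whose column 'Diagnosis' holds the
### class labels; y is left missing and a two-sided formula f is supplied instead.
result_df <- pls_ldaCMA(X = clinical, f = Diagnosis ~ ., learnind = learnind, comp = 2)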
References

Nguyen, D.V., Rocke, D.M. (2002).
Tumor classification by partial least squares using microarray gene expression data.
Bioinformatics 18, 39-50.

Boulesteix, A.L., Strimmer, K. (2007).
Partial least squares: a versatile tool for the analysis of high-dimensional genomic data.
Briefings in Bioinformatics 7:32-44.
See Also

compBoostCMA, dldaCMA, ElasticNetCMA, fdaCMA, flexdaCMA, gbmCMA, knnCMA, ldaCMA,
LassoCMA, nnetCMA, pknnCMA, plrCMA, pls_ldaCMA, pls_lrCMA, pls_rfCMA, pnnCMA, qdaCMA,
rfCMA, scdaCMA, shrinkldaCMA, svmCMA
Examples

### load Khan data
data(khan)
### extract class labels
khanY <- khan[,1]
### extract gene expression
khanX <- as.matrix(khan[,-1])
### select learningset
set.seed(111)
learnind <- sample(length(khanY), size=floor(2/3*length(khanY)))
### run the PLS-LDA classifier, without tuning
plsresult <- pls_ldaCMA(X=khanX, y=khanY, learnind=learnind, comp = 4)
### show results
show(plsresult)
ftable(plsresult)
plot(plsresult)
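The example reports results via show, ftable and plot. If a single error rate on the
held-out observations is wanted instead, a minimal sketch is given below; the slot names
y and yhat of the returned cloutput object are an assumption here, not something stated
on this page, so check the class documentation before relying on them.

### Sketch only: slot names y/yhat are assumed (see class ? cloutput)
truth <- plsresult@y      # true labels of the held-out observations (re-coded 0..K-1)
pred  <- plsresult@yhat   # predicted labels
mean(truth != pred)       # misclassification rate on the held-out observations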