msgl.cv(x, classes, sampleWeights = NULL,
  grouping = NULL, groupWeights = NULL,
  parameterWeights = NULL, alpha = 0.5,
  standardize = TRUE, lambda, fold = 10L,
  cv.indices = list(), sparse.data = FALSE,
  max.threads = 2L, seed = 331L,
  algorithm.config = sgl.standard.config)

Arguments

groupWeights
group weights. If groupWeights = NULL default weights will be used. Default weights are 0 for the intercept and $\sqrt{K \cdot \text{number of features in the group}}$ for all other groups.

fold
the fold of the cross validation. Ignored if cv.indices != NULL. If fold $\le$ max(table(classes)) then the data will be split into fold disjoint subsets keeping the ratio of classes approximately equal; otherwise the data will be split into fold disjoint subsets without keeping the ratio fixed.

cv.indices
a list of indices of a cross validation splitting (see the sketch after this section). If cv.indices = NULL then a random splitting will be generated using the fold argument.
sparse.data
if TRUE the x matrix will be treated as sparse; if x is a sparse matrix it will be treated as sparse by default (see the sketch after this section).

seed
the seed used for generating the random cross validation splitting, only used if fold $\le$ max(table(classes)).
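As a minimal sketch of the two arguments above (reusing the x, classes and lambda objects from the Examples section below), one can supply a hand-made splitting and a sparse design matrix. The layout assumed here for cv.indices, a list with one vector of validation-row indices per fold, is an assumption, not taken from this page.

library(Matrix)
library(msgl)

n <- nrow(x)
# Assumption: cv.indices is a list holding one vector of row indices per fold
cv.indices <- split(sample(n), rep(1:4, length.out = n))

# A sparse design matrix is treated as sparse by default
x.sparse <- Matrix(x, sparse = TRUE)

fit.cv <- msgl.cv(x.sparse, classes, alpha = .5, lambda = lambda,
  cv.indices = cv.indices)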
Value

link
the linear predictors, a list of length length(lambda) with one item for each lambda value, each item a matrix of size $K \times N$ containing the linear predictors.
response
the estimated probabilities, a list of length length(lambda) with one item for each lambda value, each item a matrix of size $K \times N$ containing the probabilities (see the sketch below).

classes
the estimated classes, a matrix of size $N \times$ length(lambda).
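For instance (a sketch, assuming the list layout described above), the cross validated output for a single lambda value can be pulled out of the fit.cv object created in the Examples section below:

# Cross validated output for the 10th lambda value
probs <- fit.cv$response[[10]]  # K x N matrix of probabilities
lin <- fit.cv$link[[10]]        # K x N matrix of linear predictors
dim(probs)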
Examples

data(SimData)
x <- sim.data$x
classes <- sim.data$classes
lambda <- msgl.lambda.seq(x, classes, alpha = .5, d = 25L, lambda.min = 0.03)
fit.cv <- msgl.cv(x, classes, alpha = .5, lambda = lambda)
# Misclassification count
colSums(fit.cv$classes != classes)
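A natural follow-up, not part of the original example, is to pick the lambda value with the lowest cross validated misclassification count:

# Misclassification count per lambda value
err <- colSums(fit.cv$classes != classes)
best <- which.min(err)  # index of the best lambda value
lambda[best]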