Cross validation for the regularised and flexible discriminant analysis with compositional data using the \(\alpha\)-transformation.
alfarda.tune(x, ina, a = seq(-1, 1, by = 0.1), nfolds = 10,
gam = seq(0, 1, by = 0.1), del = seq(0, 1, by = 0.1),
ncores = 1, folds = NULL, stratified = TRUE, seed = NULL)

alfafda.tune(x, ina, a = seq(-1, 1, by = 0.1), nfolds = 10,
folds = NULL, stratified = TRUE, seed = NULL, graph = FALSE)
A matrix with the available compositional data. Zeros are allowed.
A group indicator variable for the available data.
A vector with a grid of values of the power transformation; the values must lie between -1 and 1. If zero values are present, they must be greater than 0. If \(\alpha=0\), the isometric log-ratio transformation is applied.
The number of folds. Set to 10 by default.
A vector of values between 0 and 1. It is the weight given to the pooled covariance matrix relative to the diagonal matrix.
A vector of values between 0 and 1. It is the weight given to the LDA relative to the QDA.
The number of cores to use. If it is more than 1, parallel computing is performed. It is advisable to use it if you have many observations and/or many variables, otherwise it will slow down the process.
If you have a list with the folds, supply it here. You can also leave it NULL and the folds will be created internally.
Do you want the folds to be created in a stratified way? TRUE or FALSE.
You can specify your own seed number here or leave it NULL.
If graph is TRUE, a filled contour plot will appear.
For the alfarda.tune function, a list including:
The estimated optimal rate and the best values of \(\alpha\), \(\gamma\) and \(\delta\).
For the best value of \(\alpha\), the rates of correct classification averaged over all folds. It is a matrix whose rows correspond to the \(\gamma\) values and whose columns correspond to the \(\delta\) values.
The estimated standard errors of the "percent" matrix.
The runtime of the cross-validation procedure.
For the alfafda.tune function, a list including:
The performance of the fda in each fold for each value of \(\alpha\).
The average performance for each value of \(\alpha\).
The optimal value of \(\alpha\).
The runtime of the cross-validation procedure.
A k-fold cross-validation is performed for each combination of the tuning parameters and the estimated rates of correct classification are returned.
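If you prefer to control the data partition yourself, a list of folds can be supplied through the folds argument. The sketch below is only an illustration of one way to build such a list (the internal fold generation of the function may differ); the values of n and nfolds are hypothetical.

n <- 100   ## hypothetical number of observations
nfolds <- 10
## each list element holds the indices of one test fold
folds <- split( sample(1:n), rep(1:nfolds, length.out = n) )
## this list can then be passed to alfarda.tune or alfafda.tune via 'folds'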
Friedman J., Hastie T. and Tibshirani R. (2009). The Elements of Statistical Learning, 2nd edition. Springer, Berlin.
Tsagris M.T., Preston S. and Wood A.T.A. (2016). Improved classification for compositional data using the \(\alpha\)-transformation. Journal of Classification, 33(2): 243-261.
Hastie T., Tibshirani R. and Buja A. (1994). Flexible Discriminant Analysis by Optimal Scoring. Journal of the American Statistical Association, 89(428): 1255-1270.
alfa.rda, alfanb.tune, cv.dda, compknn.tune, rda.tune, cv.compnb
# NOT RUN {
library(MASS)
x <- as.matrix(fgl[, 2:9])
x <- x / rowSums(x)
ina <- fgl[, 10]
moda <- alfarda.tune(x, ina, a = seq(0.7, 1, by = 0.1), nfolds = 10,
gam = seq(0.1, 0.3, by = 0.1), del = seq(0.1, 0.3, by = 0.1) )
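# A hedged sketch of tuning the flexible discriminant analysis as well;
# the grid of alpha values is only illustrative.
modfda <- alfafda.tune(x, ina, a = seq(0.7, 1, by = 0.1), nfolds = 10,
graph = FALSE)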
# }