Tuning of the projection pursuit regression for compositional data. In addition, estimation of its predictive performance (average Kullback-Leibler divergence) via K-fold cross-validation.
compppr.tune(y, x, nfolds = 10, folds = NULL, seed = FALSE, nterms = 1:10,
type = "alr", yb = NULL, B = 1000 )
y: A matrix with the available compositional data; zeros are not allowed.
x: A matrix with the continuous predictor variables.
nfolds: The number of folds to use.
folds: If you already have a list with the folds, supply it here.
seed: If seed is TRUE, the results will always be the same.
nterms: The number of terms to try in the projection pursuit regression.
type: Either "alr" or "ilr", corresponding to the additive or the isometric log-ratio transformation, respectively.
yb: If you have already transformed the data using a log-ratio transformation, supply it here (see the brief sketch after this list). Otherwise leave it NULL.
B: The number of bootstrap re-samples to use for the unbiased estimation of the performance of the projection pursuit regression. If B = 1, no bootstrap is applied.
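For illustration only, a minimal base-R sketch of the additive log-ratio (alr) transformation that could be pre-computed and passed through yb; the use of the last component as the common divisor is an assumption of this sketch, not necessarily what the package uses internally.

# alr transform of a compositional matrix y (rows sum to 1, no zeros):
# alr(y)_j = log(y_j / y_D), with the last component as the divisor (assumed here)
y <- as.matrix(iris[, 1:4])
y <- y / rowSums(y)
D <- ncol(y)
yb <- log(y[, -D, drop = FALSE] / y[, D])   # n x (D - 1) matrix of log-ratio coordinates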
A list including:
The average Kullback-Leibler divergence.
The bootstrap bias corrected average Kullback-Leibler divergence. If no bootstrap was performed this is equal to the average Kullback-Leibler divergence.
The run time of the cross-validation procedure.
The function tunes the number of terms of the projection pursuit regression via K-fold cross-validation, using the average Kullback-Leibler divergence as the performance measure.
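To make the tuning idea concrete, below is a rough, hypothetical sketch of such a cross-validation loop written with the base stats::ppr() function: for each candidate number of terms the response is alr-transformed, a projection pursuit regression is fitted on the training folds, the held-out predictions are mapped back onto the simplex and scored by the average Kullback-Leibler divergence. The helper names (tune_ppr_alr, kl_div), the fold construction and the choice of divisor are assumptions; this is not the package's actual implementation.

# Average Kullback-Leibler divergence between observed and fitted compositions
kl_div <- function(obs, fit)  mean( rowSums( obs * log(obs / fit) ) )

tune_ppr_alr <- function(y, x, nterms = 1:10, nfolds = 10) {
  x <- as.matrix(x)
  D <- ncol(y)
  folds <- split( sample( nrow(y) ), rep(1:nfolds, length.out = nrow(y)) )
  score <- numeric( length(nterms) )
  for ( k in seq_along(nterms) ) {
    kls <- numeric(nfolds)
    for ( i in seq_along(folds) ) {
      test <- folds[[ i ]]
      ytr <- log( y[-test, -D, drop = FALSE] / y[-test, D] )     # alr-transform the training response
      fit <- stats::ppr( x[-test, , drop = FALSE], ytr, nterms = nterms[k] )
      est <- predict( fit, x[test, , drop = FALSE] )             # predictions on the alr scale
      est <- cbind( exp(est), 1 )                                # inverse alr: append the divisor part...
      est <- est / rowSums(est)                                  # ...and close onto the simplex
      kls[i] <- kl_div( y[test, , drop = FALSE], est )
    }
    score[k] <- mean(kls)
  }
  list(nterms = nterms, kl = score, best = nterms[ which.min(score) ])
}

The real function additionally supports the isometric log-ratio transformation and, through the B argument, the bootstrap bias correction of Tsamardinos et al. (2018).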
Friedman, J. H. and Stuetzle, W. (1981). Projection pursuit regression. Journal of the American Statistical Association, 76, 817-823. doi: 10.2307/2287576.
Tsamardinos I., Greasidou E. and Borboudakis G. (2018). Bootstrapping the out-of-sample predictions for efficient and accurate cross-validation. Machine Learning 107(12): 1895-1922. https://link.springer.com/article/10.1007/s10994-018-5714-4
# NOT RUN {
y <- as.matrix(iris[, 1:3])
y <- y / rowSums(y)           # compositional response, zeros not allowed
x <- as.matrix(iris[, 4])     # continuous predictor
mod <- compppr.tune(y, x, nfolds = 5, nterms = 1:3, B = 1)   # B = 1: no bootstrap
# }