Evaluate all combinations of predictors during model training

Usage
bss(
predictors,
response,
method = "rf",
metric = ifelse(is.factor(response), "Accuracy", "RMSE"),
maximize = ifelse(metric == "RMSE", FALSE, TRUE),
globalval = FALSE,
trControl = caret::trainControl(),
tuneLength = 3,
tuneGrid = NULL,
seed = 100,
verbose = TRUE,
...
)
Value

A list of class train. Besides the usual train content, the object contains the vectors "selectedvars" and "selectedvars_perf", which give the best variables selected and their corresponding performance, as well as "perf_all", which gives the performance of all model runs.
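As a brief illustration, these elements can be inspected directly from the returned object (a sketch only; bssmodel is assumed to be a fitted bss object as in the example further below):

bssmodel$selectedvars       # names of the best predictor subset
bssmodel$selectedvars_perf  # performance of that subset
head(bssmodel$perf_all)     # performance of all tested combinations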
Arguments

predictors: see train
response: see train
method: see train
metric: see train
maximize: see train
globalval: Logical. Should models be evaluated based on 'global' performance? See global_validation
trControl: see train
tuneLength: see train
tuneGrid: see train
seed: A random number used as seed for reproducibility
verbose: Logical. Should information about the progress be printed?
...: arguments passed to the classification or regression routine (such as randomForest)
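For instance, arguments for the underlying model can be supplied through ... (a hypothetical call; ntree is a parameter of randomForest and is only meaningful here with method = "rf"):

bss(iris[,1:4], iris$Species, method = "rf", ntree = 300)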
Author

Hanna Meyer
Details

bss is an alternative to ffs and is ideal if the training set is small. Models are iteratively fitted using all possible combinations of the predictor variables; hence, 2^X models are calculated, where X is the number of predictors. Don't try running bss on very large datasets, because the computation time is much higher compared to ffs.
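To illustrate the exponential growth, a minimal sketch (not code from the package itself):

X <- 10                 # number of predictor variables
sum(choose(X, 0:X))     # number of possible predictor subsets: 2^10 = 1024
X <- 20
2^X                     # already 1,048,576 models to fit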
The internal cross-validation can be run in parallel. See the information on parallel processing of caret's train functions for details.
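A minimal sketch of registering a parallel backend before the call, assuming the doParallel package is installed:

library(doParallel)
cl <- makeCluster(2)      # two worker processes
registerDoParallel(cl)    # caret's train() picks up the registered backend for resampling
# ... call bss() here; the internal cross-validation folds are then fitted in parallel
stopCluster(cl)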
See Also

train, ffs, trainControl, CreateSpacetimeFolds, nndm
Examples

if (FALSE) {
data(iris)
# best subset selection with all four iris measurements as predictors
bssmodel <- bss(iris[,1:4], iris$Species)
# performance of all tested predictor combinations
bssmodel$perf_all
# visualize the selection results
plot(bssmodel)
}