tuneModel finds the hyperparameters from the set denoted by design of the machine learning algorithm learner that give the best performance with respect to the measure metric for the LLAMA model type llama.fun on data ldf. It uses a nested cross-validation internally; the number of inner folds is given through nfolds, and the number of outer folds is either determined by any existing partitions of ldf or, if none are present, by nfolds as well.
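For illustration, a call might look like the following sketch. It assumes the mlr package for constructing the learner and the satsolvers example data that ships with LLAMA; the particular parameter grid for the J48 learner is just an example.

    library(llama)
    library(mlr)

    data(satsolvers)

    # candidate hyperparameter settings for a J48 decision tree:
    # C (pruning confidence) and M (minimum instances per leaf)
    learner = makeLearner("classif.J48")
    design = expand.grid(C = c(0.1, 0.25, 0.5), M = c(2, 5, 10))

    # tune for the classify model type with respect to
    # misclassification penalties, using 10 folds
    res = tuneModel(satsolvers, classify, learner, design,
                    misclassificationPenalties, nfolds = 10L)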
During each iteration of the inner cross-validation, all parameter sets specified in design are evaluated and the one with the best performance value is chosen. The mean performance over all instances in the data is logged for each evaluation. This parameter set is then used to build and evaluate a model in the outer cross-validation. The predictions made by this model, along with the parameter values used to train it, are returned.
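A quick way to inspect what the nested cross-validation produced might be the following sketch; predictions is the usual LLAMA prediction field, while inner.parvals is an assumed name for the per-fold winning parameter settings.

    # outer cross-validation predictions, one block per outer fold
    head(res$predictions)

    # aggregate performance of the outer-CV predictions
    mean(misclassificationPenalties(satsolvers, res))

    # parameter set chosen in each inner cross-validation
    # (inner.parvals is an assumed field name)
    res$inner.parvals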
Finally, a normal (non-nested) cross-validation is performed to find the best parameter values on the entire data set. The predictor of this model, along with the parameter values used to train it, is returned. In this respect the interface corresponds to the normal LLAMA model-building functions -- the returned data structure is the same, with a few additional values.
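Because the returned structure mirrors that of the normal model-building functions, it can be used in the same way. In the sketch below, parvals is an assumed name for the added field holding the winning parameter values, and newdata stands for a hypothetical data frame with the same feature columns as the training data.

    # parameter values the final (non-nested) model was trained with
    # (parvals is an assumed field name)
    res$parvals

    # predict on new data with the final predictor, as with any
    # other LLAMA model (newdata is hypothetical)
    preds = res$predictor(newdata)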
The evaluation across the folds will be parallelized automatically if a suitable backend for parallel computation is loaded. The parallelMap level is "llama.tune".
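One way to enable this is to start a parallelMap backend before the call and restrict it to the tuning level, for example:

    library(parallelMap)

    # run the tuning folds on 4 local worker processes; setting the
    # level ensures only the tuning loop is parallelized
    parallelStartSocket(4, level = "llama.tune")
    res = tuneModel(satsolvers, classify, learner, design,
                    misclassificationPenalties)
    parallelStop()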