This creates a BenchmarkResult from a batchtools::ExperimentRegistry. To set up the benchmark, have a look at batchmark.
reduceBatchmarkResults(ids = NULL, keep.pred = TRUE,
  keep.extract = FALSE, show.info = getMlrOption("show.info"),
  reg = batchtools::getDefaultRegistry())

ids (data.frame or integer)
A base::data.frame (or data.table::data.table) with a column named "job.id". Alternatively, you may also pass a vector of integerish job ids. If not set, defaults to all successfully terminated jobs (the return value of batchtools::findDone).
keep.pred (logical(1))
Keep the prediction data in the pred slot of the result object. If you run many experiments (on larger data sets), these objects can unnecessarily increase object size and memory usage when you do not actually need them. Default is TRUE.
keep.extract (logical(1))
Keep the extract slot of the result object. When creating many benchmark results with extensive tuning, the resulting R objects can become very large. That is why the tuning results stored in the extract slot are removed by default (keep.extract = FALSE). Note that with keep.extract = FALSE you will not be able to conduct an analysis of the tuning results.
show.info (logical(1))
Print verbose output on console? Default is set via configureMlr.
reg (batchtools::ExperimentRegistry)
Registry, created by batchtools::makeExperimentRegistry. If not explicitly passed, uses the last created registry.
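A minimal sketch of the intended workflow, from registry creation to result reduction (the choice of learners and the 2-fold CV below are illustrative, not part of this function's documentation):

library(mlr)
library(batchtools)

# Create a temporary experiment registry (file.dir = NA uses a temp directory)
reg = makeExperimentRegistry(file.dir = NA, seed = 1)

# Define the benchmark jobs: two learners on one task with 2-fold CV
batchmark(
  learners = list(makeLearner("classif.rpart"), makeLearner("classif.lda")),
  tasks = iris.task,
  resamplings = makeResampleDesc("CV", iters = 2),
  reg = reg
)

# Execute all jobs and wait until they have terminated
submitJobs(reg = reg)
waitForJobs(reg = reg)

# Collect all finished jobs into a BenchmarkResult; drop the prediction
# data here to keep the object small
bmr = reduceBatchmarkResults(keep.pred = FALSE, reg = reg)

Note that with keep.pred = FALSE the result carries no prediction data, so downstream helpers that rely on it (e.g. getBMRPredictions) will have nothing to return.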
Other benchmark: BenchmarkResult, batchmark, benchmark, convertBMRToRankMatrix, friedmanPostHocTestBMR, friedmanTestBMR, generateCritDifferencesData, getBMRAggrPerformances, getBMRFeatSelResults, getBMRFilteredFeatures, getBMRLearnerIds, getBMRLearnerShortNames, getBMRLearners, getBMRMeasureIds, getBMRMeasures, getBMRModels, getBMRPerformances, getBMRPredictions, getBMRTaskDescs, getBMRTaskIds, getBMRTuneResults, plotBMRBoxplots, plotBMRRanksAsBarChart, plotBMRSummary, plotCritDifferences