Either a list of lists of ResamplePrediction objects, as returned by resample, or a single data.frame in which these objects are rbind-ed with the extra columns “task.id” and “learner.id”.
If predict.type is “prob”, the probabilities for each class are returned in addition to the response.
If keep.pred was set to FALSE in the call to benchmark, the function returns NULL.
getBMRPredictions(
bmr,
task.ids = NULL,
learner.ids = NULL,
as.df = FALSE,
drop = FALSE
)
(list | data.frame). See above.
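For illustration, a minimal sketch of retrieving predictions from a benchmark result (assuming the mlr package and its built-in iris.task; the learners and resampling scheme are arbitrary choices):

library(mlr)
lrns = list(makeLearner("classif.lda"), makeLearner("classif.rpart"))
rdesc = makeResampleDesc("CV", iters = 2)
bmr = benchmark(lrns, iris.task, rdesc, keep.pred = TRUE)
# Nested list; e.g. preds[["iris-example"]][["classif.lda"]] is a ResamplePrediction
preds = getBMRPredictions(bmr)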
bmr (BenchmarkResult)
Benchmark result.
task.ids (character)
Restrict result to certain tasks.
Default is all.
learner.ids (character)
Restrict result to certain learners.
Default is all.
as.df (logical(1))
Return one data.frame as result instead of a list of lists of objects?
Default is FALSE.
drop (logical(1))
If drop is FALSE (the default), a nested list with the following structure is returned:
res[task.ids][learner.ids].
If drop is set to TRUE, it is checked whether the list structure can be simplified.
If only one learner was passed, a list with one entry per task is returned.
If only one task was passed, the entries are named after the corresponding learners.
For an experiment with both a single task and a single learner, the whole list structure is removed.
Note that the name of the dropped task/learner is removed from the return object.
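Continuing the sketch above, the as.df and drop arguments change the shape of the result:

# One data.frame with extra columns "task.id" and "learner.id"
pred.df = getBMRPredictions(bmr, as.df = TRUE)
# Restrict to one learner; with drop = TRUE the learner level of the
# nested list is removed, leaving one (named) entry per task
pred.lda = getBMRPredictions(bmr, learner.ids = "classif.lda", drop = TRUE)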
Other benchmark: BenchmarkResult, batchmark(), benchmark(), convertBMRToRankMatrix(), friedmanPostHocTestBMR(), friedmanTestBMR(), generateCritDifferencesData(), getBMRAggrPerformances(), getBMRFeatSelResults(), getBMRFilteredFeatures(), getBMRLearnerIds(), getBMRLearnerShortNames(), getBMRLearners(), getBMRMeasureIds(), getBMRMeasures(), getBMRModels(), getBMRPerformances(), getBMRTaskDescs(), getBMRTaskIds(), getBMRTuneResults(), plotBMRBoxplots(), plotBMRRanksAsBarChart(), plotBMRSummary(), plotCritDifferences(), reduceBatchmarkResults()