Computes a matrix of the ranks of different learners (algorithms) across different tasks (datasets). Ranks are computed from aggregated measures. Smaller ranks indicate better methods: for measures that are minimized, small ranks correspond to small scores; for measures that are maximized, small ranks correspond to large scores.
convertBMRToRankMatrix(
bmr,
measure = NULL,
ties.method = "average",
aggregation = "default"
)
bmr: (BenchmarkResult) Benchmark result.
measure: (Measure) Performance measure. Default is the first measure used in the benchmark experiment.
ties.method: (character(1)) How ties are handled when ranking; see base::rank for details (a small illustration follows this argument list). Default is "average".
aggregation: (character(1)) "mean" or "default". See getBMRAggrPerformances for details on "default". Default is "default".
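To illustrate the effect of ties.method, here is a small base-R-only sketch with made-up aggregated scores (the values are illustrative, not taken from this page):

# Two learners tied on the best (smallest) aggregated error share a rank
rank(c(0.10, 0.10, 0.30), ties.method = "average")  # 1.5 1.5 3.0
rank(c(0.10, 0.10, 0.30), ties.method = "min")      # 1 1 3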
(matrix) A matrix with measure ranks as entries: one row for each learner and one column for each task.
Other benchmark: BenchmarkResult, batchmark(), benchmark(), friedmanPostHocTestBMR(), friedmanTestBMR(), generateCritDifferencesData(), getBMRAggrPerformances(), getBMRFeatSelResults(), getBMRFilteredFeatures(), getBMRLearnerIds(), getBMRLearnerShortNames(), getBMRLearners(), getBMRMeasureIds(), getBMRMeasures(), getBMRModels(), getBMRPerformances(), getBMRPredictions(), getBMRTaskDescs(), getBMRTaskIds(), getBMRTuneResults(), plotBMRBoxplots(), plotBMRRanksAsBarChart(), plotBMRSummary(), plotCritDifferences(), reduceBatchmarkResults()
# see benchmark
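A minimal sketch of a complete workflow, assuming the mlr package is attached; the learners, tasks, resampling scheme, and measure below are illustrative choices, not prescribed by this page:

library(mlr)

# Two illustrative learners on two of mlr's bundled example tasks
lrns = list(makeLearner("classif.rpart"), makeLearner("classif.lda"))
tasks = list(iris.task, sonar.task)
rdesc = makeResampleDesc("CV", iters = 3L)

bmr = benchmark(lrns, tasks, rdesc, measures = mmce, show.info = FALSE)

# One row per learner, one column per task; mmce is minimized,
# so rank 1 marks the learner with the smallest aggregated error.
convertBMRToRankMatrix(bmr, measure = mmce, ties.method = "average")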