Creates a scatter plot in which each horizontal line refers to a task and shows the aggregated scores of all learners on that task. Optionally, you can apply a rank transformation or use one of ggplot2's scale transformations such as ggplot2::scale_x_log10.
plotBMRSummary(
bmr,
measure = NULL,
trafo = "none",
order.tsks = NULL,
pointsize = 4L,
jitter = 0.05,
pretty.names = TRUE
)
Returns a ggplot2 plot object.
bmr (BenchmarkResult)
Benchmark result.
measure (Measure)
Performance measure.
Default is the first measure used in the benchmark experiment.
trafo (character(1))
Currently either “none” or “rank”, the latter performing a rank transformation
(with average handling of ties) of the scores per task.
NB: You can always add ggplot2::scale_x_log10 to the result to put scores on a log scale.
Default is “none”.
order.tsks (character(n.tasks))
Character vector with task.ids in new order.
pointsize (numeric(1))
Point size for ggplot2::geom_point for data points.
Default is 4.
jitter (numeric(1))
Small vertical jitter to deal with overplotting in case of equal scores.
Default is 0.05.
pretty.names (logical(1))
Whether to use the short name of the learner instead of its ID in labels. Defaults to TRUE.
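A minimal usage sketch, assuming the mlr package is loaded and using its built-in example tasks iris.task and sonar.task; the learner and resampling choices here are illustrative only:

library(mlr)

# Benchmark two illustrative learners on two example tasks with 3-fold CV
lrns = list(makeLearner("classif.lda"), makeLearner("classif.rpart"))
tasks = list(iris.task, sonar.task)
rdesc = makeResampleDesc("CV", iters = 3L)
bmr = benchmark(lrns, tasks, rdesc, measures = acc)

# Default scatter plot of aggregated scores per task
plotBMRSummary(bmr)

# Rank-transformed scores with larger jitter to reduce overplotting
plotBMRSummary(bmr, trafo = "rank", jitter = 0.1)

# Since a ggplot2 object is returned, scale transformations can be added,
# e.g. putting the scores on a log scale as noted above
plotBMRSummary(bmr) + ggplot2::scale_x_log10()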
Other benchmark:
BenchmarkResult, batchmark(), benchmark(), convertBMRToRankMatrix(), friedmanPostHocTestBMR(), friedmanTestBMR(), generateCritDifferencesData(), getBMRAggrPerformances(), getBMRFeatSelResults(), getBMRFilteredFeatures(), getBMRLearnerIds(), getBMRLearnerShortNames(), getBMRLearners(), getBMRMeasureIds(), getBMRMeasures(), getBMRModels(), getBMRPerformances(), getBMRPredictions(), getBMRTaskDescs(), getBMRTaskIds(), getBMRTuneResults(), plotBMRBoxplots(), plotBMRRanksAsBarChart(), plotCritDifferences(), reduceBatchmarkResults()
Other plot:
createSpatialResamplingPlots(), plotBMRBoxplots(), plotBMRRanksAsBarChart(), plotCalibration(), plotCritDifferences(), plotLearningCurve(), plotPartialDependence(), plotROCCurves(), plotResiduals(), plotThreshVsPerf()