generateThreshVsPerfData(obj, measures, gridsize = 100L, aggregate = TRUE,
task.id = NULL)
obj [(list of) Prediction | (list of) ResampleResult | BenchmarkResult]
  Single prediction object, a list of prediction objects, a single resample result, a list of resample results, or a benchmark result. If you pass a list, e.g. of results produced by different learners that you want to compare, name the list elements with the names you want to see in the plots, typically learner short names or ids.

measures [Measure | list of Measure]
  Performance measure(s) to evaluate.
  Default is the default measure for the task, see getDefaultMeasure.

gridsize [integer(1)]
  Grid resolution for the x-axis (threshold).
  Default is 100.

aggregate [logical(1)]
  Whether to aggregate ResamplePredictions or to plot the performance of each resampling iteration separately.
  Default is TRUE.

task.id [character(1)]
  Selected task in a BenchmarkResult to do plots for; ignored otherwise.
  Default is the first task.

Value [ThreshVsPerfData]
  A named list containing the measured performance across the threshold grid, the measures, and whether the performance estimates were aggregated (only applicable for (list of) ResampleResults).
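For illustration, a minimal usage sketch (assuming the mlr package is attached; the built-in sonar.task, the classif.lda learner, and the fpr/tpr/mmce measures are illustrative choices, not required by this function):

library(mlr)

# Probabilistic predictions are required for threshold analysis
lrn = makeLearner("classif.lda", predict.type = "prob")
rdesc = makeResampleDesc("CV", iters = 3)
res = resample(lrn, sonar.task, rdesc, measures = list(fpr, tpr, mmce))

# 100 threshold values by default; set aggregate = FALSE to keep
# one performance curve per resampling iteration
d = generateThreshVsPerfData(res, measures = list(fpr, tpr, mmce))
head(d$data)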
Other generate_plot_data: generateCalibrationData, generateCritDifferencesData, generateFeatureImportanceData, generateFilterValuesData, generateFunctionalANOVAData, generateLearningCurveData, generatePartialDependenceData, getFilterValues, plotFilterValues
Other thresh_vs_perf: plotROCCurves, plotThreshVsPerfGGVIS, plotThreshVsPerf
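Continuing the sketch above, the same ThreshVsPerfData object can be handed to the plotting functions from this family:

plotThreshVsPerf(d)   # one panel per measure, threshold on the x-axis
plotROCCurves(d)      # expects fpr and tpr (or similar) among the measures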