Generates data on threshold vs. performance(s) for 2-class classification that can be used for plotting.
generateThreshVsPerfData(obj, measures, gridsize = 100L,
aggregate = TRUE, task.id = NULL)
obj: (list of Prediction | list of ResampleResult | BenchmarkResult) Single prediction object, list of them, single resample result, list of them, or a benchmark result. If you pass a list produced by different learners that you want to compare, name the list elements with the names you want to see in the plots, e.g. learner short names or ids.
measures: (Measure | list of Measure) Performance measure(s) to evaluate. Default is the default measure for the task, see getDefaultMeasure.
gridsize: (integer(1)) Grid resolution for the x-axis (threshold). Default is 100.
aggregate: (logical(1)) Whether to aggregate ResamplePredictions or to plot the performance of each iteration separately. Default is TRUE.
task.id: (character(1)) Selected task in a BenchmarkResult to generate plot data for, ignored otherwise. Default is the first task.
Value: (ThreshVsPerfData). A named list containing the measured performance across the threshold grid, the measures, and whether the performance estimates were aggregated (only applicable to (list of) ResampleResults).
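A minimal usage sketch, assuming the mlr package and its bundled two-class sonar.task; the learner choice classif.lda is illustrative, any classifier with probability predictions works:

library(mlr)

# Train a learner that predicts probabilities, which threshold data requires
lrn = makeLearner("classif.lda", predict.type = "prob")
mod = train(lrn, sonar.task)
pred = predict(mod, task = sonar.task)

# Evaluate false and true positive rate across 50 threshold values, then plot
d = generateThreshVsPerfData(pred, measures = list(fpr, tpr), gridsize = 50L)
plotThreshVsPerf(d)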
Other generate_plot_data: generateCalibrationData, generateCritDifferencesData, generateFeatureImportanceData, generateFilterValuesData, generateLearningCurveData, generatePartialDependenceData, plotFilterValues
Other thresh_vs_perf: plotROCCurves, plotThreshVsPerf
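A second sketch, under the same assumptions, for a ResampleResult; setting aggregate = FALSE draws the performance of each resampling iteration as a separate curve:

library(mlr)

# Cross-validate a probability-predicting learner on the bundled Sonar task
lrn = makeLearner("classif.lda", predict.type = "prob")
rdesc = makeResampleDesc("CV", iters = 5L)
res = resample(lrn, sonar.task, rdesc)

# One threshold-vs-performance curve per CV fold instead of the aggregate
d = generateThreshVsPerfData(res, measures = list(fpr, tpr), aggregate = FALSE)
plotThreshVsPerf(d)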