Usage

generateCalibrationData(obj, breaks = "Sturges", groups = NULL,
  task.id = NULL)

Arguments

obj [(list of) Prediction | (list of) ResampleResult | BenchmarkResult]
  A single prediction object, a list of prediction objects, a single
  resample result, a list of resample results, or a benchmark result.
  If you pass a list, typically produced by different learners you want
  to compare, name the list elements with the names you want to see in
  the plots, e.g. learner short names or ids.

breaks [character(1) | numeric]
  If character(1), the algorithm to use in generating probability bins;
  see hist for details.
  If numeric, the cut points for the bins.
  Default is "Sturges".

groups [integer(1)]
  The number of bins to construct.
  If specified, breaks is ignored.
  Default is NULL.

task.id [character(1)]
  The task in a BenchmarkResult to produce plots for; ignored otherwise.
  Default is the first task.
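
A minimal sketch of the argument combinations, assuming pred is a
Prediction object from a learner trained with predict.type = "prob":

  # breaks as character(1): name of a binning algorithm, as in hist()
  generateCalibrationData(pred, breaks = "Sturges")

  # breaks as numeric: explicit cut points for the probability bins
  generateCalibrationData(pred, breaks = seq(0, 1, by = 0.1))

  # groups given: construct 5 bins; breaks is ignored
  generateCalibrationData(pred, groups = 5)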
Value

A list containing:

proportion [data.frame] with columns:
  Learner      Name of the learner.
  bin          Bins calculated according to the breaks or groups argument.
  Class        Class labels (for binary classification, only the positive class).
  Proportion   Proportion of observations from class Class among all
               observations with a posterior probability for class Class
               within the interval given in bin.
data [data.frame] with columns:
  Learner      Name of the learner.
  truth        True class label.
  Class        Class labels (for binary classification, only the positive class).
  Probability  Predicted posterior probability of Class.
  bin          Bin corresponding to Probability.
task [TaskDesc]
  Task description.
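
As an illustrative sketch (the element names follow the listing above),
the pieces of the result can be inspected directly:

  res = generateCalibrationData(pred)
  head(res$proportion)  # per-bin observed class proportions per learner
  head(res$data)        # per-observation probabilities and assigned bins
  res$task              # description of the underlying task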
See Also

Other generate_plot_data: generateCritDifferencesData,
generateFeatureImportanceData,
generateFilterValuesData,
generateFunctionalANOVAData,
generateLearningCurveData,
generatePartialDependenceData,
generateThreshVsPerfData,
getFilterValues,
plotFilterValues

Other calibration: plotCalibration
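
Examples

A runnable sketch, assuming the mlr package with its bundled sonar.task;
the rpart learner is chosen purely for illustration:

  library(mlr)

  # Calibration data requires probability predictions
  lrn = makeLearner("classif.rpart", predict.type = "prob")
  mod = train(lrn, sonar.task)
  pred = predict(mod, task = sonar.task)

  # Bin predicted probabilities and compute observed proportions
  res = generateCalibrationData(pred, breaks = "Sturges")

  # Companion plotting function from the same family
  plotCalibration(res)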