mlr (version 2.10)

calculateROCMeasures: Calculate receiver operating characteristic (ROC) measures.

Description

Calculate the relative number of correct/incorrect classifications and the following evaluation measures:
  • tpr True positive rate (Sensitivity, Recall)
  • fpr False positive rate (Fall-out)
  • fnr False negative rate (Miss rate)
  • tnr True negative rate (Specificity)
  • ppv Positive predictive value (Precision)
  • for False omission rate
  • lrp Positive likelihood ratio (LR+)
  • fdr False discovery rate
  • npv Negative predictive value
  • acc Accuracy
  • lrm Negative likelihood ratio (LR-)
  • dor Diagnostic odds ratio
For details on these measures see measures and https://en.wikipedia.org/wiki/Receiver_operating_characteristic. Note that the element for the false omission rate in the resulting object is not called for but fomr, because for is a reserved word in R and should not be used as a name.
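All of the rates above are simple functions of the four confusion-matrix counts. As a rough illustration only (not part of the mlr API), the sketch below computes a few of them from hypothetical counts TP, FP, TN and FN:

TP = 40; FP = 10; TN = 45; FN = 5       # hypothetical confusion-matrix counts
tpr = TP / (TP + FN)                    # true positive rate (sensitivity, recall)
fpr = FP / (FP + TN)                    # false positive rate (fall-out)
ppv = TP / (TP + FP)                    # positive predictive value (precision)
fomr = FN / (FN + TN)                   # false omission rate (stored as fomr, see note above)
acc = (TP + TN) / (TP + FP + TN + FN)   # accuracy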

Usage

calculateROCMeasures(pred)

# S3 method for ROCMeasures
print(x, abbreviations = TRUE, digits = 2, ...)

Arguments

pred
[Prediction] Prediction object.
x
[ROCMeasures] Created by calculateROCMeasures.
abbreviations
[logical(1)] If TRUE, a short paragraph explaining the measure abbreviations is printed in addition. Default is TRUE.
digits
[integer(1)] Number of digits the measures are rounded to.
...
[any] Currently not used.

Value

[ROCMeasures]. A list with two elements: confusion.matrix, the 2 x 2 confusion matrix of relative frequencies, and measures, a list of the measures listed above.
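A minimal sketch of accessing the returned structure, reusing the Sonar example from below; the name res is chosen here purely for illustration:

library(mlr)
lrn = makeLearner("classif.rpart", predict.type = "prob")
pred = predict(train(lrn, sonar.task), task = sonar.task)
res = calculateROCMeasures(pred)
res$confusion.matrix    # 2 x 2 confusion matrix of relative frequencies
res$measures$tpr        # true positive rate
res$measures$fomr       # false omission rate (element is fomr, not for)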

Methods (by generic)

  • print: Prints the confusion matrix together with the computed measures.

See Also

Other roc: asROCRPrediction, plotViperCharts

Other performance: ConfusionMatrix, calculateConfusionMatrix, estimateRelativeOverfitting, makeCostMeasure, makeCustomResampledMeasure, makeMeasure, measures, performance

Examples

library(mlr)
# Train a decision tree with probability predictions on the Sonar task
lrn = makeLearner("classif.rpart", predict.type = "prob")
fit = train(lrn, sonar.task)
pred = predict(fit, task = sonar.task)
# Compute the confusion matrix and ROC measures
calculateROCMeasures(pred)
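The print arguments documented above can be used to adjust the output. A small sketch, assuming the result of the example is stored first:

r = calculateROCMeasures(pred)
# Suppress the abbreviation legend and round the measures to 3 digits
print(r, abbreviations = FALSE, digits = 3)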
