Description

Measure to compare true observed labels with predicted labels in binary classification tasks.
Usage

ppv(truth, response, positive, na_value = NaN, ...)

precision(truth, response, positive, na_value = NaN, ...)
Value

Performance value as numeric(1).
Arguments

truth
(factor()) True (observed) labels. Must have exactly the same two levels and the same length as response.

response
(factor()) Predicted response labels. Must have exactly the same two levels and the same length as truth.

positive
(character(1)) Name of the positive class.

na_value
(numeric(1)) Value that should be returned if the measure is not defined for the input (as described in the note; see the sketch after this list). Default is NaN.

...
(any) Additional arguments. Currently ignored.
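To illustrate na_value, here is a minimal sketch, assuming these functions come from the mlr3measures package as the usage above suggests: when the positive class is never predicted, TP + FP = 0 and the measure is undefined, so na_value is returned.

library(mlr3measures)

# "a" is never predicted, so TP + FP = 0 and the measure is undefined
truth = factor(c("a", "b", "a"), levels = c("a", "b"))
response = factor(c("b", "b", "b"), levels = c("a", "b"))

ppv(truth, response, positive = "a")                # NaN (the default na_value)
ppv(truth, response, positive = "a", na_value = 0)  # returns 0 instead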
Meta Information

Type: "binary"
Range: [0, 1]
Minimize: FALSE
Required prediction: response
Details

The Positive Predictive Value is defined as $$ \mathrm{PPV} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}}. $$ Also known as "precision".

Note

This measure is undefined if TP + FP = 0, in which case na_value is returned.
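As a worked check of the definition, TP and FP can be counted directly and compared against ppv(); this is a minimal sketch, again assuming the mlr3measures implementation.

library(mlr3measures)

truth = factor(c("a", "a", "b", "b"), levels = c("a", "b"))
response = factor(c("a", "b", "a", "b"), levels = c("a", "b"))

# count true positives and false positives for positive class "a"
tp = sum(truth == "a" & response == "a")  # 1
fp = sum(truth != "a" & response == "a")  # 1

tp / (tp + fp)                        # manual TP / (TP + FP): 0.5
ppv(truth, response, positive = "a")  # same value: 0.5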
References

https://en.wikipedia.org/wiki/Template:DiagnosticTesting_Diagram

Goutte C, Gaussier E (2005). "A Probabilistic Interpretation of Precision, Recall and F-Score, with Implication for Evaluation." In Lecture Notes in Computer Science, 345-359. doi:10.1007/978-3-540-31865-1_25.
See Also

Other Binary Classification Measures: auc(), bbrier(), dor(), fbeta(), fdr(), fn(), fnr(), fomr(), fp(), fpr(), gmean(), gpr(), npv(), prauc(), tn(), tnr(), tp(), tpr()
Examples

library(mlr3measures)

set.seed(1)
lvls = c("a", "b")
truth = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
response = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
ppv(truth, response, positive = "a")
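Since the usage section lists precision() alongside ppv(), the alias can be confirmed on the same data; the two should return the same value.

precision(truth, response, positive = "a")  # same result as ppv() above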