mlr3measures (version 0.3.0)

fbeta: F-beta Score

Description

Binary classification measure defined with \(P\) as precision() and \(R\) as recall() as $$ (1 + \beta^2) \frac{P \cdot R}{(\beta^2 P) + R}. $$ It measures the effectiveness of retrieval with respect to a user who attaches \(\beta\) times as much importance to recall as to precision. For \(\beta = 1\), this measure is called the "F1" score.
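
The value can be reproduced from precision() and recall() by plugging both into the definition above; a short sketch with simulated labels:

library(mlr3measures)

set.seed(42)
lvls = c("a", "b")
truth = factor(sample(lvls, 20, replace = TRUE), levels = lvls)
response = factor(sample(lvls, 20, replace = TRUE), levels = lvls)

P = precision(truth, response, positive = "a")
R = recall(truth, response, positive = "a")
beta = 2

# F-beta computed directly from the definition ...
(1 + beta^2) * (P * R) / (beta^2 * P + R)

# ... matches the measure itself
fbeta(truth, response, positive = "a", beta = 2)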

Usage

fbeta(truth, response, positive, beta = 1, na_value = NaN, ...)

Arguments

truth

:: factor() True (observed) labels. Must have exactly the same two levels and the same length as response.

response

:: factor() Predicted response labels. Must have exactly the same two levels and the same length as truth.

positive

:: character(1) Name of the positive class.

beta

:: numeric(1) Parameter to give either precision or recall more weight. Default is 1, resulting in balanced weights (see the short sketch following this argument list).

na_value

:: numeric(1) Value that should be returned if the measure is not defined for the input. Default is NaN.

...

:: any Additional arguments. Currently ignored.
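
A short sketch with simulated labels illustrating how beta shifts the weight between precision and recall:

library(mlr3measures)

set.seed(2)
lvls = c("a", "b")
truth = factor(sample(lvls, 50, replace = TRUE), levels = lvls)
response = factor(sample(lvls, 50, replace = TRUE), levels = lvls)

fbeta(truth, response, positive = "a", beta = 0.5)  # weights precision more
fbeta(truth, response, positive = "a", beta = 1)    # balanced, the F1 score
fbeta(truth, response, positive = "a", beta = 2)    # weights recall more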

Value

Performance value as numeric(1).

Meta Information

  • Type: "binary"

  • Range: \([0, 1]\)

  • Minimize: FALSE

  • Required prediction: response

References

Van Rijsbergen CJ (1979). Information Retrieval, 2nd edition. Butterworth-Heinemann, Newton, MA, USA. ISBN 0408709294.

Sasaki Y, others (2007). “The truth of the F-measure.” Teach Tutor Mater, 1(5), 1--5. https://www.cs.odu.edu/~mukka/cs795sum10dm/Lecturenotes/Day3/F-measure-YS-26Oct07.pdf.

See Also

Other Binary Classification Measures: auc(), bbrier(), dor(), fdr(), fnr(), fn(), fomr(), fpr(), fp(), mcc(), npv(), ppv(), prauc(), tnr(), tn(), tpr(), tp()

Examples

set.seed(1)
lvls = c("a", "b")
truth = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
response = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
fbeta(truth, response, positive = "a")
