
yardstick (version 1.1.0)

metrics: General Function to Estimate Performance

Description

This function estimates one or more common performance metrics depending on the class of truth (see Value below) and returns them in a three-column tibble.

Usage

metrics(data, ...)

# S3 method for data.frame
metrics(data, truth, estimate, ..., na_rm = TRUE, options = list())

Value

A three-column tibble.

  • When truth is a factor, there are rows for accuracy() and the Kappa statistic (kap()).

  • When truth has two levels and one column of class probabilities is passed to ..., there are rows for the two-class versions of mn_log_loss() and roc_auc().

  • When truth has more than two levels and a full set of class probabilities is passed to ..., there are rows for the multiclass version of mn_log_loss() and the Hand-Till generalization of roc_auc().

  • When truth is numeric, there are rows for rmse(), rsq(), and mae().
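In every case the returned tibble has the columns .metric, .estimator, and .estimate, one row per metric. A minimal sketch of working with that structure, using the example datasets shipped with yardstick:

library(yardstick)
library(dplyr)

# Factor truth: rows for accuracy and kap in a .metric/.estimator/.estimate tibble
res <- metrics(two_class_example, truth = truth, estimate = predicted)

# Pull out a single value by filtering on the .metric column
res %>% filter(.metric == "accuracy") %>% pull(.estimate)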

Arguments

data

A data.frame containing the columns specified by truth, estimate, and ....

...

A set of unquoted column names or one or more dplyr selector functions to choose which variables contain the class probabilities. If truth is binary, only one column should be selected. Otherwise, there should be as many columns as factor levels of truth.

truth

The column identifier for the true results (that is numeric or factor). This should be an unquoted column name, although this argument is passed by expression and supports quasiquotation (you can unquote column names).

estimate

The column identifier for the predicted results (that is also numeric or factor). As with truth, this can be specified in different ways, but the primary method is to use an unquoted variable name.

na_rm

A logical value indicating whether NA values should be stripped before the computation proceeds.

options

[deprecated]

No longer supported as of yardstick 1.0.0. If you pass something here it will be ignored with a warning.

Previously, these were options passed on to pROC::roc(). If you need support for this, use the pROC package directly.
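For example, a hedged sketch of computing the AUC with pROC directly; here smooth = TRUE simply stands in for whatever option you previously forwarded:

library(pROC)
library(yardstick)  # only for the two_class_example data

# Build the ROC curve yourself so pROC-specific options still apply;
# smooth = TRUE is an illustrative option, not a recommendation
roc_obj <- roc(two_class_example$truth, two_class_example$Class1, smooth = TRUE)
auc(roc_obj)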
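More generally, because truth, estimate, and the probability columns are passed by expression, the columns can be chosen programmatically. A minimal sketch, assuming the rlang package for the unquoting:

library(yardstick)
library(rlang)

# A column name held in a variable; !! unquotes it so metrics()
# sees an ordinary column reference
truth_col <- sym("truth")
metrics(two_class_example, truth = !!truth_col, estimate = predicted, Class1)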

See Also

metric_set()
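metrics() always computes the same bundle for a given type of truth; metric_set() lets you assemble your own. A brief sketch:

library(yardstick)

# Build a custom metric function instead of the fixed bundle in metrics()
reg_metrics <- metric_set(rmse, rsq, mae)
reg_metrics(solubility_test, truth = solubility, estimate = prediction)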

Examples

library(yardstick)

# Accuracy and kappa
metrics(two_class_example, truth, predicted)

# Add on multinomial log loss and ROC AUC by specifying class prob columns
metrics(two_class_example, truth, predicted, Class1)

# Regression metrics
metrics(solubility_test, truth = solubility, estimate = prediction)

# Multiclass metrics work, but you cannot specify any averaging
# for roc_auc() besides the default, hand_till. Use the specific function
# if you need more customization
library(dplyr)

hpc_cv %>%
  group_by(Resample) %>%
  metrics(obs, pred, VF:L) %>%
  print(n = 40)
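
# For more control, call the specific function directly; for example,
# roc_auc() accepts other multiclass estimators such as "macro_weighted"
hpc_cv %>%
  group_by(Resample) %>%
  roc_auc(obs, VF:L, estimator = "macro_weighted")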
