Formula:
b2 <- beta^2
f_beta_score <- (1 + b2) * (precision * recall) / (precision * b2 + recall)
This is the weighted harmonic mean of precision and recall.
Its output range is [0, 1]. It works for both multi-class
and multi-label classification.
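As a quick check, the formula can be evaluated directly in plain R; the
precision and recall values below are made up for illustration. With
beta = 1 the score reduces to the ordinary harmonic mean:

precision <- 0.75
recall <- 0.6
(1 + 1^2) * (precision * recall) / (precision * 1^2 + recall)  # 0.6667
2 / (1 / precision + 1 / recall)                               # same value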
metric_fbeta_score(
...,
average = NULL,
beta = 1,
threshold = NULL,
name = "fbeta_score",
dtype = NULL
)
A Metric instance is returned. The Metric instance can be passed
directly to compile(metrics = ), or used as a standalone object. See
?Metric for example usage.
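For instance, a minimal sketch of passing the metric to compile(); the
model architecture, loss, and optimizer here are placeholders for
illustration only:

library(keras3)

# Hypothetical 3-label model; only the metrics argument matters here.
model <- keras_model_sequential(input_shape = 8) |>
  layer_dense(units = 16, activation = "relu") |>
  layer_dense(units = 3, activation = "sigmoid")

model |> compile(
  optimizer = "adam",
  loss = "binary_crossentropy",
  metrics = list(metric_fbeta_score(beta = 2, threshold = 0.5, average = "macro"))
)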
For forward/backward compatibility.
Type of averaging to be performed across per-class results
in the multi-class case. Acceptable values are NULL, "micro",
"macro" and "weighted". Defaults to NULL.

If NULL, no averaging is performed and result() returns
the score for each class.

If "micro", compute metrics globally by counting the total
true positives, false negatives and false positives.

If "macro", compute metrics for each label and return their
unweighted mean. This does not take label imbalance into account.

If "weighted", compute metrics for each label and return their
average weighted by support (the number of true instances for
each label). This alters "macro" to account for label imbalance
and can result in a score that is not between precision and recall.
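As an illustration of the difference between NULL and "macro", here is a
sketch reusing the data from the example further below; the macro value
is simply the unweighted mean of the three per-class scores (roughly 0.709):

library(keras3)

y_true <- rbind(c(1, 1, 1), c(1, 0, 0), c(1, 1, 0))
y_pred <- rbind(c(0.2, 0.6, 0.7), c(0.2, 0.6, 0.6), c(0.6, 0.8, 0.0))

# average = NULL (default): one F2 score per class
m_per_class <- metric_fbeta_score(beta = 2, threshold = 0.5)
m_per_class$update_state(y_true, y_pred)
m_per_class$result()

# average = "macro": unweighted mean of the per-class scores (a scalar)
m_macro <- metric_fbeta_score(beta = 2, threshold = 0.5, average = "macro")
m_macro$update_state(y_true, y_pred)
m_macro$result()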
Determines the weight given to recall in the harmonic mean between
precision and recall (see the pseudocode equation above). Defaults to 1.
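To see how beta shifts the balance, the formula can be evaluated
directly for a fixed, made-up precision/recall pair:

fbeta <- function(precision, recall, beta) {
  b2 <- beta^2
  (1 + b2) * (precision * recall) / (precision * b2 + recall)
}
fbeta(0.9, 0.5, beta = 0.5)  # ~0.78, pulled toward precision
fbeta(0.9, 0.5, beta = 1)    # ~0.64, the balanced F1 score
fbeta(0.9, 0.5, beta = 2)    # ~0.55, pulled toward recall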
Elements of y_pred greater than threshold are converted to 1,
and the rest to 0. If threshold is NULL, the argmax of y_pred is
converted to 1, and the rest to 0.
Optional. String name of the metric instance.
Optional. Data type of the metric result.
metric <- metric_fbeta_score(beta = 2.0, threshold = 0.5)
y_true <- rbind(c(1, 1, 1),
                c(1, 0, 0),
                c(1, 1, 0))
y_pred <- rbind(c(0.2, 0.6, 0.7),
                c(0.2, 0.6, 0.6),
                c(0.6, 0.8, 0.0))
metric$update_state(y_true, y_pred)
metric$result()
## tf.Tensor([0.3846154 0.90909094 0.8333332 ], shape=(3), dtype=float32)
F-Beta Score: float.
Other f score metrics:
metric_f1_score()
Other metrics:
Metric()
custom_metric()
metric_auc()
metric_binary_accuracy()
metric_binary_crossentropy()
metric_binary_focal_crossentropy()
metric_binary_iou()
metric_categorical_accuracy()
metric_categorical_crossentropy()
metric_categorical_focal_crossentropy()
metric_categorical_hinge()
metric_concordance_correlation()
metric_cosine_similarity()
metric_f1_score()
metric_false_negatives()
metric_false_positives()
metric_hinge()
metric_huber()
metric_iou()
metric_kl_divergence()
metric_log_cosh()
metric_log_cosh_error()
metric_mean()
metric_mean_absolute_error()
metric_mean_absolute_percentage_error()
metric_mean_iou()
metric_mean_squared_error()
metric_mean_squared_logarithmic_error()
metric_mean_wrapper()
metric_one_hot_iou()
metric_one_hot_mean_iou()
metric_pearson_correlation()
metric_poisson()
metric_precision()
metric_precision_at_recall()
metric_r2_score()
metric_recall()
metric_recall_at_precision()
metric_root_mean_squared_error()
metric_sensitivity_at_specificity()
metric_sparse_categorical_accuracy()
metric_sparse_categorical_crossentropy()
metric_sparse_top_k_categorical_accuracy()
metric_specificity_at_sensitivity()
metric_squared_hinge()
metric_sum()
metric_top_k_categorical_accuracy()
metric_true_negatives()
metric_true_positives()