Calculates the number of true positives. If sample_weight is given, calculates the sum of the weights of true positives. This metric creates one local variable, true_positives, that is used to keep track of the number of true positives. If sample_weight is NULL, weights default to 1. Use sample_weight of 0 to mask values.
metric_true_positives(..., thresholds = NULL, name = NULL, dtype = NULL)
A Metric instance is returned. The Metric instance can be passed directly to compile(metrics = ), or used as a standalone object. See ?Metric for example usage.
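For instance, the metric can be supplied when compiling a model (a sketch; the model, optimizer, and loss shown here are placeholders, not part of this help page):

model <- keras_model_sequential(input_shape = 4) |>
  layer_dense(1, activation = "sigmoid")
model |> compile(
  optimizer = "adam",
  loss = "binary_crossentropy",
  metrics = list(metric_true_positives())
)

The metric's running true-positive count is then reported alongside the loss during fit() and evaluate().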
For forward/backward compatibility.
(Optional) Defaults to 0.5. A float value, or a Python list of float threshold values in [0, 1]. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is TRUE, below is FALSE). If used with a loss function that sets from_logits = TRUE (i.e., no sigmoid applied to predictions), thresholds should be set to 0. One metric value is generated for each threshold value.
(Optional) string name of the metric instance.
(Optional) data type of the metric result.
Standalone usage:
m <- metric_true_positives()
m$update_state(c(0, 1, 1, 1), c(1, 0, 1, 1))
m$result()
## tf.Tensor(2.0, shape=(), dtype=float32)
m$reset_state()
m$update_state(c(0, 1, 1, 1), c(1, 0, 1, 1), sample_weight = c(0, 0, 1, 0))
m$result()
## tf.Tensor(1.0, shape=(), dtype=float32)
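Usage with a vector of thresholds, which tracks one true-positive count per threshold (a sketch; the prediction values below are illustrative):

m <- metric_true_positives(thresholds = c(0.3, 0.7))
m$update_state(c(0, 1, 1, 1), c(0.2, 0.6, 0.8, 0.9))
m$result()
## tf.Tensor([3. 2.], shape=(2,), dtype=float32)

At threshold 0.3, three positive labels receive predictions above the threshold; at the stricter 0.7 threshold, only two do.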
Other confusion metrics:
metric_auc()
metric_false_negatives()
metric_false_positives()
metric_precision()
metric_precision_at_recall()
metric_recall()
metric_recall_at_precision()
metric_sensitivity_at_specificity()
metric_specificity_at_sensitivity()
metric_true_negatives()
Other metrics:
Metric()
custom_metric()
metric_auc()
metric_binary_accuracy()
metric_binary_crossentropy()
metric_binary_focal_crossentropy()
metric_binary_iou()
metric_categorical_accuracy()
metric_categorical_crossentropy()
metric_categorical_focal_crossentropy()
metric_categorical_hinge()
metric_concordance_correlation()
metric_cosine_similarity()
metric_f1_score()
metric_false_negatives()
metric_false_positives()
metric_fbeta_score()
metric_hinge()
metric_huber()
metric_iou()
metric_kl_divergence()
metric_log_cosh()
metric_log_cosh_error()
metric_mean()
metric_mean_absolute_error()
metric_mean_absolute_percentage_error()
metric_mean_iou()
metric_mean_squared_error()
metric_mean_squared_logarithmic_error()
metric_mean_wrapper()
metric_one_hot_iou()
metric_one_hot_mean_iou()
metric_pearson_correlation()
metric_poisson()
metric_precision()
metric_precision_at_recall()
metric_r2_score()
metric_recall()
metric_recall_at_precision()
metric_root_mean_squared_error()
metric_sensitivity_at_specificity()
metric_sparse_categorical_accuracy()
metric_sparse_categorical_crossentropy()
metric_sparse_top_k_categorical_accuracy()
metric_specificity_at_sensitivity()
metric_squared_hinge()
metric_sum()
metric_top_k_categorical_accuracy()
metric_true_negatives()