Formula:
sum_squares_residuals <- sum((y_true - y_pred) ** 2)
sum_squares <- sum((y_true - mean(y_true)) ** 2)
R2 <- 1 - sum_squares_residuals / sum_squares
This is also called the coefficient of determination.
It indicates how close the fitted regression line is to ground-truth data.
The highest score possible is 1.0, indicating that the predictors perfectly account for variation in the target.
A score of 0.0 indicates that the predictors do not account for variation in the target.
The score can also be negative, meaning the model fits the data worse than a constant model that always predicts the mean of the target.
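For example, a small standalone sketch (made-up data, assuming keras3 is attached with a working backend): predictions worse than simply predicting mean(y_true) drive the score below zero.
y_true <- rbind(1, 2, 3)
y_bad <- rbind(3, 3, 0)
m <- metric_r2_score()
m$update_state(y_true, y_bad)
m$result()  # negative: worse than a constant mean prediction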
This metric can also compute the "Adjusted R2" score via the num_regressors argument.
Usage:
metric_r2_score(
  ...,
  class_aggregation = "uniform_average",
  num_regressors = 0L,
  name = "r2_score",
  dtype = NULL
)
Value:
A Metric instance is returned. The Metric instance can be passed directly to compile(metrics = ), or used as a standalone object. See ?Metric for example usage.
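For instance, a minimal compile-time sketch (the single-layer model below is made up purely for illustration, assuming keras3 is attached):
library(keras3)
model <- keras_model_sequential(input_shape = 4) |>
  layer_dense(units = 1)
model |> compile(
  optimizer = "adam",
  loss = "mse",
  metrics = list(metric_r2_score())
)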
Arguments:
...: For forward/backward compatibility.
class_aggregation: Specifies how to aggregate scores corresponding to different output classes (or target dimensions), i.e. different dimensions on the last axis of the predictions. Equivalent to the multioutput argument in Scikit-Learn. Should be one of NULL (no aggregation), "uniform_average", or "variance_weighted_average". See the sketch below.
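A minimal multi-output sketch (the two-column targets are made up for illustration; assuming that with NULL the metric keeps one score per output column, while "uniform_average" collapses them into a single number):
y_true <- rbind(c(1, 10), c(4, 40), c(3, 30))
y_pred <- rbind(c(2, 12), c(4, 38), c(4, 33))
per_output <- metric_r2_score(class_aggregation = NULL)
per_output$update_state(y_true, y_pred)
per_output$result()  # one R2 score per output column

averaged <- metric_r2_score(class_aggregation = "uniform_average")
averaged$update_state(y_true, y_pred)
averaged$result()    # unweighted mean of the per-output scores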
num_regressors: Number of independent regressors used (for the "Adjusted R2" score). 0 gives the standard R2 score. Defaults to 0L.
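A small sketch of the adjusted score (made-up data; assuming the standard adjustment 1 - (1 - R2) * (n - 1) / (n - p - 1), with n samples and p = num_regressors):
y_true <- rbind(3, 5, 7, 9, 11)
y_pred <- rbind(2.8, 5.1, 7.2, 8.7, 11.3)
adjusted <- metric_r2_score(num_regressors = 2L)
adjusted$update_state(y_true, y_pred)
adjusted$result()  # below the plain R2 score for the same data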
name: Optional. String name of the metric instance.
dtype: Optional. Data type of the metric result.
Examples:
y_true <- rbind(1, 4, 3)
y_pred <- rbind(2, 4, 4)
metric <- metric_r2_score()
metric$update_state(y_true, y_pred)
metric$result()
## tf.Tensor(0.57142854, shape=(), dtype=float32)
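This agrees with the formula at the top of the page; checking it in plain R with the same y_true and y_pred:
sum_squares_residuals <- sum((y_true - y_pred) ** 2)
sum_squares <- sum((y_true - mean(y_true)) ** 2)
1 - sum_squares_residuals / sum_squares
## [1] 0.5714286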
Other regression metrics:
metric_concordance_correlation()
metric_cosine_similarity()
metric_log_cosh_error()
metric_mean_absolute_error()
metric_mean_absolute_percentage_error()
metric_mean_squared_error()
metric_mean_squared_logarithmic_error()
metric_pearson_correlation()
metric_root_mean_squared_error()
Other metrics:
Metric()
custom_metric()
metric_auc()
metric_binary_accuracy()
metric_binary_crossentropy()
metric_binary_focal_crossentropy()
metric_binary_iou()
metric_categorical_accuracy()
metric_categorical_crossentropy()
metric_categorical_focal_crossentropy()
metric_categorical_hinge()
metric_concordance_correlation()
metric_cosine_similarity()
metric_f1_score()
metric_false_negatives()
metric_false_positives()
metric_fbeta_score()
metric_hinge()
metric_huber()
metric_iou()
metric_kl_divergence()
metric_log_cosh()
metric_log_cosh_error()
metric_mean()
metric_mean_absolute_error()
metric_mean_absolute_percentage_error()
metric_mean_iou()
metric_mean_squared_error()
metric_mean_squared_logarithmic_error()
metric_mean_wrapper()
metric_one_hot_iou()
metric_one_hot_mean_iou()
metric_pearson_correlation()
metric_poisson()
metric_precision()
metric_precision_at_recall()
metric_recall()
metric_recall_at_precision()
metric_root_mean_squared_error()
metric_sensitivity_at_specificity()
metric_sparse_categorical_accuracy()
metric_sparse_categorical_crossentropy()
metric_sparse_top_k_categorical_accuracy()
metric_specificity_at_sensitivity()
metric_squared_hinge()
metric_sum()
metric_top_k_categorical_accuracy()
metric_true_negatives()
metric_true_positives()