Computes the cosine similarity between y_true and y_pred.

Formula:
loss <- -sum(l2_norm(y_true) * l2_norm(y_pred))
Note that it is a number between -1 and 1. When it is a negative number
between -1 and 0, 0 indicates orthogonality and values closer to -1
indicate greater similarity. This makes it usable as a loss function in a
setting where you try to maximize the proximity between predictions and
targets. If either y_true
or y_pred
is a zero vector, cosine
similarity will be 0 regardless of the proximity between predictions
and targets.
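
To make the range behaviour above concrete, here is a small sketch that
reproduces the formula in plain R. The l2_norm() helper is defined locally
for illustration only; it is not exported by keras3.

l2_norm <- function(x) {
  n <- sqrt(sum(x^2))
  if (n == 0) x else x / n  # a zero vector stays zero, so the loss is 0
}
cosine_similarity_loss <- function(y_true, y_pred) {
  -sum(l2_norm(y_true) * l2_norm(y_pred))
}
cosine_similarity_loss(c(0, 1), c(1, 0))   # orthogonal vectors -> 0
cosine_similarity_loss(c(1, 1), c(1, 1))   # same direction -> -1
cosine_similarity_loss(c(1, 1), c(-1, -1)) # opposite direction -> 1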
loss_cosine_similarity(
  y_true,
  y_pred,
  axis = -1L,
  ...,
  reduction = "sum_over_batch_size",
  name = "cosine_similarity",
  dtype = NULL
)
Value: Cosine similarity tensor.
y_true: Tensor of true targets.

y_pred: Tensor of predicted targets.

axis: The axis along which the cosine similarity is computed
(the features axis). Defaults to -1.

...: For forward/backward compatibility.

reduction: Type of reduction to apply to the loss. In almost all cases
this should be "sum_over_batch_size". Supported options are "sum",
"sum_over_batch_size", "mean", "mean_with_sample_weight" or NULL.
"sum" sums the loss, "sum_over_batch_size" and "mean" sum the loss and
divide by the sample size, and "mean_with_sample_weight" sums the loss
and divides by the sum of the sample weights. "none" and NULL perform no
aggregation. Defaults to "sum_over_batch_size". (See the sketch after
this argument list for the arithmetic each option implies.)

name: Optional name for the loss instance.

dtype: The dtype of the loss's computations. Defaults to NULL, which
means using config_floatx(). config_floatx() is "float32" unless set to
a different value (via config_set_floatx()). If a keras$DTypePolicy is
provided, then the compute_dtype will be utilized.
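
As referenced in the reduction entry above, the following sketch
illustrates what each reduction option computes. It assumes, as with
other keras3 loss_* functions, that calling loss_cosine_similarity()
without y_true/y_pred returns a Loss instance that can then be invoked
on a batch; the commented values show the intended arithmetic rather
than captured output.

library(keras3)

y_true <- rbind(c(0., 1.), c(1., 1.))
y_pred <- rbind(c(1., 0.), c(1., 1.))
# Per-sample losses for this batch of two are 0 and -1.

loss_mean <- loss_cosine_similarity(reduction = "sum_over_batch_size")
loss_sum  <- loss_cosine_similarity(reduction = "sum")
loss_none <- loss_cosine_similarity(reduction = NULL)

loss_mean(y_true, y_pred)  # (0 + -1) / 2 = -0.5
loss_sum(y_true, y_pred)   # 0 + -1 = -1
loss_none(y_true, y_pred)  # per-sample losses, no aggregation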
y_true <- rbind(c(0., 1.), c(1., 1.), c(1., 1.))
y_pred <- rbind(c(1., 0.), c(1., 1.), c(-1., -1.))
loss <- loss_cosine_similarity(y_true, y_pred, axis=-1)
loss
## tf.Tensor([-0. -1. 1.], shape=(3), dtype=float64)
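
The loss is more commonly supplied to compile() than called directly.
Below is a hedged sketch of that usage; the tiny model is a made-up
placeholder, and only the pattern of passing the loss to compile() is
the point.

library(keras3)

# Hypothetical toy model; any architecture whose output is compared to a
# target vector along the features axis would work the same way.
model <- keras_model_sequential(input_shape = c(16)) |>
  layer_dense(8, activation = "relu") |>
  layer_dense(4)

model |> compile(
  optimizer = "adam",
  loss = loss_cosine_similarity(axis = -1)
)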
Other losses:
Loss()
loss_binary_crossentropy()
loss_binary_focal_crossentropy()
loss_categorical_crossentropy()
loss_categorical_focal_crossentropy()
loss_categorical_hinge()
loss_circle()
loss_ctc()
loss_dice()
loss_hinge()
loss_huber()
loss_kl_divergence()
loss_log_cosh()
loss_mean_absolute_error()
loss_mean_absolute_percentage_error()
loss_mean_squared_error()
loss_mean_squared_logarithmic_error()
loss_poisson()
loss_sparse_categorical_crossentropy()
loss_squared_hinge()
loss_tversky()
metric_binary_crossentropy()
metric_binary_focal_crossentropy()
metric_categorical_crossentropy()
metric_categorical_focal_crossentropy()
metric_categorical_hinge()
metric_hinge()
metric_huber()
metric_kl_divergence()
metric_log_cosh()
metric_mean_absolute_error()
metric_mean_absolute_percentage_error()
metric_mean_squared_error()
metric_mean_squared_logarithmic_error()
metric_poisson()
metric_sparse_categorical_crossentropy()
metric_squared_hinge()