Loss functions
loss_binary_crossentropy(
  y_true,
  y_pred,
  from_logits = FALSE,
  label_smoothing = 0,
  axis = -1L,
  ...,
  reduction = "auto",
  name = "binary_crossentropy"
)

loss_categorical_crossentropy(
  y_true,
  y_pred,
  from_logits = FALSE,
  label_smoothing = 0L,
  axis = -1L,
  ...,
  reduction = "auto",
  name = "categorical_crossentropy"
)

loss_categorical_hinge(
  y_true,
  y_pred,
  ...,
  reduction = "auto",
  name = "categorical_hinge"
)

loss_cosine_similarity(
  y_true,
  y_pred,
  axis = -1L,
  ...,
  reduction = "auto",
  name = "cosine_similarity"
)

loss_hinge(y_true, y_pred, ..., reduction = "auto", name = "hinge")

loss_huber(
  y_true,
  y_pred,
  delta = 1,
  ...,
  reduction = "auto",
  name = "huber_loss"
)

loss_kullback_leibler_divergence(
  y_true,
  y_pred,
  ...,
  reduction = "auto",
  name = "kl_divergence"
)

loss_kl_divergence(
  y_true,
  y_pred,
  ...,
  reduction = "auto",
  name = "kl_divergence"
)

loss_logcosh(y_true, y_pred, ..., reduction = "auto", name = "log_cosh")

loss_mean_absolute_error(
  y_true,
  y_pred,
  ...,
  reduction = "auto",
  name = "mean_absolute_error"
)

loss_mean_absolute_percentage_error(
  y_true,
  y_pred,
  ...,
  reduction = "auto",
  name = "mean_absolute_percentage_error"
)

loss_mean_squared_error(
  y_true,
  y_pred,
  ...,
  reduction = "auto",
  name = "mean_squared_error"
)

loss_mean_squared_logarithmic_error(
  y_true,
  y_pred,
  ...,
  reduction = "auto",
  name = "mean_squared_logarithmic_error"
)

loss_poisson(y_true, y_pred, ..., reduction = "auto", name = "poisson")

loss_sparse_categorical_crossentropy(
  y_true,
  y_pred,
  from_logits = FALSE,
  axis = -1L,
  ...,
  reduction = "auto",
  name = "sparse_categorical_crossentropy"
)

loss_squared_hinge(
  y_true,
  y_pred,
  ...,
  reduction = "auto",
  name = "squared_hinge"
)
If called with y_true and y_pred, then the corresponding loss is evaluated and the result returned (as a tensor). Alternatively, if y_true and y_pred are missing, then a callable is returned that will compute the loss function and, by default, reduce the loss to a scalar tensor; see the reduction parameter for details. (The callable is typically a class instance that inherits from keras$losses$Loss.)
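For example, a minimal sketch of both calling conventions (the tensor values here are illustrative, built with k_constant()):

library(keras)

y_true <- k_constant(c(0, 1, 1), shape = c(3, 1))
y_pred <- k_constant(c(0.1, 0.8, 0.6), shape = c(3, 1))

# Called with y_true and y_pred: evaluates the loss and returns a tensor
loss_binary_crossentropy(y_true, y_pred)

# Called with y_true and y_pred missing: returns a Loss instance that
# can be supplied to compile()
bce <- loss_binary_crossentropy(from_logits = FALSE)
# model %>% compile(optimizer = "adam", loss = bce)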
y_true: Ground truth values. shape = [batch_size, d1, .. dN].

y_pred: The predicted values. shape = [batch_size, d1, .. dN]. (A tensor of the same shape as y_true.)

from_logits: Whether y_pred is expected to be a logits tensor. By default we assume that y_pred encodes a probability distribution.

label_smoothing: Float in [0, 1]. If > 0 then smooth the labels. For example, if 0.1, use 0.1 / num_classes for non-target labels and 0.9 + 0.1 / num_classes for target labels.

axis: The axis along which to compute crossentropy (the features axis). Axis is 1-based (e.g., the first axis is axis = 1). Defaults to -1 (the last axis).

...: Additional arguments passed on to the Python callable (for forward and backward compatibility).

reduction: Only applicable if y_true and y_pred are missing. Type of keras$losses$Reduction to apply to the loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context; for almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf$distribute$Strategy, outside of built-in training loops such as compile() and fit(), using AUTO or SUM_OVER_BATCH_SIZE will raise an error. See the TensorFlow custom training tutorial for more details. (A short sketch of the reduction options follows this list.)

name: Only applicable if y_true and y_pred are missing. An optional name for the Loss instance.

delta: A float, the point where the Huber loss function changes from quadratic to linear.
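As a sketch of how the reduction affects the result, assuming the string shorthand "sum" resolves to keras$losses$Reduction$SUM in the same way "auto" resolves to AUTO:

library(keras)

mse_auto <- loss_mean_squared_error()                  # reduction = "auto"
mse_sum  <- loss_mean_squared_error(reduction = "sum") # sum over the batch

y_true <- k_constant(c(0, 1), shape = c(2, 1))
y_pred <- k_constant(c(0.5, 0.5), shape = c(2, 1))

mse_auto(y_true, y_pred)  # ~0.25: mean of the per-sample losses (0.25, 0.25)
mse_sum(y_true, y_pred)   # ~0.50: sum of the per-sample losses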
Computes the binary crossentropy loss.
label_smoothing details: Float in [0, 1]. If > 0 then smooth the labels by squeezing them towards 0.5. That is, use 1. - 0.5 * label_smoothing for the target class and 0.5 * label_smoothing for the non-target class.
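For instance, with label_smoothing = 0.1 a target of 1 is treated as 0.95 and a target of 0 as 0.05. A minimal sketch (the tensor values are illustrative):

library(keras)

y_true <- k_constant(c(0, 1), shape = c(2, 1))
y_pred <- k_constant(c(0.05, 0.95), shape = c(2, 1))

loss_binary_crossentropy(y_true, y_pred)                        # unsmoothed targets
loss_binary_crossentropy(y_true, y_pred, label_smoothing = 0.1) # targets squeezed towards 0.5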
Computes the categorical crossentropy loss.
When using the categorical_crossentropy loss, your targets should be in categorical format (e.g. if you have 10 classes, the target for each sample should be a 10-dimensional vector that is all-zeros except for a 1 at the index corresponding to the class of the sample). In order to convert integer targets into categorical targets, you can use the Keras utility function to_categorical():
categorical_labels <- to_categorical(int_labels, num_classes = NULL)
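For example (the integer labels here are illustrative):

library(keras)

int_labels <- c(0L, 2L, 1L)
to_categorical(int_labels, num_classes = 3)
#      [,1] [,2] [,3]
# [1,]    1    0    0
# [2,]    0    0    1
# [3,]    0    1    0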
Computes the Huber loss value.

For each value x in error = y_true - y_pred:

loss = 0.5 * x^2                  if |x| <= d
loss = d * |x| - 0.5 * d^2        if |x| > d

where d is delta. See: https://en.wikipedia.org/wiki/Huber_loss
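The piecewise definition is easy to check by hand. A sketch in base R (the helper huber() below is illustrative, not part of the package):

# Per-sample Huber loss, averaged, matching the definition above
huber <- function(y_true, y_pred, delta = 1) {
  x <- y_true - y_pred
  mean(ifelse(abs(x) <= delta,
              0.5 * x^2,                        # quadratic regime
              delta * abs(x) - 0.5 * delta^2))  # linear regime
}
huber(c(0, 1), c(0.5, 3))  # 0.8125: mixes both regimes
# Comparable (up to the reduction applied) to:
# loss_huber(k_constant(c(0, 1)), k_constant(c(0.5, 3)))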
Logarithm of the hyperbolic cosine of the prediction error.
log(cosh(x)) is approximately equal to (x ** 2) / 2 for small x and to abs(x) - log(2) for large x. This means that 'logcosh' works mostly like the mean squared error, but will not be so strongly affected by the occasional wildly incorrect prediction. However, it may return NaNs if the intermediate value cosh(y_pred - y_true) is too large to be represented in the chosen precision.
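A quick numerical check of these approximations in base R:

x <- c(0.1, 0.5, 10)  # prediction errors, the last one wildly wrong
0.5 * x^2             # squared-error contribution: 0.005 0.125 50
log(cosh(x))          # log-cosh: ~0.005 ~0.120 ~9.307
abs(x) - log(2)       # large-x approximation; matches closely at x = 10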
Loss functions for model training. These are typically supplied in the loss parameter of the compile.keras.engine.training.Model() function.
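For example, a minimal sketch of supplying a loss to compile() (the model here is illustrative):

library(keras)

model <- keras_model_sequential() %>%
  layer_dense(units = 10, activation = "softmax", input_shape = 20)

model %>% compile(
  optimizer = "rmsprop",
  loss = loss_categorical_crossentropy(),  # or simply "categorical_crossentropy"
  metrics = "accuracy"
)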
See also: compile.keras.engine.training.Model(), loss_binary_crossentropy()