Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors a quantity and if no improvement is seen for a 'patience' number of epochs, the learning rate is reduced.
callback_reduce_lr_on_plateau(
  monitor = "val_loss",
  factor = 0.1,
  patience = 10L,
  verbose = 0L,
  mode = "auto",
  min_delta = 1e-04,
  cooldown = 0L,
  min_lr = 0,
  ...
)
A Callback instance that can be passed to fit.keras.src.models.model.Model().
monitor: String. Quantity to be monitored.
factor: Float. Factor by which the learning rate will be reduced. new_lr = lr * factor.
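For example, with factor = 0.2 a stagnating learning rate of 0.001 is cut to 2e-04. A minimal sketch of the arithmetic (lr and factor here are plain local variables for illustration, not callback internals):

lr <- 0.001
factor <- 0.2
new_lr <- lr * factor  # 0.001 * 0.2 = 2e-04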
patience: Integer. Number of epochs with no improvement after which the learning rate will be reduced.
verbose: Integer. 0: quiet, 1: update messages.
mode: String. One of {'auto', 'min', 'max'}. In 'min' mode, the learning rate will be reduced when the quantity monitored has stopped decreasing; in 'max' mode it will be reduced when the quantity monitored has stopped increasing; in 'auto' mode, the direction is automatically inferred from the name of the monitored quantity.
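As a sketch of 'max' mode (assuming the model was compiled with an accuracy metric, so "val_accuracy" is available): since higher accuracy is better, the learning rate is reduced when validation accuracy stops increasing.

# Hypothetical example: halve the learning rate when validation
# accuracy has not improved for 3 epochs.
reduce_lr_acc <- callback_reduce_lr_on_plateau(
  monitor = "val_accuracy",
  mode = "max",
  factor = 0.5,
  patience = 3
)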
min_delta: Float. Threshold for measuring the new optimum, to only focus on significant changes.
cooldown: Integer. Number of epochs to wait before resuming normal operation after the learning rate has been reduced.
min_lr: Float. Lower bound on the learning rate.
...: For forward/backward compatibility.
reduce_lr <- callback_reduce_lr_on_plateau(
  monitor = "val_loss", factor = 0.2,
  patience = 5, min_lr = 0.001
)
model %>% fit(x_train, y_train, callbacks = list(reduce_lr))
Other callbacks:
Callback()
callback_backup_and_restore()
callback_csv_logger()
callback_early_stopping()
callback_lambda()
callback_learning_rate_scheduler()
callback_model_checkpoint()
callback_remote_monitor()
callback_swap_ema_weights()
callback_tensorboard()
callback_terminate_on_nan()