Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors a quantity and, if no improvement is seen for a 'patience' number of epochs, the learning rate is reduced.
callback_reduce_lr_on_plateau(
monitor = "val_loss",
factor = 0.1,
patience = 10,
verbose = 0,
mode = c("auto", "min", "max"),
min_delta = 1e-04,
cooldown = 0,
min_lr = 0
)
monitor: quantity to be monitored.
factor: factor by which the learning rate will be reduced. new_lr = lr * factor.
patience: number of epochs with no improvement after which the learning rate will be reduced.
verbose: int. 0: quiet, 1: update messages.
mode: one of "auto", "min", "max". In min mode, the learning rate will be reduced when the quantity monitored has stopped decreasing; in max mode it will be reduced when the quantity monitored has stopped increasing; in auto mode, the direction is automatically inferred from the name of the monitored quantity.
min_delta: threshold for measuring the new optimum, to only focus on significant changes.
cooldown: number of epochs to wait before resuming normal operation after the learning rate has been reduced.
min_lr: lower bound on the learning rate.
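Examples

A minimal usage sketch. It assumes an already compiled Keras model `model` and training data `x_train`/`y_train`; those names are placeholders, not part of this page.

library(keras)

# Reduce the learning rate by half after 5 epochs with no improvement
# in validation loss, but never drop below 0.001.
reduce_lr <- callback_reduce_lr_on_plateau(
  monitor = "val_loss",
  factor = 0.5,
  patience = 5,
  min_lr = 0.001
)

# Pass the callback to fit() through the callbacks argument.
history <- model %>% fit(
  x_train, y_train,
  epochs = 50,
  validation_split = 0.2,
  callbacks = list(reduce_lr)
)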
Other callbacks:
callback_csv_logger(), callback_early_stopping(), callback_lambda(), callback_learning_rate_scheduler(), callback_model_checkpoint(), callback_progbar_logger(), callback_remote_monitor(), callback_tensorboard(), callback_terminate_on_naan()