keras (version 2.0.9)

callback_reduce_lr_on_plateau: Reduce learning rate when a metric has stopped improving.

Description

Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors a quantity and, if no improvement is seen for a 'patience' number of epochs, reduces the learning rate.

Usage

callback_reduce_lr_on_plateau(monitor = "val_loss", factor = 0.1,
  patience = 10, verbose = 0, mode = c("auto", "min", "max"),
  epsilon = 1e-04, cooldown = 0, min_lr = 0)
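
For example, a minimal sketch of attaching this callback to training. The objects model, x_train, and y_train are assumed placeholders for any compiled Keras model and its data; they are not part of this page:

library(keras)

# Reduce the learning rate to 20% of its value after 5 epochs
# with no improvement in validation loss, but never below 0.001
reduce_lr <- callback_reduce_lr_on_plateau(
  monitor = "val_loss",
  factor = 0.2,
  patience = 5,
  min_lr = 0.001
)

model %>% fit(
  x_train, y_train,
  epochs = 50,
  validation_split = 0.2,
  callbacks = list(reduce_lr)
)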

Arguments

monitor

quantity to be monitored.

factor

factor by which the learning rate will be reduced. new_lr = lr * factor

patience

number of epochs with no improvement after which learning rate will be reduced.

verbose

int. 0: quiet, 1: update messages.

mode

one of "auto", "min", "max". In min mode, lr will be reduced when the quantity monitored has stopped decreasing; in max mode it will be reduced when the quantity monitored has stopped increasing; in auto mode, the direction is automatically inferred from the name of the monitored quantity. See the sketch after this argument list for a mode = "max" example.

epsilon

threshold for measuring the new optimum, to only focus on significant changes.

cooldown

number of epochs to wait before resuming normal operation after lr has been reduced.

min_lr

lower bound on the learning rate.
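
As a sketch of mode in practice (the values here are illustrative, and in "auto" mode a name containing "acc" would be inferred the same way): when monitoring accuracy, improvement means the value increases, so use mode = "max".

# Halve the learning rate after 3 epochs with no gain in validation accuracy
reduce_lr_acc <- callback_reduce_lr_on_plateau(
  monitor = "val_acc",  # accuracy: higher is better
  mode = "max",
  factor = 0.5,
  patience = 3
)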

See Also

Other callbacks: callback_csv_logger, callback_early_stopping, callback_lambda, callback_learning_rate_scheduler, callback_model_checkpoint, callback_progbar_logger, callback_remote_monitor, callback_tensorboard, callback_terminate_on_naan