Implements stochastic gradient descent (optionally with momentum). Nesterov momentum is based on the formula from On the importance of initialization and momentum in deep learning.
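For reference, a sketch of the update rule as documented for PyTorch's SGD, whose LibTorch implementation the ignite optimizers wrap; here \(\gamma\) is lr, \(\mu\) is momentum, \(\tau\) is dampening, and \(\lambda\) is weight_decay:

\[
\begin{aligned}
g_t &\gets \nabla_{\theta} f_t(\theta_{t-1}) + \lambda\,\theta_{t-1} \\
b_t &\gets \mu\, b_{t-1} + (1 - \tau)\, g_t \quad (b_1 = g_1) \\
g_t &\gets \begin{cases} g_t + \mu\, b_t & \text{if nesterov} \\ b_t & \text{otherwise} \end{cases} \\
\theta_t &\gets \theta_{t-1} - \gamma\, g_t
\end{aligned}
\]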
optim_ignite_sgd(
  params,
  lr = optim_required(),
  momentum = 0,
  dampening = 0,
  weight_decay = 0,
  nesterov = FALSE
)
params (iterable): iterable of parameters to optimize or lists defining parameter groups
lr (float): learning rate
momentum (float, optional): momentum factor (default: 0)
dampening (float, optional): dampening for momentum (default: 0)
weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
nesterov (bool, optional): enables Nesterov momentum (default: FALSE); see the construction example after this list
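As a quick illustration of the momentum-related arguments (the model object here is hypothetical, not part of this page), construction might look as follows; note that, as in PyTorch's SGD, Nesterov momentum requires a positive momentum and zero dampening:

library(torch)

# Hypothetical model; any nn_module's parameters would do.
model <- nn_linear(10, 1)

# Classical (heavy-ball) momentum with an L2 penalty.
opt <- optim_ignite_sgd(model$parameters, lr = 0.01,
                        momentum = 0.9, weight_decay = 1e-4)

# Nesterov momentum: needs momentum > 0 and dampening == 0.
opt_nesterov <- optim_ignite_sgd(model$parameters, lr = 0.01,
                                 momentum = 0.9, nesterov = TRUE)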
See OptimizerIgnite for details.
if (torch_is_installed()) {
  if (FALSE) { # not run: assumes model, loss_fn, input, and target are defined
    optimizer <- optim_ignite_sgd(model$parameters, lr = 0.1)
    optimizer$zero_grad()                    # clear accumulated gradients
    loss_fn(model(input), target)$backward() # compute gradients of the loss
    optimizer$step()                         # update the parameters
  }
}
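Because the example above assumes model, loss_fn, input, and target already exist, here is a self-contained sketch (synthetic data and model invented for illustration) that fits a one-dimensional linear regression end to end:

library(torch)

# Synthetic data for y = 2x + 1 plus noise (illustrative only).
x <- torch_randn(100, 1)
y <- 2 * x + 1 + 0.1 * torch_randn(100, 1)

model <- nn_linear(1, 1)
optimizer <- optim_ignite_sgd(model$parameters, lr = 0.1, momentum = 0.9)

for (epoch in 1:200) {
  optimizer$zero_grad()              # reset gradients from the last step
  loss <- nnf_mse_loss(model(x), y)  # forward pass and loss
  loss$backward()                    # backpropagate
  optimizer$step()                   # momentum SGD update
}

loss$item()  # should end up near the noise level (about 0.01)

The usual pattern is to call zero_grad() before every backward(); omitting it silently accumulates gradients across iterations.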