Abstract base class for wrapping LibTorch C++ optimizers.
torch::torch_optimizer -> OptimizerIgnite
new()
Initializes the optimizer with the specified parameters and defaults.
OptimizerIgnite$new(params, defaults)
params
(list())
Either a list of tensors or a list of parameter groups, each containing the params to optimize as well as optimizer options such as the learning rate, weight decay, etc.
defaults
(list())
A list of default optimizer options.
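Because OptimizerIgnite is an abstract base class, it is normally instantiated through a concrete subclass rather than directly. A minimal sketch, assuming the concrete subclass `optim_ignite_sgd()` from the torch package (an assumption; this page documents only the base class), showing both forms that `params` can take:

```r
library(torch)

# A single parameter tensor to optimize
w <- torch_randn(2, 2, requires_grad = TRUE)

# Form 1: a plain list of tensors, with options passed as defaults
opt <- optim_ignite_sgd(params = list(w), lr = 0.1)

# Form 2: a list of parameter groups, each with its own options
opt2 <- optim_ignite_sgd(
  params = list(
    list(params = list(w), lr = 0.01)
  ),
  lr = 0.1  # default for groups that do not set their own lr
)
```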
state_dict()
Returns the state dictionary containing the current state of the optimizer.
The returned list() contains two lists:
param_groups: The parameter groups of the optimizer (lr, ...) as well as which parameters they are applied to (params, integer indices).
state: The states of the optimizer. The names are the indices of the parameters to which they belong, converted to character.
OptimizerIgnite$state_dict()
(list())
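The structure of the returned state dictionary can be inspected directly. A sketch, again assuming the concrete subclass `optim_ignite_sgd()`:

```r
library(torch)

w <- torch_randn(3, requires_grad = TRUE)
opt <- optim_ignite_sgd(list(w), lr = 0.1)

sd <- opt$state_dict()
names(sd)                    # "param_groups" and "state"
sd$param_groups[[1]]$lr      # learning rate of the first group
sd$param_groups[[1]]$params  # integer indices of the group's parameters
names(sd$state)              # character indices; empty before the first step
```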
load_state_dict()
Loads the state dictionary into the optimizer.
OptimizerIgnite$load_state_dict(state_dict)
state_dict
(list())
The state dictionary to load into the optimizer.
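A state dictionary obtained from state_dict() can be restored into a freshly constructed optimizer over parameters with the same layout. A sketch under the same `optim_ignite_sgd()` assumption:

```r
library(torch)

w1 <- torch_randn(3, requires_grad = TRUE)
opt1 <- optim_ignite_sgd(list(w1), lr = 0.1)
sd <- opt1$state_dict()

# A second optimizer over a parameter with the same shape
w2 <- torch_randn(3, requires_grad = TRUE)
opt2 <- optim_ignite_sgd(list(w2), lr = 0.5)

# Restore hyperparameters and optimizer state from sd
opt2$load_state_dict(sd)
opt2$state_dict()$param_groups[[1]]$lr  # now 0.1, taken from sd
```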
step()
Performs a single optimization step, updating the parameters.
OptimizerIgnite$step(closure = NULL)
closure
(function())
A closure that conducts the forward pass and returns the loss.
(numeric())
The loss.
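When a closure is supplied, the optimizer calls it to (re-)compute the loss before updating the parameters. A sketch, assuming the concrete subclass `optim_ignite_sgd()`; the closure is responsible for zeroing gradients and calling backward():

```r
library(torch)

w <- torch_randn(1, requires_grad = TRUE)
opt <- optim_ignite_sgd(list(w), lr = 0.1)

closure <- function() {
  opt$zero_grad()
  loss <- (w - 2)^2  # toy objective: move w towards 2
  loss$backward()
  loss
}

loss <- opt$step(closure)  # returns the loss computed by the closure
```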
zero_grad()
Zeros out the gradients of the parameters.
OptimizerIgnite$zero_grad()
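Gradients accumulate across backward() calls, so in a manual training loop zero_grad() is typically called once per iteration before the backward pass. A sketch with the assumed subclass `optim_ignite_sgd()`:

```r
library(torch)

w <- torch_randn(1, requires_grad = TRUE)
opt <- optim_ignite_sgd(list(w), lr = 0.1)

for (i in 1:10) {
  opt$zero_grad()       # clear gradients from the previous iteration
  loss <- (w - 2)^2
  loss$backward()       # populate w$grad
  opt$step()            # apply the update
}
```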
add_param_group()
Adds a new parameter group to the optimizer.
OptimizerIgnite$add_param_group(param_group)
param_group
(list())
A parameter group to add to the optimizer.
This should contain the params
to optimize as well as the optimizer options.
For all options that are not specified, the defaults are used.
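A group can be added after construction, for example when additional parameters are unfrozen during training; options not set in the new group fall back to the defaults. A sketch under the `optim_ignite_sgd()` assumption:

```r
library(torch)

w1 <- torch_randn(2, requires_grad = TRUE)
opt <- optim_ignite_sgd(list(w1), lr = 0.1)

# Later, start optimizing a second tensor with its own learning rate
w2 <- torch_randn(2, requires_grad = TRUE)
opt$add_param_group(list(params = list(w2), lr = 0.01))

length(opt$state_dict()$param_groups)  # now 2
```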
clone()
The objects of this class are cloneable with this method.
OptimizerIgnite$clone(deep = FALSE)
deep
Whether to make a deep clone.