# NAG

`class mxnet.optimizer.NAG(momentum=0.0, **kwargs)`

Nesterov accelerated SGD.

This optimizer updates each weight by:

```
state = momentum * state + grad + wd * weight
weight = weight - (lr * (grad + momentum * state))
```
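To make the two-line rule concrete, here is a minimal NumPy sketch of a single NAG step; the helper name `nag_step` and the toy values are illustrative assumptions, not part of the MXNet API:

```python
import numpy as np

def nag_step(weight, grad, state, lr=0.01, momentum=0.9, wd=0.0):
    # hypothetical helper mirroring the update rule above
    grad = grad + wd * weight                         # fold weight decay into the gradient
    state = momentum * state + grad                   # i.e. momentum * state + grad + wd * weight
    weight = weight - lr * (grad + momentum * state)  # Nesterov look-ahead update
    return weight, state

# toy usage on a single parameter vector
w = np.zeros(3)
g = np.array([0.5, -1.0, 0.25])
s = np.zeros_like(w)
w, s = nag_step(w, g, s)
```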

**Parameters**

- **momentum** (*float*, optional) – The momentum value.
- **multi_precision** (*bool*, optional) – Flag to control the internal precision of the optimizer. `False` (default) uses the same precision as the weights; `True` keeps an internal 32-bit copy of the weights and applies gradients in 32-bit precision, even if the actual weights used in the model have lower precision. Turning this on can improve convergence and accuracy when training with float16.
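In practice the optimizer is usually selected by its registered name, `'nag'`, when constructing a Gluon `Trainer`. A minimal sketch, assuming a small half-precision Gluon model (the model itself is a placeholder for illustration):

```python
import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(10)        # placeholder model for illustration
net.initialize()
net.cast('float16')       # train with float16 weights

# multi_precision keeps a float32 master copy of the weights,
# as described in the parameter list above.
trainer = mx.gluon.Trainer(
    net.collect_params(), 'nag',
    {'learning_rate': 0.1, 'momentum': 0.9, 'multi_precision': True})
```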
`__init__(momentum=0.0, **kwargs)`

Initialize self. See `help(type(self))` for accurate signature.

## Methods

| Method | Description |
| --- | --- |
| `__init__([momentum])` | Initialize self. |
| `create_optimizer(name, **kwargs)` | Instantiates an optimizer with a given name and kwargs. |
| `create_state(index, weight)` | Creates auxiliary state for a given weight. |
| `create_state_multi_precision(index, weight)` | Creates auxiliary state for a given weight, including an FP32 high-precision copy if the original weight is FP16. |
| `register(klass)` | Registers a new optimizer. |
| `set_learning_rate(lr)` | Sets a new learning rate of the optimizer. |
| `set_lr_mult(args_lr_mult)` | Sets an individual learning rate multiplier for each parameter. |
| `set_lr_scale(args_lrscale)` | [DEPRECATED] Sets lr scale. |
| `set_wd_mult(args_wd_mult)` | Sets an individual weight decay multiplier for each parameter. |
| `update(index, weight, grad, state)` | Updates the given parameter using the corresponding gradient and state. |
| `update_multi_precision(index, weight, grad, …)` | Updates the given parameter using the corresponding gradient and state. |
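The low-level interface can also be driven directly through `create_state` and `update`. A short sketch with made-up NDArray values (the weight is updated in place):

```python
import mxnet as mx

opt = mx.optimizer.NAG(learning_rate=0.1, momentum=0.9)

weight = mx.nd.ones((3,))
grad = mx.nd.array([0.5, -1.0, 0.25])

# index 0 identifies this parameter; the optimizer tracks
# per-index learning-rate and weight-decay multipliers
state = opt.create_state(0, weight)
opt.update(0, weight, grad, state)   # applies one NAG step to weight
print(weight.asnumpy())
```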

## Attributes

- `learning_rate`
- `opt_registry`