
mxnet.optimizer.Signum

class mxnet.optimizer.Signum(learning_rate=0.01, momentum=0.9, wd_lh=0.0, **kwargs)

The Signum optimizer that takes the sign of gradient or momentum.

The optimizer updates the weight by:

rescaled_grad = rescale_grad * clip(grad, clip_gradient) + wd * weight
state = momentum * state + (1 - momentum) * rescaled_grad
weight = (1 - lr * wd_lh) * weight - lr * sign(state)
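As an illustration only, one step of the formulas above can be sketched in NumPy. This is not MXNet's implementation; the function name and its defaults are hypothetical, chosen to mirror the parameters above:

import numpy as np

def signum_step(weight, grad, state, lr=0.01, momentum=0.9,
                wd=0.0, wd_lh=0.0, rescale_grad=1.0, clip_gradient=None):
    # Optional gradient clipping, matching clip(grad, clip_gradient) above.
    if clip_gradient is not None:
        grad = np.clip(grad, -clip_gradient, clip_gradient)
    # rescaled_grad = rescale_grad * clip(grad, clip_gradient) + wd * weight
    rescaled_grad = rescale_grad * grad + wd * weight
    # Momentum buffer: exponential moving average of the rescaled gradient.
    state[:] = momentum * state + (1 - momentum) * rescaled_grad
    # Step along sign(state), with decoupled weight decay scaled by wd_lh.
    weight[:] = (1 - lr * wd_lh) * weight - lr * np.sign(state)

With momentum=0 the buffer equals the rescaled gradient, so the step reduces to the sign of the gradient (signSGD); with momentum it is the Signum variant.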

References

Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli & Anima Anandkumar. (2018). signSGD: Compressed Optimisation for Non-Convex Problems. In ICML'18.

See: https://arxiv.org/abs/1802.04434

For details of the update algorithm, see signsgd_update and signum_update.

This optimizer accepts the following parameters in addition to those accepted by Optimizer.

Parameters
  • momentum (float, optional) – The momentum value.

  • wd_lh (float, optional) – The amount of decoupled weight decay regularization; see details in the original paper: https://arxiv.org/abs/1711.05101
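
As a usage sketch (assuming the optimizer is registered under the name 'signum', following MXNet's convention of registering optimizers by lowercase class name), it can be constructed directly or passed to a Gluon Trainer:

import mxnet as mx
from mxnet import gluon

net = gluon.nn.Dense(1)
net.initialize()
# Signum-specific options are passed alongside the base Optimizer options.
trainer = gluon.Trainer(net.collect_params(), 'signum',
                        {'learning_rate': 0.01, 'momentum': 0.9, 'wd_lh': 0.0})

# Equivalent direct construction:
opt = mx.optimizer.Signum(learning_rate=0.01, momentum=0.9, wd_lh=0.0)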

__init__(learning_rate=0.01, momentum=0.9, wd_lh=0.0, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__([learning_rate, momentum, wd_lh])

Initialize self.

create_optimizer(name, **kwargs)

Instantiates an optimizer with a given name and kwargs.

create_state(index, weight)

Creates auxiliary state for a given weight.

create_state_multi_precision(index, weight)

Creates auxiliary state for a given weight, including an FP32 high-precision copy if the original weight is FP16.

register(klass)

Registers a new optimizer.

set_learning_rate(lr)

Sets a new learning rate for the optimizer.

set_lr_mult(args_lr_mult)

Sets an individual learning rate multiplier for each parameter.

set_lr_scale(args_lrscale)

[DEPRECATED] Sets lr scale.

set_wd_mult(args_wd_mult)

Sets an individual weight decay multiplier for each parameter.

update(index, weight, grad, state)

Updates the given parameter using the corresponding gradient and state.

update_multi_precision(index, weight, grad, …)

Updates the given parameter using the corresponding gradient and state. Mixed precision version.
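
For low-level use without a Trainer, create_state and update (listed above) can be called directly. A minimal sketch, with arbitrary example shapes:

import mxnet as mx

opt = mx.optimizer.Signum(learning_rate=0.01, momentum=0.9)
weight = mx.nd.random.normal(shape=(3,))
grad = mx.nd.random.normal(shape=(3,))
state = opt.create_state(0, weight)   # momentum buffer for parameter index 0
opt.update(0, weight, grad, state)    # updates weight in place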

Attributes

learning_rate

opt_registry