class mxnet.optimizer.FTML(beta1=0.6, beta2=0.999, epsilon=1e-08, **kwargs)[source]

The FTML optimizer.

This class implements the optimizer described in FTML - Follow the Moving Leader in Deep Learning (Zheng & Kwok, ICML 2017).

Denote the time step by t. The optimizer updates the weight by:

rescaled_grad = clip(grad * rescale_grad + wd * weight, clip_gradient)
v = beta2 * v + (1 - beta2) * square(rescaled_grad)
d_t = (1 - power(beta1, t)) / lr * (square_root(v / (1 - power(beta2, t))) + epsilon)
z = beta1 * z + (1 - beta1) * rescaled_grad - (d_t - beta1 * d_(t-1)) * weight
weight = - z / d_t

For details of the update algorithm, see ftml_update.
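To make the recursion concrete, the following plain NumPy sketch transcribes the pseudocode above for a single step. It is an illustration only, not the ftml_update kernel; the function name and the default hyperparameter values are placeholders.

import numpy as np

# Illustrative helper, not part of mxnet.
def ftml_step(weight, grad, v, z, d_prev, t, lr=0.01,
              beta1=0.6, beta2=0.999, epsilon=1e-8,
              wd=0.0, rescale_grad=1.0, clip_gradient=None):
    # Rescale the gradient, add weight decay, and optionally clip.
    rescaled_grad = grad * rescale_grad + wd * weight
    if clip_gradient is not None:
        rescaled_grad = np.clip(rescaled_grad, -clip_gradient, clip_gradient)
    # Second-moment estimate and the d_t / z recursions from the update rule.
    v = beta2 * v + (1 - beta2) * np.square(rescaled_grad)
    d_t = (1 - beta1 ** t) / lr * (np.sqrt(v / (1 - beta2 ** t)) + epsilon)
    z = beta1 * z + (1 - beta1) * rescaled_grad - (d_t - beta1 * d_prev) * weight
    weight = -z / d_t
    return weight, v, z, d_t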

This optimizer accepts the following parameters in addition to those accepted by Optimizer.

  • beta1 (float, optional) – 0 < beta1 < 1. Generally close to 0.5.

  • beta2 (float, optional) – 0 < beta2 < 1. Generally close to 1.

  • epsilon (float, optional) – Small value to avoid division by 0.
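As a usage sketch (assuming the Gluon Trainer API; the hyperparameter values below are placeholders), the optimizer can be handed to a Trainer either as an instance or by its registered name 'ftml':

import mxnet as mx
from mxnet import gluon

net = gluon.nn.Dense(1)
net.initialize()

# Construct the optimizer directly and pass it to a Gluon Trainer.
ftml = mx.optimizer.FTML(learning_rate=0.01, beta1=0.6, beta2=0.999, epsilon=1e-8)
trainer = gluon.Trainer(net.collect_params(), ftml)

# Equivalent: let the Trainer create it from the registered name.
trainer = gluon.Trainer(net.collect_params(), 'ftml', {'learning_rate': 0.01})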

__init__(beta1=0.6, beta2=0.999, epsilon=1e-08, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.


__init__([beta1, beta2, epsilon])

Initialize self.

create_optimizer(name, **kwargs)

Instantiates an optimizer with a given name and kwargs.

create_state(index, weight)

Creates auxiliary state for a given weight.

create_state_multi_precision(index, weight)

Creates auxiliary state for a given weight, including FP32 high precision copy if original weight is FP16.


register(klass)

Registers a new optimizer.


set_learning_rate(lr)

Sets a new learning rate of the optimizer.


set_lr_mult(args_lr_mult)

Sets an individual learning rate multiplier for each parameter.


set_lr_scale(args_lrscale)

[DEPRECATED] Sets lr scale.


set_wd_mult(args_wd_mult)

Sets an individual weight decay multiplier for each parameter.

update(index, weight, grad, state)

Updates the given parameter using the corresponding gradient and state.

update_multi_precision(index, weight, grad, …)

Updates the given parameter using the corresponding gradient and state.
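For illustration, a minimal sketch of driving the optimizer by hand with create_state and update; the parameter index, weight, and gradient values are placeholders:

import mxnet as mx

opt = mx.optimizer.FTML(learning_rate=0.01)

weight = mx.nd.array([1.0, 2.0, 3.0])
grad = mx.nd.array([0.1, -0.2, 0.3])

# create_state allocates the auxiliary buffers used by the update rule.
state = opt.create_state(0, weight)
# update applies one FTML step to `weight` in place.
opt.update(0, weight, grad, state)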