mxnet.ndarray.sparse.sgd_mom_update

mxnet.ndarray.sparse.sgd_mom_update(weight=None, grad=None, mom=None, lr=_Null, momentum=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, lazy_update=_Null, out=None, name=None, **kwargs)

Momentum update function for the Stochastic Gradient Descent (SGD) optimizer.

Momentum update typically achieves better convergence rates on neural networks than plain SGD. Mathematically, it is defined as:

$\begin{split}v_1 &= -\alpha * \nabla J(W_0)\\ v_t &= \gamma v_{t-1} - \alpha * \nabla J(W_{t-1})\\ W_t &= W_{t-1} + v_t\end{split}$

Equivalently, in pseudocode:

v = momentum * v - learning_rate * gradient
weight += v


Here, the parameter momentum is the decay rate of the momentum estimates at each epoch.
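
The following is a minimal sketch of a standard (dense) call; the shapes and values are illustrative assumptions, with mom updated in place and the result written to out:

import mxnet as mx

weight = mx.nd.array([[1., 2.], [3., 4.]])
grad = mx.nd.full((2, 2), 0.1)   # constant gradient, for illustration
mom = mx.nd.zeros((2, 2))        # momentum state starts at zero

# mom becomes momentum * mom - lr * grad = -0.01 everywhere,
# and weight += mom, so every entry decreases by 0.01.
mx.nd.sparse.sgd_mom_update(weight=weight, grad=grad, mom=mom,
                            lr=0.1, momentum=0.9, out=weight)
print(weight.asnumpy())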

However, if grad’s storage type is row_sparse, lazy_update is True, and weight’s storage type is the same as momentum’s storage type, then only the row slices whose indices appear in grad.indices are updated (for both weight and momentum):

for row in gradient.indices:
    v[row] = momentum * v[row] - learning_rate * gradient[row]
    weight[row] += v[row]


Defined in src/operator/optimizer_op.cc:L372
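
As a sketch of the lazy update path, the example below builds row_sparse inputs whose gradient touches only rows 0 and 2; the construction and values are illustrative assumptions:

import mxnet as mx

weight = mx.nd.ones((4, 2)).tostype('row_sparse')
mom = mx.nd.zeros((4, 2)).tostype('row_sparse')

# Gradient carries values only for rows 0 and 2.
grad = mx.nd.sparse.row_sparse_array(
    (mx.nd.full((2, 2), 0.5), [0, 2]), shape=(4, 2))

mx.nd.sparse.sgd_mom_update(weight=weight, grad=grad, mom=mom,
                            lr=0.1, momentum=0.9, lazy_update=True,
                            out=weight)
# Rows 0 and 2 become 0.95 (1 - 0.1 * 0.5); rows 1 and 3 stay 1.0.
print(weight.asnumpy())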

Parameters
• weight (NDArray) – Weight

• grad (NDArray) – Gradient

• mom (NDArray) – Momentum

• lr (float, required) – Learning rate

• momentum (float, optional, default=0) – The decay rate of momentum estimates at each epoch.

• wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.