Function

class mxnet.autograd.Function

Customize differentiation in autograd.

If you don’t want to use the gradients computed by the default chain rule, you can use Function to customize differentiation for a computation. You define the computation in the forward method and provide its customized differentiation in the backward method. During gradient computation, autograd will use the user-defined backward function instead of the default chain rule. You can also cast to a numpy array and back for some operations in forward and backward (see the sketch after the usage example below).

For example, a stable sigmoid function can be defined as:

import mxnet as mx

class sigmoid(mx.autograd.Function):
    def forward(self, x):
        y = 1 / (1 + mx.nd.exp(-x))
        self.save_for_backward(y)
        return y

    def backward(self, dy):
        # backward takes as many inputs as forward returns,
        # and returns as many NDArrays as forward takes as arguments.
        y, = self.saved_tensors
        return dy * y * (1 - y)

Then, the function can be used in the following way:

func = sigmoid()
x = mx.nd.random.uniform(shape=(10,))
x.attach_grad()

with mx.autograd.record():
    m = func(x)
    m.backward()
dx = x.grad.asnumpy()
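
The description above also notes that you can cast to a numpy array and back inside forward and backward. Below is a minimal sketch of that pattern; the np_sin class and its use of numpy's sin and cos are illustrative assumptions, not part of the official example:

import numpy as np
import mxnet as mx

class np_sin(mx.autograd.Function):
    def forward(self, x):
        self.save_for_backward(x)
        # leave the NDArray world: compute with numpy, then wrap the result back
        return mx.nd.array(np.sin(x.asnumpy()))

    def backward(self, dy):
        # d/dx sin(x) = cos(x), again computed with numpy and cast back
        x, = self.saved_tensors
        return dy * mx.nd.array(np.cos(x.asnumpy()))

func = np_sin()
x = mx.nd.random.uniform(shape=(10,))
x.attach_grad()

with mx.autograd.record():
    y = func(x)
    y.backward()
# x.grad now holds cos(x), produced by the custom backward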

__init__()

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__()                  Initialize self.
backward(*output_grads)     Backward computation.
forward(*inputs)            Forward computation.
save_for_backward(*args)    Save states for use in the backward step.
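
Since autograd uses the user-defined backward instead of the default chain rule, a quick sanity check (not part of the official documentation) is to compare the gradient from the custom sigmoid Function above against the gradient autograd computes for the built-in mx.nd.sigmoid operator; the tolerance value here is an assumption:

import numpy as np
import mxnet as mx

# assumes the sigmoid Function class defined above is in scope
x = mx.nd.random.uniform(shape=(10,))
x.attach_grad()

# gradient through the user-defined backward
with mx.autograd.record():
    y = sigmoid()(x)
    y.backward()
grad_custom = x.grad.asnumpy()

# gradient through the built-in operator and the default chain rule
with mx.autograd.record():
    y = mx.nd.sigmoid(x)
    y.backward()
grad_default = x.grad.asnumpy()

assert np.allclose(grad_custom, grad_default, atol=1e-6)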