
check_symbolic_backward

mxnet.test_utils.check_symbolic_backward(sym, location, out_grads, expected, rtol=1e-05, atol=None, aux_states=None, grad_req='write', ctx=None, grad_stypes=None, equal_nan=False, dtype=<class 'numpy.float32'>)

Compares a symbol's backward (gradient) results with the expected ones and prints an error message if they do not match.

Parameters:
  • sym (Symbol) – Output symbol.
  • location (list of np.ndarray or dict of str to np.ndarray) –

    The evaluation point

    • if type is list of np.ndarray
      Contains all the NumPy arrays corresponding to sym.list_arguments().
    • if type is dict of str to np.ndarray
      Contains the mapping between argument names and their values (a dict-based call is sketched after this parameter list).
  • out_grads (None or list of np.ndarray or dict of str to np.ndarray) –

    NumPy arrays corresponding to sym.outputs, used as the incoming gradients.

    • if type is list of np.ndarray
      Contains arrays corresponding to exe.outputs.
    • if type is dict of str to np.ndarray
      Contains the mapping between sym.list_outputs() and Executor.outputs.
  • expected (list of np.ndarray or dict of str to np.ndarray) –

    Expected gradient values.

    • if type is list of np.ndarray
      Contains arrays corresponding to exe.grad_arrays.
    • if type is dict of str to np.ndarray
      Contains the mapping between sym.list_arguments() and exe.grad_arrays.
  • rtol (float, optional) – Relative error tolerance for the comparison.
  • atol (float, optional) – Absolute error tolerance for the comparison.
  • aux_states (list of np.ndarray or dict of str to np.ndarray) – Values for any auxiliary states of sym.
  • grad_req (str or list of str or dict of str to str, optional) – Gradient requirements: 'write', 'add', or 'null'.
  • ctx (Context, optional) – Running context.
  • grad_stypes (dict of str to str) – Dictionary mapping argument names to storage types (stype) for the gradients.
  • equal_nan (bool) – If True, NaN is treated as a valid value for the comparison (i.e. nan == nan).
  • dtype (np.float16 or np.float32 or np.float64) – Data type used when converting inputs to mx.nd.array.
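
Both location and expected accept the dict form described above. As a minimal sketch (the variable names x, y, prod, xv, yv, and og are illustrative, not part of the API), a dict-based call checking the gradients of an element-wise product:

>>> import numpy as np
>>> import mxnet as mx
>>> from mxnet.test_utils import check_symbolic_backward
>>> x = mx.symbol.Variable('x')
>>> y = mx.symbol.Variable('y')
>>> prod = x * y
>>> xv = np.array([[1., 2.], [3., 4.]])
>>> yv = np.array([[5., 6.], [7., 8.]])
>>> og = np.ones((2, 2))
>>> # d(x*y)/dx = y * og and d(x*y)/dy = x * og
>>> check_symbolic_backward(prod, {'x': xv, 'y': yv}, [og],
...                         {'x': yv * og, 'y': xv * og})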

Example

>>> import numpy as np
>>> import mxnet as mx
>>> from mxnet.test_utils import check_symbolic_backward, default_context
>>> shape = (2, 2)
>>> lhs = mx.symbol.Variable('lhs')
>>> rhs = mx.symbol.Variable('rhs')
>>> sym_add = mx.symbol.elemwise_add(lhs, rhs)
>>> mat1 = np.array([[1, 2], [3, 4]])
>>> mat2 = np.array([[5, 6], [7, 8]])
>>> grad1 = mx.nd.zeros(shape)
>>> grad2 = mx.nd.zeros(shape)
>>> exec_add = sym_add.bind(default_context(),
...                         args={'lhs': mx.nd.array(mat1), 'rhs': mx.nd.array(mat2)},
...                         args_grad={'lhs': grad1, 'rhs': grad2},
...                         grad_req={'lhs': 'write', 'rhs': 'write'})
>>> exec_add.forward(is_train=True)
>>> ograd = mx.nd.ones(shape)
>>> # For elemwise_add, the gradient w.r.t. each input equals the incoming gradient.
>>> grad_expected = ograd.copy().asnumpy()
>>> check_symbolic_backward(sym_add, [mat1, mat2], [ograd], [grad_expected, grad_expected])
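
The comparison tolerances and input precision can be overridden through the rtol, atol, and dtype keyword arguments. Building on the snippet above, a sketch of a double-precision check with tighter tolerances (the tolerance values here are illustrative):

>>> check_symbolic_backward(sym_add, [mat1, mat2], [np.ones(shape)],
...                         [grad_expected, grad_expected],
...                         rtol=1e-10, atol=1e-12, dtype=np.float64)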