
check_numeric_gradient

mxnet.test_utils.check_numeric_gradient(sym, location, aux_states=None, numeric_eps=0.001, rtol=0.01, atol=None, grad_nodes=None, use_forward_train=True, ctx=None, grad_stype_dict=None, dtype=<class 'numpy.float32'>)[source]

Verify an operation by checking its backward pass against a finite-difference approximation of the gradient.

Based on Theano's theano.gradient.verify_grad [1].

Parameters:
  • sym (Symbol) – Symbol containing op to test
  • location (list or tuple or dict) –

    Argument values used as location to compute gradient

    • if type is list of numpy.ndarray
      inner elements should have the same order as mxnet.sym.list_arguments().
    • if type is dict of str -> numpy.ndarray
      maps the name of arguments to the corresponding numpy.ndarray.

    In either case, value of all the arguments must be provided.

  • aux_states (list or tuple or dict, optional) – The auxiliary states required when generating the executor for the symbol.
  • numeric_eps (float, optional) – Delta for the finite difference method that approximates the gradient.
  • rtol (float, optional) – Relative error tolerance used when comparing the numeric gradient to the symbolic gradient.
  • atol (float, optional) – Absolute error tolerance used when comparing the numeric gradient to the symbolic gradient.
  • grad_nodes (None or list or tuple or dict, optional) – Names of the nodes to check the gradient on.
  • use_forward_train (bool) – Whether to use is_train=True when computing the finite-difference.
  • ctx (Context, optional) – Check the gradient computation on the specified device.
  • grad_stype_dict (dict of str->str, optional) – Storage type dictionary for gradient ndarrays.
  • dtype (np.float16 or np.float32 or np.float64) – Datatype for mx.nd.array.

References

[1] https://github.com/Theano/Theano/blob/master/theano/gradient.py
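The check works by perturbing each input element by ``numeric_eps`` and comparing the resulting central finite-difference estimate against the symbolic gradient. The following is a minimal NumPy-only sketch of that idea (not MXNet code; the function names and the example objective ``f(x) = sum(x**2)`` are illustrative assumptions):

```python
import numpy as np

def numeric_grad(f, x, eps=1e-3):
    """Central finite-difference estimate of df/dx for a scalar-valued f."""
    grad = np.zeros_like(x)
    flat, g = x.reshape(-1), grad.reshape(-1)  # views into x and grad
    for i in range(flat.size):
        orig = flat[i]
        flat[i] = orig + eps
        f_pos = f(x)
        flat[i] = orig - eps
        f_neg = f(x)
        flat[i] = orig  # restore the original value
        g[i] = (f_pos - f_neg) / (2 * eps)
    return grad

x = np.random.RandomState(0).uniform(-1.0, 1.0, (3, 4))
approx = numeric_grad(lambda a: (a ** 2).sum(), x)
exact = 2 * x  # the "symbolic" gradient being verified
assert np.allclose(approx, exact, rtol=1e-2)
```

check_numeric_gradient performs essentially this comparison for every gradient node, using rtol/atol as the comparison tolerances.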