

class mxnet.metric.NegativeLogLikelihood(eps=1e-12, name='nll-loss', output_names=None, label_names=None)[source]

Computes the negative log-likelihood loss.

The negative log-likelihood loss over a batch of sample size \(N\) is given by

\[-\sum_{n=1}^{N}\sum_{k=1}^{K}t_{nk}\log (y_{nk}),\]

where \(K\) is the number of classes, \(y_{nk}\) is the predicted probability of the \(k\)-th class for the \(n\)-th sample, and \(t_{nk}=1\) if and only if sample \(n\) belongs to class \(k\).

  • eps (float) – Negative log-likelihood loss is undefined when a predicted probability is 0, so this small constant is added to the predicted values.

  • name (str) – Name of this metric instance for display.

  • output_names (list of str, or None) – Names of the predictions that should be used when updating with update_dict. By default, all predictions are included.

  • label_names (list of str, or None) – Names of the labels that should be used when updating with update_dict. By default, all labels are included.


>>> predicts = [mx.nd.array([[0.3, 0.7], [0, 1.], [0.4, 0.6]])]
>>> labels   = [mx.nd.array([0, 1, 1])]
>>> nll_loss = mx.metric.NegativeLogLikelihood()
>>> nll_loss.update(labels, predicts)
>>> print(nll_loss.get())
('nll-loss', 0.57159948348999023)
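The reported value can be reproduced by hand with NumPy. Note how eps (1e-12 by default) is added before taking the log, so the zero probability in the second row cannot produce an infinite loss; this is a minimal sketch of the formula above, not the library's implementation:

```python
import numpy as np

eps = 1e-12
preds = np.array([[0.3, 0.7], [0.0, 1.0], [0.4, 0.6]])
labels = np.array([0, 1, 1])

# probability assigned to the true class of each sample: [0.3, 1.0, 0.6]
true_probs = preds[np.arange(len(labels)), labels]

# negative log-likelihood, averaged over the batch
nll = -np.log(true_probs + eps).sum() / len(labels)
print(nll)  # ~0.5716, matching the metric's output above
```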
__init__(eps=1e-12, name='nll-loss', output_names=None, label_names=None)[source]

Initialize self. See help(type(self)) for accurate signature.


Methods

__init__([eps, name, output_names, label_names])

Initialize self.


get()

Gets the current evaluation result.


get_config()

Saves the configuration of the metric.


get_global()

Gets the current global evaluation result.


get_global_name_value()

Returns zipped name and value pairs for global results.


get_name_value()

Returns zipped name and value pairs.


reset()

Resets the internal evaluation result to initial state.


reset_local()

Resets the local portion of the internal evaluation results to initial state.

update(labels, preds)

Updates the internal evaluation result.

update_dict(label, pred)

Updates the internal evaluation result with named labels and predictions.
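The metric accumulates a running loss sum and sample count across calls to update, so get returns the average over all samples seen since the last reset. The following pure-NumPy class is a simplified mimic of that accumulation behaviour (an assumption about the internals, not the MXNet source):

```python
import numpy as np

class SimpleNLL:
    """Minimal mimic of the metric's running-average behaviour."""

    def __init__(self, eps=1e-12, name='nll-loss'):
        self.eps = eps
        self.name = name
        self.reset()

    def reset(self):
        # clear the accumulated loss and the sample count
        self.sum_metric = 0.0
        self.num_inst = 0

    def update(self, labels, preds):
        labels = np.asarray(labels)
        preds = np.asarray(preds)
        # probability assigned to the true class of each sample
        true_probs = preds[np.arange(len(labels)), labels]
        self.sum_metric += -np.log(true_probs + self.eps).sum()
        self.num_inst += len(labels)

    def get(self):
        # average loss over all samples seen since the last reset
        return (self.name, self.sum_metric / self.num_inst)

m = SimpleNLL()
m.update([0, 1, 1], [[0.3, 0.7], [0.0, 1.0], [0.4, 0.6]])
print(m.get())  # ('nll-loss', ~0.5716), as in the example above
```

Calling update again would fold further batches into the same average, which is why reset (or reset_local) is typically called between epochs.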