# mxnet.metric.CrossEntropy

class mxnet.metric.CrossEntropy(eps=1e-12, name='cross-entropy', output_names=None, label_names=None)[source]

Computes Cross Entropy loss.

The cross entropy over a batch of sample size $N$ is given by

$$-\sum_{n=1}^{N}\sum_{k=1}^{K}t_{nk}\log (y_{nk}),$$

where $t_{nk}=1$ if and only if sample $n$ belongs to class $k$, and $y_{nk}$ denotes the predicted probability that sample $n$ belongs to class $k$.
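The formula above can be sketched directly in NumPy. The helper name `cross_entropy_sum` is hypothetical and not part of mxnet; it only illustrates the double sum over samples and classes:

```python
import numpy as np

# Hypothetical helper (not part of mxnet) illustrating the formula:
# t is the one-hot target matrix, y the predicted probabilities,
# both of shape (N, K). eps guards against log(0).
def cross_entropy_sum(t, y, eps=1e-12):
    return -np.sum(t * np.log(y + eps))

t = np.array([[1, 0], [0, 1]])          # sample 0 -> class 0, sample 1 -> class 1
y = np.array([[0.9, 0.1], [0.2, 0.8]])  # predicted probabilities
loss = cross_entropy_sum(t, y)          # -(log 0.9 + log 0.8)
```

Because $t_{nk}$ is one-hot, only the log-probability assigned to each sample's true class contributes to the sum.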

Parameters
• eps (float) – Cross entropy loss is undefined when a predicted value is 0 or 1, so this small constant is added to the predicted values before taking the logarithm.

• name (str) – Name of this metric instance for display.

• output_names (list of str, or None) – Names of the predictions that should be used when updating with update_dict. By default, all predictions are included.

• label_names (list of str, or None) – Names of the labels that should be used when updating with update_dict. By default, all labels are included.
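The role of `eps` is easy to see in isolation: `log(0)` is `-inf`, so a prediction that assigns zero probability to the true class would make the loss infinite without the offset. A minimal NumPy illustration (the array `p` here is made up for demonstration):

```python
import numpy as np

# log(0) is -inf, so a zero predicted probability would blow up
# the loss. Adding eps keeps every logarithm finite.
eps = 1e-12
p = np.array([0.0, 0.5, 1.0])   # example predicted probabilities
safe = np.log(p + eps)          # all entries are finite
```

The default `eps=1e-12` is small enough to leave well-behaved probabilities essentially unchanged while capping the per-sample loss at roughly `-log(1e-12) ≈ 27.6`.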

Examples

>>> import mxnet as mx
>>> predicts = [mx.nd.array([[0.3, 0.7], [0., 1.], [0.4, 0.6]])]
>>> labels   = [mx.nd.array([0, 1, 1])]
>>> ce = mx.metric.CrossEntropy()
>>> ce.update(labels, predicts)
>>> print(ce.get())
('cross-entropy', 0.57159948348999023)
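Note that the value reported by `get()` is the batch *average* of the per-sample loss, i.e. the sum in the formula divided by $N$. We can reproduce it with plain NumPy, without mxnet:

```python
import numpy as np

# Reproducing the example value above: the metric reports the mean of
# -log(probability assigned to the true class) over the batch.
probs  = np.array([[0.3, 0.7], [0.0, 1.0], [0.4, 0.6]])
labels = np.array([0, 1, 1])
eps = 1e-12

true_class_probs = probs[np.arange(len(labels)), labels]  # [0.3, 1.0, 0.6]
loss = -np.mean(np.log(true_class_probs + eps))
# loss ≈ 0.5716, matching ce.get() above
```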

__init__(eps=1e-12, name='cross-entropy', output_names=None, label_names=None)[source]

Initialize self. See help(type(self)) for accurate signature.

Methods

| Method | Description |
| --- | --- |
| `__init__([eps, name, output_names, label_names])` | Initialize self. |
| `get()` | Gets the current evaluation result. |
| `get_config()` | Save configurations of metric. |
| `get_global()` | Gets the current global evaluation result. |
| `get_global_name_value()` | Returns zipped name and value pairs for global results. |
| `get_name_value()` | Returns zipped name and value pairs. |
| `reset()` | Resets the internal evaluation result to initial state. |
| `reset_local()` | Resets the local portion of the internal evaluation results to initial state. |
| `update(labels, preds)` | Updates the internal evaluation result. |
| `update_dict(label, pred)` | Updates the internal evaluation result with named labels and predictions. |