class mxnet.metric.TopKAccuracy(top_k=1, name='top_k_accuracy', output_names=None, label_names=None)[source]

Computes top k predictions accuracy.

TopKAccuracy differs from Accuracy in that it considers a prediction to be correct as long as the ground truth label is among the top k predicted labels.

If top_k = 1, then TopKAccuracy is identical to Accuracy.

  • top_k (int) – Number of top predictions to consider. A prediction is counted as correct if the target label is among the top k predicted labels.

  • name (str) – Name of this metric instance for display.

  • output_names (list of str, or None) – Names of the predictions that should be used when updating with update_dict. By default, all predictions are included.

  • label_names (list of str, or None) – Names of the labels that should be used when updating with update_dict. By default, all labels are included.


>>> import mxnet as mx
>>> import numpy as np
>>> np.random.seed(999)
>>> top_k = 3
>>> labels = [mx.nd.array([2, 6, 9, 2, 3, 4, 7, 8, 9, 6])]
>>> predicts = [mx.nd.array(np.random.rand(10, 10))]
>>> acc = mx.metric.TopKAccuracy(top_k=top_k)
>>> acc.update(labels, predicts)
>>> print(acc.get())
('top_k_accuracy', 0.3)
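As a sanity check on the semantics, the counting rule can be sketched in plain NumPy (an illustrative sketch, not MXNet's implementation): a row counts as a hit when its label appears among the `top_k` highest-scoring classes, and with `top_k=1` the same rule reduces to plain accuracy.

```python
import numpy as np

# Illustrative sketch only -- not MXNet's actual implementation.
def topk_hits(labels, preds, top_k):
    """Return (hits, total): rows whose label is in the top_k scores."""
    # Column indices of the top_k highest scores in each row.
    topk_idx = np.argsort(preds, axis=1)[:, -top_k:]
    hits = sum(int(l in row) for l, row in zip(labels, topk_idx))
    return hits, len(labels)

labels = np.array([0, 1, 1])
preds = np.array([[0.9, 0.05, 0.05],  # class 0 ranked first   -> hit
                  [0.3, 0.1, 0.6],    # top-2 is {0, 2}        -> miss
                  [0.2, 0.3, 0.5]])   # top-2 is {1, 2}        -> hit
print(topk_hits(labels, preds, top_k=2))  # (2, 3)
print(topk_hits(labels, preds, top_k=1))  # (1, 3): only row 0 hits
```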
__init__(top_k=1, name='top_k_accuracy', output_names=None, label_names=None)[source]

Initialize self. See help(type(self)) for accurate signature.


__init__([top_k, name, output_names, …])

Initialize self.


get()

Gets the current evaluation result.


get_config()

Save configurations of metric.


get_global()

Gets the current global evaluation result.


get_global_name_value()

Returns zipped name and value pairs for global results.


get_name_value()

Returns zipped name and value pairs.


reset()

Resets the internal evaluation result to initial state.


reset_local()

Resets the local portion of the internal evaluation results to initial state.

update(labels, preds)

Updates the internal evaluation result.
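For intuition, the accumulation across batches can be sketched as follows (a hypothetical minimal re-implementation, not MXNet's code): update() adds each batch's hit count to a running sum, get() divides by the number of instances seen so far, and reset() clears both counters.

```python
import numpy as np

# Hypothetical sketch of the accumulation behind update()/get()/reset();
# not MXNet's actual implementation.
class TopKAccuracySketch:
    def __init__(self, top_k=1):
        self.top_k = top_k
        self.reset()

    def reset(self):
        # Resets the internal evaluation result to initial state.
        self.sum_metric = 0.0
        self.num_inst = 0

    def update(self, labels, preds):
        # Accumulate this batch's hits into the running totals.
        topk_idx = np.argsort(preds, axis=1)[:, -self.top_k:]
        self.sum_metric += sum(int(l in row) for l, row in zip(labels, topk_idx))
        self.num_inst += len(labels)

    def get(self):
        # Gets the current evaluation result as a (name, value) pair.
        value = self.sum_metric / self.num_inst if self.num_inst else float('nan')
        return ('top_k_accuracy', value)

acc = TopKAccuracySketch(top_k=2)
acc.update(np.array([0, 1]), np.array([[0.7, 0.2, 0.1],
                                       [0.1, 0.3, 0.6]]))
print(acc.get())  # ('top_k_accuracy', 1.0): both labels are in the top 2
```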

update_dict(label, pred)

Updates the internal evaluation result with named label and pred.