mxnet.metric

Online evaluation metric module.
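
All metrics share the EvalMetric interface: accumulate batches of labels and predictions with update(), read the running result with get(), and clear the accumulated state with reset(). A minimal sketch using Accuracy (the prediction and label values below are illustrative only):

    >>> import mxnet as mx
    >>> predicts = [mx.nd.array([[0.3, 0.7], [0., 1.], [0.4, 0.6]])]
    >>> labels = [mx.nd.array([0, 1, 1])]
    >>> acc = mx.metric.Accuracy()
    >>> acc.update(labels=labels, preds=predicts)
    >>> print(acc.get())
    ('accuracy', 0.6666666666666666)
    >>> acc.reset()   # clear the running counts before the next evaluation pass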

Metrics

Accuracy([axis, name, output_names, label_names])
    Computes accuracy classification score.

Caffe([name, output_names, label_names])
    Dummy metric for caffe criterions.

CompositeEvalMetric([metrics, name, …])
    Manages multiple evaluation metrics.

CrossEntropy([eps, name, output_names, …])
    Computes Cross Entropy loss.

CustomMetric(feval[, name, …])
    Computes a customized evaluation metric.

EvalMetric(name[, output_names, label_names])
    Base class for all evaluation metrics.

F1([name, output_names, label_names, average])
    Computes the F1 score of a binary classification problem.

Loss([name, output_names, label_names])
    Dummy metric for directly printing loss.

MAE([name, output_names, label_names])
    Computes Mean Absolute Error (MAE) loss.

MCC([name, output_names, label_names, average])
    Computes the Matthews Correlation Coefficient of a binary classification problem.

MSE([name, output_names, label_names])
    Computes Mean Squared Error (MSE) loss.

NegativeLogLikelihood([eps, name, …])
    Computes the negative log-likelihood loss.

PearsonCorrelation([name, output_names, …])
    Computes Pearson correlation.

Perplexity(ignore_label[, axis, name, …])
    Computes perplexity.

RMSE([name, output_names, label_names])
    Computes Root Mean Squared Error (RMSE) loss.

TopKAccuracy([top_k, name, output_names, …])
    Computes top k predictions accuracy.

Torch([name, output_names, label_names])
    Dummy metric for torch criterions.
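
Several of the metrics above can be tracked together through CompositeEvalMetric, which forwards each update() call to every metric it holds. A short sketch combining Accuracy and F1 (again with illustrative data):

    >>> import mxnet as mx
    >>> predicts = [mx.nd.array([[0.3, 0.7], [0., 1.], [0.4, 0.6]])]
    >>> labels = [mx.nd.array([0, 1, 1])]
    >>> composite = mx.metric.CompositeEvalMetric()
    >>> composite.add(mx.metric.Accuracy())
    >>> composite.add(mx.metric.F1())
    >>> composite.update(labels=labels, preds=predicts)
    >>> print(composite.get())
    (['accuracy', 'f1'], [0.6666666666666666, 0.8])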

Helper functions

check_label_shapes(labels, preds[, wrap, shape])
    Helper function for checking the shapes of labels and predictions.

create(metric, *args, **kwargs)
    Creates an evaluation metric from a metric name, an EvalMetric instance, or a custom metric function.

np(numpy_feval[, name, allow_extra_outputs])
    Creates a custom evaluation metric that receives its inputs as numpy arrays.
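
Both helpers return ready-to-use EvalMetric instances. The sketch below assumes a hypothetical zero_one_loss function written only for illustration; create() also accepts a list of names, in which case it returns a CompositeEvalMetric:

    >>> import mxnet as mx
    >>> acc = mx.metric.create('acc')                   # look up a metric by its registered name
    >>> combo = mx.metric.create(['acc', 'f1', 'mse'])  # a CompositeEvalMetric over three metrics
    >>> def zero_one_loss(label, pred):
    ...     # label and pred arrive as numpy arrays; return the misclassification rate
    ...     return (label != pred.argmax(axis=1)).mean()
    >>> zero_one = mx.metric.np(zero_one_loss, name='zero_one_loss')
    >>> zero_one.update(labels=[mx.nd.array([0, 1, 1])],
    ...                 preds=[mx.nd.array([[0.3, 0.7], [0., 1.], [0.4, 0.6]])])
    >>> name, value = zero_one.get()   # running mean of zero_one_loss over all updates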