MCC

class mxnet.metric.MCC(name='mcc', output_names=None, label_names=None, average='macro')

Computes the Matthews Correlation Coefficient of a binary classification problem.

While slower to compute than F1, the MCC can give insight that F1 or Accuracy cannot. For instance, if the network always predicts the same result, the MCC will immediately show this. The MCC is also symmetric with respect to positive and negative categorization; however, the labels must contain both positive and negative examples, or it will always return 0. An MCC of 0 means uncorrelated, 1 completely correlated, and -1 negatively correlated.

\[\text{MCC} = \frac{ TP \times TN - FP \times FN } {\sqrt{ (TP + FP) ( TP + FN ) ( TN + FP ) ( TN + FN ) } }\]

where 0 terms in the denominator are replaced by 1.
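As a quick check of the formula, the MCC can be computed directly from the four confusion counts. The snippet below is a plain-Python sketch (the helper name matthews_cc is illustrative, not part of the MXNet API); skipping zero-valued sums in the denominator is one way to apply the replace-by-1 rule, and plugging in the counts from the example further down gives roughly 0.019.

import math

def matthews_cc(tp, tn, fp, fn):
    # Numerator of the formula above.
    numerator = tp * tn - fp * fn
    # Any sum in the denominator that is 0 is treated as 1, so a
    # degenerate confusion matrix does not divide by zero.
    denominator = 1.0
    for term in (tp + fp, tp + fn, tn + fp, tn + fn):
        if term != 0:
            denominator *= term
    return numerator / math.sqrt(denominator)

# The counts from the example below give roughly 0.019.
print(matthews_cc(tp=10000, tn=1, fp=1000, fn=1))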

Note

This version of MCC only supports binary classification.

Parameters:
  • name (str) – Name of this metric instance for display.
  • output_names (list of str, or None) – Names of the predictions that should be used when updating with update_dict. By default, all predictions are included.
  • label_names (list of str, or None) – Names of the labels that should be used when updating with update_dict. By default, all labels are included.
  • average (str, default 'macro') –
    Strategy to be used for aggregating across mini-batches.
    "macro": average the MCC computed for each batch. "micro": compute a single MCC across all batches.
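Assuming the same mx.metric.MCC API, a minimal sketch of how the two strategies differ once update() is called more than once (the batch arrays below are made up for illustration):

import mxnet as mx

# Two made-up mini-batches of softmax outputs and binary labels.
batches = [
    (mx.nd.array([[.9, .1], [.8, .2], [.3, .7]]), mx.nd.array([0., 1., 1.])),
    (mx.nd.array([[.2, .8], [.6, .4]]), mx.nd.array([0., 1.])),
]

macro_mcc = mx.metric.MCC(average='macro')  # averages the per-batch MCCs
micro_mcc = mx.metric.MCC(average='micro')  # pools TP/TN/FP/FN over all batches

for preds, labels in batches:
    macro_mcc.update(labels=[labels], preds=[preds])
    micro_mcc.update(labels=[labels], preds=[preds])

print(macro_mcc.get())  # ('mcc', mean of the two per-batch values)
print(micro_mcc.get())  # ('mcc', single value from the pooled counts)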

Examples

>>> # In this example the network almost always predicts positive
>>> import mxnet as mx
>>> false_positives = 1000
>>> false_negatives = 1
>>> true_positives = 10000
>>> true_negatives = 1
>>> predicts = [mx.nd.array(
...     [[.3, .7]] * false_positives +
...     [[.7, .3]] * true_negatives +
...     [[.7, .3]] * false_negatives +
...     [[.3, .7]] * true_positives
... )]
>>> labels = [mx.nd.array(
...     [0.] * (false_positives + true_negatives) +
...     [1.] * (false_negatives + true_positives)
... )]
>>> f1 = mx.metric.F1()
>>> f1.update(preds=predicts, labels=labels)
>>> mcc = mx.metric.MCC()
>>> mcc.update(preds=predicts, labels=labels)
>>> print(f1.get())
('f1', 0.95233560306652054)
>>> print(mcc.get())
('mcc', 0.01917751877733392)
__init__(name='mcc', output_names=None, label_names=None, average='macro')

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__([name, output_names, label_names, …]) Initialize self.
get() Gets the current evaluation result.
get_config() Save configurations of metric.
get_name_value() Returns zipped name and value pairs.
reset() Resets the internal evaluation result to initial state.
update(labels, preds) Updates the internal evaluation result.
update_dict(label, pred) Update the internal evaluation with named label and pred
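
A hedged sketch of a typical lifecycle for these methods, using made-up prediction and label tensors: reset() at the start of each epoch, update() once per mini-batch, then get() or get_name_value() to read the result.

import mxnet as mx

mcc = mx.metric.MCC()

for epoch in range(2):
    mcc.reset()  # start every epoch with a clean confusion matrix
    # Stand-in for a real inference loop over mini-batches.
    for preds, labels in [
        (mx.nd.array([[.1, .9], [.8, .2]]), mx.nd.array([1., 0.])),
        (mx.nd.array([[.4, .6], [.7, .3]]), mx.nd.array([1., 1.])),
    ]:
        mcc.update(labels=[labels], preds=[preds])
    print(epoch, mcc.get_name_value())  # [('mcc', ...)]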