F1

class mxnet.metric.F1(name='f1', output_names=None, label_names=None, average='macro')[source]

Computes the F1 score of a binary classification problem.

The F1 score is the harmonic mean of precision and recall, where the best value is 1.0 and the worst value is 0.0. The formula for the F1 score is:

F1 = 2 * (precision * recall) / (precision + recall)

The formulas for precision and recall are:

precision = true_positives / (true_positives + false_positives)
recall    = true_positives / (true_positives + false_negatives)
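
For instance, with 2 true positives, 1 false positive, and 0 false negatives (the counts that arise in the example further below), a quick hand check of these formulas in plain Python gives:

>>> tp, fp, fn = 2, 1, 0
>>> precision = tp / (tp + fp)   # 2/3
>>> recall = tp / (tp + fn)      # 1.0
>>> round(2 * (precision * recall) / (precision + recall), 4)
0.8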

Note

This F1 score only supports binary classification.

Parameters:
  • name (str) – Name of this metric instance for display.
  • output_names (list of str, or None) – Names of the predictions that should be used when updating with update_dict. By default, all predictions are included.
  • label_names (list of str, or None) – Names of the labels that should be used when updating with update_dict. By default, all labels are included.
  • average (str, default 'macro') –
    Strategy to be used for aggregating across mini-batches, as sketched below.
    "macro": compute the F1 score for each mini-batch and average those scores. "micro": pool the true positive, false positive, and false negative counts across all mini-batches and compute a single F1 score from the pooled counts.
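
The following sketch illustrates the difference between the two strategies; the batch contents are illustrative, and it assumes mxnet has been imported as mx (as in the example further down):

>>> batches = [
...     (mx.nd.array([[0.2, 0.8], [0.9, 0.1]]), mx.nd.array([1., 0.])),  # batch F1 = 1.0
...     (mx.nd.array([[0.3, 0.7], [0.1, 0.9]]), mx.nd.array([0., 1.]))   # batch F1 = 2/3
... ]
>>> macro = mx.metric.F1(average='macro')
>>> micro = mx.metric.F1(average='micro')
>>> for preds, labels in batches:
...     macro.update(labels=[labels], preds=[preds])
...     micro.update(labels=[labels], preds=[preds])
>>> round(macro.get()[1], 4)   # mean of per-batch scores: (1.0 + 2/3) / 2
0.8333
>>> round(micro.get()[1], 4)   # pooled counts: TP=2, FP=1, FN=0
0.8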

Examples

>>> predicts = [mx.nd.array([[0.3, 0.7], [0., 1.], [0.4, 0.6]])]
>>> labels   = [mx.nd.array([0., 1., 1.])]
>>> f1 = mx.metric.F1()
>>> f1.update(preds=predicts, labels=labels)
>>> print(f1.get())
('f1', 0.8)
__init__(name='f1', output_names=None, label_names=None, average='macro')[source]

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__([name, output_names, label_names, …]) Initialize self.
get() Gets the current evaluation result.
get_config() Save configurations of metric.
get_name_value() Returns zipped name and value pairs.
reset() Resets the internal evaluation result to initial state.
update(labels, preds) Updates the internal evaluation result.
update_dict(label, pred) Updates the internal evaluation result with named labels and predictions.
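
A sketch of a typical lifecycle combining these methods (the names 'out' and 'lbl' and the batches are illustrative, not fixed by the API):

>>> f1 = mx.metric.F1(output_names=['out'], label_names=['lbl'])
>>> f1.reset()                                           # start from a clean state
>>> for preds, labels in batches:                        # `batches` as in the sketch above
...     f1.update_dict({'lbl': labels}, {'out': preds})  # dicts keyed by the configured names
>>> name, value = f1.get()                               # current evaluation result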