Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/04/17 13:13:44 UTC

[GitHub] [incubator-mxnet] acphile commented on a change in pull request #18083: Changes to mxnet.metric

acphile commented on a change in pull request #18083: Changes to mxnet.metric
URL: https://github.com/apache/incubator-mxnet/pull/18083#discussion_r410213003
 
 

 ##########
 File path: python/mxnet/gluon/metric.py
 ##########
 @@ -777,16 +815,113 @@ class F1(EvalMetric):
     --------
     >>> predicts = [mx.nd.array([[0.3, 0.7], [0., 1.], [0.4, 0.6]])]
     >>> labels   = [mx.nd.array([0., 1., 1.])]
-    >>> f1 = mx.metric.F1()
+    >>> f1 = mx.gluon.metric.F1()
     >>> f1.update(preds = predicts, labels = labels)
     >>> print(f1.get())
     ('f1', 0.8)
     """
 
     def __init__(self, name='f1',
-                 output_names=None, label_names=None, average="macro"):
+                 output_names=None, label_names=None, threshold=0.5, average="macro"):
+        self.average = average
+        self.metrics = _BinaryClassificationMetrics(threshold=threshold)
+        EvalMetric.__init__(self, name=name,
+                            output_names=output_names, label_names=label_names,
+                            has_global_stats=True)
+
+    def update(self, labels, preds):
+        """Updates the internal evaluation result.
+
+        Parameters
+        ----------
+        labels : list of `NDArray`
+            The labels of the data.
+
+        preds : list of `NDArray`
+            Predicted values.
+        """
+        labels, preds = check_label_shapes(labels, preds, True)
+
+        for label, pred in zip(labels, preds):
+            self.metrics.update_binary_stats(label, pred)
+
+        if self.average == "macro":
 
 Review comment:
   Averaging F1 per batch here already existed in metric.py before my changes; the same per-batch averaging also appears in MAE, MSE, RMSE, and PearsonCorrelation. Should I remove it from all of them accordingly? In sklearn, `average="macro"` means averaging the per-class F1 scores of a multiclass/multilabel problem, whereas our F1 currently supports only binary classification. I think I need to extend F1 to cover that case.
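   For clarity, the two meanings of "macro" discussed above can be contrasted with a small sketch. This is not the mxnet.gluon.metric implementation; the helper name `binary_f1` and the toy batch counts are hypothetical and only illustrate the bookkeeping difference between averaging per-batch scores and computing one score from globally accumulated counts (sklearn's `average="macro"` instead averages per-class scores of a multiclass problem).

   ```python
   def binary_f1(tp, fp, fn):
       """Plain binary F1 from accumulated true/false positive/negative counts."""
       if tp == 0:
           return 0.0
       precision = tp / (tp + fp)
       recall = tp / (tp + fn)
       return 2 * precision * recall / (precision + recall)

   # Hypothetical per-batch counts, only for illustration.
   batches = [
       {"tp": 2, "fp": 1, "fn": 0},  # batch 1
       {"tp": 1, "fp": 0, "fn": 2},  # batch 2
   ]

   # Per-batch averaging (the current metric.py sense of "macro"):
   # compute F1 for each batch, then average the batch scores.
   per_batch = [binary_f1(b["tp"], b["fp"], b["fn"]) for b in batches]
   macro_over_batches = sum(per_batch) / len(per_batch)   # (0.8 + 0.5) / 2 = 0.65

   # Global computation: accumulate counts over all batches first,
   # then compute a single F1 at the end.
   tp = sum(b["tp"] for b in batches)
   fp = sum(b["fp"] for b in batches)
   fn = sum(b["fn"] for b in batches)
   global_f1 = binary_f1(tp, fp, fn)                       # ~0.667

   print(macro_over_batches, global_f1)
   ```

   The two numbers generally differ, which is why it matters whether the per-batch averaging is kept in F1 (and MAE, MSE, RMSE, PearsonCorrelation) or removed in favour of a single global computation.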
