Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/10/04 07:09:33 UTC

[GitHub] [incubator-mxnet] RuRo commented on issue #9582: Misleading calculation of mxnet.metric.Accuracy

URL: https://github.com/apache/incubator-mxnet/issues/9582#issuecomment-538273489
 
 
   Why was this issue closed?
   
   The behavior of `Accuracy.update` is still wrong for one-hot labels. It still doesn't raise any error/warning and just silently gives the wrong values. The current documentation for the `Accuracy` class doesn't mention whether preds/labels can be one-hot vectors or class indices.
   
   The docstring for the `update` method does mention that labels should contain "class indices as values", but the way it's worded doesn't strongly imply that they *can't* be one-hot vectors. Given that `preds` **does** accept a probability vector, it's quite a reasonable assumption that `labels` would too.
   
   Also, I think the docstring for the `update` method doesn't actually get rendered into the current [web docs](https://mxnet.incubator.apache.org/api/python/docs/api/gluon-related/_autogen/mxnet.metric.Accuracy.html#mxnet.metric.Accuracy). At least I can't find it anywhere.
   
   I don't see what the "use case" for the current behavior is. For example, if `preds.shape == labels.shape == (32, 10)`, the current implementation just truncates both `preds` and `labels` to integers and compares them elementwise. Why would that be useful?
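
To make the failure mode concrete, here is a minimal NumPy sketch of the update logic described above. This is an illustrative simplification of what the comment attributes to `mxnet.metric.Accuracy.update`, not the actual MXNet source; the function name and shapes are assumptions for the example.

```python
import numpy as np

def accuracy_update(labels, preds):
    # Sketch (not the real MXNet code): if preds carries an extra class
    # axis, reduce it with argmax ...
    if preds.ndim != labels.ndim:
        preds = np.argmax(preds, axis=-1)
    # ... otherwise both arrays are just cast to int and compared
    # elementwise, which is where one-hot labels silently go wrong.
    return float(np.mean(preds.astype('int32') == labels.astype('int32')))

# Index labels: behaves as expected.
preds = np.array([[0.1, 0.9], [0.8, 0.2]])   # predicted classes: 1, 0
labels_idx = np.array([1, 0])
print(accuracy_update(labels_idx, preds))     # 1.0

# One-hot labels with preds.shape == labels.shape: probabilities are
# truncated to 0 and compared against the 0/1 entries of the one-hot
# rows, producing a meaningless score with no error or warning.
labels_onehot = np.array([[0., 1.], [1., 0.]])
print(accuracy_update(labels_onehot, preds))  # 0.5
```

With index labels the argmax branch fires and the result is the expected 1.0; with one-hot labels of the same shape, both perfect predictions are scored at 0.5, which is the silent wrong answer the comment is objecting to.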

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services