Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/08/15 21:30:50 UTC

[GitHub] muralibalki opened a new issue #12185: from_logits definition seems different from what is expected?

URL: https://github.com/apache/incubator-mxnet/issues/12185
 
 
   In loss functions like SoftmaxCrossEntropyLoss, the definition of from_logits is:
   from_logits (bool, default False) – Whether input is a log probability (usually from log_softmax) instead of unnormalized numbers.
   
   It seems MXNet treats the output of log_softmax (i.e. log probabilities) as logits, whereas others, such as TensorFlow, use "logits" for any unscaled log probabilities (the raw, pre-softmax scores).
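
   For instance, here is a minimal sketch of the Gluon behaviour (the numbers are arbitrary and only for illustration):

   ```python
   import mxnet as mx
   from mxnet.gluon.loss import SoftmaxCrossEntropyLoss

   scores = mx.nd.array([[2.0, 1.0, 0.1]])  # raw, unnormalized scores
   label = mx.nd.array([0])                 # sparse class index

   # Default (from_logits=False): pass raw scores; the loss applies log_softmax internally.
   loss_a = SoftmaxCrossEntropyLoss()(scores, label)

   # from_logits=True: the input must already be log probabilities, e.g. from log_softmax.
   log_probs = mx.nd.log_softmax(scores)
   loss_b = SoftmaxCrossEntropyLoss(from_logits=True)(log_probs, label)

   print(loss_a, loss_b)  # the two values should match
   ```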
   
   For example, in the TensorFlow/Keras API:
   https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits_v2
   from_logits: Boolean, whether output is the result of a softmax, or is a tensor of logits.
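
   By contrast, a sketch of the Keras side (using tf.keras.backend.categorical_crossentropy; the numbers are again arbitrary), where from_logits=True means the input is the raw, pre-softmax scores:

   ```python
   import tensorflow as tf

   scores = tf.constant([[2.0, 1.0, 0.1]])  # raw, pre-softmax scores ("logits" in TF terms)
   labels = tf.constant([[1.0, 0.0, 0.0]])  # one-hot target

   # from_logits=True: pass raw scores; softmax is applied inside the loss.
   loss_a = tf.keras.backend.categorical_crossentropy(labels, scores, from_logits=True)

   # from_logits=False (the default): pass probabilities, i.e. the output of a softmax.
   probs = tf.nn.softmax(scores)
   loss_b = tf.keras.backend.categorical_crossentropy(labels, probs)

   print(loss_a, loss_b)  # with eager execution, the two values should match
   ```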
   
   I think MXNet's usage is consistent with the definition of a logit:
   https://en.wikipedia.org/wiki/Logit
   
   But highlighting the difference in the documentation would still be useful: I had a bit of trouble converting a model from Keras to Gluon because of this.
   
   
