Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/07/13 02:00:44 UTC

[GitHub] [incubator-mxnet] braindotai removed a comment on issue #12185: from_logits definition seems different from what is expected?

URL: https://github.com/apache/incubator-mxnet/issues/12185#issuecomment-510939653
 
 
   [Check this out](https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits)
   Here it says "logits: Per-label activations, typically a linear output", which in MXNet terms means `nd.dot(x, w) + b`.
   There are actually two notions of logits: the first is simply the output of a linear layer (as mentioned in the link above), and the second is unscaled log probabilities. That is why TensorFlow provides two versions of `softmax_cross_entropy_with_logits`.
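   
   To make the distinction concrete, here is a minimal sketch (the shapes and labels are made up) showing that Gluon's default `from_logits=False` expects raw linear scores, while `from_logits=True` expects log probabilities, so the two calls should agree:
   ```python
   import mxnet as mx
   from mxnet import nd, gluon
   
   scores = nd.random.normal(shape=(4, 10))   # linear outputs: nd.dot(x, w) + b
   labels = nd.array([1, 0, 3, 2])            # sparse class indices
   
   # Default: from_logits=False, so log_softmax is applied internally.
   loss_default = gluon.loss.SoftmaxCrossEntropyLoss()
   l1 = loss_default(scores, labels)
   
   # from_logits=True expects log probabilities, so apply log_softmax first.
   loss_from_logits = gluon.loss.SoftmaxCrossEntropyLoss(from_logits=True)
   l2 = loss_from_logits(nd.log_softmax(scores), labels)
   
   print(l1, l2)   # identical up to floating-point error
   ```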
   
   In MXNet, `mx.gluon.loss.SoftmaxCrossEntropyLoss` accepts the linear output (`nd.dot(x, w) + b`) as its prediction argument.
   You can check [here](https://gluon.mxnet.io/chapter02_supervised-learning/softmax-regression-gluon.html): the network is defined as `net = gluon.nn.Dense(num_outputs)` with no softmax layer,
   the loss as `gluon.loss.SoftmaxCrossEntropyLoss()`, and the loss is then computed as
   ```python
   output = net(data)                            # linear scores: nd.dot(x, w) + b
   loss = softmax_cross_entropy(output, label)   # softmax + cross-entropy applied internally
   ```
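   
   For completeness, a self-contained sketch of the tutorial's pattern (the MNIST-style shapes and the random batch are placeholders, not the tutorial's actual data pipeline):
   ```python
   import mxnet as mx
   from mxnet import nd, gluon, autograd
   
   num_inputs, num_outputs = 784, 10      # MNIST-style dimensions (assumed)
   net = gluon.nn.Dense(num_outputs)      # no softmax layer: the net emits linear scores
   net.initialize(mx.init.Normal(sigma=0.1))
   softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
   
   data = nd.random.normal(shape=(32, num_inputs))          # stand-in batch
   label = nd.array([i % num_outputs for i in range(32)])   # stand-in labels
   
   with autograd.record():
       output = net(data)                            # linear output
       loss = softmax_cross_entropy(output, label)   # per-sample loss
   loss.backward()
   ```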
