Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/02/07 14:25:35 UTC

[GitHub] fhieber opened a new issue #14088: Confusing documentation for `mx.sym.MakeLoss`

URL: https://github.com/apache/incubator-mxnet/issues/14088
 
 
   The [documentation of `mx.sym.MakeLoss`](http://mxnet.incubator.apache.org/api/python/symbol/symbol.html?highlight=makeloss#mxnet.symbol.MakeLoss) is highly confusing. As far as I understand, the only thing `MakeLoss` does is wrap an existing symbol and mark it as not requiring a head gradient when used in optimization. Furthermore, the output of a `forward()` call on a MakeLoss symbol appears to be the output of the wrapped symbol's `forward()` call. That is, MakeLoss simply passes its input data through in `forward()`.
   
   However, the documentation states the following:
   "The output of this function is the gradient of loss with respect to the input data."
   
   What does this mean? I read it as saying that the output of `forward()` is the same as the output of `backward()`, namely the gradient of the symbol that MakeLoss wraps. But this does not seem to be true.
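   
   To illustrate my reading of the behavior, here is a toy sketch in plain Python. This is not the actual MXNet implementation, and the class names (`SquareLoss`, `MakeLossLike`) are made up for illustration; it only shows the semantics I believe `MakeLoss` has: a pass-through in `forward()`, and an implicit head gradient of ones in `backward()`.
   
   ```python
   # Conceptual sketch (NOT the real MXNet code): a MakeLoss-style wrapper
   # passes the wrapped op's output through unchanged in forward(), and in
   # backward() starts backpropagation with an implicit head gradient of
   # ones instead of requiring one from a downstream layer.
   
   class SquareLoss:
       """Toy op: loss_i = x_i ** 2, elementwise over a list."""
       def forward(self, xs):
           self.xs = xs
           return [x * x for x in xs]
   
       def backward(self, head_grads):
           # d(x^2)/dx = 2x, scaled by whatever head gradient arrives.
           return [g * 2 * x for g, x in zip(head_grads, self.xs)]
   
   class MakeLossLike:
       """Wraps an op; forward() is a pass-through of the wrapped output."""
       def __init__(self, op):
           self.op = op
   
       def forward(self, xs):
           self.out = self.op.forward(xs)
           return self.out  # same values as the wrapped op's forward()
   
       def backward(self):
           # No head gradient required: implicitly use ones.
           return self.op.backward([1.0] * len(self.out))
   
   loss = MakeLossLike(SquareLoss())
   print(loss.forward([1.0, 2.0, 3.0]))  # pass-through: [1.0, 4.0, 9.0]
   print(loss.backward())                # gradient:     [2.0, 4.0, 6.0]
   ```
   
   Under this reading, the sentence quoted from the docs would describe `backward()`, not the forward output, which is why I find the current wording confusing.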

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services