Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2017/12/03 12:46:37 UTC

[GitHub] eldercrow opened a new issue #8925: Is nadam optimizer correctly implemented?

URL: https://github.com/apache/incubator-mxnet/issues/8925
 
 
   Correct me if I am wrong. 
   In python/mxnet/optimizer.py, line 1103:
   ``` python
           # preprocess grad
           # shouldn't it be grad = grad * self.rescale_grad + wd * weight?
           grad *= self.rescale_grad + wd * weight 
           if self.clip_gradient is not None:
               grad = clip(grad, -self.clip_gradient, self.clip_gradient)
   ```
   I believe the commented line in the code above is how all the other optimizers preprocess the gradient. Does nadam have a special update rule that I am missing, or is this a bug?
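   For reference, here is a minimal standalone sketch (plain NumPy rather than MXNet; the variable names mirror the snippet above but the values are made up for illustration) contrasting what the current line computes against what the commented line would compute:
   ``` python
   import numpy as np

   # Illustrative stand-ins for the optimizer state; names mirror the
   # snippet above, values are arbitrary.
   grad = np.array([1.0, -2.0, 3.0])
   weight = np.array([0.5, 0.5, 0.5])
   rescale_grad = 1.0 / 128  # e.g. 1 / batch_size
   wd = 1e-4

   # What the current line computes: wd * weight is folded into the
   # scale factor, so the decay contribution is grad * wd * weight.
   current = grad * (rescale_grad + wd * weight)

   # What the commented line would compute (the convention in the other
   # optimizers): weight decay is added after rescaling the gradient.
   proposed = grad * rescale_grad + wd * weight

   # The two differ by wd * weight * (grad - 1), i.e. they disagree
   # wherever grad != 1 (given wd * weight != 0).
   print(current)
   print(proposed)
   ```
   Note that under the current line the weight-decay term scales with the gradient itself, so a weight whose gradient is zero would receive no decay at all.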
