Posted to issues@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/10/15 07:37:53 UTC

[GitHub] [incubator-mxnet] gilbertfrancois commented on issue #10002: General support of OPs for second-order gradient

gilbertfrancois commented on issue #10002:
URL: https://github.com/apache/incubator-mxnet/issues/10002#issuecomment-708962453


   What is the current status of support for second-order derivatives in Gluon? I tried implementing the method from the paper [Improved Training of Wasserstein GANs](https://arxiv.org/pdf/1704.00028.pdf), but the training program raises an error when I add the gradient penalty to the loss function and run backpropagation. I noticed that with mxnet 1.7 it works for Dense layers without activation, but Conv2D and many other layers still seem to be unsupported. I saw a similar question in #5982, but that was around 3 years ago.
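
   For reference, here is a minimal sketch of the gradient-penalty computation I am trying to run (the layer choice, shapes, and variable names are only illustrative). It uses `autograd.grad` with `create_graph=True` to obtain a first-order gradient that can itself be differentiated; with a single Dense layer and no activation this runs on mxnet 1.7, but swapping in e.g. Conv2D triggers the error:

   ```python
   from mxnet import autograd, gluon, nd

   # Critic: a single Dense layer, the case that currently works.
   net = gluon.nn.Dense(1)
   net.initialize()

   # Stand-in for the interpolated samples used in WGAN-GP.
   x = nd.random.normal(shape=(8, 4))
   x.attach_grad()

   with autograd.record():
       y = net(x)
       # First-order gradient of the critic output w.r.t. its input,
       # recorded into the graph (create_graph=True) so that it can be
       # differentiated again by the backward pass below.
       (x_grad,) = autograd.grad(y, [x], create_graph=True, retain_graph=True)
       # WGAN-GP gradient penalty: (||grad||_2 - 1)^2, averaged over the batch.
       penalty = ((x_grad.norm(axis=1) - 1) ** 2).mean()

   # Second-order backward pass; this is the step that fails once the
   # network contains a layer without second-order gradient support.
   penalty.backward()
   ```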
   
   Are there plans to add second-order derivative support for e.g. gluon.nn.Conv2D, gluon.nn.BatchNorm, gluon.nn.Activation, and gluon.nn.LeakyReLU?

