Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/05/18 21:41:37 UTC

[GitHub] [incubator-mxnet] kshitij12345 commented on a change in pull request #14992: [MXNET-978] Support higher order gradient for `log`.

kshitij12345 commented on a change in pull request #14992: [MXNET-978] Support higher order gradient for `log`.
URL: https://github.com/apache/incubator-mxnet/pull/14992#discussion_r285357016
 
 

 ##########
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##########
 @@ -1016,7 +1016,28 @@ The storage type of ``log2`` output is always dense
 .set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseIn{"_backward_log2"});
 
 MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_log,
-                                                  unary_bwd<mshadow_op::log_grad>);
+                                                  unary_bwd<mshadow_op::log_grad>)
 
 Review comment:
   I don't know much about this library, but
   
   I believe it would be better to define gradients explicitly for the existing backward operators, instead of expressing the first-order gradient as a differentiable graph (relying on the autograd machinery), at least for ops whose backward pass is non-trivial. That would let us keep using the existing optimised fused kernels and ensure there is no performance regression in the backward pass.
   
   Note: `log` is relatively trivial (its gradient is a single reciprocal), but we might see a performance regression for `sigmoid` if we rely on the autograd machinery instead of the existing `_backward_sigmoid`.
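   
   For illustration only (not part of this patch): once `_backward_log` is itself differentiable, the second derivative d^2/dx^2 log(x) = -1/x^2 should be reachable from the Python autograd API. A minimal sketch of such a check, assuming an MXNet build that already contains this change, might look like:
   
   ```python
   # Illustrative sketch, not part of the PR: numerically check the
   # second-order gradient of log against the analytic value -1/x^2.
   # Assumes an MXNet build in which _backward_log has a registered gradient.
   import mxnet as mx
   from mxnet import autograd, nd
   
   x = nd.array([0.5, 1.0, 2.0, 4.0])
   x.attach_grad()
   
   with autograd.record():
       y = nd.log(x)
       # First-order gradient, recorded so it can be differentiated again.
       x_grad = autograd.grad(y, [x], create_graph=True, retain_graph=True)[0]
   x_grad.backward()          # second backward pass populates x.grad
   
   print(x.grad)              # d^2/dx^2 log(x), evaluated elementwise at x
   print(-1.0 / (x * x))      # analytic expectation: -1/x^2
   ```
   
   Whether that second backward pass runs a hand-written kernel or a graph built by the autograd machinery is exactly the performance trade-off raised above.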

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services