Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/07/23 18:00:09 UTC

[GitHub] [incubator-mxnet] anirudhacharya edited a comment on issue #11865: attach_grad of intermediate variables causes the gradient graph to be lost

URL: https://github.com/apache/incubator-mxnet/issues/11865#issuecomment-514316936
 
 
   Here is another use case where using `attach_grad()` with intermediate variables gives erroneous results -
   
   With the following example, I would expect `x.grad` to be `[10, 24, 42, 64]`, but using head gradients and the chain rule as per the [autograd documentation](https://www.d2l.ai/chapter_crashcourse/autograd.html#head-gradients), I get `[5, 12, 21, 32]` -
   
   ```python
   from mxnet import ndarray as nd
   from mxnet import autograd as ag
   
   x = nd.array([1, 2, 3, 4])
   x.attach_grad()
   y = nd.array([5, 6, 7, 8])
   y.attach_grad()
   
   ag.set_recording(True)
   u = x * y
   v = u.detach()      # cut the link between u and the rest of the graph
   v.attach_grad()     # treat v as a new leaf so it can receive a head gradient
   z = v * x
   ag.set_recording(False)
   
   z.backward()        # gradients of z w.r.t. x and v, with v treated as a constant
   u.backward(v.grad)  # back-propagate through u, using v.grad as the head gradient
   print(x.grad, y.grad)
   ```
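   
   If I understand the default behaviour correctly, the second `backward()` call overwrites `x.grad` instead of adding to it (`attach_grad()` defaults to `grad_req='write'`), so only one of the two contributions to `x.grad` survives, and in this example each contribution happens to equal `[5, 12, 21, 32]`. Below is a minimal sketch of what I would expect, assuming `grad_req='add'` is used so that the two backward passes accumulate into `x.grad` -
   
   ```python
   from mxnet import ndarray as nd
   from mxnet import autograd as ag
   
   x = nd.array([1, 2, 3, 4])
   x.attach_grad(grad_req='add')   # accumulate across backward calls instead of overwriting
   y = nd.array([5, 6, 7, 8])
   y.attach_grad()
   
   ag.set_recording(True)
   u = x * y
   v = u.detach()
   v.attach_grad()
   z = v * x
   ag.set_recording(False)
   
   z.backward()           # adds the direct contribution (= v) into x.grad
   u.backward(v.grad)     # adds the contribution through u (= v.grad * y) into x.grad
   print(x.grad, y.grad)  # x.grad should now be [10, 24, 42, 64]
   ```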
   
   But when I do it without using head gradients, as follows, I get the correct gradients -
    
   ```python
   from mxnet import ndarray as nd
   from mxnet import autograd as ag
   
   x = nd.array([1, 2, 3, 4])
   x.attach_grad()
   y = nd.array([5, 6, 7, 8])
   y.attach_grad()
   
   ag.set_recording(True)
   u = x * y
   z = u * x           # same computation, but u stays attached to the graph
   ag.set_recording(False)
   
   z.backward()        # gradients flow through u back to x and y in a single pass
   print(x.grad, y.grad)
   ```
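   
   For reference, a quick analytic check (plain elementwise arithmetic, no autograd) of what `z = x * (x * y)` should give for both gradients -
   
   ```python
   from mxnet import ndarray as nd
   
   x = nd.array([1, 2, 3, 4])
   y = nd.array([5, 6, 7, 8])
   
   # z = x * (x * y) = x^2 * y elementwise, so:
   dz_dx = 2 * x * y   # expected x.grad -> [10, 24, 42, 64]
   dz_dy = x * x       # expected y.grad -> [ 1,  4,  9, 16]
   print(dz_dx, dz_dy)
   ```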
   
