Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/07/14 22:44:21 UTC

[GitHub] [incubator-tvm] huajsj commented on pull request #6049: [Pytorch] add operator copy_ support

huajsj commented on pull request #6049:
URL: https://github.com/apache/incubator-tvm/pull/6049#issuecomment-658449313


   Hi @t-vi, @masahi, @liangfu,
   
   Thanks for the review; my comments are below.
   
   Regards,
   Hua
   
   #1 About the use case and why copy_: as I understand it, copy_ is not just a shortcut for .to. The biggest difference between copy_ and .clone/.to is that with copy_ the in-place destination keeps its stride, storage, and size. In the following example (running environment: PyTorch 1.5), .clone and .to cause b to lose its stride/storage information:
        import numpy as np
        import torch

        a = torch.from_numpy(np.array((11.0, 12.0, 13.0)).astype('float32'))
        a = a.expand(2, 3)    # expanded view: shape (2, 3), stride (0, 1), shares storage
        b = torch.from_numpy(np.array((1.0, 2.0, 3.0)).astype('float32'))
        b = b.repeat(2, 1)    # materialized copy: shape (2, 3), contiguous stride (3, 1)
        b.copy_(a)            # in-place: b keeps its own storage and stride
        print(b.stride())     # (3, 1) -- b's original stride is preserved
        b = torch.clone(a)    # rebinds b to freshly allocated storage
        print(b.stride())
        b = a.to("cpu")       # a is already on CPU, so .to returns a itself
        print(b.stride())     # a's expanded stride, (0, 1)
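
   To make the storage point concrete, here is a minimal sketch (plain PyTorch semantics, not code from this PR or from TVM) showing that copy_ writes into the destination's existing storage, while rebinding the name to a clone does not touch it:

        import torch

        base = torch.zeros(6)
        b = base[:3]                 # a view sharing base's storage
        ptr = b.data_ptr()
        b.copy_(torch.ones(3))       # in-place: writes through the shared storage
        assert b.data_ptr() == ptr   # b still points at the same storage
        print(base)                  # tensor([1., 1., 1., 0., 0., 0.])
        b = torch.ones(3).clone()    # rebinds the name; base is untouched
        print(base)                  # still tensor([1., 1., 1., 0., 0., 0.])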
   
   #2 About the use case t.diag().copy_(a): it seems the PyTorch frontend currently does not support the diag operator, so we may not be able to test it together with copy_.
   
   #3 About the copy operator: as @t-vi mentioned, clone is the non-in-place counterpart.
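
   For reference, a minimal sketch of that pairing (standard PyTorch semantics, not code from this PR):

        import torch

        a = torch.tensor([1.0, 2.0, 3.0])
        b = torch.empty(3)
        b.copy_(a)     # in-place: fills b's existing storage with a's values
        c = a.clone()  # non-in-place: allocates new storage holding a copy of a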
   
   #4 I agree that clone does not capture the semantics of copy_, and I found some problems during testing; I will set this PR to WIP and push the fix and test cases later.
   
   
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org