Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/11/03 13:39:31 UTC

[GitHub] [incubator-tvm] altanh opened a new pull request #6827: [RELAY][GRAD] Fix first-order AD on tuple arguments

altanh opened a new pull request #6827:
URL: https://github.com/apache/incubator-tvm/pull/6827


   The first-order AD pass currently handles functions with tuple arguments incorrectly, in particular by trying to add tuples directly when summing gradients. Notably, this causes errors in the gradients of functions like `stack` that take a tuple of tensors. This PR lifts addition to work over tuples (as the higher-order AD already does).
   
   However, the higher-order AD currently does not support tuples in the top-level function, and I added an xfail test to show this. I'm not sure how hard it would be to change the higher-order code to support tuples at the top level, so maybe someone else can take a look.
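
   The tuple lifting described above can be sketched outside of TVM. The following is a hypothetical standalone C++ analogue (the `Grad`, `Leaf`, `Node`, and `AddGrad` names are illustrative, not TVM APIs): a gradient value is either a scalar standing in for a tensor or a tuple of gradients, and addition recurses over tuple fields, mirroring what `UpdateGrad` does for `TupleTypeNode` in the diff below.

```cpp
// Hypothetical sketch, not TVM code: lift addition from leaves (tensors,
// modeled here as doubles) to arbitrarily nested tuples of gradients.
#include <cassert>
#include <memory>
#include <utility>
#include <variant>
#include <vector>

struct Grad;
using GradPtr = std::shared_ptr<Grad>;

struct Grad {
  // A leaf holds a scalar standing in for a tensor; a node holds a tuple.
  std::variant<double, std::vector<GradPtr>> value;
};

GradPtr Leaf(double v) { return std::make_shared<Grad>(Grad{v}); }
GradPtr Node(std::vector<GradPtr> fields) {
  return std::make_shared<Grad>(Grad{std::move(fields)});
}

// Lifted addition: add leaves directly, recurse field-by-field over tuples.
// Adding two tuples as if they were tensors (the old first-order behavior)
// is exactly the bug this avoids.
GradPtr AddGrad(const GradPtr& a, const GradPtr& b) {
  if (std::holds_alternative<double>(a->value)) {
    return Leaf(std::get<double>(a->value) + std::get<double>(b->value));
  }
  const auto& fa = std::get<std::vector<GradPtr>>(a->value);
  const auto& fb = std::get<std::vector<GradPtr>>(b->value);
  assert(fa.size() == fb.size());  // well-typed gradients match in shape
  std::vector<GradPtr> out;
  for (size_t i = 0; i < fa.size(); ++i) {
    out.push_back(AddGrad(fa[i], fb[i]));
  }
  return Node(std::move(out));
}
```

   For example, summing the gradients of a `stack`-like operator that takes a tuple of two tensors would recurse into each field and add the leaves, rather than attempting a single tensor-level add on the tuple itself.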
   
   cc @MarisaKirisame @t-vi 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] jroesch merged pull request #6827: [RELAY][GRAD] Fix first-order AD on tuple arguments

Posted by GitBox <gi...@apache.org>.
jroesch merged pull request #6827:
URL: https://github.com/apache/incubator-tvm/pull/6827


   





[GitHub] [incubator-tvm] altanh commented on a change in pull request #6827: [RELAY][GRAD] Fix first-order AD on tuple arguments

Posted by GitBox <gi...@apache.org>.
altanh commented on a change in pull request #6827:
URL: https://github.com/apache/incubator-tvm/pull/6827#discussion_r516360197



##########
File path: src/relay/transforms/gradient.cc
##########
@@ -181,6 +181,22 @@ struct FirstOrderReverseAD : ExprFunctor<ADValue(const Expr&)> {
     return ret;
   }
 
+  Expr UpdateGrad(const Type& t, const Expr& arg, const Expr& grad, LetList* ll) {
+    if (t.as<TensorTypeNode>()) {
+      return ll->Push(Add(arg, grad));
+    } else if (auto* tt = t.as<TupleTypeNode>()) {
+      Array<Expr> updates;
+      for (size_t i = 0; i < tt->fields.size(); ++i) {
+        updates.push_back(this->UpdateGrad(tt->fields[i], ll->Push(GetField(arg, i)),
+                                           ll->Push(GetField(grad, i)), ll));
+      }
+      return ll->Push(Tuple(updates));
+    } else {
+      LOG(FATAL) << "unsupported arg type of operator: " << t;

Review comment:
       I agree on this, but it will probably need some refactoring; we might as well do it for the whole pass (first-order and higher-order). I think a separate PR would be ideal.







[GitHub] [incubator-tvm] jroesch commented on a change in pull request #6827: [RELAY][GRAD] Fix first-order AD on tuple arguments

Posted by GitBox <gi...@apache.org>.
jroesch commented on a change in pull request #6827:
URL: https://github.com/apache/incubator-tvm/pull/6827#discussion_r516331362



##########
File path: src/relay/transforms/gradient.cc
##########
@@ -181,6 +181,22 @@ struct FirstOrderReverseAD : ExprFunctor<ADValue(const Expr&)> {
     return ret;
   }
 
+  Expr UpdateGrad(const Type& t, const Expr& arg, const Expr& grad, LetList* ll) {
+    if (t.as<TensorTypeNode>()) {
+      return ll->Push(Add(arg, grad));
+    } else if (auto* tt = t.as<TupleTypeNode>()) {
+      Array<Expr> updates;
+      for (size_t i = 0; i < tt->fields.size(); ++i) {
+        updates.push_back(this->UpdateGrad(tt->fields[i], ll->Push(GetField(arg, i)),
+                                           ll->Push(GetField(grad, i)), ll));
+      }
+      return ll->Push(Tuple(updates));
+    } else {
+      LOG(FATAL) << "unsupported arg type of operator: " << t;

Review comment:
       Can we try to emit a proper diagnostic here? We could fold this into a broader effort to improve AD error reporting with diagnostics.



