Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/12/14 00:59:39 UTC

[GitHub] [tvm] AndrewZhaoLuo opened a new pull request #9735: [WIP] Add faster type inference

AndrewZhaoLuo opened a new pull request #9735:
URL: https://github.com/apache/tvm/pull/9735


   TODO


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] AndrewZhaoLuo edited a comment on pull request #9735: [AMP][Pass][Typing] Add faster type inference

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo edited a comment on pull request #9735:
URL: https://github.com/apache/tvm/pull/9735#issuecomment-996973340


   Was trying to play around with replacing some type inference in the `dynamic_to_static` pass and ran into some small errors related to `FunctionNode`s, so I'm going to add a few basic tests





[GitHub] [tvm] AndrewZhaoLuo commented on pull request #9735: [AMP][Pass][Typing] Add faster type inference

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on pull request #9735:
URL: https://github.com/apache/tvm/pull/9735#issuecomment-997080358


   Added (or rather replaced) some tests. PTAL @mbs-octoml 





[GitHub] [tvm] AndrewZhaoLuo commented on pull request #9735: [AMP][Pass][Typing] Add faster type inference

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on pull request #9735:
URL: https://github.com/apache/tvm/pull/9735#issuecomment-1009191533


   @FrozenGene ah yes, the type inference will work, but I need to think about how to handle it properly for AMP. When I initially wrote AMP I ignored constructs not usually found in most real-life models. It is on the list of TODOs here: https://github.com/apache/tvm/issues/8296





[GitHub] [tvm] AndrewZhaoLuo commented on pull request #9735: [AMP][Pass][Typing] Add faster type inference

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on pull request #9735:
URL: https://github.com/apache/tvm/pull/9735#issuecomment-995097394


   This is now ready for review





[GitHub] [tvm] FrozenGene commented on pull request #9735: [AMP][Pass][Typing] Add faster type inference

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on pull request #9735:
URL: https://github.com/apache/tvm/pull/9735#issuecomment-1007990697


   > > @AndrewZhaoLuo Sorry for the late reply. Does this help us to solve the ADT problem in our MixedPrecision? Let us imagine we have one `if` node in our relay graph; `if` will be converted into the two subgraphs mentioned by you in this PR. For example:
   > > ```python
   > > fn main():
   > >     let %1 = xxx;
   > >     let %2 = if (%1) {
   > >     let %3: = @func___inference_a(%4, %5, %6) 
   > >   } else {
   > >     let %7: = @func___inference_b(%8, %9)
   > >   };  
   > > ```
   > >
   > > Then we have two subgraphs, `func___inference_a` and `func___inference_b`. Does this help us to make type inference work correctly for our two subgraphs? As I see, you have supported `GlobalVarNode`.
   > 
   > @FrozenGene I'm not sure I understand the concern 😅. Global var nodes are just used to reference function calls, right? These functions have a known type ahead of time, right?
   
   @AndrewZhaoLuo Yes. In fact I saw your PR supports `GlobalVarNode`, so I thought you would leverage it to solve this TODO: https://github.com/apache/tvm/blob/main/src/relay/transforms/to_mixed_precision.cc#L297 





[GitHub] [tvm] AndrewZhaoLuo commented on pull request #9735: [AMP][Pass][Typing] Add faster type inference

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on pull request #9735:
URL: https://github.com/apache/tvm/pull/9735#issuecomment-995061866


   Discussed with @jroesch and @mbs-octoml; the main changes we want are to rename "Fast" --> "Local" and to better document the pre-conditions.





[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #9735: [AMP][Pass][Typing] Add faster type inference

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on a change in pull request #9735:
URL: https://github.com/apache/tvm/pull/9735#discussion_r771653869



##########
File path: src/relay/transforms/to_mixed_precision.cc
##########
@@ -381,6 +381,18 @@ class MixedPrecisionPass : public MixedModeMutator {
     return Call(cur_op, new_args, pre_call_node->attrs, new_arg_types, pre_call_node->span);
   }
 
+  Expr Rewrite_(const TupleGetItemNode* pre, const Expr& post) {
+    // The old checked type in the expression may not be valid so clear it
+    post->checked_type_ = Type(nullptr);

Review comment:
       https://github.com/apache/tvm/blob/main/src/relay/ir/expr_functor.cc#L248
   
   Here is the behavior for generating post; there is some copy-on-write stuff whose full mechanics I don't quite understand, so 🤷
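
   [Editor's note: the copy-on-write hazard discussed in this thread can be sketched in a few lines of plain Python. This is a hypothetical toy `Node`, not TVM's actual `ExprMutator` or `RelayExpr` machinery: a memoized type cache survives an in-place rewrite unless it is explicitly cleared, which is why the `post->checked_type_ = Type(nullptr)` line in the hunk above is needed.]

   ```python
   from dataclasses import dataclass
   from typing import Optional

   @dataclass
   class Node:
       """Toy IR node with a lazily cached type, mimicking checked_type_."""
       dtype: str
       checked_type_: Optional[str] = None

   def infer_type(node: Node) -> str:
       # Memoized type inference: trust the cache when it is populated.
       if node.checked_type_ is None:
           node.checked_type_ = node.dtype
       return node.checked_type_

   def rewrite_to_fp16(node: Node) -> Node:
       # A copy-on-write style rewrite that mutates dtype in place but
       # does not invalidate the cached type.
       node.dtype = "float16"
       return node

   n = Node(dtype="float32")
   infer_type(n)                    # populates the cache with "float32"
   rewritten = rewrite_to_fp16(n)   # reuses the same node object
   stale = infer_type(rewritten)    # stale: cache still says "float32"

   # The fix discussed in the review: clear the cache on rewrite.
   rewritten.checked_type_ = None
   fresh = infer_type(rewritten)    # now reflects "float16"
   ```

   The usage mirrors the review finding: the mutator reuses the node, so the cached type must be nulled out by hand.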

##########
File path: src/relay/transforms/to_mixed_precision.cc
##########
@@ -176,13 +176,13 @@ class MixedPrecisionPass : public MixedModeMutator {
   }
 
   Type GetType(const Expr& expr) const {
-    auto mod = IRModule::FromExpr(expr);
-    mod = transform::InferType()(mod);
-    if (expr.as<FunctionNode>()) {
-      return mod->Lookup("main")->checked_type();
-    } else {
-      return mod->Lookup("main").as<FunctionNode>()->body->checked_type();
+    Type checked_type = expr->checked_type_;
+    if (checked_type.defined()) {
+      return checked_type;

Review comment:
       Done







[GitHub] [tvm] mbs-octoml commented on a change in pull request #9735: [AMP][Pass][Typing] Add faster type inference

Posted by GitBox <gi...@apache.org>.
mbs-octoml commented on a change in pull request #9735:
URL: https://github.com/apache/tvm/pull/9735#discussion_r771001558



##########
File path: src/relay/transforms/to_mixed_precision.cc
##########
@@ -381,6 +381,18 @@ class MixedPrecisionPass : public MixedModeMutator {
     return Call(cur_op, new_args, pre_call_node->attrs, new_arg_types, pre_call_node->span);
   }
 
+  Expr Rewrite_(const TupleGetItemNode* pre, const Expr& post) {
+    // The old checked type in the expression may not be valid so clear it
+    post->checked_type_ = Type(nullptr);

Review comment:
       am I missing something, or will checked_type_ = null iff some sub-expression of post has been rewritten and thus its type has changed?
   i.e. checked_type_ is non-null only if pre == post.get()??
   

##########
File path: src/relay/transforms/to_mixed_precision.cc
##########
@@ -176,13 +176,13 @@ class MixedPrecisionPass : public MixedModeMutator {
   }
 
   Type GetType(const Expr& expr) const {
-    auto mod = IRModule::FromExpr(expr);
-    mod = transform::InferType()(mod);
-    if (expr.as<FunctionNode>()) {
-      return mod->Lookup("main")->checked_type();
-    } else {
-      return mod->Lookup("main").as<FunctionNode>()->body->checked_type();
+    Type checked_type = expr->checked_type_;
+    if (checked_type.defined()) {
+      return checked_type;

Review comment:
       // The expression has not been changed AND its existing type
   // is known to still be valid. (See the special handling for tuples etc.
   // below for where we null out checked_type_ when we cannot be
   // sure it is still valid.)
   
   (though see my comment below)
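
   [Editor's note: the revised `GetType` in the hunk above is essentially a memoization fast path: trust the cached type when it is defined, and only fall back to running inference otherwise. A hypothetical Python rendering of that control flow, using toy objects rather than TVM's API:]

   ```python
   class ToyExpr:
       """Toy stand-in for a Relay expression with a cached type."""
       def __init__(self, cached_type=None):
           self.checked_type_ = cached_type

   def get_type(expr, run_inference):
       # Fast path: the cached type is trusted when present.
       if expr.checked_type_ is not None:
           return expr.checked_type_
       # Slow path: fall back to running (local) type inference.
       return run_inference(expr)

   inference_calls = []
   def fallback(expr):
       # Stand-in for the expensive InferType pass; records each invocation.
       inference_calls.append(expr)
       return "float32"

   hit = get_type(ToyExpr("float16"), fallback)   # cache hit, no inference
   miss = get_type(ToyExpr(), fallback)           # cache miss runs inference
   ```

   The speedup in the PR comes from this shape: the expensive fallback runs only for expressions whose cached type was cleared or never set.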

##########
File path: src/relay/transforms/type_infer.cc
##########
@@ -824,8 +824,107 @@ void AddGlobalTypes(IRModule mod) {
   }
 }
 
+class SameTypedSubgraphExtractor : public ExprMutator {
+  /*

Review comment:
       nit: Returns the largest sub-graph whose inner nodes need types and whose leaves are vars standing in
   for already-typed sub-expressions.

##########
File path: src/relay/transforms/type_infer.cc
##########
@@ -824,8 +824,107 @@ void AddGlobalTypes(IRModule mod) {
   }
 }
 
+class SameTypedSubgraphExtractor : public ExprMutator {
+  /*

Review comment:
       micro nit: move to before the class, use /*! etc.

##########
File path: src/relay/transforms/type_infer.cc
##########
@@ -824,8 +824,107 @@ void AddGlobalTypes(IRModule mod) {
   }
 }
 
+class SameTypedSubgraphExtractor : public ExprMutator {
+  /*
+  Creates a small subgraph with the same type as the input expression. We attempt to do
+  this by relying on existing type information already populated in the expressions the
+  target node depends on. If a node with populated type information is found we simply
+  replace it with a variable of that type. In this way, we can avoid copying and
+  recursing through most of the expression graph. Note, this assumes that the currently
+  populated type information is correct!
+
+  ExprMutator is sufficient over MixedModeMutator since we will not recurse much.
+  */
+
+  Expr VisitExpr_(const VarNode* op) { return Var(op->vid, op->type_annotation, op->span); }
+  Expr VisitExpr_(const ConstantNode* op) { return Constant(op->data, op->span); }
+  Expr VisitExpr_(const GlobalVarNode* op) { return GlobalVar(op->name_hint); }
+  Expr VisitExpr_(const OpNode* op) { return Op(GetRef<Op>(op)); }
+  Expr VisitExpr_(const TupleNode* op) {
+    return Tuple(get_analogous_expression(op->fields), op->span);
+  }
+  Expr VisitExpr_(const FunctionNode* op) {
+    // Here will be the only VisitExpr
+    return Function(op->params, get_analogous_expression(op->body), op->ret_type, op->type_params,
+                    op->attrs, op->span);
+  }
+  Expr VisitExpr_(const CallNode* op) {
+    return Call(op->op, get_analogous_expression(op->args), op->attrs, op->type_args, op->span);
+  }
+  Expr VisitExpr_(const LetNode* op) {
+    return Let(op->var, get_analogous_expression(op->value), get_analogous_expression(op->body),
+               op->span);
+  }
+  Expr VisitExpr_(const IfNode* op) {
+    return If(get_analogous_expression(op->cond), get_analogous_expression(op->true_branch),
+              get_analogous_expression(op->false_branch), op->span);
+  }
+  Expr VisitExpr_(const TupleGetItemNode* op) {
+    return TupleGetItem(get_analogous_expression(op->tuple), op->index, op->span);
+  }
+  Expr VisitExpr_(const RefCreateNode* op) {
+    return RefCreate(get_analogous_expression(op->value), op->span);
+  }
+  Expr VisitExpr_(const RefReadNode* op) {
+    return RefRead(get_analogous_expression(op->ref), op->span);
+  }
+  Expr VisitExpr_(const RefWriteNode* op) {
+    return RefWrite(get_analogous_expression(op->ref), get_analogous_expression(op->value),
+                    op->span);
+  }
+  Expr VisitExpr_(const ConstructorNode* op) {
+    return Constructor(op->name_hint, op->inputs, op->belong_to);
+  }
+  Expr VisitExpr_(const MatchNode* op) {
+    return Match(get_analogous_expression(op->data), op->clauses, op->complete, op->span);
+  }
+
+ private:
+  Expr get_analogous_expression(const Expr& expr) {
+    // Replace the expression with a potentially simpler expression of the same type
+    if (!expr->checked_type_.defined()) {
+      return VisitExpr(expr);
+    }
+
+    return Var("dummy_var", expr->checked_type(), expr->span);

Review comment:
       // Since the expression already has a checked_type which we trust, we don't need
   // full type inference to enter it. So stub it out with a dummy var of the same type. 
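
   [Editor's note: the dummy-var stubbing described here can be illustrated on a toy expression tree. These are hypothetical Python classes, not TVM's Relay node types: any subexpression that already carries a trusted type is replaced by a fresh typed variable, so the extracted subgraph stays small no matter how deep the typed subtrees are.]

   ```python
   from dataclasses import dataclass, field
   from typing import List, Optional

   @dataclass
   class Expr:
       checked_type: Optional[str] = None

   @dataclass
   class Var(Expr):
       name: str = "dummy_var"

   @dataclass
   class Call(Expr):
       op: str = ""
       args: List[Expr] = field(default_factory=list)

   def analogous(expr: Expr) -> Expr:
       """Replace already-typed subtrees with typed dummy vars."""
       if expr.checked_type is not None:
           # The type is trusted, so a typed leaf stands in for the whole subtree.
           return Var(checked_type=expr.checked_type)
       if isinstance(expr, Call):
           return Call(op=expr.op, args=[analogous(a) for a in expr.args])
       return expr

   def size(expr: Expr) -> int:
       # Count nodes in the tree.
       if isinstance(expr, Call):
           return 1 + sum(size(a) for a in expr.args)
       return 1

   # A deep, fully typed subtree feeding an untyped `add` at the root.
   typed = Call(op="conv2d",
                args=[Call(op="relu", args=[Var(name="x")])],
                checked_type="Tensor[float32]")
   root = Call(op="add", args=[typed, typed])  # the root itself is untyped

   small = analogous(root)  # add(dummy, dummy): 3 nodes instead of 7
   ```

   Only the untyped frontier of the graph survives into `small`, which is why the extracted subgraph is cheap to type-check.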

##########
File path: src/relay/transforms/type_infer.cc
##########
@@ -824,8 +824,107 @@ void AddGlobalTypes(IRModule mod) {
   }
 }
 
+class SameTypedSubgraphExtractor : public ExprMutator {
+  /*
+  Creates a small subgraph with the same type as the input expression. We attempt to do
+  this by relying on existing type information already populated in the expressions the
+  target node depends on. If a node with populated type information is found we simply
+  replace it with a variable of that type. In this way, we can avoid copying and
+  recursing through most of the expression graph. Note, this assumes that the currently
+  populated type information is correct!
+
+  ExprMutator is sufficient over MixedModeMutator since we will not recurse much.
+  */
+
+  Expr VisitExpr_(const VarNode* op) { return Var(op->vid, op->type_annotation, op->span); }
+  Expr VisitExpr_(const ConstantNode* op) { return Constant(op->data, op->span); }
+  Expr VisitExpr_(const GlobalVarNode* op) { return GlobalVar(op->name_hint); }
+  Expr VisitExpr_(const OpNode* op) { return Op(GetRef<Op>(op)); }
+  Expr VisitExpr_(const TupleNode* op) {
+    return Tuple(get_analogous_expression(op->fields), op->span);
+  }
+  Expr VisitExpr_(const FunctionNode* op) {
+    // Here will be the only VisitExpr
+    return Function(op->params, get_analogous_expression(op->body), op->ret_type, op->type_params,
+                    op->attrs, op->span);
+  }
+  Expr VisitExpr_(const CallNode* op) {
+    return Call(op->op, get_analogous_expression(op->args), op->attrs, op->type_args, op->span);
+  }
+  Expr VisitExpr_(const LetNode* op) {
+    return Let(op->var, get_analogous_expression(op->value), get_analogous_expression(op->body),
+               op->span);
+  }
+  Expr VisitExpr_(const IfNode* op) {
+    return If(get_analogous_expression(op->cond), get_analogous_expression(op->true_branch),
+              get_analogous_expression(op->false_branch), op->span);
+  }
+  Expr VisitExpr_(const TupleGetItemNode* op) {
+    return TupleGetItem(get_analogous_expression(op->tuple), op->index, op->span);
+  }
+  Expr VisitExpr_(const RefCreateNode* op) {
+    return RefCreate(get_analogous_expression(op->value), op->span);
+  }
+  Expr VisitExpr_(const RefReadNode* op) {
+    return RefRead(get_analogous_expression(op->ref), op->span);
+  }
+  Expr VisitExpr_(const RefWriteNode* op) {
+    return RefWrite(get_analogous_expression(op->ref), get_analogous_expression(op->value),
+                    op->span);
+  }
+  Expr VisitExpr_(const ConstructorNode* op) {
+    return Constructor(op->name_hint, op->inputs, op->belong_to);
+  }
+  Expr VisitExpr_(const MatchNode* op) {
+    return Match(get_analogous_expression(op->data), op->clauses, op->complete, op->span);
+  }
+
+ private:
+  Expr get_analogous_expression(const Expr& expr) {

Review comment:
       nit: GetAnalogousExpression







[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #9735: [AMP][Pass][Typing] Add faster type inference

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on a change in pull request #9735:
URL: https://github.com/apache/tvm/pull/9735#discussion_r771638737



##########
File path: src/relay/transforms/type_infer.cc
##########
@@ -824,8 +824,107 @@ void AddGlobalTypes(IRModule mod) {
   }
 }
 
+class SameTypedSubgraphExtractor : public ExprMutator {
+  /*
+  Creates a small subgraph with the same type as the input expression. We attempt to do
+  this by relying on existing type information already populated in the expressions the
+  target node depends on. If a node with populated type information is found we simply
+  replace it with a variable of that type. In this way, we can avoid copying and
+  recursing through most of the expression graph. Note, this assumes that the currently
+  populated type information is correct!
+
+  ExprMutator is sufficient over MixedModeMutator since we will not recurse much.
+  */
+
+  Expr VisitExpr_(const VarNode* op) { return Var(op->vid, op->type_annotation, op->span); }
+  Expr VisitExpr_(const ConstantNode* op) { return Constant(op->data, op->span); }
+  Expr VisitExpr_(const GlobalVarNode* op) { return GlobalVar(op->name_hint); }
+  Expr VisitExpr_(const OpNode* op) { return Op(GetRef<Op>(op)); }
+  Expr VisitExpr_(const TupleNode* op) {
+    return Tuple(get_analogous_expression(op->fields), op->span);
+  }
+  Expr VisitExpr_(const FunctionNode* op) {
+    // Here will be the only VisitExpr
+    return Function(op->params, get_analogous_expression(op->body), op->ret_type, op->type_params,
+                    op->attrs, op->span);
+  }
+  Expr VisitExpr_(const CallNode* op) {
+    return Call(op->op, get_analogous_expression(op->args), op->attrs, op->type_args, op->span);
+  }
+  Expr VisitExpr_(const LetNode* op) {
+    return Let(op->var, get_analogous_expression(op->value), get_analogous_expression(op->body),
+               op->span);
+  }
+  Expr VisitExpr_(const IfNode* op) {
+    return If(get_analogous_expression(op->cond), get_analogous_expression(op->true_branch),
+              get_analogous_expression(op->false_branch), op->span);
+  }
+  Expr VisitExpr_(const TupleGetItemNode* op) {
+    return TupleGetItem(get_analogous_expression(op->tuple), op->index, op->span);
+  }
+  Expr VisitExpr_(const RefCreateNode* op) {
+    return RefCreate(get_analogous_expression(op->value), op->span);
+  }
+  Expr VisitExpr_(const RefReadNode* op) {
+    return RefRead(get_analogous_expression(op->ref), op->span);
+  }
+  Expr VisitExpr_(const RefWriteNode* op) {
+    return RefWrite(get_analogous_expression(op->ref), get_analogous_expression(op->value),
+                    op->span);
+  }
+  Expr VisitExpr_(const ConstructorNode* op) {
+    return Constructor(op->name_hint, op->inputs, op->belong_to);
+  }
+  Expr VisitExpr_(const MatchNode* op) {
+    return Match(get_analogous_expression(op->data), op->clauses, op->complete, op->span);
+  }
+
+ private:
+  Expr get_analogous_expression(const Expr& expr) {
+    // Replace the expression with a potentially simpler expression of the same type
+    if (!expr->checked_type_.defined()) {
+      return VisitExpr(expr);
+    }
+
+    return Var("dummy_var", expr->checked_type(), expr->span);

Review comment:
       Done

##########
File path: src/relay/transforms/type_infer.cc
##########
@@ -824,8 +824,107 @@ void AddGlobalTypes(IRModule mod) {
   }
 }
 
+class SameTypedSubgraphExtractor : public ExprMutator {
+  /*
+  Creates a small subgraph with the same type as the input expression. We attempt to do
+  this by relying on existing type information already populated in the expressions the
+  target node depends on. If a node with populated type information is found we simply
+  replace it with a variable of that type. In this way, we can avoid copying and
+  recursing through most of the expression graph. Note, this assumes that the currently
+  populated type information is correct!
+
+  ExprMutator is sufficient over MixedModeMutator since we will not recurse much.
+  */
+
+  Expr VisitExpr_(const VarNode* op) { return Var(op->vid, op->type_annotation, op->span); }
+  Expr VisitExpr_(const ConstantNode* op) { return Constant(op->data, op->span); }
+  Expr VisitExpr_(const GlobalVarNode* op) { return GlobalVar(op->name_hint); }
+  Expr VisitExpr_(const OpNode* op) { return Op(GetRef<Op>(op)); }
+  Expr VisitExpr_(const TupleNode* op) {
+    return Tuple(get_analogous_expression(op->fields), op->span);
+  }
+  Expr VisitExpr_(const FunctionNode* op) {
+    // Here will be the only VisitExpr
+    return Function(op->params, get_analogous_expression(op->body), op->ret_type, op->type_params,
+                    op->attrs, op->span);
+  }
+  Expr VisitExpr_(const CallNode* op) {
+    return Call(op->op, get_analogous_expression(op->args), op->attrs, op->type_args, op->span);
+  }
+  Expr VisitExpr_(const LetNode* op) {
+    return Let(op->var, get_analogous_expression(op->value), get_analogous_expression(op->body),
+               op->span);
+  }
+  Expr VisitExpr_(const IfNode* op) {
+    return If(get_analogous_expression(op->cond), get_analogous_expression(op->true_branch),
+              get_analogous_expression(op->false_branch), op->span);
+  }
+  Expr VisitExpr_(const TupleGetItemNode* op) {
+    return TupleGetItem(get_analogous_expression(op->tuple), op->index, op->span);
+  }
+  Expr VisitExpr_(const RefCreateNode* op) {
+    return RefCreate(get_analogous_expression(op->value), op->span);
+  }
+  Expr VisitExpr_(const RefReadNode* op) {
+    return RefRead(get_analogous_expression(op->ref), op->span);
+  }
+  Expr VisitExpr_(const RefWriteNode* op) {
+    return RefWrite(get_analogous_expression(op->ref), get_analogous_expression(op->value),
+                    op->span);
+  }
+  Expr VisitExpr_(const ConstructorNode* op) {
+    return Constructor(op->name_hint, op->inputs, op->belong_to);
+  }
+  Expr VisitExpr_(const MatchNode* op) {
+    return Match(get_analogous_expression(op->data), op->clauses, op->complete, op->span);
+  }
+
+ private:
+  Expr get_analogous_expression(const Expr& expr) {

Review comment:
       Done

##########
File path: src/relay/transforms/type_infer.cc
##########
@@ -824,8 +824,107 @@ void AddGlobalTypes(IRModule mod) {
   }
 }
 
+class SameTypedSubgraphExtractor : public ExprMutator {
+  /*

Review comment:
       Done







[GitHub] [tvm] AndrewZhaoLuo commented on pull request #9735: [AMP][Pass][Typing] Add faster type inference

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on pull request #9735:
URL: https://github.com/apache/tvm/pull/9735#issuecomment-996973340


   Was trying to play around with replacing some type inference in the `dynamic_to_static` pass and ran into some small errors, so I'm going to add a few basic tests





[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #9735: [AMP][Pass][Typing] Add faster type inference

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on a change in pull request #9735:
URL: https://github.com/apache/tvm/pull/9735#discussion_r771646769



##########
File path: src/relay/transforms/to_mixed_precision.cc
##########
@@ -381,6 +381,18 @@ class MixedPrecisionPass : public MixedModeMutator {
     return Call(cur_op, new_args, pre_call_node->attrs, new_arg_types, pre_call_node->span);
   }
 
+  Expr Rewrite_(const TupleGetItemNode* pre, const Expr& post) {
+    // The old checked type in the expression may not be valid so clear it
+    post->checked_type_ = Type(nullptr);

Review comment:
       Hmm, so you would think so, but it looks like the mutator does not by default invalidate the checked_type (and appears to reuse the node, giving us this problem).
   
   I can dig a little deeper, but if I remove this line for TupleGetItemNode the checked type will be wrong (it will be fp32 instead of fp16)







[GitHub] [tvm] AndrewZhaoLuo edited a comment on pull request #9735: [AMP][Pass][Typing] Add faster type inference

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo edited a comment on pull request #9735:
URL: https://github.com/apache/tvm/pull/9735#issuecomment-1007133304


   > @AndrewZhaoLuo Sorry for the late reply. Does this help us to solve the ADT problem in our MixedPrecision? Let us imagine we have one `if` node in our relay graph; `if` will be converted into the two subgraphs mentioned by you in this PR. For example:
   > 
   > ```python
   > fn main():
   >     let %1 = xxx;
   >     let %2 = if (%1) {
   >     let %3: = @func___inference_a(%4, %5, %6) 
   >   } else {
   >     let %7: = @func___inference_b(%8, %9)
   >   };  
   > ```
   > 
   > Then we have two subgraphs, `func___inference_a` and `func___inference_b`. Does this help us to make type inference work correctly for our two subgraphs? As I see, you have supported `GlobalVarNode`.
   
   @FrozenGene I'm not sure I understand the concern 😅. Global var nodes are just used to reference function calls, right? These functions have a known type ahead of time, right?





[GitHub] [tvm] masahi merged pull request #9735: [AMP][Pass][Typing] Add faster type inference

Posted by GitBox <gi...@apache.org>.
masahi merged pull request #9735:
URL: https://github.com/apache/tvm/pull/9735


   





[GitHub] [tvm] FrozenGene edited a comment on pull request #9735: [AMP][Pass][Typing] Add faster type inference

Posted by GitBox <gi...@apache.org>.
FrozenGene edited a comment on pull request #9735:
URL: https://github.com/apache/tvm/pull/9735#issuecomment-1004535763


   @AndrewZhaoLuo Sorry for the late reply. Does this help us to solve the ADT problem in our MixedPrecision? Let us imagine we have one `if` node in our relay graph; `if` will be converted into the two subgraphs mentioned by you in this PR. For example:
   
   ```python
   fn main():
       let %1 = xxx;
       let %2 = if (%1) {
       let %3: = @func___inference_a(%4, %5, %6) 
     } else {
       let %7: = @func___inference_b(%8, %9)
     };  
   ```
   Then we have two subgraphs, `func___inference_a` and `func___inference_b`. Does this help us to make type inference work correctly for our two subgraphs? As I see, you have supported `GlobalVarNode`.





[GitHub] [tvm] FrozenGene commented on pull request #9735: [AMP][Pass][Typing] Add faster type inference

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on pull request #9735:
URL: https://github.com/apache/tvm/pull/9735#issuecomment-1004535763


   @AndrewZhaoLuo Sorry for the late reply. Does this help us to solve the ADT problem in our MixedPrecision? Let us imagine we have one `if` node in our relay graph; `if` will be converted into the two subgraphs mentioned by you in this PR. For example:
   
   ```python
   fn main():
       let %1 = xxx;
       let %2 = if (%1) {
       let %3: = @func___inference_a(%4, %5, %6) 
     } else {
       let %7: = @func___inference_b(%8, %9)
     };  
   ```
   Then we have two subgraphs, `func___inference_a` and `func___inference_b`. Does this help us to make type inference work correctly for our two subgraphs? As I see, you have supported `GlobalVarNode`.





[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #9735: [AMP][Pass][Typing] Add faster type inference

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on a change in pull request #9735:
URL: https://github.com/apache/tvm/pull/9735#discussion_r771646769



##########
File path: src/relay/transforms/to_mixed_precision.cc
##########
@@ -381,6 +381,18 @@ class MixedPrecisionPass : public MixedModeMutator {
     return Call(cur_op, new_args, pre_call_node->attrs, new_arg_types, pre_call_node->span);
   }
 
+  Expr Rewrite_(const TupleGetItemNode* pre, const Expr& post) {
+    // The old checked type in the expression may not be valid so clear it
+    post->checked_type_ = Type(nullptr);

Review comment:
       Hmm, so you would think so, but it looks like the mutator does not by default invalidate the checked_type (and appears to reuse the reference, giving us this problem).
   
   I can dig a little deeper, but if I remove this line for TupleGetItemNode the checked type will be wrong (it will be fp32 instead of fp16)







[GitHub] [tvm] mbs-octoml commented on a change in pull request #9735: [AMP][Pass][Typing] Add faster type inference

Posted by GitBox <gi...@apache.org>.
mbs-octoml commented on a change in pull request #9735:
URL: https://github.com/apache/tvm/pull/9735#discussion_r777649836



##########
File path: src/relay/transforms/to_mixed_precision.cc
##########
@@ -381,6 +381,18 @@ class MixedPrecisionPass : public MixedModeMutator {
     return Call(cur_op, new_args, pre_call_node->attrs, new_arg_types, pre_call_node->span);
   }
 
+  Expr Rewrite_(const TupleGetItemNode* pre, const Expr& post) {
+    // The old checked type in the expression may not be valid so clear it
+    post->checked_type_ = Type(nullptr);

Review comment:
       Ah! It's the COW, that makes sense. I think that means we should be clearing checked_type_ on COW but let's not dig ourselves any deeper until we've thought about incremental type inference a bit more carefully.







[GitHub] [tvm] AndrewZhaoLuo commented on pull request #9735: [AMP][Pass][Typing] Add faster type inference

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on pull request #9735:
URL: https://github.com/apache/tvm/pull/9735#issuecomment-1007133304


   > @AndrewZhaoLuo Sorry for the late reply. Does this help us to solve the ADT problem in our MixedPrecision? Let us imagine we have one `if` node in our relay graph; `if` will be converted into the two subgraphs mentioned by you in this PR. For example:
   > 
   > ```python
   > fn main():
   >     let %1 = xxx;
   >     let %2 = if (%1) {
   >     let %3: = @func___inference_a(%4, %5, %6) 
   >   } else {
   >     let %7: = @func___inference_b(%8, %9)
   >   };  
   > ```
   > 
   > Then we have two subgraphs, `func___inference_a` and `func___inference_b`. Does this help us to make type inference work correctly for our two subgraphs? As I see, you have supported `GlobalVarNode`.
   
   @FrozenGene I'm not sure I understand the concern 😅. Global var nodes are just used to reference function calls, right? These functions have a known type ahead of time, right?

