Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/09/10 20:20:32 UTC

[GitHub] [tvm] anwang2009 opened a new pull request #8985: [ONNX] Add Einsum converter

anwang2009 opened a new pull request #8985:
URL: https://github.com/apache/tvm/pull/8985


   As title
   
   cc @mbrookhart @AndrewZhaoLuo
   
   Thanks @AndrewZhaoLuo for maintaining https://tvm.apache.org/docs/dev/relay_add_op.html; that doc was super helpful!


[GitHub] [tvm] masahi commented on a change in pull request #8985: [ONNX] Add Einsum converter

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #8985:
URL: https://github.com/apache/tvm/pull/8985#discussion_r706557112



##########
File path: python/tvm/relay/op/_transform.py
##########
@@ -182,6 +182,11 @@ def compute_unique(attrs, inputs, output_type):
 _reg.register_strategy("invert_permutation", strategy.invert_permutation_strategy)
 _reg.register_shape_func("invert_permutation", False, elemwise_shape_func)
 
+
+# einsum
+_reg.register_strategy("einsum", strategy.einsum_strategy)

Review comment:
       Move it to `_tensor.py` (or `_math.py`)
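For illustration, a minimal sketch of the suggested relocation (not from the PR): the registration line moved into `python/tvm/relay/op/_tensor.py`, with the `_reg` and `strategy` aliases assumed to be imported there the same way `_transform.py` imports them.

```python
# Sketch only: einsum strategy registration relocated out of _transform.py.
# The relative-import aliases below mirror _transform.py and are an assumption here.
from . import op as _reg
from . import strategy

# einsum
_reg.register_strategy("einsum", strategy.einsum_strategy)
```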




[GitHub] [tvm] masahi merged pull request #8985: [ONNX] Add Einsum converter

Posted by GitBox <gi...@apache.org>.
masahi merged pull request #8985:
URL: https://github.com/apache/tvm/pull/8985


   


[GitHub] [tvm] masahi commented on pull request #8985: [ONNX] Add Einsum converter

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #8985:
URL: https://github.com/apache/tvm/pull/8985#issuecomment-920338681


   Thanks @anwang2009 @AndrewZhaoLuo @mbrookhart @junrushao1994 


[GitHub] [tvm] anwang2009 commented on a change in pull request #8985: [ONNX] Add Einsum converter

Posted by GitBox <gi...@apache.org>.
anwang2009 commented on a change in pull request #8985:
URL: https://github.com/apache/tvm/pull/8985#discussion_r709375343



##########
File path: python/tvm/relay/op/strategy/cuda.py
##########
@@ -1210,3 +1210,16 @@ def invert_permutation_strategy_cuda(attrs, inputs, out_type, target):
         name="invert_permutation.cuda",
     )
     return strategy
+
+
+@einsum_strategy.register(["cuda", "gpu"])
+def einsum_strategy_cuda(attrs, inputs, out_type, target):
+    """einsum cuda strategy"""
+    strategy = _op.OpStrategy()
+    # TODO: Add cuda-specific op implementation for einsum
+    strategy.add_implementation(

Review comment:
       The `test_onnx_nodes` tests run on CUDA and have passed. Should I still remove it despite that?
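For context, a hedged sketch (not taken from the PR or from `test_onnx_nodes`) of the kind of CUDA check being described: building and running a single einsum through Relay on the `cuda` target. The `relay.einsum(data, equation)` signature is assumed from the `MakeEinsum` registration in this PR.

```python
# Hedged sketch: compile one einsum for CUDA and compare against NumPy.
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

a = relay.var("a", shape=(2, 3), dtype="float32")
b = relay.var("b", shape=(3, 4), dtype="float32")
func = relay.Function([a, b], relay.einsum([a, b], "ij,jk->ik"))
mod = tvm.IRModule.from_expr(func)

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="cuda")

dev = tvm.cuda(0)
rt = graph_executor.GraphModule(lib["default"](dev))
a_np = np.random.rand(2, 3).astype("float32")
b_np = np.random.rand(3, 4).astype("float32")
rt.set_input("a", a_np)
rt.set_input("b", b_np)
rt.run()
np.testing.assert_allclose(
    rt.get_output(0).numpy(), np.einsum("ij,jk->ik", a_np, b_np), rtol=1e-5
)
```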




[GitHub] [tvm] masahi commented on a change in pull request #8985: [ONNX] Add Einsum converter

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #8985:
URL: https://github.com/apache/tvm/pull/8985#discussion_r709533999



##########
File path: python/tvm/relay/op/strategy/cuda.py
##########
@@ -1210,3 +1210,16 @@ def invert_permutation_strategy_cuda(attrs, inputs, out_type, target):
         name="invert_permutation.cuda",
     )
     return strategy
+
+
+@einsum_strategy.register(["cuda", "gpu"])
+def einsum_strategy_cuda(attrs, inputs, out_type, target):
+    """einsum cuda strategy"""
+    strategy = _op.OpStrategy()
+    # TODO: Add cuda-specific op implementation for einsum
+    strategy.add_implementation(

Review comment:
       Oh interesting. I didn't know that `topi.generic.schedule_extern` somehow generates a valid schedule. 
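For reference, a hedged reconstruction of the registration under discussion (the quoted diff is truncated right after `add_implementation`). Only `topi.generic.schedule_extern` is confirmed by this thread; `wrap_compute_einsum` and `topi.einsum` are assumed names for the compute side.

```python
# Hedged reconstruction, not the verbatim PR code.
@einsum_strategy.register(["cuda", "gpu"])
def einsum_strategy_cuda(attrs, inputs, out_type, target):
    """einsum cuda strategy"""
    strategy = _op.OpStrategy()
    # TODO: Add cuda-specific op implementation for einsum
    strategy.add_implementation(
        wrap_compute_einsum(topi.einsum),  # compute wrapper name assumed
        wrap_topi_schedule(topi.generic.schedule_extern),  # schedule named in this thread
        name="einsum.cuda",
    )
    return strategy
```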




[GitHub] [tvm] junrushao1994 commented on pull request #8985: [ONNX] Add Einsum converter

Posted by GitBox <gi...@apache.org>.
junrushao1994 commented on pull request #8985:
URL: https://github.com/apache/tvm/pull/8985#issuecomment-917192566


   This is really nice! CC @ZihengJiang @hzfan 


[GitHub] [tvm] anwang2009 commented on a change in pull request #8985: [ONNX] Add Einsum converter

Posted by GitBox <gi...@apache.org>.
anwang2009 commented on a change in pull request #8985:
URL: https://github.com/apache/tvm/pull/8985#discussion_r706469124



##########
File path: python/tvm/relay/op/_transform.py
##########
@@ -182,6 +182,12 @@ def compute_unique(attrs, inputs, output_type):
 _reg.register_strategy("invert_permutation", strategy.invert_permutation_strategy)
 _reg.register_shape_func("invert_permutation", False, elemwise_shape_func)
 
+
+# einsum
+_reg.register_strategy("einsum", strategy.einsum_strategy)
+_reg.register_shape_func("einsum", False, elemwise_shape_func)

Review comment:
       I'll just remove the shape registration line for now; if we ever encounter a model with dynamic einsum shapes, I'll take another look at this.




[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #8985: [ONNX] Add Einsum converter

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on a change in pull request #8985:
URL: https://github.com/apache/tvm/pull/8985#discussion_r706454338



##########
File path: include/tvm/relay/attrs/transform.h
##########
@@ -489,6 +489,15 @@ struct UniqueAttrs : public tvm::AttrsNode<UniqueAttrs> {
   }
 };  // struct UniqueAttrs
 
+/*! \brief Attributes used in einsum operator */
+struct EinsumAttrs : public tvm::AttrsNode<EinsumAttrs> {
+  String equation;
+
+  TVM_DECLARE_ATTRS(EinsumAttrs, "relay.attrs.EinsumAttrs") {
+    TVM_ATTR_FIELD(equation).describe("The einsum expression string");
+  }
+};

Review comment:
       Add an ending delimiter comment, matching the structs above: `};  // struct EinsumAttrs`

##########
File path: include/tvm/relay/attrs/transform.h
##########
@@ -489,6 +489,15 @@ struct UniqueAttrs : public tvm::AttrsNode<UniqueAttrs> {
   }
 };  // struct UniqueAttrs
 
+/*! \brief Attributes used in einsum operator */
+struct EinsumAttrs : public tvm::AttrsNode<EinsumAttrs> {
+  String equation;
+
+  TVM_DECLARE_ATTRS(EinsumAttrs, "relay.attrs.EinsumAttrs") {
+    TVM_ATTR_FIELD(equation).describe("The einsum expression string");
+  }
+};

Review comment:
       Also add the same closing comment under `ScanopAttrs` above if you can; I think I forgot to do that when I added it.

##########
File path: python/tvm/relay/op/_transform.py
##########
@@ -182,6 +182,12 @@ def compute_unique(attrs, inputs, output_type):
 _reg.register_strategy("invert_permutation", strategy.invert_permutation_strategy)
 _reg.register_shape_func("invert_permutation", False, elemwise_shape_func)
 
+
+# einsum
+_reg.register_strategy("einsum", strategy.einsum_strategy)
+_reg.register_shape_func("einsum", False, elemwise_shape_func)

Review comment:
       `NumpyEinsumShape` might be a lead you want to explore.

##########
File path: python/tvm/relay/op/_transform.py
##########
@@ -182,6 +182,12 @@ def compute_unique(attrs, inputs, output_type):
 _reg.register_strategy("invert_permutation", strategy.invert_permutation_strategy)
 _reg.register_shape_func("invert_permutation", False, elemwise_shape_func)
 
+
+# einsum
+_reg.register_strategy("einsum", strategy.einsum_strategy)
+_reg.register_shape_func("einsum", False, elemwise_shape_func)

Review comment:
       I believe `elemwise_shape_func` is incorrect here, since the einsum output shape can be fairly arbitrary.
   
   I don't think you need a shape func registered at all (I believe it is only used for dynamic inputs).
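As a quick NumPy-only illustration of that point: the einsum output shape is derived from the equation rather than copied from any single input, so an element-wise shape function cannot describe it.

```python
import numpy as np

a = np.zeros((2, 3), dtype="float32")
b = np.zeros((3, 4), dtype="float32")

# "ij,jk->ik" contracts over j: the result is (2, 4), matching neither input shape.
out = np.einsum("ij,jk->ik", a, b)
assert out.shape == (2, 4)
```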




[GitHub] [tvm] masahi commented on a change in pull request #8985: [ONNX] Add Einsum converter

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #8985:
URL: https://github.com/apache/tvm/pull/8985#discussion_r709112076



##########
File path: python/tvm/relay/op/strategy/cuda.py
##########
@@ -1210,3 +1210,16 @@ def invert_permutation_strategy_cuda(attrs, inputs, out_type, target):
         name="invert_permutation.cuda",
     )
     return strategy
+
+
+@einsum_strategy.register(["cuda", "gpu"])
+def einsum_strategy_cuda(attrs, inputs, out_type, target):
+    """einsum cuda strategy"""
+    strategy = _op.OpStrategy()
+    # TODO: Add cuda-specific op implementation for einsum
+    strategy.add_implementation(

Review comment:
       This would lead to an error if you try to run it on CUDA, right? It is better to remove this strategy until we have a CUDA schedule ready; the error message would be clearer than the one from an incorrect schedule.




[GitHub] [tvm] masahi commented on a change in pull request #8985: [ONNX] Add Einsum converter

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #8985:
URL: https://github.com/apache/tvm/pull/8985#discussion_r706557494



##########
File path: src/relay/op/tensor/transform.cc
##########
@@ -2431,6 +2432,85 @@ RELAY_REGISTER_OP("broadcast_to_like")
     .set_attr<FTVMCompute>("FTVMCompute", BroadCastToLikeCompute)
     .set_attr<TOpPattern>("TOpPattern", kBroadcast);
 
+// relay.einsum
+TVM_REGISTER_NODE_TYPE(EinsumAttrs);
+
+bool EinsumRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
+               const TypeReporter& reporter) {
+  // Check attrs
+  const EinsumAttrs* param = attrs.as<EinsumAttrs>();
+  if (param == nullptr) {
+    reporter->GetDiagCtx().EmitFatal(Diagnostic::Error(reporter->GetSpan())
+                                     << "the call attributes are not defined");
+    return false;
+  }
+
+  // types: [data, result]
+  ICHECK_EQ(types.size(), 2) << "the arity of einsum is 2, not " << types.size();
+
+  // Check input type is a tuple.
+  const auto* tensor_tuple = types[0].as<TupleTypeNode>();
+  if (tensor_tuple == nullptr) {
+    reporter->GetDiagCtx().EmitFatal(
+        Diagnostic::Error(reporter->GetSpan())
+        << "einsum requires a tuple of tensors as the first argument, found "
+        << PrettyPrint(types[0]));
+    return false;
+  }
+
+  // Check the input tuple consists of tensors with consistent dtype.
+  const auto& first = Downcast<TensorType>(tensor_tuple->fields[0]);
+  const DataType dtype = first->dtype;
+  std::vector<Array<PrimExpr>> input_shapes;
+  for (const Type& ele : tensor_tuple->fields) {
+    if (ele.as<IncompleteTypeNode>()) {
+      return false;
+    }
+
+    const auto& e = Downcast<TensorType>(ele);
+
+    const DataType& e_dtype = e->dtype;
+    if (e_dtype != dtype) {
+      throw Error("relay.einsum requires all tensors have the same dtype");
+    }
+    input_shapes.push_back(e->shape);
+  }
+
+  // Calculate output shape
+  Array<IndexExpr> oshape = topi::NumpyEinsumShape(param->equation, input_shapes);
+
+  auto rtype = TensorType(oshape, dtype);
+  reporter->Assign(types[1], rtype);
+  return true;
+}
+
+Array<te::Tensor> EinsumCompute(const Attrs& attrs, const Array<te::Tensor>& inputs,
+                                const Type& out_type) {
+  const EinsumAttrs* param = attrs.as<EinsumAttrs>();
+  ICHECK(param != nullptr);
+  return Array<te::Tensor>{topi::einsum(param->equation, inputs)};
+}
+
+Expr MakeEinsum(Expr data, String equation) {
+  auto attrs = make_object<EinsumAttrs>();
+  attrs->equation = std::move(equation);
+  static const Op& op = Op::Get("einsum");
+  return Call(op, {data}, Attrs(attrs), {});
+}
+
+TVM_REGISTER_GLOBAL("relay.op._make.einsum").set_body_typed(MakeEinsum);
+
+RELAY_REGISTER_OP("einsum")
+    .describe(R"doc(Evaluates the Einstein summation convention
+on the operands)doc" TVM_ADD_FILELINE)
+    .set_attrs_type<EinsumAttrs>()
+    .set_num_inputs(1)
+    .add_argument("data", "Tuple of Tensors", "The input list of tensors.")
+    .set_support_level(11)
+    .add_type_rel("Einsum", EinsumRel)
+    .set_attr<FTVMCompute>("FTVMCompute", EinsumCompute)
+    .set_attr<TOpPattern>("TOpPattern", kInjective);
+

Review comment:
       Add `math.cc` and put the changes there.




[GitHub] [tvm] masahi commented on a change in pull request #8985: [ONNX] Add Einsum converter

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #8985:
URL: https://github.com/apache/tvm/pull/8985#discussion_r706556993



##########
File path: python/tvm/topi/generic/search.py
##########
@@ -86,3 +86,19 @@ def schedule_unique(outs):
       The computation schedule for the op.
     """
     return _default_schedule(outs, False)
+
+
+def schedule_einsum(outs):

Review comment:
       Let's introduce `math.py` and put it there.
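For illustration, a hedged sketch of what the suggested `python/tvm/topi/generic/math.py` might contain, mirroring the generic schedules in `search.py`; the `_default_schedule` import path is an assumption borrowed from the other generic schedule modules.

```python
# Hypothetical python/tvm/topi/generic/math.py -- sketch only.
# The import path for the default schedule helper is assumed.
from .default import default_schedule as _default_schedule


def schedule_einsum(outs):
    """Schedule for einsum operator.

    Parameters
    ----------
    outs : Array of Tensor
        The computation graph description of einsum.

    Returns
    -------
    s : Schedule
        The computation schedule for the op.
    """
    return _default_schedule(outs, False)
```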




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org