Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/10/25 22:30:32 UTC

[GitHub] [incubator-tvm] masahi opened a new pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

masahi opened a new pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759


   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi commented on a change in pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#discussion_r511689194



##########
File path: src/relay/op/tensor/transform.cc
##########
@@ -1685,30 +1686,17 @@ bool WhereRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
     return false;
   }
 
-  const auto& cond_shape = condition->shape;
-  const auto& x_shape = x->shape;
-  const auto& y_shape = y->shape;
-  ICHECK(x_shape.size() == y_shape.size()) << "x and y must have the same size";
+  ICHECK_EQ(x->dtype, y->dtype) << "x and y must have the same dtype: " << x->dtype << " vs "
+                                << y->dtype;
 
-  if (cond_shape.size() != x_shape.size()) {
-    ICHECK_EQ(cond_shape.size(), 1) << "Shape of condition " << condition->shape
-                                    << " must be either equal to x or has dimension of 1.";
-  }
-  for (size_t i = 0; i < x_shape.size(); i++) {
-    ICHECK(reporter->AssertEQ(x_shape[i], y_shape[i]))
-        << "x and y must have the same shape: " << x_shape << " vs " << y_shape;
+  auto tensor_ty_condition = GetRef<TensorType>(condition);
+  auto tensor_ty_x = GetRef<TensorType>(x);
+  auto tensor_ty_y = GetRef<TensorType>(y);
 
-    if (i < cond_shape.size()) {
-      ICHECK(reporter->AssertEQ(cond_shape[i], x_shape[i]))
-          << "condition and x must have the same shape: " << cond_shape << " vs " << x_shape;
-    }
-  }
-  if (x_shape.size() == 0) {
-    // if x and y are scalar, the condition shape becomes the output shape
-    reporter->Assign(types[3], TensorType(cond_shape, x->dtype));
-  } else {
-    reporter->Assign(types[3], TensorType(x_shape, x->dtype));
-  }
+  auto b_ty = ConcreteBroadcast(tensor_ty_x, tensor_ty_y, x->dtype);
+  auto ret_ty = ConcreteBroadcast(tensor_ty_condition, b_ty, b_ty->dtype);

Review comment:
       I see, do we have a test that exercises the where shape func?
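As an aside, the broadcasting behavior such a test would need to exercise can be sketched with NumPy (illustrative shapes only; the actual Relay test would use dims annotated with `Any`):

```python
import numpy as np

# np.where broadcasts condition, x, and y against one another; the rewritten
# Relay op is intended to match this. A shape-func test would push shapes
# like these through a graph with dynamic (Any) dimensions.
cond = np.array([True, False, True])   # shape (3,)
x = np.ones((2, 3))
y = np.zeros((2, 3))
out = np.where(cond, x, y)             # cond is broadcast to (2, 3)
assert out.shape == (2, 3)
```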







[GitHub] [incubator-tvm] t-vi commented on a change in pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
t-vi commented on a change in pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#discussion_r512467692



##########
File path: python/tvm/topi/broadcast.py
##########
@@ -22,7 +22,7 @@
 def broadcast_to(data, shape):
     """Broadcast the src to the target shape
 
-    We follow the numpy broadcasting rule.
+    We follows the numpy broadcasting rule.

Review comment:
       Nit: "follow" seemed better to me if we say "we"...







[GitHub] [incubator-tvm] t-vi commented on a change in pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
t-vi commented on a change in pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#discussion_r511757403



##########
File path: include/tvm/topi/broadcast.h
##########
@@ -69,6 +69,46 @@ inline tvm::te::Tensor broadcast_to(const tvm::te::Tensor& t,
   return tvm::te::compute(oshape, l, name, tag);
 }
 
+inline tvm::te::Tensor broadcast_shape_tensors(const tvm::te::Tensor& shape_tensor1,
+                                               const tvm::te::Tensor& shape_tensor2,
+                                               std::string name = "T_broadcast_shape_tensors",
+                                               std::string tag = kBroadcast) {
+  const auto rank1 = detail::GetConstInt(shape_tensor1->shape[0]);
+  const auto rank2 = detail::GetConstInt(shape_tensor2->shape[0]);
+  const auto out_rank = std::max<int32_t>(rank1, rank2);
+  const tvm::PrimExpr one = tvm::cast(shape_tensor1->dtype, PrimExpr(1));
+
+  auto select_dim = [&](const tvm::te::Tensor& shape_tensor, int rank,
+                        tvm::tir::Var index) -> PrimExpr {
+    if (rank < out_rank) {
+      // if the rank is smaller, dimension 1 is prepended according to
+      // the numpy broadcasting semantics.
+      return tvm::tir::Select(rank - (out_rank - index) < 0, one,
+                              shape_tensor[rank - (out_rank - index)]);
+    } else {
+      // rank == out_rank, safe to index directly
+      return shape_tensor[index];
+    }
+  };
+
+  auto func = [&](tvm::Array<tvm::tir::Var> ovars) {
+    auto index = ovars[0];
+    PrimExpr dim1 = select_dim(shape_tensor1, rank1, index);
+    PrimExpr dim2 = select_dim(shape_tensor2, rank2, index);
+    if (topi::detail::EqualCheck(one, dim1)) {
+      return dim2;
+    } else if (topi::detail::EqualCheck(one, dim2)) {
+      return dim1;
+    }
+    return tvm::max(dim1, dim2);

Review comment:
       Two comments (not sure if they need to be addressed in this PR):
   - Does this (with `EqualCheck` and the C++ `if`) work as expected on dynamic shapes? Should it?
   - While this seems to work for valid broadcasting (save the potential dynamic caveat), it fails to reject invalid broadcasting, i.e. when neither dim is 1 but they differ. Of course, this might be intentional to cover dynamic shapes (if they aren't covered by the two cases above), but it might lead to confusing error messages, so I think it would be neat to have a comment on what each code path is handling.
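For reference, the numpy rule under discussion, including the rejection of invalid combinations that the TE version above silently accepts, can be sketched in plain Python (this is illustrative, not the code under review):

```python
def broadcast_shapes(s1, s2):
    """Sketch of the numpy broadcasting rule over two static shape tuples,
    rejecting the invalid case (dims differ and neither is 1)."""
    out = []
    # Align from the right; missing leading dims behave as 1.
    for i in range(1, max(len(s1), len(s2)) + 1):
        d1 = s1[-i] if i <= len(s1) else 1
        d2 = s2[-i] if i <= len(s2) else 1
        if d1 == d2 or d1 == 1 or d2 == 1:
            out.append(max(d1, d2))
        else:
            raise ValueError("cannot broadcast dim %d with %d" % (d1, d2))
    return tuple(reversed(out))
```

For example, `broadcast_shapes((2, 1, 3), (4, 3))` yields `(2, 4, 3)`, while `broadcast_shapes((2, 3), (4, 3))` raises.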







[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
junrushao1994 commented on pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#issuecomment-716253253


   Nice work @masahi!






[GitHub] [incubator-tvm] masahi commented on a change in pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#discussion_r512504566



##########
File path: python/tvm/topi/broadcast.py
##########
@@ -22,7 +22,7 @@
 def broadcast_to(data, shape):
     """Broadcast the src to the target shape
 
-    We follow the numpy broadcasting rule.
+    We follows the numpy broadcasting rule.

Review comment:
       yeah, this is actually a typo (by someone else) that I fixed yesterday, but I reverted the commit that fixed it (to remove the changes I had made to `broadcast.h` and `broadcast.py` inside topi). That's probably why this diff showed up; this change is not part of this PR.







[GitHub] [incubator-tvm] masahi commented on pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#issuecomment-717010375


   @t-vi In https://github.com/apache/incubator-tvm/pull/6759/commits/d8c5076aec16e32ea7f00d89af8eb82c0e0984a2, I ported the C++ `broadcast_shape_tensors` function to hybrid script, to be able to generate a runtime assertion for the broadcasting validity check.
   
   Now, invalid shapes are caught at runtime and result in an error. I added one test that intentionally runs a compiled model with an invalid combination of shapes and verifies that the invalid broadcast is properly caught. Please have a look!
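The per-dimension check such a runtime assertion enforces can be sketched in plain Python (assumed semantics, mirroring the `dim1 == dim2 or dim1 == 1 or dim2 == 1` condition discussed above; the real check runs inside the compiled shape function):

```python
def broadcast_dim(d1, d2):
    # Runtime validity check: dims must match, or one of them must be 1.
    assert d1 == d2 or d1 == 1 or d2 == 1, \
        "invalid broadcast of dims %d and %d" % (d1, d2)
    # When one dim is 1 it stretches to the other; max() picks the result.
    return max(d1, d2)
```

So `broadcast_dim(4, 1)` returns 4, while `broadcast_dim(4, 3)` trips the assertion at runtime.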





[GitHub] [incubator-tvm] masahi commented on a change in pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#discussion_r511681816



##########
File path: src/relay/op/tensor/transform.cc
##########
@@ -1685,30 +1686,17 @@ bool WhereRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
     return false;
   }
 
-  const auto& cond_shape = condition->shape;
-  const auto& x_shape = x->shape;
-  const auto& y_shape = y->shape;
-  ICHECK(x_shape.size() == y_shape.size()) << "x and y must have the same size";
+  CHECK_EQ(x->dtype, y->dtype) << "x and y must have the same dtype: " << x->dtype << " vs "

Review comment:
       good find, fixed







[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
junrushao1994 commented on a change in pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#discussion_r511680047



##########
File path: src/relay/op/tensor/transform.cc
##########
@@ -1685,30 +1686,17 @@ bool WhereRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
     return false;
   }
 
-  const auto& cond_shape = condition->shape;
-  const auto& x_shape = x->shape;
-  const auto& y_shape = y->shape;
-  ICHECK(x_shape.size() == y_shape.size()) << "x and y must have the same size";
+  CHECK_EQ(x->dtype, y->dtype) << "x and y must have the same dtype: " << x->dtype << " vs "

Review comment:
       Do we prefer to use `ICHECK` or `CHECK` here?







[GitHub] [incubator-tvm] kevinthesun commented on pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#issuecomment-717647329


   Thanks @masahi @junrushao1994 @t-vi 





[GitHub] [incubator-tvm] masahi commented on a change in pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#discussion_r511750722



##########
File path: src/relay/op/tensor/transform.cc
##########
@@ -1685,30 +1686,17 @@ bool WhereRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
     return false;
   }
 
-  const auto& cond_shape = condition->shape;
-  const auto& x_shape = x->shape;
-  const auto& y_shape = y->shape;
-  ICHECK(x_shape.size() == y_shape.size()) << "x and y must have the same size";
+  ICHECK_EQ(x->dtype, y->dtype) << "x and y must have the same dtype: " << x->dtype << " vs "
+                                << y->dtype;
 
-  if (cond_shape.size() != x_shape.size()) {
-    ICHECK_EQ(cond_shape.size(), 1) << "Shape of condition " << condition->shape
-                                    << " must be either equal to x or has dimension of 1.";
-  }
-  for (size_t i = 0; i < x_shape.size(); i++) {
-    ICHECK(reporter->AssertEQ(x_shape[i], y_shape[i]))
-        << "x and y must have the same shape: " << x_shape << " vs " << y_shape;
+  auto tensor_ty_condition = GetRef<TensorType>(condition);
+  auto tensor_ty_x = GetRef<TensorType>(x);
+  auto tensor_ty_y = GetRef<TensorType>(y);
 
-    if (i < cond_shape.size()) {
-      ICHECK(reporter->AssertEQ(cond_shape[i], x_shape[i]))
-          << "condition and x must have the same shape: " << cond_shape << " vs " << x_shape;
-    }
-  }
-  if (x_shape.size() == 0) {
-    // if x and y are scalar, the condition shape becomes the output shape
-    reporter->Assign(types[3], TensorType(cond_shape, x->dtype));
-  } else {
-    reporter->Assign(types[3], TensorType(x_shape, x->dtype));
-  }
+  auto b_ty = ConcreteBroadcast(tensor_ty_x, tensor_ty_y, x->dtype);
+  auto ret_ty = ConcreteBroadcast(tensor_ty_condition, b_ty, b_ty->dtype);

Review comment:
       @kevinthesun In this commit, https://github.com/apache/incubator-tvm/pull/6759/commits/5423c70152c088015a6e51c3c9805b12df2e1153
   
   I added tests for `where` + any, and updated the `where` shape func. To compute a shape tensor corresponding to the broadcasted shapes of condition, x, and y, I added a new function in topi cpp.
   
   It passes all the new tests I added that exercise the new shape func, but I'm not completely sure the logic in `broadcast_shape_tensors` is correct. Can you have a look?
   
   Also it would be great if I could get more eyes on the broadcasting + shape func logic @junrushao1994 @icemelon9








[GitHub] [incubator-tvm] masahi commented on pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#issuecomment-717191747


   @kevinthesun I've updated the `where` shape function; I think it is ready to merge. Can you take a final look?





[GitHub] [incubator-tvm] masahi commented on a change in pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#discussion_r511781645



##########
File path: include/tvm/topi/broadcast.h
##########
@@ -69,6 +69,46 @@ inline tvm::te::Tensor broadcast_to(const tvm::te::Tensor& t,
   return tvm::te::compute(oshape, l, name, tag);
 }
 
+inline tvm::te::Tensor broadcast_shape_tensors(const tvm::te::Tensor& shape_tensor1,
+                                               const tvm::te::Tensor& shape_tensor2,
+                                               std::string name = "T_broadcast_shape_tensors",
+                                               std::string tag = kBroadcast) {
+  const auto rank1 = detail::GetConstInt(shape_tensor1->shape[0]);
+  const auto rank2 = detail::GetConstInt(shape_tensor2->shape[0]);
+  const auto out_rank = std::max<int32_t>(rank1, rank2);
+  const tvm::PrimExpr one = tvm::cast(shape_tensor1->dtype, PrimExpr(1));
+
+  auto select_dim = [&](const tvm::te::Tensor& shape_tensor, int rank,
+                        tvm::tir::Var index) -> PrimExpr {
+    if (rank < out_rank) {
+      // if the rank is smaller, dimension 1 is prepended according to
+      // the numpy broadcasting semantics.
+      return tvm::tir::Select(rank - (out_rank - index) < 0, one,
+                              shape_tensor[rank - (out_rank - index)]);
+    } else {
+      // rank == out_rank, safe to index directly
+      return shape_tensor[index];
+    }
+  };
+
+  auto func = [&](tvm::Array<tvm::tir::Var> ovars) {
+    auto index = ovars[0];
+    PrimExpr dim1 = select_dim(shape_tensor1, rank1, index);
+    PrimExpr dim2 = select_dim(shape_tensor2, rank2, index);
+    if (topi::detail::EqualCheck(one, dim1)) {
+      return dim2;
+    } else if (topi::detail::EqualCheck(one, dim2)) {
+      return dim1;
+    }
+    return tvm::max(dim1, dim2);

Review comment:
       @tqchen Is it possible to generate an `AssertStmt` here? I want to add a runtime assert that checks
   ```
   dim1 == dim2 or dim1 == 1 or dim2 == 1
   ```








[GitHub] [incubator-tvm] t-vi commented on pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
t-vi commented on pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#issuecomment-717051186


   Looks awesome to me. Thank you @masahi!





[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on a change in pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#discussion_r511712905



##########
File path: src/relay/op/tensor/transform.cc
##########
@@ -1685,30 +1686,17 @@ bool WhereRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
     return false;
   }
 
-  const auto& cond_shape = condition->shape;
-  const auto& x_shape = x->shape;
-  const auto& y_shape = y->shape;
-  ICHECK(x_shape.size() == y_shape.size()) << "x and y must have the same size";
+  ICHECK_EQ(x->dtype, y->dtype) << "x and y must have the same dtype: " << x->dtype << " vs "
+                                << y->dtype;
 
-  if (cond_shape.size() != x_shape.size()) {
-    ICHECK_EQ(cond_shape.size(), 1) << "Shape of condition " << condition->shape
-                                    << " must be either equal to x or has dimension of 1.";
-  }
-  for (size_t i = 0; i < x_shape.size(); i++) {
-    ICHECK(reporter->AssertEQ(x_shape[i], y_shape[i]))
-        << "x and y must have the same shape: " << x_shape << " vs " << y_shape;
+  auto tensor_ty_condition = GetRef<TensorType>(condition);
+  auto tensor_ty_x = GetRef<TensorType>(x);
+  auto tensor_ty_y = GetRef<TensorType>(y);
 
-    if (i < cond_shape.size()) {
-      ICHECK(reporter->AssertEQ(cond_shape[i], x_shape[i]))
-          << "condition and x must have the same shape: " << cond_shape << " vs " << x_shape;
-    }
-  }
-  if (x_shape.size() == 0) {
-    // if x and y are scalar, the condition shape becomes the output shape
-    reporter->Assign(types[3], TensorType(cond_shape, x->dtype));
-  } else {
-    reporter->Assign(types[3], TensorType(x_shape, x->dtype));
-  }
+  auto b_ty = ConcreteBroadcast(tensor_ty_x, tensor_ty_y, x->dtype);
+  auto ret_ty = ConcreteBroadcast(tensor_ty_condition, b_ty, b_ty->dtype);

Review comment:
       Probably we can add a test case in test_any.py
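Such a test case could pin down the static expectation with `np.broadcast_shapes`, since the rewritten relation composes two pairwise broadcasts, `broadcast(cond, broadcast(x, y))` (shapes here are illustrative; `np.broadcast_shapes` requires NumPy >= 1.20):

```python
import numpy as np

# WhereRel first broadcasts x against y, then the condition against that
# result; np.broadcast_shapes follows the same numpy rule.
b = np.broadcast_shapes((2, 3), (3,))      # x vs y    -> (2, 3)
out = np.broadcast_shapes((1, 2, 3), b)    # cond vs b -> (1, 2, 3)
assert out == (1, 2, 3)
```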







[GitHub] [incubator-tvm] masahi commented on a change in pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#discussion_r512297653



##########
File path: include/tvm/topi/broadcast.h
##########
@@ -69,6 +69,46 @@ inline tvm::te::Tensor broadcast_to(const tvm::te::Tensor& t,
   return tvm::te::compute(oshape, l, name, tag);
 }
 
+inline tvm::te::Tensor broadcast_shape_tensors(const tvm::te::Tensor& shape_tensor1,
+                                               const tvm::te::Tensor& shape_tensor2,
+                                               std::string name = "T_broadcast_shape_tensors",
+                                               std::string tag = kBroadcast) {
+  const auto rank1 = detail::GetConstInt(shape_tensor1->shape[0]);
+  const auto rank2 = detail::GetConstInt(shape_tensor2->shape[0]);
+  const auto out_rank = std::max<int32_t>(rank1, rank2);
+  const tvm::PrimExpr one = tvm::cast(shape_tensor1->dtype, PrimExpr(1));
+
+  auto select_dim = [&](const tvm::te::Tensor& shape_tensor, int rank,
+                        tvm::tir::Var index) -> PrimExpr {
+    if (rank < out_rank) {
+      // if the rank is smaller, dimension 1 is prepended according to
+      // the numpy broadcasting semantics.
+      return tvm::tir::Select(rank - (out_rank - index) < 0, one,
+                              shape_tensor[rank - (out_rank - index)]);
+    } else {
+      // rank == out_rank, safe to index directly
+      return shape_tensor[index];
+    }
+  };
+
+  auto func = [&](tvm::Array<tvm::tir::Var> ovars) {
+    auto index = ovars[0];
+    PrimExpr dim1 = select_dim(shape_tensor1, rank1, index);
+    PrimExpr dim2 = select_dim(shape_tensor2, rank2, index);
+    if (topi::detail::EqualCheck(one, dim1)) {
+      return dim2;
+    } else if (topi::detail::EqualCheck(one, dim2)) {
+      return dim1;
+    }
+    return tvm::max(dim1, dim2);

Review comment:
       It seems possible to generate such assertions using hybrid script, so I'm thinking about porting this C++ TOPI function to Python + hybrid script.







+      return dim2;
+    } else if (topi::detail::EqualCheck(one, dim2)) {
+      return dim1;
+    }
+    return tvm::max(dim1, dim2);

Review comment:
       Two comments (I'd lean to them not needing to be addressed in this PR):
   - Does this (with EqualCheck and C++ if) work as expected on dynamic shapes? Should it?
   - While this seems to work for valid broadcasting (save the potential dynamic caveat), it does fail to reject invalid broadcasting, i.e. when none of the dims is 1 but they're different. Of course, this might be intentional to cover dynamic (if they aren't covered in the two cases above, but it might lead to funny error messages etc., so I think it would be neat to have a comment of what each code path is handling).
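    To make the rank-alignment concrete, here is a plain-Python rendering (an illustration only, not the TE code) of what `select_dim` computes for a lower-rank shape tensor: axes missing on the left behave as dimension 1.

```python
def select_dim(shape, rank, out_rank, index):
    # Mirrors the Select above: src < 0 means this axis does not
    # exist in the lower-rank tensor, so it broadcasts as 1.
    src = rank - (out_rank - index)
    return 1 if src < 0 else shape[src]

# A rank-2 shape (4, 3) viewed as part of a rank-3 broadcast:
print([select_dim([4, 3], 2, 3, i) for i in range(3)])  # [1, 4, 3]
```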







[GitHub] [incubator-tvm] masahi commented on a change in pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#discussion_r511750722



##########
File path: src/relay/op/tensor/transform.cc
##########
@@ -1685,30 +1686,17 @@ bool WhereRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
     return false;
   }
 
-  const auto& cond_shape = condition->shape;
-  const auto& x_shape = x->shape;
-  const auto& y_shape = y->shape;
-  ICHECK(x_shape.size() == y_shape.size()) << "x and y must have the same size";
+  ICHECK_EQ(x->dtype, y->dtype) << "x and y must have the same dtype: " << x->dtype << " vs "
+                                << y->dtype;
 
-  if (cond_shape.size() != x_shape.size()) {
-    ICHECK_EQ(cond_shape.size(), 1) << "Shape of condition " << condition->shape
-                                    << " must be either equal to x or has dimension of 1.";
-  }
-  for (size_t i = 0; i < x_shape.size(); i++) {
-    ICHECK(reporter->AssertEQ(x_shape[i], y_shape[i]))
-        << "x and y must have the same shape: " << x_shape << " vs " << y_shape;
+  auto tensor_ty_condition = GetRef<TensorType>(condition);
+  auto tensor_ty_x = GetRef<TensorType>(x);
+  auto tensor_ty_y = GetRef<TensorType>(y);
 
-    if (i < cond_shape.size()) {
-      ICHECK(reporter->AssertEQ(cond_shape[i], x_shape[i]))
-          << "condition and x must have the same shape: " << cond_shape << " vs " << x_shape;
-    }
-  }
-  if (x_shape.size() == 0) {
-    // if x and y are scalar, the condition shape becomes the output shape
-    reporter->Assign(types[3], TensorType(cond_shape, x->dtype));
-  } else {
-    reporter->Assign(types[3], TensorType(x_shape, x->dtype));
-  }
+  auto b_ty = ConcreteBroadcast(tensor_ty_x, tensor_ty_y, x->dtype);
+  auto ret_ty = ConcreteBroadcast(tensor_ty_condition, b_ty, b_ty->dtype);

Review comment:
       @kevinthesun In this commit, https://github.com/apache/incubator-tvm/pull/6759/commits/5423c70152c088015a6e51c3c9805b12df2e1153, I added tests for `where` + any, and updated `where` shape func. To compute a shape tensor corresponding to the broadcasted shapes from condition, x, and y, I added a new helper function in topi cpp. 
   
    It passes all the new tests I added that exercise the new shape func. But I'm not completely sure the logic in `broadcast_shape_tensors` is correct. Can you have a look?
    
    Also it would be great if I could get more eyes on the broadcasting + shape func logic @junrushao1994 @icemelon9
   
   UPDATE: Removed useless check `topi::detail::EqualCheck(one, dim2)`







[GitHub] [incubator-tvm] masahi commented on a change in pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#discussion_r511770690



##########
File path: include/tvm/topi/broadcast.h
##########
@@ -69,6 +69,46 @@ inline tvm::te::Tensor broadcast_to(const tvm::te::Tensor& t,
   return tvm::te::compute(oshape, l, name, tag);
 }
 
+inline tvm::te::Tensor broadcast_shape_tensors(const tvm::te::Tensor& shape_tensor1,
+                                               const tvm::te::Tensor& shape_tensor2,
+                                               std::string name = "T_broadcast_shape_tensors",
+                                               std::string tag = kBroadcast) {
+  const auto rank1 = detail::GetConstInt(shape_tensor1->shape[0]);
+  const auto rank2 = detail::GetConstInt(shape_tensor2->shape[0]);
+  const auto out_rank = std::max<int32_t>(rank1, rank2);
+  const tvm::PrimExpr one = tvm::cast(shape_tensor1->dtype, PrimExpr(1));
+
+  auto select_dim = [&](const tvm::te::Tensor& shape_tensor, int rank,
+                        tvm::tir::Var index) -> PrimExpr {
+    if (rank < out_rank) {
+      // if the rank is smaller, dimension 1 is prepended according to
+      // the numpy broadcasting semantics.
+      return tvm::tir::Select(rank - (out_rank - index) < 0, one,
+                              shape_tensor[rank - (out_rank - index)]);
+    } else {
+      // rank == out_rank, safe to index directly
+      return shape_tensor[index];
+    }
+  };
+
+  auto func = [&](tvm::Array<tvm::tir::Var> ovars) {
+    auto index = ovars[0];
+    PrimExpr dim1 = select_dim(shape_tensor1, rank1, index);
+    PrimExpr dim2 = select_dim(shape_tensor2, rank2, index);
+    if (topi::detail::EqualCheck(one, dim1)) {
+      return dim2;
+    } else if (topi::detail::EqualCheck(one, dim2)) {
+      return dim1;
+    }
+    return tvm::max(dim1, dim2);

Review comment:
       Yeah, good points :) I think these are subtle but interesting issues.
   
   >     * Does this (with EqualCheck and C++ if) work as expected on dynamic shapes? Should it?
   
   I was wishfully thinking it could optimize certain cases, but since this shape function is only called on dynamic shapes, I realized this condition is useless. For example, while running the new tests in `test_any.py` that exercise this function, this condition never fires. I could replace this static if with a dynamic if checked at runtime, but I don't think it would be any faster than just calling `max` once. So I'll remove them.
   
   >     * While this seems to work for valid broadcasting (save the potential dynamic caveat), it does fail to reject invalid broadcasting
   
   Yes, since this function is called only on dynamic shapes, all I could do is insert code that checks for invalid shapes at runtime and raise an error in some way. I don't know if there is a way to raise exceptions or add runtime assertions using TE primitives.
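   For reference, after dropping the EqualCheck branches the rule boils down to the following plain-Python sketch (an illustration, not TVM code). It also shows the non-rejection issue: an invalid pair like (5, 3) silently becomes 5.

```python
def broadcast_shapes(s1, s2):
    # Right-align the two shape vectors, pad the shorter one with
    # leading 1s, and take max per axis -- like the TE version, this
    # does not reject pairs where neither dim is 1.
    out_rank = max(len(s1), len(s2))
    p1 = [1] * (out_rank - len(s1)) + list(s1)
    p2 = [1] * (out_rank - len(s2)) + list(s2)
    return [max(d1, d2) for d1, d2 in zip(p1, p2)]

print(broadcast_shapes([2, 1, 3], [4, 3]))  # [2, 4, 3]
print(broadcast_shapes([5], [3]))           # [5] -- invalid pair, not caught
```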







[GitHub] [incubator-tvm] masahi commented on a change in pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#discussion_r511705412



##########
File path: src/relay/op/tensor/transform.cc
##########
@@ -1685,30 +1686,17 @@ bool WhereRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
     return false;
   }
 
-  const auto& cond_shape = condition->shape;
-  const auto& x_shape = x->shape;
-  const auto& y_shape = y->shape;
-  ICHECK(x_shape.size() == y_shape.size()) << "x and y must have the same size";
+  ICHECK_EQ(x->dtype, y->dtype) << "x and y must have the same dtype: " << x->dtype << " vs "
+                                << y->dtype;
 
-  if (cond_shape.size() != x_shape.size()) {
-    ICHECK_EQ(cond_shape.size(), 1) << "Shape of condition " << condition->shape
-                                    << " must be either equal to x or has dimension of 1.";
-  }
-  for (size_t i = 0; i < x_shape.size(); i++) {
-    ICHECK(reporter->AssertEQ(x_shape[i], y_shape[i]))
-        << "x and y must have the same shape: " << x_shape << " vs " << y_shape;
+  auto tensor_ty_condition = GetRef<TensorType>(condition);
+  auto tensor_ty_x = GetRef<TensorType>(x);
+  auto tensor_ty_y = GetRef<TensorType>(y);
 
-    if (i < cond_shape.size()) {
-      ICHECK(reporter->AssertEQ(cond_shape[i], x_shape[i]))
-          << "condition and x must have the same shape: " << cond_shape << " vs " << x_shape;
-    }
-  }
-  if (x_shape.size() == 0) {
-    // if x and y are scalar, the condition shape becomes the output shape
-    reporter->Assign(types[3], TensorType(cond_shape, x->dtype));
-  } else {
-    reporter->Assign(types[3], TensorType(x_shape, x->dtype));
-  }
+  auto b_ty = ConcreteBroadcast(tensor_ty_x, tensor_ty_y, x->dtype);
+  auto ret_ty = ConcreteBroadcast(tensor_ty_condition, b_ty, b_ty->dtype);

Review comment:
       I tried running the Relay `where` op tests with the VM executor, but it didn't call `where_shape_func`.







[GitHub] [incubator-tvm] kevinthesun merged pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
kevinthesun merged pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759


   





[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on a change in pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#discussion_r511685770



##########
File path: src/relay/op/tensor/transform.cc
##########
@@ -1685,30 +1686,17 @@ bool WhereRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
     return false;
   }
 
-  const auto& cond_shape = condition->shape;
-  const auto& x_shape = x->shape;
-  const auto& y_shape = y->shape;
-  ICHECK(x_shape.size() == y_shape.size()) << "x and y must have the same size";
+  ICHECK_EQ(x->dtype, y->dtype) << "x and y must have the same dtype: " << x->dtype << " vs "
+                                << y->dtype;
 
-  if (cond_shape.size() != x_shape.size()) {
-    ICHECK_EQ(cond_shape.size(), 1) << "Shape of condition " << condition->shape
-                                    << " must be either equal to x or has dimension of 1.";
-  }
-  for (size_t i = 0; i < x_shape.size(); i++) {
-    ICHECK(reporter->AssertEQ(x_shape[i], y_shape[i]))
-        << "x and y must have the same shape: " << x_shape << " vs " << y_shape;
+  auto tensor_ty_condition = GetRef<TensorType>(condition);
+  auto tensor_ty_x = GetRef<TensorType>(x);
+  auto tensor_ty_y = GetRef<TensorType>(y);
 
-    if (i < cond_shape.size()) {
-      ICHECK(reporter->AssertEQ(cond_shape[i], x_shape[i]))
-          << "condition and x must have the same shape: " << cond_shape << " vs " << x_shape;
-    }
-  }
-  if (x_shape.size() == 0) {
-    // if x and y are scalar, the condition shape becomes the output shape
-    reporter->Assign(types[3], TensorType(cond_shape, x->dtype));
-  } else {
-    reporter->Assign(types[3], TensorType(x_shape, x->dtype));
-  }
+  auto b_ty = ConcreteBroadcast(tensor_ty_x, tensor_ty_y, x->dtype);
+  auto ret_ty = ConcreteBroadcast(tensor_ty_condition, b_ty, b_ty->dtype);

Review comment:
       We also need to modify the shape func? https://github.com/apache/incubator-tvm/blob/main/python/tvm/relay/op/_transform.py#L814







[GitHub] [incubator-tvm] masahi commented on a change in pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#discussion_r511689194



##########
File path: src/relay/op/tensor/transform.cc
##########
@@ -1685,30 +1686,17 @@ bool WhereRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
     return false;
   }
 
-  const auto& cond_shape = condition->shape;
-  const auto& x_shape = x->shape;
-  const auto& y_shape = y->shape;
-  ICHECK(x_shape.size() == y_shape.size()) << "x and y must have the same size";
+  ICHECK_EQ(x->dtype, y->dtype) << "x and y must have the same dtype: " << x->dtype << " vs "
+                                << y->dtype;
 
-  if (cond_shape.size() != x_shape.size()) {
-    ICHECK_EQ(cond_shape.size(), 1) << "Shape of condition " << condition->shape
-                                    << " must be either equal to x or has dimension of 1.";
-  }
-  for (size_t i = 0; i < x_shape.size(); i++) {
-    ICHECK(reporter->AssertEQ(x_shape[i], y_shape[i]))
-        << "x and y must have the same shape: " << x_shape << " vs " << y_shape;
+  auto tensor_ty_condition = GetRef<TensorType>(condition);
+  auto tensor_ty_x = GetRef<TensorType>(x);
+  auto tensor_ty_y = GetRef<TensorType>(y);
 
-    if (i < cond_shape.size()) {
-      ICHECK(reporter->AssertEQ(cond_shape[i], x_shape[i]))
-          << "condition and x must have the same shape: " << cond_shape << " vs " << x_shape;
-    }
-  }
-  if (x_shape.size() == 0) {
-    // if x and y are scalar, the condition shape becomes the output shape
-    reporter->Assign(types[3], TensorType(cond_shape, x->dtype));
-  } else {
-    reporter->Assign(types[3], TensorType(x_shape, x->dtype));
-  }
+  auto b_ty = ConcreteBroadcast(tensor_ty_x, tensor_ty_y, x->dtype);
+  auto ret_ty = ConcreteBroadcast(tensor_ty_condition, b_ty, b_ty->dtype);

Review comment:
       I see, do we have a test that exercises the `where` shape func?
   If not, I don't know how to test this.







[GitHub] [incubator-tvm] masahi commented on a change in pull request #6759: [Relay, TOPI] Complete rewrite of where op to support broadcasting

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6759:
URL: https://github.com/apache/incubator-tvm/pull/6759#discussion_r512504566



##########
File path: python/tvm/topi/broadcast.py
##########
@@ -22,7 +22,7 @@
 def broadcast_to(data, shape):
     """Broadcast the src to the target shape
 
-    We follow the numpy broadcasting rule.
+    We follows the numpy broadcasting rule.

Review comment:
       Yeah, this is actually a typo I fixed yesterday, but I reverted the commit that fixed it (to drop the changes I made to `broadcast.h` and `broadcast.py` inside topi).



