Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/09/15 11:19:24 UTC

[GitHub] [incubator-tvm] BhushanIMG opened a new pull request #6477: [Relay] Add space_to_batch_nd and batch_to_space_nd operators

BhushanIMG opened a new pull request #6477:
URL: https://github.com/apache/incubator-tvm/pull/6477


   Hi,
   
   I am Bhushan from Imagination Technologies.
   
   This PR adds space_to_batch_nd and batch_to_space_nd operators to Relay and TOPI.
   I have added tests for these operators, and the TensorFlow frontend has also been modified to use them.
   
   Please review, @tqchen @masahi @Huyuwei, and let me know your comments. A minimal usage sketch of the new operators is included below for reference.
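   
   The sketch below assumes only the Python signature added in this PR, `relay.nn.space_to_batch_nd(data, block_shape, paddings)`; the matching `relay.nn.batch_to_space_nd` call, its `crops` parameter name, and the shapes are illustrative assumptions rather than text taken from the patch.
   
   ```python
   import tvm
   from tvm import relay
   
   # NHWC input; block_shape and paddings cover the two spatial dimensions.
   data = relay.var("data", shape=(1, 4, 4, 1), dtype="float32")
   s2b = relay.nn.space_to_batch_nd(data, block_shape=[2, 2], paddings=[[0, 0], [0, 0]])
   # With zero padding and matching crops, batch_to_space_nd should undo space_to_batch_nd.
   b2s = relay.nn.batch_to_space_nd(s2b, block_shape=[2, 2], crops=[[0, 0], [0, 0]])
   
   mod = tvm.IRModule.from_expr(relay.Function([data], b2s))
   print(mod)  # the result should again have shape (1, 4, 4, 1)
   ```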
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] BhushanIMG commented on a change in pull request #6477: [Relay] Add space_to_batch_nd and batch_to_space_nd operators

Posted by GitBox <gi...@apache.org>.
BhushanIMG commented on a change in pull request #6477:
URL: https://github.com/apache/incubator-tvm/pull/6477#discussion_r498397727



##########
File path: python/tvm/relay/op/nn/nn.py
##########
@@ -3151,3 +3151,60 @@ def correlation(
     return _make.correlation(
         data1, data2, kernel_size, max_displacement, stride1, stride2, padding, is_multiply, layout
     )
+
+
+def space_to_batch_nd(data, block_shape, paddings):
+    r"""Divide spatial dimensions of the data into a grid of blocks and interleave them into batch dim.

Review comment:
       I have corrected this.







[GitHub] [incubator-tvm] BhushanIMG commented on pull request #6477: [Relay] Add space_to_batch_nd and batch_to_space_nd operators

Posted by GitBox <gi...@apache.org>.
BhushanIMG commented on pull request #6477:
URL: https://github.com/apache/incubator-tvm/pull/6477#issuecomment-728853701


   @FrozenGene Could you please spare some time to review this?





[GitHub] [incubator-tvm] u99127 commented on pull request #6477: [Relay] Add space_to_batch_nd and batch_to_space_nd operators

Posted by GitBox <gi...@apache.org>.
u99127 commented on pull request #6477:
URL: https://github.com/apache/incubator-tvm/pull/6477#issuecomment-694526316


   A very quick review: I think that as part of this we should update the TFLite frontend at the same time.





[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #6477: [Relay] Add space_to_batch_nd and batch_to_space_nd operators

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #6477:
URL: https://github.com/apache/incubator-tvm/pull/6477#discussion_r502164406



##########
File path: include/tvm/topi/nn.h
##########
@@ -459,6 +460,178 @@ inline tvm::te::Tensor group_conv2d_ngchw(const tvm::te::Tensor& I, const tvm::t
   return tvm::te::compute(output_shape, l, name, tag);
 }
 
+/*!
+ * \brief Divide spatial dimensions of the input into a grid of blocks.
+ *
+ * \param data The input tensor.
+ * \param block_shape The size of the spatial block.
+ * \param pad_before The zero-padding size before each spatial dimension.
+ * \param pad_after The zero-padding size after each spatial dimension.
+ * \param name The name of the operation.
+ * \param tag The tag to mark the operation.
+ *
+ * \return A Tensor whose op member is the space_to_batch_nd operation
+ */
+inline tvm::te::Tensor space_to_batch_nd(const tvm::te::Tensor& data,
+                                         const tvm::Array<Integer>& block_shape,
+                                         const tvm::Array<tvm::PrimExpr>& pad_before,
+                                         const tvm::Array<tvm::PrimExpr>& pad_after,
+                                         std::string name = "space_to_batch_nd",
+                                         std::string tag = kInjective) {
+  tvm::te::Tensor padded_t;
+  CHECK_EQ(pad_before.size(), pad_after.size());
+  CHECK_EQ(block_shape.size(), pad_before.size())
+      << "Paddings must be provided for each spatial dimension";
+  tvm::Array<tvm::PrimExpr> pad_before_int32;
+  tvm::Array<tvm::PrimExpr> pad_after_int32;
+
+  // pad size for batch dimension is 0
+  pad_before_int32.push_back(tvm::cast(tvm::DataType::Int(32), 0));
+  pad_after_int32.push_back(tvm::cast(tvm::DataType::Int(32), 0));
+  // insert pad sizes given for spatial dimensions
+  for (const auto& ele : pad_before) {
+    pad_before_int32.push_back(tvm::cast(tvm::DataType::Int(32), ele));
+  }
+  for (const auto& ele : pad_after) {
+    pad_after_int32.push_back(tvm::cast(tvm::DataType::Int(32), ele));
+  }
+
+  // pad the input with paddings provided
+  padded_t = pad(data, pad_before_int32, pad_after_int32, make_const(DataType::Int(32), 0));

Review comment:
       How about adding a `pad_value` parameter, with a default value of 0, for `space_to_batch`? I imagine that if we use it in a quantized model, its pad value will be the `zero_point`, not 0: https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/frontend/tflite.py#L2423
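   
   To illustrate the point about quantized models with a small hedged sketch (plain NumPy rather than TVM code, and the quantization parameters are arbitrary example values): padding a quantized uint8 tensor with 0 instead of its zero point shifts the dequantized border values away from 0.0.
   
   ```python
   import numpy as np
   
   scale, zero_point = 0.05, 128                 # example quantization parameters
   q = np.full((1, 2, 2, 1), 128, np.uint8)      # quantized representation of 0.0
   
   pad_width = ((0, 0), (1, 1), (1, 1), (0, 0))
   pad_with_zero = np.pad(q, pad_width, constant_values=0)          # border dequantizes to -6.4
   pad_with_zp = np.pad(q, pad_width, constant_values=zero_point)   # border dequantizes to 0.0
   
   print((pad_with_zero.astype(np.int32) - zero_point) * scale)
   print((pad_with_zp.astype(np.int32) - zero_point) * scale)
   ```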

##########
File path: python/tvm/topi/testing/space_to_batch_nd.py
##########
@@ -0,0 +1,90 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, line-too-long, unused-variable, too-many-locals
+"""Space to batch ND in python"""
+import numpy as np
+
+
+def space_to_batch_nd_python(data, block_shape, pad_before, pad_after):

Review comment:
       Add a `pad_value=0` parameter here too, as the previous comment said.
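   
   As a rough sketch of what the requested helper could look like once `pad_value` is added (an illustrative NumPy reference that follows the pad/reshape/transpose/reshape steps of the TOPI code above, not the actual contents of the file):
   
   ```python
   import numpy as np
   
   
   def space_to_batch_nd_python(data, block_shape, pad_before, pad_after, pad_value=0):
       """NumPy reference: pad the spatial dims, then interleave blocks into batch."""
       m = len(block_shape)
       # pad only the spatial dimensions; batch and remaining dims get (0, 0)
       pad_width = [(0, 0)] + list(zip(pad_before, pad_after))
       pad_width += [(0, 0)] * (data.ndim - 1 - m)
       padded = np.pad(data, pad_width, constant_values=pad_value)
   
       batch = padded.shape[0]
       # reshape to [batch, p1/b1, b1, ..., pM/bM, bM, remaining...]
       r_shape = [batch]
       for i in range(m):
           r_shape += [padded.shape[i + 1] // block_shape[i], block_shape[i]]
       r_shape += list(padded.shape[m + 1:])
       reshaped = padded.reshape(r_shape)
   
       # move the block axes in front of batch, keep the reduced spatial axes after it
       block_axes = [2 * i + 2 for i in range(m)]
       spatial_axes = [2 * i + 1 for i in range(m)]
       rest_axes = list(range(2 * m + 1, reshaped.ndim))
       permuted = reshaped.transpose(block_axes + [0] + spatial_axes + rest_axes)
   
       # fold the block axes into the batch dimension
       o_shape = [batch * int(np.prod(block_shape))]
       o_shape += [padded.shape[i + 1] // block_shape[i] for i in range(m)]
       o_shape += list(padded.shape[m + 1:])
       return permuted.reshape(o_shape)
   ```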

##########
File path: include/tvm/topi/nn.h
##########
@@ -459,6 +460,178 @@ inline tvm::te::Tensor group_conv2d_ngchw(const tvm::te::Tensor& I, const tvm::t
   return tvm::te::compute(output_shape, l, name, tag);
 }
 
+/*!
+ * \brief Divide spatial dimensions of the input into a grid of blocks.
+ *
+ * \param data The input tensor.
+ * \param block_shape The size of the spatial block.
+ * \param pad_before The zero-padding size before each spatial dimension.
+ * \param pad_after The zero-padding size after each spatial dimension.
+ * \param name The name of the operation.
+ * \param tag The tag to mark the operation.
+ *
+ * \return A Tensor whose op member is the space_to_batch_nd operation
+ */
+inline tvm::te::Tensor space_to_batch_nd(const tvm::te::Tensor& data,
+                                         const tvm::Array<Integer>& block_shape,
+                                         const tvm::Array<tvm::PrimExpr>& pad_before,
+                                         const tvm::Array<tvm::PrimExpr>& pad_after,
+                                         std::string name = "space_to_batch_nd",
+                                         std::string tag = kInjective) {
+  tvm::te::Tensor padded_t;
+  CHECK_EQ(pad_before.size(), pad_after.size());
+  CHECK_EQ(block_shape.size(), pad_before.size())
+      << "Paddings must be provided for each spatial dimension";
+  tvm::Array<tvm::PrimExpr> pad_before_int32;
+  tvm::Array<tvm::PrimExpr> pad_after_int32;
+
+  // pad size for batch dimension is 0
+  pad_before_int32.push_back(tvm::cast(tvm::DataType::Int(32), 0));
+  pad_after_int32.push_back(tvm::cast(tvm::DataType::Int(32), 0));
+  // insert pad sizes given for spatial dimensions
+  for (const auto& ele : pad_before) {
+    pad_before_int32.push_back(tvm::cast(tvm::DataType::Int(32), ele));
+  }
+  for (const auto& ele : pad_after) {
+    pad_after_int32.push_back(tvm::cast(tvm::DataType::Int(32), ele));
+  }
+
+  // pad the input with paddings provided
+  padded_t = pad(data, pad_before_int32, pad_after_int32, make_const(DataType::Int(32), 0));
+
+  auto input_shape = data->shape;
+  auto padded_shape = padded_t->shape;
+
+  // infer shapes
+  tvm::Array<PrimExpr> r_shape;
+  tvm::Array<Integer> axis;
+  tvm::Array<PrimExpr> o_shape;
+
+  size_t M = block_shape.size();

Review comment:
       I think we shouldn't use a capital letter here.

##########
File path: include/tvm/topi/nn.h
##########
@@ -459,6 +460,178 @@ inline tvm::te::Tensor group_conv2d_ngchw(const tvm::te::Tensor& I, const tvm::t
   return tvm::te::compute(output_shape, l, name, tag);
 }
 
+/*!
+ * \brief Divide spatial dimensions of the input into a grid of blocks.
+ *
+ * \param data The input tensor.
+ * \param block_shape The size of the spatial block.
+ * \param pad_before The zero-padding size before each spatial dimension.
+ * \param pad_after The zero-padding size after each spatial dimension.
+ * \param name The name of the operation.
+ * \param tag The tag to mark the operation.
+ *
+ * \return A Tensor whose op member is the space_to_batch_nd operation
+ */
+inline tvm::te::Tensor space_to_batch_nd(const tvm::te::Tensor& data,
+                                         const tvm::Array<Integer>& block_shape,
+                                         const tvm::Array<tvm::PrimExpr>& pad_before,
+                                         const tvm::Array<tvm::PrimExpr>& pad_after,
+                                         std::string name = "space_to_batch_nd",
+                                         std::string tag = kInjective) {
+  tvm::te::Tensor padded_t;
+  CHECK_EQ(pad_before.size(), pad_after.size());
+  CHECK_EQ(block_shape.size(), pad_before.size())
+      << "Paddings must be provided for each spatial dimension";
+  tvm::Array<tvm::PrimExpr> pad_before_int32;
+  tvm::Array<tvm::PrimExpr> pad_after_int32;
+
+  // pad size for batch dimension is 0
+  pad_before_int32.push_back(tvm::cast(tvm::DataType::Int(32), 0));
+  pad_after_int32.push_back(tvm::cast(tvm::DataType::Int(32), 0));
+  // insert pad sizes given for spatial dimensions
+  for (const auto& ele : pad_before) {
+    pad_before_int32.push_back(tvm::cast(tvm::DataType::Int(32), ele));
+  }
+  for (const auto& ele : pad_after) {
+    pad_after_int32.push_back(tvm::cast(tvm::DataType::Int(32), ele));
+  }
+
+  // pad the input with paddings provided
+  padded_t = pad(data, pad_before_int32, pad_after_int32, make_const(DataType::Int(32), 0));
+
+  auto input_shape = data->shape;
+  auto padded_shape = padded_t->shape;
+
+  // infer shapes
+  tvm::Array<PrimExpr> r_shape;
+  tvm::Array<Integer> axis;
+  tvm::Array<PrimExpr> o_shape;
+
+  size_t M = block_shape.size();
+  int batch = static_cast<int>(GetConstInt(input_shape[0]));
+  tvm::PrimExpr block_shape_prod(1);
+  r_shape.push_back(batch);
+
+  for (size_t i = 1; i <= M; i++) {
+    int padded_input = static_cast<int>(GetConstInt(padded_shape[i]));
+    int block_size = static_cast<int>(GetConstInt(block_shape[i - 1]));
+    CHECK_EQ((padded_input % block_size), 0)
+        << "(" << i
+        << ")th "
+           "Input dimension after padding ("
+        << padded_input << ")"
+        << " must be divisible by its block size (" << block_size << ")";
+
+    r_shape.push_back(div(padded_shape[i], block_shape[i - 1]));
+    r_shape.push_back(block_shape[i - 1]);
+    block_shape_prod *= block_shape[i - 1];
+    axis.push_back(Integer(r_shape.size() - 1));  // index of block_shape[i - 1]
+  }
+
+  size_t n = axis.size();
+  axis.push_back(0);  // batch is at index 0
+  // index of (padded_shape[i] / block_shape[i - 1]) in r_shape
+  for (size_t i = 0; i < n; i++) {
+    axis.push_back(static_cast<int>(GetConstInt(axis[i] - 1)));
+  }
+  o_shape.push_back(tvm::PrimExpr(batch) * block_shape_prod);
+  for (size_t i = 1; i <= M; i++) {
+    o_shape.push_back(div(padded_shape[i], block_shape[i - 1]));
+  }
+  // append remaining shape
+  for (size_t i = M + 1; i < input_shape.size(); i++) {
+    r_shape.push_back(input_shape[i]);
+    axis.push_back(Integer(r_shape.size() - 1));  // index of remaining shape in r_shape
+    o_shape.push_back(input_shape[i]);
+  }
+
+  tvm::te::Tensor output = reshape(padded_t, r_shape);
+  output = transpose(output, axis);
+  output = reshape(output, o_shape);
+
+  return output;
+}
+
+/*!
+ * \brief Reshape the batch dimension into spatial dimensions.
+ *
+ * \param data The input tensor.
+ * \param block_shape The size of the spatial block.
+ * \param crop_begin_list The begin crop size for each spatial dimension.
+ * \param crop_end_list The end crop size for each spatial dimension.
+ * \param name The name of the operation.
+ * \param tag The tag to mark the operation.
+ *
+ * \return A Tensor whose op member is the batch_to_space_nd operation
+ */
+inline tvm::te::Tensor batch_to_space_nd(const tvm::te::Tensor& data,
+                                         const tvm::Array<Integer>& block_shape,
+                                         const tvm::Array<tvm::PrimExpr>& crop_begin_list,
+                                         const tvm::Array<tvm::PrimExpr>& crop_end_list,
+                                         std::string name = "batch_to_space_nd",
+                                         std::string tag = kInjective) {
+  // Construct shapes for reshape and transpose operation
+  Array<PrimExpr> in_shape = data->shape;
+  Array<PrimExpr> r_shape;
+  Array<Integer> axis;
+  size_t M = block_shape.size();
+  size_t N = in_shape.size();

Review comment:
       ditto







[GitHub] [incubator-tvm] yongwww commented on a change in pull request #6477: [Relay] Add space_to_batch_nd and batch_to_space_nd operators

Posted by GitBox <gi...@apache.org>.
yongwww commented on a change in pull request #6477:
URL: https://github.com/apache/incubator-tvm/pull/6477#discussion_r494665076



##########
File path: python/tvm/relay/op/nn/nn.py
##########
@@ -3151,3 +3151,60 @@ def correlation(
     return _make.correlation(
         data1, data2, kernel_size, max_displacement, stride1, stride2, padding, is_multiply, layout
     )
+
+
+def space_to_batch_nd(data, block_shape, paddings):
+    r"""Divide spatial dimensions of the data into a grid of blocks and interleave them into batch dim.

Review comment:
       CI failed because this line is too long.







[GitHub] [incubator-tvm] zhiics commented on pull request #6477: [Relay] Add space_to_batch_nd and batch_to_space_nd operators

Posted by GitBox <gi...@apache.org>.
zhiics commented on pull request #6477:
URL: https://github.com/apache/incubator-tvm/pull/6477#issuecomment-694946881


   cc @kevinthesun @yongwww @srkreddy1238, please help review. Thanks. 





[GitHub] [incubator-tvm] BhushanIMG commented on a change in pull request #6477: [Relay] Add space_to_batch_nd and batch_to_space_nd operators

Posted by GitBox <gi...@apache.org>.
BhushanIMG commented on a change in pull request #6477:
URL: https://github.com/apache/incubator-tvm/pull/6477#discussion_r513479696



##########
File path: include/tvm/topi/nn.h
##########
@@ -459,6 +460,178 @@ inline tvm::te::Tensor group_conv2d_ngchw(const tvm::te::Tensor& I, const tvm::t
   return tvm::te::compute(output_shape, l, name, tag);
 }
 
+/*!
+ * \brief Divide spatial dimensions of the input into a grid of blocks.
+ *
+ * \param data The input tensor.
+ * \param block_shape The size of the spatial block.
+ * \param pad_before The zero-padding size before each spatial dimension.
+ * \param pad_after The zero-padding size after each spatial dimension.
+ * \param name The name of the operation.
+ * \param tag The tag to mark the operation.
+ *
+ * \return A Tensor whose op member is the space_to_batch_nd operation
+ */
+inline tvm::te::Tensor space_to_batch_nd(const tvm::te::Tensor& data,
+                                         const tvm::Array<Integer>& block_shape,
+                                         const tvm::Array<tvm::PrimExpr>& pad_before,
+                                         const tvm::Array<tvm::PrimExpr>& pad_after,
+                                         std::string name = "space_to_batch_nd",
+                                         std::string tag = kInjective) {
+  tvm::te::Tensor padded_t;
+  CHECK_EQ(pad_before.size(), pad_after.size());
+  CHECK_EQ(block_shape.size(), pad_before.size())
+      << "Paddings must be provided for each spatial dimension";
+  tvm::Array<tvm::PrimExpr> pad_before_int32;
+  tvm::Array<tvm::PrimExpr> pad_after_int32;
+
+  // pad size for batch dimension is 0
+  pad_before_int32.push_back(tvm::cast(tvm::DataType::Int(32), 0));
+  pad_after_int32.push_back(tvm::cast(tvm::DataType::Int(32), 0));
+  // insert pad sizes given for spatial dimensions
+  for (const auto& ele : pad_before) {
+    pad_before_int32.push_back(tvm::cast(tvm::DataType::Int(32), ele));
+  }
+  for (const auto& ele : pad_after) {
+    pad_after_int32.push_back(tvm::cast(tvm::DataType::Int(32), ele));
+  }
+
+  // pad the input with paddings provided
+  padded_t = pad(data, pad_before_int32, pad_after_int32, make_const(DataType::Int(32), 0));

Review comment:
       @FrozenGene Added a `pad_value` parameter to the space_to_batch_nd operator, with a default value of 0.
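   
   For context, a hedged sketch of roughly how the updated Python wrapper might look; the body is an assumption modeled on the other op wrappers in nn.py, and only the `pad_value=0` default comes from this discussion.
   
   ```python
   def space_to_batch_nd(data, block_shape, paddings, pad_value=0):
       """Divide spatial dimensions of the data into blocks and interleave them into batch.
   
       pad_value fills the padded region; for quantized inputs this would typically
       be the zero point rather than 0.
       """
       return _make.space_to_batch_nd(data, block_shape, paddings, pad_value)
   ```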







[GitHub] [incubator-tvm] BhushanIMG commented on a change in pull request #6477: [Relay] Add space_to_batch_nd and batch_to_space_nd operators

Posted by GitBox <gi...@apache.org>.
BhushanIMG commented on a change in pull request #6477:
URL: https://github.com/apache/incubator-tvm/pull/6477#discussion_r513479979



##########
File path: include/tvm/topi/nn.h
##########
@@ -459,6 +460,178 @@ inline tvm::te::Tensor group_conv2d_ngchw(const tvm::te::Tensor& I, const tvm::t
   return tvm::te::compute(output_shape, l, name, tag);
 }
 
+/*!
+ * \brief Divide spatial dimensions of the input into a grid of blocks.
+ *
+ * \param data The input tensor.
+ * \param block_shape The size of the spatial block.
+ * \param pad_before The zero-padding size before each spatial dimension.
+ * \param pad_after The zero-padding size after each spatial dimension.
+ * \param name The name of the operation.
+ * \param tag The tag to mark the operation.
+ *
+ * \return A Tensor whose op member is the space_to_batch_nd operation
+ */
+inline tvm::te::Tensor space_to_batch_nd(const tvm::te::Tensor& data,
+                                         const tvm::Array<Integer>& block_shape,
+                                         const tvm::Array<tvm::PrimExpr>& pad_before,
+                                         const tvm::Array<tvm::PrimExpr>& pad_after,
+                                         std::string name = "space_to_batch_nd",
+                                         std::string tag = kInjective) {
+  tvm::te::Tensor padded_t;
+  CHECK_EQ(pad_before.size(), pad_after.size());
+  CHECK_EQ(block_shape.size(), pad_before.size())
+      << "Paddings must be provided for each spatial dimension";
+  tvm::Array<tvm::PrimExpr> pad_before_int32;
+  tvm::Array<tvm::PrimExpr> pad_after_int32;
+
+  // pad size for batch dimension is 0
+  pad_before_int32.push_back(tvm::cast(tvm::DataType::Int(32), 0));
+  pad_after_int32.push_back(tvm::cast(tvm::DataType::Int(32), 0));
+  // insert pad sizes given for spatial dimensions
+  for (const auto& ele : pad_before) {
+    pad_before_int32.push_back(tvm::cast(tvm::DataType::Int(32), ele));
+  }
+  for (const auto& ele : pad_after) {
+    pad_after_int32.push_back(tvm::cast(tvm::DataType::Int(32), ele));
+  }
+
+  // pad the input with paddings provided
+  padded_t = pad(data, pad_before_int32, pad_after_int32, make_const(DataType::Int(32), 0));
+
+  auto input_shape = data->shape;
+  auto padded_shape = padded_t->shape;
+
+  // infer shapes
+  tvm::Array<PrimExpr> r_shape;
+  tvm::Array<Integer> axis;
+  tvm::Array<PrimExpr> o_shape;
+
+  size_t M = block_shape.size();

Review comment:
       Corrected.

##########
File path: include/tvm/topi/nn.h
##########
@@ -459,6 +460,178 @@ inline tvm::te::Tensor group_conv2d_ngchw(const tvm::te::Tensor& I, const tvm::t
   return tvm::te::compute(output_shape, l, name, tag);
 }
 
+/*!
+ * \brief Divide spatial dimensions of the input into a grid of blocks.
+ *
+ * \param data The input tensor.
+ * \param block_shape The size of the spatial block.
+ * \param pad_before The zero-padding size before each spatial dimension.
+ * \param pad_after The zero-padding size after each spatial dimension.
+ * \param name The name of the operation.
+ * \param tag The tag to mark the operation.
+ *
+ * \return A Tensor whose op member is the space_to_batch_nd operation
+ */
+inline tvm::te::Tensor space_to_batch_nd(const tvm::te::Tensor& data,
+                                         const tvm::Array<Integer>& block_shape,
+                                         const tvm::Array<tvm::PrimExpr>& pad_before,
+                                         const tvm::Array<tvm::PrimExpr>& pad_after,
+                                         std::string name = "space_to_batch_nd",
+                                         std::string tag = kInjective) {
+  tvm::te::Tensor padded_t;
+  CHECK_EQ(pad_before.size(), pad_after.size());
+  CHECK_EQ(block_shape.size(), pad_before.size())
+      << "Paddings must be provided for each spatial dimension";
+  tvm::Array<tvm::PrimExpr> pad_before_int32;
+  tvm::Array<tvm::PrimExpr> pad_after_int32;
+
+  // pad size for batch dimension is 0
+  pad_before_int32.push_back(tvm::cast(tvm::DataType::Int(32), 0));
+  pad_after_int32.push_back(tvm::cast(tvm::DataType::Int(32), 0));
+  // insert pad sizes given for spatial dimensions
+  for (const auto& ele : pad_before) {
+    pad_before_int32.push_back(tvm::cast(tvm::DataType::Int(32), ele));
+  }
+  for (const auto& ele : pad_after) {
+    pad_after_int32.push_back(tvm::cast(tvm::DataType::Int(32), ele));
+  }
+
+  // pad the input with paddings provided
+  padded_t = pad(data, pad_before_int32, pad_after_int32, make_const(DataType::Int(32), 0));
+
+  auto input_shape = data->shape;
+  auto padded_shape = padded_t->shape;
+
+  // infer shapes
+  tvm::Array<PrimExpr> r_shape;
+  tvm::Array<Integer> axis;
+  tvm::Array<PrimExpr> o_shape;
+
+  size_t M = block_shape.size();
+  int batch = static_cast<int>(GetConstInt(input_shape[0]));
+  tvm::PrimExpr block_shape_prod(1);
+  r_shape.push_back(batch);
+
+  for (size_t i = 1; i <= M; i++) {
+    int padded_input = static_cast<int>(GetConstInt(padded_shape[i]));
+    int block_size = static_cast<int>(GetConstInt(block_shape[i - 1]));
+    CHECK_EQ((padded_input % block_size), 0)
+        << "(" << i
+        << ")th "
+           "Input dimension after padding ("
+        << padded_input << ")"
+        << " must be divisible by its block size (" << block_size << ")";
+
+    r_shape.push_back(div(padded_shape[i], block_shape[i - 1]));
+    r_shape.push_back(block_shape[i - 1]);
+    block_shape_prod *= block_shape[i - 1];
+    axis.push_back(Integer(r_shape.size() - 1));  // index of block_shape[i - 1]
+  }
+
+  size_t n = axis.size();
+  axis.push_back(0);  // batch is at index 0
+  // index of (padded_shape[i] / block_shape[i - 1]) in r_shape
+  for (size_t i = 0; i < n; i++) {
+    axis.push_back(static_cast<int>(GetConstInt(axis[i] - 1)));
+  }
+  o_shape.push_back(tvm::PrimExpr(batch) * block_shape_prod);
+  for (size_t i = 1; i <= M; i++) {
+    o_shape.push_back(div(padded_shape[i], block_shape[i - 1]));
+  }
+  // append remaining shape
+  for (size_t i = M + 1; i < input_shape.size(); i++) {
+    r_shape.push_back(input_shape[i]);
+    axis.push_back(Integer(r_shape.size() - 1));  // index of remaining shape in r_shape
+    o_shape.push_back(input_shape[i]);
+  }
+
+  tvm::te::Tensor output = reshape(padded_t, r_shape);
+  output = transpose(output, axis);
+  output = reshape(output, o_shape);
+
+  return output;
+}
+
+/*!
+ * \brief Reshape the batch dimension into spatial dimensions.
+ *
+ * \param data The input tensor.
+ * \param block_shape The size of the spatial block.
+ * \param crop_begin_list The begin crop size for each spatial dimension.
+ * \param crop_end_list The end crop size for each spatial dimension.
+ * \param name The name of the operation.
+ * \param tag The tag to mark the operation.
+ *
+ * \return A Tensor whose op member is the batch_to_space_nd operation
+ */
+inline tvm::te::Tensor batch_to_space_nd(const tvm::te::Tensor& data,
+                                         const tvm::Array<Integer>& block_shape,
+                                         const tvm::Array<tvm::PrimExpr>& crop_begin_list,
+                                         const tvm::Array<tvm::PrimExpr>& crop_end_list,
+                                         std::string name = "batch_to_space_nd",
+                                         std::string tag = kInjective) {
+  // Construct shapes for reshape and transpose operation
+  Array<PrimExpr> in_shape = data->shape;
+  Array<PrimExpr> r_shape;
+  Array<Integer> axis;
+  size_t M = block_shape.size();
+  size_t N = in_shape.size();

Review comment:
       Corrected.







[GitHub] [incubator-tvm] BhushanIMG commented on a change in pull request #6477: [Relay] Add space_to_batch_nd and batch_to_space_nd operators

Posted by GitBox <gi...@apache.org>.
BhushanIMG commented on a change in pull request #6477:
URL: https://github.com/apache/incubator-tvm/pull/6477#discussion_r513480331



##########
File path: python/tvm/topi/testing/space_to_batch_nd.py
##########
@@ -0,0 +1,90 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, line-too-long, unused-variable, too-many-locals
+"""Space to batch ND in python"""
+import numpy as np
+
+
+def space_to_batch_nd_python(data, block_shape, pad_before, pad_after):

Review comment:
       Done.







[GitHub] [incubator-tvm] FrozenGene merged pull request #6477: [Relay] Add space_to_batch_nd and batch_to_space_nd operators

Posted by GitBox <gi...@apache.org>.
FrozenGene merged pull request #6477:
URL: https://github.com/apache/incubator-tvm/pull/6477


   





[GitHub] [incubator-tvm] FrozenGene commented on pull request #6477: [Relay] Add space_to_batch_nd and batch_to_space_nd operators

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on pull request #6477:
URL: https://github.com/apache/incubator-tvm/pull/6477#issuecomment-728913714


   Thanks, @BhushanIMG.





[GitHub] [incubator-tvm] tqchen commented on pull request #6477: [Relay] Add space_to_batch_nd and batch_to_space_nd operators

Posted by GitBox <gi...@apache.org>.
tqchen commented on pull request #6477:
URL: https://github.com/apache/incubator-tvm/pull/6477#issuecomment-704983895


   Also cc @masahi @FrozenGene; please help manage this PR.





[GitHub] [incubator-tvm] BhushanIMG commented on pull request #6477: [Relay] Add space_to_batch_nd and batch_to_space_nd operators

Posted by GitBox <gi...@apache.org>.
BhushanIMG commented on pull request #6477:
URL: https://github.com/apache/incubator-tvm/pull/6477#issuecomment-704684302


   > A very quick review: I think that as part of this we should update the TFLite frontend at the same time.
   
   @u99127 Modified the TFLite frontend to use the batch_to_space_nd and space_to_batch_nd operators.
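   
   A hedged sketch of what such a converter change might look like; the method and helper names (`convert_space_to_batch_nd`, `get_input_tensors`, `get_tensor_value`, `get_expr`) are assumptions about the frontend's structure, and only `_op.nn.space_to_batch_nd` itself is introduced by this PR.
   
   ```python
   def convert_space_to_batch_nd(self, op):
       """Map a TFLite SPACE_TO_BATCH_ND op onto the new Relay operator (sketch)."""
       input_tensors = self.get_input_tensors(op)
       data = self.get_expr(input_tensors[0].tensor_idx)
   
       # block_shape and paddings are constant tensors in the TFLite graph
       block_shape = list(self.get_tensor_value(input_tensors[1]))
       paddings = self.get_tensor_value(input_tensors[2]).tolist()
   
       return _op.nn.space_to_batch_nd(data, block_shape, paddings)
   ```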
   

