Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/09/02 09:17:23 UTC

[GitHub] [incubator-tvm] jainris opened a new pull request #6381: [TFLite] Support for 'SAME' Padding option for TRANSPOSE_CONV operator of TFLite.

jainris opened a new pull request #6381:
URL: https://github.com/apache/incubator-tvm/pull/6381


   * Added support for the 'SAME' padding option of the TRANSPOSE_CONV operator,
     for all valid kernel sizes.
   * Added tests for the 'SAME' padding option of the TRANSPOSE_CONV operator.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] jainris commented on pull request #6381: [TFLite] Support for 'SAME' Padding option for TRANSPOSE_CONV operator of TFLite.

Posted by GitBox <gi...@apache.org>.
jainris commented on pull request #6381:
URL: https://github.com/apache/incubator-tvm/pull/6381#issuecomment-686318269


   cc @anijain2305 @u99127 @mbaret @FrozenGene @tqchen





[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #6381: [TFLite] Support for 'SAME' Padding option for TRANSPOSE_CONV operator of TFLite.

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #6381:
URL: https://github.com/apache/incubator-tvm/pull/6381#discussion_r482808667



##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -2742,13 +2742,16 @@ def convert_transpose_conv(self, op):
         assert out_channels == output_shape_value[3], \
             "Output channel in the filter should match to channel in the output_shape"
 
-        # TF frontend supports 'SAME' padding for kernel 1x1 only. Lets do the same here
         if padding == Padding.SAME:
-            assert (kernel_h, kernel_w) == (1, 1), \
-                "SAME padding is supported for kernel (1,1) only"
+            pad_top, pad_bottom = get_pad_value(input_shape[1], kernel_h, stride_h)
+            pad_left, pad_right = get_pad_value(input_shape[2], kernel_w, stride_w)
+            pads = (pad_top, pad_left, pad_bottom, pad_right)

Review comment:
       I think it is fine to reuse `padding`, as we do at https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/frontend/tflite.py#L1906. We could do `_op.nn.conv2d_transpose(..., padding=padding)`. I think this is common usage in Python.
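For context, the `get_pad_value` helper used in this hunk follows TensorFlow's SAME-padding convention (pad so the output spatial size is `ceil(input / stride)`, splitting any odd padding toward the trailing edge). Below is a minimal self-contained sketch of that computation; it mirrors what the helper in tflite.py is expected to do, but is a reconstruction, not the exact upstream source.

```python
import math


def get_pad_value(data, kernel, stride):
    """Compute (pad_before, pad_after) for TensorFlow-style SAME padding.

    Sketch of the helper referenced in the diff above; the exact
    upstream implementation may differ in minor details.
    """
    # SAME padding targets an output size of ceil(data / stride).
    out = int(math.ceil(float(data) / float(stride)))
    # Total padding needed so the kernel covers the input at that output size.
    pad = max(0, (out - 1) * stride + kernel - data)
    # Odd totals put the extra pixel on the trailing (bottom/right) side.
    pad_before = pad // 2
    pad_after = pad - pad_before
    return pad_before, pad_after


# Height 5, kernel 3, stride 2: one pixel of padding on each side.
print(get_pad_value(5, 3, 2))   # (1, 1)
print(get_pad_value(10, 3, 1))  # (1, 1)
print(get_pad_value(7, 1, 2))   # (0, 0) -- 1x1 kernels need no padding
```

The last case shows why the previous code could hard-code the 1x1 restriction: for a 1x1 kernel, SAME padding is always zero, so no computation was needed.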







[GitHub] [incubator-tvm] jainris commented on a change in pull request #6381: [TFLite] Support for 'SAME' Padding option for TRANSPOSE_CONV operator of TFLite.

Posted by GitBox <gi...@apache.org>.
jainris commented on a change in pull request #6381:
URL: https://github.com/apache/incubator-tvm/pull/6381#discussion_r482804535



##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -2742,13 +2742,16 @@ def convert_transpose_conv(self, op):
         assert out_channels == output_shape_value[3], \
             "Output channel in the filter should match to channel in the output_shape"
 
-        # TF frontend supports 'SAME' padding for kernel 1x1 only. Lets do the same here
         if padding == Padding.SAME:
-            assert (kernel_h, kernel_w) == (1, 1), \
-                "SAME padding is supported for kernel (1,1) only"
+            pad_top, pad_bottom = get_pad_value(input_shape[1], kernel_h, stride_h)
+            pad_left, pad_right = get_pad_value(input_shape[2], kernel_w, stride_w)
+            pads = (pad_top, pad_left, pad_bottom, pad_right)

Review comment:
       `padding` is already being used to denote the `padding` attribute of the TFLite operator.
   Although it is not used after this block (so it could be reused), I wanted to avoid any confusion caused by the reuse.







[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #6381: [TFLite] Support for 'SAME' Padding option for TRANSPOSE_CONV operator of TFLite.

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #6381:
URL: https://github.com/apache/incubator-tvm/pull/6381#discussion_r482795512



##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -2696,11 +2696,11 @@ def convert_transpose_conv(self, op):
 
         # Input (data) Tensor. NHWC layout
         input_tensor = input_tensors[2]
-        _, _, _, input_c = input_tensor.tensor.ShapeAsNumpy()
+        input_shape = input_tensor.tensor.ShapeAsNumpy()

Review comment:
       Don't you think creating `_, input_h, input_w, input_c` makes more sense if you want to use `input_shape[1]` / `input_shape[2]` / `input_shape[3]`?

##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -2742,13 +2742,16 @@ def convert_transpose_conv(self, op):
         assert out_channels == output_shape_value[3], \
             "Output channel in the filter should match to channel in the output_shape"
 
-        # TF frontend supports 'SAME' padding for kernel 1x1 only. Lets do the same here
         if padding == Padding.SAME:
-            assert (kernel_h, kernel_w) == (1, 1), \
-                "SAME padding is supported for kernel (1,1) only"
+            pad_top, pad_bottom = get_pad_value(input_shape[1], kernel_h, stride_h)
+            pad_left, pad_right = get_pad_value(input_shape[2], kernel_w, stride_w)
+            pads = (pad_top, pad_left, pad_bottom, pad_right)

Review comment:
       Better to name `pads` as `padding`.







[GitHub] [incubator-tvm] tqchen merged pull request #6381: [TFLite] Support for 'SAME' Padding option for TRANSPOSE_CONV operator of TFLite.

Posted by GitBox <gi...@apache.org>.
tqchen merged pull request #6381:
URL: https://github.com/apache/incubator-tvm/pull/6381


   





[GitHub] [incubator-tvm] jainris commented on a change in pull request #6381: [TFLite] Support for 'SAME' Padding option for TRANSPOSE_CONV operator of TFLite.

Posted by GitBox <gi...@apache.org>.
jainris commented on a change in pull request #6381:
URL: https://github.com/apache/incubator-tvm/pull/6381#discussion_r482802740



##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -2696,11 +2696,11 @@ def convert_transpose_conv(self, op):
 
         # Input (data) Tensor. NHWC layout
         input_tensor = input_tensors[2]
-        _, _, _, input_c = input_tensor.tensor.ShapeAsNumpy()
+        input_shape = input_tensor.tensor.ShapeAsNumpy()

Review comment:
       Thanks for reviewing.
   That seems reasonable. Will do it.
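The suggestion above replaces positional indexing with tuple unpacking of the NHWC shape. A small illustrative sketch (the shape values here are hypothetical, standing in for whatever `tensor.ShapeAsNumpy()` returns):

```python
import numpy as np

# Hypothetical NHWC input shape, as ShapeAsNumpy() might return it.
input_shape = np.array([1, 32, 32, 16])

# Indexing, as in the original patch:
h, w = input_shape[1], input_shape[2]

# Reviewer's suggestion: unpack into named dimensions up front,
# so later code reads input_h / input_w / input_c instead of indices.
_, input_h, input_w, input_c = input_shape

print(input_h, input_w, input_c)  # 32 32 16
```

The unpacked form also fails loudly if the tensor is not rank 4, which is a mild safety benefit over silent indexing.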



