Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/12/10 14:23:46 UTC

[GitHub] [tvm] ekalda commented on a change in pull request #9682: [CMSIS-NN] Fixed return data type from pattern callback function

ekalda commented on a change in pull request #9682:
URL: https://github.com/apache/tvm/pull/9682#discussion_r766624479



##########
File path: tests/python/contrib/test_cmsisnn/utils.py
##########
@@ -86,18 +86,20 @@ def make_module(func):
     return mod
 
 
-def get_same_padding(data, kernel, dilation, stride, cmsisnn_padding=True):
+def get_padding(data, kernel, dilation, stride, padding):
     """Provides CMSIS-NN padding when output dim == input dim"""

Review comment:
      For my own enlightenment: what is CMSIS-NN padding, and why is it provided only when `output dim == input dim`?
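
For context, "SAME" padding in the TensorFlow sense pads the input so that, with stride 1, the output spatial dimension equals the input dimension. A minimal sketch of that per-dimension computation is below; the helper name and body are illustrative only and are not taken from the actual `get_padding` implementation in this PR.

    import math

    def same_padding_1d(in_dim, kernel, dilation, stride):
        # Effective kernel extent once dilation is applied.
        effective_kernel = (kernel - 1) * dilation + 1
        # "SAME" targets ceil(in_dim / stride) output elements.
        out_dim = math.ceil(in_dim / stride)
        # Total padding needed to reach that output size, split so that
        # any odd remainder goes to the end of the dimension.
        total = max((out_dim - 1) * stride + effective_kernel - in_dim, 0)
        before = total // 2
        return before, total - before

    # With stride 1 the output dim equals the input dim, e.g. a 3x3 window
    # on a 28-wide input needs (1, 1) padding on that axis.
    assert same_padding_1d(28, 3, 1, 1) == (1, 1)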

##########
File path: tests/python/contrib/test_cmsisnn/utils.py
##########
@@ -86,18 +86,20 @@ def make_module(func):
     return mod
 
 
-def get_same_padding(data, kernel, dilation, stride, cmsisnn_padding=True):
+def get_padding(data, kernel, dilation, stride, padding):

Review comment:
      What does `data` represent here? If it is the shape of a tensor, then I think the parameter name should reflect that (or some documentation could help).
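
For illustration, a short docstring along these lines would answer the question; the parameter descriptions below are assumptions based on how the helper is called in `test_pooling.py`, not the actual documentation from this PR.

    def get_padding(data, kernel, dilation, stride, padding):
        """Compute explicit padding for a CMSIS-NN operator.

        Parameters
        ----------
        data : tuple of int
            Spatial shape (height, width) of the input tensor.
        kernel : tuple of int
            Spatial shape (height, width) of the kernel or pool window.
        dilation : tuple of int
            Dilation along (height, width).
        stride : tuple of int
            Stride along (height, width).
        padding : str
            Padding mode, e.g. "SAME" or "VALID".
        """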

##########
File path: tests/python/contrib/test_cmsisnn/test_pooling.py
##########
@@ -45,10 +45,9 @@
 def make_model(pool_op, shape, pool_size, strides, padding, dtype, scale, zero_point, relu_type):
     """Return a model and any parameters it may have"""
     op = relay.var("input", shape=shape, dtype=dtype)
-    pad_ = (0, 0, 0, 0)
-    if padding == "SAME":
-        dilation = (1, 1)
-        pad_ = get_same_padding((shape[1], shape[2]), pool_size, dilation, strides)
+    dilation = (1, 1)
+    pad_, result = get_padding((shape[1], shape[2]), pool_size, dilation, strides, padding)
+    if result:
         op = relay.nn.pad(
             op,
             pad_width=[(0, 0), (pad_[0], pad_[2]), (pad_[1], pad_[3]), (0, 0)],

Review comment:
      For my own enlightenment: why do we add a pad operator to the graph instead of using the padding attribute of pool2d?
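
Both forms can be expressed in Relay; a hedged side-by-side sketch is below (the shape, window size and padding values are made up for illustration and are not taken from this test).

    from tvm import relay

    inp = relay.var("input", shape=(1, 28, 28, 3), dtype="int8")

    # Option 1: an explicit pad operator in front of the pooling op,
    # which is what make_model in this PR does.
    padded = relay.nn.pad(inp, pad_width=[(0, 0), (1, 1), (1, 1), (0, 0)])
    pooled_explicit = relay.nn.max_pool2d(padded, pool_size=(3, 3), layout="NHWC")

    # Option 2: the same padding carried by pool2d's own padding attribute.
    pooled_attr = relay.nn.max_pool2d(
        inp, pool_size=(3, 3), padding=(1, 1, 1, 1), layout="NHWC"
    )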




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org