Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/09/17 21:34:49 UTC

[GitHub] [incubator-tvm] kevinthesun opened a new pull request #6509: [Relay]Allow dynamic batch for arm conv2d

kevinthesun opened a new pull request #6509:
URL: https://github.com/apache/incubator-tvm/pull/6509


   This PR enables PyTorch object detection (OD) model compilation for Arm CPU.
   
   @zhiics @yongwww @icemelon9 
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on pull request #6509: [Relay]Allow dynamic batch for arm conv2d

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on pull request #6509:
URL: https://github.com/apache/incubator-tvm/pull/6509#issuecomment-694524518


   @zhiics Yeah. Dynamic shape kernel optimization can look quite different from the current tuning system. We will have a clearer roadmap for this once more dynamic shape op features are ready.





[GitHub] [incubator-tvm] zhiics commented on pull request #6509: [Relay]Allow dynamic batch for arm conv2d

Posted by GitBox <gi...@apache.org>.
zhiics commented on pull request #6509:
URL: https://github.com/apache/incubator-tvm/pull/6509#issuecomment-694940550


   Thanks @kevinthesun @yongwww @junrushao1994 





[GitHub] [incubator-tvm] zhiics merged pull request #6509: [Relay]Allow dynamic batch for arm conv2d

Posted by GitBox <gi...@apache.org>.
zhiics merged pull request #6509:
URL: https://github.com/apache/incubator-tvm/pull/6509


   





[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #6509: [Relay]Allow dynamic batch for arm conv2d

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on a change in pull request #6509:
URL: https://github.com/apache/incubator-tvm/pull/6509#discussion_r490587838



##########
File path: tests/python/relay/test_any.py
##########
@@ -423,6 +424,48 @@ def test_any_reshape_like():
     check_result([data_np, shape_like_np], mod, shape_like_np.shape, assert_shape=True)
 
 
+def verify_any_conv2d(
+    data_shape,
+    kernel_shape,
+    strides,
+    padding,
+    dilation,
+    static_data_shape,
+    ref_out_shape,
+):
+    mod = tvm.IRModule()
+    dtype = "float32"
+    data = relay.var("data", shape=data_shape, dtype=dtype)
+    kernel = relay.var("kernel", shape=kernel_shape, dtype=dtype)
+    y = relay.nn.conv2d(data, kernel, strides, padding, dilation, kernel_size=kernel_shape[2:4])
+    mod["main"] = relay.Function([data, kernel], y)
+    data_np = np.random.uniform(size=static_data_shape).astype(dtype)
+    kernel_np = np.random.uniform(size=kernel_shape).astype(dtype)
+    check_result([data_np, kernel_np], mod, ref_out_shape, assert_shape=True)
+
+
+# TODO(@kevinthesun): Support dynamic input height and width.

Review comment:
       I think GPU also requires some changes for this. We can do that in a separate PR and I'll add a TODO here.
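
A standalone sketch (not from the PR; names are hypothetical) of the shape arithmetic a dynamic-batch test like `verify_any_conv2d` relies on: with a symbolic batch, only the batch dimension is unknown, so the reference NCHW output shape can still be computed from the static spatial dims using the usual conv2d formula.

```python
def conv2d_out_shape(data_shape, kernel_shape, strides, padding, dilation):
    """Compute the NCHW conv2d output shape; the batch dim may be symbolic."""
    n, _, h, w = data_shape          # batch may be e.g. the string "any"
    co, _, kh, kw = kernel_shape     # OIHW kernel layout
    sh, sw = strides
    ph, pw = padding                 # symmetric padding per spatial axis
    dh, dw = dilation
    oh = (h + 2 * ph - dh * (kh - 1) - 1) // sh + 1
    ow = (w + 2 * pw - dw * (kw - 1) - 1) // sw + 1
    return (n, co, oh, ow)

# Mirrors a case like verify_any_conv2d with a 3x3 kernel, stride 1, pad 1:
# the spatial size is preserved while the batch stays symbolic.
print(conv2d_out_shape(("any", 64, 224, 224), (64, 64, 3, 3), (1, 1), (1, 1), (1, 1)))
```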







[GitHub] [incubator-tvm] zhiics commented on a change in pull request #6509: [Relay]Allow dynamic batch for arm conv2d

Posted by GitBox <gi...@apache.org>.
zhiics commented on a change in pull request #6509:
URL: https://github.com/apache/incubator-tvm/pull/6509#discussion_r490578631



##########
File path: tests/python/relay/test_any.py
##########
@@ -41,6 +41,7 @@ def check_result(
 ):
     for kind in ["debug", "vm"]:
         targets = targets or tvm.testing.enabled_targets()
+        print(targets)

Review comment:
       remove

##########
File path: tests/python/relay/test_any.py
##########
@@ -423,6 +424,48 @@ def test_any_reshape_like():
     check_result([data_np, shape_like_np], mod, shape_like_np.shape, assert_shape=True)
 
 
+def verify_any_conv2d(
+    data_shape,
+    kernel_shape,
+    strides,
+    padding,
+    dilation,
+    static_data_shape,
+    ref_out_shape,
+):
+    mod = tvm.IRModule()
+    dtype = "float32"
+    data = relay.var("data", shape=data_shape, dtype=dtype)
+    kernel = relay.var("kernel", shape=kernel_shape, dtype=dtype)
+    y = relay.nn.conv2d(data, kernel, strides, padding, dilation, kernel_size=kernel_shape[2:4])
+    mod["main"] = relay.Function([data, kernel], y)
+    data_np = np.random.uniform(size=static_data_shape).astype(dtype)
+    kernel_np = np.random.uniform(size=kernel_shape).astype(dtype)
+    check_result([data_np, kernel_np], mod, ref_out_shape, assert_shape=True)
+
+
+# TODO(@kevinthesun): Support dynamic input height and width.

Review comment:
       decorate gpu?

##########
File path: python/tvm/topi/arm_cpu/conv2d_transpose.py
##########
@@ -84,7 +89,8 @@ def _decl_spatial_pack(
     data_pad = pad(dilated_input, [0, 0, bpad_top, bpad_left], [0, 0, bpad_bottom, bpad_right])
 
     # ==================== define configuration space ====================
-    n, co, oh, ow = cfg.axis(N), cfg.axis(CO), cfg.axis(OH), cfg.axis(OW)
+    n_tuning_axis = N if isinstance(N, int) else 1

Review comment:
       ditto

##########
File path: python/tvm/topi/arm_cpu/conv2d_spatial_pack.py
##########
@@ -54,7 +58,8 @@ def conv2d_spatial_pack_nchw(cfg, data, kernel, strides, padding, dilation, out_
     data_pad = nn.pad(data, [0, 0, pad_top, pad_left], [0, 0, pad_bottom, pad_right])
 
     # ==================== define configuration space ====================
-    n, co, oh, ow = cfg.axis(N), cfg.axis(CO), cfg.axis(OH), cfg.axis(OW)
+    n_tuning_axis = N if isinstance(N, int) else 1

Review comment:
       should we add a TODO for symbolic kernel tuning?
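
A minimal standalone sketch (hypothetical helper name) of the pattern in the `n_tuning_axis = N if isinstance(N, int) else 1` hunk above: when the batch extent is symbolic rather than a concrete Python int, the AutoTVM configuration space is defined over a placeholder batch of 1, since tile sizes cannot be enumerated for an unknown extent.

```python
def tuning_batch_extent(n):
    """Return the batch extent to use when defining the tuning config space.

    Symbolic shapes (e.g. a tvm.tir.Any / tir.Var instance) are not Python
    ints, so fall back to a placeholder batch of 1 for tuning purposes.
    """
    return n if isinstance(n, int) else 1

print(tuning_batch_extent(8))        # static batch: used as-is
print(tuning_batch_extent("any"))    # symbolic batch: placeholder of 1
```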



