Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/06/12 18:22:14 UTC

[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5754: [RFC] Improve quantized convolution performance for armv8 architectures

anijain2305 commented on a change in pull request #5754:
URL: https://github.com/apache/incubator-tvm/pull/5754#discussion_r439561237



##########
File path: python/tvm/relay/op/nn/nn.py
##########
@@ -1976,6 +1976,74 @@ def contrib_conv2d_winograd_without_weight_transform(data,
         kernel_layout, out_layout, out_dtype)
 
 
+def contrib_conv2d_gemm_without_weight_transform(data,
+                                                 weight,
+                                                 strides=(1, 1),
+                                                 padding=(0, 0),
+                                                 dilation=(1, 1),
+                                                 groups=1,
+                                                 channels=None,
+                                                 kernel_size=None,
+                                                 data_layout="NCHW",
+                                                 kernel_layout="OIHW",
+                                                 out_layout="",
+                                                 out_dtype=""):
+    r"""2D convolution with gemm algorithm.

Review comment:
       Is the `r` (raw-string) prefix necessary here?
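   For context, a minimal illustration (not from the PR) of when the
   raw-string prefix matters: it only changes behaviour if the docstring
   contains backslashes, e.g. Sphinx/TeX-style math, which would otherwise
   be interpreted as string escape sequences.

       def conv_doc_example():
           r"""Compute :math:`y = w \ast x + b`.

           The ``r`` prefix keeps ``\ast`` from being parsed as the escape
           ``\a`` followed by ``st``; with no backslashes in the text the
           prefix is harmless but unnecessary.
           """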

##########
File path: python/tvm/relay/op/nn/nn.py
##########
@@ -2134,6 +2202,25 @@ def contrib_conv2d_winograd_weight_transform(weight,
     return _make.contrib_conv2d_winograd_weight_transform(weight, tile_size)
 
 
+def contrib_conv2d_gemm_weight_transform(weights):

Review comment:
       Does this need a layout parameter?
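   If it does, a hypothetical extension of the signature (illustrative only,
   not what the PR implements) could mirror the winograd variant:

       def contrib_conv2d_gemm_weight_transform(weights, kernel_layout="HWIO"):
           """Weight transformation for the GEMM-based conv2d algorithm.

           kernel_layout is a hypothetical parameter ("HWIO", "OIHW", ...)
           telling the transform how to flatten the kernel into a GEMM
           matrix; the registered op would need a matching attribute.
           """
           # _make is the FFI module already used by the other transforms in
           # this file; whether it can accept a layout argument is an
           # assumption, so it is not forwarded here.
           return _make.contrib_conv2d_gemm_weight_transform(weights)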

##########
File path: python/tvm/relay/op/nn/_nn.py
##########
@@ -421,6 +421,24 @@ def compute_mirror_pad(attrs, inputs, out_dtype):
 reg.register_pattern("nn.contrib_conv2d_winograd_without_weight_transform",
                      OpPattern.OUT_ELEMWISE_FUSABLE)
 
+# conv2d_gemm related operators
+reg.register_strategy("nn.contrib_conv2d_gemm_without_weight_transform",
+                      strategy.conv2d_gemm_without_weight_transform_strategy)
+reg.register_pattern("nn.contrib_conv2d_gemm_without_weight_transform",
+                     OpPattern.OUT_ELEMWISE_FUSABLE)
+
+
+@reg.register_compute("nn.contrib_conv2d_gemm_weight_transform")
+def compute_contrib_conv2d_gemm_weight_transform(attrs, inputs, out_dtype):
+    """Compute definition of contrib_conv2d_gemm_weight_transform"""
+    out = topi.nn.conv2d_gemm_weight_transform(
+        inputs[0])

Review comment:
       Can we move this to the previous line?
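   For clarity, the single-line form being suggested would read:

       out = topi.nn.conv2d_gemm_weight_transform(inputs[0])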

##########
File path: topi/python/topi/arm_cpu/conv2d_alter_op.py
##########
@@ -235,5 +239,37 @@ def _alter_conv2d_layout(attrs, inputs, tinfos, out_type):
              new_attrs['out_layout'], out_dtype], topi_tmpl)
         dispatch_ctx.update(target, new_workload, cfg)
         return relay.nn.contrib_depthwise_conv2d_nchwc(*inputs, **new_attrs)
+    if topi_tmpl == "conv2d_NHWC_quantized.arm_cpu":
+        assert (data.dtype == 'int8' and kernel.dtype == 'int8' or
+                data.dtype == 'uint8' and kernel.dtype == 'uint8')
+        CO, IC, KH, KW = get_const_tuple(kernel.shape)
+
+        K = KH * KW * IC
+        N = CO
+
+        pad_k = 0
+        pad_n = 0
+
+        if N % 4 != 0:
+            pad_n = 4 - (N % 4)
+
+        if K % 16 != 0:
+            pad_k = 16 - (K % 16)
+
+        N_padded = N + pad_n
+        K_padded = K + pad_k
+
+        kernel_expr = relay.nn.contrib_conv2d_gemm_weight_transform(inputs[1])

Review comment:
       I was wondering whether it is possible to represent the weight transform as a sequence of existing Relay ops. In that case we would not need a new contrib op; we could put that sequence here, and FoldConstant would optimize the sequence away.
   
   If not, do you think we need to pass kernel layout information? Also, should we name it along the lines of `contrib_conv2d_gemm_hwio_weight_transform`?
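   
   For illustration, a rough sketch (not the PR's code) of what such a
   sequence could look like, assuming the transform amounts to flattening an
   HWIO kernel into a (K, N) matrix and padding K and N up to the tile sizes
   used by the GEMM schedule; any interleaving/blocking step is omitted:

       from tvm import relay

       def gemm_weight_transform_via_relay(kernel, KH, KW, IC, CO, pad_k, pad_n):
           # Flatten (KH, KW, IC, CO) -> (K, N) with K = KH * KW * IC, N = CO.
           flat = relay.reshape(kernel, (KH * KW * IC, CO))
           # Pad K up to a multiple of 16 and N up to a multiple of 4,
           # mirroring the pad_k / pad_n computed in _alter_conv2d_layout.
           return relay.nn.pad(flat, pad_width=((0, pad_k), (0, pad_n)))

   Since the kernel is a constant, FoldConstant could then evaluate the
   reshape/pad chain at compile time, as suggested above.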




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org