Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/10/21 11:53:11 UTC

[GitHub] [incubator-tvm] d-smirnov opened a new pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

d-smirnov opened a new pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724


   This fix is intended to prevent execution of operations via the ACL runtime
   when their arguments require memory padding. The fix is temporary,
   targets ACL 20.05, and should be removed after migration
   to ACL 20.11.
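   The guard described above reduces to a byte-alignment test: NEON operates on 128-bit (16-byte) vectors, so ACL 20.05 pads any tensor whose innermost dimension does not fill whole vectors. A minimal sketch of that test (the function name is illustrative; the real check lives in python/tvm/relay/op/contrib/arm_compute_lib.py):

```python
import numpy as np

def innermost_dim_needs_padding(shape, dtype):
    """True if the innermost dimension is not a whole number of
    16-byte NEON vectors, i.e. ACL 20.05 would pad the tensor."""
    if len(shape) == 0:
        return False
    return (shape[-1] * np.dtype(dtype).itemsize) % 16 != 0

# 64 float32 values = 256 bytes, a multiple of 16 -> no padding needed
assert not innermost_dim_needs_padding((1, 64), "float32")
# 15 uint8 values = 15 bytes -> ACL would pad, so the op stays on TVM
assert innermost_dim_needs_padding((14, 14, 15), "uint8")
```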
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

d-smirnov commented on a change in pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#discussion_r518232783



##########
File path: tests/python/contrib/test_arm_compute_lib/test_dense.py
##########
@@ -20,8 +20,8 @@
 
 import tvm
 from tvm import relay
-
-from .infrastructure import (
+from tvm import testing
+from test_arm_compute_lib.infrastructure import (

Review comment:
       I hope so







[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

d-smirnov commented on a change in pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#discussion_r518320817



##########
File path: python/tvm/relay/op/contrib/arm_compute_lib.py
##########
@@ -305,14 +307,41 @@ def max_pool2d(expr):
     typ = args[0].checked_type
     if typ.dtype not in ["float32", "uint8"]:
         return False
-    return True
+    return not require_padding([*args, expr.checked_type])
+
+
+def require_padding(inputs):
+    """Checks whether the supplied data will require padding.
+    Most ACL operators up to 20.11 use padded data.
+    """
+
+    def _check(shape, dtype):
+        """NEON has 128bits/16bytes per vector"""
+        if len(shape) == 0:
+            return False
+        return (shape[-1] * np.dtype(dtype).itemsize) % 16 != 0
+
+    for i in inputs:
+        if isinstance(i, (tvm.relay.expr.Var, tvm.relay.expr.Call)):
+            if _check(i.checked_type.shape, i.checked_type.dtype):
+                return True
+        elif isinstance(i, tvm.relay.expr.Constant):
+            if _check(i.data.shape, i.data.dtype):
+                return True
+        elif isinstance(i, tvm.ir.tensor_type.TensorType):
+            if _check(i.shape, i.dtype):
+                return True
+        else:
+            raise Exception("Not supported")

Review comment:
       Updated

##########
File path: tests/python/contrib/test_arm_compute_lib/test_pooling.py
##########
@@ -166,7 +167,12 @@ def test_pooling():
     fp32_dtype = ("float32", -127, 128, 0.001, 0.001)
     uint8_dtype = ("uint8", 0, 255, 1, 0)
 
+    # nn.max_pool2d(%arm_compute_lib_11_i0, pool_size=[3, 3], strides=[2, 2], padding=[0, 0, 0, 0], layout="NHWC") /* ty=Tensor[(1, 27, 27, 256), float32] */

Review comment:
       Yes. Removed.







[GitHub] [incubator-tvm] comaniac commented on pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

comaniac commented on pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#issuecomment-713827380


   See #6723 





[GitHub] [incubator-tvm] comaniac merged pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

comaniac merged pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724


   





[GitHub] [incubator-tvm] d-smirnov commented on pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

d-smirnov commented on pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#issuecomment-714580934


   @lhutton Some operations in ACL up to 20.11 require data to be vector-friendly. In squeezenet there is an avg_pool2d operator with a specific tensor layout ((1, 13, 13, 1001), float32).
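   The arithmetic for that squeezenet tensor can be checked directly (a sketch mirroring the check in this PR; "vector-friendly" here means the innermost dimension fills whole 16-byte NEON vectors):

```python
import numpy as np

shape, dtype = (1, 13, 13, 1001), "float32"
row_bytes = shape[-1] * np.dtype(dtype).itemsize
print(row_bytes)       # 1001 * 4 = 4004 bytes per innermost row
print(row_bytes % 16)  # 4: not a multiple of 16, so ACL 20.05 would pad it
```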





[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

d-smirnov commented on a change in pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#discussion_r518182386



##########
File path: tests/python/contrib/test_arm_compute_lib/test_dense.py
##########
@@ -323,7 +357,7 @@ def test_codegen_qnn_dense():
             has_bias=composite,
         )
         exp_codegen = _get_expected_codegen(*args, has_bias=composite)
-        verify_codegen(func, exp_codegen, 1)
+        verify_codegen(func, exp_codegen, acl_partitions, 2 - 2 * acl_partitions)

Review comment:
       The test was extended to cover situations where no ACL partition would be generated.

##########
File path: tests/python/contrib/test_arm_compute_lib/test_dense.py
##########
@@ -215,18 +231,18 @@ def test_codegen_dense():
     np.random.seed(0)
 
     dtype = ["float32"]
-    shape = [((1, 128), (16, 128), 16), ((32, 32), (32, 32), 32), ((1, 64), (1, 64), 1)]
+    shape = [(1, (1, 128), (16, 128), 16), (1, (32, 32), (32, 32), 32), (0, (1, 64), (1, 64), 1)]
     composite = [False, True]
     trials = generate_trials([dtype, shape, composite], 3)
 
-    for dtype, (shape, weight_shape, units), composite in trials:
+    for dtype, (acl_partitions, shape, weight_shape, units), composite in trials:
         inputs = {"a"}
 
         args = (shape, weight_shape, units, dtype)
 
         func, params = _get_model(*args, var_names=iter(inputs), has_bias=composite)
         exp_codegen = _get_expected_codegen(*args, has_bias=composite)
-        verify_codegen(func, exp_codegen, 1)
+        verify_codegen(func, exp_codegen, acl_partitions, 1 - acl_partitions)

Review comment:
       The test was extended to cover situations where no ACL partition would be generated.







[GitHub] [incubator-tvm] lhutton1 commented on pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

lhutton1 commented on pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#issuecomment-714597596


   > @lhutton Some operations in ACL up to 20.11 require data to be vector-friendly. In squeezenet there is an avg_pool2d operator with a specific tensor layout ((1, 13, 13, 1001), float32).
   
   Makes sense, thanks for the clarification :)





[GitHub] [incubator-tvm] comaniac commented on pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

comaniac commented on pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#issuecomment-722760931


   Thanks @d-smirnov @lhutton1 





[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

comaniac commented on a change in pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#discussion_r511013652



##########
File path: python/tvm/relay/op/contrib/arm_compute_lib.py
##########
@@ -289,24 +293,57 @@ def qnn_dense(attrs, args):
         return False
     if attrs.out_dtype != "int32":
         return False
-    return True
+
+    return not padding_required([*args, out_type])
 
 
 @tvm.ir.register_op_attr("nn.max_pool2d", "target.arm_compute_lib")
-def max_pool2d(attrs, args):
+def max_pool2d(attrs, args, out_type):
     """Check if the external ACL codegen for maxpool2d should be used."""
     if attrs.layout != "NHWC":
         return False
     typ = args[0].checked_type
     if typ.dtype not in ["float32", "uint8"]:
         return False
-    return True
+    return not padding_required([*args, out_type])
+
+
+def padding_required(inputs):

Review comment:
       A fix for approximately 6 weeks should not be "temporary" IMHO...







[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

d-smirnov commented on a change in pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#discussion_r518083808



##########
File path: tests/python/contrib/test_arm_compute_lib/test_network.py
##########
@@ -151,7 +152,34 @@ def get_model():
     )
 
 
+# Test disabled as it requires annotate target to not promote unannotated tuples under previous operations' annotation
+@pytest.mark.skip(reason="no way of currently testing this")
+def test_squeezenet():
+    Device.load("test_config.json")
+
+    if skip_runtime_test():

Review comment:
       test removed







[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

d-smirnov commented on a change in pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#discussion_r518234666



##########
File path: include/tvm/relay/op_attr_types.h
##########
@@ -185,6 +185,20 @@ using FTVMLegalize = runtime::TypedPackedFunc<Expr(const Attrs& attrs, const Arr
 using FTVMAnnotateTarget = runtime::TypedPackedFunc<bool(const Attrs& attrs,  // NOLINT(*)
                                                          const Array<Expr>& args)>;
 
+/*!
+ * \brief Annotates an expression to indicate if an op should be compiled using
+ *  the given compiler/target.
+ *  \param attrs The attribute of the original expr.
+ *  \param args The arguments of the original expr.
+ *  \param out_type The return type of the original expr.
+ *
+ *  \return true if this op should be registered to invoke a specific compiler
+ *  for codegen, otherwise, false.
+ */
+using FTVMAnnotateTarget3 =

Review comment:
       extracted to [#6786](https://github.com/apache/incubator-tvm/pull/6786)







[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

d-smirnov commented on a change in pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#discussion_r518070818



##########
File path: python/tvm/relay/op/contrib/arm_compute_lib.py
##########
@@ -227,9 +229,10 @@ def _func_wrapper(attrs, args):
     return _func_wrapper
 
 
+# Reshape does not need padding check in 20.05
 _register_external_op_helper("reshape")
 
-
+# conv2d does not need padding check in 20.05

Review comment:
       removed

##########
File path: python/tvm/relay/op/contrib/arm_compute_lib.py
##########
@@ -227,9 +229,10 @@ def _func_wrapper(attrs, args):
     return _func_wrapper
 
 
+# Reshape does not need padding check in 20.05

Review comment:
       removed







[GitHub] [incubator-tvm] lhutton1 commented on a change in pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

lhutton1 commented on a change in pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#discussion_r510173933



##########
File path: tests/python/contrib/test_arm_compute_lib/test_dense.py
##########
@@ -215,18 +231,18 @@ def test_codegen_dense():
     np.random.seed(0)
 
     dtype = ["float32"]
-    shape = [((1, 128), (16, 128), 16), ((32, 32), (32, 32), 32), ((1, 64), (1, 64), 1)]
+    shape = [(1, (1, 128), (16, 128), 16), (1, (32, 32), (32, 32), 32), (0, (1, 64), (1, 64), 1)]
     composite = [False, True]
     trials = generate_trials([dtype, shape, composite], 3)
 
-    for dtype, (shape, weight_shape, units), composite in trials:
+    for dtype, (acl_partitions, shape, weight_shape, units), composite in trials:
         inputs = {"a"}
 
         args = (shape, weight_shape, units, dtype)
 
         func, params = _get_model(*args, var_names=iter(inputs), has_bias=composite)
         exp_codegen = _get_expected_codegen(*args, has_bias=composite)
-        verify_codegen(func, exp_codegen, 1)
+        verify_codegen(func, exp_codegen, acl_partitions, 1 - acl_partitions)

Review comment:
       Why do we have a different calculation for the number of TVM ops compared to the test above? Intuitively, I feel like these should be the same.

##########
File path: tests/python/contrib/test_arm_compute_lib/test_dense.py
##########
@@ -20,8 +20,8 @@
 
 import tvm
 from tvm import relay
-
-from .infrastructure import (
+from tvm import testing
+from test_arm_compute_lib.infrastructure import (

Review comment:
       Is it worth moving to this format in all other tests for consistency?

##########
File path: tests/python/contrib/test_arm_compute_lib/test_dense.py
##########
@@ -323,7 +357,7 @@ def test_codegen_qnn_dense():
             has_bias=composite,
         )
         exp_codegen = _get_expected_codegen(*args, has_bias=composite)
-        verify_codegen(func, exp_codegen, 1)
+        verify_codegen(func, exp_codegen, acl_partitions, 2 - 2 * acl_partitions)

Review comment:
       Similar to above

##########
File path: tests/python/contrib/test_arm_compute_lib/test_pooling.py
##########
@@ -175,7 +181,8 @@ def test_pooling():
         ["nn.avg_pool2d", fp32_dtype, (2, 2), (2, 2), (1, 1), False, False, (16, 16, 16)],
         ["nn.avg_pool2d", fp32_dtype, (2, 2), (2, 2), (0, 0), False, True, (16, 16, 16)],
         ["nn.avg_pool2d", fp32_dtype, (3, 3), (2, 2), (0, 1), True, False, (15, 15, 16)],
-        ["nn.avg_pool2d", uint8_dtype, (2, 2), (2, 2), (1, 1), False, True, (16, 16, 16)],

Review comment:
       Did this test work previously to this PR?

##########
File path: tests/python/contrib/test_arm_compute_lib/test_network.py
##########
@@ -151,7 +152,34 @@ def get_model():
     )
 
 
+# Test disabled as it requires annotate target to not promote unannotated tuples under previous operations' annotation
+@pytest.mark.skip(reason="no way of currently testing this")
+def test_squeezenet():
+    Device.load("test_config.json")
+
+    if skip_runtime_test():

Review comment:
       Just wondering why we explicitly skip this above using `pytest.mark.skip` as well as skipping here? If it is purely because we don't have the runtime in CI, then this check should be enough.







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

comaniac commented on a change in pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#discussion_r509636638



##########
File path: python/tvm/relay/op/contrib/arm_compute_lib.py
##########
@@ -248,6 +251,7 @@ def conv2d(attrs, args):
     return True
 
 
+# conv2d does not need padding check in 20.05

Review comment:
       I cannot link this comment with the code.

##########
File path: python/tvm/relay/op/contrib/arm_compute_lib.py
##########
@@ -289,24 +293,57 @@ def qnn_dense(attrs, args):
         return False
     if attrs.out_dtype != "int32":
         return False
-    return True
+
+    return not padding_required([*args, out_type])
 
 
 @tvm.ir.register_op_attr("nn.max_pool2d", "target.arm_compute_lib")
-def max_pool2d(attrs, args):
+def max_pool2d(attrs, args, out_type):
     """Check if the external ACL codegen for maxpool2d should be used."""
     if attrs.layout != "NHWC":
         return False
     typ = args[0].checked_type
     if typ.dtype not in ["float32", "uint8"]:
         return False
-    return True
+    return not padding_required([*args, out_type])
+
+
+def padding_required(inputs):

Review comment:
       - `require_padding` might be more straightforward?
   - I feel that this checker can be more general to support both versions. For example, you can check the ACL version and always return True if it's < 20.05. In this way, this PR won't be a "temporary" fix anymore.
   - If we simply require ACL 20.11 or higher, can we get rid of this problem without any change?
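   The second bullet could be sketched as a version-gated check, so the guard becomes a no-op once a fixed ACL release is linked (the names and the version plumbing here are hypothetical, for illustration only):

```python
import numpy as np

ACL_PADDING_FIXED_IN = (20, 11)  # hypothetical: release where padding is handled

def require_padding(tensors, acl_version):
    """True if any tensor would be padded by an ACL older than 20.11."""
    if acl_version >= ACL_PADDING_FIXED_IN:
        return False  # newer ACL copes with unpadded data; nothing to reject
    return any(
        len(shape) > 0 and (shape[-1] * np.dtype(dtype).itemsize) % 16 != 0
        for shape, dtype in tensors
    )

# The squeezenet avg_pool2d case: rejected on 20.05, accepted on 20.11
assert require_padding([((1, 13, 13, 1001), "float32")], acl_version=(20, 5))
assert not require_padding([((1, 13, 13, 1001), "float32")], acl_version=(20, 11))
```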

##########
File path: include/tvm/relay/op_attr_types.h
##########
@@ -185,6 +185,20 @@ using FTVMLegalize = runtime::TypedPackedFunc<Expr(const Attrs& attrs, const Arr
 using FTVMAnnotateTarget = runtime::TypedPackedFunc<bool(const Attrs& attrs,  // NOLINT(*)
                                                          const Array<Expr>& args)>;
 
+/*!
+ * \brief Annotates an expression to indicate if an op should be compiled using
+ *  the given compiler/target.
+ *  \param attrs The attribute of the original expr.
+ *  \param args The arguments of the original expr.
+ *  \param out_type The return type of the original expr.
+ *
+ *  \return true if this op should be registered to invoke a specific compiler
+ *  for codegen, otherwise, false.
+ */
+using FTVMAnnotateTarget3 =

Review comment:
       It is not good practice to have multiple annotator interfaces. If the type of the original expression is necessary, we may change the interface to `using FTVMAnnotateTarget = runtime::TypedPackedFunc<bool(const Expr&)>;`, which accepts the original expression directly. As a result, `attrs`, `args`, and `checked_type` can all be retrieved.

##########
File path: tests/python/contrib/test_arm_compute_lib/test_network.py
##########
@@ -151,7 +152,34 @@ def get_model():
     )
 
 
+# Test disabled as it requires attontate target not promote unannotated tuples under previous operations annotation

Review comment:
       ```suggestion
   # Test disabled as it requires annotate target not promote unannotated tuples under previous operations' annotation
   ```
   If this test cannot be executed in this PR, we should add it along with the change that makes it executable.

##########
File path: python/tvm/relay/op/contrib/arm_compute_lib.py
##########
@@ -227,9 +229,10 @@ def _func_wrapper(attrs, args):
     return _func_wrapper
 
 
+# Reshape does not need padding check in 20.05

Review comment:
       I cannot link this comment with the code.

##########
File path: python/tvm/relay/op/contrib/arm_compute_lib.py
##########
@@ -289,24 +293,57 @@ def qnn_dense(attrs, args):
         return False
     if attrs.out_dtype != "int32":
         return False
-    return True
+
+    return not padding_required([*args, out_type])
 
 
 @tvm.ir.register_op_attr("nn.max_pool2d", "target.arm_compute_lib")
-def max_pool2d(attrs, args):
+def max_pool2d(attrs, args, out_type):
     """Check if the external ACL codegen for maxpool2d should be used."""
     if attrs.layout != "NHWC":
         return False
     typ = args[0].checked_type
     if typ.dtype not in ["float32", "uint8"]:
         return False
-    return True
+    return not padding_required([*args, out_type])
+
+
+def padding_required(inputs):
+    """Checks whether the supplied data will require padding.
+    Most ACL operators up to 20.11 use padded data.
+    """
+
+    def _check(shape, dtype):
+        """NEON has 128bits/16bytes per vector"""
+        if len(shape) == 0:
+            return False
+        return (shape[-1] * np.dtype(dtype).itemsize) % 16 != 0
+
+    def _padding_required():
+        for i in inputs:
+            if isinstance(i, (tvm.relay.expr.Var, tvm.relay.expr.Call)):
+                if _check(i.checked_type.shape, i.checked_type.dtype):
+                    return True
+            elif isinstance(i, tvm.relay.expr.Constant):
+                if _check(i.data.shape, i.data.dtype):
+                    return True
+            elif isinstance(i, tvm.ir.tensor_type.TensorType):
+                if _check(i.shape, i.dtype):
+                    return True
+            else:
+                raise Exception("Not supported")
+
+        return False
+
+    result = _padding_required()

Review comment:
       I don't see the reason for using an internal function. You should be able to directly move L323-L336 out?

##########
File path: include/tvm/relay/op_attr_types.h
##########
@@ -176,7 +176,7 @@ using FTVMLegalize = runtime::TypedPackedFunc<Expr(const Attrs& attrs, const Arr
  * \brief Annotates an expression to indicate if an op should be compiled using
  * the given compiler/target.
  *
- * \param attrs The attribute of the original expr.
+ :* \param attrs The attribute of the original expr.

Review comment:
       Why add a colon?
   

##########
File path: python/tvm/relay/op/contrib/arm_compute_lib.py
##########
@@ -227,9 +229,10 @@ def _func_wrapper(attrs, args):
     return _func_wrapper
 
 
+# Reshape does not need padding check in 20.05
 _register_external_op_helper("reshape")
 
-
+# conv2d does not need padding check in 20.05

Review comment:
       I cannot link this comment with the code.







[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

d-smirnov commented on a change in pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#discussion_r518228552



##########
File path: tests/python/contrib/test_arm_compute_lib/test_pooling.py
##########
@@ -175,7 +181,8 @@ def test_pooling():
         ["nn.avg_pool2d", fp32_dtype, (2, 2), (2, 2), (1, 1), False, False, (16, 16, 16)],
         ["nn.avg_pool2d", fp32_dtype, (2, 2), (2, 2), (0, 0), False, True, (16, 16, 16)],
         ["nn.avg_pool2d", fp32_dtype, (3, 3), (2, 2), (0, 1), True, False, (15, 15, 16)],
-        ["nn.avg_pool2d", uint8_dtype, (2, 2), (2, 2), (1, 1), False, True, (16, 16, 16)],

Review comment:
       Since this is an ACL complaint ("exclude_padding equal false is not supported for AVG Pooling with padding on quantized types"), this line likely never worked.







[GitHub] [incubator-tvm] d-smirnov commented on pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

d-smirnov commented on pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#issuecomment-713813365


   @merrymercy Please advise on the failed **Integration Test / docs: GPU**. I cannot reproduce the failure of tutorials/auto_scheduler/tune_matmul_x86.py locally.





[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

d-smirnov commented on a change in pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#discussion_r510443620



##########
File path: python/tvm/relay/op/contrib/arm_compute_lib.py
##########
@@ -289,24 +293,57 @@ def qnn_dense(attrs, args):
         return False
     if attrs.out_dtype != "int32":
         return False
-    return True
+
+    return not padding_required([*args, out_type])
 
 
 @tvm.ir.register_op_attr("nn.max_pool2d", "target.arm_compute_lib")
-def max_pool2d(attrs, args):
+def max_pool2d(attrs, args, out_type):
     """Check if the external ACL codegen for maxpool2d should be used."""
     if attrs.layout != "NHWC":
         return False
     typ = args[0].checked_type
     if typ.dtype not in ["float32", "uint8"]:
         return False
-    return True
+    return not padding_required([*args, out_type])
+
+
+def padding_required(inputs):

Review comment:
       > If we simply require ACL 20.11

   ACL 20.11 is not available yet and will be ready in approximately 6 weeks.







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

comaniac commented on a change in pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#discussion_r511012768



##########
File path: include/tvm/relay/op_attr_types.h
##########
@@ -185,6 +185,20 @@ using FTVMLegalize = runtime::TypedPackedFunc<Expr(const Attrs& attrs, const Arr
 using FTVMAnnotateTarget = runtime::TypedPackedFunc<bool(const Attrs& attrs,  // NOLINT(*)
                                                          const Array<Expr>& args)>;
 
+/*!
+ * \brief Annotates an expression to indicate if an op should be compiled using
+ *  the given compiler/target.
+ *  \param attrs The attribute of the original expr.
+ *  \param args The arguments of the original expr.
+ *  \param out_type The return type of the original expr.
+ *
+ *  \return true if this op should be registered to invoke a specific compiler
+ *  for codegen, otherwise, false.
+ */
+using FTVMAnnotateTarget3 =

Review comment:
       I can totally understand your point about minimizing the changes, but I personally don't think it's a big issue, given that we only have a few existing codegens. Since BYOC is maturing, we should avoid ad hoc implementations where we can. In fact, we are planning another refactoring to clean up redundant code.

   A change to (or addition of) the annotate-target interface, one of the most important interfaces for codegen developers, is better started with an RFC. It is fine even if the RFC is small and simple. The main purposes are 1) making sure the change is compatible with all cases, and 2) letting everyone know you are going to make it.







[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporary fix

d-smirnov commented on a change in pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#discussion_r510755137



##########
File path: include/tvm/relay/op_attr_types.h
##########
@@ -185,6 +185,20 @@ using FTVMLegalize = runtime::TypedPackedFunc<Expr(const Attrs& attrs, const Arr
 using FTVMAnnotateTarget = runtime::TypedPackedFunc<bool(const Attrs& attrs,  // NOLINT(*)
                                                          const Array<Expr>& args)>;
 
+/*!
+ * \brief Annotates an expression to indicate if an op should be compiled using
+ *  the given compiler/target.
+ *  \param attrs The attribute of the original expr.
+ *  \param args The arguments of the original expr.
+ *  \param out_type The return type of the original expr.
+ *
+ *  \return true if this op should be registered to invoke a specific compiler
+ *  for codegen, otherwise, false.
+ */
+using FTVMAnnotateTarget3 =

Review comment:
       The goal was to minimize changes to other existing runtimes (which is obviously not possible in the case of `FTVMAnnotateTarget = runtime::TypedPackedFunc<bool(const Expr&)>`). Otherwise this generalisation looks to me like a separate PR. What would be your advice here?
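One way to reconcile "minimize changes to existing runtimes" with a new three-argument signature is to dispatch on the registered checker's arity. The sketch below is illustrative only — it is not how TVM actually resolved the `FTVMAnnotateTarget` / `FTVMAnnotateTarget3` question:

```python
import inspect

# Illustrative sketch: support both legacy two-argument checkers and
# new three-argument checkers without touching existing registrations,
# by inspecting the registered function's arity at call time.
def call_annotator(fn, attrs, args, out_type):
    n_params = len(inspect.signature(fn).parameters)
    if n_params >= 3:
        return fn(attrs, args, out_type)  # new-style: sees the return type
    return fn(attrs, args)                # legacy two-argument checker
```

The trade-off is that arity-based dispatch hides the interface change from the type system, which is part of why an RFC-first approach was suggested above.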







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporarely fix

Posted by GitBox <gi...@apache.org>.
comaniac commented on a change in pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#discussion_r518307901



##########
File path: tests/python/contrib/test_arm_compute_lib/test_pooling.py
##########
@@ -166,7 +167,12 @@ def test_pooling():
     fp32_dtype = ("float32", -127, 128, 0.001, 0.001)
     uint8_dtype = ("uint8", 0, 255, 1, 0)
 
+    # nn.max_pool2d(%arm_compute_lib_11_i0, pool_size=[3, 3], strides=[2, 2], padding=[0, 0, 0, 0], layout="NHWC") /* ty=Tensor[(1, 27, 27, 256), float32] */

Review comment:
       Should this be removed?

##########
File path: python/tvm/relay/op/contrib/arm_compute_lib.py
##########
@@ -305,14 +307,41 @@ def max_pool2d(expr):
     typ = args[0].checked_type
     if typ.dtype not in ["float32", "uint8"]:
         return False
-    return True
+    return not require_padding([*args, expr.checked_type])
+
+
+def require_padding(inputs):
+    """Checks whether supplied data will require padding.
+    Most of the ACL operators up to 20.11 use padded data.
+    """
+
+    def _check(shape, dtype):
+        """NEON has 128bits/16bytes per vector"""
+        if len(shape) == 0:
+            return False
+        return (shape[-1] * np.dtype(dtype).itemsize) % 16 != 0
+
+    for i in inputs:
+        if isinstance(i, (tvm.relay.expr.Var, tvm.relay.expr.Call)):
+            if _check(i.checked_type.shape, i.checked_type.dtype):
+                return True
+        elif isinstance(i, tvm.relay.expr.Constant):
+            if _check(i.data.shape, i.data.dtype):
+                return True
+        elif isinstance(i, tvm.ir.tensor_type.TensorType):
+            if _check(i.shape, i.dtype):
+                return True
+        else:
+            raise Exception("Not supported")

Review comment:
       ```suggestion
            raise RuntimeError("Not supported input type: %s" % type(i))
   ```
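The padding check being reviewed here can be sketched standalone. This is a simplified, hedged version: it takes plain `(shape, dtype)` pairs instead of Relay expressions, and it replaces `np.dtype(dtype).itemsize` with a small lookup table so it runs without NumPy — both simplifications are ours, not part of the PR:

```python
# Standalone sketch of the require_padding check under review.
# NEON vectors are 128 bits / 16 bytes wide; ACL 20.05 pads any tensor
# whose innermost dimension is not a multiple of 16 bytes, which is the
# memory-corruption case this PR refuses to offload.
_ITEMSIZE = {"uint8": 1, "int32": 4, "float32": 4}  # bytes per element

def require_padding(tensors):
    """Return True if any tensor's innermost dimension is not a
    multiple of the 16-byte NEON vector width."""
    for shape, dtype in tensors:
        if len(shape) == 0:  # scalars never need padding
            continue
        if (shape[-1] * _ITEMSIZE[dtype]) % 16 != 0:
            return True
    return False
```

For example, a float32 tensor of shape (1, 27, 27, 256) has an innermost row of 256 * 4 = 1024 bytes (a multiple of 16, so no padding), while shape (1, 27) gives 108 bytes and would be padded.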







[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporarely fix

Posted by GitBox <gi...@apache.org>.
d-smirnov commented on a change in pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#discussion_r518071065



##########
File path: python/tvm/relay/op/contrib/arm_compute_lib.py
##########
@@ -248,6 +251,7 @@ def conv2d(attrs, args):
     return True
 
 
+# conv2d does not need padding check in 20.05

Review comment:
       removed







[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporarely fix

Posted by GitBox <gi...@apache.org>.
d-smirnov commented on a change in pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#discussion_r518083188



##########
File path: tests/python/contrib/test_arm_compute_lib/test_network.py
##########
@@ -151,7 +152,34 @@ def get_model():
     )
 
 
+# Test disabled as it requires annotate target to not promote unannotated tuples under the previous operation's annotation

Review comment:
       Test removed; it belongs to a separate PR.







[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6724: [BYOC] [ACL] 20.05 memory corruption temporarely fix

Posted by GitBox <gi...@apache.org>.
d-smirnov commented on a change in pull request #6724:
URL: https://github.com/apache/incubator-tvm/pull/6724#discussion_r518072473



##########
File path: python/tvm/relay/op/contrib/arm_compute_lib.py
##########
@@ -289,24 +293,57 @@ def qnn_dense(attrs, args):
         return False
     if attrs.out_dtype != "int32":
         return False
-    return True
+
+    return not padding_required([*args, out_type])
 
 
 @tvm.ir.register_op_attr("nn.max_pool2d", "target.arm_compute_lib")
-def max_pool2d(attrs, args):
+def max_pool2d(attrs, args, out_type):
     """Check if the external ACL codegen for maxpool2d should be used."""
     if attrs.layout != "NHWC":
         return False
     typ = args[0].checked_type
     if typ.dtype not in ["float32", "uint8"]:
         return False
-    return True
+    return not padding_required([*args, out_type])
+
+
+def padding_required(inputs):

Review comment:
       > require_padding might be more straightforward?
   
   Renamed.
   
   Agreed, all current versions of ACL should use the fix until migration to ACL 20.11 (once it becomes available).

##########
File path: python/tvm/relay/op/contrib/arm_compute_lib.py
##########
@@ -289,24 +293,57 @@ def qnn_dense(attrs, args):
         return False
     if attrs.out_dtype != "int32":
         return False
-    return True
+
+    return not padding_required([*args, out_type])
 
 
 @tvm.ir.register_op_attr("nn.max_pool2d", "target.arm_compute_lib")
-def max_pool2d(attrs, args):
+def max_pool2d(attrs, args, out_type):
     """Check if the external ACL codegen for maxpool2d should be used."""
     if attrs.layout != "NHWC":
         return False
     typ = args[0].checked_type
     if typ.dtype not in ["float32", "uint8"]:
         return False
-    return True
+    return not padding_required([*args, out_type])
+
+
+def padding_required(inputs):
+    """Checks whether supplied data will require padding.
+    Most of the ACL operators up to 20.11 use padded data.
+    """
+
+    def _check(shape, dtype):
+        """NEON has 128bits/16bytes per vector"""
+        if len(shape) == 0:
+            return False
+        return (shape[-1] * np.dtype(dtype).itemsize) % 16 != 0
+
+    def _padding_required():
+        for i in inputs:
+            if isinstance(i, (tvm.relay.expr.Var, tvm.relay.expr.Call)):
+                if _check(i.checked_type.shape, i.checked_type.dtype):
+                    return True
+            elif isinstance(i, tvm.relay.expr.Constant):
+                if _check(i.data.shape, i.data.dtype):
+                    return True
+            elif isinstance(i, tvm.ir.tensor_type.TensorType):
+                if _check(i.shape, i.dtype):
+                    return True
+            else:
+                raise Exception("Not supported")
+
+        return False
+
+    result = _padding_required()

Review comment:
       refactored



