Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/06/02 00:46:09 UTC

[GitHub] [tvm] junrushao1994 opened a new pull request, #11532: [Bugfix][TIR] Handle boolean tensor properly in Flatten-Buffer

junrushao1994 opened a new pull request, #11532:
URL: https://github.com/apache/tvm/pull/11532

   This PR fixes an existing bug in TIR lowering where the TIR below triggers an error:
   
   ```python
   @T.prim_func
   def func(a: T.Buffer[10, "bool"], b: T.Buffer[10, "bool"]) -> None:
       T.func_attr({"global_symbol": "main", "tir.noalias": True})
       for i in T.serial(10):
           with T.block("b"):
               vi = T.axis.spatial(10, i)
               b[vi] = a[vi]
   
   tvm.build(func, target="llvm")
   ```
   
   The error message is:
   
   ```
     File "/root/Projects/tvm-dev/src/tir/transforms/flatten_buffer.cc", line 173
   TVMError:
   ---------------------------------------------------------------
   An error occurred during the execution of TVM.
   For more information, please see: https://tvm.apache.org/docs/errors.html
   ---------------------------------------------------------------
   
   Check failed: store->buffer->dtype == DataType::Int(8) (bool vs. int8) : Expected int8 backing array for boolean tensor
   ```
   
   This PR fixes the issue by teaching the FlattenBuffer pass to handle boolean tensors properly: boolean buffers are backed by `int8` storage, with explicit casts inserted around loads and stores.
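
   For reference, a rough sketch of the flattened form the pass is expected to produce after the fix (mirroring the unit test added in this PR; the function name `flattened` is illustrative): the boolean buffers are backed by `int8` storage, and loads and stores go through explicit casts.

   ```python
   from tvm.script import tir as T


   @T.prim_func
   def flattened(a: T.Buffer[10, "int8"], b: T.Buffer[10, "int8"]) -> None:
       # Pre-flattened views record that both buffers logically hold booleans.
       T.preflattened_buffer(a, [10], dtype="bool", data=a.data)
       T.preflattened_buffer(b, [10], dtype="bool", data=b.data)
       for i0 in T.serial(10):
           # Load the int8 backing value, reinterpret it as bool, then store it back as int8.
           b[i0] = T.cast(T.cast(a[i0], "bool"), "int8")
   ```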


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] Hzfengsy commented on a diff in pull request #11532: [Bugfix][TIR] Handle boolean tensor properly in Flatten-Buffer

Posted by GitBox <gi...@apache.org>.
Hzfengsy commented on code in PR #11532:
URL: https://github.com/apache/tvm/pull/11532#discussion_r887447104


##########
tests/python/unittest/test_tir_transform_flatten_buffer.py:
##########
@@ -319,6 +346,13 @@ def test_annotated_loops():
     tvm.ir.assert_structural_equal(attr3.value, tvm.tir.FloatImm("float32", 0.0))
 
 
+def test_boolean_handling():
+    _check(boolean_handling_before, boolean_handling_after)
+    # mod = tvm.IRModule.from_expr(boolean_handling_before)

Review Comment:
   Please remove comments





[GitHub] [tvm] Lunderberg commented on a diff in pull request #11532: [Bugfix][TIR] Handle boolean tensor properly in Flatten-Buffer

Posted by GitBox <gi...@apache.org>.
Lunderberg commented on code in PR #11532:
URL: https://github.com/apache/tvm/pull/11532#discussion_r887974135


##########
tests/python/unittest/test_tir_transform_flatten_buffer.py:
##########
@@ -268,6 +268,33 @@ def annotated_loops(a: T.handle) -> None:
         A[i] = 0.0
 
 
+@T.prim_func
+def boolean_handling_before(a: T.Buffer[10, "bool"], b: T.Buffer[10, "bool"]) -> None:
+    for i0 in T.serial(10):
+        with T.block("b"):
+            T.reads(a[i0])
+            T.writes(b[i0])
+            b[i0] = a[i0]
+
+
+@T.prim_func
+def boolean_handling_after(a: T.Buffer[10, "int8"], b: T.Buffer[10, "int8"]) -> None:
+    T.preflattened_buffer(a, [10], dtype="bool", data=a.data)
+    T.preflattened_buffer(b, [10], dtype="bool", data=b.data)
+    # body
+    for i0 in T.serial(10):
+        b[i0] = T.cast(T.cast(a[i0], "bool"), "int8")
+
+
+@T.prim_func
+def boolean_handle_after(a: T.Buffer[10, "int8"], b: T.Buffer[10, "int8"]) -> None:
+    T.preflattened_buffer(a, [10], dtype="bool", data=a.data)
+    T.preflattened_buffer(b, [10], dtype="bool", data=b.data)
+    # body
+    for i0 in T.serial(10):
+        b[i0] = T.cast(T.cast(a[i0], "bool"), "int8")

Review Comment:
   I think it could be simplified, and that would need to be done in the mutator for `BufferStoreNode`. In the general case, the `int8` to `bool` to `int8` sequence of casts can't be simplified away as a no-op, because it reduces all nonzero values to one. In this case, since `b` also represents a boolean, the simplification would be allowed, so the mutator is where that access type could be checked.
   
   Long-term, I'd like to move all of the boolean-specific handling into a dedicated lowering pass. As it stands, there are quite a few special cases in multiple locations that all need to know that boolean arrays are backed by int8 arrays. (And even that should be target-specific, since not all Vulkan implementations support int8 datatypes.)
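
   For illustration only (a NumPy analogy, not code from this PR), the reason the cast pair is not a no-op in general: the round trip collapses every nonzero int8 value to 1, so it may only be dropped when the destination is itself known to hold booleans.

   ```python
   import numpy as np

   # int8 -> bool -> int8 is not the identity: all nonzero values collapse to 1.
   x = np.array([0, 1, 5, -3], dtype="int8")
   print(x.astype("bool").astype("int8"))  # [0 1 1 1]
   ```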





[GitHub] [tvm] junrushao1994 commented on a diff in pull request #11532: [Bugfix][TIR] Handle boolean tensor properly in Flatten-Buffer

Posted by GitBox <gi...@apache.org>.
junrushao1994 commented on code in PR #11532:
URL: https://github.com/apache/tvm/pull/11532#discussion_r887523667


##########
tests/python/unittest/test_tir_transform_flatten_buffer.py:
##########
@@ -319,6 +346,13 @@ def test_annotated_loops():
     tvm.ir.assert_structural_equal(attr3.value, tvm.tir.FloatImm("float32", 0.0))
 
 
+def test_boolean_handling():
+    _check(boolean_handling_before, boolean_handling_after)
+    # mod = tvm.IRModule.from_expr(boolean_handling_before)

Review Comment:
   ooops





[GitHub] [tvm] junrushao1994 merged pull request #11532: [Bugfix][TIR] Handle boolean tensor properly in Flatten-Buffer

Posted by GitBox <gi...@apache.org>.
junrushao1994 merged PR #11532:
URL: https://github.com/apache/tvm/pull/11532




[GitHub] [tvm] junrushao1994 commented on a diff in pull request #11532: [Bugfix][TIR] Handle boolean tensor properly in Flatten-Buffer

Posted by GitBox <gi...@apache.org>.
junrushao1994 commented on code in PR #11532:
URL: https://github.com/apache/tvm/pull/11532#discussion_r887406995


##########
tests/python/unittest/test_tir_transform_flatten_buffer.py:
##########
@@ -268,6 +268,33 @@ def annotated_loops(a: T.handle) -> None:
         A[i] = 0.0
 
 
+@T.prim_func
+def boolean_handling_before(a: T.Buffer[10, "bool"], b: T.Buffer[10, "bool"]) -> None:
+    for i0 in T.serial(10):
+        with T.block("b"):
+            T.reads(a[i0])
+            T.writes(b[i0])
+            b[i0] = a[i0]
+
+
+@T.prim_func
+def boolean_handling_after(a: T.Buffer[10, "int8"], b: T.Buffer[10, "int8"]) -> None:
+    T.preflattened_buffer(a, [10], dtype="bool", data=a.data)
+    T.preflattened_buffer(b, [10], dtype="bool", data=b.data)
+    # body
+    for i0 in T.serial(10):
+        b[i0] = T.cast(T.cast(a[i0], "bool"), "int8")
+
+
+@T.prim_func
+def boolean_handle_after(a: T.Buffer[10, "int8"], b: T.Buffer[10, "int8"]) -> None:
+    T.preflattened_buffer(a, [10], dtype="bool", data=a.data)
+    T.preflattened_buffer(b, [10], dtype="bool", data=b.data)
+    # body
+    for i0 in T.serial(10):
+        b[i0] = T.cast(T.cast(a[i0], "bool"), "int8")

Review Comment:
   @Lunderberg The casting here is a bit redundant. Do you think we should simplify it with some ad-hoc logic, or leave it as a future optimization if it ever turns out to be a bottleneck?





[GitHub] [tvm] junrushao1994 commented on pull request #11532: [Bugfix][TIR] Handle boolean tensor properly in Flatten-Buffer

Posted by GitBox <gi...@apache.org>.
junrushao1994 commented on PR #11532:
URL: https://github.com/apache/tvm/pull/11532#issuecomment-1144296264

   CC: @Lunderberg @Hzfengsy

