Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/04/05 16:36:11 UTC

[GitHub] [tvm] Lunderberg opened a new pull request, #10905: [Hexagon][LLVM] Enable/test tensorized Hexagon DMA

Lunderberg opened a new pull request, #10905:
URL: https://github.com/apache/tvm/pull/10905

   - In the `CodeGenLLVM::CreateIntrinsic` handler for `builtin::address_of()`, pass N-d indices to `CodeGenLLVM::CreateBufferPtr`.  The base class implementation still asserts that there is a flat memory space, while the `CodeGenHexagon::CreateBufferPtr` override allows 2-d memory.
   
   - Enable tensorization in `test_cache_read_write.py`, using `tir.address_of` to pass the lowered value; a standalone sketch of the intrinsic follows below.
   
   Co-authored-by: Adam Straw <as...@octoml.ai>
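
   For reference, here is a standalone sketch of the tensorized copy intrinsic. It is not the merged code; it reconstructs the helper from the test diff later in this thread, and the surrounding pieces (the placeholder/compute definitions and the `size` computation) are filled in as assumptions. `tir.mem_copy` is taken from the test diff and assumed to be the builtin that the Hexagon lowering maps onto a DMA transfer.

   ```python
   import tvm
   from tvm import te


   def intrin_mem_copy(shape, dtype, dst_scope, src_scope):
       """Tensor intrinsic that lowers a cache_read/cache_write stage to a single
       tir.mem_copy call, using tir.address_of instead of access_ptr so that
       N-d (non-flat) buffers do not require a scalar elem_offset."""
       src = te.placeholder(shape, dtype, name="src")
       dst = te.compute(shape, lambda *i: src(*i), name="dst")

       src_buffer = tvm.tir.decl_buffer(
           shape, dtype, scope=src_scope, offset_factor=1, name="mem_copy_src_buffer"
       )
       dst_buffer = tvm.tir.decl_buffer(
           shape, dtype, scope=dst_scope, offset_factor=1, name="mem_copy_dst_buffer"
       )

       # Take the address of element [0, ..., 0]; this works for both flat and
       # 2-d physical layouts.
       zero_indices = [0 for _ in shape]
       size = 1
       for dim in shape:
           size = size * dim

       def intrin_func(ins, outs):
           # The declared buffers are bound through `binds=` below, so the
           # ins/outs arguments are not needed here.
           ib = tvm.tir.ir_builder.create()
           dst_handle = ib.buffer_ptr(dst_buffer)
           src_handle = ib.buffer_ptr(src_buffer)
           ib.emit(
               tvm.tir.call_intrin(
                   "handle",
                   "tir.mem_copy",
                   tvm.tir.call_intrin("handle", "tir.address_of", dst_handle[zero_indices]),
                   tvm.tir.call_intrin("handle", "tir.address_of", src_handle[zero_indices]),
                   size,
               )
           )
           return ib.get()

       return te.decl_tensor_intrin(dst.op, intrin_func, binds={src: src_buffer, dst: dst_buffer})
   ```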


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] Lunderberg commented on pull request #10905: [Hexagon][LLVM] Enable/test tensorized Hexagon DMA

Posted by GitBox <gi...@apache.org>.
Lunderberg commented on PR #10905:
URL: https://github.com/apache/tvm/pull/10905#issuecomment-1089003856

   This change shouldn't depend on the changes introduced in https://github.com/apache/tvm/pull/10878 and https://github.com/apache/tvm/pull/10903, but local testing included those changes as well.




[GitHub] [tvm] csullivan commented on a diff in pull request #10905: [Hexagon][LLVM] Enable/test tensorized Hexagon DMA on 2d transformed layout

Posted by GitBox <gi...@apache.org>.
csullivan commented on code in PR #10905:
URL: https://github.com/apache/tvm/pull/10905#discussion_r845630669


##########
tests/python/contrib/test_hexagon/test_cache_read_write.py:
##########
@@ -38,33 +37,49 @@ def intrin_mem_copy(shape, dtype, dst_scope, src_scope):
         dtype,
         scope=src_scope,
         offset_factor=1,
+        name="mem_copy_src_buffer",
     )
 
     dst_buffer = tvm.tir.decl_buffer(
         shape,
         dtype,
         scope=dst_scope,
         offset_factor=1,
+        name="mem_copy_dst_buffer",
     )
 
+    zero_indices = [0 for _ in shape]
+
     def intrin_func(ins, outs):
         ib = tvm.tir.ir_builder.create()
 
         _src = ins[0]
         _dst = outs[0]
+
+        dst_handle = ib.buffer_ptr(dst_buffer)
+        src_handle = ib.buffer_ptr(src_buffer)
+
         ib.emit(
             tvm.tir.call_intrin(
-                "handle", "tir.mem_copy", _dst.access_ptr("w"), _src.access_ptr("r"), size
+                "handle",
+                "tir.mem_copy",
+                tvm.tir.call_intrin("handle", "tir.address_of", dst_handle[zero_indices]),
+                tvm.tir.call_intrin("handle", "tir.address_of", src_handle[zero_indices]),
+                size,
             )
         )
         return ib.get()
 
     return te.decl_tensor_intrin(dst.op, intrin_func, binds={src: src_buffer, dst: dst_buffer})
 
 
+def layout_transform_2d(n):
+    return [n // 16, te.AXIS_SEPARATOR, n % 16]
+
+
 @requires_hexagon_toolchain
 def test_cache_read_write(hexagon_session):

Review Comment:
   Can we have tests demonstrating the approach for both discontiguous _and_ contiguous memory? I notice this change overwrites the old test coverage, and it would be useful to maintain coverage for both cases.
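
   A hypothetical sketch of how both cases might be structured (not the code that was merged): it assumes the `global.vtcm` scope used by the Hexagon tests, reuses the `layout_transform_2d` helper from the diff above, and assumes TE schedules expose `Stage.transform_layout` alongside `te.AXIS_SEPARATOR`. The contiguous variant simply skips the transform.

   ```python
   import tvm
   from tvm import te


   def layout_transform_2d(n):
       # Split the flat index into [outer, AXIS_SEPARATOR, inner-of-16],
       # producing a discontiguous 2-d physical layout.
       return [n // 16, te.AXIS_SEPARATOR, n % 16]


   def make_schedule(size=128, dtype="int8", transform=None):
       """Build a trivial copy-through-VTCM schedule, optionally applying a
       layout transform to the cached stage."""
       x = te.placeholder((size,), dtype=dtype, name="x")
       y = te.compute((size,), lambda i: x[i], name="y")
       s = te.create_schedule(y.op)
       x_vtcm = s.cache_read(x, "global.vtcm", [y])
       if transform is not None:
           s[x_vtcm].transform_layout(transform)
       return s, [x, y]


   # Contiguous coverage keeps the original flat layout ...
   s_flat, args_flat = make_schedule(transform=None)
   # ... while discontiguous coverage applies the 2-d transform.
   s_2d, args_2d = make_schedule(transform=layout_transform_2d)
   ```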



##########
src/tir/ir/buffer.cc:
##########
@@ -480,8 +479,14 @@ Buffer Buffer::MakeSlice(Array<PrimExpr> begins, Array<PrimExpr> extents) const
       return MakeStrideView().MakeSlice(begins, extents);
     }
   }
-  return Buffer(n->data, n->dtype, extents, strides, elem_offset[0], n->name + "_slice",
-                n->data_alignment, 0, n->buffer_type);
+  Buffer slice(n->data, n->dtype, extents, strides, elem_offset[0], n->name + "_slice",
+               n->data_alignment, 0, n->buffer_type);
+  if (elem_offset.size() != 1) {
+    // Sentinel value for ArgBinder::BindBuffer to state that any usage
+    // of element offset is invalid.
+    slice.CopyOnWrite()->elem_offset = PrimExpr();

Review Comment:
   It would help to expand this comment to explain why the use of element offset is invalid in the N-d case, or, even better, to add a short TODO to update it once the IR/Buffer is changed to support an N-d offset. It took me a little while to understand why this was necessary.





[GitHub] [tvm] Lunderberg commented on a diff in pull request #10905: [Hexagon][LLVM] Enable/test tensorized Hexagon DMA on 2d transformed layout

Posted by GitBox <gi...@apache.org>.
Lunderberg commented on code in PR #10905:
URL: https://github.com/apache/tvm/pull/10905#discussion_r848521532


##########
src/tir/ir/buffer.cc:
##########
@@ -480,8 +479,14 @@ Buffer Buffer::MakeSlice(Array<PrimExpr> begins, Array<PrimExpr> extents) const
       return MakeStrideView().MakeSlice(begins, extents);
     }
   }
-  return Buffer(n->data, n->dtype, extents, strides, elem_offset[0], n->name + "_slice",
-                n->data_alignment, 0, n->buffer_type);
+  Buffer slice(n->data, n->dtype, extents, strides, elem_offset[0], n->name + "_slice",
+               n->data_alignment, 0, n->buffer_type);
+  if (elem_offset.size() != 1) {
+    // Sentinel value for ArgBinder::BindBuffer to state that any usage
+    // of element offset is invalid.
+    slice.CopyOnWrite()->elem_offset = PrimExpr();

Review Comment:
   The change and the TODO look good to me.





[GitHub] [tvm] csullivan commented on pull request #10905: [Hexagon][LLVM] Enable/test tensorized Hexagon DMA on 2d transformed layout

Posted by GitBox <gi...@apache.org>.
csullivan commented on PR #10905:
URL: https://github.com/apache/tvm/pull/10905#issuecomment-1096876149

   Thanks @adstraw @Lunderberg! This is merged.




[GitHub] [tvm] csullivan merged pull request #10905: [Hexagon][LLVM] Enable/test tensorized Hexagon DMA on 2d transformed layout

Posted by GitBox <gi...@apache.org>.
csullivan merged PR #10905:
URL: https://github.com/apache/tvm/pull/10905




[GitHub] [tvm] adstraw commented on a diff in pull request #10905: [Hexagon][LLVM] Enable/test tensorized Hexagon DMA on 2d transformed layout

Posted by GitBox <gi...@apache.org>.
adstraw commented on code in PR #10905:
URL: https://github.com/apache/tvm/pull/10905#discussion_r847809305


##########
tests/python/contrib/test_hexagon/test_cache_read_write.py:
##########
@@ -38,33 +37,49 @@ def intrin_mem_copy(shape, dtype, dst_scope, src_scope):
         dtype,
         scope=src_scope,
         offset_factor=1,
+        name="mem_copy_src_buffer",
     )
 
     dst_buffer = tvm.tir.decl_buffer(
         shape,
         dtype,
         scope=dst_scope,
         offset_factor=1,
+        name="mem_copy_dst_buffer",
     )
 
+    zero_indices = [0 for _ in shape]
+
     def intrin_func(ins, outs):
         ib = tvm.tir.ir_builder.create()
 
         _src = ins[0]
         _dst = outs[0]
+
+        dst_handle = ib.buffer_ptr(dst_buffer)
+        src_handle = ib.buffer_ptr(src_buffer)
+
         ib.emit(
             tvm.tir.call_intrin(
-                "handle", "tir.mem_copy", _dst.access_ptr("w"), _src.access_ptr("r"), size
+                "handle",
+                "tir.mem_copy",
+                tvm.tir.call_intrin("handle", "tir.address_of", dst_handle[zero_indices]),
+                tvm.tir.call_intrin("handle", "tir.address_of", src_handle[zero_indices]),
+                size,
             )
         )
         return ib.get()
 
     return te.decl_tensor_intrin(dst.op, intrin_func, binds={src: src_buffer, dst: dst_buffer})
 
 
+def layout_transform_2d(n):
+    return [n // 16, te.AXIS_SEPARATOR, n % 16]
+
+
 @requires_hexagon_toolchain
 def test_cache_read_write(hexagon_session):

Review Comment:
   Done





[GitHub] [tvm] adstraw commented on a diff in pull request #10905: [Hexagon][LLVM] Enable/test tensorized Hexagon DMA on 2d transformed layout

Posted by GitBox <gi...@apache.org>.
adstraw commented on code in PR #10905:
URL: https://github.com/apache/tvm/pull/10905#discussion_r847809699


##########
src/tir/ir/buffer.cc:
##########
@@ -480,8 +479,14 @@ Buffer Buffer::MakeSlice(Array<PrimExpr> begins, Array<PrimExpr> extents) const
       return MakeStrideView().MakeSlice(begins, extents);
     }
   }
-  return Buffer(n->data, n->dtype, extents, strides, elem_offset[0], n->name + "_slice",
-                n->data_alignment, 0, n->buffer_type);
+  Buffer slice(n->data, n->dtype, extents, strides, elem_offset[0], n->name + "_slice",
+               n->data_alignment, 0, n->buffer_type);
+  if (elem_offset.size() != 1) {
+    // Sentinel value for ArgBinder::BindBuffer to state that any usage
+    // of element offset is invalid.
+    slice.CopyOnWrite()->elem_offset = PrimExpr();

Review Comment:
   @Lunderberg Let me know if you are OK with the comment rewrite here, including the `TODO` I wrote related to PR #10816.


