Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/04/07 23:36:31 UTC

[GitHub] [tvm] csullivan commented on a diff in pull request #10905: [Hexagon][LLVM] Enable/test tensorized Hexagon DMA on 2d transformed layout

csullivan commented on code in PR #10905:
URL: https://github.com/apache/tvm/pull/10905#discussion_r845630669


##########
tests/python/contrib/test_hexagon/test_cache_read_write.py:
##########
@@ -38,33 +37,49 @@ def intrin_mem_copy(shape, dtype, dst_scope, src_scope):
         dtype,
         scope=src_scope,
         offset_factor=1,
+        name="mem_copy_src_buffer",
     )
 
     dst_buffer = tvm.tir.decl_buffer(
         shape,
         dtype,
         scope=dst_scope,
         offset_factor=1,
+        name="mem_copy_dst_buffer",
     )
 
+    zero_indices = [0 for _ in shape]
+
     def intrin_func(ins, outs):
         ib = tvm.tir.ir_builder.create()
 
         _src = ins[0]
         _dst = outs[0]
+
+        dst_handle = ib.buffer_ptr(dst_buffer)
+        src_handle = ib.buffer_ptr(src_buffer)
+
         ib.emit(
             tvm.tir.call_intrin(
-                "handle", "tir.mem_copy", _dst.access_ptr("w"), _src.access_ptr("r"), size
+                "handle",
+                "tir.mem_copy",
+                tvm.tir.call_intrin("handle", "tir.address_of", dst_handle[zero_indices]),
+                tvm.tir.call_intrin("handle", "tir.address_of", src_handle[zero_indices]),
+                size,
             )
         )
         return ib.get()
 
     return te.decl_tensor_intrin(dst.op, intrin_func, binds={src: src_buffer, dst: dst_buffer})
 
 
+def layout_transform_2d(n):
+    return [n // 16, te.AXIS_SEPARATOR, n % 16]
+
+
 @requires_hexagon_toolchain
 def test_cache_read_write(hexagon_session):

Review Comment:
   Can we have a test demonstrating the approach for both discontiguous _and_ contiguous memory? This change overwrites the old test coverage, and it would be useful to maintain coverage for both cases.
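
   One way to keep both paths exercised, as a rough sketch only (the parametrize ids and the extra `layout_transform` argument are hypothetical, not something in this PR):

       import pytest
       from tvm import te


       def layout_transform_2d(n):
           return [n // 16, te.AXIS_SEPARATOR, n % 16]


       @requires_hexagon_toolchain
       @pytest.mark.parametrize(
           "layout_transform",
           [None, layout_transform_2d],
           ids=["contiguous", "discontiguous"],
       )
       def test_cache_read_write(hexagon_session, layout_transform):
           # With layout_transform=None, skip the transform_layout scheduling
           # steps so the original contiguous-memory coverage is preserved;
           # otherwise apply the 2-d transform as this PR does.
           ...

   That would keep the pre-existing contiguous case as a named test id rather than dropping it.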



##########
src/tir/ir/buffer.cc:
##########
@@ -480,8 +479,14 @@ Buffer Buffer::MakeSlice(Array<PrimExpr> begins, Array<PrimExpr> extents) const
       return MakeStrideView().MakeSlice(begins, extents);
     }
   }
-  return Buffer(n->data, n->dtype, extents, strides, elem_offset[0], n->name + "_slice",
-                n->data_alignment, 0, n->buffer_type);
+  Buffer slice(n->data, n->dtype, extents, strides, elem_offset[0], n->name + "_slice",
+               n->data_alignment, 0, n->buffer_type);
+  if (elem_offset.size() != 1) {
+    // Sentinel value for ArgBinder::BindBuffer to state that any usage
+    // of element offset is invalid.
+    slice.CopyOnWrite()->elem_offset = PrimExpr();

Review Comment:
   It would help to expand this comment to explain why use of the element offset is invalid in the N-d case, or better yet to add a short TODO to update it once the IR/Buffer is changed to support an N-d offset. It took me a little while to understand why this was necessary.
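
   For example, something along these lines (just a sketch of the kind of wording and TODO meant here; the exact explanation is the author's call):

       if (elem_offset.size() != 1) {
         // With axis separators the buffer flattens to more than one region,
         // so the element offset has one component per region and the single
         // scalar elem_offset field on Buffer cannot represent it.  Use an
         // undefined PrimExpr as a sentinel so that ArgBinder::BindBuffer
         // treats any use of the element offset as invalid.
         // TODO: once Buffer supports an N-d element offset, store it here
         // and remove this sentinel.
         slice.CopyOnWrite()->elem_offset = PrimExpr();
       }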



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org