Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/08/24 14:57:26 UTC

[GitHub] [tvm] cconvey opened a new pull request, #12574: [WIP] Alloc 2tier

cconvey opened a new pull request, #12574:
URL: https://github.com/apache/tvm/pull/12574

   Add support for "global.ddr" memory scope for Hexagon.
   
   (Much better commit message needed.)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] cconvey commented on pull request #12574: Alloc 2tier

Posted by GitBox <gi...@apache.org>.
cconvey commented on PR #12574:
URL: https://github.com/apache/tvm/pull/12574#issuecomment-1230640924

   @Lunderberg @csullivan @kparzysz-quic : Heads up, this should be ready for review once CI passes.




[GitHub] [tvm] cconvey commented on pull request #12574: alloc 2tier

Posted by GitBox <gi...@apache.org>.
cconvey commented on PR #12574:
URL: https://github.com/apache/tvm/pull/12574#issuecomment-1234861334

   @csullivan @jverma-quic : request for review




[GitHub] [tvm] tmoreau89 commented on pull request #12574: [Hexagon][Runtime] Better support for 2-tier memory

Posted by GitBox <gi...@apache.org>.
tmoreau89 commented on PR #12574:
URL: https://github.com/apache/tvm/pull/12574#issuecomment-1238282412

   Thank you @cconvey @csullivan @adstraw @nverke, the PR has been merged.




[GitHub] [tvm] cconvey commented on a diff in pull request #12574: Alloc 2tier

Posted by GitBox <gi...@apache.org>.
cconvey commented on code in PR #12574:
URL: https://github.com/apache/tvm/pull/12574#discussion_r957641819


##########
tests/python/contrib/test_hexagon/infrastructure.py:
##########
@@ -47,6 +47,19 @@ def allocate_hexagon_array(
     if axis_separators is None:
         axis_separators = []
 
+    # We're abusing the notion of "shape" here: it no longer means the logical shape of the
+    # tensor, but rather the number of memory-allocation tiers.
+    # So the new shape's rank will be (# of axis separators + 1).
+    #
+    # The Hexagon runtime will assume that this flattening has already occurred, and will
+    # therefore interpret the tensor's rank as an indication of the # of allocation tiers.
+    #
+    # Note that when the # of axis separators == 1:
+    # - This host-side code still creates a single memory allocation in which every
+    #   element is actual tensor content.
+    # - The Hexagon runtime code will need to allocate/populate memory in a true
+    #   2D format, i.e., it will need to add a top-tier buffer holding pointers into
+    #   the bottom-tier tensor contents.

Review Comment:
   I may have misunderstood a detail about 2D memory allocations:  whether or not there _must_ be a top-level tensor-of-pointers to the individual tile allocations.  I might want to update this comment before merging the PR.
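   The tier-counting rule described in the quoted comment (flattened rank = # of axis separators + 1) can be sketched standalone. The following is plain Python, independent of TVM's actual `allocate_hexagon_array` implementation; `flatten_shape` is a hypothetical helper used only for illustration:
   ```python
   # Standalone sketch (no TVM dependency) of how axis separators split a
   # logical shape into per-tier extents: each tier's extent is the product
   # of the logical dims between two adjacent separators, so the flattened
   # rank is (# of axis separators + 1).
   from functools import reduce
   from operator import mul


   def flatten_shape(logical_shape, axis_separators):
       """Return the flattened shape implied by the axis separators."""
       bounds = [0] + list(axis_separators) + [len(logical_shape)]
       return [
           reduce(mul, logical_shape[lo:hi], 1)
           for lo, hi in zip(bounds[:-1], bounds[1:])
       ]


   # One separator after axis 2 -> a rank-2 "2-tier" physical shape.
   print(flatten_shape([2, 32, 8, 8], [2]))  # [64, 64]
   print(flatten_shape([128, 128], []))      # [16384]
   ```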





[GitHub] [tvm] cconvey commented on pull request #12574: [WIP] alloc 2tier

Posted by GitBox <gi...@apache.org>.
cconvey commented on PR #12574:
URL: https://github.com/apache/tvm/pull/12574#issuecomment-1233171306

   This might need fixing: 
   ```
   tests/python/contrib/test_hexagon/test_2d_physical_buffers.py::TestElementWise::test_execute[nchw-8h8w32c-1d-nchw-8h8w32c-1d-int8-2-32-8x8-nhwc-global.vtcm-hexagon] [09:37:00] /home/cconvey/r/cconvey/tvm/alloc-2tier/src/target/target_info.cc:45: Warning: MemoryInfo for scope = global.vtcm is undefined
   [09:37:00] /home/cconvey/r/cconvey/tvm/alloc-2tier/src/target/target_info.cc:45: Warning: MemoryInfo for scope = global.vtcm is undefined
   ```




[GitHub] [tvm] tmoreau89 merged pull request #12574: [Hexagon][Runtime] Better support for 2-tier memory

Posted by GitBox <gi...@apache.org>.
tmoreau89 merged PR #12574:
URL: https://github.com/apache/tvm/pull/12574




[GitHub] [tvm] cconvey commented on pull request #12574: Alloc 2tier

Posted by GitBox <gi...@apache.org>.
cconvey commented on PR #12574:
URL: https://github.com/apache/tvm/pull/12574#issuecomment-1230832816

   @csullivan @Lunderberg @adstraw @kparzysz-quic: FYI if anyone wants to review




[GitHub] [tvm] cconvey commented on pull request #12574: [WIP] alloc 2tier

Posted by GitBox <gi...@apache.org>.
cconvey commented on PR #12574:
URL: https://github.com/apache/tvm/pull/12574#issuecomment-1234706451

   > This might need fixing:
   > 
   > ```
   > tests/python/contrib/test_hexagon/test_2d_physical_buffers.py::TestElementWise::test_execute[nchw-8h8w32c-1d-nchw-8h8w32c-1d-int8-2-32-8x8-nhwc-global.vtcm-hexagon] [09:37:00] /home/cconvey/r/cconvey/tvm/alloc-2tier/src/target/target_info.cc:45: Warning: MemoryInfo for scope = global.vtcm is undefined
   > [09:37:00] /home/cconvey/r/cconvey/tvm/alloc-2tier/src/target/target_info.cc:45: Warning: MemoryInfo for scope = global.vtcm is undefined
   > ```
   
   I discussed this via DM with @csullivan. His impression is that having a supporting `MemoryInfo` definition for a given memory scope can be useful, but isn't required. So I'm not planning to address this warning in this PR.




[GitHub] [tvm] csullivan commented on a diff in pull request #12574: alloc 2tier

Posted by GitBox <gi...@apache.org>.
csullivan commented on code in PR #12574:
URL: https://github.com/apache/tvm/pull/12574#discussion_r961840839


##########
tests/python/contrib/test_hexagon/test_memory_alloc.py:
##########
@@ -0,0 +1,85 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import os
+import os.path
+import sys
+import tempfile
+
+import numpy as np
+import pytest
+
+import tvm
+from tvm.script import tir as T
+
+from .infrastructure import allocate_hexagon_array
+
+_HEXAGON_TARGET = tvm.target.hexagon("v69", link_params=True)
+
+
+@tvm.testing.fixture
+def generated_func(shape, scope, dtype, axis_separators):
+    dim0, dim1 = shape
+
+    @T.prim_func
+    def elwise(a: T.handle, b: T.handle):
+        A = T.match_buffer(a, shape, dtype=dtype, axis_separators=axis_separators)
+        B = T.match_buffer(b, shape, dtype=dtype, axis_separators=axis_separators)
+
+        for i, j in T.grid(dim0, dim1):
+            with T.block("compute"):
+                B[i, j] = A[i, j] * T.cast(2, dtype=dtype)
+
+    return elwise
+
+
+class TestMemoryAlloc:
+    dtype = tvm.testing.parameter("int8")
+    shape = tvm.testing.parameter((128, 128))
+
+    (scope, axis_separators,) = tvm.testing.parameters(
+        ("global", []),

Review Comment:
   nit: Could add an xfail test for ("global", [1]) to indicate this expectation explicitly. 
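   The reviewer's suggestion could be sketched with plain pytest (shown here independently of TVM's `tvm.testing.parameters` helper; the test function and xfail reason are hypothetical stand-ins):
   ```python
   # Minimal pytest-only sketch of marking one parameter combination as an
   # expected failure, as suggested for ("global", [1]).
   import pytest


   @pytest.mark.parametrize(
       "scope, axis_separators",
       [
           ("global", []),
           pytest.param(
               "global",
               [1],
               marks=pytest.mark.xfail(
                   reason="axis separators are not expected to work "
                   "with plain 'global' scope"
               ),
           ),
       ],
   )
   def test_scope_axis_separator_combo(scope, axis_separators):
       # Stand-in assertion: the real test would allocate a Hexagon
       # array and run the generated function.
       assert not (scope == "global" and axis_separators == [1])
   ```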




