Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/09/23 22:39:00 UTC

[GitHub] [tvm] denise-k commented on a change in pull request #9065: Move the allocates of AoT codegen to be TVMBAWs

denise-k commented on a change in pull request #9065:
URL: https://github.com/apache/tvm/pull/9065#discussion_r715202163



##########
File path: tests/python/relay/aot/test_crt_aot.py
##########
@@ -589,5 +590,41 @@ def test_memory_planning(workspace_byte_alignment, main_workspace_size, sum_work
     )
 
 
+def test_aot_codegen_backend_alloc_workspace_calls():
+    dtype = "float32"
+
+    # These shapes should create small tensors that would
+    # get lowered to stack allocations in the CPU PrimFuncs.
+    # However, the AoT executor codegen should retain them
+    # as TVMBackendAllocWorkspace (TVMBAW) calls.
+    ishape = (1, 4, 4, 4)
+    wshape = (4, 4, 3, 3)
+
+    data0 = relay.var("data", shape=ishape, dtype=dtype)
+    weight0 = relay.var("weight", shape=wshape, dtype=dtype)
+    out = relay.nn.conv2d(data0, weight0, kernel_size=(3, 3), padding=(1, 1), groups=1)
+    main_f = relay.Function([data0, weight0], out)
+    mod = tvm.IRModule()
+    mod["main"] = main_f
+    mod = transform.InferType()(mod)
+
+    i_data = np.random.uniform(0, 1, ishape).astype(dtype)
+    w1_data = np.random.uniform(0, 1, wshape).astype(dtype)
+
+    inputs = OrderedDict([("data", i_data), ("weight", w1_data)])
+    output_list = generate_ref_data(mod, inputs)
+
+    compiled_runtime_modules = compile_models(
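The intent of the test above is that workspace allocations survive in the AoT-generated C source as `TVMBackendAllocWorkspace` calls instead of being lowered to stack arrays. A minimal, self-contained sketch of that core assertion is below; the `generated_source` string is a hypothetical excerpt for illustration only (in the real test it would come from the compiled runtime module's source), and the expected count of two allocations is an assumption, not taken from the PR:

```python
import re

# Hypothetical excerpt of AoT-generated C source (illustrative only;
# the actual source is obtained from the compiled runtime module).
generated_source = """
void* sid_1 = TVMBackendAllocWorkspace(1, 0, (uint64_t)1024, 2, 32);
void* sid_2 = TVMBackendAllocWorkspace(1, 0, (uint64_t)256, 2, 32);
TVMBackendFreeWorkspace(1, 0, sid_2);
TVMBackendFreeWorkspace(1, 0, sid_1);
"""

# Core assertion: allocations appear as TVMBAW calls rather than
# stack allocations, and each workspace is also released.
allocs = re.findall(r"TVMBackendAllocWorkspace", generated_source)
frees = re.findall(r"TVMBackendFreeWorkspace", generated_source)
assert len(allocs) == 2
assert len(allocs) == len(frees)
```

In the actual test, the same style of string match would be applied to the source emitted by the AoT codegen for the convolution module built above.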

Review comment:
       @areusch roadmap item and task tracking have been created.



