Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/12/23 12:53:16 UTC

[GitHub] [tvm] ashutosh-arm opened a new pull request, #13655: [AOT] Added a test for detecting output size post MLF export

ashutosh-arm opened a new pull request, #13655:
URL: https://github.com/apache/tvm/pull/13655

   Follow up: https://github.com/apache/tvm/pull/12789
   
   * Added a test to detect output size from MLF codegen.
   * Updated test harness AOTTestRunner to detect correct size from IO arrays.
   * MLF size was not used in aot.py because it is unavailable when packed APIs are used.
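
   The size check added by this PR boils down to comparing the `_SIZE` value emitted into the MLF header against a byte count computed from the reference output. A minimal sketch of that computation (shapes taken from the test added in this PR; variable names illustrative):

```python
import numpy as np

# The test's conv2d output has shape (1, 32, 14, 14) in float32, so the
# generated header's _SIZE definition should carry size * itemsize bytes.
output = np.zeros((1, 32, 14, 14), dtype="float32")
ref_output_size = output.size * np.dtype("float32").itemsize
assert ref_output_size == 1 * 32 * 14 * 14 * 4  # 25088 bytes
```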
   
   
   cc @Mousius 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] Mousius commented on pull request #13655: [AOT] Added a test for detecting output size post MLF export

Posted by GitBox <gi...@apache.org>.
Mousius commented on PR #13655:
URL: https://github.com/apache/tvm/pull/13655#issuecomment-1376015692

   LGTM! Thanks @ashutosh-arm !




[GitHub] [tvm] Mousius commented on a diff in pull request #13655: [AOT] Added a test for detecting output size post MLF export

Posted by GitBox <gi...@apache.org>.
Mousius commented on code in PR #13655:
URL: https://github.com/apache/tvm/pull/13655#discussion_r1060783847


##########
tests/python/relay/aot/test_crt_aot.py:
##########
@@ -225,6 +225,64 @@ def test_packed_global_variables():
             assert f"{func}_packed" not in tvmgen_names
 
 
+def test_io_size_definition():
+    """Check network IO size definitions in the codegen output."""
+    dtype = "float32"
+    ishape = (1, 32, 14, 14)
+    wshape = (32, 32, 3, 3)
+    interface_api = "c"
+    use_unpacked_api = True
+
+    data0 = relay.var("data", shape=ishape, dtype=dtype)
+    weight0 = relay.var("weight", shape=wshape, dtype=dtype)
+    out = relay.nn.conv2d(data0, weight0, kernel_size=(3, 3), padding=(1, 1), groups=1)
+    main_f = relay.Function([data0, weight0], out)
+    mod = tvm.IRModule()
+    mod["main"] = main_f
+    mod = transform.InferType()(mod)
+
+    i_data = np.random.uniform(0, 1, ishape).astype(dtype)
+    w1_data = np.random.uniform(0, 1, wshape).astype(dtype)
+
+    inputs = OrderedDict([("data", i_data), ("weight", w1_data)])
+
+    output_list = generate_ref_data(mod, inputs)
+    compiled_models_list = compile_models(
+        models=AOTTestModel(module=mod, inputs=inputs, outputs=output_list),
+        interface_api=interface_api,
+        use_unpacked_api=use_unpacked_api,
+        workspace_byte_alignment=8,
+        enable_op_fusion=True,
+        pass_config=AOT_DEFAULT_RUNNER.pass_config,
+        use_runtime_executor=True,
+        target=tvm.target.Target("c"),
+    )
+    ref_output_size = output_list["output"].size * np.dtype(dtype).itemsize
+    compiled_model = compiled_models_list[0]
+
+    tmp_path = utils.tempdir()
+    base_path = tmp_path.temp_dir
+
+    model = compiled_model.model
+    tar_file = os.path.join(base_path, f"{model.name}.tar")
+    export_model_library_format(compiled_model.executor_factory, tar_file)
+    t = tarfile.open(tar_file)
+    t.extractall(base_path)
+
+    file_list = []
+    for path in (pathlib.Path(base_path) / "codegen" / "host" / "include").iterdir():
+        if path.is_file():
+            file_list.append(path)

Review Comment:
   That sounds wrong. Shouldn't there be a `tvmgen_model1.h` and a `tvmgen_model2.h`?





[GitHub] [tvm] ashutosh-arm commented on a diff in pull request #13655: [AOT] Added a test for detecting output size post MLF export

Posted by GitBox <gi...@apache.org>.
ashutosh-arm commented on code in PR #13655:
URL: https://github.com/apache/tvm/pull/13655#discussion_r1060728337


##########
tests/python/relay/aot/test_crt_aot.py:
##########
[diff hunk identical to the one quoted earlier in this thread; elided]

Review Comment:
   Oh, there is just one header in those cases too: both models' sizes appear in a single header, so it need not be tested additionally.





[GitHub] [tvm] Mousius merged pull request #13655: [AOT] Added a test for detecting output size post MLF export

Posted by GitBox <gi...@apache.org>.
Mousius merged PR #13655:
URL: https://github.com/apache/tvm/pull/13655




[GitHub] [tvm] ashutosh-arm commented on a diff in pull request #13655: [AOT] Added a test for detecting output size post MLF export

Posted by GitBox <gi...@apache.org>.
ashutosh-arm commented on code in PR #13655:
URL: https://github.com/apache/tvm/pull/13655#discussion_r1060568532


##########
tests/python/relay/aot/test_crt_aot.py:
##########
[diff hunk identical to the one quoted earlier in this thread; elided]

Review Comment:
   I will extend the check to the inputs. We could directly look for the file, but I thought that check may not work for multiple models. It does, so I will update that too.





[GitHub] [tvm] Mousius commented on a diff in pull request #13655: [AOT] Added a test for detecting output size post MLF export

Posted by GitBox <gi...@apache.org>.
Mousius commented on code in PR #13655:
URL: https://github.com/apache/tvm/pull/13655#discussion_r1056828877


##########
python/tvm/micro/model_library_format.py:
##########
@@ -485,6 +485,12 @@ def _export_graph_model_library_format(
                     "functions"
                 ]["main"][0]["outputs"][key]
 
+            input_name_to_size_map = {}
+            output_name_to_size_map = {}
+            for name, property_map in inputs_sizes.items():
+                input_name_to_size_map.update({name: property_map["size"]})
+            for name, property_map in output_sizes.items():
+                output_name_to_size_map.update({name: property_map["size"]})

Review Comment:
   ```suggestion
               input_name_to_size_map = {
                  name: property_map["size"]
                  for name, property_map in inputs_sizes.items()
               }
               output_name_to_size_map = {
                 name: property_map["size"]
                 for name, property_map in output_sizes.items()
               }
   ```
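
   The suggested comprehension is behaviorally identical to the loop it replaces. A standalone sketch with made-up property maps (the real ones come from the MLF metadata; the sizes here are the test model's input byte counts, used purely for illustration):

```python
# Hypothetical stand-in for the metadata's inputs_sizes mapping.
inputs_sizes = {
    "data": {"size": 25088, "dtype": "float32"},    # 1*32*14*14*4 bytes
    "weight": {"size": 36864, "dtype": "float32"},  # 32*32*3*3*4 bytes
}

# Loop form (as originally written in the PR).
input_name_to_size_map = {}
for name, property_map in inputs_sizes.items():
    input_name_to_size_map.update({name: property_map["size"]})

# Comprehension form (as suggested).
comprehension_map = {
    name: property_map["size"] for name, property_map in inputs_sizes.items()
}

assert input_name_to_size_map == comprehension_map  # same result, less code
```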



##########
python/tvm/micro/model_library_format.py:
##########
@@ -494,8 +500,10 @@ def _export_graph_model_library_format(
                 devices,
                 workspace_size,
                 include_path,
-                inputs_sizes,
-                output_sizes,
+                # inputs_sizes,
+                # output_sizes,

Review Comment:
   ```suggestion
   ```
   
   (Just delete these, no need to leave them here)



##########
python/tvm/testing/aot.py:
##########
@@ -415,24 +415,23 @@ def _emit_main_compare(main_file, outputs, output_tolerance, mod_name, use_inter
             comparison_function = "fabs"
             tolerance = output_tolerance or 0.001
 
-        data_length_var_name = (
-            _mangle_name(mod_name, f"output_data_{sanitized_tensor_name}") + "_len"
+        actual_data_name = _mangle_name(mod_name, f"output_data_{sanitized_tensor_name}")
+        data_len_var = actual_data_name + "_len"
+        main_file.write(
+            f"const size_t {data_len_var}"
+            f"= sizeof({actual_data_name})/sizeof({actual_data_name}[0]);\n"

Review Comment:
   We generate `actual_data_name` and `data_length_var_name` in the inputs/outputs already; why are we recalculating them here based on types we've defined within the AOT test harness?
   
   We probably need to add a `_LEN` macro to the MLF header?





[GitHub] [tvm] ashutosh-arm commented on a diff in pull request #13655: [AOT] Added a test for detecting output size post MLF export

Posted by GitBox <gi...@apache.org>.
ashutosh-arm commented on code in PR #13655:
URL: https://github.com/apache/tvm/pull/13655#discussion_r1060568918


##########
tests/python/relay/aot/test_crt_aot.py:
##########
@@ -225,6 +225,64 @@ def test_packed_global_variables():
             assert f"{func}_packed" not in tvmgen_names
 
 
+def test_io_size_definition():
+    """Check network IO size definitions in the codegen output."""
+    dtype = "float32"
+    ishape = (1, 32, 14, 14)
+    wshape = (32, 32, 3, 3)
+    interface_api = "c"
+    use_unpacked_api = True
+
+    data0 = relay.var("data", shape=ishape, dtype=dtype)
+    weight0 = relay.var("weight", shape=wshape, dtype=dtype)
+    out = relay.nn.conv2d(data0, weight0, kernel_size=(3, 3), padding=(1, 1), groups=1)
+    main_f = relay.Function([data0, weight0], out)
+    mod = tvm.IRModule()
+    mod["main"] = main_f
+    mod = transform.InferType()(mod)
+
+    i_data = np.random.uniform(0, 1, ishape).astype(dtype)
+    w1_data = np.random.uniform(0, 1, wshape).astype(dtype)
+
+    inputs = OrderedDict([("data", i_data), ("weight", w1_data)])
+
+    output_list = generate_ref_data(mod, inputs)
+    compiled_models_list = compile_models(
+        models=AOTTestModel(module=mod, inputs=inputs, outputs=output_list),
+        interface_api=interface_api,
+        use_unpacked_api=use_unpacked_api,
+        workspace_byte_alignment=8,
+        enable_op_fusion=True,
+        pass_config=AOT_DEFAULT_RUNNER.pass_config,
+        use_runtime_executor=True,
+        target=tvm.target.Target("c"),
+    )
+    ref_output_size = output_list["output"].size * np.dtype(dtype).itemsize
+    compiled_model = compiled_models_list[0]
+
+    tmp_path = utils.tempdir()
+    base_path = tmp_path.temp_dir
+
+    model = compiled_model.model
+    tar_file = os.path.join(base_path, f"{model.name}.tar")
+    export_model_library_format(compiled_model.executor_factory, tar_file)
+    t = tarfile.open(tar_file)
+    t.extractall(base_path)
+
+    file_list = []
+    for path in (pathlib.Path(base_path) / "codegen" / "host" / "include").iterdir():
+        if path.is_file():
+            file_list.append(path)
+    assert len(file_list) > 0
+
+    for path in file_list:
+        with open(path, "r") as header:
+            contents = header.readlines()
+            contents = "".join(map(str, contents))
+            assert contents.count("_SIZE") == 4
+            assert str(ref_output_size) in contents

Review Comment:
   I tried doing that initially. Any shortcuts to do that?





[GitHub] [tvm] Mousius commented on a diff in pull request #13655: [AOT] Added a test for detecting output size post MLF export

Posted by GitBox <gi...@apache.org>.
Mousius commented on code in PR #13655:
URL: https://github.com/apache/tvm/pull/13655#discussion_r1060609692


##########
tests/python/relay/aot/test_crt_aot.py:
##########
[diff hunk identical to the one quoted earlier in this thread; elided]

Review Comment:
   I assume this just requires looking for both headers?





[GitHub] [tvm] ashutosh-arm commented on a diff in pull request #13655: [AOT] Added a test for detecting output size post MLF export

Posted by GitBox <gi...@apache.org>.
ashutosh-arm commented on code in PR #13655:
URL: https://github.com/apache/tvm/pull/13655#discussion_r1060925127


##########
tests/python/relay/aot/test_crt_aot.py:
##########
[diff hunk identical to the one quoted earlier in this thread; elided]

Review Comment:
   My bad. I confused this with the multi-model test, which it is not. In the multi-model test, I do see two separate headers being produced.





[GitHub] [tvm] ashutosh-arm commented on a diff in pull request #13655: [AOT] Added a test for detecting output size post MLF export

Posted by GitBox <gi...@apache.org>.
ashutosh-arm commented on code in PR #13655:
URL: https://github.com/apache/tvm/pull/13655#discussion_r1058918167


##########
python/tvm/micro/model_library_format.py:
##########
@@ -485,6 +485,12 @@ def _export_graph_model_library_format(
                     "functions"
                 ]["main"][0]["outputs"][key]
 
+            input_name_to_size_map = {}
+            output_name_to_size_map = {}
+            for name, property_map in inputs_sizes.items():
+                input_name_to_size_map.update({name: property_map["size"]})
+            for name, property_map in output_sizes.items():
+                output_name_to_size_map.update({name: property_map["size"]})

Review Comment:
   ACK



##########
python/tvm/micro/model_library_format.py:
##########
@@ -494,8 +500,10 @@ def _export_graph_model_library_format(
                 devices,
                 workspace_size,
                 include_path,
-                inputs_sizes,
-                output_sizes,
+                # inputs_sizes,
+                # output_sizes,

Review Comment:
   ACK





[GitHub] [tvm] ashutosh-arm commented on a diff in pull request #13655: [AOT] Added a test for detecting output size post MLF export

Posted by GitBox <gi...@apache.org>.
ashutosh-arm commented on code in PR #13655:
URL: https://github.com/apache/tvm/pull/13655#discussion_r1059341351


##########
python/tvm/testing/aot.py:
##########
@@ -415,24 +415,23 @@ def _emit_main_compare(main_file, outputs, output_tolerance, mod_name, use_inter
             comparison_function = "fabs"
             tolerance = output_tolerance or 0.001
 
-        data_length_var_name = (
-            _mangle_name(mod_name, f"output_data_{sanitized_tensor_name}") + "_len"
+        actual_data_name = _mangle_name(mod_name, f"output_data_{sanitized_tensor_name}")
+        data_len_var = actual_data_name + "_len"
+        main_file.write(
+            f"const size_t {data_len_var}"
+            f"= sizeof({actual_data_name})/sizeof({actual_data_name}[0]);\n"

Review Comment:
   As discussed offline, the two variables `LEN` and `SIZE` serve different purposes. `SIZE` will come from MLF export whereas `LEN` will be part of the AOT test harness.
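
   The distinction matters because the two numbers differ by the element width. A quick numpy illustration (array shape borrowed from the test; variable names illustrative):

```python
import numpy as np

out = np.zeros((1, 32, 14, 14), dtype="float32")

# _len: element count, used by the AOT test harness to iterate and compare.
data_len = out.size                        # 6272 elements
# _SIZE: byte count, exported through the MLF header for buffer allocation.
data_size = out.size * out.dtype.itemsize  # 25088 bytes

assert data_size == data_len * 4  # float32 is 4 bytes per element
```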





[GitHub] [tvm] Mousius commented on a diff in pull request #13655: [AOT] Added a test for detecting output size post MLF export

Posted by GitBox <gi...@apache.org>.
Mousius commented on code in PR #13655:
URL: https://github.com/apache/tvm/pull/13655#discussion_r1060464940


##########
tests/python/relay/aot/test_crt_aot.py:
##########
[diff hunk identical to the one quoted earlier in this thread; elided]

Review Comment:
   We should probably check that the `_SIZE` values match the appropriate constants, rather than just checking that they appear in the same file together?
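
   One way to do the stricter check (a sketch, not the PR's final code): parse the `#define ..._SIZE` lines out of the header text and compare each value directly. The macro names and header excerpt below are hypothetical; the byte counts are derived from the test's float32 shapes:

```python
import re

# Hypothetical excerpt of a generated MLF interface header.
header_contents = """
#define TVMGEN_DEFAULT_DATA_SIZE 25088
#define TVMGEN_DEFAULT_WEIGHT_SIZE 36864
#define TVMGEN_DEFAULT_OUTPUT_SIZE 25088
"""

# Map macro name -> declared byte count.
sizes = {
    name: int(value)
    for name, value in re.findall(r"#define (\w+_SIZE) (\d+)", header_contents)
}

expected = {
    "TVMGEN_DEFAULT_DATA_SIZE": 1 * 32 * 14 * 14 * 4,   # float32 input
    "TVMGEN_DEFAULT_WEIGHT_SIZE": 32 * 32 * 3 * 3 * 4,  # float32 weights
    "TVMGEN_DEFAULT_OUTPUT_SIZE": 1 * 32 * 14 * 14 * 4, # float32 output
}
assert sizes == expected
```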



##########
tests/python/relay/aot/test_crt_aot.py:
##########
[diff hunk identical to the one quoted earlier in this thread; elided]

Review Comment:
   Can we also do this for the input sizes?



##########
tests/python/relay/aot/test_crt_aot.py:
##########
[diff hunk identical to the one quoted earlier in this thread; elided]

Review Comment:
   Given we know the `model_name` can we not just look for `tvmgen_{model_name}.h` rather than looping?
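
   A direct lookup could look like this (a sketch only; the `tvmgen_{model_name}.h` naming follows the suggestion above, and the directory layout mirrors the extracted MLF archive):

```python
import pathlib
import tempfile

# Stand-in for the extracted MLF archive's include directory.
base_path = pathlib.Path(tempfile.mkdtemp())
include_dir = base_path / "codegen" / "host" / "include"
include_dir.mkdir(parents=True)

# Pretend the export produced the interface header for model "default".
model_name = "default"
(include_dir / f"tvmgen_{model_name}.h").write_text(
    "#define TVMGEN_DEFAULT_OUTPUT_SIZE 25088\n"
)

# Direct lookup instead of iterating over every file in the directory.
header = include_dir / f"tvmgen_{model_name}.h"
assert header.is_file()
```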





[GitHub] [tvm] Mousius commented on a diff in pull request #13655: [AOT] Added a test for detecting output size post MLF export

Posted by GitBox <gi...@apache.org>.
Mousius commented on code in PR #13655:
URL: https://github.com/apache/tvm/pull/13655#discussion_r1060610992


##########
tests/python/relay/aot/test_crt_aot.py:
##########
@@ -225,6 +225,64 @@ def test_packed_global_variables():
             assert f"{func}_packed" not in tvmgen_names
 
 
+def test_io_size_definition():
+    """Check network IO size definitions in the codegen output."""
+    dtype = "float32"
+    ishape = (1, 32, 14, 14)
+    wshape = (32, 32, 3, 3)
+    interface_api = "c"
+    use_unpacked_api = True
+
+    data0 = relay.var("data", shape=ishape, dtype=dtype)
+    weight0 = relay.var("weight", shape=wshape, dtype=dtype)
+    out = relay.nn.conv2d(data0, weight0, kernel_size=(3, 3), padding=(1, 1), groups=1)
+    main_f = relay.Function([data0, weight0], out)
+    mod = tvm.IRModule()
+    mod["main"] = main_f
+    mod = transform.InferType()(mod)
+
+    i_data = np.random.uniform(0, 1, ishape).astype(dtype)
+    w1_data = np.random.uniform(0, 1, wshape).astype(dtype)
+
+    inputs = OrderedDict([("data", i_data), ("weight", w1_data)])
+
+    output_list = generate_ref_data(mod, inputs)
+    compiled_models_list = compile_models(
+        models=AOTTestModel(module=mod, inputs=inputs, outputs=output_list),
+        interface_api=interface_api,
+        use_unpacked_api=use_unpacked_api,
+        workspace_byte_alignment=8,
+        enable_op_fusion=True,
+        pass_config=AOT_DEFAULT_RUNNER.pass_config,
+        use_runtime_executor=True,
+        target=tvm.target.Target("c"),
+    )
+    ref_output_size = output_list["output"].size * np.dtype(dtype).itemsize
+    compiled_model = compiled_models_list[0]
+
+    tmp_path = utils.tempdir()
+    base_path = tmp_path.temp_dir
+
+    model = compiled_model.model
+    tar_file = os.path.join(base_path, f"{model.name}.tar")
+    export_model_library_format(compiled_model.executor_factory, tar_file)
+    t = tarfile.open(tar_file)
+    t.extractall(base_path)
+
+    file_list = []
+    for path in (pathlib.Path(base_path) / "codegen" / "host" / "include").iterdir():
+        if path.is_file():
+            file_list.append(path)
+    assert len(file_list) > 0
+
+    for path in file_list:
+        with open(path, "r") as header:
+            contents = header.readlines()
+            contents = "".join(map(str, contents))
+            assert contents.count("_SIZE") == 4
+            assert str(ref_output_size) in contents

Review Comment:
   Something like:
   ```
   assert contents.count("_SIZE") == 4
   assert f"INPUT_1_SIZE {ref_output_size}" in contents
   ```
   ?
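
   A slightly stricter variant of this check could tie the expected byte count to the macro that defines it, so an unrelated number elsewhere in the header cannot satisfy the assertion. A minimal sketch, assuming the emitted macro name contains `OUTPUT` and ends in `_SIZE` (the exact macro name is an assumption, not taken from the PR):

   ```python
   import re


   def output_size_defined(contents, ref_output_size):
       # Match a generated line such as:
       #   #define TVMGEN_DEFAULT_OUTPUT_SIZE 25088
       # anchoring the byte count to the *_SIZE macro definition itself.
       pattern = rf"#define\s+\w*OUTPUT\w*_SIZE\s+{ref_output_size}\b"
       return re.search(pattern, contents) is not None
   ```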



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] Mousius commented on a diff in pull request #13655: [AOT] Added a test for detecting output size post MLF export

Posted by GitBox <gi...@apache.org>.
Mousius commented on code in PR #13655:
URL: https://github.com/apache/tvm/pull/13655#discussion_r1056973538


##########
python/tvm/testing/aot.py:
##########
@@ -415,24 +415,23 @@ def _emit_main_compare(main_file, outputs, output_tolerance, mod_name, use_inter
             comparison_function = "fabs"
             tolerance = output_tolerance or 0.001
 
-        data_length_var_name = (
-            _mangle_name(mod_name, f"output_data_{sanitized_tensor_name}") + "_len"
+        actual_data_name = _mangle_name(mod_name, f"output_data_{sanitized_tensor_name}")
+        data_len_var = actual_data_name + "_len"
+        main_file.write(
+            f"const size_t {data_len_var}"
+            f"= sizeof({actual_data_name})/sizeof({actual_data_name}[0]);\n"

Review Comment:
   `_LENGTH` to be clearer 😸 
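
   For context, the harness change under discussion emits the length constant from Python roughly as follows (a sketch based on the quoted diff; the helper name `emit_output_length` is hypothetical, and `_len` is the suffix the reviewer suggests renaming to `_LENGTH`):

   ```python
   def emit_output_length(main_file, actual_data_name):
       # Derive the element count from the C array itself via sizeof, so
       # the emitted constant cannot drift from the reference data's size.
       data_len_var = actual_data_name + "_len"  # "_LENGTH" per the review suggestion
       main_file.write(
           f"const size_t {data_len_var}"
           f" = sizeof({actual_data_name})/sizeof({actual_data_name}[0]);\n"
       )
   ```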





[GitHub] [tvm] tvm-bot commented on pull request #13655: [AOT] Added a test for detecting output size post MLF export

Posted by GitBox <gi...@apache.org>.
tvm-bot commented on PR #13655:
URL: https://github.com/apache/tvm/pull/13655#issuecomment-1363929250

   <!---bot-comment-->
   
   Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @-ing them in a comment.
   
   <!--bot-comment-ccs-start-->
    * cc @Mousius, @alanmacd, @areusch, @lhutton1 <sub>See [#10317](https://github.com/apache/tvm/issues/10317) for details</sub><!--bot-comment-ccs-end-->
   
   <sub>Generated by [tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)</sub>




[GitHub] [tvm] ashutosh-arm commented on pull request #13655: [AOT] Added a test for detecting output size post MLF export

Posted by GitBox <gi...@apache.org>.
ashutosh-arm commented on PR #13655:
URL: https://github.com/apache/tvm/pull/13655#issuecomment-1369602070

   @Mousius could you please take another look now that the LEN definition is included from the AOT test harness?




[GitHub] [tvm] ashutosh-arm commented on a diff in pull request #13655: [AOT] Added a test for detecting output size post MLF export

Posted by GitBox <gi...@apache.org>.
ashutosh-arm commented on code in PR #13655:
URL: https://github.com/apache/tvm/pull/13655#discussion_r1060727759


##########
tests/python/relay/aot/test_crt_aot.py:
##########
@@ -225,6 +225,64 @@ def test_packed_global_variables():
             assert f"{func}_packed" not in tvmgen_names
 
 
+def test_io_size_definition():
+    """Check network IO size definitions in the codegen output."""
+    dtype = "float32"
+    ishape = (1, 32, 14, 14)
+    wshape = (32, 32, 3, 3)
+    interface_api = "c"
+    use_unpacked_api = True
+
+    data0 = relay.var("data", shape=ishape, dtype=dtype)
+    weight0 = relay.var("weight", shape=wshape, dtype=dtype)
+    out = relay.nn.conv2d(data0, weight0, kernel_size=(3, 3), padding=(1, 1), groups=1)
+    main_f = relay.Function([data0, weight0], out)
+    mod = tvm.IRModule()
+    mod["main"] = main_f
+    mod = transform.InferType()(mod)
+
+    i_data = np.random.uniform(0, 1, ishape).astype(dtype)
+    w1_data = np.random.uniform(0, 1, wshape).astype(dtype)
+
+    inputs = OrderedDict([("data", i_data), ("weight", w1_data)])
+
+    output_list = generate_ref_data(mod, inputs)
+    compiled_models_list = compile_models(
+        models=AOTTestModel(module=mod, inputs=inputs, outputs=output_list),
+        interface_api=interface_api,
+        use_unpacked_api=use_unpacked_api,
+        workspace_byte_alignment=8,
+        enable_op_fusion=True,
+        pass_config=AOT_DEFAULT_RUNNER.pass_config,
+        use_runtime_executor=True,
+        target=tvm.target.Target("c"),
+    )
+    ref_output_size = output_list["output"].size * np.dtype(dtype).itemsize
+    compiled_model = compiled_models_list[0]
+
+    tmp_path = utils.tempdir()
+    base_path = tmp_path.temp_dir
+
+    model = compiled_model.model
+    tar_file = os.path.join(base_path, f"{model.name}.tar")
+    export_model_library_format(compiled_model.executor_factory, tar_file)
+    t = tarfile.open(tar_file)
+    t.extractall(base_path)
+
+    file_list = []
+    for path in (pathlib.Path(base_path) / "codegen" / "host" / "include").iterdir():
+        if path.is_file():
+            file_list.append(path)
+    assert len(file_list) > 0
+
+    for path in file_list:
+        with open(path, "r") as header:
+            contents = header.readlines()
+            contents = "".join(map(str, contents))
+            assert contents.count("_SIZE") == 4
+            assert str(ref_output_size) in contents

Review Comment:
   Ah ok. I misunderstood what you were asking for. This makes sense. Thanks for the help.


