Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/11/04 11:13:43 UTC

[GitHub] [incubator-tvm] leandron commented on a change in pull request #6831: [TVMC] add cl support in tvmc runner

leandron commented on a change in pull request #6831:
URL: https://github.com/apache/incubator-tvm/pull/6831#discussion_r517214239



##########
File path: tests/python/driver/tvmc/conftest.py
##########
@@ -70,6 +89,19 @@ def tflite_mobilenet_v1_1_quant(tmpdir_factory):
     return model_file
 
 
+@pytest.fixture(scope="session")
+def tflite_mobilenet_v1_0_25_128(tmpdir_factory):

Review comment:
       I think it is OK to add more models here, but I just want to check: is there anything special about this model that prevents reusing the existing mobilenet already in this file?

##########
File path: tests/python/driver/tvmc/conftest.py
##########
@@ -54,6 +54,25 @@ def get_sample_compiled_module(target_dir):
     return tvmc.compiler.compile_model(model_file, target="llvm")
 
 
+def get_sample_opencl_compiled_module(target_dir):
+    """Support function that returns an OpenCL-compiled TFLite module"""
+    base_url = "https://storage.googleapis.com/download.tensorflow.org/models"
+    model_url = "mobilenet_v1_2018_02_22/mobilenet_v1_0.25_128.tgz"
+    model_file = download_and_untar(
+        "{}/{}".format(base_url, model_url),
+        "mobilenet_v1_0.25_128.tflite",
+        temp_dir=target_dir,
+    )
+    return tvmc.compiler.compile_model(
+        model_file,
+        target="opencl",
+        target_host="llvm",
+        model_format="tflite",
+        dump_code="ll",

Review comment:
       Do we need `dump_code` here? If not needed, I suggest removing it.
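
       As an illustrative sketch only (shown as plain data, without invoking TVM), the keyword arguments left on the `tvmc.compiler.compile_model` call once `dump_code` is dropped would be:

       ```python
       # Sketch: the compile_model keyword arguments after removing dump_code,
       # represented as plain data rather than a real TVM call.
       compile_kwargs = {
           "target": "opencl",
           "target_host": "llvm",
           "model_format": "tflite",
       }
       # dump_code="ll" would only emit LLVM IR alongside the module,
       # which this test never inspects.
       assert "dump_code" not in compile_kwargs
       ```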

##########
File path: python/tvm/driver/tvmc/runner.py
##########
@@ -50,7 +50,7 @@ def add_run_parser(subparsers):
     #      like 'cl', 'webgpu', etc (@leandron)

Review comment:
       ```suggestion
       #      like 'webgpu', etc (@leandron)
       ```

##########
File path: python/tvm/driver/tvmc/compiler.py
##########
@@ -185,11 +185,11 @@ def compile_model(
         with autotvm.apply_history_best(tuning_records):
             with tvm.transform.PassContext(opt_level=3):
                 logger.debug("building relay graph with tuning records")
-                graph_module = relay.build(mod, tvm_target, params=params, target_host=tvm_target)
+                graph_module = relay.build(mod, tvm_target, params=params, target_host=target_host)

Review comment:
       This looks unrelated to the overall CL addition, but since it is a minor variable-replacement fix, I guess we can leave it here - just pointing it out.
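
       For context, a minimal self-contained sketch of why the swap matters (`build` below is a stand-in for `relay.build`, not TVM itself):

       ```python
       # Stand-in for relay.build: records which host target was forwarded.
       def build(mod, target, target_host=None):
           return {"target": target, "target_host": target_host or target}

       tvm_target, target_host = "opencl", "llvm"

       # Before the fix, tvm_target was passed as target_host,
       # so the host code was (wrongly) targeted at OpenCL.
       before = build("mod", tvm_target, target_host=tvm_target)
       # After the fix, the user-supplied host target is honored.
       after = build("mod", tvm_target, target_host=target_host)

       assert before["target_host"] == "opencl"
       assert after["target_host"] == "llvm"
       ```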

##########
File path: tests/python/driver/tvmc/test_runner.py
##########
@@ -96,3 +96,14 @@ def test_run_tflite_module__with_profile__valid_input(
     assert type(outputs) is dict
     assert type(times) is tuple
     assert "output_0" in outputs.keys()
+
+
+def test_run_tflite_module_on_opencl(tflite_opencl_compiled_module_as_tarfile):
+    # some CI environments won't offer TFLite, so skip in case it is not present
+    pytest.importorskip("tflite")
+
+    outputs, times = tvmc.runner.run_module(
+        tflite_opencl_compiled_module_as_tarfile, hostname=None, device="cl"
+    )
+    assert type(outputs) is dict
+    assert type(times) is tuple

Review comment:
       Can you add one more check, to see whether the output tensor was produced? (see line 98 for an example)
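
       The requested check could look like the sketch below; the dict and tuple literals stand in for what `tvmc.runner.run_module` returns, since only the assertion pattern (mirroring line 98 of test_runner.py) matters here:

       ```python
       # Stand-ins for the (outputs, times) pair returned by run_module.
       outputs = {"output_0": [0.1, 0.9]}
       times = (0.01, 0.02)

       assert type(outputs) is dict
       assert type(times) is tuple
       assert "output_0" in outputs.keys()  # extra check: the output tensor exists
       ```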




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org