Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/09/10 11:32:02 UTC

[GitHub] [incubator-tvm] u99127 commented on a change in pull request #6355: [BYOC][ETHOSN] Introduce further operator support

u99127 commented on a change in pull request #6355:
URL: https://github.com/apache/incubator-tvm/pull/6355#discussion_r486264884



##########
File path: tests/python/contrib/test_ethosn/test_networks.py
##########
@@ -0,0 +1,163 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Ethos-N integration end-to-end network tests"""
+
+import pytest
+pytest.importorskip('tflite')
+pytest.importorskip('tensorflow')
+
+from tvm import relay
+from tvm.relay.op.contrib.ethosn import ethosn_available, Available
+from tvm.contrib import download
+import tvm.relay.testing.tf as tf_testing
+import tflite.Model
+from . import infrastructure as tei
+
+
+def _get_tflite_model(tflite_model_path, inputs_dict, dtype):
+    with open(tflite_model_path, 'rb') as f:
+        tflite_model_buffer = f.read()
+
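+    # The attribute that exposes GetRootAsModel differs between versions of the
+    # tflite package, hence the try/except fallback below.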
+    try:
+        tflite_model = tflite.Model.Model.GetRootAsModel(tflite_model_buffer, 0)
+    except AttributeError:
+        tflite_model = tflite.Model.GetRootAsModel(tflite_model_buffer, 0)
+    shape_dict = {}
+    dtype_dict = {}
+    for input_name in inputs_dict:
+        input_shape = inputs_dict[input_name]
+        shape_dict[input_name] = input_shape
+        dtype_dict[input_name] = dtype
+
+    return relay.frontend.from_tflite(
+        tflite_model,
+        shape_dict=shape_dict,
+        dtype_dict=dtype_dict,
+    )
+
+
+def _test_image_network(model_url, model_sub_path, input_dict, compile_hash, output_count, run=True, host_ops=0, npu_partitions=1):
+    if not ethosn_available():
+        return
+
+    def get_model():
+        if model_url[-3:] in ("tgz", "zip"):
+            model_path = tf_testing.get_workload_official(
+                model_url,
+                model_sub_path,
+            )
+        else:
+            model_path = download.download_testdata(
+                model_url,
+                model_sub_path,
+            )
+        return _get_tflite_model(model_path, input_dict, 'uint8')
+
+    outputs = []
+    inputs = {}
+    for input_name in input_dict:
+        input_shape = input_dict[input_name]
+        inputs[input_name] = tei.get_real_image(input_shape[1], input_shape[2])
+
+    for npu in [False, True]:
+        mod, params = get_model()
+        graph, lib, params = tei.build(mod, params, npu=npu, expected_host_ops=host_ops, npu_partitions=npu_partitions)
+        if npu:
+            tei.assert_lib_hash(lib, compile_hash)

Review comment:
       Hi Zhi,
   
    In an ideal world we would run this with hardware in the CI and check that the runtime output of an inference matches known good results.
   
    However, in the absence of testing the runtime output of an inference, I would be less comfortable without a check against known good compile-time output. In static compilers we approximate this by checking against known good assembler output; I view the check against the hashes in a similar vein. Checking against the JSON gives us confidence that something is offloaded, but not enough confidence that the generated code continues to remain suitable for the Ethos-N77.
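    To make concrete the kind of check I mean (illustrative only; this is not the actual `tei.assert_lib_hash` implementation, and the helper name and the use of an md5 digest are assumptions for this sketch):
    ```python
    import hashlib

    def assert_lib_hash_sketch(generated_code, expected_hash):
        """Illustrative sketch: fail if a digest of the compile-time output
        no longer matches a known good value, analogous to checking a static
        compiler against known good assembler output."""
        actual = hashlib.md5(generated_code.encode("utf-8")).hexdigest()
        assert actual == expected_hash, \
            "compile output changed: got {}, expected {}".format(actual, expected_hash)
    ```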
   
    The hashes have been relatively stable and, as far as I remember, have only changed for one of the two reasons below. @mbaret and @Leo-arm can correct me if I've missed something.
   
    1. Changes to the NPUSW library underneath, which only happens with changes to the Dockerfile and is therefore controlled.
    2. Changes from adding support for newer operators, i.e. changes to the Ethos-N port of TVM itself.
   
    There is a theoretical possibility that the hashes change because of fixups for API changes in TVM, but IIRC we haven't seen this in the last 3 months, despite rebasing pretty regularly (more than twice a week) while working on this activity. @mbaret and @Leo-arm can correct my memory.
   
    If it turns out that the hashes are creating friction for developers in the community, maybe we can revisit this.
   
   regards
   Ramana
   
   
   



