Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/09/21 18:49:27 UTC

[GitHub] [tvm] areusch commented on a change in pull request #9046: Support verisilicon's NPU with BYOC framework

areusch commented on a change in pull request #9046:
URL: https://github.com/apache/tvm/pull/9046#discussion_r713264409



##########
File path: cmake/modules/contrib/VsiNpu.cmake
##########
@@ -0,0 +1,30 @@
+if(NOT USE_VSI_NPU STREQUAL "OFF")
+
+if(NOT TIM_VX_INSTALL_DIR OR NOT EXISTS ${TIM_VX_INSTALL_DIR})
+message(FATAL_ERROR "TIM_VX_INSTALL_DIR should be set")
+endif()
+
+set(OVXLIB_API_ATTR "__attribute__\(\(visibility\(\"default\"\)\)\)")
+add_definitions(-DOVXLIB_API=${OVXLIB_API_ATTR})

Review comment:
       can you add this to a specific target? same for the below definition
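
A minimal sketch of the target-scoped form, assuming the VSI NPU sources end up in their own library target (the `tvm_vsi_npu_objs` target and the source-list variables below are hypothetical placeholders):

```cmake
# Hypothetical object library collecting the TIM-VX / VSI NPU sources;
# substitute the target that actually compiles this code.
add_library(tvm_vsi_npu_objs OBJECT ${VSI_NPU_CODEGEN_SRCS} ${VSI_NPU_RUNTIME_SRCS})

# Scope the definition to that target instead of a global add_definitions(),
# so the rest of TVM's translation units are unaffected.
target_compile_definitions(tvm_vsi_npu_objs
  PRIVATE "OVXLIB_API=__attribute__((visibility(\"default\")))")
```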

##########
File path: python/tvm/relay/op/contrib/vsi_npu_ffi_api.py
##########
@@ -0,0 +1,20 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import tvm._ffi
+
+tvm._ffi._init_api("relay.vsi_npu.support", __name__)

Review comment:
       nit: add a newline at the end of the file. here and elsewhere.

##########
File path: cmake/modules/contrib/VsiNpu.cmake
##########
@@ -0,0 +1,30 @@
+if(NOT USE_VSI_NPU STREQUAL "OFF")

Review comment:
       can you update `cmake/config.cmake` to contain the default and an example docstring? also, can you make this work for the case that USE_VSI_NPU is not set? i thiiink STREQUAL "OFF" would fail in that case (but i am not a cmake expert)
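
A sketch of what this could look like, following how other contrib options appear in `cmake/config.cmake` (the docstring wording and the guard are illustrative, not final text):

```cmake
# cmake/config.cmake: document the option and give it a default.
# Whether to build with the VeriSilicon NPU codegen/runtime (requires TIM-VX).
set(USE_VSI_NPU OFF)

# cmake/modules/contrib/VsiNpu.cmake: tolerate USE_VSI_NPU being unset;
# if(USE_VSI_NPU) treats unset/empty/OFF uniformly as false.
if(NOT DEFINED USE_VSI_NPU)
  set(USE_VSI_NPU OFF)
endif()
if(USE_VSI_NPU)
  # ... existing TIM_VX_INSTALL_DIR check and build rules ...
endif()
```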

##########
File path: src/relay/backend/contrib/vsi_npu/README.md
##########
@@ -0,0 +1,97 @@
+# VeriSilicon NPU solution on TVM
+
+This implementation enables offloading AI workloads to VeriSilicon's neural network processor.
+
+# Terms
+NBG (network binary graph)
+
+    NBG is the executable format for the NPU; it is compiled on the host server and then deployed to a target.
+
+TIM-VX: [**T**ensor **I**nterface **M**odule](https://github.com/VeriSilicon/TIM-VX)
+
+# Implementation details
+This implementation has five parts:
+1. registration of vsi-npu supported operators
+    python/tvm/relay/op/contrib/vsi_npu.py defines the supported operators and the specific patterns the NPU can support.
+2. NBG codegen at compile time
+    src/relay/backend/contrib/vsi_npu/
+3. runtime to execute the NBG
+    src/runtime/contrib/vsi_npu/
+4. test scripts
+    test/python/contrib/test_vsi_npu/
+5. CMake build script
+    cmake/modules/contrib/VsiNpu.cmake
+
+# Build from source 
+
+## Build TIM-VX from source
+
+## Build tvm as compiler
+This step can be executed on an x86 host or an Arm-based target. If you cross-build for your target,
+just add the toolchain configuration for cmake.
+
+```sh
+    mkdir host_compiler_build
+    cd host_compiler_build
+    cp ../cmake/config.cmake ./
+    # NOTE: configure LLVM by setting USE_LLVM to the llvm-config path
+    # add set(USE_VSI_NPU ON) to config.cmake; you can also pass it as a cmake command-line option
+    # To speed up the build, other backends can be disabled in this configuration file
+    cmake -DCMAKE_BUILD_TYPE=Debug -DTIM_VX_INSTALL_DIR=<full_path_to_tim_vx_install> ..
+    make tvm -j12
+```
+
+## Build tvm as runtime 
+Usually, the NBG runtime is deployed to an embedded device. Prepare a cross-compile toolchain for cmake first.
+
+```bash
+   mkdir target_runtime_build
+   cd target_runtime_build
+   cp ../cmake/config.cmake ./
+   # add set(USE_VSI_NPU ON) to config.cmake; you can also pass it as a cmake command-line option
+   cmake -DCMAKE_BUILD_TYPE=Debug -DTIM_VX_INSTALL_DIR=<full_path_to_tim_vx_target_build_install_dir> \

Review comment:
       just wondering how come you do a Debug build to deploy?
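
For comparison, a release-configured deployment build would only change the build type in the invocation quoted above (the path placeholders stay the same):

```bash
cmake -DCMAKE_BUILD_TYPE=Release \
      -DTIM_VX_INSTALL_DIR=<full_path_to_tim_vx_target_build_install_dir> \
      -DCMAKE_TOOLCHAIN_FILE=<path_to_cross_compile_toolchain.make> ..
make runtime -j12
```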

##########
File path: src/relay/backend/contrib/vsi_npu/README.md
##########
@@ -0,0 +1,97 @@
+# VeriSilicon NPU solution on TVM
+
+This implementation enables offloading AI workloads to VeriSilicon's neural network processor.
+
+# Terms
+NBG (network binary graph)
+
+    NBG is the executable format for the NPU; it is compiled on the host server and then deployed to a target.
+
+TIM-VX: [**T**ensor **I**nterface **M**odule](https://github.com/VeriSilicon/TIM-VX)
+
+# Implementation details
+This implementation has five parts:
+1. registration of vsi-npu supported operators
+    python/tvm/relay/op/contrib/vsi_npu.py defines the supported operators and the specific patterns the NPU can support.
+2. NBG codegen at compile time
+    src/relay/backend/contrib/vsi_npu/
+3. runtime to execute the NBG
+    src/runtime/contrib/vsi_npu/
+4. test scripts
+    test/python/contrib/test_vsi_npu/
+5. CMake build script
+    cmake/modules/contrib/VsiNpu.cmake
+
+# Build from source 
+
+## Build TIM-VX from source
+
+## Build tvm as compiler
+This step can be executed on an x86 host or an Arm-based target. If you cross-build for your target,
+just add the toolchain configuration for cmake.
+
+```sh
+    mkdir host_compiler_build
+    cd host_compiler_build
+    cp ../cmake/config.cmake ./
+    # NOTE: configure LLVM by setting USE_LLVM to the llvm-config path
+    # add set(USE_VSI_NPU ON) to config.cmake; you can also pass it as a cmake command-line option
+    # To speed up the build, other backends can be disabled in this configuration file
+    cmake -DCMAKE_BUILD_TYPE=Debug -DTIM_VX_INSTALL_DIR=<full_path_to_tim_vx_install> ..
+    make tvm -j12
+```
+
+## Build tvm as runtime 
+Usually, the NBG runtime is deployed to an embedded device. Prepare a cross-compile toolchain for cmake first.
+
+```bash
+   mkdir target_runtime_build
+   cd target_runtime_build
+   cp ../cmake/config.cmake ./
+   # add set(USE_VSI_NPU ON) to config.cmake; you can also pass it as a cmake command-line option
+   cmake -DCMAKE_BUILD_TYPE=Debug -DTIM_VX_INSTALL_DIR=<full_path_to_tim_vx_target_build_install_dir> \
+         -DCMAKE_TOOLCHAIN_FILE=<path_to_cross_compile_toolchain.make> ..
+   make runtime -j12
+```
+
+# Run the test
+
+## Option: prepare test models
+{todo: model and download link, tensorflow hosted models}

Review comment:
       if you need to contribute test models, you could open a PR to tlc-pack/web-data

##########
File path: src/relay/backend/contrib/vsi_npu/README.md
##########
@@ -0,0 +1,97 @@
+# VeriSilicon NPU solution on TVM

Review comment:
       would you like to contribute some of this content as a tutorial in tutorials/ somewhere?

##########
File path: src/relay/backend/contrib/vsi_npu/codegen.cc
##########
@@ -0,0 +1,425 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+#include "codegen_vsi_npu.h"
+
+#include "../../../../runtime/contrib/vsi_npu/vsi_npu_runtime.h"
+#include "../../utils.h"
+#include "../codegen_c/codegen_c.h"
+#include "op_map/op_setup.h"
+
+#include <tvm/relay/attrs/image.h>
+#include <tvm/relay/attrs/nn.h>
+#include <tvm/relay/attrs/reduce.h>
+#include <tvm/relay/expr_functor.h>
+#include <tvm/relay/transform.h>
+#include <tvm/relay/type.h>
+#include <tvm/runtime/module.h>
+#include <tvm/runtime/registry.h>
+
+#include <fstream>
+#include <iostream>
+#include <numeric>
+#include <cassert>
+#include <sstream>
+
+#include "tim/transform/layout_inference.h"
+
+namespace tvx = tim::vx;
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace vsi_npu {
+
+using TensorInfoTable = std::map<Expr, std::vector<tim::vx::TensorSpec>>;
+
+void quant_info_infer(VxOpTable& op_tb, Expr now_expr, bool is_input) {
+  auto now_opsetup = op_tb[now_expr];
+  Expr pre_expr;
+  if ((now_opsetup->pCallbackexpr_ == nullptr ||
+      now_opsetup->pCallbackexpr_->ptr_pre_callback_ == nullptr) && is_input
+      ) {
+    return;
+  } else if((now_opsetup->pCallbackexpr_ == nullptr ||
+      now_opsetup->pCallbackexpr_->ptr_pre_callback_ == nullptr
+      || op_tb[now_expr]->specs_[0].quantization_.ZeroPoints().size() == 0)&& !is_input ){
+      return;
+  } else {
+    pre_expr = now_opsetup->pCallbackexpr_->ptr_pre_callback_->expr_;
+  }
+
+  auto pre_opsetup = op_tb[pre_expr];
+  auto ptr_callback = pre_opsetup->pCallbackexpr_;
+
+  if (now_opsetup->specs_[0].datatype_ == tvx::DataType::FLOAT32 ||
+      pre_opsetup->specs_[0].datatype_ == tvx::DataType::FLOAT32 ||
+      now_opsetup->specs_[0].datatype_ == tvx::DataType::BOOL8 ||
+      pre_opsetup->specs_[0].datatype_ == tvx::DataType::BOOL8) {
+    return;
+  }
+
+  tvx::Quantization& now_quant_info = now_opsetup->specs_[0].quantization_;
+
+  std::vector<int32_t> zps;
+  std::vector<float> scales;
+  if (now_quant_info.Type() == tvx::QuantType::NONE) {
+    zps = {0};
+    scales = {1.0};
+    now_quant_info.SetType(tvx::QuantType::ASYMMETRIC).SetScales({1.0}).SetZeroPoints({0});
+  } else {
+    zps = now_quant_info.ZeroPoints();
+    scales = now_quant_info.Scales();
+  }
+
+  while (ptr_callback &&
+         op_tb[ptr_callback->expr_]->specs_[0].quantization_.ZeroPoints().size() == 0) {
+    Expr expr = ptr_callback->expr_;
+    auto datatype = GetTvxType(expr->checked_type().as<TensorTypeNode>()->dtype);
+    if (datatype != tim::vx::DataType::INT32) {
+      op_tb[expr]
+          ->specs_[0]
+          .quantization_.SetType(tvx::QuantType::ASYMMETRIC)
+          .SetScales(scales)
+          .SetZeroPoints(zps);
+    }
+    ptr_callback = ptr_callback->ptr_pre_callback_;
+  }
+}
+
+template <typename T, typename T2>
+void attribute_transform(const T &attrs, T2 &attrs_num) {
+
+  std::transform(attrs.begin(), attrs.end(), attrs_num.begin(),
+                 [](const PrimExpr &attrs_num) {
+                   return static_cast<uint32_t>(
+                       attrs_num.as<IntImmNode>()->value);
+                 });
+};
+
+std::shared_ptr<tvx::Tensor> createVxOPerand(TensorInfoTable tensor_info,

Review comment:
       can you follow the C++ style guide here https://google.github.io/styleguide/cppguide.html#Function_Names? e.g.`CreateVxOperand`

##########
File path: tests/python/contrib/test_vsi_npu/test_operations.py
##########
@@ -0,0 +1,1673 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#from python.tvm.relay.qnn.op.qnn import dequantize
+import tvm
+from tvm import relay
+from tvm.relay import testing
+import numpy as np
+from infrastructure import verify_vsi_result
+
+def _single_operation_test(relay_nn_func, dtype, data_shape, out_shape,
+        *args, **kargs):
+    op_name = relay_nn_func.__name__.upper()
+    print("Testing {0: <50}".format(op_name), end="")
+    data = relay.var("data", shape=data_shape, dtype=dtype)
+
+    out = relay_nn_func(data, *args, **kargs)
+
+    args = relay.analysis.free_vars(out)
+    net = relay.Function(args, out)
+
+    mod, params = relay.testing.init.create_workload(net)
+
+    verify_vsi_result(mod, params, data_shape, out_shape, dtype)
+
+def test_global_avg_pool2d():
+    func = relay.nn.global_avg_pool2d
+
+    dtype = "float32"
+    data_shape = (1, 20, 12, 9)
+    out_shape = (1, 20, 1, 1)
+    _single_operation_test(func, dtype, data_shape, out_shape)
+
+def test_global_max_pool2d():
+    func = relay.nn.global_max_pool2d
+
+    dtype = "float32"
+    data_shape = (1, 20, 12, 9)
+    out_shape = (1, 20, 1, 1)
+    _single_operation_test(func, dtype, data_shape, out_shape)
+
+def test_avg_pool2d():
+    func = relay.nn.avg_pool2d
+
+    dtype = "float32"
+    data_shape = (1, 1, 20, 20)
+    out_shape = (1, 1, 10, 10)
+    _single_operation_test(func, dtype, data_shape, out_shape, pool_size=(3, 3),
+            strides=(2, 2), padding=(1,1,1,1))
+
+def test_max_pool2d():
+    func = relay.nn.max_pool2d
+
+    dtype = "float32"
+    data_shape = (1, 1, 20, 20)
+    out_shape = (1, 1, 10, 10)
+    _single_operation_test(func, dtype, data_shape, out_shape, pool_size=(2, 2),
+            strides=(2, 2), padding=(0,0,0,0))
+
+def test_softmax():
+    func = relay.nn.softmax
+
+    dtype = "float32"
+    data_shape = (1, 20, 12, 9)
+    out_shape = data_shape
+    _single_operation_test(func, dtype, data_shape, out_shape)
+
+def test_relu():
+    func = relay.nn.relu
+
+    dtype = "float32"
+    data_shape = (1, 20, 12, 9)
+    out_shape = data_shape
+    _single_operation_test(func, dtype, data_shape, out_shape)
+
+def test_add():
+    dtype = "float32"
+    data_shape = (1, 20, 12, 9)
+    out_shape = data_shape
+
+    def get_workload(data_shape, dtype="float32"):
+        '''customized keywords(like data0,data1...) are not supported in \
+        relay.testing.init.create_workload
+        '''
+        data0 = relay.var("data", shape=data_shape, dtype=dtype)
+        data1 = relay.var("weight", shape=data_shape, dtype=dtype)
+
+        out = relay.add(data0, data1)
+
+        args = relay.analysis.free_vars(out)
+        net = relay.Function(args, out)
+
+        return relay.testing.init.create_workload(net)
+
+    print("Testing {0: <50}".format("ADD"), end="")

Review comment:
       suggest to use logging package for compat with pytest
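
A minimal sketch of the logging-based variant of the helper above (the logger name and log level are arbitrary choices here):

```python
import logging

logger = logging.getLogger(__name__)

def _single_operation_test(relay_nn_func, dtype, data_shape, out_shape, *args, **kwargs):
    op_name = relay_nn_func.__name__.upper()
    # pytest captures log records per test and shows them on failure;
    # live output can be enabled with `pytest --log-cli-level=INFO`.
    logger.info("Testing %s", op_name)
    # ... rest of the helper unchanged ...
```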

##########
File path: src/relay/backend/contrib/vsi_npu/codegen.cc
##########
@@ -0,0 +1,425 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+#include "codegen_vsi_npu.h"
+
+#include "../../../../runtime/contrib/vsi_npu/vsi_npu_runtime.h"
+#include "../../utils.h"
+#include "../codegen_c/codegen_c.h"
+#include "op_map/op_setup.h"
+
+#include <tvm/relay/attrs/image.h>
+#include <tvm/relay/attrs/nn.h>
+#include <tvm/relay/attrs/reduce.h>
+#include <tvm/relay/expr_functor.h>
+#include <tvm/relay/transform.h>
+#include <tvm/relay/type.h>
+#include <tvm/runtime/module.h>
+#include <tvm/runtime/registry.h>
+
+#include <fstream>
+#include <iostream>
+#include <numeric>
+#include <cassert>
+#include <sstream>
+
+#include "tim/transform/layout_inference.h"
+
+namespace tvx = tim::vx;
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace vsi_npu {
+
+using TensorInfoTable = std::map<Expr, std::vector<tim::vx::TensorSpec>>;
+
+void quant_info_infer(VxOpTable& op_tb, Expr now_expr, bool is_input) {
+  auto now_opsetup = op_tb[now_expr];
+  Expr pre_expr;
+  if ((now_opsetup->pCallbackexpr_ == nullptr ||
+      now_opsetup->pCallbackexpr_->ptr_pre_callback_ == nullptr) && is_input
+      ) {
+    return;
+  } else if((now_opsetup->pCallbackexpr_ == nullptr ||
+      now_opsetup->pCallbackexpr_->ptr_pre_callback_ == nullptr
+      || op_tb[now_expr]->specs_[0].quantization_.ZeroPoints().size() == 0)&& !is_input ){
+      return;
+  } else {
+    pre_expr = now_opsetup->pCallbackexpr_->ptr_pre_callback_->expr_;
+  }
+
+  auto pre_opsetup = op_tb[pre_expr];
+  auto ptr_callback = pre_opsetup->pCallbackexpr_;
+
+  if (now_opsetup->specs_[0].datatype_ == tvx::DataType::FLOAT32 ||
+      pre_opsetup->specs_[0].datatype_ == tvx::DataType::FLOAT32 ||
+      now_opsetup->specs_[0].datatype_ == tvx::DataType::BOOL8 ||
+      pre_opsetup->specs_[0].datatype_ == tvx::DataType::BOOL8) {
+    return;
+  }
+
+  tvx::Quantization& now_quant_info = now_opsetup->specs_[0].quantization_;
+
+  std::vector<int32_t> zps;
+  std::vector<float> scales;
+  if (now_quant_info.Type() == tvx::QuantType::NONE) {
+    zps = {0};
+    scales = {1.0};
+    now_quant_info.SetType(tvx::QuantType::ASYMMETRIC).SetScales({1.0}).SetZeroPoints({0});
+  } else {
+    zps = now_quant_info.ZeroPoints();
+    scales = now_quant_info.Scales();
+  }
+
+  while (ptr_callback &&
+         op_tb[ptr_callback->expr_]->specs_[0].quantization_.ZeroPoints().size() == 0) {
+    Expr expr = ptr_callback->expr_;
+    auto datatype = GetTvxType(expr->checked_type().as<TensorTypeNode>()->dtype);
+    if (datatype != tim::vx::DataType::INT32) {
+      op_tb[expr]
+          ->specs_[0]
+          .quantization_.SetType(tvx::QuantType::ASYMMETRIC)
+          .SetScales(scales)
+          .SetZeroPoints(zps);
+    }
+    ptr_callback = ptr_callback->ptr_pre_callback_;
+  }
+}
+
+template <typename T, typename T2>
+void attribute_transform(const T &attrs, T2 &attrs_num) {
+
+  std::transform(attrs.begin(), attrs.end(), attrs_num.begin(),
+                 [](const PrimExpr &attrs_num) {
+                   return static_cast<uint32_t>(
+                       attrs_num.as<IntImmNode>()->value);
+                 });
+};
+
+std::shared_ptr<tvx::Tensor> createVxOPerand(TensorInfoTable tensor_info,
+                                             Expr expr, tvx::Graph *graph,
+                                             uint32_t idx = 0) {
+  auto tensor_spec = tensor_info[expr][idx];
+  void *data = expr->IsInstance<ConstantNode>()
+                   ? expr.as<ConstantNode>()->data->data
+                   : nullptr;
+  return data == nullptr ? graph->CreateTensor(tensor_spec)
+                         : graph->CreateTensor(tensor_spec, data);
+};
+
+static std::vector<tim::vx::TensorSpec>
+GetTimVxTensorSpec(const TupleTypeNode *tuple) {
+  auto input_node_tensors = tuple->fields;
+
+  std::vector<tim::vx::TensorSpec> specs;
+  uint32_t input_node_num = input_node_tensors.size();
+  for (uint32_t i = 0; i < input_node_num; i++) {
+    std::cout << "GetTimVxTensorSpec: " << input_node_tensors[i].as<TensorTypeNode>() << std::endl;
+    tim::vx::ShapeType shape;
+    std::transform(input_node_tensors[i].as<TensorTypeNode>()->shape.rbegin(),
+                   input_node_tensors[i].as<TensorTypeNode>()->shape.rend(),
+                   std::back_inserter(shape), [](const PrimExpr &dim) {
+                     return static_cast<int>(dim.as<IntImmNode>()->value);
+                   });
+
+    auto dtype = input_node_tensors[i].as<TensorTypeNode>()->dtype;
+    auto dataType = GetTvxType(dtype);
+
+    tim::vx::TensorSpec spec(dataType, shape,
+                             tim::vx::TensorAttribute::OUTPUT);
+    specs.push_back(spec);
+  }
+  return specs;
+}
+
+using namespace backend;

Review comment:
       prefer to avoid this https://google.github.io/styleguide/cppguide.html#Namespaces
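
A minimal alternative, per the linked guidance: either qualify the names at each call site, or add using-declarations for only the specific names this file uses inside the `vsi_npu` namespace. `GetExtSymbol` is shown here just as an example of such a name:

```cpp
// Instead of `using namespace backend;`, name the symbols explicitly:
using backend::GetExtSymbol;        // then call GetExtSymbol(func)
// ...or fully qualify at each call site: backend::GetExtSymbol(func)
```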

##########
File path: tests/python/contrib/test_vsi_npu/test_operations.py
##########
@@ -0,0 +1,1673 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#from python.tvm.relay.qnn.op.qnn import dequantize
+import tvm
+from tvm import relay
+from tvm.relay import testing
+import numpy as np
+from infrastructure import verify_vsi_result
+
+def _single_operation_test(relay_nn_func, dtype, data_shape, out_shape,
+        *args, **kargs):
+    op_name = relay_nn_func.__name__.upper()
+    print("Testing {0: <50}".format(op_name), end="")
+    data = relay.var("data", shape=data_shape, dtype=dtype)
+
+    out = relay_nn_func(data, *args, **kargs)
+
+    args = relay.analysis.free_vars(out)
+    net = relay.Function(args, out)
+
+    mod, params = relay.testing.init.create_workload(net)
+
+    verify_vsi_result(mod, params, data_shape, out_shape, dtype)
+
+def test_global_avg_pool2d():
+    func = relay.nn.global_avg_pool2d
+
+    dtype = "float32"
+    data_shape = (1, 20, 12, 9)
+    out_shape = (1, 20, 1, 1)
+    _single_operation_test(func, dtype, data_shape, out_shape)
+
+def test_global_max_pool2d():
+    func = relay.nn.global_max_pool2d
+
+    dtype = "float32"
+    data_shape = (1, 20, 12, 9)
+    out_shape = (1, 20, 1, 1)
+    _single_operation_test(func, dtype, data_shape, out_shape)
+
+def test_avg_pool2d():
+    func = relay.nn.avg_pool2d
+
+    dtype = "float32"
+    data_shape = (1, 1, 20, 20)
+    out_shape = (1, 1, 10, 10)
+    _single_operation_test(func, dtype, data_shape, out_shape, pool_size=(3, 3),
+            strides=(2, 2), padding=(1,1,1,1))
+
+def test_max_pool2d():
+    func = relay.nn.max_pool2d
+
+    dtype = "float32"
+    data_shape = (1, 1, 20, 20)
+    out_shape = (1, 1, 10, 10)
+    _single_operation_test(func, dtype, data_shape, out_shape, pool_size=(2, 2),
+            strides=(2, 2), padding=(0,0,0,0))
+
+def test_softmax():
+    func = relay.nn.softmax
+
+    dtype = "float32"
+    data_shape = (1, 20, 12, 9)
+    out_shape = data_shape
+    _single_operation_test(func, dtype, data_shape, out_shape)
+
+def test_relu():
+    func = relay.nn.relu
+
+    dtype = "float32"
+    data_shape = (1, 20, 12, 9)
+    out_shape = data_shape
+    _single_operation_test(func, dtype, data_shape, out_shape)
+
+def test_add():
+    dtype = "float32"
+    data_shape = (1, 20, 12, 9)
+    out_shape = data_shape
+
+    def get_workload(data_shape, dtype="float32"):
+        '''customized keywords(like data0,data1...) are not supported in \
+        relay.testing.init.create_workload
+        '''
+        data0 = relay.var("data", shape=data_shape, dtype=dtype)
+        data1 = relay.var("weight", shape=data_shape, dtype=dtype)
+
+        out = relay.add(data0, data1)
+
+        args = relay.analysis.free_vars(out)
+        net = relay.Function(args, out)
+
+        return relay.testing.init.create_workload(net)
+
+    print("Testing {0: <50}".format("ADD"), end="")
+    mod, params = get_workload(data_shape, dtype)
+    verify_vsi_result(mod, params, data_shape, out_shape, dtype)
+
+def test_batch_flatten():
+    func = relay.nn.batch_flatten
+
+    dtype = "float32"
+    data_shape = (1, 5, 10, 10)
+    out_shape = (1, 500)
+    _single_operation_test(func, dtype, data_shape, out_shape)
+
+def test_batch_norm():
+    data_shape = (1, 4)
+    c_shape = (4,)
+    out_shape = (1, 4)
+    dtype="float32"
+
+    def get_workload(data_shape, weight_shape, dtype="float32"):
+        data = relay.var("data", shape=data_shape, dtype=dtype)
+
+        w = tvm.nd.array(np.ones(weight_shape, dtype))
+        gamma = relay.const(w, dtype)
+        beta = relay.const(w, dtype)
+        moving_mean = relay.const(w, dtype)
+        moving_var = relay.const(w, dtype)
+
+        bn = relay.nn.batch_norm(data, gamma, beta, moving_mean, moving_var)
+        out = bn[0]
+
+        args = relay.analysis.free_vars(out)
+        net = relay.Function(args, out)
+
+        return relay.testing.init.create_workload(net)
+
+    print("Testing {0: <50}".format("BATCH_NORM"), end="")
+    mod, params = get_workload(data_shape, c_shape, dtype)
+    verify_vsi_result(mod, params, data_shape, out_shape, dtype)
+
+def test_conv2d():
+    data_shape = (1, 256, 64, 64)
+    weight_shape = (256, 256, 3, 3)
+    out_shape = (1, 256, 64, 64)
+    dtype="float32"
+    Pad=(1,1,1,1)
+    Strides=(1,1)
+    Dilation=(1,1)
+    Ksize=(3,3)
+    Groups=1
+
+    def get_workload(data_shape, weight_shape, dtype="float32"):
+        """Function to construct a MobileNet"""
+        data = relay.var("data", shape=data_shape, dtype=dtype)
+        weight = relay.var("conv_weight")
+        conv = relay.nn.conv2d(
+            data,
+            weight,
+            channels=weight_shape[0],
+            kernel_size=Ksize,
+            strides=Strides,
+            padding=Pad,
+            groups=Groups,
+            data_layout="NCHW",
+            kernel_layout="OIHW"
+        )
+        args = relay.analysis.free_vars(conv)
+        net = relay.Function(args, conv)
+        return relay.testing.init.create_workload(net)
+
+    print("Testing {0: <50}".format("CONV2D"), end="")
+    mod, params = get_workload(data_shape, weight_shape, dtype)
+    verify_vsi_result(mod, params, data_shape, out_shape, dtype)
+
+
+def test_dense():
+    data_shape = (1, 784)
+    weight_shape = (128, 784)
+    out_shape = (1, 128)
+    dtype="float32"
+
+    def get_workload(data_shape, weight_shape, dtype="float32"):
+        data = relay.var("data", shape=data_shape, dtype=dtype)
+        fc1 = relay.nn.dense(data, relay.var("fc1_weight"), units=weight_shape[0])
+        fc = relay.nn.bias_add(fc1, relay.var("fc1_bias"), axis=1)
+        args = relay.analysis.free_vars(fc)
+        net = relay.Function(args, fc)
+
+        return relay.testing.init.create_workload(net)
+
+    print("Testing {0: <50}".format("DENSE_BIAS_ADD"), end="")
+    mod, params = get_workload(data_shape, weight_shape, dtype)
+    verify_vsi_result(mod, params, data_shape, out_shape, dtype)
+
+def test_concatenate():
+    dtype = "float32"
+    data_shape = (1, 20, 12)
+    out_shape = (1, 20, 24)
+
+    def get_workload(data_shape, dtype="float32"):
+        '''customized keywords(like data0,data1...) are not supported in \
+        relay.testing.init.create_workload
+        '''
+        data0 = relay.var("data", shape=data_shape, dtype=dtype)
+        data1 = relay.var("bias", shape=data_shape, dtype=dtype)
+
+        out = relay.concatenate([data0, data1], axis=-1)
+        args = relay.analysis.free_vars(out)
+        net = relay.Function(args, out)
+
+        return relay.testing.init.create_workload(net)
+
+    print("Testing {0: <50}".format("CONCATENATE"), end="")
+    mod, params = get_workload(data_shape, dtype)
+    verify_vsi_result(mod, params, data_shape, out_shape, dtype)
+
+def test_image_resize():
+    data_dtype = 'float32'
+    in_size = (33, 33)
+    out_size = (257, 257)
+    channel = (21,)
+
+    def run_test(layout, method, mode):
+        if (layout == "NHWC"):
+            data_shape = (1,) + in_size + channel
+            out_shape = (1,) + out_size + channel
+        else:
+            data_shape = (1,) + channel + in_size
+            out_shape = (1,) + channel + out_size
+        x = relay.var("data", shape=data_shape, dtype=data_dtype)
+        out = relay.image.resize(x, out_size, layout, method, mode)
+        args = relay.analysis.free_vars(out)
+        net = relay.Function(args, out)
+
+        mod, params = relay.testing.init.create_workload(net)
+        verify_vsi_result(mod, params, data_shape, out_shape, data_dtype)
+    print("Testing {0: <50}".format("RESIZE_1"), end="")
+    run_test("NHWC", "nearest_neighbor", "asymmetric")
+    print("Testing {0: <50}".format("RESIZE_2"), end="")
+    run_test("NHWC", "bilinear", "asymmetric")
+    print("Testing {0: <50}".format("RESIZE_3"), end="")
+    run_test("NHWC", "bilinear", "half_pixel")
+    print("Testing {0: <50}".format("RESIZE_4"), end="")
+    run_test("NHWC", "bilinear", "align_corners")
+    print("Testing {0: <50}".format("RESIZE_5"), end="")
+    run_test("NCHW", "nearest_neighbor", "asymmetric")
+    print("Testing {0: <50}".format("RESIZE_6"), end="")
+    run_test("NCHW", "bilinear", "asymmetric")
+    print("Testing {0: <50}".format("RESIZE_7"), end="")
+    run_test("NCHW", "bilinear", "half_pixel")
+    print("Testing {0: <50}".format("RESIZE_8"), end="")
+    run_test("NCHW", "bilinear", "align_corners")
+
+def test_dropout():
+    func = relay.nn.dropout
+
+    dtype = "float32"
+    data_shape = (1, 20, 12, 9)
+    out_shape = data_shape
+    _single_operation_test(func, dtype, data_shape, out_shape, rate=0.5)
+
+def test_qnn_add():
+    data_dtype = "uint8"
+    data_shape = (1, 96, 96, 64)
+    data2_shape = (64,)
+    out_shape = (1, 96, 96, 64)
+
+    x = relay.var("x", shape=data_shape, dtype=data_dtype)
+    y = relay.var("y", shape=data2_shape, dtype=data_dtype)
+    out = relay.qnn.op.add(
+        lhs=x,
+        rhs=y,
+        lhs_scale=relay.const(0.020283, "float32"),
+        lhs_zero_point=relay.const(112, "int32"),
+        rhs_scale=relay.const(0.000316, "float32"),
+        rhs_zero_point=relay.const(119, "int32"),
+        output_scale=relay.const(0.020144, "float32"),
+        output_zero_point=relay.const(112, "int32"),
+    )
+
+    print("Testing {0: <50}".format("QNN.ADD"), end="")
+    inputs = {
+        "x": tvm.nd.array(np.random.randint(1, high=101, size=data_shape, dtype="uint8")),
+    }
+    params = {
+        "y": tvm.nd.array(np.random.randint(1, high=101, size=data_shape, dtype="uint8")),
+    }
+    verify_vsi_result(inputs, out,params, data_shape, out_shape, data_dtype)
+
+def test_float_add():
+    dtype = "float32"
+    data_0_shape = (1,7 ,7 , 768)
+    data_1_shape = (1, 1, 1, 768)
+    out_shape = data_0_shape
+    data0 = relay.var("a", shape=data_0_shape, dtype=dtype)
+    data1 = relay.var("b", shape=data_1_shape, dtype=dtype)
+
+    out = relay.op.add(lhs=data0, rhs=data1)
+
+    print("Testing {0: <50}".format("ADD"), end="")
+    inputs = {
+        "a": tvm.nd.array(np.random.uniform(size=data_0_shape).astype(dtype)),
+        #"b": tvm.nd.array(np.random.uniform(size=data_1_shape).astype(dtype)),
+    }
+    params = {
+        #"weight": tvm.nd.array(np.ones(weight_shape,dtype)),
+        "b": tvm.nd.array(np.random.uniform(size=data_1_shape).astype(dtype)),
+    }
+    verify_vsi_result(inputs, out, params, data_0_shape, out_shape, dtype)
+
+def test_float_relu():
+    dtype ="float32"
+    data_shape = (2, 2, 2, 2)
+    out_shape = data_shape
+
+    data = relay.var("data", shape=data_shape, dtype=dtype)
+    out = relay.op.nn.relu(data)
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(-1.0, 1.0, size=data_shape).astype(dtype)),
+    }
+    print("Testing {0: <50}".format("RELU"), end="")
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, dtype)
+
+def test_uint8_relu():
+    input_dtype = "float32"
+    output_dtype = "uint8"
+    temp_dtype = "float32"
+    data_shape = (1,100)
+    data = relay.var("data", shape=data_shape, dtype=input_dtype)
+
+    scale = 0.15294
+    zero_point = 128
+    quantize = lambda x: float(int(round(x / scale)) + zero_point)
+    qmax = float(tvm.tir.op.max_value("uint8").value)
+
+    quant = relay.qnn.op.quantize(data,
+                            output_scale=relay.const(0.15294, "float32"),
+                            output_zero_point=relay.const(128, "int32"),
+                            axis = -1,
+                            out_dtype=output_dtype
+                            )
+    op = relay.clip(quant, quantize(0), qmax)
+
+    requantize_params = {
+            "input_scale": relay.const(0.15294, "float32"),
+            "input_zero_point": relay.const(128, "int32"),
+            "output_scale": relay.const(0.15294, "float32"),
+            "output_zero_point": relay.const(128, "int32"),
+            "out_dtype":output_dtype,
+        }
+
+    requantize = relay.qnn.op.requantize(op,**requantize_params)
+
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(-4, 4, size=data_shape).astype(input_dtype)),
+    }
+    print("Testing {0: <50}".format("RELU"), end="")
+    verify_vsi_result(inputs, requantize, [], data_shape, data_shape, output_dtype)
+
+def test_float_leaky_relu():
+    dtype ="float32"
+    data_shape = (2, 2, 2, 2)
+    out_shape = data_shape
+
+    data = relay.var("data", shape=data_shape, dtype=dtype)
+    alpha = 0.1
+
+    out = relay.op.nn.leaky_relu(data, alpha)
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(-1.0, 1.0, size=data_shape).astype(dtype)),
+    }
+    print("Testing {0: <50}".format("LEAKY RELU"), end="")
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, dtype)
+
+def test_uint8_leaky_relu():
+    input_dtype = "uint8"
+    output_dtype = input_dtype
+    temp_dtype = "float32"
+    data_shape = (1,100)
+    data = relay.var("data", shape=data_shape, dtype=input_dtype)
+    alpha = 0.1
+
+    dequantize_op = relay.qnn.op.dequantize(data,
+                            input_zero_point=relay.const(128, "int32"),
+                            input_scale=relay.const(0.15294, "float32"),
+                            axis = -1,
+                            )
+    op = relay.op.nn.leaky_relu(dequantize_op, alpha)
+
+    quantize = relay.qnn.op.quantize(op,
+                            output_scale=relay.const(0.15294, "float32"),
+                            output_zero_point=relay.const(128, "int32"),
+                            axis = -1,
+                            out_dtype=output_dtype
+                            )
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(0, 255, size=data_shape).astype(input_dtype)),
+    }
+    print("Testing {0: <50}".format("LEAKY RELU"), end="")
+    verify_vsi_result(inputs, quantize, [], data_shape, data_shape, output_dtype)
+
+
+def test_float_softmax():
+    #func = relay.nn.softmax
+
+    dtype = "float32"
+    data_shape = (1,100)
+    out_shape = data_shape
+    axis = 1
+    data = relay.var("data", shape=data_shape, dtype=dtype)
+    out = relay.op.nn.softmax(data,axis)
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(size=data_shape).astype(dtype)),
+        #"data": tvm.nd.array(np.arange(1000).reshape(data_shape).astype(dtype)),
+    }
+    print("Testing {0: <50}".format("SOFTMAX"), end="")
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, dtype)
+
+def test_float32_conv2d():
+    data_shape = (1, 2, 5, 5)
+    weight_shape = (2, 2, 3, 3)
+    out_shape = (1, 2, 3, 3)
+    dtype="float32"
+    Pad=(0,0,0,0)
+    Strides=(1,1)
+    Dilation=(1,1)
+    Ksize=(3,3)
+    Groups=1
+
+    data = relay.var("data", shape=data_shape, dtype=dtype)
+    weight = relay.var("weight")
+    out = relay.nn.conv2d(
+            data,
+            weight,
+            channels=weight_shape[0],
+            kernel_size=Ksize,
+            strides=Strides,
+            padding=Pad,
+            groups=Groups,
+            data_layout="NCHW",
+            kernel_layout="OIHW"
+        )
+    inputs = {
+        "data": tvm.nd.array(np.arange(50).reshape(data_shape).astype(dtype)),
+    }
+
+    params = {
+        "weight": tvm.nd.array(np.ones(weight_shape,dtype)),
+    }
+    print("Testing {0: <50}".format("CONV2D"), end="")
+    verify_vsi_result(inputs, out, params, data_shape, out_shape, dtype)
+
+def test_float32_conv2d_permute():
+    data_shape = (1, 4, 4, 4)
+    weight_shape = (3, 3, 4, 5)
+    out_shape = (1, 2, 2, 5)
+    dtype="float32"
+    Pad=(0,0,1,1)
+    Strides=(2,2)
+    Dilation=(1,1)
+    Ksize=(3,3)
+    Groups=1
+
+    data = relay.var("data", shape=data_shape, dtype=dtype)
+    weight = relay.var("weight",shape=weight_shape,dtype=dtype)
+    out = relay.nn.conv2d(
+            data,
+            weight,
+            channels=weight_shape[3],
+            padding=Pad,
+            kernel_size=Ksize,
+            strides=Strides,
+            groups=Groups,
+            data_layout="NHWC",
+            kernel_layout="HWIO"
+        )
+    inputs = {
+        #"data": tvm.nd.array(np.ones(data_shape).astype(dtype)),
+        #"data": tvm.nd.array(np.arange(1*4*4*4).reshape(data_shape).astype(dtype)),
+        "data": tvm.nd.array(np.random.uniform(size=data_shape).astype(dtype)),
+    }
+
+    params = {
+        #"weight": tvm.nd.array(np.arange(3*4*3*5).reshape(weight_shape).astype(dtype)),
+        "weight": tvm.nd.array(np.random.uniform(size=weight_shape).astype(dtype)),
+    }
+    print("Testing {0: <50}".format("CONV2D"), end="")
+    verify_vsi_result(inputs, out, params, data_shape, out_shape, dtype)
+
+def test_float32_depthwise_conv2d_permute():
+    data_shape = (1, 28, 28, 192)
+    weight_shape = (3, 3, 192, 1)
+    out_shape = (1, 14, 14, 192)
+    dtype="float32"
+    Pad=(0,0,1,1)
+    Strides=(2,2)
+    Dilation=(1,1)
+    Ksize=(3,3)
+    Groups=192
+
+    data = relay.var("data", shape=data_shape, dtype=dtype)
+    weight = relay.var("conv_weight",shape=weight_shape,dtype=dtype)
+    out = relay.nn.conv2d(
+            data,
+            weight,
+            channels=Groups,
+            padding=Pad,
+            kernel_size=Ksize,
+            strides=Strides,
+            groups=Groups,
+            data_layout="NHWC",
+            kernel_layout="HWOI"
+        )
+    inputs = {
+        #"data": tvm.nd.array(np.ones(data_shape,dtype)),
+        "data": tvm.nd.array(np.arange(data_shape[1]*data_shape[2]*data_shape[3]).reshape(data_shape).astype(dtype)),
+        #"weight": tvm.nd.array(np.ones(weight_shape,dtype)),
+    }
+    params = {
+        "conv_weight": tvm.nd.array(np.random.uniform(size=weight_shape).astype(dtype)),
+    }
+    print("Testing {0: <50}".format("CONV2D"), end="")
+    verify_vsi_result(inputs, out, params, data_shape, out_shape, dtype)
+
+def test_float_reshape():
+    data_dtype = "float32"
+    data_shape = (1,1,1,1000)
+    out_shape = (1,1000)
+    data = relay.var("data", shape=data_shape, dtype=data_dtype)
+    out = relay.op.reshape(data,out_shape)
+    inputs = {
+        "data": tvm.nd.array(np.ones(data_shape,data_dtype)),
+    }
+    print("Testing {0: <50}".format("RESHAPE"), end="")
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, data_dtype)
+
+def test_float_tranpose():
+    data_dtype = "float32"
+    data_shape = (1,1,192,256)
+    out_shape = (256,192,1,1)
+    data = relay.var("data", shape=data_shape, dtype=data_dtype)
+    out = relay.op.transpose(data,(3,2,1,0))
+    inputs = {
+        "data": tvm.nd.array(np.arange(data_shape[0]*data_shape[1]*data_shape[2]*data_shape[3]).reshape(data_shape).astype(data_dtype)),
+    }
+    print("Testing {0: <50}".format("TRANSPOSE"), end="")
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, data_dtype)
+
+def test_uint8_tranpose():
+    data_dtype = "uint8"
+    data_shape = (1,1,192,256)
+    out_shape = (256,192,1,1)
+    data = relay.var("data", shape=data_shape, dtype=data_dtype)
+    out = relay.op.transpose(data,(3,2,1,0))
+    inputs = {
+        "data": tvm.nd.array(np.arange(data_shape[0]*data_shape[1]*data_shape[2]*data_shape[3]).reshape(data_shape).astype(data_dtype)),
+    }
+    print("Testing {0: <50}".format("TRANSPOSE"), end="")
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, data_dtype)
+
+def test_float_relu6():
+    dtype = "float32"
+    data_shape = (1, 2, 1, 1)
+    out_shape = data_shape
+
+    data = relay.var("data", shape=data_shape, dtype=dtype)
+    out = relay.clip(data, 0, 6)
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(-1, 1, size=data_shape).astype(dtype)),
+    }
+    print("Testing {0: <50}".format("RELU6"), end="")
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, dtype)
+
+def test_uint8_relu6():
+    input_dtype = "float32"
+    output_dtype = "uint8"
+    temp_dtype = "float32"
+    data_shape = (1,100)
+    data = relay.var("data", shape=data_shape, dtype=input_dtype)
+
+    scale = 0.15294
+    zero_point = 128
+    quantize = lambda x: float(int(round(x / scale)) + zero_point)
+
+    quant = relay.qnn.op.quantize(data,
+                            output_scale=relay.const(0.15294, "float32"),
+                            output_zero_point=relay.const(128, "int32"),
+                            axis = -1,
+                            out_dtype=output_dtype
+                            )
+    op = relay.clip(quant, quantize(0.0), quantize(6.0))
+
+    requantize_params = {
+            "input_scale": relay.const(0.15294, "float32"),
+            "input_zero_point": relay.const(128, "int32"),
+            "output_scale": relay.const(0.15294, "float32"),
+            "output_zero_point": relay.const(128, "int32"),
+            "out_dtype":output_dtype,
+        }
+
+    requantize = relay.qnn.op.requantize(op,**requantize_params)
+
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(-4, 4, size=data_shape).astype(input_dtype)),
+    }
+    print("Testing {0: <50}".format("RELU"), end="")
+    verify_vsi_result(inputs, requantize, [], data_shape, data_shape, output_dtype)
+
+
+def test_sample_model():
+    conv_data_shape = (1, 224, 224, 3)
+    weight_shape = (3, 3, 3, 1)
+    reshape_data_shape = (1, 112, 112, 1)
+    softmax_data_shape = (1,12544)
+
+    #conv
+    dtype="float32"
+    Pad=(0,0,1,1)
+    Strides=(2,2)
+    Dilation=(1,1)
+    Ksize=(3,3)
+    Groups=1
+
+    data = relay.var("data", shape=conv_data_shape, dtype=dtype)
+    weight = relay.var("weight",shape=weight_shape,dtype=dtype)
+    conv = relay.nn.conv2d(
+            data,
+            weight,
+            channels=weight_shape[3],
+            padding=Pad,
+            kernel_size=Ksize,
+            strides=Strides,
+            groups=Groups,
+            data_layout="NHWC",
+            kernel_layout="HWIO"
+        )
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(size=conv_data_shape).astype(dtype)),
+    }
+
+    reshape = relay.op.reshape(conv,softmax_data_shape)
+    softmax = relay.op.nn.softmax(reshape,1)
+    params = {
+        "weight": tvm.nd.array(np.random.uniform(size=weight_shape).astype(dtype)),
+    }
+    verify_vsi_result(inputs, softmax, params, conv_data_shape, softmax_data_shape, dtype)
+
+def test_quantize():
+    input_dtype = "float32"
+    output_dtype = "uint8"
+    data_shape = (1, 2, 2, 3)
+    out_shape = data_shape
+    scale=relay.const(0.1, input_dtype),
+    zero_point=relay.const(125, "int32"),
+
+    data = relay.var("data", shape=data_shape, dtype=input_dtype)
+    out = relay.qnn.op.quantize(data,
+                            output_scale=relay.const(0.00784314, input_dtype),
+                            output_zero_point=relay.const(127, "int32"),
+                            axis = -1,
+                            out_dtype=output_dtype
+                            )
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(size=data_shape).astype(input_dtype)),
+    }
+    print("Testing {0: <50}".format("QUANTIZE"), end="")
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, output_dtype)
+
+def test_dequantize():
+    input_dtype = "uint8"
+    output_dtype = "float32"
+    data_shape = (1, 2, 2, 3)
+    out_shape = data_shape
+    scale=relay.const(0.1, input_dtype),
+    zero_point=relay.const(125, "int32"),
+
+    data = relay.var("data", shape=data_shape, dtype=input_dtype)
+    out = relay.qnn.op.dequantize(data,
+                            input_zero_point=relay.const(127, "int32"),
+                            input_scale=relay.const(0.00784314, output_dtype),
+                            axis = -1,
+                            )
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(0,10,size=data_shape).astype(input_dtype)),
+    }
+    print("Testing {0: <50}".format("QUANTIZE"), end="")
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, output_dtype)
+
+def test_float_avg_pool():
+    dtype = "float32"
+    data_shape = (1, 7, 7, 768)
+    out_shape = (1, 1, 1, 768)
+
+    data = relay.var("data", shape=data_shape, dtype=dtype)
+    out = relay.op.nn.avg_pool2d(data,pool_size=(7, 7),strides=(2, 2),layout="NHWC")
+    inputs = {
+        "data": tvm.nd.array(np.arange(7*7*768).reshape(data_shape).astype(dtype)),
+    }
+    print("Testing {0: <50}".format("AVG_POOL_2D"), end="")
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, dtype)
+
+def test_float32_pattern():
+    data_shape = (1, 4, 4, 4)
+    weight_shape = (3, 3, 4, 5)
+    add_shape = (5,)
+    out_shape = (1,2,2,5)
+    dtype="float32"
+    Pad=(0,0,1,1)
+    Strides=(2,2)
+    Dilation=(1,1)
+    Ksize=(3,3)
+    Groups=1
+
+    data = relay.var("data", shape=data_shape, dtype=dtype)
+    weight = relay.var("weight",shape=weight_shape,dtype=dtype)
+    add_data = relay.var("add", shape=add_shape, dtype=dtype)
+    conv = relay.nn.conv2d(
+            data,
+            weight,
+            channels=weight_shape[3],
+            padding=Pad,
+            kernel_size=Ksize,
+            strides=Strides,
+            groups=Groups,
+            data_layout="NHWC",
+            kernel_layout="HWIO"
+        )
+
+    add_op = relay.op.nn.bias_add(conv, add_data,3)
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(size=data_shape).astype(dtype)),
+    }
+
+    params = {
+        "weight": tvm.nd.array(np.random.uniform(size=weight_shape).astype(dtype)),
+        "add": tvm.nd.array(np.random.uniform(size=add_shape).astype(dtype)),
+    }
+    print("Testing {0: <50}".format("CONV2D"), end="")
+    verify_vsi_result(inputs, add_op, params, data_shape, out_shape, dtype)
+
+
+def test_requantize():
+    input_shape = (1, 2, 2, 5)
+    output_shape = input_shape
+    intput_dtype="uint8"
+    output_dtype="int8"
+
+    data = relay.var("data", shape=input_shape, dtype=intput_dtype)
+
+    op_params = {
+            "input_scale": relay.const(0.00784314, "float32"),
+            "input_zero_point": relay.const(127, "int32"),
+            "output_scale": relay.const(0.01784314, "float32"),
+            "output_zero_point": relay.const(127, "int32"),
+            "out_dtype":output_dtype,
+        }
+    out = relay.qnn.op.requantize(data,**op_params)
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(0,100,size=input_shape).astype(intput_dtype)),
+    }
+    print("Testing {0: <50}".format("REQUANTIZE"), end="")
+    verify_vsi_result(inputs, out, [], input_shape, output_shape, output_dtype)
+
+
+def test_uint8_conv2d_pattern():
+    data_shape = (1, 56, 56, 32)
+    weight_shape = (1, 1, 32, 64)
+    out_shape = (1, 56, 56, 64)
+    add_shape = (64,)
+    intput_dtype="int8"
+    output_dtype=intput_dtype
+    add_dtype = "int32"
+    Pad=(0,0,0,0)
+    Strides=(1,1)
+    Dilation=(1,1)
+    Ksize=(1,1)
+    Groups=1
+
+    data = relay.var("data", shape=data_shape, dtype=intput_dtype)
+    weight = relay.var("weight",shape=weight_shape,dtype=intput_dtype)
+    add = relay.var("add",shape=add_shape,dtype=add_dtype)
+
+    conv_params = {
+            "kernel_size": Ksize,
+            "strides": Strides,
+            "dilation": Dilation,
+            "padding": Pad,
+            "data_layout": "NHWC",
+            "channels":weight_shape[3],
+            "kernel_layout":"HWIO"
+        }
+    qnn_conv2d_params = dict(conv_params)
+    qnn_conv2d_params["input_zero_point"] = relay.const(0, "int32")
+    qnn_conv2d_params["kernel_zero_point"] = relay.const(77, "int32")
+    qnn_conv2d_params["out_dtype"] = "int32"
+    qnn_conv2d_params["input_scale"] = relay.const(0.023528, "float32")
+    qnn_conv2d_params["kernel_scale"] = relay.const(0.045283, "float32")
+    conv_op = relay.qnn.op.conv2d(
+            data,
+            weight,
+            **qnn_conv2d_params
+        )
+
+    add_op = relay.op.nn.bias_add(conv_op, add,3)
+
+    requantize_params = {
+            "input_scale": relay.const(0.001065418, "float32"),
+            "input_zero_point": relay.const(0, "int32"),
+            "output_scale": relay.const(0.0235285, "float32"),
+            "output_zero_point": relay.const(0, "int32"),
+            "out_dtype":output_dtype,
+        }
+
+    out = relay.qnn.op.requantize(add_op,**requantize_params)
+
+    inputs = {
+        "data": tvm.nd.array(np.ones(data_shape).astype(intput_dtype)),
+    }
+    params = {
+        "weight": tvm.nd.array(np.arange(weight_shape[0]*weight_shape[1]*weight_shape[2]*weight_shape[3]).reshape(weight_shape).astype(intput_dtype)),
+        "add": tvm.nd.array(np.arange(64).reshape(add_shape).astype(add_dtype)),
+    }
+    print("Testing {0: <50}".format("QNN pattern"), end="")
+    verify_vsi_result(inputs, out, params, data_shape, out_shape, output_dtype)
+
+def test_cast():
+    input_dtype = "uint8"
+    output_dtype = "float32"
+    input_shape = (1, 3, 3,1 )
+    output_shape = input_shape
+
+    data = relay.var("data", shape=input_shape, dtype=input_dtype)
+
+    out = relay.op.cast(data,output_dtype)
+    inputs = {
+      "data": tvm.nd.array(np.random.uniform(0,20,size=input_shape).astype(input_dtype)),
+    }
+    verify_vsi_result(inputs, out, [], input_shape, output_shape, output_dtype)
+
+def test_uint8_avg_pool():
+    input_dtype = "uint8"
+    temp_dtype = "int32"
+    input_shape = (1, 7, 7, 768)
+    output_shape = (1, 1, 1, 768)
+
+    data = relay.var("data", shape=input_shape, dtype=input_dtype)
+
+    cast_0 = relay.op.cast(data,temp_dtype)
+    out = relay.op.nn.avg_pool2d(cast_0,pool_size=(7, 7),strides=(2, 2),layout="NHWC")
+    cast_1 = relay.op.cast(out,input_dtype)
+    inputs = {
+        "data": tvm.nd.array(np.arange(7*7*768).reshape(input_shape).astype(input_dtype)),
+    }
+    print("Testing {0: <50}".format("AVG_POOL_2D"), end="")
+    verify_vsi_result(inputs, cast_1, [], input_shape, output_shape, input_dtype)
+
+def test_uint8_softmax():
+    input_dtype = "uint8"
+    output_dtype = input_dtype
+    temp_dtype = "float32"
+    data_shape = (1,100)
+    axis = 1
+    data = relay.var("data", shape=data_shape, dtype=input_dtype)
+
+    dequantize_op = relay.qnn.op.dequantize(data,
+                            input_zero_point=relay.const(76, "int32"),
+                            input_scale=relay.const(0.15294, "float32"),
+                            axis = -1,
+                            )
+    softmax_op = relay.op.nn.softmax(dequantize_op,axis)
+
+    quantize = relay.qnn.op.quantize(softmax_op,
+                            output_scale=relay.const(0.003906, "float32"),
+                            output_zero_point=relay.const(0, "int32"),
+                            axis = -1,
+                            out_dtype=output_dtype
+                            )
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(1,20,size=data_shape).astype(input_dtype)),
+    }
+    print("Testing {0: <50}".format("SOFTMAX"), end="")
+    verify_vsi_result(inputs, quantize, [], data_shape, data_shape, output_dtype)
+
+def test_uint8_reshape():
+    data_dtype = "uint8"
+    data_shape = (1,1,1,1000)
+    out_shape = (1,1000)
+    data = relay.var("data", shape=data_shape, dtype=data_dtype)
+    out = relay.op.reshape(data,out_shape)
+    inputs = {
+        "data": tvm.nd.array(np.ones(data_shape,data_dtype)),
+    }
+    print("Testing {0: <50}".format("RESHAPE"), end="")
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, data_dtype)
+
+def test_uint8_max_pool():
+    input_dtype = "uint8"
+    input_shape = (1, 112, 112, 2)
+    output_shape = (1, 56, 56, 2)
+
+    data = relay.var("data", shape=input_shape, dtype=input_dtype)
+
+    out = relay.op.nn.max_pool2d(data,pool_size=(3, 3),strides=(2, 2),padding=(0,0,1, 1),layout="NHWC")
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(1,20,size=input_shape).astype(input_dtype)),
+    }
+    print("Testing {0: <50}".format("MAX_POOL_2D"), end="")
+    verify_vsi_result(inputs, out, [], input_shape, output_shape, input_dtype)
+
+
+def test_uint8_concatenation():
+    dtype = "uint8"
+    data_0_shape = (1, 14, 14, 320)
+    data_1_shape = (1, 14, 14, 160)
+    data_2_shape = (1, 14, 14, 96)
+
+    out_shape = (1, 14, 14, 576)
+
+    data_0 = relay.var("data0", shape=data_0_shape, dtype=dtype)
+    data_1 = relay.var("data1", shape=data_1_shape, dtype=dtype)
+    data_2 = relay.var("data2", shape=data_2_shape, dtype=dtype)
+    data = [data_0,data_1,data_2]
+
+    input_scale_0 = relay.const(0.0673782, "float32")
+    input_zp_0 =  relay.const(0, "int32")
+
+    input_scale_1 = relay.const(0.0485237, "float32")
+    input_zp_1 =  relay.const(0, "int32")
+
+    input_scale_2 = relay.const(0.03775704, "float32")
+    input_zp_2 =  relay.const(0, "int32")
+
+    input_scales = (input_scale_0,input_scale_1,input_scale_2)
+    input_zps = (input_zp_0,input_zp_1,input_zp_2)
+
+    output_scale = relay.const(0.0673782, "float32")
+    output_zp =   relay.const(0, "int32")
+    out = relay.qnn.op.concatenate(data,input_scales=input_scales,input_zero_points=input_zps,
+                                    output_scale=output_scale,output_zero_point=output_zp,axis=3)
+
+    inputs = {
+         "data0": tvm.nd.array(np.random.uniform(1,50,size=data_0_shape).astype(dtype)),
+         "data1": tvm.nd.array(np.random.uniform(1,50,size=data_1_shape).astype(dtype)),
+         "data2": tvm.nd.array(np.random.uniform(1,50,size=data_2_shape).astype(dtype)),
+    }
+    print("Testing {0: <50}".format("AVG_POOL_2D"), end="")
+    verify_vsi_result(inputs, out, [], data_0_shape, out_shape, dtype)
+
+def test_float_mean():
+    input_dtype = "float32"
+    axis_dtype = "int32"
+    input_shape = (1, 20, 20, 5)
+    output_shape = (1, 1,1,5)
+
+    data = relay.var("data", shape=input_shape, dtype=input_dtype)
+    axis = tuple(np.array([1,2],dtype=axis_dtype))
+    out = relay.op.reduce.mean(data,axis,True)
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(1,20,size=input_shape).astype(input_dtype)),
+    }
+    print("Testing {0: <50}".format("MEAN"), end="")
+    verify_vsi_result(inputs, out, [], input_shape, output_shape, input_dtype)
+
+def test_uint8_mean():
+    input_dtype = "uint8"
+    temp_dtype = "int32"
+    axis_dtype = "int32"
+    output_dtype = input_dtype
+    input_shape = (1, 20, 20, 5)
+    axis_shape = (2,)
+    output_shape = (1, 1,1,5)
+
+    data = relay.var("data", shape=input_shape, dtype=input_dtype)
+    axis = tuple(np.array([1,2],dtype=axis_dtype))
+
+    cast_0 = relay.op.cast(data,temp_dtype)
+    mean = relay.op.reduce.mean(data,axis,True)
+    requantize_params = {
+            "input_scale": relay.const(0.001065418, "float32"),
+            "input_zero_point": relay.const(0, "int32"),
+            "output_scale": relay.const(0.0235285, "float32"),
+            "output_zero_point": relay.const(0, "int32"),
+            "out_dtype":output_dtype,
+        }
+
+    out = relay.qnn.op.requantize(mean,**requantize_params)
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(1,20,size=input_shape).astype(input_dtype)),
+    }
+    print("Testing {0: <50}".format("MEAN"), end="")
+    verify_vsi_result(inputs, out, [], input_shape, output_shape, input_dtype)
+
+def test_uint8_resizeBilinear():
+    input_dtype = "uint8"
+    size_dtype = "int32"
+    output_dtype = input_dtype
+    input_shape = (1, 1, 1, 256)
+    output_shape = (1, 33, 33, 256)
+
+    data = relay.var("data", shape=input_shape, dtype=input_dtype)
+    target_size = tuple(np.array([33,33],dtype=size_dtype))
+    method = "bilinear"
+    coord_trans = "align_corners"
+    out = relay.image.resize(
+            data, target_size, "NHWC", method, coordinate_transformation_mode=coord_trans
+        )
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(1,20,size=input_shape).astype(input_dtype)),
+    }
+    verify_vsi_result(inputs, out, [], input_shape, output_shape, input_dtype)
+
+def test_uint8_argmax():
+    input_dtype = "uint8"
+    output_dtype = "int32"
+    input_shape = (1, 513, 513, 21)
+    output_shape = (1, 513, 513)
+
+    data = relay.var("data", shape=input_shape, dtype=input_dtype)
+
+    out = relay.op.argmax(data, 3)
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(1,100,size=input_shape).astype(input_dtype)),
+    }
+    verify_vsi_result(inputs, out, [], input_shape, output_shape, output_dtype)
+
+def test_uint8_argmin():
+    input_dtype = "uint8"
+    output_dtype = "int32"
+    input_shape = (1, 513, 513, 21)
+    output_shape = (1, 513, 513)
+
+    data = relay.var("data", shape=input_shape, dtype=input_dtype)
+
+    out = relay.op.argmin(data, 3)
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(1,100,size=input_shape).astype(input_dtype)),
+    }
+    verify_vsi_result(inputs, out, [], input_shape, output_shape, output_dtype)
+
+def test_float_sigmoid():
+    dtype = "float32"
+    data_shape = (1,100)
+    out_shape = data_shape
+    data = relay.var("data", shape=data_shape, dtype=dtype)
+    out = relay.op.sigmoid(data)
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(size=data_shape).astype(dtype)),
+    }
+    print("Testing {0: <50}".format("SIGMOID"), end="")
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, dtype)
+
+def test_uint8_sigmoid():
+    input_dtype = "uint8"
+    output_dtype = input_dtype
+    temp_dtype = "float32"
+    data_shape = (1,100)
+    data = relay.var("data", shape=data_shape, dtype=input_dtype)
+
+    dequantize_op = relay.qnn.op.dequantize(data,
+                            input_zero_point=relay.const(0, "int32"),
+                            input_scale=relay.const(0.15294, "float32"),
+                            axis = -1,
+                            )
+    sigmoid_op = relay.op.sigmoid(dequantize_op)
+
+    quantize = relay.qnn.op.quantize(sigmoid_op,
+                            output_scale=relay.const(0.15294, "float32"),
+                            output_zero_point=relay.const(0, "int32"),
+                            axis = -1,
+                            out_dtype=output_dtype
+                            )
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(1,20,size=data_shape).astype(input_dtype)),
+    }
+    print("Testing {0: <50}".format("SIGMOID"), end="")
+    verify_vsi_result(inputs, quantize, [], data_shape, data_shape, output_dtype)
+
+def test_float_batch_norm():
+    data_shape = (1, 4)
+    c_shape = (4,)
+    out_shape = (1, 4)
+
+    dtype = "float32"
+    w = tvm.nd.array(np.ones(c_shape, dtype))
+    gamma = relay.const(w, dtype)
+    beta = relay.const(w, dtype)
+    moving_mean = relay.const(w, dtype)
+    moving_var = relay.const(w, dtype)
+
+    epsilon = 1e-4
+
+    data = relay.var("data", shape=data_shape, dtype=dtype)
+
+    batch_norm = relay.nn.batch_norm(data, gamma, beta, moving_mean, moving_var,
+        epsilon=epsilon)
+    out = batch_norm[0]
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(size=data_shape).astype(dtype)),
+    }
+
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, dtype)
+
+def test_uint8_avg_pool2():
+    input_dtype = "uint8"
+    temp_dtype = "int32"
+    input_shape = (1, 4, 4, 1)
+    output_shape = (1, 4, 4, 1)
+
+    data = relay.var("data", shape=input_shape, dtype=input_dtype)
+
+    cast_0 = relay.op.cast(data,temp_dtype)
+    out = relay.op.nn.avg_pool2d(cast_0,pool_size=(3, 3),strides=(1, 1),padding=(1,1,1,1),layout="NHWC")
+    cast_1 = relay.op.cast(out,input_dtype)
+    inputs = {
+        #"data": tvm.nd.array(np.ones(input_shape,input_dtype)),
+        "data": tvm.nd.array(np.arange(4*4*1).reshape(input_shape).astype(input_dtype)),
+    }
+    print("Testing {0: <50}".format("AVG_POOL_2D"), end="")
+    verify_vsi_result(inputs, cast_1, [], input_shape, output_shape, input_dtype)
+
+def test_uint8_depthwiseconv2d_pattern():
+    data_shape = (1, 12, 12, 3)
+    weight_shape = (7, 7, 3, 8)
+    out_shape = (1, 6, 6, 24)
+    add_shape = (24,)
+    intput_dtype="uint8"
+    output_dtype=intput_dtype
+    add_dtype = "int32"
+    Pad=(2,2,3,3)
+    Strides=(2,2)
+    Dilation=(1,1)
+    Ksize=(7,7)
+    Groups=3
+
+    data = relay.var("data", shape=data_shape, dtype=intput_dtype)
+    weight = relay.var("weight",shape=weight_shape,dtype=intput_dtype)
+    add = relay.var("add",shape=add_shape,dtype=add_dtype)
+
+    conv_params = {
+            "kernel_size": Ksize,
+            "strides": Strides,
+            "dilation": Dilation,
+            "padding": Pad,
+            "data_layout": "NHWC",
+            "channels":24,
+            "kernel_layout":"HWOI",
+            "groups":Groups
+        }
+    qnn_conv2d_params = dict(conv_params)
+    qnn_conv2d_params["input_zero_point"] = relay.const(128, "int32")
+    qnn_conv2d_params["kernel_zero_point"] = relay.const(148, "int32")
+    qnn_conv2d_params["out_dtype"] = "int32"
+    qnn_conv2d_params["input_scale"] = relay.const(0.0078125, "float32")
+    qnn_conv2d_params["kernel_scale"] = relay.const(0.08764044, "float32")
+    conv_op = relay.qnn.op.conv2d(
+            data,
+            weight,
+            **qnn_conv2d_params
+        )
+
+    add_op = relay.op.nn.bias_add(conv_op, add,3)
+
+    requantize_params = {
+            "input_scale": relay.const(0.000684690952, "float32"),
+            "input_zero_point": relay.const(0, "int32"),
+            "output_scale": relay.const(0.906536, "float32"),
+            "output_zero_point": relay.const(128, "int32"),
+            "out_dtype":output_dtype,
+        }
+
+    out = relay.qnn.op.requantize(add_op,**requantize_params)
+
+    inputs = {
+        "data": tvm.nd.array(np.ones(data_shape).astype(intput_dtype)),
+    }
+    params = {
+        "weight": tvm.nd.array(np.arange(weight_shape[0]*weight_shape[1]*weight_shape[2]*weight_shape[3]).reshape(weight_shape).astype(intput_dtype)),
+        "add": tvm.nd.array(np.arange(24).reshape(add_shape).astype(add_dtype)),
+    }
+    print("Testing {0: <50}".format("QNN pattern"), end="")
+    verify_vsi_result(inputs, out, params, data_shape, out_shape, output_dtype)
+
+
+def test_uint8_fullconnected():
+    input_dtype = "uint8"
+    temp_dtype = "int32"
+    output_dtype = input_dtype
+    input_shape = (1, 1, 1, 1536)
+    weight_shape = (1001, 1536)
+    add_shape = (1001,)
+    reshape_output_shape = (-1, 1536)
+    output_shape = (1, 1001)
+
+    data = relay.var("data", shape=input_shape, dtype=input_dtype)
+    weight = relay.var("weight", shape=weight_shape, dtype=input_dtype)
+    add = relay.var("add", shape=add_shape, dtype=temp_dtype)
+    reshape_op = relay.op.reshape(data, reshape_output_shape)
+    dense_op = relay.qnn.op.dense(reshape_op, weight,
+                                  input_zero_point=relay.const(0, "int32"),
+                                  kernel_zero_point=relay.const(0, "int32"),
+                                  input_scale=relay.const(
+                                      1.0, "float32"),
+                                  kernel_scale=relay.const(
+                                      1.0, "float32"),
+                                  units=weight_shape[0],
+                                  out_dtype=temp_dtype)
+
+    add_op = relay.op.nn.bias_add(dense_op, add)
+
+    requantize_params = {
+        "input_scale": relay.const(1.0, "float32"),
+        "input_zero_point": relay.const(0, "int32"),
+        "output_scale": relay.const(0.005, "float32"),
+        "output_zero_point": relay.const(0, "int32"),
+        "out_dtype": output_dtype,
+    }
+
+    out = relay.qnn.op.requantize(add_op, **requantize_params)
+    inputs = {
+        "data": tvm.nd.array(np.random.randint(1, high=10, size=input_shape, dtype=input_dtype)),
+    }
+    params = {
+        "weight": tvm.nd.array(np.random.randint(1, high=10, size=weight_shape, dtype=input_dtype)),
+        "add": tvm.nd.array(np.random.randint(1, high=10, size=add_shape, dtype=temp_dtype)),
+    }
+    print("Testing {0: <50}".format("AVG_POOL_2D"), end="")
+    verify_vsi_result(inputs, out, params, input_shape,
+                      output_shape, output_dtype)
+
+def test_uint8_squeeze():
+    data_dtype = "uint8"
+    axis_dtype = "int32"
+    data_shape = (1,1,1,1000)
+    out_shape = (1,1000)
+
+    axis = tuple(np.array([1,2],dtype=axis_dtype))
+    data = relay.var("data", shape=data_shape, dtype=data_dtype)
+    out = relay.op.squeeze(data,axis)
+    inputs = {
+        "data": tvm.nd.array(np.ones(data_shape,data_dtype)),
+    }
+    print("Testing {0: <50}".format("RESHAPE"), end="")
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, data_dtype)
+
+
+def test_uint8_depthtospace():
+    input_dtype = "uint8"
+    input_shape = (1, 256, 256, 256)
+    out_shape = (1, 512, 512, 64)
+
+    data = relay.var("data", shape=input_shape, dtype=input_dtype)
+    out = relay.op.nn.depth_to_space(data, 2, layout="NHWC")
+    inputs = {
+        "data": tvm.nd.array(np.random.randint(1, high=10, size=input_shape, dtype=input_dtype)),
+    }
+    print("Testing {0: <50}".format("RESHAPE"), end="")
+    verify_vsi_result(inputs, out, [], input_shape, out_shape, input_dtype)
+
+def test_qnn_sub():
+    data_dtype = "uint8"
+    data_shape = (1, 8, 8, 1)
+    out_shape = (1, 8, 8, 1)
+
+    x = relay.var("x", shape=data_shape, dtype=data_dtype)
+    y = relay.var("y", shape=data_shape, dtype=data_dtype)
+    out = relay.qnn.op.subtract(
+        lhs=x,
+        rhs=y,
+        lhs_scale=relay.const(0.00784314, "float32"),
+        lhs_zero_point=relay.const(127, "int32"),
+        rhs_scale=relay.const(0.00784314, "float32"),
+        rhs_zero_point=relay.const(127, "int32"),
+        output_scale=relay.const(0.00784314, "float32"),
+        output_zero_point=relay.const(127, "int32"),
+    )
+
+    print("Testing {0: <50}".format("QNN.SUB"), end="")
+    inputs = {
+        "x": tvm.nd.array(np.random.randint(1, high=101, size=data_shape, dtype="uint8")),
+        "y": tvm.nd.array(np.random.randint(1, high=101, size=data_shape, dtype="uint8")),
+    }
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, data_dtype)
+
+def test_qnn_multiply():
+    data_dtype = "uint8"
+    data_shape = (1, 8, 8, 1)
+    out_shape = (1, 8, 8, 1)
+
+    x = relay.var("x", shape=data_shape, dtype=data_dtype)
+    y = relay.var("y", shape=data_shape, dtype=data_dtype)
+    out = relay.qnn.op.mul(
+        lhs=x,
+        rhs=y,
+        lhs_scale=relay.const(0.00784314, "float32"),
+        lhs_zero_point=relay.const(127, "int32"),
+        rhs_scale=relay.const(0.00784314, "float32"),
+        rhs_zero_point=relay.const(127, "int32"),
+        output_scale=relay.const(0.00784314, "float32"),
+        output_zero_point=relay.const(127, "int32"),
+    )
+
+    print("Testing {0: <50}".format("QNN.SUB"), end="")
+    inputs = {
+        "x": tvm.nd.array(np.random.randint(1, high=101, size=data_shape, dtype="uint8")),
+        "y": tvm.nd.array(np.random.randint(1, high=101, size=data_shape, dtype="uint8")),
+    }
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, data_dtype)
+
+def test_qnn_maximum():
+    data_dtype = "uint8"
+    data_shape = (1, 8, 8, 1)
+    out_shape = (1, 8, 8, 1)
+
+    x = relay.var("x", shape=data_shape, dtype=data_dtype)
+    y = relay.var("y", shape=data_shape, dtype=data_dtype)
+    out = relay.op.maximum(
+        lhs=x,
+        rhs=y,
+    )
+
+    print("Testing {0: <50}".format("MAXINUM"), end="")
+    inputs = {
+        "x": tvm.nd.array(np.random.randint(1, high=101, size=data_shape, dtype="uint8")),
+        "y": tvm.nd.array(np.random.randint(1, high=101, size=data_shape, dtype="uint8")),
+    }
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, data_dtype)
+
+def test_qnn_minimum():
+    data_dtype = "uint8"
+    data_shape = (1, 8, 8, 1)
+    out_shape = (1, 8, 8, 1)
+
+    x = relay.var("x", shape=data_shape, dtype=data_dtype)
+    y = relay.var("y", shape=data_shape, dtype=data_dtype)
+    out = relay.op.minimum(
+        lhs=x,
+        rhs=y,
+    )
+
+    print("Testing {0: <50}".format("MININUM"), end="")
+    inputs = {
+        "x": tvm.nd.array(np.random.randint(1, high=101, size=data_shape, dtype="uint8")),
+        "y": tvm.nd.array(np.random.randint(1, high=101, size=data_shape, dtype="uint8")),
+    }
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, data_dtype)
+
+def test_qnn_logical_and():
+    data_dtype = "bool"
+    data_shape = (1, 8, 8, 1)
+    out_shape = (1, 8, 8, 1)
+
+    x = relay.var("x", shape=data_shape, dtype=data_dtype)
+    y = relay.var("y", shape=data_shape, dtype=data_dtype)
+    out = relay.op.logical_and(lhs=x,rhs=y)
+
+    print("Testing {0: <50}".format("QNN.LOGICAL_AND"), end="")
+    inputs = {
+        "x": tvm.nd.array(np.random.randint(0, high=2, size=data_shape, dtype=data_dtype)),
+        "y": tvm.nd.array(np.random.randint(0, high=2, size=data_shape, dtype=data_dtype)),
+    }
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, data_dtype)
+
+def test_qnn_logical_or():
+    data_dtype = "bool"
+    data_shape = (1, 8, 8, 1)
+    out_shape = (1, 8, 8, 1)
+
+    x = relay.var("x", shape=data_shape, dtype=data_dtype)
+    y = relay.var("y", shape=data_shape, dtype=data_dtype)
+    out = relay.op.logical_or(lhs=x,rhs=y)
+
+    print("Testing {0: <50}".format("QNN.LOGICAL_OR"), end="")
+    inputs = {
+        "x": tvm.nd.array(np.random.randint(0, high=2, size=data_shape, dtype=data_dtype)),
+        "y": tvm.nd.array(np.random.randint(0, high=2, size=data_shape, dtype=data_dtype)),
+    }
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, data_dtype)
+
+def test_qnn_pad():
+    data_dtype = "uint8"
+    data_shape = (1, 8, 8, 2)
+    out_shape = (1, 10, 10, 2)
+
+    paddings_num = [[0,0],[1,1],[1,1],[0,0]]
+
+    x = relay.var("x", shape=data_shape, dtype=data_dtype)
+    paddings = tuple(tuple(l) for l in paddings_num)
+    pad_value = float(0)
+    out = relay.op.nn.pad(x,paddings,pad_value)
+
+    print("Testing {0: <50}".format("QNN.LOGICAL_OR"), end="")
+    inputs = {
+        "x": tvm.nd.array(np.random.randint(0, high=100, size=data_shape, dtype=data_dtype)),
+    }
+    verify_vsi_result(inputs, out, [], data_shape, out_shape, data_dtype)
+
+def test_uint8_resizeNear():
+    input_dtype = "uint8"
+    size_dtype = "int32"
+    output_dtype = input_dtype
+    input_shape = (1, 38, 38, 128)
+    output_shape = (1, 76, 76, 128)
+
+    data = relay.var("data", shape=input_shape, dtype=input_dtype)
+    target_size = tuple(np.array([76,76],dtype=size_dtype))
+    method = "nearest_neighbor"
+    coord_trans = "asymmetric"
+    out = relay.image.resize(
+            data, target_size, "NHWC", method, coordinate_transformation_mode=coord_trans
+        )
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(1,200,size=input_shape).astype(input_dtype)),
+    }
+    verify_vsi_result(inputs, out, [], input_shape, output_shape, input_dtype)
+
+def test_uint8_mean2():
+    input_dtype = "uint8"
+    temp_dtype = "int32"
+    output_dtype = input_dtype
+    input_shape = (1, 7, 7, 20)
+    output_shape = (1, 1,1,20)
+
+    data = relay.var("data", shape=input_shape, dtype=input_dtype)
+    cast = relay.op.cast(data,temp_dtype)
+    axis = tuple(np.array([1,2],dtype=temp_dtype))
+    mean = relay.op.reduce.mean(cast,axis,True)
+
+    requantize_params = {
+        "input_scale": relay.const(0.1568378, "float32"),
+        "input_zero_point": relay.const(0, "int32"),
+        "output_scale": relay.const(0.1568378, "float32"),
+        "output_zero_point": relay.const(0, "int32"),
+        "out_dtype": output_dtype,
+    }
+
+    out = relay.qnn.op.requantize(mean, **requantize_params)
+    inputs = {
+        "data": tvm.nd.array(np.random.randint(1, high=10, size=input_shape, dtype=input_dtype)),
+    }
+    print("Testing {0: <50}".format("UINT MEAN"), end="")
+    verify_vsi_result(inputs, out, [], input_shape,
+                      output_shape, output_dtype)
+
+def test_transpose_conv2d_pattern():
+    data_shape = (1, 24, 24, 256)
+    weight_shape = (256, 128, 2,2)
+    out_shape = (1, 48, 48, 128)
+
+    input_dtype = "uint8"
+    out_dtype= input_dtype
+
+    data = relay.var("data", shape=data_shape, dtype=input_dtype)
+    weight = relay.var("weight",shape=weight_shape,dtype=input_dtype)
+
+    out = relay.nn.conv2d_transpose(
+        data,
+        weight,
+        strides=(2, 2),
+        padding=(0, 0, 0, 0),
+        channels=int(128),
+        kernel_size=(int(2), int(2)),
+        data_layout="NHWC",
+        kernel_layout="OIHW",
+        out_dtype=out_dtype,
+    )
+
+    inputs = {
+        "data": tvm.nd.array(np.random.randint(1, high=10, size=data_shape, dtype=input_dtype)),
+
+    }
+    params = {
+        "weight": tvm.nd.array(np.random.randint(1, high=200, size=weight_shape, dtype=input_dtype)),
+    }
+    print("Testing {0: <50}".format("QNN pattern"), end="")
+    verify_vsi_result(inputs, out, params, data_shape, out_shape, out_dtype)
+
+def test_uint8_transpose_conv2d_pattern():
+    data_shape = (1, 24, 24, 256)
+    weight_shape = (256, 128, 2,2)
+    out_shape = (1, 48, 48, 128)
+
+    input_dtype = "uint8"
+    temp_dtype = "int32"
+    output_dtype= input_dtype
+    kernel_size=(2, 2)
+    strides=(2, 2)
+    padding=(0, 0, 0, 0)
+    data_layout="NHWC"
+
+    data = relay.var("data", shape=data_shape, dtype=input_dtype)
+    weight = relay.var("weight",shape=weight_shape,dtype=input_dtype)
+
+
+    conv_params = {
+            "kernel_size": kernel_size,
+            "padding": padding,
+            "data_layout": data_layout,
+            "channels":weight_shape[1],
+            "out_dtype":temp_dtype,
+            "strides":strides
+        }
+    qnn_conv2d_params = dict(conv_params)
+    qnn_conv2d_params["input_zero_point"] = relay.const(0, "int32")
+    qnn_conv2d_params["kernel_zero_point"] = relay.const(129, "int32")
+    qnn_conv2d_params["out_dtype"] = "int32"
+    qnn_conv2d_params["input_scale"] = relay.const(0.0109899, "float32")
+    qnn_conv2d_params["kernel_scale"] = relay.const(0.00171253, "float32")
+    conv_op = relay.qnn.op.conv2d_transpose(
+            data,
+            weight,
+            **qnn_conv2d_params
+        )
+
+    requantize_params = {
+            "input_scale": relay.const(0.0109899*0.00171253, "float32"),
+            "input_zero_point": relay.const(0, "int32"),
+            "output_scale": relay.const(0.00000125877, "float32"),
+            "output_zero_point": relay.const(124, "int32"),
+            "axis": 3,
+            "out_dtype":output_dtype,
+        }
+    out = relay.qnn.op.requantize(conv_op,**requantize_params)
+
+
+    inputs = {
+        "data": tvm.nd.array(np.random.randint(1, high=20, size=data_shape, dtype=input_dtype)),
+
+    }
+    params = {
+        "weight": tvm.nd.array(np.random.randint(1, high=20, size=weight_shape, dtype=input_dtype)),
+    }
+    print("Testing {0: <50}".format("QNN pattern"), end="")
+    verify_vsi_result(inputs, out, params, data_shape, out_shape, output_dtype)
+
+def test_uint8_transpose_conv2d_pattern2():
+    data_shape = (1, 24, 24, 128)
+    weight_shape = (128, 64, 3,3)
+    out_shape = (1, 48, 48, 64)
+
+    input_dtype = "uint8"
+    temp_dtype = "int32"
+    output_dtype= input_dtype
+    kernel_size=(3, 3)
+    strides=(2, 2)
+    padding=(0, 0, 1, 1)
+    data_layout="NHWC"
+
+    data = relay.var("data", shape=data_shape, dtype=input_dtype)
+    weight = relay.var("weight",shape=weight_shape,dtype=input_dtype)
+
+
+    conv_params = {
+            "kernel_size": kernel_size,
+            "padding": padding,
+            "data_layout": data_layout,
+            "channels":weight_shape[1],
+            "out_dtype":temp_dtype,
+            "strides":strides
+        }
+    qnn_conv2d_params = dict(conv_params)
+    qnn_conv2d_params["input_zero_point"] = relay.const(0, "int32")
+    qnn_conv2d_params["kernel_zero_point"] = relay.const(129, "int32")
+    qnn_conv2d_params["out_dtype"] = "int32"
+    qnn_conv2d_params["input_scale"] = relay.const(0.0109899, "float32")
+    qnn_conv2d_params["kernel_scale"] = relay.const(0.00171253, "float32")
+    conv_op = relay.qnn.op.conv2d_transpose(
+            data,
+            weight,
+            **qnn_conv2d_params
+        )
+
+    requantize_params = {
+            "input_scale": relay.const(0.0109899*0.00171253, "float32"),
+            "input_zero_point": relay.const(0, "int32"),
+            "output_scale": relay.const(0.00000125877, "float32"),
+            "output_zero_point": relay.const(124, "int32"),
+            "axis": 3,
+            "out_dtype":output_dtype,
+        }
+    out = relay.qnn.op.requantize(conv_op,**requantize_params)
+
+
+    inputs = {
+        "data": tvm.nd.array(np.random.randint(1, high=20, size=data_shape, dtype=input_dtype)),
+
+    }
+    params = {
+        "weight": tvm.nd.array(np.random.randint(1, high=20, size=weight_shape, dtype=input_dtype)),
+    }
+    print("Testing {0: <50}".format("QNN pattern"), end="")
+    verify_vsi_result(inputs, out, params, data_shape, out_shape, output_dtype)
+
+def test_uint8_tanh():
+    input_dtype = "uint8"
+    output_dtype = input_dtype
+    temp_dtype = "float32"
+    data_shape = (1,100)
+    data = relay.var("data", shape=data_shape, dtype=input_dtype)
+
+    dequantize_op = relay.qnn.op.dequantize(data,
+                            input_zero_point=relay.const(0, "int32"),
+                            input_scale=relay.const(0.15294, "float32"),
+                            axis = -1,
+                            )
+    tanh_op = relay.op.tanh(dequantize_op)
+
+    quantize = relay.qnn.op.quantize(tanh_op,
+                            output_scale=relay.const(0.15294, "float32"),
+                            output_zero_point=relay.const(0, "int32"),
+                            axis = -1,
+                            out_dtype=output_dtype
+                            )
+    inputs = {
+        "data": tvm.nd.array(np.random.uniform(1,20,size=data_shape).astype(input_dtype)),
+    }
+    print("Testing {0: <50}".format("SIGMOID"), end="")
+    verify_vsi_result(inputs, quantize, [], data_shape, data_shape, output_dtype)
+
+if __name__ == "__main__":
+    #test_qnn_add()

Review comment:
       replace all of this with:
   `sys.exit(pytest.main([__file__] + sys.argv[1:]))`
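   For reference, a minimal sketch of what that entry point could look like, assuming `sys` and `pytest` are imported at the top of this test file:

   ```python
   import sys

   import pytest

   if __name__ == "__main__":
       # Run every test in this file and forward any extra command-line
       # arguments (e.g. -k filters) straight through to pytest.
       sys.exit(pytest.main([__file__] + sys.argv[1:]))
   ```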

##########
File path: src/relay/backend/contrib/vsi_npu/codegen_vsi_npu.h
##########
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+#ifndef TVM_RELAY_BACKEND_CONTRIB_VSI_NPU_CODEGEN_VSI_NPU_H_
+#define TVM_RELAY_BACKEND_CONTRIB_VSI_NPU_CODEGEN_VSI_NPU_H_
+
+#include <tim/vx/context.h>
+#include <tim/vx/graph.h>
+#include <tim/vx/operation.h>
+#include <tvm/ir/error.h>
+#include <tvm/relay/expr.h>
+#include <tvm/relay/expr_functor.h>
+#include <tvm/runtime/module.h>
+#include <tvm/runtime/object.h>
+
+#include "op_map/op_setup.h"
+
+using namespace tvm::runtime;
+using namespace tvm::relay::contrib::vsi_npu::op_map;
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace vsi_npu {
+
+class VsiError {
+  // TODO
+};
+
+inline int32_t ConvertAxis(int32_t axisIn, uint32_t dimNum) {

Review comment:
       can you add comments about what this is converting from and to?
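       For reference, a minimal Python mirror (illustration only, `convert_axis` is not part of the codebase) of what the conversion appears to do, assuming, as the `rbegin()/rend()` copies in `GetTimVxTensorSpec` suggest, that TIM-VX stores shapes in reverse order relative to Relay:

   ```python
   def convert_axis(axis_in: int, dim_num: int) -> int:
       """Map a (possibly negative) Relay axis onto the reversed TIM-VX axis order."""
       normalized = dim_num + axis_in if axis_in < 0 else axis_in  # resolve negative axes
       return dim_num - normalized - 1                             # flip to count from the other end

   # For a 4-D NHWC tensor (dim_num = 4):
   #   convert_axis(3, 4) == convert_axis(-1, 4) == 0   # channel axis becomes the innermost TIM-VX axis
   #   convert_axis(0, 4) == 3                           # batch axis becomes the outermost TIM-VX axis
   ```

       A comment along these lines in the header would make the intent clear.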

##########
File path: src/relay/backend/contrib/vsi_npu/codegen_vsi_npu.h
##########
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+#ifndef TVM_RELAY_BACKEND_CONTRIB_VSI_NPU_CODEGEN_VSI_NPU_H_
+#define TVM_RELAY_BACKEND_CONTRIB_VSI_NPU_CODEGEN_VSI_NPU_H_
+
+#include <tim/vx/context.h>
+#include <tim/vx/graph.h>
+#include <tim/vx/operation.h>
+#include <tvm/ir/error.h>
+#include <tvm/relay/expr.h>
+#include <tvm/relay/expr_functor.h>
+#include <tvm/runtime/module.h>
+#include <tvm/runtime/object.h>
+
+#include "op_map/op_setup.h"
+
+using namespace tvm::runtime;
+using namespace tvm::relay::contrib::vsi_npu::op_map;
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace vsi_npu {
+
+class VsiError {
+  // TODO
+};
+
+inline int32_t ConvertAxis(int32_t axisIn, uint32_t dimNum) {
+  return dimNum - (axisIn < 0 ? dimNum + axisIn : axisIn) - 1;
+}
+
+struct RawGraphDef {
+  std::shared_ptr<char> compiled_graph;
+  uint32_t compiled_graph_size;
+  std::vector<tim::vx::TensorSpec> inputs_spec;
+  std::vector<tim::vx::TensorSpec> outputs_spec;
+};
+class VsiErrorReporter {

Review comment:
       could you comment what this is used for?

##########
File path: src/relay/backend/contrib/vsi_npu/codegen.cc
##########
@@ -0,0 +1,425 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+#include "codegen_vsi_npu.h"
+
+#include "../../../../runtime/contrib/vsi_npu/vsi_npu_runtime.h"
+#include "../../utils.h"
+#include "../codegen_c/codegen_c.h"
+#include "op_map/op_setup.h"
+
+#include <tvm/relay/attrs/image.h>
+#include <tvm/relay/attrs/nn.h>
+#include <tvm/relay/attrs/reduce.h>
+#include <tvm/relay/expr_functor.h>
+#include <tvm/relay/transform.h>
+#include <tvm/relay/type.h>
+#include <tvm/runtime/module.h>
+#include <tvm/runtime/registry.h>
+
+#include <fstream>
+#include <iostream>
+#include <numeric>
+#include <cassert>
+#include <sstream>
+
+#include "tim/transform/layout_inference.h"
+
+namespace tvx = tim::vx;
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace vsi_npu {
+
+using TensorInfoTable = std::map<Expr, std::vector<tim::vx::TensorSpec>>;
+
+void quant_info_infer(VxOpTable& op_tb, Expr now_expr, bool is_input) {
+  auto now_opsetup = op_tb[now_expr];
+  Expr pre_expr;
+  if ((now_opsetup->pCallbackexpr_ == nullptr ||
+      now_opsetup->pCallbackexpr_->ptr_pre_callback_ == nullptr) && is_input
+      ) {
+    return;
+  } else if((now_opsetup->pCallbackexpr_ == nullptr ||
+      now_opsetup->pCallbackexpr_->ptr_pre_callback_ == nullptr
+      || op_tb[now_expr]->specs_[0].quantization_.ZeroPoints().size() == 0)&& !is_input ){
+      return;
+  } else {
+    pre_expr = now_opsetup->pCallbackexpr_->ptr_pre_callback_->expr_;
+  }
+
+  auto pre_opsetup = op_tb[pre_expr];
+  auto ptr_callback = pre_opsetup->pCallbackexpr_;
+
+  if (now_opsetup->specs_[0].datatype_ == tvx::DataType::FLOAT32 ||
+      pre_opsetup->specs_[0].datatype_ == tvx::DataType::FLOAT32 ||
+      now_opsetup->specs_[0].datatype_ == tvx::DataType::BOOL8 ||
+      pre_opsetup->specs_[0].datatype_ == tvx::DataType::BOOL8) {
+    return;
+  }
+
+  tvx::Quantization& now_quant_info = now_opsetup->specs_[0].quantization_;
+
+  std::vector<int32_t> zps;
+  std::vector<float> scales;
+  if (now_quant_info.Type() == tvx::QuantType::NONE) {
+    zps = {0};
+    scales = {1.0};
+    now_quant_info.SetType(tvx::QuantType::ASYMMETRIC).SetScales({1.0}).SetZeroPoints({0});
+  } else {
+    zps = now_quant_info.ZeroPoints();
+    scales = now_quant_info.Scales();
+  }
+
+  while (ptr_callback &&
+         op_tb[ptr_callback->expr_]->specs_[0].quantization_.ZeroPoints().size() == 0) {
+    Expr expr = ptr_callback->expr_;
+    auto datatype = GetTvxType(expr->checked_type().as<TensorTypeNode>()->dtype);
+    if (datatype != tim::vx::DataType::INT32) {
+      op_tb[expr]
+          ->specs_[0]
+          .quantization_.SetType(tvx::QuantType::ASYMMETRIC)
+          .SetScales(scales)
+          .SetZeroPoints(zps);
+    }
+    ptr_callback = ptr_callback->ptr_pre_callback_;
+  }
+}
+
+template <typename T, typename T2>
+void attribute_transform(const T &attrs, T2 &attrs_num) {
+
+  std::transform(attrs.begin(), attrs.end(), attrs_num.begin(),
+                 [](const PrimExpr &attrs_num) {
+                   return static_cast<uint32_t>(
+                       attrs_num.as<IntImmNode>()->value);
+                 });
+};
+
+std::shared_ptr<tvx::Tensor> createVxOPerand(TensorInfoTable tensor_info,
+                                             Expr expr, tvx::Graph *graph,
+                                             uint32_t idx = 0) {
+  auto tensor_spec = tensor_info[expr][idx];
+  void *data = expr->IsInstance<ConstantNode>()
+                   ? expr.as<ConstantNode>()->data->data
+                   : nullptr;
+  return data == nullptr ? graph->CreateTensor(tensor_spec)
+                         : graph->CreateTensor(tensor_spec, data);
+};
+
+static std::vector<tim::vx::TensorSpec>
+GetTimVxTensorSpec(const TupleTypeNode *tuple) {
+  auto input_node_tensors = tuple->fields;
+
+  std::vector<tim::vx::TensorSpec> specs;
+  uint32_t input_node_num = input_node_tensors.size();
+  for (uint32_t i = 0; i < input_node_num; i++) {
+    std::cout << "GetTimVxTensorSpec: " << input_node_tensors[i].as<TensorTypeNode>() << std::endl;
+    tim::vx::ShapeType shape;
+    std::transform(input_node_tensors[i].as<TensorTypeNode>()->shape.rbegin(),
+                   input_node_tensors[i].as<TensorTypeNode>()->shape.rend(),
+                   std::back_inserter(shape), [](const PrimExpr &dim) {
+                     return static_cast<int>(dim.as<IntImmNode>()->value);
+                   });
+
+    auto dtype = input_node_tensors[i].as<TensorTypeNode>()->dtype;
+    auto dataType = GetTvxType(dtype);
+
+    tim::vx::TensorSpec spec(dataType, shape,
+                             tim::vx::TensorAttribute::OUTPUT);
+    specs.push_back(spec);
+  }
+  return specs;
+}
+
+using namespace backend;
+
+std::map<Expr, std::shared_ptr<OpSetup>>
+TensorMakerImpl::Create(const Expr &expr) {
+  this->vxOpmap_tbl_.clear();
+  CHECK(expr->checked_type().defined());
+  if (auto tuple = expr->checked_type().as<TupleTypeNode>()) {
+    auto specs = GetTimVxTensorSpec(tuple);
+    auto tn = expr.as<TupleNode>();
+    for (uint32_t i = 0; i < tuple->fields.size(); i++) {
+      vxOpmap_tbl_[tn->fields[i]] = std::make_shared<OpSetup>(specs[i]);
+    }
+  }
+  else {
+    auto tensor_node = expr->checked_type().as<TensorTypeNode>();
+    tim::vx::ShapeType o_shape;
+    std::transform(tensor_node->shape.rbegin(), tensor_node->shape.rend(),
+                   std::back_inserter(o_shape), [](const PrimExpr &dim) {
+                     return static_cast<int>(dim.as<IntImmNode>()->value);
+                   });
+
+    auto dtype = tensor_node[0].dtype;
+    auto tvx_type = GetTvxType(dtype);
+    auto output_Opsetup = std::make_shared<OpSetup>(
+        tvx::TensorSpec(tvx_type, o_shape, tvx::TensorAttribute::OUTPUT),
+        std::make_shared<CallbackExpr>(expr));
+    vxOpmap_tbl_[expr] = output_Opsetup;
+  }
+  VisitInferred(expr);
+  return vxOpmap_tbl_;
+}
+
+typedef void (*setup_operand_fun_ptr)(VxOpTable&, Expr&);
+
+template <typename T>
+void setup_operand(VxOpTable& vxOpmap_tbl_, Expr& expr) {
+  vxOpmap_tbl_[expr] = std::make_shared<T>(vxOpmap_tbl_[expr]->specs_[0],vxOpmap_tbl_[expr]->pCallbackexpr_);
+}
+
+#define DEFINE_NODE_ITEM(name, op) \
+  {name, setup_operand<op>}
+
+static std::map<std::string, setup_operand_fun_ptr> call_node_table = {
+  DEFINE_NODE_ITEM("nn.relu", Relu),
+  DEFINE_NODE_ITEM("nn.softmax", Softmax),
+  DEFINE_NODE_ITEM("nn.avg_pool2d", AvgPool),
+  DEFINE_NODE_ITEM("transpose", Transpose),
+  DEFINE_NODE_ITEM("qnn.add", QnnAdd),
+  DEFINE_NODE_ITEM("qnn.subtract", QnnSubtract),
+  DEFINE_NODE_ITEM("qnn.mul", QnnMul),
+  DEFINE_NODE_ITEM("maximum", Maximum),
+  DEFINE_NODE_ITEM("minimum", Minimum),
+  DEFINE_NODE_ITEM("nn.conv2d", Conv),
+  DEFINE_NODE_ITEM("qnn.quantize", Quantize),
+  DEFINE_NODE_ITEM("qnn.dequantize", Dequantize),
+  DEFINE_NODE_ITEM("reshape", Reshape),
+  DEFINE_NODE_ITEM("squeeze", Squeeze),
+  DEFINE_NODE_ITEM("argmax", ArgMax),
+  DEFINE_NODE_ITEM("argmin", ArgMin),
+  DEFINE_NODE_ITEM("image.resize2d", Resize),
+  DEFINE_NODE_ITEM("nn.max_pool2d", MaxPool2d),
+  DEFINE_NODE_ITEM("qnn.concatenate", VsiNpuConcat),
+  DEFINE_NODE_ITEM("add", Add),
+  DEFINE_NODE_ITEM("mean", Mean),
+  DEFINE_NODE_ITEM("sigmoid", Sigmoid),
+  DEFINE_NODE_ITEM("tanh", Tanh),
+  DEFINE_NODE_ITEM("nn.depth_to_space", DepthtoSpace),
+  DEFINE_NODE_ITEM("logical_and", LogicalAnd),
+  DEFINE_NODE_ITEM("logical_or", LogicalOr),
+  DEFINE_NODE_ITEM("nn.pad", Pad),
+  DEFINE_NODE_ITEM("nn.leaky_relu", LeakyRelu),
+  DEFINE_NODE_ITEM("qnn.requantize", QnnRequantize),
+};
+
+static std::map<std::string, setup_operand_fun_ptr> func_node_table = {
+  DEFINE_NODE_ITEM("vsi_npu.qnn_conv2d", VsiNpuQnnConv2d),
+  DEFINE_NODE_ITEM("vsi_npu.qnn_avgpool2d", VsiNpuQnnAvgPool),
+  DEFINE_NODE_ITEM("vsi_npu.qnn_softmax", VsiNpuQnnSoftmax),
+  DEFINE_NODE_ITEM("vsi_npu.qnn_sigmoid", VsiNpuQnnSigmoid),
+  DEFINE_NODE_ITEM("vsi_npu.qnn_clip", VsiNpuQnnClip),
+  DEFINE_NODE_ITEM("vsi_npu.qnn_dense", VsiNpuQnnDense),
+  DEFINE_NODE_ITEM("vsi_npu.qnn_mean", VsiNpuQnnMean),
+  DEFINE_NODE_ITEM("vsi_npu.qnn_leaky_relu", VsiNpuQnnLeakyRelu),
+  DEFINE_NODE_ITEM("vsi_npu.qnn_deconv", VsiNpuQnnDeconv),
+  DEFINE_NODE_ITEM("vsi_npu.qnn_tanh", VsiNpuQnnTanh),
+};
+
+void TensorMakerImpl::InferCall(const CallNode *cn) {
+  Call call_obj = GetRef<Call>(cn);
+  Expr expr = GetRef<Expr>(cn);
+  std::string name;
+  tvx::Quantization out_quant = tvx::Quantization();
+  if (const auto *fn = cn->op.as<FunctionNode>()) {
+    auto comp = fn->GetAttr<String>(attr::kComposite);
+    CHECK(comp.defined());
+    name = comp.value();
+    std::cout << "TensorMakerImpl::InferCall: " << name << std::endl;

Review comment:
       rm `std::cout` and replace with LOG calls if needed (see #9012 )




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org