Posted to commits@tvm.apache.org by lm...@apache.org on 2020/11/15 19:59:54 UTC

[incubator-tvm-site] branch asf-site updated: Docs build at Sun Nov 15 11:59:40 PST 2020

This is an automated email from the ASF dual-hosted git repository.

lmzheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 3705c6a  Docs build at Sun Nov 15 11:59:40 PST 2020
3705c6a is described below

commit 3705c6a3e2e165989de11b8a34e4ea23c8645c45
Author: Lianmin Zheng <li...@gmail.com>
AuthorDate: Sun Nov 15 11:59:41 2020 -0800

    Docs build at Sun Nov 15 11:59:40 PST 2020
---
 .../tune_simple_template.py                        |   4 +-
 .../tune_simple_template.ipynb                     |   2 +-
 .../tune_relay_cuda.py                             |   2 +-
 .../tune_network_cuda.py                           |  10 +-
 .../tune_relay_mobile_gpu.ipynb                    |   2 +-
 .../tune_conv2d_layer_cuda.py                      |   5 +-
 .../tune_relay_cuda.ipynb                          |   2 +-
 .../tune_relay_x86.py                              |   2 +-
 .../tune_matmul_x86.py                             |   8 +-
 .../tune_relay_x86.ipynb                           |   2 +-
 .../tune_relay_arm.py                              |   2 +-
 .../tune_conv2d_layer_cuda.ipynb                   |   4 +-
 .../tune_network_cuda.ipynb                        |   6 +-
 .../tune_relay_mobile_gpu.py                       |   2 +-
 .../tune_matmul_x86.ipynb                          |   4 +-
 .../tune_relay_arm.ipynb                           |   2 +-
 docs/_sources/index.rst.txt                        |   2 +-
 .../auto_scheduler/sg_execution_times.rst.txt      |   8 +-
 .../auto_scheduler/tune_conv2d_layer_cuda.rst.txt  | 157 ++---
 .../auto_scheduler/tune_matmul_x86.rst.txt         |  12 +-
 .../auto_scheduler/tune_network_cuda.rst.txt       | 265 +++++++-
 .../tutorials/autotvm/sg_execution_times.rst.txt   |  16 +-
 .../tutorials/autotvm/tune_conv2d_cuda.rst.txt     |  44 +-
 .../tutorials/autotvm/tune_relay_arm.rst.txt       |   2 +-
 .../tutorials/autotvm/tune_relay_cuda.rst.txt      |   2 +-
 .../autotvm/tune_relay_mobile_gpu.rst.txt          |   2 +-
 .../tutorials/autotvm/tune_relay_x86.rst.txt       |   2 +-
 .../tutorials/autotvm/tune_simple_template.rst.txt |  24 +-
 .../tutorials/dev/bring_your_own_datatypes.rst.txt |   2 +-
 .../tutorials/dev/sg_execution_times.rst.txt       |   8 +-
 .../frontend/deploy_model_on_android.rst.txt       |   2 +-
 .../deploy_object_detection_pytorch.rst.txt        |   2 +-
 .../tutorials/frontend/deploy_prequantized.rst.txt |   2 +-
 .../frontend/deploy_prequantized_tflite.rst.txt    |   4 +-
 .../tutorials/frontend/deploy_ssd_gluoncv.rst.txt  |   2 +-
 .../tutorials/frontend/from_pytorch.rst.txt        |   2 +-
 .../tutorials/frontend/from_tensorflow.rst.txt     | 752 ++++++++++-----------
 .../tutorials/frontend/sg_execution_times.rst.txt  |  40 +-
 .../get_started/cross_compilation_and_rpc.rst.txt  |   2 +-
 .../get_started/relay_quick_start.rst.txt          |   2 +-
 .../get_started/sg_execution_times.rst.txt         |  10 +-
 docs/_sources/tutorials/index.rst.txt              |   4 +-
 .../tutorials/language/sg_execution_times.rst.txt  |  16 +-
 docs/_sources/tutorials/language/tensorize.rst.txt |  12 +-
 .../tutorials/language/tuple_inputs.rst.txt        |  26 +-
 .../tutorials/micro/sg_execution_times.rst.txt     |   6 +-
 .../tutorials/optimize/opt_conv_cuda.rst.txt       |   2 +-
 .../tutorials/optimize/opt_conv_tensorcore.rst.txt |   2 +-
 docs/_sources/tutorials/optimize/opt_gemm.rst.txt  |  20 +-
 .../tutorials/optimize/sg_execution_times.rst.txt  |  10 +-
 docs/_sources/tutorials/topi/intro_topi.rst.txt    |   2 +-
 .../tutorials/topi/sg_execution_times.rst.txt      |   4 +-
 .../tutorials/autotvm/sg_execution_times.rst.txt   |   4 +-
 .../vta/tutorials/autotvm/tune_relay_vta.rst.txt   |   2 +-
 .../frontend/deploy_classification.rst.txt         |   4 +-
 .../tutorials/frontend/sg_execution_times.rst.txt  |   4 +-
 .../_sources/vta/tutorials/matrix_multiply.rst.txt |   8 +-
 .../vta/tutorials/optimize/convolution_opt.rst.txt |   8 +-
 .../tutorials/optimize/sg_execution_times.rst.txt  |   6 +-
 .../vta/tutorials/sg_execution_times.rst.txt       |   6 +-
 .../_sources/vta/tutorials/vta_get_started.rst.txt |   4 +-
 docs/api/doxygen/crt_2packed__func_8h.html         |   2 +-
 docs/api/doxygen/crt_2packed__func_8h__incl.svg    | 109 +--
 docs/api/doxygen/device__api_8h.html               |  16 +-
 docs/api/doxygen/device__api_8h_source.html        |   6 +-
 docs/api/doxygen/globals_func.html                 |   3 +
 docs/api/doxygen/globals_t.html                    |   3 +
 docs/api/doxygen/graph__runtime_8h.html            |   2 +-
 docs/api/doxygen/graph__runtime_8h__incl.svg       | 175 ++---
 docs/api/doxygen/namespacemembers.html             |   3 +
 docs/api/doxygen/namespacemembers_func.html        |   3 +
 docs/api/doxygen/namespacemembers_func_g.html      |   7 +-
 docs/api/doxygen/namespacemembers_func_i.html      |   3 +
 docs/api/doxygen/namespacemembers_func_r.html      |   5 +-
 docs/api/doxygen/namespacemembers_g.html           |   5 +-
 docs/api/doxygen/namespacemembers_i.html           |   3 +
 docs/api/doxygen/namespacemembers_r.html           |   3 +
 docs/api/doxygen/namespacetvm_1_1runtime.html      | 150 +++-
 docs/api/doxygen/platform_8h.html                  |  60 +-
 docs/api/doxygen/platform_8h__incl.svg             |  48 +-
 docs/api/doxygen/platform_8h_source.html           |   3 +-
 docs/api/doxygen/search/all_1.js                   |   7 +-
 docs/api/doxygen/search/all_12.js                  |   5 +-
 docs/api/doxygen/search/all_13.js                  |   2 +-
 docs/api/doxygen/search/all_14.js                  |  17 +-
 docs/api/doxygen/search/all_7.js                   |   5 +-
 docs/api/doxygen/search/all_9.js                   |   9 +-
 docs/api/doxygen/search/all_f.js                   |   2 +-
 docs/api/doxygen/search/functions_1.js             |   1 +
 docs/api/doxygen/search/functions_12.js            |   1 +
 docs/api/doxygen/search/functions_14.js            |   1 +
 docs/api/doxygen/search/functions_7.js             |   1 +
 docs/api/doxygen/search/functions_9.js             |   1 +
 docs/api/doxygen/search/functions_f.js             |   2 +-
 docs/api/python/auto_scheduler.html                |   4 +-
 docs/api/rust/compiler_ext/fn.tvm_export.html      |   2 +-
 docs/api/typedoc/classes/bytestreamreader.html     |  12 +-
 docs/api/typedoc/classes/cachedcallstack.html      |  34 +-
 docs/api/typedoc/classes/dlcontext.html            |  10 +-
 docs/api/typedoc/classes/dldatatype.html           |  12 +-
 docs/api/typedoc/classes/environment.html          |  12 +-
 docs/api/typedoc/classes/ffilibrary.html           |  20 +-
 docs/api/typedoc/classes/graphruntime.html         |  16 +-
 docs/api/typedoc/classes/instance.html             |  40 +-
 docs/api/typedoc/classes/memory.html               |  34 +-
 docs/api/typedoc/classes/module.html               |  10 +-
 docs/api/typedoc/classes/ndarray.html              |  22 +-
 docs/api/typedoc/classes/packedfunccell.html       |   6 +-
 docs/api/typedoc/classes/rpcserver.html            |  14 +-
 docs/api/typedoc/classes/scalar.html               |   6 +-
 docs/api/typedoc/classes/webgpucontext.html        |  12 +-
 docs/api/typedoc/enums/argtypecode.html            |  30 +-
 docs/api/typedoc/enums/aynccallbackcode.html       |   4 +-
 docs/api/typedoc/enums/dldatatypecode.html         |   8 +-
 docs/api/typedoc/enums/rpcserverstate.html         |  12 +-
 docs/api/typedoc/enums/sizeof.html                 |  18 +-
 docs/api/typedoc/index.html                        | 114 ++--
 docs/api/typedoc/interfaces/disposable.html        |   2 +-
 docs/api/typedoc/interfaces/functioninfo.html      |   6 +-
 docs/api/typedoc/interfaces/libraryprovider.html   |   4 +-
 docs/index.html                                    |   2 +-
 docs/objects.inv                                   | Bin 17442 -> 17446 bytes
 docs/searchindex.js                                |   2 +-
 .../auto_scheduler/sg_execution_times.html         |   8 +-
 .../auto_scheduler/tune_conv2d_layer_cuda.html     | 174 ++---
 docs/tutorials/auto_scheduler/tune_matmul_x86.html |  31 +-
 .../auto_scheduler/tune_network_cuda.html          | 278 +++++++-
 docs/tutorials/autotvm/sg_execution_times.html     |  14 +-
 docs/tutorials/autotvm/tune_conv2d_cuda.html       |  62 +-
 docs/tutorials/autotvm/tune_relay_arm.html         |  24 +-
 docs/tutorials/autotvm/tune_relay_cuda.html        |  20 +-
 docs/tutorials/autotvm/tune_relay_mobile_gpu.html  |  24 +-
 docs/tutorials/autotvm/tune_relay_x86.html         |  26 +-
 docs/tutorials/autotvm/tune_simple_template.html   |  38 +-
 docs/tutorials/dev/bring_your_own_datatypes.html   |   6 +-
 docs/tutorials/dev/sg_execution_times.html         |   8 +-
 .../frontend/deploy_model_on_android.html          |   2 +-
 .../frontend/deploy_object_detection_pytorch.html  |   2 +-
 docs/tutorials/frontend/deploy_prequantized.html   |   2 +-
 .../frontend/deploy_prequantized_tflite.html       |   4 +-
 docs/tutorials/frontend/deploy_ssd_gluoncv.html    |   6 +-
 docs/tutorials/frontend/from_pytorch.html          |   4 +-
 docs/tutorials/frontend/from_tensorflow.html       | 752 ++++++++++-----------
 docs/tutorials/frontend/sg_execution_times.html    |  40 +-
 .../get_started/cross_compilation_and_rpc.html     |   2 +-
 docs/tutorials/get_started/relay_quick_start.html  | 120 ++--
 docs/tutorials/get_started/sg_execution_times.html |  10 +-
 docs/tutorials/index.html                          |  36 +-
 docs/tutorials/language/sg_execution_times.html    |  16 +-
 docs/tutorials/language/tensorize.html             |  12 +-
 docs/tutorials/language/tuple_inputs.html          |  26 +-
 docs/tutorials/micro/sg_execution_times.html       |   6 +-
 docs/tutorials/optimize/opt_conv_cuda.html         |   2 +-
 docs/tutorials/optimize/opt_conv_tensorcore.html   |   2 +-
 docs/tutorials/optimize/opt_gemm.html              |  20 +-
 .../optimize/opt_matmul_auto_tensorcore.html       |   4 +-
 docs/tutorials/optimize/sg_execution_times.html    |  10 +-
 docs/tutorials/topi/intro_topi.html                |   2 +-
 docs/tutorials/topi/sg_execution_times.html        |   4 +-
 docs/vta/tutorials/autotvm/sg_execution_times.html |   4 +-
 docs/vta/tutorials/autotvm/tune_relay_vta.html     | 186 ++---
 .../tutorials/frontend/deploy_classification.html  |  18 +-
 .../vta/tutorials/frontend/sg_execution_times.html |   4 +-
 docs/vta/tutorials/matrix_multiply.html            |   8 +-
 docs/vta/tutorials/optimize/convolution_opt.html   |   8 +-
 .../vta/tutorials/optimize/sg_execution_times.html |   6 +-
 docs/vta/tutorials/sg_execution_times.html         |   6 +-
 docs/vta/tutorials/vta_get_started.html            |   4 +-
 168 files changed, 2760 insertions(+), 1987 deletions(-)

diff --git a/docs/_downloads/0bb862dbb3a4c434477f93fe2c147fbb/tune_simple_template.py b/docs/_downloads/0bb862dbb3a4c434477f93fe2c147fbb/tune_simple_template.py
index 4c5c7da..db199fc 100644
--- a/docs/_downloads/0bb862dbb3a4c434477f93fe2c147fbb/tune_simple_template.py
+++ b/docs/_downloads/0bb862dbb3a4c434477f93fe2c147fbb/tune_simple_template.py
@@ -15,8 +15,8 @@
 # specific language governing permissions and limitations
 # under the License.
 """
-Writing tunable template and Using auto-tuner
-=============================================
+Writing Tunable Templates and Using the Auto-tuner
+==================================================
 **Author**: `Lianmin Zheng <https://github.com/merrymercy>`_
 
 This is an introduction tutorial to the auto-tuning module in TVM.
diff --git a/docs/_downloads/0d95a85fc279fdff660608ef305b9107/tune_simple_template.ipynb b/docs/_downloads/0d95a85fc279fdff660608ef305b9107/tune_simple_template.ipynb
index 30db8c5..63b2965 100644
--- a/docs/_downloads/0d95a85fc279fdff660608ef305b9107/tune_simple_template.ipynb
+++ b/docs/_downloads/0d95a85fc279fdff660608ef305b9107/tune_simple_template.ipynb
@@ -15,7 +15,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "\nWriting tunable template and Using auto-tuner\n=============================================\n**Author**: `Lianmin Zheng <https://github.com/merrymercy>`_\n\nThis is an introduction tutorial to the auto-tuning module in TVM.\n\nThere are two steps in auto-tuning.\nThe first step is defining a search space.\nThe second step is running a search algorithm to explore through this space.\nIn this tutorial, you can learn how to perform these two steps in TVM.\nThe whole workflow is  [...]
+        "\nWriting Tunable Templates and Using the Auto-tuner\n==================================================\n**Author**: `Lianmin Zheng <https://github.com/merrymercy>`_\n\nThis is an introduction tutorial to the auto-tuning module in TVM.\n\nThere are two steps in auto-tuning.\nThe first step is defining a search space.\nThe second step is running a search algorithm to explore through this space.\nIn this tutorial, you can learn how to perform these two steps in TVM.\nThe whole wo [...]
       ]
     },
     {
diff --git a/docs/_downloads/272a5a893d007658546dc0eaf0a7aeed/tune_relay_cuda.py b/docs/_downloads/272a5a893d007658546dc0eaf0a7aeed/tune_relay_cuda.py
index 9140713..33b62bb 100644
--- a/docs/_downloads/272a5a893d007658546dc0eaf0a7aeed/tune_relay_cuda.py
+++ b/docs/_downloads/272a5a893d007658546dc0eaf0a7aeed/tune_relay_cuda.py
@@ -15,7 +15,7 @@
 # specific language governing permissions and limitations
 # under the License.
 """
-Auto-tuning a convolutional network for NVIDIA GPU
+Auto-tuning a Convolutional Network for NVIDIA GPU
 ==================================================
 **Author**: `Lianmin Zheng <https://github.com/merrymercy>`_, `Eddie Yan <https://github.com/eqy/>`_
 
diff --git a/docs/_downloads/2771a7fc8bf8eeb7788823ff349aacc0/tune_network_cuda.py b/docs/_downloads/2771a7fc8bf8eeb7788823ff349aacc0/tune_network_cuda.py
index 9eb5d5c..4756ea3 100644
--- a/docs/_downloads/2771a7fc8bf8eeb7788823ff349aacc0/tune_network_cuda.py
+++ b/docs/_downloads/2771a7fc8bf8eeb7788823ff349aacc0/tune_network_cuda.py
@@ -15,8 +15,8 @@
 # specific language governing permissions and limitations
 # under the License.
 """
-Auto-tuning a Neural Network for NVIDIA GPU
-===========================================
+Auto-scheduling a Neural Network for NVIDIA GPU
+===============================================
 **Author**: `Lianmin Zheng <https://github.com/merrymercy>`_
 
 Auto-tuning for specific devices and workloads is critical for getting the
@@ -156,6 +156,10 @@ print("Extract tasks...")
 mod, params, input_shape, output_shape = get_network(network, batch_size, layout, dtype=dtype)
 tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target)
 
+for idx, task in enumerate(tasks):
+    print("========== Task %d  (workload key: %s) ==========" % (idx, task.workload_key))
+    print(task.compute_dag)
+
 #################################################################
 # Begin Tuning
 # ------------
@@ -250,7 +254,7 @@ def run_tuning():
 #   There will also be some "dmlc::Error"s and CUDA errors, because the
 #   auto-scheduler will try some invalid schedules.
 #   You can safely ignore them if the tuning can continue, because these
-#   errors are isolated from the master process.
+#   errors are isolated from the main process.
 #
 
 ######################################################################
diff --git a/docs/_downloads/2c0ed53a9ebd68caf76cd8235fae2711/tune_relay_mobile_gpu.ipynb b/docs/_downloads/2c0ed53a9ebd68caf76cd8235fae2711/tune_relay_mobile_gpu.ipynb
index 032e56e..74c1ac4 100644
--- a/docs/_downloads/2c0ed53a9ebd68caf76cd8235fae2711/tune_relay_mobile_gpu.ipynb
+++ b/docs/_downloads/2c0ed53a9ebd68caf76cd8235fae2711/tune_relay_mobile_gpu.ipynb
@@ -15,7 +15,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "\nAuto-tuning a convolutional network for Mobile GPU\n==================================================\n**Author**: `Lianmin Zheng <https://github.com/merrymercy>`_, `Eddie Yan <https://github.com/eqy>`_\n\nAuto-tuning for a specific device is critical for getting the best\nperformance. This is a tutorial about how to tune a whole convolutional\nnetwork.\n\nThe operator implementation for Mobile GPU in TVM is written in template form.\nThe template has many tunable knobs (tile [...]
+        "\nAuto-tuning a Convolutional Network for Mobile GPU\n==================================================\n**Author**: `Lianmin Zheng <https://github.com/merrymercy>`_, `Eddie Yan <https://github.com/eqy>`_\n\nAuto-tuning for a specific device is critical for getting the best\nperformance. This is a tutorial about how to tune a whole convolutional\nnetwork.\n\nThe operator implementation for Mobile GPU in TVM is written in template form.\nThe template has many tunable knobs (tile [...]
       ]
     },
     {
diff --git a/docs/_downloads/678f3c372a599a18d909aed0fefb30be/tune_conv2d_layer_cuda.py b/docs/_downloads/678f3c372a599a18d909aed0fefb30be/tune_conv2d_layer_cuda.py
index a8bb8dd..a28e98b 100644
--- a/docs/_downloads/678f3c372a599a18d909aed0fefb30be/tune_conv2d_layer_cuda.py
+++ b/docs/_downloads/678f3c372a599a18d909aed0fefb30be/tune_conv2d_layer_cuda.py
@@ -17,11 +17,13 @@
 """
 .. _auto-scheduler-conv-gpu:
 
-Auto-scheduling a convolution layer for GPU
+Auto-scheduling a Convolution Layer for GPU
 ===========================================
 **Author**: `Lianmin Zheng <https://github.com/merrymercy>`_, \
             `Chengfan Jia <https://github.com/jcf94/>`_
 
+This is a tutorial on how to use the auto-scheduler for GPUs.
+
 Different from the template-based :ref:`autotvm <tutorials-autotvm-sec>` which relies on
 manual templates to define the search space, the auto-scheduler does not require any templates.
 Users only need to write the computation declaration without any schedule commands or templates.
@@ -99,6 +101,7 @@ tune_option = auto_scheduler.TuningOptions(
     num_measure_trials=10,  # change this to 1000 to achieve the best performance
     runner=measure_ctx.runner,
     measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
+    verbose=2,
 )
 
 ######################################################################
diff --git a/docs/_downloads/739deb9ab034a5315ce6ba6bf7e5ff44/tune_relay_cuda.ipynb b/docs/_downloads/739deb9ab034a5315ce6ba6bf7e5ff44/tune_relay_cuda.ipynb
index 02c1e42..277a4a0 100644
--- a/docs/_downloads/739deb9ab034a5315ce6ba6bf7e5ff44/tune_relay_cuda.ipynb
+++ b/docs/_downloads/739deb9ab034a5315ce6ba6bf7e5ff44/tune_relay_cuda.ipynb
@@ -15,7 +15,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "\nAuto-tuning a convolutional network for NVIDIA GPU\n==================================================\n**Author**: `Lianmin Zheng <https://github.com/merrymercy>`_, `Eddie Yan <https://github.com/eqy/>`_\n\nAuto-tuning for specific devices and workloads is critical for getting the\nbest performance. This is a tutorial on how to tune a whole convolutional\nnetwork for NVIDIA GPU.\n\nThe operator implementation for NVIDIA GPU in TVM is written in template form.\nThe template ha [...]
+        "\nAuto-tuning a Convolutional Network for NVIDIA GPU\n==================================================\n**Author**: `Lianmin Zheng <https://github.com/merrymercy>`_, `Eddie Yan <https://github.com/eqy/>`_\n\nAuto-tuning for specific devices and workloads is critical for getting the\nbest performance. This is a tutorial on how to tune a whole convolutional\nnetwork for NVIDIA GPU.\n\nThe operator implementation for NVIDIA GPU in TVM is written in template form.\nThe template ha [...]
       ]
     },
     {
diff --git a/docs/_downloads/85ba00b8ada85b8c5367f37b526a8caa/tune_relay_x86.py b/docs/_downloads/85ba00b8ada85b8c5367f37b526a8caa/tune_relay_x86.py
index 5b3d032..30e62ef 100644
--- a/docs/_downloads/85ba00b8ada85b8c5367f37b526a8caa/tune_relay_x86.py
+++ b/docs/_downloads/85ba00b8ada85b8c5367f37b526a8caa/tune_relay_x86.py
@@ -17,7 +17,7 @@
 """
 .. _tune_relay_x86:
 
-Auto-tuning a convolutional network for x86 CPU
+Auto-tuning a Convolutional Network for x86 CPU
 ===============================================
 **Author**: `Yao Wang <https://github.com/kevinthesun>`_, `Eddie Yan <https://github.com/eqy>`_
 
diff --git a/docs/_downloads/91b0339c8f3cc2594cee580dc450149a/tune_matmul_x86.py b/docs/_downloads/91b0339c8f3cc2594cee580dc450149a/tune_matmul_x86.py
index 2bd47de..6d75629 100644
--- a/docs/_downloads/91b0339c8f3cc2594cee580dc450149a/tune_matmul_x86.py
+++ b/docs/_downloads/91b0339c8f3cc2594cee580dc450149a/tune_matmul_x86.py
@@ -15,11 +15,13 @@
 # specific language governing permissions and limitations
 # under the License.
 """
-Auto-scheduling matrix multiplication for CPU
+Auto-scheduling Matrix Multiplication for CPU
 =============================================
 **Author**: `Lianmin Zheng <https://github.com/merrymercy>`_, \
             `Chengfan Jia <https://github.com/jcf94/>`_
 
+This is a tutorial on how to use the auto-scheduler for CPUs.
+
 Different from the template-based :ref:`autotvm <tutorials-autotvm-sec>` which relies on
 manual templates to define the search space, the auto-scheduler does not require any templates.
 Users only need to write the computation declaration without any schedule commands or templates.
@@ -88,7 +90,9 @@ print(task.compute_dag)
 
 log_file = "matmul.json"
 tune_option = auto_scheduler.TuningOptions(
-    num_measure_trials=10, measure_callbacks=[auto_scheduler.RecordToFile(log_file)]
+    num_measure_trials=10,
+    measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
+    verbose=2,
 )
 
 ######################################################################
diff --git a/docs/_downloads/b9891d1a23f84eec3271025d99d005f7/tune_relay_x86.ipynb b/docs/_downloads/b9891d1a23f84eec3271025d99d005f7/tune_relay_x86.ipynb
index 2a07a04..e540aa6 100644
--- a/docs/_downloads/b9891d1a23f84eec3271025d99d005f7/tune_relay_x86.ipynb
+++ b/docs/_downloads/b9891d1a23f84eec3271025d99d005f7/tune_relay_x86.ipynb
@@ -15,7 +15,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "\n\nAuto-tuning a convolutional network for x86 CPU\n===============================================\n**Author**: `Yao Wang <https://github.com/kevinthesun>`_, `Eddie Yan <https://github.com/eqy>`_\n\nThis is a tutorial about how to tune convolution neural network\nfor x86 CPU.\n\nNote that this tutorial will not run on Windows or recent versions of macOS. To\nget it to run, you will need to wrap the body of this tutorial in a :code:`if\n__name__ == \"__main__\":` block.\n\n"
+        "\n\nAuto-tuning a Convolutional Network for x86 CPU\n===============================================\n**Author**: `Yao Wang <https://github.com/kevinthesun>`_, `Eddie Yan <https://github.com/eqy>`_\n\nThis is a tutorial about how to tune convolution neural network\nfor x86 CPU.\n\nNote that this tutorial will not run on Windows or recent versions of macOS. To\nget it to run, you will need to wrap the body of this tutorial in a :code:`if\n__name__ == \"__main__\":` block.\n\n"
       ]
     },
     {
diff --git a/docs/_downloads/baf1373314e0e040008107ff2571b4cd/tune_relay_arm.py b/docs/_downloads/baf1373314e0e040008107ff2571b4cd/tune_relay_arm.py
index c69c7d9..1e1e98a 100644
--- a/docs/_downloads/baf1373314e0e040008107ff2571b4cd/tune_relay_arm.py
+++ b/docs/_downloads/baf1373314e0e040008107ff2571b4cd/tune_relay_arm.py
@@ -17,7 +17,7 @@
 """
 .. _tune_relay_arm:
 
-Auto-tuning a convolutional network for ARM CPU
+Auto-tuning a Convolutional Network for ARM CPU
 ===============================================
 **Author**: `Lianmin Zheng <https://github.com/merrymercy>`_, `Zhao Wu <https://github.com/FrozenGene>`_, `Eddie Yan <https://github.com/eqy>`_
 
diff --git a/docs/_downloads/bcb4a24e8acc1ca84214bc8d7fb7954b/tune_conv2d_layer_cuda.ipynb b/docs/_downloads/bcb4a24e8acc1ca84214bc8d7fb7954b/tune_conv2d_layer_cuda.ipynb
index 03a713a..8dd3ff1 100644
--- a/docs/_downloads/bcb4a24e8acc1ca84214bc8d7fb7954b/tune_conv2d_layer_cuda.ipynb
+++ b/docs/_downloads/bcb4a24e8acc1ca84214bc8d7fb7954b/tune_conv2d_layer_cuda.ipynb
@@ -15,7 +15,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "\n\nAuto-scheduling a convolution layer for GPU\n===========================================\n**Author**: `Lianmin Zheng <https://github.com/merrymercy>`_,             `Chengfan Jia <https://github.com/jcf94/>`_\n\nDifferent from the template-based `autotvm <tutorials-autotvm-sec>` which relies on\nmanual templates to define the search space, the auto-scheduler does not require any templates.\nUsers only need to write the computation declaration without any schedule commands or  [...]
+        "\n\nAuto-scheduling a Convolution Layer for GPU\n===========================================\n**Author**: `Lianmin Zheng <https://github.com/merrymercy>`_,             `Chengfan Jia <https://github.com/jcf94/>`_\n\nThis is a tutorial on how to use the auto-scheduler for GPUs.\n\nDifferent from the template-based `autotvm <tutorials-autotvm-sec>` which relies on\nmanual templates to define the search space, the auto-scheduler does not require any templates.\nUsers only need to wr [...]
       ]
     },
     {
@@ -80,7 +80,7 @@
       },
       "outputs": [],
       "source": [
-        "log_file = \"conv2d.json\"\nmeasure_ctx = auto_scheduler.LocalRPCMeasureContext(min_repeat_ms=300)\ntune_option = auto_scheduler.TuningOptions(\n    num_measure_trials=10,  # change this to 1000 to achieve the best performance\n    runner=measure_ctx.runner,\n    measure_callbacks=[auto_scheduler.RecordToFile(log_file)],\n)"
+        "log_file = \"conv2d.json\"\nmeasure_ctx = auto_scheduler.LocalRPCMeasureContext(min_repeat_ms=300)\ntune_option = auto_scheduler.TuningOptions(\n    num_measure_trials=10,  # change this to 1000 to achieve the best performance\n    runner=measure_ctx.runner,\n    measure_callbacks=[auto_scheduler.RecordToFile(log_file)],\n    verbose=2,\n)"
       ]
     },
     {
diff --git a/docs/_downloads/dad91669fd0ea707f1374fe331b0dffe/tune_network_cuda.ipynb b/docs/_downloads/dad91669fd0ea707f1374fe331b0dffe/tune_network_cuda.ipynb
index 312cec7..7814f32 100644
--- a/docs/_downloads/dad91669fd0ea707f1374fe331b0dffe/tune_network_cuda.ipynb
+++ b/docs/_downloads/dad91669fd0ea707f1374fe331b0dffe/tune_network_cuda.ipynb
@@ -15,7 +15,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "\nAuto-tuning a Neural Network for NVIDIA GPU\n===========================================\n**Author**: `Lianmin Zheng <https://github.com/merrymercy>`_\n\nAuto-tuning for specific devices and workloads is critical for getting the\nbest performance. This is a tutorial on how to tune a whole neural\nnetwork for NVIDIA GPU with the auto-scheduler.\n\nTo auto-tune a neural network, we partition the network into small subgraphs and \ntune them independently. Each subgraph is treated [...]
+        "\nAuto-scheduling a Neural Network for NVIDIA GPU\n===============================================\n**Author**: `Lianmin Zheng <https://github.com/merrymercy>`_\n\nAuto-tuning for specific devices and workloads is critical for getting the\nbest performance. This is a tutorial on how to tune a whole neural\nnetwork for NVIDIA GPU with the auto-scheduler.\n\nTo auto-tune a neural network, we partition the network into small subgraphs and \ntune them independently. Each subgraph is [...]
       ]
     },
     {
@@ -62,7 +62,7 @@
       },
       "outputs": [],
       "source": [
-        "# Enable auto-scheduler in relay\nauto_scheduler.enable_relay_integration()\n\n# Extract tasks from the network\nprint(\"Extract tasks...\")\nmod, params, input_shape, output_shape = get_network(network, batch_size, layout, dtype=dtype)\ntasks, task_weights = auto_scheduler.extract_tasks(mod[\"main\"], params, target)"
+        "# Enable auto-scheduler in relay\nauto_scheduler.enable_relay_integration()\n\n# Extract tasks from the network\nprint(\"Extract tasks...\")\nmod, params, input_shape, output_shape = get_network(network, batch_size, layout, dtype=dtype)\ntasks, task_weights = auto_scheduler.extract_tasks(mod[\"main\"], params, target)\n\nfor idx, task in enumerate(tasks):\n    print(\"========== Task %d  (workload key: %s) ==========\" % (idx, task.workload_key))\n    print(task.compute_dag)"
       ]
     },
     {
@@ -87,7 +87,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "<div class=\"alert alert-info\"><h4>Note</h4><p>Explain the printed information during tuning\n\n  During the tuning, a lot of information will be printed on the console.\n  They are used for debugging purposes. The most important info is the output\n  of the task scheduler. The following table is a sample output.\n\n  .. code-block:: c\n\n    ----------------------------------------------------------------------\n    ------------------------------  [ Task Scheduler ]\n    ----- [...]
+        "<div class=\"alert alert-info\"><h4>Note</h4><p>Explain the printed information during tuning\n\n  During the tuning, a lot of information will be printed on the console.\n  They are used for debugging purposes. The most important info is the output\n  of the task scheduler. The following table is a sample output.\n\n  .. code-block:: c\n\n    ----------------------------------------------------------------------\n    ------------------------------  [ Task Scheduler ]\n    ----- [...]
       ]
     },
     {
diff --git a/docs/_downloads/e41367a7f459e4f4dca82180009c1539/tune_relay_mobile_gpu.py b/docs/_downloads/e41367a7f459e4f4dca82180009c1539/tune_relay_mobile_gpu.py
index 3611696..10e201f 100644
--- a/docs/_downloads/e41367a7f459e4f4dca82180009c1539/tune_relay_mobile_gpu.py
+++ b/docs/_downloads/e41367a7f459e4f4dca82180009c1539/tune_relay_mobile_gpu.py
@@ -15,7 +15,7 @@
 # specific language governing permissions and limitations
 # under the License.
 """
-Auto-tuning a convolutional network for Mobile GPU
+Auto-tuning a Convolutional Network for Mobile GPU
 ==================================================
 **Author**: `Lianmin Zheng <https://github.com/merrymercy>`_, `Eddie Yan <https://github.com/eqy>`_
 
diff --git a/docs/_downloads/f1a09967bab66114252357e4a9babb45/tune_matmul_x86.ipynb b/docs/_downloads/f1a09967bab66114252357e4a9babb45/tune_matmul_x86.ipynb
index 4c33490..e58f5ba 100644
--- a/docs/_downloads/f1a09967bab66114252357e4a9babb45/tune_matmul_x86.ipynb
+++ b/docs/_downloads/f1a09967bab66114252357e4a9babb45/tune_matmul_x86.ipynb
@@ -15,7 +15,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "\nAuto-scheduling matrix multiplication for CPU\n=============================================\n**Author**: `Lianmin Zheng <https://github.com/merrymercy>`_,             `Chengfan Jia <https://github.com/jcf94/>`_\n\nDifferent from the template-based `autotvm <tutorials-autotvm-sec>` which relies on\nmanual templates to define the search space, the auto-scheduler does not require any templates.\nUsers only need to write the computation declaration without any schedule commands o [...]
+        "\nAuto-scheduling Matrix Multiplication for CPU\n=============================================\n**Author**: `Lianmin Zheng <https://github.com/merrymercy>`_,             `Chengfan Jia <https://github.com/jcf94/>`_\n\nThis is a tutorial on how to use the auto-scheduler for CPUs.\n\nDifferent from the template-based `autotvm <tutorials-autotvm-sec>` which relies on\nmanual templates to define the search space, the auto-scheduler does not require any templates.\nUsers only need to  [...]
       ]
     },
     {
@@ -80,7 +80,7 @@
       },
       "outputs": [],
       "source": [
-        "log_file = \"matmul.json\"\ntune_option = auto_scheduler.TuningOptions(\n    num_measure_trials=10, measure_callbacks=[auto_scheduler.RecordToFile(log_file)]\n)"
+        "log_file = \"matmul.json\"\ntune_option = auto_scheduler.TuningOptions(\n    num_measure_trials=10,\n    measure_callbacks=[auto_scheduler.RecordToFile(log_file)],\n    verbose=2,\n)"
       ]
     },
     {
diff --git a/docs/_downloads/f8f7a2adf30f5033603d79cdbacd9235/tune_relay_arm.ipynb b/docs/_downloads/f8f7a2adf30f5033603d79cdbacd9235/tune_relay_arm.ipynb
index 6a0ef0c..6d84eeb 100644
--- a/docs/_downloads/f8f7a2adf30f5033603d79cdbacd9235/tune_relay_arm.ipynb
+++ b/docs/_downloads/f8f7a2adf30f5033603d79cdbacd9235/tune_relay_arm.ipynb
@@ -15,7 +15,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "\n\nAuto-tuning a convolutional network for ARM CPU\n===============================================\n**Author**: `Lianmin Zheng <https://github.com/merrymercy>`_, `Zhao Wu <https://github.com/FrozenGene>`_, `Eddie Yan <https://github.com/eqy>`_\n\nAuto-tuning for a specific ARM device is critical for getting the best\nperformance. This is a tutorial about how to tune a whole convolutional\nnetwork.\n\nThe operator implementation for ARM CPU in TVM is written in template form.\n [...]
+        "\n\nAuto-tuning a Convolutional Network for ARM CPU\n===============================================\n**Author**: `Lianmin Zheng <https://github.com/merrymercy>`_, `Zhao Wu <https://github.com/FrozenGene>`_, `Eddie Yan <https://github.com/eqy>`_\n\nAuto-tuning for a specific ARM device is critical for getting the best\nperformance. This is a tutorial about how to tune a whole convolutional\nnetwork.\n\nThe operator implementation for ARM CPU in TVM is written in template form.\n [...]
       ]
     },
     {
diff --git a/docs/_sources/index.rst.txt b/docs/_sources/index.rst.txt
index 18b2da7..f407fa2 100644
--- a/docs/_sources/index.rst.txt
+++ b/docs/_sources/index.rst.txt
@@ -25,7 +25,7 @@ Get Started
 -----------
 
 - Follow the :doc:`instructions <install/index>` to install TVM.
-- Checkout the :doc:`Tutorials <tutorials/index>`.
+- Checkout the :doc:`tutorials <tutorials/index>`.
 
 For Developers
 --------------
diff --git a/docs/_sources/tutorials/auto_scheduler/sg_execution_times.rst.txt b/docs/_sources/tutorials/auto_scheduler/sg_execution_times.rst.txt
index eefb895..03c79e9 100644
--- a/docs/_sources/tutorials/auto_scheduler/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorials/auto_scheduler/sg_execution_times.rst.txt
@@ -5,8 +5,8 @@
 
 Computation times
 =================
-**05:05.432** total execution time for **tutorials_auto_scheduler** files:
+**05:00.161** total execution time for **tutorials_auto_scheduler** files:
 
-- **02:46.397**: :ref:`sphx_glr_tutorials_auto_scheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``)
-- **01:54.000**: :ref:`sphx_glr_tutorials_auto_scheduler_tune_matmul_x86.py` (``tune_matmul_x86.py``)
-- **00:25.034**: :ref:`sphx_glr_tutorials_auto_scheduler_tune_network_cuda.py` (``tune_network_cuda.py``)
+- **02:42.357**: :ref:`sphx_glr_tutorials_auto_scheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``)
+- **01:53.558**: :ref:`sphx_glr_tutorials_auto_scheduler_tune_matmul_x86.py` (``tune_matmul_x86.py``)
+- **00:24.246**: :ref:`sphx_glr_tutorials_auto_scheduler_tune_network_cuda.py` (``tune_network_cuda.py``)
diff --git a/docs/_sources/tutorials/auto_scheduler/tune_conv2d_layer_cuda.rst.txt b/docs/_sources/tutorials/auto_scheduler/tune_conv2d_layer_cuda.rst.txt
index 37ec121..03aafc6 100644
--- a/docs/_sources/tutorials/auto_scheduler/tune_conv2d_layer_cuda.rst.txt
+++ b/docs/_sources/tutorials/auto_scheduler/tune_conv2d_layer_cuda.rst.txt
@@ -9,10 +9,12 @@
 
 .. _auto-scheduler-conv-gpu:
 
-Auto-scheduling a convolution layer for GPU
+Auto-scheduling a Convolution Layer for GPU
 ===========================================
 **Author**: `Lianmin Zheng <https://github.com/merrymercy>`_,             `Chengfan Jia <https://github.com/jcf94/>`_
 
+This is a tutorial on how to use the auto-scheduler for GPUs.
+
 Different from the template-based :ref:`autotvm <tutorials-autotvm-sec>` which relies on
 manual templates to define the search space, the auto-scheduler does not require any templates.
 Users only need to write the computation declaration without any schedule commands or templates.
@@ -135,6 +137,7 @@ mainly specify how we do the measurement during the search.
         num_measure_trials=10,  # change this to 1000 to achieve the best performance
         runner=measure_ctx.runner,
         measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
+        verbose=2,
     )
 
 
@@ -207,92 +210,58 @@ cooperative fetching, unrolling and operator fusion.
                  kernel: Buffer(kernel_2: Pointer(float32), float32, [512, 512, 3, 3], []),
                  data: Buffer(data_2: Pointer(float32), float32, [1, 512, 7, 7], [])}
       buffer_map = {data_1: data, kernel_1: kernel, bias_1: bias, compute_1: compute} {
-      attr [IterVar(blockIdx.x: int32, (nullptr), "ThreadIndex", "blockIdx.x")] "thread_extent" = 224;
+      attr [IterVar(blockIdx.x: int32, (nullptr), "ThreadIndex", "blockIdx.x")] "thread_extent" = 16;
       attr [compute_3: Pointer(float32)] "storage_scope" = "local";
-      allocate(compute_3, float32, [2]);
+      allocate(compute_3, float32, [14]);
       attr [pad_temp.shared: Pointer(float32)] "storage_scope" = "shared";
-      allocate(pad_temp.shared, float32, [72]);
+      allocate(pad_temp.shared, float32, [81]);
       attr [kernel.shared: Pointer(float32)] "storage_scope" = "shared";
-      allocate(kernel.shared, float32, [384]);
-      attr [IterVar(threadIdx.x: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56 {
-        compute_3[0] = 0f32
-        compute_3[1] = 0f32
-        for (rc.outer.outer: int32, 0, 64) {
-          for (rx.outer.outer: int32, 0, 3) {
-            attr [IterVar(threadIdx.x_1: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
-            pad_temp.shared[threadIdx.x_1] = @tir.if_then_else(((((1 <= floormod(threadIdx.x_1, 9)) && (floormod(threadIdx.x_1, 9) < 8)) && (1 <= (rx.outer.outer + floormod(blockIdx.x, 7)))) && ((rx.outer.outer + floormod(blockIdx.x, 7)) < 8)), (float32*)data_2[((((((rc.outer.outer*392) + (floordiv(threadIdx.x_1, 9)*49)) + (floormod(threadIdx.x_1, 9)*7)) + rx.outer.outer) + floormod(blockIdx.x, 7)) - 8)], 0f32, dtype=float32)
-            attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
-            if @tir.likely((threadIdx.x_1 < 16), dtype=bool) {
-              pad_temp.shared[(threadIdx.x_1 + 56)] = @tir.if_then_else(((((1 <= floormod((threadIdx.x_1 + 2), 9)) && (floormod((threadIdx.x_1 + 2), 9) < 8)) && (1 <= (rx.outer.outer + floormod(blockIdx.x, 7)))) && ((rx.outer.outer + floormod(blockIdx.x, 7)) < 8)), (float32*)data_2[((((((rc.outer.outer*392) + (floordiv((threadIdx.x_1 + 56), 9)*49)) + (floormod((threadIdx.x_1 + 2), 9)*7)) + rx.outer.outer) + floormod(blockIdx.x, 7)) - 8)], 0f32, dtype=float32)
+      allocate(kernel.shared, float32, [288]);
+      attr [IterVar(threadIdx.x: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 112 {
+        for (ff.inner.init: int32, 0, 2) {
+          compute_3[ff.inner.init] = 0f32
+          compute_3[(ff.inner.init + 2)] = 0f32
+          compute_3[(ff.inner.init + 4)] = 0f32
+          compute_3[(ff.inner.init + 6)] = 0f32
+          compute_3[(ff.inner.init + 8)] = 0f32
+          compute_3[(ff.inner.init + 10)] = 0f32
+          compute_3[(ff.inner.init + 12)] = 0f32
+        }
+        for (rc.outer.outer: int32, 0, 512) {
+          attr [IterVar(threadIdx.x_1: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 112;
+          if @tir.likely((threadIdx.x_1 < 9), dtype=bool) {
+            for (ax0.ax1.fused.ax2.fused.ax3.fused.inner.s: int32, 0, 9) {
+              pad_temp.shared[((threadIdx.x_1*9) + ax0.ax1.fused.ax2.fused.ax3.fused.inner.s)] = @tir.if_then_else(((((1 <= threadIdx.x_1) && (threadIdx.x_1 < 8)) && (1 <= ax0.ax1.fused.ax2.fused.ax3.fused.inner.s)) && (ax0.ax1.fused.ax2.fused.ax3.fused.inner.s < 8)), (float32*)data_2[((((rc.outer.outer*49) + (threadIdx.x_1*7)) + ax0.ax1.fused.ax2.fused.ax3.fused.inner.s) - 8)], 0f32, dtype=float32)
             }
-            attr [IterVar(threadIdx.x_2: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
-            kernel.shared[threadIdx.x_2] = (float32*)kernel_2[(((((floordiv(blockIdx.x, 7)*73728) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floormod(threadIdx.x_2, 24)*3)) + rx.outer.outer)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
-            kernel.shared[(threadIdx.x_2 + 56)] = (float32*)kernel_2[(((((floordiv(blockIdx.x, 7)*73728) + (floordiv((threadIdx.x_2 + 56), 24)*4608)) + (rc.outer.outer*72)) + (floormod((threadIdx.x_2 + 8), 24)*3)) + rx.outer.outer)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
-            kernel.shared[(threadIdx.x_2 + 112)] = (float32*)kernel_2[(((((floordiv(blockIdx.x, 7)*73728) + (floordiv((threadIdx.x_2 + 112), 24)*4608)) + (rc.outer.outer*72)) + (floormod((threadIdx.x_2 + 16), 24)*3)) + rx.outer.outer)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
-            kernel.shared[(threadIdx.x_2 + 168)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*73728) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floormod(threadIdx.x_2, 24)*3)) + rx.outer.outer) + 32256)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
-            kernel.shared[(threadIdx.x_2 + 224)] = (float32*)kernel_2[(((((floordiv(blockIdx.x, 7)*73728) + (floordiv((threadIdx.x_2 + 224), 24)*4608)) + (rc.outer.outer*72)) + (floormod((threadIdx.x_2 + 8), 24)*3)) + rx.outer.outer)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
-            kernel.shared[(threadIdx.x_2 + 280)] = (float32*)kernel_2[(((((floordiv(blockIdx.x, 7)*73728) + (floordiv((threadIdx.x_2 + 280), 24)*4608)) + (rc.outer.outer*72)) + (floormod((threadIdx.x_2 + 16), 24)*3)) + rx.outer.outer)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
-            if @tir.likely((threadIdx.x_2 < 48), dtype=bool) {
-              kernel.shared[(threadIdx.x_2 + 336)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*73728) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floormod(threadIdx.x_2, 24)*3)) + rx.outer.outer) + 64512)]
+          }
+          for (ax0.ax1.fused.ax2.fused.ax3.fused.outer.outer: int32, 0, 3) {
+            attr [IterVar(threadIdx.x_2: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 112;
+            if @tir.likely((((ax0.ax1.fused.ax2.fused.ax3.fused.outer.outer*112) + threadIdx.x_2) < 288), dtype=bool) {
+              kernel.shared[((ax0.ax1.fused.ax2.fused.ax3.fused.outer.outer*112) + threadIdx.x_2)] = (float32*)kernel_2[((((blockIdx.x*147456) + (floordiv(((ax0.ax1.fused.ax2.fused.ax3.fused.outer.outer*112) + threadIdx.x_2), 9)*4608)) + (rc.outer.outer*9)) + floormod(((ax0.ax1.fused.ax2.fused.ax3.fused.outer.outer*112) + threadIdx.x_2), 9))]
+            }
+          }
+          for (ry.outer.inner: int32, 0, 3) {
+            for (rx.inner: int32, 0, 3) {
+              for (ff.inner: int32, 0, 2) {
+                compute_3[ff.inner] = ((float32*)compute_3[ff.inner] + ((float32*)pad_temp.shared[(((ry.outer.inner*9) + (floormod(threadIdx.x, 7)*9)) + rx.inner)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*18) + (ff.inner*9)) + (ry.outer.inner*3)) + rx.inner)]))
+                compute_3[(ff.inner + 2)] = ((float32*)compute_3[(ff.inner + 2)] + ((float32*)pad_temp.shared[((((ry.outer.inner*9) + (floormod(threadIdx.x, 7)*9)) + rx.inner) + 1)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*18) + (ff.inner*9)) + (ry.outer.inner*3)) + rx.inner)]))
+                compute_3[(ff.inner + 4)] = ((float32*)compute_3[(ff.inner + 4)] + ((float32*)pad_temp.shared[((((ry.outer.inner*9) + (floormod(threadIdx.x, 7)*9)) + rx.inner) + 2)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*18) + (ff.inner*9)) + (ry.outer.inner*3)) + rx.inner)]))
+                compute_3[(ff.inner + 6)] = ((float32*)compute_3[(ff.inner + 6)] + ((float32*)pad_temp.shared[((((ry.outer.inner*9) + (floormod(threadIdx.x, 7)*9)) + rx.inner) + 3)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*18) + (ff.inner*9)) + (ry.outer.inner*3)) + rx.inner)]))
+                compute_3[(ff.inner + 8)] = ((float32*)compute_3[(ff.inner + 8)] + ((float32*)pad_temp.shared[((((ry.outer.inner*9) + (floormod(threadIdx.x, 7)*9)) + rx.inner) + 4)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*18) + (ff.inner*9)) + (ry.outer.inner*3)) + rx.inner)]))
+                compute_3[(ff.inner + 10)] = ((float32*)compute_3[(ff.inner + 10)] + ((float32*)pad_temp.shared[((((ry.outer.inner*9) + (floormod(threadIdx.x, 7)*9)) + rx.inner) + 5)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*18) + (ff.inner*9)) + (ry.outer.inner*3)) + rx.inner)]))
+                compute_3[(ff.inner + 12)] = ((float32*)compute_3[(ff.inner + 12)] + ((float32*)pad_temp.shared[((((ry.outer.inner*9) + (floormod(threadIdx.x, 7)*9)) + rx.inner) + 6)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*18) + (ff.inner*9)) + (ry.outer.inner*3)) + rx.inner)]))
+              }
             }
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[floormod(threadIdx.x, 7)]*(float32*)kernel.shared[(floordiv(threadIdx.x, 7)*48)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 9)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 3)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[floormod(threadIdx.x, 7)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 24)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 9)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 27)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 1)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 1)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 10)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 4)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 1)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 25)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 10)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 28)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 2)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 2)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 11)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 5)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 2)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 26)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 11)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 29)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 18)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 6)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 27)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 9)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 18)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 30)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 27)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 33)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 19)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 7)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 28)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 10)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 19)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 31)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 28)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 34)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 20)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 8)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 29)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 11)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 20)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 32)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 29)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 35)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 36)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 12)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 45)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 15)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 36)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 36)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 45)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 39)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 37)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 13)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 46)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 16)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 37)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 37)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 46)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 40)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 38)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 14)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 47)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 17)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 38)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 38)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 47)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 41)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 54)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 18)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 63)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 21)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 54)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 42)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 63)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 45)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 55)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 19)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 64)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 22)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 55)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 43)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 64)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 46)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 56)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 20)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 65)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 23)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 56)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 44)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 65)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 47)]))
           }
         }
         for (i1.inner: int32, 0, 2) {
-          compute_2[(((((floordiv(blockIdx.x, 7)*784) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(threadIdx.x, 7)*7)) + floormod(blockIdx.x, 7))] = max(((float32*)compute_3[i1.inner] + (float32*)bias_2[(((floordiv(blockIdx.x, 7)*16) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
+          compute_2[((((blockIdx.x*1568) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(threadIdx.x, 7)*7))] = max(((float32*)compute_3[i1.inner] + (float32*)bias_2[(((blockIdx.x*32) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
+          compute_2[(((((blockIdx.x*1568) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(threadIdx.x, 7)*7)) + 1)] = max(((float32*)compute_3[(i1.inner + 2)] + (float32*)bias_2[(((blockIdx.x*32) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
+          compute_2[(((((blockIdx.x*1568) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(threadIdx.x, 7)*7)) + 2)] = max(((float32*)compute_3[(i1.inner + 4)] + (float32*)bias_2[(((blockIdx.x*32) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
+          compute_2[(((((blockIdx.x*1568) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(threadIdx.x, 7)*7)) + 3)] = max(((float32*)compute_3[(i1.inner + 6)] + (float32*)bias_2[(((blockIdx.x*32) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
+          compute_2[(((((blockIdx.x*1568) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(threadIdx.x, 7)*7)) + 4)] = max(((float32*)compute_3[(i1.inner + 8)] + (float32*)bias_2[(((blockIdx.x*32) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
+          compute_2[(((((blockIdx.x*1568) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(threadIdx.x, 7)*7)) + 5)] = max(((float32*)compute_3[(i1.inner + 10)] + (float32*)bias_2[(((blockIdx.x*32) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
+          compute_2[(((((blockIdx.x*1568) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(threadIdx.x, 7)*7)) + 6)] = max(((float32*)compute_3[(i1.inner + 12)] + (float32*)bias_2[(((blockIdx.x*32) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
         }
       }
     }
@@ -345,7 +314,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 0.364 ms
+    Execution time of this operator: 0.394 ms
 
 
 
@@ -395,9 +364,9 @@ print the equivalent python schedule API, and build the binary again.
     compute_nn_o_o_i, compute_nn_o_i = s[compute].split(compute_nn_o_i, factor=1)
     compute_nn_o_o_o_i, compute_nn_o_o_i = s[compute].split(compute_nn_o_o_i, factor=1)
     compute_nn_o_o_o_o, compute_nn_o_o_o_i = s[compute].split(compute_nn_o_o_o_i, factor=1)
-    compute_ff_o_i, compute_ff_i = s[compute].split(compute_ff, factor=1)
-    compute_ff_o_o_i, compute_ff_o_i = s[compute].split(compute_ff_o_i, factor=2)
-    compute_ff_o_o_o_i, compute_ff_o_o_i = s[compute].split(compute_ff_o_o_i, factor=8)
+    compute_ff_o_i, compute_ff_i = s[compute].split(compute_ff, factor=2)
+    compute_ff_o_o_i, compute_ff_o_i = s[compute].split(compute_ff_o_i, factor=1)
+    compute_ff_o_o_o_i, compute_ff_o_o_i = s[compute].split(compute_ff_o_o_i, factor=16)
     compute_ff_o_o_o_o, compute_ff_o_o_o_i = s[compute].split(compute_ff_o_o_o_i, factor=1)
     compute_yy_o_i, compute_yy_i = s[compute].split(compute_yy, factor=1)
     compute_yy_o_o_i, compute_yy_o_i = s[compute].split(compute_yy_o_i, factor=1)
@@ -406,26 +375,26 @@ print the equivalent python schedule API, and build the binary again.
     compute_xx_o_i, compute_xx_i = s[compute].split(compute_xx, factor=1)
     compute_xx_o_o_i, compute_xx_o_i = s[compute].split(compute_xx_o_i, factor=1)
     compute_xx_o_o_o_i, compute_xx_o_o_i = s[compute].split(compute_xx_o_o_i, factor=1)
-    compute_xx_o_o_o_o, compute_xx_o_o_o_i = s[compute].split(compute_xx_o_o_o_i, factor=1)
-    compute_rc_o_i, compute_rc_i = s[compute].split(compute_rc, factor=2)
-    compute_rc_o_o, compute_rc_o_i = s[compute].split(compute_rc_o_i, factor=4)
+    compute_xx_o_o_o_o, compute_xx_o_o_o_i = s[compute].split(compute_xx_o_o_o_i, factor=7)
+    compute_rc_o_i, compute_rc_i = s[compute].split(compute_rc, factor=1)
+    compute_rc_o_o, compute_rc_o_i = s[compute].split(compute_rc_o_i, factor=1)
     compute_ry_o_i, compute_ry_i = s[compute].split(compute_ry, factor=1)
     compute_ry_o_o, compute_ry_o_i = s[compute].split(compute_ry_o_i, factor=3)
-    compute_rx_o_i, compute_rx_i = s[compute].split(compute_rx, factor=1)
+    compute_rx_o_i, compute_rx_i = s[compute].split(compute_rx, factor=3)
     compute_rx_o_o, compute_rx_o_i = s[compute].split(compute_rx_o_i, factor=1)
     s[compute].reorder(compute_nn_o_o_o_o, compute_ff_o_o_o_o, compute_yy_o_o_o_o, compute_xx_o_o_o_o, compute_nn_o_o_o_i, compute_ff_o_o_o_i, compute_yy_o_o_o_i, compute_xx_o_o_o_i, compute_nn_o_o_i, compute_ff_o_o_i, compute_yy_o_o_i, compute_xx_o_o_i, compute_rc_o_o, compute_ry_o_o, compute_rx_o_o, compute_rc_o_i, compute_ry_o_i, compute_rx_o_i, compute_nn_o_i, compute_ff_o_i, compute_yy_o_i, compute_xx_o_i, compute_rc_i, compute_ry_i, compute_rx_i, compute_nn_i, compute_ff_i, compute [...]
     compute_i0_o_i, compute_i0_i = s[compute].split(compute_i0, factor=1)
     compute_i0_o_o_i, compute_i0_o_i = s[compute].split(compute_i0_o_i, factor=1)
     compute_i0_o_o_o, compute_i0_o_o_i = s[compute].split(compute_i0_o_o_i, factor=1)
     compute_i1_o_i, compute_i1_i = s[compute].split(compute_i1, factor=2)
-    compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=8)
+    compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=16)
     compute_i1_o_o_o, compute_i1_o_o_i = s[compute].split(compute_i1_o_o_i, factor=1)
     compute_i2_o_i, compute_i2_i = s[compute].split(compute_i2, factor=1)
     compute_i2_o_o_i, compute_i2_o_i = s[compute].split(compute_i2_o_i, factor=7)
     compute_i2_o_o_o, compute_i2_o_o_i = s[compute].split(compute_i2_o_o_i, factor=1)
     compute_i3_o_i, compute_i3_i = s[compute].split(compute_i3, factor=1)
     compute_i3_o_o_i, compute_i3_o_i = s[compute].split(compute_i3_o_i, factor=1)
-    compute_i3_o_o_o, compute_i3_o_o_i = s[compute].split(compute_i3_o_o_i, factor=1)
+    compute_i3_o_o_o, compute_i3_o_o_i = s[compute].split(compute_i3_o_o_i, factor=7)
     s[compute].reorder(compute_i0_o_o_o, compute_i1_o_o_o, compute_i2_o_o_o, compute_i3_o_o_o, compute_i0_o_o_i, compute_i1_o_o_i, compute_i2_o_o_i, compute_i3_o_o_i, compute_i0_o_i, compute_i1_o_i, compute_i2_o_i, compute_i3_o_i, compute_i0_i, compute_i1_i, compute_i2_i, compute_i3_i)
     s[compute].compute_at(s[compute], compute_i3_o_i)
     kernel_shared = s.cache_read(kernel, "shared", [compute])
@@ -444,14 +413,14 @@ print the equivalent python schedule API, and build the binary again.
     kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[kernel_shared].fuse(kernel_shared_ax0, kernel_shared_ax1, kernel_shared_ax2, kernel_shared_ax3)
     kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
     s[kernel_shared].vectorize(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
-    kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=56)
+    kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=112)
     s[kernel_shared].bind(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis("threadIdx.x"))
     pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[pad_temp_shared].fuse(pad_temp_shared_ax0, pad_temp_shared_ax1, pad_temp_shared_ax2, pad_temp_shared_ax3)
-    pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
+    pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=9)
     s[pad_temp_shared].vectorize(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
-    pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=56)
+    pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=112)
     s[pad_temp_shared].bind(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis("threadIdx.x"))
-    s[compute].pragma(compute_nn_o_o_o_o, "auto_unroll_max_step", 64)
+    s[compute].pragma(compute_nn_o_o_o_o, "auto_unroll_max_step", 0)
     s[compute].pragma(compute_nn_o_o_o_o, "unroll_explicit", True)
 
 
@@ -499,7 +468,7 @@ In the example below we resume the status and do 5 more trials.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 2 minutes  46.397 seconds)
+   **Total running time of the script:** ( 2 minutes  42.357 seconds)
 
 
 .. _sphx_glr_download_tutorials_auto_scheduler_tune_conv2d_layer_cuda.py:
diff --git a/docs/_sources/tutorials/auto_scheduler/tune_matmul_x86.rst.txt b/docs/_sources/tutorials/auto_scheduler/tune_matmul_x86.rst.txt
index ef4ea6f..8a41c85 100644
--- a/docs/_sources/tutorials/auto_scheduler/tune_matmul_x86.rst.txt
+++ b/docs/_sources/tutorials/auto_scheduler/tune_matmul_x86.rst.txt
@@ -7,10 +7,12 @@
 .. _sphx_glr_tutorials_auto_scheduler_tune_matmul_x86.py:
 
 
-Auto-scheduling matrix multiplication for CPU
+Auto-scheduling Matrix Multiplication for CPU
 =============================================
 **Author**: `Lianmin Zheng <https://github.com/merrymercy>`_,             `Chengfan Jia <https://github.com/jcf94/>`_
 
+This is a tutorial on how to use the auto-scheduler for CPUs.
+
 Different from the template-based :ref:`autotvm <tutorials-autotvm-sec>` which relies on
 manual templates to define the search space, the auto-scheduler does not require any templates.
 Users only need to write the computation declaration without any schedule commands or templates.
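
As a concrete illustration of that claim, a minimal sketch of such a templateless declaration (a matmul written in ``te`` and registered for the auto-scheduler; the sizes and names are illustrative assumptions, not part of this file):

 .. code-block:: python

    import tvm
    from tvm import te, auto_scheduler

    @auto_scheduler.register_workload
    def matmul(N, L, M, dtype):
        # only the computation is declared; no schedule commands or templates
        A = te.placeholder((N, L), name="A", dtype=dtype)
        B = te.placeholder((L, M), name="B", dtype=dtype)
        k = te.reduce_axis((0, L), name="k")
        C = te.compute(
            (N, M),
            lambda i, j: te.sum(A[i, k] * B[k, j], axis=k),
            name="C",
        )
        return [A, B, C]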
@@ -122,7 +124,9 @@ Next, we set parameters for the auto-scheduler.
 
     log_file = "matmul.json"
     tune_option = auto_scheduler.TuningOptions(
-        num_measure_trials=10, measure_callbacks=[auto_scheduler.RecordToFile(log_file)]
+        num_measure_trials=10,
+        measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
+        verbose=2,
     )
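
For readers comparing the two revisions of this snippet: ``verbose`` only changes how much the tuner prints; the options are consumed the same way when the search is launched. A hedged sketch of that step, assuming the task-construction helpers of this TVM vintage and the ``matmul`` workload registered earlier (neither appears in this hunk):

 .. code-block:: python

    # the helper names below are assumptions based on the auto_scheduler API of
    # this release and are not part of this diff
    task = auto_scheduler.create_task(
        matmul, (1024, 1024, 1024, "float32"), tvm.target.Target("llvm")
    )
    sch, args = auto_scheduler.auto_schedule(task, tuning_options=tune_option)
    func = tvm.build(sch, args)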
 
 
@@ -248,7 +252,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 2.209 ms
+    Execution time of this operator: 2.243 ms
 
 
 
@@ -364,7 +368,7 @@ In the example below we resume the status and do 5 more trials.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  54.000 seconds)
+   **Total running time of the script:** ( 1 minutes  53.558 seconds)
 
 
 .. _sphx_glr_download_tutorials_auto_scheduler_tune_matmul_x86.py:
diff --git a/docs/_sources/tutorials/auto_scheduler/tune_network_cuda.rst.txt b/docs/_sources/tutorials/auto_scheduler/tune_network_cuda.rst.txt
index e993bac..e01863a 100644
--- a/docs/_sources/tutorials/auto_scheduler/tune_network_cuda.rst.txt
+++ b/docs/_sources/tutorials/auto_scheduler/tune_network_cuda.rst.txt
@@ -7,8 +7,8 @@
 .. _sphx_glr_tutorials_auto_scheduler_tune_network_cuda.py:
 
 
-Auto-tuning a Neural Network for NVIDIA GPU
-===========================================
+Auto-scheduling a Neural Network for NVIDIA GPU
+===============================================
 **Author**: `Lianmin Zheng <https://github.com/merrymercy>`_
 
 Auto-tuning for specific devices and workloads is critical for getting the
@@ -169,6 +169,10 @@ The task scheduler will just optimize this objective.
     mod, params, input_shape, output_shape = get_network(network, batch_size, layout, dtype=dtype)
     tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target)
 
+    for idx, task in enumerate(tasks):
+        print("========== Task %d  (workload key: %s) ==========" % (idx, task.workload_key))
+        print(task.compute_dag)
+
 
 
 
@@ -180,6 +184,259 @@ The task scheduler will just optimize this objective.
  .. code-block:: none
 
     Extract tasks...
+    ========== Task 0  (workload key: ["d09dc1a6bb90d59c91b68989ad3492ff"]) ==========
+    placeholder = PLACEHOLDER [1, 512]
+    placeholder = PLACEHOLDER [1000, 512]
+    T_dense(i, j) += (placeholder[i, k]*placeholder[j, k])
+    placeholder = PLACEHOLDER [1000]
+    T_add(ax0, ax1) = (T_dense[ax0, ax1] + placeholder[ax1])
+
+    ========== Task 1  (workload key: ["8d5a93959138dc7b2ee1f1b3219dfa14"]) ==========
+    placeholder = PLACEHOLDER [1, 7, 7, 512]
+    data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 8)) && (i2 >= 1)) && (i2 < 8)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 16), ((floormod(floordiv(p, 4), 4)*2) + eps), ((floormod(p, 4)*2) + nu), ci]
+    B(i, j) = select(((floormod(i, 4) == 3) && (floormod(j, 4) == 3)), 1f, select(((floormod(i, 4) == 3) && (floormod(j, 4) == 2)),  ..(OMITTED).. ormod(i, 4) == 0) && (floormod(j, 4) == 1)), 0f, select(((floormod(i, 4) == 0) && (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))
+    data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+    placeholder = PLACEHOLDER [4, 4, 512, 512]
+    bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+    A(i, j) = select(((floormod(i, 4) == 3) && (floormod(j, 2) == 1)), 1f, select(((floormod(i, 4) == 3) && (floormod(j, 2) == 0)),  ..(OMITTED).. ct(((floormod(i, 4) == 0) && (floormod(j, 2) == 1)), 0f, select(((floormod(i, 4) == 0) && (floormod(j, 2) == 0)), 1f, 0f))))))))
+    inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+    conv2d_winograd(n, h, w, co) = inverse[floormod(h, 2), floormod(w, 2), ((((n*4)*4) + (floordiv(h, 2)*4)) + floordiv(w, 2)), co]
+    placeholder = PLACEHOLDER [1, 7, 7, 512]
+    T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
+    placeholder = PLACEHOLDER [1, 1, 1, 512]
+    T_multiply(ax0, ax1, ax2, ax3) = (T_add[ax0, ax1, ax2, ax3]*placeholder[ax0, 0, 0, ax3])
+    placeholder = PLACEHOLDER [1, 1, 1, 512]
+    T_add(ax0, ax1, ax2, ax3) = (T_multiply[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+    ========== Task 2  (workload key: ["ac6920940de3797cc3f9f9c260675e5d"]) ==========
+    placeholder = PLACEHOLDER [1, 7, 7, 512]
+    data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 8)) && (i2 >= 1)) && (i2 < 8)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 16), ((floormod(floordiv(p, 4), 4)*2) + eps), ((floormod(p, 4)*2) + nu), ci]
+    B(i, j) = select(((floormod(i, 4) == 3) && (floormod(j, 4) == 3)), 1f, select(((floormod(i, 4) == 3) && (floormod(j, 4) == 2)),  ..(OMITTED).. ormod(i, 4) == 0) && (floormod(j, 4) == 1)), 0f, select(((floormod(i, 4) == 0) && (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))
+    data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+    placeholder = PLACEHOLDER [4, 4, 512, 512]
+    bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+    A(i, j) = select(((floormod(i, 4) == 3) && (floormod(j, 2) == 1)), 1f, select(((floormod(i, 4) == 3) && (floormod(j, 2) == 0)),  ..(OMITTED).. ct(((floormod(i, 4) == 0) && (floormod(j, 2) == 1)), 0f, select(((floormod(i, 4) == 0) && (floormod(j, 2) == 0)), 1f, 0f))))))))
+    inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+    conv2d_winograd(n, h, w, co) = inverse[floormod(h, 2), floormod(w, 2), ((((n*4)*4) + (floordiv(h, 2)*4)) + floordiv(w, 2)), co]
+    placeholder = PLACEHOLDER [1, 1, 1, 512]
+    T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+    ========== Task 3  (workload key: ["7e83a2ee5cd5d50282ed19310700046a"]) ==========
+    placeholder = PLACEHOLDER [1, 7, 7, 512]
+    data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 8)) && (i2 >= 1)) && (i2 < 8)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 16), ((floormod(floordiv(p, 4), 4)*2) + eps), ((floormod(p, 4)*2) + nu), ci]
+    B(i, j) = select(((floormod(i, 4) == 3) && (floormod(j, 4) == 3)), 1f, select(((floormod(i, 4) == 3) && (floormod(j, 4) == 2)),  ..(OMITTED).. ormod(i, 4) == 0) && (floormod(j, 4) == 1)), 0f, select(((floormod(i, 4) == 0) && (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))
+    data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+    placeholder = PLACEHOLDER [4, 4, 512, 512]
+    bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+    A(i, j) = select(((floormod(i, 4) == 3) && (floormod(j, 2) == 1)), 1f, select(((floormod(i, 4) == 3) && (floormod(j, 2) == 0)),  ..(OMITTED).. ct(((floormod(i, 4) == 0) && (floormod(j, 2) == 1)), 0f, select(((floormod(i, 4) == 0) && (floormod(j, 2) == 0)), 1f, 0f))))))))
+    inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+    conv2d_winograd(n, h, w, co) = inverse[floormod(h, 2), floormod(w, 2), ((((n*4)*4) + (floordiv(h, 2)*4)) + floordiv(w, 2)), co]
+    placeholder = PLACEHOLDER [1, 7, 7, 512]
+    T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
+
+    ========== Task 4  (workload key: ["1f6cd3637ec856bf5cf5010a623eed05"]) ==========
+    placeholder = PLACEHOLDER [1, 14, 14, 256]
+    PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 15)) && (i2 >= 1)) && (i2 < 15)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    placeholder = PLACEHOLDER [3, 3, 256, 512]
+    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
+    placeholder = PLACEHOLDER [1, 1, 1, 512]
+    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+    ========== Task 5  (workload key: ["424ba83160af31badc0b098136e1a3b0"]) ==========
+    placeholder = PLACEHOLDER [1, 14, 14, 256]
+    data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 15)) && (i2 >= 1)) && (i2 < 15)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 49), ((floormod(floordiv(p, 7), 7)*2) + eps), ((floormod(p, 7)*2) + nu), ci]
+    B(i, j) = select(((floormod(i, 4) == 3) && (floormod(j, 4) == 3)), 1f, select(((floormod(i, 4) == 3) && (floormod(j, 4) == 2)),  ..(OMITTED).. ormod(i, 4) == 0) && (floormod(j, 4) == 1)), 0f, select(((floormod(i, 4) == 0) && (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))
+    data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+    placeholder = PLACEHOLDER [4, 4, 256, 256]
+    bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+    A(i, j) = select(((floormod(i, 4) == 3) && (floormod(j, 2) == 1)), 1f, select(((floormod(i, 4) == 3) && (floormod(j, 2) == 0)),  ..(OMITTED).. ct(((floormod(i, 4) == 0) && (floormod(j, 2) == 1)), 0f, select(((floormod(i, 4) == 0) && (floormod(j, 2) == 0)), 1f, 0f))))))))
+    inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+    conv2d_winograd(n, h, w, co) = inverse[floormod(h, 2), floormod(w, 2), ((((n*7)*7) + (floordiv(h, 2)*7)) + floordiv(w, 2)), co]
+    placeholder = PLACEHOLDER [1, 14, 14, 256]
+    T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
+    placeholder = PLACEHOLDER [1, 1, 1, 256]
+    T_add(ax0, ax1, ax2, ax3) = (T_add[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+    ========== Task 6  (workload key: ["a169cd0053d3a7ca82998fcb62e42c58"]) ==========
+    placeholder = PLACEHOLDER [1, 14, 14, 256]
+    data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 15)) && (i2 >= 1)) && (i2 < 15)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 49), ((floormod(floordiv(p, 7), 7)*2) + eps), ((floormod(p, 7)*2) + nu), ci]
+    B(i, j) = select(((floormod(i, 4) == 3) && (floormod(j, 4) == 3)), 1f, select(((floormod(i, 4) == 3) && (floormod(j, 4) == 2)),  ..(OMITTED).. ormod(i, 4) == 0) && (floormod(j, 4) == 1)), 0f, select(((floormod(i, 4) == 0) && (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))
+    data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+    placeholder = PLACEHOLDER [4, 4, 256, 256]
+    bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+    A(i, j) = select(((floormod(i, 4) == 3) && (floormod(j, 2) == 1)), 1f, select(((floormod(i, 4) == 3) && (floormod(j, 2) == 0)),  ..(OMITTED).. ct(((floormod(i, 4) == 0) && (floormod(j, 2) == 1)), 0f, select(((floormod(i, 4) == 0) && (floormod(j, 2) == 0)), 1f, 0f))))))))
+    inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+    conv2d_winograd(n, h, w, co) = inverse[floormod(h, 2), floormod(w, 2), ((((n*7)*7) + (floordiv(h, 2)*7)) + floordiv(w, 2)), co]
+    placeholder = PLACEHOLDER [1, 1, 1, 256]
+    T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+    ========== Task 7  (workload key: ["0141ffc4fbabc10cc5a94c954419055b"]) ==========
+    placeholder = PLACEHOLDER [1, 14, 14, 256]
+    data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 15)) && (i2 >= 1)) && (i2 < 15)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 49), ((floormod(floordiv(p, 7), 7)*2) + eps), ((floormod(p, 7)*2) + nu), ci]
+    B(i, j) = select(((floormod(i, 4) == 3) && (floormod(j, 4) == 3)), 1f, select(((floormod(i, 4) == 3) && (floormod(j, 4) == 2)),  ..(OMITTED).. ormod(i, 4) == 0) && (floormod(j, 4) == 1)), 0f, select(((floormod(i, 4) == 0) && (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))
+    data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+    placeholder = PLACEHOLDER [4, 4, 256, 256]
+    bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+    A(i, j) = select(((floormod(i, 4) == 3) && (floormod(j, 2) == 1)), 1f, select(((floormod(i, 4) == 3) && (floormod(j, 2) == 0)),  ..(OMITTED).. ct(((floormod(i, 4) == 0) && (floormod(j, 2) == 1)), 0f, select(((floormod(i, 4) == 0) && (floormod(j, 2) == 0)), 1f, 0f))))))))
+    inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+    conv2d_winograd(n, h, w, co) = inverse[floormod(h, 2), floormod(w, 2), ((((n*7)*7) + (floordiv(h, 2)*7)) + floordiv(w, 2)), co]
+    placeholder = PLACEHOLDER [1, 14, 14, 256]
+    T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
+
+    ========== Task 8  (workload key: ["81aae4b8e2c076a4014d403e8a2c70a1"]) ==========
+    placeholder = PLACEHOLDER [1, 28, 28, 128]
+    PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 29)) && (i2 >= 1)) && (i2 < 29)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    placeholder = PLACEHOLDER [3, 3, 128, 256]
+    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
+    placeholder = PLACEHOLDER [1, 1, 1, 256]
+    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+    ========== Task 9  (workload key: ["c7a6b56bdc04b94c829fb2ef9874019e"]) ==========
+    placeholder = PLACEHOLDER [1, 28, 28, 128]
+    data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 29)) && (i2 >= 1)) && (i2 < 29)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 196), ((floormod(floordiv(p, 14), 14)*2) + eps), ((floormod(p, 14)*2) + nu), ci]
+    B(i, j) = select(((floormod(i, 4) == 3) && (floormod(j, 4) == 3)), 1f, select(((floormod(i, 4) == 3) && (floormod(j, 4) == 2)),  ..(OMITTED).. ormod(i, 4) == 0) && (floormod(j, 4) == 1)), 0f, select(((floormod(i, 4) == 0) && (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))
+    data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+    placeholder = PLACEHOLDER [4, 4, 128, 128]
+    bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+    A(i, j) = select(((floormod(i, 4) == 3) && (floormod(j, 2) == 1)), 1f, select(((floormod(i, 4) == 3) && (floormod(j, 2) == 0)),  ..(OMITTED).. ct(((floormod(i, 4) == 0) && (floormod(j, 2) == 1)), 0f, select(((floormod(i, 4) == 0) && (floormod(j, 2) == 0)), 1f, 0f))))))))
+    inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+    conv2d_winograd(n, h, w, co) = inverse[floormod(h, 2), floormod(w, 2), ((((n*14)*14) + (floordiv(h, 2)*14)) + floordiv(w, 2)), co]
+    placeholder = PLACEHOLDER [1, 28, 28, 128]
+    T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
+    placeholder = PLACEHOLDER [1, 1, 1, 128]
+    T_add(ax0, ax1, ax2, ax3) = (T_add[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+    ========== Task 10  (workload key: ["c035cc8b0568a8e054d06bd7f4950550"]) ==========
+    placeholder = PLACEHOLDER [1, 28, 28, 128]
+    data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 29)) && (i2 >= 1)) && (i2 < 29)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 196), ((floormod(floordiv(p, 14), 14)*2) + eps), ((floormod(p, 14)*2) + nu), ci]
+    B(i, j) = select(((floormod(i, 4) == 3) && (floormod(j, 4) == 3)), 1f, select(((floormod(i, 4) == 3) && (floormod(j, 4) == 2)),  ..(OMITTED).. ormod(i, 4) == 0) && (floormod(j, 4) == 1)), 0f, select(((floormod(i, 4) == 0) && (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))
+    data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+    placeholder = PLACEHOLDER [4, 4, 128, 128]
+    bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+    A(i, j) = select(((floormod(i, 4) == 3) && (floormod(j, 2) == 1)), 1f, select(((floormod(i, 4) == 3) && (floormod(j, 2) == 0)),  ..(OMITTED).. ct(((floormod(i, 4) == 0) && (floormod(j, 2) == 1)), 0f, select(((floormod(i, 4) == 0) && (floormod(j, 2) == 0)), 1f, 0f))))))))
+    inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+    conv2d_winograd(n, h, w, co) = inverse[floormod(h, 2), floormod(w, 2), ((((n*14)*14) + (floordiv(h, 2)*14)) + floordiv(w, 2)), co]
+    placeholder = PLACEHOLDER [1, 1, 1, 128]
+    T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+    ========== Task 11  (workload key: ["c5ee3e05edd9754492d0763aa41fd025"]) ==========
+    placeholder = PLACEHOLDER [1, 28, 28, 128]
+    data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 29)) && (i2 >= 1)) && (i2 < 29)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 196), ((floormod(floordiv(p, 14), 14)*2) + eps), ((floormod(p, 14)*2) + nu), ci]
+    B(i, j) = select(((floormod(i, 4) == 3) && (floormod(j, 4) == 3)), 1f, select(((floormod(i, 4) == 3) && (floormod(j, 4) == 2)),  ..(OMITTED).. ormod(i, 4) == 0) && (floormod(j, 4) == 1)), 0f, select(((floormod(i, 4) == 0) && (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))
+    data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+    placeholder = PLACEHOLDER [4, 4, 128, 128]
+    bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+    A(i, j) = select(((floormod(i, 4) == 3) && (floormod(j, 2) == 1)), 1f, select(((floormod(i, 4) == 3) && (floormod(j, 2) == 0)),  ..(OMITTED).. ct(((floormod(i, 4) == 0) && (floormod(j, 2) == 1)), 0f, select(((floormod(i, 4) == 0) && (floormod(j, 2) == 0)), 1f, 0f))))))))
+    inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+    conv2d_winograd(n, h, w, co) = inverse[floormod(h, 2), floormod(w, 2), ((((n*14)*14) + (floordiv(h, 2)*14)) + floordiv(w, 2)), co]
+    placeholder = PLACEHOLDER [1, 28, 28, 128]
+    T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
+
+    ========== Task 12  (workload key: ["022ebb6b7c55c5ed030421380ec83a04"]) ==========
+    placeholder = PLACEHOLDER [1, 56, 56, 64]
+    PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 57)) && (i2 >= 1)) && (i2 < 57)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    placeholder = PLACEHOLDER [3, 3, 64, 128]
+    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
+    placeholder = PLACEHOLDER [1, 1, 1, 128]
+    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+    ========== Task 13  (workload key: ["de0df0893e01892cfe69f7bc2c24111f"]) ==========
+    placeholder = PLACEHOLDER [1, 56, 56, 64]
+    data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 57)) && (i2 >= 1)) && (i2 < 57)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 196), ((floormod(floordiv(p, 14), 14)*4) + eps), ((floormod(p, 14)*4) + nu), ci]
+    B(i, j) = select(((floormod(i, 6) == 5) && (floormod(j, 6) == 5)), 1f, select(((floormod(i, 6) == 5) && (floormod(j, 6) == 4)),  ..(OMITTED)..  (floormod(j, 6) == 1)), 0f, select(((floormod(i, 6) == 0) && (floormod(j, 6) == 0)), 1f, 0f))))))))))))))))))))))))))))))))))))
+    data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+    placeholder = PLACEHOLDER [6, 6, 64, 64]
+    bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+    A(i, j) = select(((floormod(i, 6) == 5) && (floormod(j, 4) == 3)), 1f, select(((floormod(i, 6) == 5) && (floormod(j, 4) == 2)),  ..(OMITTED)..  6) == 0) && (floormod(j, 4) == 1)), 0f, select(((floormod(i, 6) == 0) && (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))))))))))
+    inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+    conv2d_winograd(n, h, w, co) = inverse[floormod(h, 4), floormod(w, 4), ((((n*14)*14) + (floordiv(h, 4)*14)) + floordiv(w, 4)), co]
+    placeholder = PLACEHOLDER [1, 56, 56, 64]
+    T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
+    placeholder = PLACEHOLDER [1, 1, 1, 64]
+    T_add(ax0, ax1, ax2, ax3) = (T_add[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+    ========== Task 14  (workload key: ["f2e3c09a00e7d0a9897f70497e089f1e"]) ==========
+    placeholder = PLACEHOLDER [1, 56, 56, 64]
+    data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 57)) && (i2 >= 1)) && (i2 < 57)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 196), ((floormod(floordiv(p, 14), 14)*4) + eps), ((floormod(p, 14)*4) + nu), ci]
+    B(i, j) = select(((floormod(i, 6) == 5) && (floormod(j, 6) == 5)), 1f, select(((floormod(i, 6) == 5) && (floormod(j, 6) == 4)),  ..(OMITTED)..  (floormod(j, 6) == 1)), 0f, select(((floormod(i, 6) == 0) && (floormod(j, 6) == 0)), 1f, 0f))))))))))))))))))))))))))))))))))))
+    data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+    placeholder = PLACEHOLDER [6, 6, 64, 64]
+    bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+    A(i, j) = select(((floormod(i, 6) == 5) && (floormod(j, 4) == 3)), 1f, select(((floormod(i, 6) == 5) && (floormod(j, 4) == 2)),  ..(OMITTED)..  6) == 0) && (floormod(j, 4) == 1)), 0f, select(((floormod(i, 6) == 0) && (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))))))))))
+    inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+    conv2d_winograd(n, h, w, co) = inverse[floormod(h, 4), floormod(w, 4), ((((n*14)*14) + (floordiv(h, 4)*14)) + floordiv(w, 4)), co]
+    placeholder = PLACEHOLDER [1, 1, 1, 64]
+    T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+    ========== Task 15  (workload key: ["fa26946d7ac51126bfa859cb183f9ca1"]) ==========
+    placeholder = PLACEHOLDER [1, 56, 56, 64]
+    data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 57)) && (i2 >= 1)) && (i2 < 57)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 196), ((floormod(floordiv(p, 14), 14)*4) + eps), ((floormod(p, 14)*4) + nu), ci]
+    B(i, j) = select(((floormod(i, 6) == 5) && (floormod(j, 6) == 5)), 1f, select(((floormod(i, 6) == 5) && (floormod(j, 6) == 4)),  ..(OMITTED)..  (floormod(j, 6) == 1)), 0f, select(((floormod(i, 6) == 0) && (floormod(j, 6) == 0)), 1f, 0f))))))))))))))))))))))))))))))))))))
+    data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+    placeholder = PLACEHOLDER [6, 6, 64, 64]
+    bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+    A(i, j) = select(((floormod(i, 6) == 5) && (floormod(j, 4) == 3)), 1f, select(((floormod(i, 6) == 5) && (floormod(j, 4) == 2)),  ..(OMITTED)..  6) == 0) && (floormod(j, 4) == 1)), 0f, select(((floormod(i, 6) == 0) && (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))))))))))
+    inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+    conv2d_winograd(n, h, w, co) = inverse[floormod(h, 4), floormod(w, 4), ((((n*14)*14) + (floordiv(h, 4)*14)) + floordiv(w, 4)), co]
+    placeholder = PLACEHOLDER [1, 56, 56, 64]
+    T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
+
+    ========== Task 16  (workload key: ["a0eb8d6048282a4a0986cc2ccf14eaa2"]) ==========
+    placeholder = PLACEHOLDER [1, 224, 224, 3]
+    PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 3) && (i1 < 227)) && (i2 >= 3)) && (i2 < 227)), placeholder[i0, (i1 - 3), (i2 - 3), i3], 0f)
+    placeholder = PLACEHOLDER [7, 7, 3, 64]
+    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
+    placeholder = PLACEHOLDER [1, 1, 1, 64]
+    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+    ========== Task 17  (workload key: ["bf78a7bf0209980f72953637dfd14a6f"]) ==========
+    placeholder = PLACEHOLDER [1, 56, 56, 64]
+    PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
+    placeholder = PLACEHOLDER [1, 1, 64, 64]
+    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
+
+    ========== Task 18  (workload key: ["6630936c26852f2b89dbfa2ff37fbb9c"]) ==========
+    placeholder = PLACEHOLDER [1, 56, 56, 64]
+    PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
+    placeholder = PLACEHOLDER [1, 1, 64, 128]
+    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
+
+    ========== Task 19  (workload key: ["ba5f918733ccbbd4a1d7fd3724665a2f"]) ==========
+    placeholder = PLACEHOLDER [1, 28, 28, 128]
+    PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
+    placeholder = PLACEHOLDER [1, 1, 128, 256]
+    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
+
+    ========== Task 20  (workload key: ["21ad409d72953de188314010134e3acd"]) ==========
+    placeholder = PLACEHOLDER [1, 14, 14, 256]
+    PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
+    placeholder = PLACEHOLDER [1, 1, 256, 512]
+    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
+
 
 
 
@@ -285,7 +542,7 @@ Now, we set some options for tuning and launch the search tasks
   There will also be some "dmlc::Error"s and CUDA errors, because the
   auto-scheduler will try some invalid schedules.
   You can safely ignore them if the tuning can continue, because these
-  errors are isolated from the master process.
+  errors are isolated from the main process.
 
 
 .. note:: Terminate the tuning earlier
@@ -336,7 +593,7 @@ so we can read the log file and load the best schedules.
 
     Compile...
     Evaluate inference time cost...
-    Mean inference time (std dev): 3.15 ms (0.01 ms)
+    Mean inference time (std dev): 3.14 ms (0.01 ms)
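
For context on where the ``Mean inference time`` number above comes from, a hedged sketch of the usual measurement pattern in that tutorial (``log_file``, ``target``, ``mod``/``params`` and ``input_shape`` are assumed to be the ones defined earlier in the tutorial, not in this hunk):

 .. code-block:: python

    import numpy as np
    import tvm
    from tvm import relay, auto_scheduler
    from tvm.contrib import graph_runtime

    # compile with the best schedules found in the tuning log
    with auto_scheduler.ApplyHistoryBest(log_file):
        with tvm.transform.PassContext(
            opt_level=3, config={"relay.backend.use_auto_scheduler": True}
        ):
            lib = relay.build(mod, target=target, params=params)

    # time the whole network with the graph runtime's built-in evaluator
    ctx = tvm.context(str(target), 0)
    module = graph_runtime.GraphModule(lib["default"](ctx))
    data = tvm.nd.array(np.random.uniform(size=input_shape).astype("float32"))
    module.set_input("data", data)
    ftimer = module.module.time_evaluator("run", ctx, repeat=3, min_repeat_ms=500)
    prof_res = np.array(ftimer().results) * 1000  # convert to milliseconds
    print("Mean inference time (std dev): %.2f ms (%.2f ms)"
          % (np.mean(prof_res), np.std(prof_res)))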
 
 
 
diff --git a/docs/_sources/tutorials/autotvm/sg_execution_times.rst.txt b/docs/_sources/tutorials/autotvm/sg_execution_times.rst.txt
index 1b78ed1..3afcd64 100644
--- a/docs/_sources/tutorials/autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorials/autotvm/sg_execution_times.rst.txt
@@ -5,11 +5,11 @@
 
 Computation times
 =================
-**00:59.076** total execution time for **tutorials_autotvm** files:
-
-- **00:30.069**: :ref:`sphx_glr_tutorials_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)
-- **00:28.300**: :ref:`sphx_glr_tutorials_autotvm_tune_simple_template.py` (``tune_simple_template.py``)
-- **00:00.205**: :ref:`sphx_glr_tutorials_autotvm_tune_relay_cuda.py` (``tune_relay_cuda.py``)
-- **00:00.176**: :ref:`sphx_glr_tutorials_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``)
-- **00:00.163**: :ref:`sphx_glr_tutorials_autotvm_tune_relay_mobile_gpu.py` (``tune_relay_mobile_gpu.py``)
-- **00:00.162**: :ref:`sphx_glr_tutorials_autotvm_tune_relay_arm.py` (``tune_relay_arm.py``)
+**01:07.514** total execution time for **tutorials_autotvm** files:
+
+- **00:35.925**: :ref:`sphx_glr_tutorials_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)
+- **00:30.897**: :ref:`sphx_glr_tutorials_autotvm_tune_simple_template.py` (``tune_simple_template.py``)
+- **00:00.204**: :ref:`sphx_glr_tutorials_autotvm_tune_relay_cuda.py` (``tune_relay_cuda.py``)
+- **00:00.169**: :ref:`sphx_glr_tutorials_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``)
+- **00:00.160**: :ref:`sphx_glr_tutorials_autotvm_tune_relay_arm.py` (``tune_relay_arm.py``)
+- **00:00.159**: :ref:`sphx_glr_tutorials_autotvm_tune_relay_mobile_gpu.py` (``tune_relay_mobile_gpu.py``)
diff --git a/docs/_sources/tutorials/autotvm/tune_conv2d_cuda.rst.txt b/docs/_sources/tutorials/autotvm/tune_conv2d_cuda.rst.txt
index 617f854..81e1a70 100644
--- a/docs/_sources/tutorials/autotvm/tune_conv2d_cuda.rst.txt
+++ b/docs/_sources/tutorials/autotvm/tune_conv2d_cuda.rst.txt
@@ -241,26 +241,26 @@ for this template
        7 unroll_explicit: OtherOption([0, 1]) len=2
     )
     Get devices for measurement successfully!
-    No: 1   GFLOPS: 226.07/226.07   result: MeasureResult(costs=(0.0010240150306122448,), error_no=0, all_cost=1.4391686916351318, timestamp=1605262522.444662)     [('tile_f', [-1, 2, 64, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 2, 2]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4881186
-    No: 2   GFLOPS: 0.00/226.07     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 3   GFLOPS: 179.21/226.07   result: MeasureResult(costs=(0.0012917972661290324,), error_no=0, all_cost=1.6224138736724854, timestamp=1605262523.8775072)    [('tile_f', [-1, 4, 32, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 1, 16]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3605182
-    No: 4   GFLOPS: 0.00/226.07     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 5   GFLOPS: 0.00/226.07     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 6   GFLOPS: 0.00/226.07     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 7   GFLOPS: 0.00/226.07     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 8   GFLOPS: 1.75/226.07     result: MeasureResult(costs=(0.13202702,), error_no=0, all_cost=3.336221933364868, timestamp=1605262527.2192101)        [('tile_f', [-1, 2, 4, 64]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 2, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2716108
-    No: 9   GFLOPS: 12.08/226.07    result: MeasureResult(costs=(0.019164146333333333,), error_no=0, all_cost=1.751448392868042, timestamp=1605262530.169132)       [('tile_f', [-1, 1, 4, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 2, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1263092
-    No: 10  GFLOPS: 228.40/228.40   result: MeasureResult(costs=(0.0010135667474747475,), error_no=0, all_cost=1.4332818984985352, timestamp=1605262531.0474083)    [('tile_f', [-1, 1, 32, 4]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 16, 1]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,8921130
-    No: 11  GFLOPS: 0.00/228.40     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 12  GFLOPS: 120.00/228.40   result: MeasureResult(costs=(0.0019292541346153846,), error_no=0, all_cost=1.344985008239746, timestamp=1605262532.1955059)     [('tile_f', [-1, 2, 32, 4]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,5036371
-    No: 13  GFLOPS: 0.00/228.40     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 14  GFLOPS: 0.00/228.40     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 15  GFLOPS: 82.26/228.40    result: MeasureResult(costs=(0.0028143660526315792,), error_no=0, all_cost=1.4765589237213135, timestamp=1605262533.614049)     [('tile_f', [-1, 1, 1, 4]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3582580
-    No: 16  GFLOPS: 0.00/228.40     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 17  GFLOPS: 0.00/228.40     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 18  GFLOPS: 0.00/228.40     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 19  GFLOPS: 18.26/228.40    result: MeasureResult(costs=(0.012675726555555555,), error_no=0, all_cost=1.667898178100586, timestamp=1605262536.8822658)      [('tile_f', [-1, 8, 64, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 2, 2]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4107668
-    No: 20  GFLOPS: 0.00/228.40     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 1   GFLOPS: 309.56/309.56   result: MeasureResult(costs=(0.0007478322784810127,), error_no=0, all_cost=1.6631748676300049, timestamp=1605451988.403362)     [('tile_f', [-1, 2, 64, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 2, 2]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4881186
+    No: 2   GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 3   GFLOPS: 180.09/309.56   result: MeasureResult(costs=(0.0012854856451612903,), error_no=0, all_cost=1.655426025390625, timestamp=1605451990.2436233)     [('tile_f', [-1, 4, 32, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 1, 16]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3605182
+    No: 4   GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 5   GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 6   GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 7   GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 8   GFLOPS: 1.75/309.56     result: MeasureResult(costs=(0.1320260235,), error_no=0, all_cost=3.5821640491485596, timestamp=1605451994.7318807)     [('tile_f', [-1, 2, 4, 64]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 2, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2716108
+    No: 9   GFLOPS: 12.11/309.56    result: MeasureResult(costs=(0.019121657333333333,), error_no=0, all_cost=1.8620920181274414, timestamp=1605451998.4005475)     [('tile_f', [-1, 1, 4, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 2, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1263092
+    No: 10  GFLOPS: 228.36/309.56   result: MeasureResult(costs=(0.0010137748686868686,), error_no=0, all_cost=1.599214792251587, timestamp=1605451999.5098352)     [('tile_f', [-1, 1, 32, 4]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 16, 1]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,8921130
+    No: 11  GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 12  GFLOPS: 120.00/309.56   result: MeasureResult(costs=(0.0019292001346153846,), error_no=0, all_cost=1.4698126316070557, timestamp=1605452001.1850023)    [('tile_f', [-1, 2, 32, 4]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,5036371
+    No: 13  GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 14  GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 15  GFLOPS: 70.72/309.56    result: MeasureResult(costs=(0.003273344657894737,), error_no=0, all_cost=1.608949899673462, timestamp=1605452003.2869494)      [('tile_f', [-1, 1, 1, 4]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3582580
+    No: 16  GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 17  GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 18  GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 19  GFLOPS: 17.38/309.56    result: MeasureResult(costs=(0.013322496,), error_no=0, all_cost=1.7255771160125732, timestamp=1605452007.9485376)      [('tile_f', [-1, 8, 64, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 2, 2]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4107668
+    No: 20  GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
 
 
 
@@ -312,8 +312,8 @@ and measure running time.
 
 
     Best config:
-    [('tile_f', [-1, 1, 32, 4]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 16, 1]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,8921130
-    Time cost of this operator: 0.001455
+    [('tile_f', [-1, 2, 64, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 2, 2]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4881186
+    Time cost of this operator: 0.001034
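
The ``Best config`` entry above is what gets picked up when the operator is rebuilt for this final timing. A minimal sketch of that step (the log-file name and the template signature follow the tutorial and are assumptions here):

 .. code-block:: python

    import tvm
    from tvm import autotvm

    # rebuild the conv2d template under the best record found in the tuning log;
    # "conv2d.log" and conv2d_no_batching(...) are assumed from the tutorial text
    with autotvm.apply_history_best("conv2d.log"):
        with tvm.target.Target("cuda"):
            s, arg_bufs = conv2d_no_batching(N, H, W, CO, CI, KH, KW, strides, padding)
            func = tvm.build(s, arg_bufs)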
 
 
 
diff --git a/docs/_sources/tutorials/autotvm/tune_relay_arm.rst.txt b/docs/_sources/tutorials/autotvm/tune_relay_arm.rst.txt
index 40ab833..6aa3c98 100644
--- a/docs/_sources/tutorials/autotvm/tune_relay_arm.rst.txt
+++ b/docs/_sources/tutorials/autotvm/tune_relay_arm.rst.txt
@@ -9,7 +9,7 @@
 
 .. _tune_relay_arm:
 
-Auto-tuning a convolutional network for ARM CPU
+Auto-tuning a Convolutional Network for ARM CPU
 ===============================================
 **Author**: `Lianmin Zheng <https://github.com/merrymercy>`_, `Zhao Wu <https://github.com/FrozenGene>`_, `Eddie Yan <https://github.com/eqy>`_
 
diff --git a/docs/_sources/tutorials/autotvm/tune_relay_cuda.rst.txt b/docs/_sources/tutorials/autotvm/tune_relay_cuda.rst.txt
index e42d10c..53479de 100644
--- a/docs/_sources/tutorials/autotvm/tune_relay_cuda.rst.txt
+++ b/docs/_sources/tutorials/autotvm/tune_relay_cuda.rst.txt
@@ -7,7 +7,7 @@
 .. _sphx_glr_tutorials_autotvm_tune_relay_cuda.py:
 
 
-Auto-tuning a convolutional network for NVIDIA GPU
+Auto-tuning a Convolutional Network for NVIDIA GPU
 ==================================================
 **Author**: `Lianmin Zheng <https://github.com/merrymercy>`_, `Eddie Yan <https://github.com/eqy/>`_
 
diff --git a/docs/_sources/tutorials/autotvm/tune_relay_mobile_gpu.rst.txt b/docs/_sources/tutorials/autotvm/tune_relay_mobile_gpu.rst.txt
index 03e9949..39c74c6 100644
--- a/docs/_sources/tutorials/autotvm/tune_relay_mobile_gpu.rst.txt
+++ b/docs/_sources/tutorials/autotvm/tune_relay_mobile_gpu.rst.txt
@@ -7,7 +7,7 @@
 .. _sphx_glr_tutorials_autotvm_tune_relay_mobile_gpu.py:
 
 
-Auto-tuning a convolutional network for Mobile GPU
+Auto-tuning a Convolutional Network for Mobile GPU
 ==================================================
 **Author**: `Lianmin Zheng <https://github.com/merrymercy>`_, `Eddie Yan <https://github.com/eqy>`_
 
diff --git a/docs/_sources/tutorials/autotvm/tune_relay_x86.rst.txt b/docs/_sources/tutorials/autotvm/tune_relay_x86.rst.txt
index 6d131ba..4e7dba9 100644
--- a/docs/_sources/tutorials/autotvm/tune_relay_x86.rst.txt
+++ b/docs/_sources/tutorials/autotvm/tune_relay_x86.rst.txt
@@ -9,7 +9,7 @@
 
 .. _tune_relay_x86:
 
-Auto-tuning a convolutional network for x86 CPU
+Auto-tuning a Convolutional Network for x86 CPU
 ===============================================
 **Author**: `Yao Wang <https://github.com/kevinthesun>`_, `Eddie Yan <https://github.com/eqy>`_
 
diff --git a/docs/_sources/tutorials/autotvm/tune_simple_template.rst.txt b/docs/_sources/tutorials/autotvm/tune_simple_template.rst.txt
index f736406..7725a66 100644
--- a/docs/_sources/tutorials/autotvm/tune_simple_template.rst.txt
+++ b/docs/_sources/tutorials/autotvm/tune_simple_template.rst.txt
@@ -7,8 +7,8 @@
 .. _sphx_glr_tutorials_autotvm_tune_simple_template.py:
 
 
-Writing tunable template and Using auto-tuner
-=============================================
+Writing Tunable Templates and Using the Auto-tuner
+==================================================
 **Author**: `Lianmin Zheng <https://github.com/merrymercy>`_
 
 This is an introduction tutorial to the auto-tuning module in TVM.
@@ -369,16 +369,16 @@ used to get the best config later.
  .. code-block:: none
 
     Get devices for measurement successfully!
-    No: 1   GFLOPS: 0.52/0.52       result: MeasureResult(costs=(0.519133092,), error_no=0, all_cost=8.710088014602661, timestamp=1605262499.8931446)       [('tile_y', [-1, 64]), ('tile_x', [-1, 1])],None,6
-    No: 2   GFLOPS: 2.19/2.19       result: MeasureResult(costs=(0.122798191,), error_no=0, all_cost=2.4234249591827393, timestamp=1605262502.3358595)      [('tile_y', [-1, 512]), ('tile_x', [-1, 8])],None,39
-    No: 3   GFLOPS: 2.68/2.68       result: MeasureResult(costs=(0.1002148718,), error_no=0, all_cost=2.024742603302002, timestamp=1605262504.4139025)      [('tile_y', [-1, 2]), ('tile_x', [-1, 8])],None,31
-    No: 4   GFLOPS: 7.24/7.24       result: MeasureResult(costs=(0.0370866816,), error_no=0, all_cost=1.0611913204193115, timestamp=1605262505.483117)      [('tile_y', [-1, 1]), ('tile_x', [-1, 32])],None,50
-    No: 5   GFLOPS: 13.37/13.37     result: MeasureResult(costs=(0.020077077,), error_no=0, all_cost=0.7708723545074463, timestamp=1605262506.2793317)      [('tile_y', [-1, 256]), ('tile_x', [-1, 64])],None,68
-    No: 6   GFLOPS: 12.17/13.37     result: MeasureResult(costs=(0.0220493612,), error_no=0, all_cost=0.7993049621582031, timestamp=1605262507.1112614)     [('tile_y', [-1, 256]), ('tile_x', [-1, 512])],None,98
-    No: 7   GFLOPS: 0.92/13.37      result: MeasureResult(costs=(0.29137312579999997,), error_no=0, all_cost=5.066913843154907, timestamp=1605262512.2570298)       [('tile_y', [-1, 128]), ('tile_x', [-1, 2])],None,17
-    No: 8   GFLOPS: 2.61/13.37      result: MeasureResult(costs=(0.102951418,), error_no=0, all_cost=2.0490610599517822, timestamp=1605262514.3929913)      [('tile_y', [-1, 8]), ('tile_x', [-1, 4])],None,23
-    No: 9   GFLOPS: 11.68/13.37     result: MeasureResult(costs=(0.0229774654,), error_no=0, all_cost=0.7303047180175781, timestamp=1605262515.9335515)     [('tile_y', [-1, 256]), ('tile_x', [-1, 32])],None,58
-    No: 10  GFLOPS: 14.79/14.79     result: MeasureResult(costs=(0.018150249,), error_no=0, all_cost=0.760230541229248, timestamp=1605262516.7134416)       [('tile_y', [-1, 64]), ('tile_x', [-1, 128])],None,76
+    No: 1   GFLOPS: 0.52/0.52       result: MeasureResult(costs=(0.5180510666,), error_no=0, all_cost=8.781435012817383, timestamp=1605451963.079987)       [('tile_y', [-1, 64]), ('tile_x', [-1, 1])],None,6
+    No: 2   GFLOPS: 2.14/2.14       result: MeasureResult(costs=(0.125252425,), error_no=0, all_cost=2.5115718841552734, timestamp=1605451965.740256)       [('tile_y', [-1, 512]), ('tile_x', [-1, 8])],None,39
+    No: 3   GFLOPS: 2.71/2.71       result: MeasureResult(costs=(0.099166239,), error_no=0, all_cost=2.10246205329895, timestamp=1605451967.9825048)        [('tile_y', [-1, 2]), ('tile_x', [-1, 8])],None,31
+    No: 4   GFLOPS: 7.80/7.80       result: MeasureResult(costs=(0.0344016246,), error_no=0, all_cost=1.0503571033477783, timestamp=1605451969.1972082)     [('tile_y', [-1, 1]), ('tile_x', [-1, 32])],None,50
+    No: 5   GFLOPS: 13.09/13.09     result: MeasureResult(costs=(0.020505473599999997,), error_no=0, all_cost=0.8432226181030273, timestamp=1605451970.1748426)     [('tile_y', [-1, 256]), ('tile_x', [-1, 64])],None,68
+    No: 6   GFLOPS: 12.21/13.09     result: MeasureResult(costs=(0.0219806834,), error_no=0, all_cost=0.8422861099243164, timestamp=1605451971.1708436)     [('tile_y', [-1, 256]), ('tile_x', [-1, 512])],None,98
+    No: 7   GFLOPS: 0.92/13.09      result: MeasureResult(costs=(0.29195622499999996,), error_no=0, all_cost=5.095428705215454, timestamp=1605451976.5535822)       [('tile_y', [-1, 128]), ('tile_x', [-1, 2])],None,17
+    No: 8   GFLOPS: 2.40/13.09      result: MeasureResult(costs=(0.11178283959999999,), error_no=0, all_cost=2.2193515300750732, timestamp=1605451978.9914048)      [('tile_y', [-1, 8]), ('tile_x', [-1, 4])],None,23
+    No: 9   GFLOPS: 11.22/13.09     result: MeasureResult(costs=(0.0239272356,), error_no=0, all_cost=0.7559356689453125, timestamp=1605451980.961141)      [('tile_y', [-1, 256]), ('tile_x', [-1, 32])],None,58
+    No: 10  GFLOPS: 14.58/14.58     result: MeasureResult(costs=(0.0184163582,), error_no=0, all_cost=0.790762186050415, timestamp=1605451981.9231868)      [('tile_y', [-1, 64]), ('tile_x', [-1, 128])],None,76
 
 
 
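The tile_y / tile_x entries recorded in the measurements above are knobs of a tunable matmul template. Below is a minimal sketch of such a template plus a small random search, assuming illustrative sizes, task name, log file, and trial count rather than the tutorial's exact values:

.. code-block:: python

    import tvm
    from tvm import te, autotvm


    @autotvm.template("tutorial/matmul_sketch")  # hypothetical task name
    def matmul(N, L, M, dtype):
        A = te.placeholder((N, L), name="A", dtype=dtype)
        B = te.placeholder((L, M), name="B", dtype=dtype)
        k = te.reduce_axis((0, L), name="k")
        C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
        s = te.create_schedule(C.op)

        y, x = s[C].op.axis
        (k,) = s[C].op.reduce_axis

        # The two knobs that appear as tile_y / tile_x in the log lines above.
        cfg = autotvm.get_config()
        cfg.define_split("tile_y", y, num_outputs=2)
        cfg.define_split("tile_x", x, num_outputs=2)
        yo, yi = cfg["tile_y"].apply(s, C, y)
        xo, xi = cfg["tile_x"].apply(s, C, x)
        s[C].reorder(yo, xo, k, yi, xi)
        return s, [A, B, C]


    task = autotvm.task.create(
        "tutorial/matmul_sketch", args=(512, 512, 512, "float32"), target="llvm"
    )
    measure_option = autotvm.measure_option(
        builder="local", runner=autotvm.LocalRunner(number=5)
    )
    tuner = autotvm.tuner.RandomTuner(task)
    tuner.tune(
        n_trial=10,
        measure_option=measure_option,
        callbacks=[autotvm.callback.log_to_file("matmul.log")],
    )

Each trial in such a search produces one "No: N  GFLOPS: ..." line of the kind shown in the hunk above.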
diff --git a/docs/_sources/tutorials/dev/bring_your_own_datatypes.rst.txt b/docs/_sources/tutorials/dev/bring_your_own_datatypes.rst.txt
index f437fd3..c5f9343 100644
--- a/docs/_sources/tutorials/dev/bring_your_own_datatypes.rst.txt
+++ b/docs/_sources/tutorials/dev/bring_your_own_datatypes.rst.txt
@@ -521,7 +521,7 @@ Now, to actually convert the entire network, we have written `a pass in Relay <h
 
  .. code-block:: none
 
-      Check failed: lower == false: FloatImm lowering function for target llvm type 150 not found
+      Check failed: lower == false: Intrinsic lowering function for target llvm, intrinsic name tir.sqrt, type 150 not found
 
 
 
diff --git a/docs/_sources/tutorials/dev/sg_execution_times.rst.txt b/docs/_sources/tutorials/dev/sg_execution_times.rst.txt
index 3fd5151..c0aacff 100644
--- a/docs/_sources/tutorials/dev/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorials/dev/sg_execution_times.rst.txt
@@ -5,8 +5,8 @@
 
 Computation times
 =================
-**00:31.889** total execution time for **tutorials_dev** files:
+**00:32.590** total execution time for **tutorials_dev** files:
 
-- **00:31.318**: :ref:`sphx_glr_tutorials_dev_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``)
-- **00:00.391**: :ref:`sphx_glr_tutorials_dev_use_pass_infra.py` (``use_pass_infra.py``)
-- **00:00.180**: :ref:`sphx_glr_tutorials_dev_low_level_custom_pass.py` (``low_level_custom_pass.py``)
+- **00:32.007**: :ref:`sphx_glr_tutorials_dev_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``)
+- **00:00.400**: :ref:`sphx_glr_tutorials_dev_use_pass_infra.py` (``use_pass_infra.py``)
+- **00:00.184**: :ref:`sphx_glr_tutorials_dev_low_level_custom_pass.py` (``low_level_custom_pass.py``)
diff --git a/docs/_sources/tutorials/frontend/deploy_model_on_android.rst.txt b/docs/_sources/tutorials/frontend/deploy_model_on_android.rst.txt
index 80135f9..df9198f 100644
--- a/docs/_sources/tutorials/frontend/deploy_model_on_android.rst.txt
+++ b/docs/_sources/tutorials/frontend/deploy_model_on_android.rst.txt
@@ -421,7 +421,7 @@ Execute on TVM
 
     TVM prediction top-1: tiger cat
     Evaluate inference time cost...
-    Mean inference time (std dev): 5.41 ms (0.17 ms)
+    Mean inference time (std dev): 5.88 ms (0.13 ms)
 
 
 
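The "Mean inference time (std dev)" figure in the hunk above comes from repeatedly invoking the compiled module's "run" function through a time evaluator. A small helper that reproduces this print, assuming an already-built graph runtime module and its context (the helper name and defaults are illustrative, not the tutorial's):

.. code-block:: python

    import numpy as np


    def report_mean_std(module, ctx, number=1, repeat=30):
        """Print mean/std latency of a TVM graph runtime module (illustrative helper)."""
        ftimer = module.module.time_evaluator("run", ctx, number=number, repeat=repeat)
        prof_res = np.array(ftimer().results) * 1000  # seconds -> milliseconds
        print(
            "Mean inference time (std dev): %.2f ms (%.2f ms)"
            % (np.mean(prof_res), np.std(prof_res))
        )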
diff --git a/docs/_sources/tutorials/frontend/deploy_object_detection_pytorch.rst.txt b/docs/_sources/tutorials/frontend/deploy_object_detection_pytorch.rst.txt
index 25f54fb..6059e2e 100644
--- a/docs/_sources/tutorials/frontend/deploy_object_detection_pytorch.rst.txt
+++ b/docs/_sources/tutorials/frontend/deploy_object_detection_pytorch.rst.txt
@@ -247,7 +247,7 @@ Get boxes with score larger than 0.9
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 2 minutes  5.919 seconds)
+   **Total running time of the script:** ( 2 minutes  5.731 seconds)
 
 
 .. _sphx_glr_download_tutorials_frontend_deploy_object_detection_pytorch.py:
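"Get boxes with score larger than 0.9" in the hunk above is a plain thresholding step on the detector outputs. An illustrative snippet, with invented arrays standing in for those outputs:

.. code-block:: python

    import numpy as np

    # Invented stand-ins for the boxes and scores produced by the detector.
    boxes = np.array([[10.0, 20.0, 110.0, 220.0], [5.0, 5.0, 50.0, 50.0]], dtype="float32")
    scores = np.array([0.95, 0.42], dtype="float32")

    keep = scores > 0.9   # boolean mask over detections
    print(boxes[keep])    # only boxes whose score exceeds the threshold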
diff --git a/docs/_sources/tutorials/frontend/deploy_prequantized.rst.txt b/docs/_sources/tutorials/frontend/deploy_prequantized.rst.txt
index 655b37b..4eb5415 100644
--- a/docs/_sources/tutorials/frontend/deploy_prequantized.rst.txt
+++ b/docs/_sources/tutorials/frontend/deploy_prequantized.rst.txt
@@ -350,7 +350,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
  .. code-block:: none
 
-    Elapsed average ms: 19.227042330000003
+    Elapsed average ms: 20.06191538
 
 
 
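The "Elapsed average ms" number above is the same kind of time-evaluator measurement, reported as a single mean. Complementing the helper sketched earlier, a variant that returns the average in milliseconds (again assuming an existing graph runtime module and context):

.. code-block:: python

    def elapsed_average_ms(module, ctx, number=5, repeat=5):
        """Return the average runtime in milliseconds (illustrative helper)."""
        ftimer = module.module.time_evaluator("run", ctx, number=number, repeat=repeat)
        return ftimer().mean * 1000  # ProfileResult.mean is in seconds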
diff --git a/docs/_sources/tutorials/frontend/deploy_prequantized_tflite.rst.txt b/docs/_sources/tutorials/frontend/deploy_prequantized_tflite.rst.txt
index 22b583f..0fe5b13 100644
--- a/docs/_sources/tutorials/frontend/deploy_prequantized_tflite.rst.txt
+++ b/docs/_sources/tutorials/frontend/deploy_prequantized_tflite.rst.txt
@@ -368,7 +368,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
  .. code-block:: none
 
-    Elapsed average ms: 36.272248340000004
+    Elapsed average ms: 36.5043426
 
 
 
@@ -401,7 +401,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 2 minutes  37.496 seconds)
+   **Total running time of the script:** ( 2 minutes  38.431 seconds)
 
 
 .. _sphx_glr_download_tutorials_frontend_deploy_prequantized_tflite.py:
diff --git a/docs/_sources/tutorials/frontend/deploy_ssd_gluoncv.rst.txt b/docs/_sources/tutorials/frontend/deploy_ssd_gluoncv.rst.txt
index e1fbb21..989f729 100644
--- a/docs/_sources/tutorials/frontend/deploy_ssd_gluoncv.rst.txt
+++ b/docs/_sources/tutorials/frontend/deploy_ssd_gluoncv.rst.txt
@@ -195,7 +195,7 @@ Display result
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  54.583 seconds)
+   **Total running time of the script:** ( 1 minutes  55.744 seconds)
 
 
 .. _sphx_glr_download_tutorials_frontend_deploy_ssd_gluoncv.py:
diff --git a/docs/_sources/tutorials/frontend/from_pytorch.rst.txt b/docs/_sources/tutorials/frontend/from_pytorch.rst.txt
index ad3dcc7..60dd9b6 100644
--- a/docs/_sources/tutorials/frontend/from_pytorch.rst.txt
+++ b/docs/_sources/tutorials/frontend/from_pytorch.rst.txt
@@ -155,7 +155,7 @@ Compile the graph to llvm target with given input specification.
 
  .. code-block:: none
 
-
    ...47%, 0.01 MB, 40 KB/s, 0 seconds passed
    ...94%, 0.02 MB, 81 KB/s, 0 seconds passed
    ...100%, 0.02 MB, 121 KB/s, 0 seconds passed
+
    ...47%, 0.01 MB, 40 KB/s, 0 seconds passed
    ...94%, 0.02 MB, 80 KB/s, 0 seconds passed
    ...100%, 0.02 MB, 120 KB/s, 0 seconds passed
     Cannot find config for target=llvm -keys=cpu, workload=('dense_nopack.x86', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (1000, 512), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
 
 
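The hunk above touches the step "Compile the graph to llvm target with given input specification"; the trailing warning only means that no pre-tuned schedule was found for that dense workload, so a default (fallback) configuration is used. A hedged sketch of that compile step, where mod and params stand for whatever the PyTorch frontend returned and the optimization level is illustrative:

.. code-block:: python

    import tvm
    from tvm import relay


    def compile_for_llvm(mod, params, opt_level=3):
        """Build a Relay module for the llvm (CPU) target (sketch, not the tutorial's exact code)."""
        with tvm.transform.PassContext(opt_level=opt_level):
            return relay.build(mod, target="llvm", params=params)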
diff --git a/docs/_sources/tutorials/frontend/from_tensorflow.rst.txt b/docs/_sources/tutorials/frontend/from_tensorflow.rst.txt
index d88b140..e71be30 100644
--- a/docs/_sources/tutorials/frontend/from_tensorflow.rst.txt
+++ b/docs/_sources/tutorials/frontend/from_tensorflow.rst.txt
@@ -199,28 +199,28 @@ Results:
       "will be used for operator %s." % node.name
     /workspace/docs/../python/tvm/relay/frontend/tensorflow.py:735: UserWarning: DecodeJpeg: It's a pass through, please handle preprocessing before input
       warnings.warn("DecodeJpeg: It's a pass through, please handle preprocessing before input")
-    WARNING:root:Attribute Tdim is ignored in relay.sym.expand_dims
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.expand_dims
+    WARNING:root:Attribute Tdim is ignored in relay.sym.expand_dims
     WARNING:root:Attribute T is ignored in relay.sym.expand_dims
     WARNING:root:Attribute _node_name is ignored in relay.sym.expand_dims
     WARNING:root:Attribute _target_layout is ignored in relay.sym.expand_dims
+    WARNING:root:Attribute T is ignored in relay.sym.resize
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.resize
     WARNING:root:Attribute half_pixel_centers is ignored in relay.sym.resize
-    WARNING:root:Attribute T is ignored in relay.sym.resize
     WARNING:root:Attribute _node_name is ignored in relay.sym.resize
     WARNING:root:Attribute _target_layout is ignored in relay.sym.resize
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
@@ -228,19 +228,19 @@ Results:
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
@@ -248,13 +248,13 @@ Results:
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
@@ -271,42 +271,42 @@ Results:
     WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.max_pool2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.max_pool2d
     WARNING:root:Attribute ksize is ignored in relay.sym.max_pool2d
+    WARNING:root:Attribute T is ignored in relay.sym.max_pool2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.max_pool2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
@@ -319,99 +319,99 @@ Results:
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.max_pool2d
     WARNING:root:Attribute T is ignored in relay.sym.max_pool2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.max_pool2d
     WARNING:root:Attribute ksize is ignored in relay.sym.max_pool2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.max_pool2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
@@ -419,19 +419,19 @@ Results:
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
@@ -439,13 +439,13 @@ Results:
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute ksize is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute T is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.avg_pool2d
+    WARNING:root:Attribute T is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
@@ -453,43 +453,43 @@ Results:
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.concatenate
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
+    WARNING:root:Attribute T is ignored in relay.sym.concatenate
     WARNING:root:Attribute N is ignored in relay.sym.concatenate
     WARNING:root:Attribute _node_name is ignored in relay.sym.concatenate
     WARNING:root:Attribute _target_layout is ignored in relay.sym.concatenate
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
@@ -497,75 +497,75 @@ Results:
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
@@ -581,23 +581,23 @@ Results:
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute T is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute ksize is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
@@ -605,15 +605,15 @@ Results:
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.concatenate
     WARNING:root:Attribute N is ignored in relay.sym.concatenate
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
-    WARNING:root:Attribute T is ignored in relay.sym.concatenate
     WARNING:root:Attribute _node_name is ignored in relay.sym.concatenate
     WARNING:root:Attribute _target_layout is ignored in relay.sym.concatenate
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
@@ -625,14 +625,14 @@ Results:
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
@@ -648,9 +648,9 @@ Results:
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
@@ -658,128 +658,128 @@ Results:
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute T is ignored in relay.sym.avg_pool2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute ksize is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
-    WARNING:root:Attribute N is ignored in relay.sym.concatenate
     WARNING:root:Attribute T is ignored in relay.sym.concatenate
+    WARNING:root:Attribute N is ignored in relay.sym.concatenate
     WARNING:root:Attribute _node_name is ignored in relay.sym.concatenate
     WARNING:root:Attribute _target_layout is ignored in relay.sym.concatenate
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
@@ -791,47 +791,47 @@ Results:
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute ksize is ignored in relay.sym.max_pool2d
@@ -840,23 +840,23 @@ Results:
     WARNING:root:Attribute _node_name is ignored in relay.sym.max_pool2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.max_pool2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
-    WARNING:root:Attribute N is ignored in relay.sym.concatenate
     WARNING:root:Attribute T is ignored in relay.sym.concatenate
+    WARNING:root:Attribute N is ignored in relay.sym.concatenate
     WARNING:root:Attribute _node_name is ignored in relay.sym.concatenate
     WARNING:root:Attribute _target_layout is ignored in relay.sym.concatenate
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
@@ -864,13 +864,13 @@ Results:
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.copy
@@ -878,23 +878,23 @@ Results:
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
@@ -902,17 +902,17 @@ Results:
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
@@ -920,48 +920,48 @@ Results:
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
@@ -978,52 +978,52 @@ Results:
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute T is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute ksize is ignored in relay.sym.avg_pool2d
+    WARNING:root:Attribute T is ignored in relay.sym.avg_pool2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
@@ -1031,31 +1031,31 @@ Results:
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.concatenate
     WARNING:root:Attribute N is ignored in relay.sym.concatenate
+    WARNING:root:Attribute T is ignored in relay.sym.concatenate
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
     WARNING:root:Attribute _node_name is ignored in relay.sym.concatenate
     WARNING:root:Attribute _target_layout is ignored in relay.sym.concatenate
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
@@ -1063,29 +1063,29 @@ Results:
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
@@ -1101,66 +1101,66 @@ Results:
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
@@ -1168,32 +1168,32 @@ Results:
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
@@ -1202,8 +1202,8 @@ Results:
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute message is ignored in relay.sym.copy
@@ -1215,66 +1215,66 @@ Results:
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute ksize is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute T is ignored in relay.sym.avg_pool2d
+    WARNING:root:Attribute ksize is ignored in relay.sym.avg_pool2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
     WARNING:root:Attribute N is ignored in relay.sym.concatenate
     WARNING:root:Attribute T is ignored in relay.sym.concatenate
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
     WARNING:root:Attribute _node_name is ignored in relay.sym.concatenate
     WARNING:root:Attribute _target_layout is ignored in relay.sym.concatenate
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
@@ -1282,33 +1282,33 @@ Results:
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.copy
@@ -1316,122 +1316,122 @@ Results:
     WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute ksize is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute T is ignored in relay.sym.avg_pool2d
+    WARNING:root:Attribute ksize is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
@@ -1439,42 +1439,42 @@ Results:
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.concatenate
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
     WARNING:root:Attribute N is ignored in relay.sym.concatenate
-    WARNING:root:Attribute T is ignored in relay.sym.concatenate
     WARNING:root:Attribute _node_name is ignored in relay.sym.concatenate
     WARNING:root:Attribute _target_layout is ignored in relay.sym.concatenate
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
@@ -1482,19 +1482,19 @@ Results:
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
@@ -1502,8 +1502,8 @@ Results:
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
@@ -1511,28 +1511,28 @@ Results:
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
@@ -1540,47 +1540,47 @@ Results:
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
@@ -1588,18 +1588,18 @@ Results:
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
@@ -1621,9 +1621,9 @@ Results:
     WARNING:root:Attribute _node_name is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
@@ -1631,31 +1631,31 @@ Results:
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute N is ignored in relay.sym.concatenate
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
+    WARNING:root:Attribute N is ignored in relay.sym.concatenate
     WARNING:root:Attribute T is ignored in relay.sym.concatenate
     WARNING:root:Attribute _node_name is ignored in relay.sym.concatenate
     WARNING:root:Attribute _target_layout is ignored in relay.sym.concatenate
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
@@ -1664,32 +1664,32 @@ Results:
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
@@ -1701,10 +1701,10 @@ Results:
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
@@ -1716,76 +1716,76 @@ Results:
     WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.max_pool2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.max_pool2d
     WARNING:root:Attribute ksize is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute T is ignored in relay.sym.max_pool2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.max_pool2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.max_pool2d
     WARNING:root:Attribute T is ignored in relay.sym.concatenate
-    WARNING:root:Attribute N is ignored in relay.sym.concatenate
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
+    WARNING:root:Attribute N is ignored in relay.sym.concatenate
     WARNING:root:Attribute _node_name is ignored in relay.sym.concatenate
     WARNING:root:Attribute _target_layout is ignored in relay.sym.concatenate
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
@@ -1798,13 +1798,13 @@ Results:
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
@@ -1813,46 +1813,46 @@ Results:
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
@@ -1860,8 +1860,8 @@ Results:
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
@@ -1869,13 +1869,13 @@ Results:
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.copy
@@ -1883,22 +1883,22 @@ Results:
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
@@ -1908,21 +1908,21 @@ Results:
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.avg_pool2d
@@ -1930,34 +1930,34 @@ Results:
     WARNING:root:Attribute ksize is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
-    WARNING:root:Attribute T is ignored in relay.sym.concatenate
     WARNING:root:Attribute N is ignored in relay.sym.concatenate
+    WARNING:root:Attribute T is ignored in relay.sym.concatenate
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
     WARNING:root:Attribute _node_name is ignored in relay.sym.concatenate
     WARNING:root:Attribute _target_layout is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
@@ -1973,33 +1973,33 @@ Results:
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute message is ignored in relay.sym.copy
@@ -2007,18 +2007,18 @@ Results:
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
@@ -2026,18 +2026,18 @@ Results:
     WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.copy
@@ -2060,18 +2060,18 @@ Results:
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
@@ -2083,31 +2083,31 @@ Results:
     WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.copy
+    WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
     WARNING:root:Attribute T is ignored in relay.sym.relu
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute message is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
@@ -2116,44 +2116,44 @@ Results:
     WARNING:root:Attribute T is ignored in relay.sym.max_pool2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.max_pool2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+    WARNING:root:Attribute T is ignored in relay.sym.conv2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
     WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
     WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+    WARNING:root:Attribute T is ignored in relay.sym.copy
     WARNING:root:Attribute _node_name is ignored in relay.sym.copy
     WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute T is ignored in relay.sym.relu
+    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
     WARNING:root:Attribute _node_name is ignored in relay.sym.relu
     WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
     WARNING:root:Attribute N is ignored in relay.sym.concatenate
-    WARNING:root:Attribute T is ignored in relay.sym.concatenate
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
+    WARNING:root:Attribute T is ignored in relay.sym.concatenate
     WARNING:root:Attribute _node_name is ignored in relay.sym.concatenate
     WARNING:root:Attribute _target_layout is ignored in relay.sym.concatenate
-    WARNING:root:Attribute T is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute ksize is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.avg_pool2d
+    WARNING:root:Attribute T is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _node_name is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute _target_layout is ignored in relay.sym.avg_pool2d
     WARNING:root:Attribute Tshape is ignored in relay.sym.reshape
-    WARNING:root:Attribute T is ignored in relay.sym.reshape
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.reshape
+    WARNING:root:Attribute T is ignored in relay.sym.reshape
     WARNING:root:Attribute _node_name is ignored in relay.sym.reshape
     WARNING:root:Attribute _target_layout is ignored in relay.sym.reshape
     WARNING:root:Attribute _output_shapes is ignored in relay.sym.dense
-    WARNING:root:Attribute transpose_a is ignored in relay.sym.dense
-    WARNING:root:Attribute T is ignored in relay.sym.dense
     WARNING:root:Attribute transpose_b is ignored in relay.sym.dense
+    WARNING:root:Attribute T is ignored in relay.sym.dense
+    WARNING:root:Attribute transpose_a is ignored in relay.sym.dense
     WARNING:root:Attribute _node_name is ignored in relay.sym.dense
     WARNING:root:Attribute _target_layout is ignored in relay.sym.dense
     WARNING:root:Attribute T is ignored in relay.sym.softmax
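
    (Editor's note: in the hunks above, every changed line is one of the
    "Attribute ... is ignored" warnings emitted while the from_tensorflow
    tutorial converts its frozen TensorFlow GraphDef to Relay; only the
    ordering of the warnings differs between the two doc builds. As a rough,
    hedged illustration of where these lines come from -- not part of the docs
    build itself, and with the file path, input name, and shape below being
    placeholders -- the conversion step looks roughly like this:

    .. code-block:: python

        # Minimal sketch, assuming a frozen GraphDef on disk; the frontend maps
        # TF nodes to Relay ops and warns about TF-only node attributes
        # (T, _output_shapes, use_cudnn_on_gpu, ...) that it drops.
        import tensorflow as tf
        from tvm import relay

        with tf.io.gfile.GFile("frozen_graph.pb", "rb") as f:   # path assumed
            graph_def = tf.compat.v1.GraphDef()
            graph_def.ParseFromString(f.read())

        # Input name and shape are placeholders, not the tutorial's values.
        mod, params = relay.frontend.from_tensorflow(
            graph_def, shape={"input": (1, 299, 299, 3)}
        )
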
diff --git a/docs/_sources/tutorials/frontend/sg_execution_times.rst.txt b/docs/_sources/tutorials/frontend/sg_execution_times.rst.txt
index 321c890..f081e91 100644
--- a/docs/_sources/tutorials/frontend/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorials/frontend/sg_execution_times.rst.txt
@@ -5,24 +5,24 @@
 
 Computation times
 =================
-**10:33.930** total execution time for **tutorials_frontend** files:
+**10:42.749** total execution time for **tutorials_frontend** files:
 
-- **02:37.496**: :ref:`sphx_glr_tutorials_frontend_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)
-- **02:05.919**: :ref:`sphx_glr_tutorials_frontend_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``)
-- **01:54.583**: :ref:`sphx_glr_tutorials_frontend_deploy_ssd_gluoncv.py` (``deploy_ssd_gluoncv.py``)
-- **00:38.250**: :ref:`sphx_glr_tutorials_frontend_from_tensorflow.py` (``from_tensorflow.py``)
-- **00:29.622**: :ref:`sphx_glr_tutorials_frontend_deploy_quantized.py` (``deploy_quantized.py``)
-- **00:28.815**: :ref:`sphx_glr_tutorials_frontend_deploy_prequantized.py` (``deploy_prequantized.py``)
-- **00:25.931**: :ref:`sphx_glr_tutorials_frontend_from_tflite.py` (``from_tflite.py``)
-- **00:23.114**: :ref:`sphx_glr_tutorials_frontend_from_darknet.py` (``from_darknet.py``)
-- **00:16.411**: :ref:`sphx_glr_tutorials_frontend_from_caffe2.py` (``from_caffe2.py``)
-- **00:14.843**: :ref:`sphx_glr_tutorials_frontend_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)
-- **00:12.609**: :ref:`sphx_glr_tutorials_frontend_deploy_model_on_android.py` (``deploy_model_on_android.py``)
-- **00:12.240**: :ref:`sphx_glr_tutorials_frontend_from_pytorch.py` (``from_pytorch.py``)
-- **00:09.754**: :ref:`sphx_glr_tutorials_frontend_from_mxnet.py` (``from_mxnet.py``)
-- **00:09.037**: :ref:`sphx_glr_tutorials_frontend_from_coreml.py` (``from_coreml.py``)
-- **00:08.907**: :ref:`sphx_glr_tutorials_frontend_from_keras.py` (``from_keras.py``)
-- **00:03.395**: :ref:`sphx_glr_tutorials_frontend_using_external_lib.py` (``using_external_lib.py``)
-- **00:01.675**: :ref:`sphx_glr_tutorials_frontend_from_onnx.py` (``from_onnx.py``)
-- **00:01.137**: :ref:`sphx_glr_tutorials_frontend_build_gcn.py` (``build_gcn.py``)
-- **00:00.193**: :ref:`sphx_glr_tutorials_frontend_deploy_sparse.py` (``deploy_sparse.py``)
+- **02:38.431**: :ref:`sphx_glr_tutorials_frontend_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)
+- **02:05.731**: :ref:`sphx_glr_tutorials_frontend_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``)
+- **01:55.744**: :ref:`sphx_glr_tutorials_frontend_deploy_ssd_gluoncv.py` (``deploy_ssd_gluoncv.py``)
+- **00:38.307**: :ref:`sphx_glr_tutorials_frontend_from_tensorflow.py` (``from_tensorflow.py``)
+- **00:32.694**: :ref:`sphx_glr_tutorials_frontend_deploy_quantized.py` (``deploy_quantized.py``)
+- **00:28.983**: :ref:`sphx_glr_tutorials_frontend_deploy_prequantized.py` (``deploy_prequantized.py``)
+- **00:25.999**: :ref:`sphx_glr_tutorials_frontend_from_tflite.py` (``from_tflite.py``)
+- **00:23.300**: :ref:`sphx_glr_tutorials_frontend_from_darknet.py` (``from_darknet.py``)
+- **00:16.374**: :ref:`sphx_glr_tutorials_frontend_from_caffe2.py` (``from_caffe2.py``)
+- **00:14.854**: :ref:`sphx_glr_tutorials_frontend_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)
+- **00:13.189**: :ref:`sphx_glr_tutorials_frontend_deploy_model_on_android.py` (``deploy_model_on_android.py``)
+- **00:12.212**: :ref:`sphx_glr_tutorials_frontend_from_pytorch.py` (``from_pytorch.py``)
+- **00:11.363**: :ref:`sphx_glr_tutorials_frontend_from_keras.py` (``from_keras.py``)
+- **00:09.914**: :ref:`sphx_glr_tutorials_frontend_from_mxnet.py` (``from_mxnet.py``)
+- **00:09.060**: :ref:`sphx_glr_tutorials_frontend_from_coreml.py` (``from_coreml.py``)
+- **00:03.443**: :ref:`sphx_glr_tutorials_frontend_using_external_lib.py` (``using_external_lib.py``)
+- **00:01.793**: :ref:`sphx_glr_tutorials_frontend_from_onnx.py` (``from_onnx.py``)
+- **00:01.165**: :ref:`sphx_glr_tutorials_frontend_build_gcn.py` (``build_gcn.py``)
+- **00:00.194**: :ref:`sphx_glr_tutorials_frontend_deploy_sparse.py` (``deploy_sparse.py``)
diff --git a/docs/_sources/tutorials/get_started/cross_compilation_and_rpc.rst.txt b/docs/_sources/tutorials/get_started/cross_compilation_and_rpc.rst.txt
index dde9435..2700fa6 100644
--- a/docs/_sources/tutorials/get_started/cross_compilation_and_rpc.rst.txt
+++ b/docs/_sources/tutorials/get_started/cross_compilation_and_rpc.rst.txt
@@ -235,7 +235,7 @@ device and returns the measured cost. Network overhead is excluded.
 
  .. code-block:: none
 
-    1.175e-07 secs/op
+    1.17e-07 secs/op
 
 
 
diff --git a/docs/_sources/tutorials/get_started/relay_quick_start.rst.txt b/docs/_sources/tutorials/get_started/relay_quick_start.rst.txt
index d5f44bd..86b0b54 100644
--- a/docs/_sources/tutorials/get_started/relay_quick_start.rst.txt
+++ b/docs/_sources/tutorials/get_started/relay_quick_start.rst.txt
@@ -224,7 +224,7 @@ in this example. Then the machine code will be generated as the module library.
 
  .. code-block:: none
 
-
    ...1%, 0.01 MB, 10 KB/s, 0 seconds passed
    ...3%, 0.02 MB, 21 KB/s, 0 seconds passed
    ...5%, 0.02 MB, 32 KB/s, 0 seconds passed
    ...6%, 0.03 MB, 42 KB/s, 0 seconds passed
    ...8%, 0.04 MB, 53 KB/s, 0 seconds passed
    ...10%, 0.05 MB, 63 KB/s, 0 seconds passed
    ...11%, 0.05 MB, 74 KB/s, 0 seconds passed
    ...13%, 0.06 MB, 84 KB/s, 0 seconds passed
    ...15%, 0.07 MB, 95 KB/s, 0 seconds passed
    ...16%, 0.08 MB, 106 KB/s, 0 seconds passed
    ...18%, 0.09 MB, 116 KB/s, 0 seconds passed
    ...20%, 0.09 MB, 127 KB/s, 0 seconds passed
    ...21%, 0.10 MB, 137 KB/s, 0 seconds passed
    ...23%, 0.11 MB, 148 KB/s, 0 seconds passed
    ...25%, 0.12 MB, 158 KB/s, 0 seconds passed
    ...26%, 0.12 MB, 168 KB/s, 0 seconds passed
    ...28%, 0.13 MB, 178 KB/s, 0 seconds passed
    ...30%, 0.14 MB, 189 KB/s, 0 seconds passed
    ...31%, 0.15 MB, 199 KB/s, 0 seconds passed
    ...33%, 0.16 MB, 210 KB/s, 0 seconds passed
   ...35%, 0.16 MB, 220 KB/s, 0 seconds passed
   ...36%, 0.17 MB, 231 KB/s, 0 seconds passed
    ...38%, 0.18 MB, 241 KB/s, 0 seconds passed
    ...40%, 0.19 MB, 251 KB/s, 0 seconds passed
    ...41%, 0.20 MB, 262 KB/s, 0 seconds passed
    ...43%, 0.20 MB, 272 KB/s, 0 seconds passed
    ...45%, 0.21 MB, 283 KB/s, 0 seconds passed
    ...46%, 0.22 MB, 293 KB/s, 0 seconds passed
    ...48%, 0.23 MB, 304 KB/s, 0 seconds passed
    ...50%, 0.23 MB, 314 KB/s, 0 seconds passed
    ...51%, 0.24 MB, 324 KB/s, 0 seconds passed
    ...53%, 0.25 MB, 335 KB/s, 0 seconds passed
    ...55%, 0.26 MB, 345 KB/s, 0 seconds passed
    ...56%, 0.27 MB, 354 KB/s, 0 seconds passed
    ...58%, 0.27 MB, 365 KB/s, 0 seconds passed
    ...60%, 0.28 MB, 375 KB/s, 0 seconds passed
    ...61%, 0.29 MB, 385 KB/s, 0 seconds passed
    ...63%, 0.30 MB, 395 KB/s, 0 seconds passed
    ...65%, 0.30 MB, 406 KB/s, 0 seconds passed
    ...66%, 0.31 MB, 416 KB/s, 0 seconds passed
    ...68%, 0.32 MB, 426 KB/s, 0 seconds passed
   ...70%, 0.33 MB, 436 KB/s, 0 seconds passed
    ...71%, 0.34 MB, 447 KB/s, 0 seconds passed
    ...73%, 0.34 MB, 457 KB/s, 0 seconds passed
    ...75%, 0.35 MB, 467 KB/s, 0 seconds passed
    ...76%, 0.36 MB, 477 KB/s, 0 seconds passed
    ...78%, 0.37 MB, 487 KB/s, 0 seconds passed
    ...80%, 0.38 MB, 498 KB/s, 0 seconds passed
    ...81%, 0.38 MB, 508 KB/s, 0 seconds passed
    ...83%, 0.39 MB, 517 KB/s, 0 seconds passed
    ...85%, 0.40 MB, 528 KB/s, 0 seconds passed
    ...86%, 0.41 MB, 537 KB/s, 0 seconds passed
    ...88%, 0.41 MB, 548 KB/s, 0 seconds passed
    ...90%, 0.42 MB, 558 KB/s, 0 seconds passed
    ...91%, 0.43 MB, 568 KB/s, 0 seconds passed
    ...93%, 0.44 MB, 578 KB/s, 0 seconds passed
    ...95%, 0.45 MB, 589 KB/s, 0 seconds passed
    ...96%, 0.45 MB, 598 KB/s, 0 seconds passed
    ...98%, 0.46 MB, 609 KB/s, 0 seconds passed
    ...100%, 0.47 MB, 619 KB/s, 0 seconds passed
+
    ...1%, 0.01 MB, 41 KB/s, 0 seconds passed
    ...3%, 0.02 MB, 83 KB/s, 0 seconds passed
    ...5%, 0.02 MB, 124 KB/s, 0 seconds passed
    ...6%, 0.03 MB, 165 KB/s, 0 seconds passed
    ...8%, 0.04 MB, 201 KB/s, 0 seconds passed
    ...10%, 0.05 MB, 241 KB/s, 0 seconds passed
    ...11%, 0.05 MB, 281 KB/s, 0 seconds passed
    ...13%, 0.06 MB, 319 KB/s, 0 seconds passed
    ...15%, 0.07 MB, 359 KB/s, 0 seconds passed
    ...16%, 0.08 MB, 399 KB/s, 0 seconds passed
    ...18%, 0.09 MB, 437 KB/s, 0 seconds passed
    ...20%, 0.09 MB, 477 KB/s, 0 seconds passed
    ...21%, 0.10 MB, 515 KB/s, 0 seconds passed
    ...23%, 0.11 MB, 554 KB/s, 0 seconds passed
    ...25%, 0.12 MB, 581 KB/s, 0 seconds passed
    ...26%, 0.12 MB, 619 KB/s, 0 seconds passed
    ...28%, 0.13 MB, 658 KB/s, 0 seconds passed
    ...30%, 0.14 MB, 696 KB/s, 0 seconds passed
    ...31%, 0.15 MB, 734 KB/s, 0 seconds passed
    ...33%, 0.16 MB, 771 KB/s, 0 seconds passed
   ...35%, 0.16 MB, 809 KB/s, 0 seconds passed
    ...36%, 0.17 MB, 847 KB/s, 0 seconds passed
    ...38%, 0.18 MB, 883 KB/s, 0 seconds passed
    ...40%, 0.19 MB, 921 KB/s, 0 seconds passed
    ...41%, 0.20 MB, 958 KB/s, 0 seconds passed
    ...43%, 0.20 MB, 996 KB/s, 0 seconds passed
    ...45%, 0.21 MB, 1030 KB/s, 0 seconds passed
    ...46%, 0.22 MB, 1068 KB/s, 0 seconds passed
    ...48%, 0.23 MB, 1106 KB/s, 0 seconds passed
    ...50%, 0.23 MB, 1143 KB/s, 0 seconds passed
    ...51%, 0.24 MB, 1179 KB/s, 0 seconds passed
    ...53%, 0.25 MB, 1216 KB/s, 0 seconds passed
    ...55%, 0.26 MB, 1254 KB/s, 0 seconds passed
    ...56%, 0.27 MB, 1291 KB/s, 0 seconds passed
    ...58%, 0.27 MB, 1310 KB/s, 0 seconds passed
    ...60%, 0.28 MB, 1347 KB/s, 0 seconds passed
    ...61%, 0.29 MB, 1383 KB/s, 0 seconds passed
    ...63%, 0.30 MB, 1420 KB/s, 0 seconds passed
    ...65%, 0.30 MB, 1455 KB/s, 0 seconds passed
    ...66%, 0.31 MB, 1492 KB/s, 0 seconds passed
    ...68%, 0.32 MB, 1525 KB/s, 0 seconds passed
   ...70%, 0.33 MB, 1561 KB/s, 0 seconds passed
    ...71%, 0.34 MB, 1598 KB/s, 0 seconds passed
    ...73%, 0.34 MB, 1635 KB/s, 0 seconds passed
    ...75%, 0.35 MB, 1671 KB/s, 0 seconds passed
    ...76%, 0.36 MB, 1708 KB/s, 0 seconds passed
    ...78%, 0.37 MB, 1738 KB/s, 0 seconds passed
    ...80%, 0.38 MB, 1775 KB/s, 0 seconds passed
    ...81%, 0.38 MB, 1811 KB/s, 0 seconds passed
    ...83%, 0.39 MB, 1847 KB/s, 0 seconds passed
    ...85%, 0.40 MB, 1883 KB/s, 0 seconds passed
    ...86%, 0.41 MB, 1919 KB/s, 0 seconds passed
    ...88%, 0.41 MB, 1955 KB/s, 0 seconds passed
    ...90%, 0.42 MB, 1992 KB/s, 0 seconds passed
    ...91%, 0.43 MB, 2028 KB/s, 0 seconds passed
    ...93%, 0.44 MB, 2064 KB/s, 0 seconds passed
    ...95%, 0.45 MB, 2100 KB/s, 0 seconds passed
    ...96%, 0.45 MB, 2137 KB/s, 0 seconds passed
    ...98%, 0.46 MB, 2173 KB/s, 0 seconds passed
    ...100%, 0.47 MB, 2208 KB/s, 0 seconds passed
     Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('dense_small_batch.cuda', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (1000, 512), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
 
 
diff --git a/docs/_sources/tutorials/get_started/sg_execution_times.rst.txt b/docs/_sources/tutorials/get_started/sg_execution_times.rst.txt
index 938e957..e3c0b7d 100644
--- a/docs/_sources/tutorials/get_started/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorials/get_started/sg_execution_times.rst.txt
@@ -5,9 +5,9 @@
 
 Computation times
 =================
-**00:17.187** total execution time for **tutorials_get_started** files:
+**00:16.955** total execution time for **tutorials_get_started** files:
 
-- **00:16.604**: :ref:`sphx_glr_tutorials_get_started_relay_quick_start.py` (``relay_quick_start.py``)
-- **00:00.359**: :ref:`sphx_glr_tutorials_get_started_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)
-- **00:00.133**: :ref:`sphx_glr_tutorials_get_started_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``)
-- **00:00.092**: :ref:`sphx_glr_tutorials_get_started_tvmc_command_line_driver.py` (``tvmc_command_line_driver.py``)
+- **00:16.370**: :ref:`sphx_glr_tutorials_get_started_relay_quick_start.py` (``relay_quick_start.py``)
+- **00:00.363**: :ref:`sphx_glr_tutorials_get_started_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)
+- **00:00.131**: :ref:`sphx_glr_tutorials_get_started_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``)
+- **00:00.091**: :ref:`sphx_glr_tutorials_get_started_tvmc_command_line_driver.py` (``tvmc_command_line_driver.py``)
diff --git a/docs/_sources/tutorials/index.rst.txt b/docs/_sources/tutorials/index.rst.txt
index 2773d3b..765cdf8 100644
--- a/docs/_sources/tutorials/index.rst.txt
+++ b/docs/_sources/tutorials/index.rst.txt
@@ -906,7 +906,7 @@ AutoScheduler : Template-free Auto Scheduling
 
 .. raw:: html
 
-    <div class="sphx-glr-thumbcontainer" tooltip="Different from the template-based tutorials-autotvm-sec which relies on manual templates to def...">
+    <div class="sphx-glr-thumbcontainer" tooltip="This is a tutorial on how to use the auto-scheduler for CPUs.">
 
 .. only:: html
 
@@ -926,7 +926,7 @@ AutoScheduler : Template-free Auto Scheduling
 
 .. raw:: html
 
-    <div class="sphx-glr-thumbcontainer" tooltip="Different from the template-based tutorials-autotvm-sec which relies on manual templates to def...">
+    <div class="sphx-glr-thumbcontainer" tooltip="This is a tutorial on how to use the auto-scheduler for GPUs.">
 
 .. only:: html
 
diff --git a/docs/_sources/tutorials/language/sg_execution_times.rst.txt b/docs/_sources/tutorials/language/sg_execution_times.rst.txt
index a5cfbd4..38d2a30 100644
--- a/docs/_sources/tutorials/language/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorials/language/sg_execution_times.rst.txt
@@ -5,13 +5,13 @@
 
 Computation times
 =================
-**00:04.364** total execution time for **tutorials_language** files:
+**00:04.585** total execution time for **tutorials_language** files:
 
-- **00:01.545**: :ref:`sphx_glr_tutorials_language_intrin_math.py` (``intrin_math.py``)
-- **00:00.759**: :ref:`sphx_glr_tutorials_language_tensorize.py` (``tensorize.py``)
-- **00:00.578**: :ref:`sphx_glr_tutorials_language_scan.py` (``scan.py``)
-- **00:00.538**: :ref:`sphx_glr_tutorials_language_reduction.py` (``reduction.py``)
-- **00:00.313**: :ref:`sphx_glr_tutorials_language_extern_op.py` (``extern_op.py``)
+- **00:01.632**: :ref:`sphx_glr_tutorials_language_intrin_math.py` (``intrin_math.py``)
+- **00:00.818**: :ref:`sphx_glr_tutorials_language_tensorize.py` (``tensorize.py``)
+- **00:00.613**: :ref:`sphx_glr_tutorials_language_scan.py` (``scan.py``)
+- **00:00.565**: :ref:`sphx_glr_tutorials_language_reduction.py` (``reduction.py``)
+- **00:00.317**: :ref:`sphx_glr_tutorials_language_extern_op.py` (``extern_op.py``)
 - **00:00.225**: :ref:`sphx_glr_tutorials_language_schedule_primitives.py` (``schedule_primitives.py``)
-- **00:00.210**: :ref:`sphx_glr_tutorials_language_tuple_inputs.py` (``tuple_inputs.py``)
-- **00:00.196**: :ref:`sphx_glr_tutorials_language_tedd.py` (``tedd.py``)
+- **00:00.215**: :ref:`sphx_glr_tutorials_language_tuple_inputs.py` (``tuple_inputs.py``)
+- **00:00.200**: :ref:`sphx_glr_tutorials_language_tedd.py` (``tedd.py``)
diff --git a/docs/_sources/tutorials/language/tensorize.rst.txt b/docs/_sources/tutorials/language/tensorize.rst.txt
index 4318e6c..34a0fad 100644
--- a/docs/_sources/tutorials/language/tensorize.rst.txt
+++ b/docs/_sources/tutorials/language/tensorize.rst.txt
@@ -119,8 +119,8 @@ Thus we break down the matmul loops to make the innermost loops a (16x64) GEMV.
 
     primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {B: Buffer(B_2: Pointer(float32), float32, [512, 64], []),
-                 C: Buffer(C_2: Pointer(float32), float32, [1024, 512], []),
+      buffers = {C: Buffer(C_2: Pointer(float32), float32, [1024, 512], []),
+                 B: Buffer(B_2: Pointer(float32), float32, [512, 64], []),
                  A: Buffer(A_2: Pointer(float32), float32, [1024, 64], [])}
       buffer_map = {A_1: A, B_1: B, C_1: C} {
       for (i: int32, 0, 1024) {
@@ -236,8 +236,8 @@ such placeholder can be put to let TVM automatically bind the inferred value for
 
     primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {C: Buffer(C_2: Pointer(float32), float32, [1024, 512], []),
-                 B: Buffer(B_2: Pointer(float32), float32, [512, 64], []),
+      buffers = {B: Buffer(B_2: Pointer(float32), float32, [512, 64], []),
+                 C: Buffer(C_2: Pointer(float32), float32, [1024, 512], []),
                  A: Buffer(A_2: Pointer(float32), float32, [1024, 64], [])}
       buffer_map = {A_1: A, B_1: B, C_1: C} {
       for (i: int32, 0, 1024) {
@@ -312,8 +312,8 @@ The importing needs to happen before the tensorized GEMV being executed.
                  B: Buffer(B_2: Pointer(float32), float32, [512, 64], []),
                  A: Buffer(A_2: Pointer(float32), float32, [1024, 64], [])}
       buffer_map = {A_1: A, B_1: B, C_1: C} {
-      attr [IterVar(i: int32, (nullptr), "DataPar", "")] "pragma_import_llvm" = "; ModuleID = '/tmp/tmp9hp1s94b/input0.cc'
-    source_filename = "/tmp/tmp9hp1s94b/input0.cc"
+      attr [IterVar(i: int32, (nullptr), "DataPar", "")] "pragma_import_llvm" = "; ModuleID = '/tmp/tmpt9fq1r9q/input0.cc'
+    source_filename = "/tmp/tmpt9fq1r9q/input0.cc"
     target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
     target triple = "x86_64-pc-linux-gnu"
 
diff --git a/docs/_sources/tutorials/language/tuple_inputs.rst.txt b/docs/_sources/tutorials/language/tuple_inputs.rst.txt
index 32941a3..2e75b9d 100644
--- a/docs/_sources/tutorials/language/tuple_inputs.rst.txt
+++ b/docs/_sources/tutorials/language/tuple_inputs.rst.txt
@@ -64,15 +64,15 @@ together in the next schedule procedure.
 
     primfn(A0_1: handle, A1_1: handle, B.v0_1: handle, B.v1_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {A1: Buffer(A1_2: Pointer(float32), float32, [m: int32, n: int32], [stride: int32, stride_1: int32], type="auto"),
-                 B.v1: Buffer(B.v1_2: Pointer(float32), float32, [m, n], [stride_2: int32, stride_3: int32], type="auto"),
-                 B.v0: Buffer(B.v0_2: Pointer(float32), float32, [m, n], [stride_4: int32, stride_5: int32], type="auto"),
+      buffers = {B.v1: Buffer(B.v1_2: Pointer(float32), float32, [m: int32, n: int32], [stride: int32, stride_1: int32], type="auto"),
+                 B.v0: Buffer(B.v0_2: Pointer(float32), float32, [m, n], [stride_2: int32, stride_3: int32], type="auto"),
+                 A1: Buffer(A1_2: Pointer(float32), float32, [m, n], [stride_4: int32, stride_5: int32], type="auto"),
                  A0: Buffer(A0_2: Pointer(float32), float32, [m, n], [stride_6: int32, stride_7: int32], type="auto")}
       buffer_map = {A0_1: A0, A1_1: A1, B.v0_1: B.v0, B.v1_1: B.v1} {
       for (i: int32, 0, m) {
         for (j: int32, 0, n) {
-          B.v0_2[((i*stride_4) + (j*stride_5))] = ((float32*)A0_2[((i*stride_6) + (j*stride_7))] + 2f32)
-          B.v1_2[((i*stride_2) + (j*stride_3))] = ((float32*)A1_2[((i*stride) + (j*stride_1))]*3f32)
+          B.v0_2[((i*stride_2) + (j*stride_3))] = ((float32*)A0_2[((i*stride_6) + (j*stride_7))] + 2f32)
+          B.v1_2[((i*stride) + (j*stride_1))] = ((float32*)A1_2[((i*stride_4) + (j*stride_5))]*3f32)
         }
       }
     }
@@ -136,16 +136,16 @@ with :py:func:`te.comm_reducer` as below:
     primfn(idx_1: handle, val_1: handle, T.v0_1: handle, T.v1_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
       buffers = {T.v1: Buffer(T.v1_2: Pointer(int32), int32, [m: int32], [stride: int32], type="auto"),
-                 val: Buffer(val_2: Pointer(int32), int32, [m, n: int32], [stride_1: int32, stride_2: int32], type="auto"),
-                 T.v0: Buffer(T.v0_2: Pointer(int32), int32, [m], [stride_3: int32], type="auto"),
+                 T.v0: Buffer(T.v0_2: Pointer(int32), int32, [m], [stride_1: int32], type="auto"),
+                 val: Buffer(val_2: Pointer(int32), int32, [m, n: int32], [stride_2: int32, stride_3: int32], type="auto"),
                  idx: Buffer(idx_2: Pointer(int32), int32, [m, n], [stride_4: int32, stride_5: int32], type="auto")}
       buffer_map = {idx_1: idx, val_1: val, T.v0_1: T.v0, T.v1_1: T.v1} {
       for (i: int32, 0, m) {
-        T.v0_2[(i*stride_3)] = -1
+        T.v0_2[(i*stride_1)] = -1
         T.v1_2[(i*stride)] = -2147483648
         for (k: int32, 0, n) {
-          T.v0_2[(i*stride_3)] = @tir.if_then_else(((int32*)val_2[((i*stride_1) + (k*stride_2))] <= (int32*)T.v1_2[(i*stride)]), (int32*)T.v0_2[(i*stride_3)], (int32*)idx_2[((i*stride_4) + (k*stride_5))], dtype=int32)
-          T.v1_2[(i*stride)] = @tir.if_then_else(((int32*)val_2[((i*stride_1) + (k*stride_2))] <= (int32*)T.v1_2[(i*stride)]), (int32*)T.v1_2[(i*stride)], (int32*)val_2[((i*stride_1) + (k*stride_2))], dtype=int32)
+          T.v0_2[(i*stride_1)] = @tir.if_then_else(((int32*)val_2[((i*stride_2) + (k*stride_3))] <= (int32*)T.v1_2[(i*stride)]), (int32*)T.v0_2[(i*stride_1)], (int32*)idx_2[((i*stride_4) + (k*stride_5))], dtype=int32)
+          T.v1_2[(i*stride)] = @tir.if_then_else(((int32*)val_2[((i*stride_2) + (k*stride_3))] <= (int32*)T.v1_2[(i*stride)]), (int32*)T.v1_2[(i*stride)], (int32*)val_2[((i*stride_2) + (k*stride_3))], dtype=int32)
         }
       }
     }
@@ -193,8 +193,8 @@ in terms of operation.
 
     primfn(A0_1: handle, A1_1: handle, C_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {A1: Buffer(A1_2: Pointer(float32), float32, [m: int32, n: int32], [stride: int32, stride_1: int32], type="auto"),
-                 C: Buffer(C_2: Pointer(float32), float32, [m, n], [stride_2: int32, stride_3: int32], type="auto"),
+      buffers = {C: Buffer(C_2: Pointer(float32), float32, [m: int32, n: int32], [stride: int32, stride_1: int32], type="auto"),
+                 A1: Buffer(A1_2: Pointer(float32), float32, [m, n], [stride_2: int32, stride_3: int32], type="auto"),
                  A0: Buffer(A0_2: Pointer(float32), float32, [m, n], [stride_4: int32, stride_5: int32], type="auto")}
       buffer_map = {A0_1: A0, A1_1: A1, C_1: C} {
       attr [B.v0: Pointer(float32)] "storage_scope" = "global";
@@ -207,7 +207,7 @@ in terms of operation.
           B.v1[j] = ((float32*)A0_2[((i*stride_4) + (j*stride_5))]*3f32)
         }
         for (j_1: int32, 0, n) {
-          C_2[((i*stride_2) + (j_1*stride_3))] = ((float32*)A1_2[((i*stride) + (j_1*stride_1))] + (float32*)B.v0[j_1])
+          C_2[((i*stride) + (j_1*stride_1))] = ((float32*)A1_2[((i*stride_2) + (j_1*stride_3))] + (float32*)B.v0[j_1])
         }
       }
     }
diff --git a/docs/_sources/tutorials/micro/sg_execution_times.rst.txt b/docs/_sources/tutorials/micro/sg_execution_times.rst.txt
index a75d223..d5b2ed8 100644
--- a/docs/_sources/tutorials/micro/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorials/micro/sg_execution_times.rst.txt
@@ -5,7 +5,7 @@
 
 Computation times
 =================
-**00:05.876** total execution time for **tutorials_micro** files:
+**00:06.360** total execution time for **tutorials_micro** files:
 
-- **00:05.677**: :ref:`sphx_glr_tutorials_micro_micro_tflite.py` (``micro_tflite.py``)
-- **00:00.199**: :ref:`sphx_glr_tutorials_micro_micro_reference_vm.py` (``micro_reference_vm.py``)
+- **00:06.156**: :ref:`sphx_glr_tutorials_micro_micro_tflite.py` (``micro_tflite.py``)
+- **00:00.204**: :ref:`sphx_glr_tutorials_micro_micro_reference_vm.py` (``micro_reference_vm.py``)
diff --git a/docs/_sources/tutorials/optimize/opt_conv_cuda.rst.txt b/docs/_sources/tutorials/optimize/opt_conv_cuda.rst.txt
index 6d71fbf..dfc1f6b 100644
--- a/docs/_sources/tutorials/optimize/opt_conv_cuda.rst.txt
+++ b/docs/_sources/tutorials/optimize/opt_conv_cuda.rst.txt
@@ -296,7 +296,7 @@ latency of convolution.
 
  .. code-block:: none
 
-    Convolution: 42.502390 ms
+    Convolution: 53.219982 ms
 
 
 
diff --git a/docs/_sources/tutorials/optimize/opt_conv_tensorcore.rst.txt b/docs/_sources/tutorials/optimize/opt_conv_tensorcore.rst.txt
index c2983d4..0682e83 100644
--- a/docs/_sources/tutorials/optimize/opt_conv_tensorcore.rst.txt
+++ b/docs/_sources/tutorials/optimize/opt_conv_tensorcore.rst.txt
@@ -624,7 +624,7 @@ be able to run on our build server
 
  .. code-block:: none
 
-    conv2d with tensor core: 10.774292 ms
+    conv2d with tensor core: 13.385741 ms
 
 
 
diff --git a/docs/_sources/tutorials/optimize/opt_gemm.rst.txt b/docs/_sources/tutorials/optimize/opt_gemm.rst.txt
index 7a0840e..3db363f 100644
--- a/docs/_sources/tutorials/optimize/opt_gemm.rst.txt
+++ b/docs/_sources/tutorials/optimize/opt_gemm.rst.txt
@@ -118,8 +118,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
 
  .. code-block:: none
 
-    Numpy running time: 0.013220
-    Baseline: 3.412455
+    Numpy running time: 0.007639
+    Baseline: 3.516577
 
 
 
@@ -206,7 +206,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
 
  .. code-block:: none
 
-    Opt1: 0.284645
+    Opt1: 0.275058
 
 
 
@@ -230,8 +230,8 @@ Here is the generated IR after blocking.
 
     primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {B: Buffer(B_2: Pointer(float32), float32, [1024, 1024], []),
-                 C: Buffer(C_2: Pointer(float32), float32, [1024, 1024], []),
+      buffers = {C: Buffer(C_2: Pointer(float32), float32, [1024, 1024], []),
+                 B: Buffer(B_2: Pointer(float32), float32, [1024, 1024], []),
                  A: Buffer(A_2: Pointer(float32), float32, [1024, 1024], [])}
       buffer_map = {A_1: A, B_1: B, C_1: C} {
       for (x.outer: int32, 0, 32) {
@@ -300,7 +300,7 @@ In this tutorial, we chose to vectorize the inner loop row data since it is cach
 
  .. code-block:: none
 
-    Opt2: 0.320737
+    Opt2: 0.318300
 
 
 
@@ -389,7 +389,7 @@ the access pattern for A matrix is more cache friendly.
 
  .. code-block:: none
 
-    Opt3: 0.111368
+    Opt3: 0.112766
 
 
 
@@ -499,7 +499,7 @@ the corresponding value from the packed array.
 
  .. code-block:: none
 
-    Opt4: 0.105002
+    Opt4: 0.105734
 
 
 
@@ -609,7 +609,7 @@ write to C when all the block results are ready.
 
  .. code-block:: none
 
-    Opt5: 0.097741
+    Opt5: 0.107047
 
 
 
@@ -725,7 +725,7 @@ Futhermore, we can also utilize multi-core processors to do the thread-level par
 
  .. code-block:: none
 
-    Opt6: 0.032057
+    Opt6: 0.035177
 
 
 
diff --git a/docs/_sources/tutorials/optimize/sg_execution_times.rst.txt b/docs/_sources/tutorials/optimize/sg_execution_times.rst.txt
index 0fe4676..e4b849f 100644
--- a/docs/_sources/tutorials/optimize/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorials/optimize/sg_execution_times.rst.txt
@@ -5,9 +5,9 @@
 
 Computation times
 =================
-**00:27.940** total execution time for **tutorials_optimize** files:
+**00:33.670** total execution time for **tutorials_optimize** files:
 
-- **00:25.269**: :ref:`sphx_glr_tutorials_optimize_opt_gemm.py` (``opt_gemm.py``)
-- **00:01.366**: :ref:`sphx_glr_tutorials_optimize_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``)
-- **00:01.095**: :ref:`sphx_glr_tutorials_optimize_opt_conv_cuda.py` (``opt_conv_cuda.py``)
-- **00:00.209**: :ref:`sphx_glr_tutorials_optimize_opt_matmul_auto_tensorcore.py` (``opt_matmul_auto_tensorcore.py``)
+- **00:25.214**: :ref:`sphx_glr_tutorials_optimize_opt_gemm.py` (``opt_gemm.py``)
+- **00:05.499**: :ref:`sphx_glr_tutorials_optimize_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``)
+- **00:02.747**: :ref:`sphx_glr_tutorials_optimize_opt_conv_cuda.py` (``opt_conv_cuda.py``)
+- **00:00.211**: :ref:`sphx_glr_tutorials_optimize_opt_matmul_auto_tensorcore.py` (``opt_matmul_auto_tensorcore.py``)
diff --git a/docs/_sources/tutorials/topi/intro_topi.rst.txt b/docs/_sources/tutorials/topi/intro_topi.rst.txt
index 6c76576..7a0970f 100644
--- a/docs/_sources/tutorials/topi/intro_topi.rst.txt
+++ b/docs/_sources/tutorials/topi/intro_topi.rst.txt
@@ -231,7 +231,7 @@ As you can see, scheduled stages of computation have been accumulated and we can
 
  .. code-block:: none
 
-    [stage(a, placeholder(a, 0x18f92ee40)), stage(b, placeholder(b, 0x18f83f470)), stage(T_add, compute(T_add, body=[(a[ax0, ax1, ax2] + b[ax1, ax2])], axis=[iter_var(ax0, range(min=0, ext=100)), iter_var(ax1, range(min=0, ext=10)), iter_var(ax2, range(min=0, ext=10))], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[(a[ax0, ax1, ax2]*b[ax1, ax2])], axis=[iter_var(ax0, range(min=0, ext=100)), iter_var(ax1, range(min=0, ext=10)), iter_var(ax2, range( [...]
+    [stage(a, placeholder(a, 0x18ef4c780)), stage(b, placeholder(b, 0x18def4980)), stage(T_add, compute(T_add, body=[(a[ax0, ax1, ax2] + b[ax1, ax2])], axis=[iter_var(ax0, range(min=0, ext=100)), iter_var(ax1, range(min=0, ext=10)), iter_var(ax2, range(min=0, ext=10))], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[(a[ax0, ax1, ax2]*b[ax1, ax2])], axis=[iter_var(ax0, range(min=0, ext=100)), iter_var(ax1, range(min=0, ext=10)), iter_var(ax2, range( [...]
 
 
 
diff --git a/docs/_sources/tutorials/topi/sg_execution_times.rst.txt b/docs/_sources/tutorials/topi/sg_execution_times.rst.txt
index 8cf31b0..20d3ecf 100644
--- a/docs/_sources/tutorials/topi/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorials/topi/sg_execution_times.rst.txt
@@ -5,6 +5,6 @@
 
 Computation times
 =================
-**00:00.621** total execution time for **tutorials_topi** files:
+**00:00.647** total execution time for **tutorials_topi** files:
 
-- **00:00.621**: :ref:`sphx_glr_tutorials_topi_intro_topi.py` (``intro_topi.py``)
+- **00:00.647**: :ref:`sphx_glr_tutorials_topi_intro_topi.py` (``intro_topi.py``)
diff --git a/docs/_sources/vta/tutorials/autotvm/sg_execution_times.rst.txt b/docs/_sources/vta/tutorials/autotvm/sg_execution_times.rst.txt
index 79b097c..84ca233 100644
--- a/docs/_sources/vta/tutorials/autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/vta/tutorials/autotvm/sg_execution_times.rst.txt
@@ -5,6 +5,6 @@
 
 Computation times
 =================
-**00:07.733** total execution time for **vta_tutorials_autotvm** files:
+**00:07.912** total execution time for **vta_tutorials_autotvm** files:
 
-- **00:07.733**: :ref:`sphx_glr_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``)
+- **00:07.912**: :ref:`sphx_glr_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``)
diff --git a/docs/_sources/vta/tutorials/autotvm/tune_relay_vta.rst.txt b/docs/_sources/vta/tutorials/autotvm/tune_relay_vta.rst.txt
index 8e0e76f..04a198c 100644
--- a/docs/_sources/vta/tutorials/autotvm/tune_relay_vta.rst.txt
+++ b/docs/_sources/vta/tutorials/autotvm/tune_relay_vta.rst.txt
@@ -497,7 +497,7 @@ Finally, we launch tuning jobs and evaluate the end-to-end performance.
  .. code-block:: none
 
     Extract tasks...
-
    ...1%, 0.01 MB, 33 KB/s, 0 seconds passed
    ...2%, 0.02 MB, 65 KB/s, 0 seconds passed
    ...3%, 0.02 MB, 98 KB/s, 0 seconds passed
    ...4%, 0.03 MB, 130 KB/s, 0 seconds passed
    ...5%, 0.04 MB, 159 KB/s, 0 seconds passed
    ...6%, 0.05 MB, 191 KB/s, 0 seconds passed
    ...7%, 0.05 MB, 222 KB/s, 0 seconds passed
    ...8%, 0.06 MB, 254 KB/s, 0 seconds passed
    ...9%, 0.07 MB, 285 KB/s, 0 seconds passed
    ...10%, 0.08 MB, 315 KB/s, 0 seconds passed
    ...11%, 0.09 MB, 347 KB/s, 0 seconds passed
    ...13%, 0.09 MB, 377 KB/s, 0 seconds passed
    ...14%, 0.10 MB, 408 KB/s, 0 seconds passed
    ...15%, 0.11 MB, 438 KB/s, 0 seconds passed
    ...16%, 0.12 MB, 463 KB/s, 0 seconds passed
    ...17%, 0.12 MB, 493 KB/s, 0 seconds passed
    ...18%, 0.13 MB, 523 KB/s, 0 seconds passed
    ...19%, 0.14 MB, 552 KB/s, 0 seconds passed
    ...20%, 0.15 MB, 583 KB/s, 0 seconds passed
    ...21%, 0.16 MB, 612 KB/s, 0 seconds passed
    ...22%, 0.16 MB, 642 KB/s, 0 seconds passed
     ...23%, 0.17 MB, 673 KB/s, 0 seconds passed
    ...24%, 0.18 MB, 701 KB/s, 0 seconds passed
    ...26%, 0.19 MB, 731 KB/s, 0 seconds passed
    ...27%, 0.20 MB, 760 KB/s, 0 seconds passed
    ...28%, 0.20 MB, 790 KB/s, 0 seconds passed
    ...29%, 0.21 MB, 820 KB/s, 0 seconds passed
    ...30%, 0.22 MB, 850 KB/s, 0 seconds passed
    ...31%, 0.23 MB, 878 KB/s, 0 seconds passed
    ...32%, 0.23 MB, 908 KB/s, 0 seconds passed
    ...33%, 0.24 MB, 938 KB/s, 0 seconds passed
    ...34%, 0.25 MB, 966 KB/s, 0 seconds passed
    ...35%, 0.26 MB, 985 KB/s, 0 seconds passed
    ...36%, 0.27 MB, 1015 KB/s, 0 seconds passed
    ...38%, 0.27 MB, 1043 KB/s, 0 seconds passed
    ...39%, 0.28 MB, 1073 KB/s, 0 seconds passed
    ...40%, 0.29 MB, 1096 KB/s, 0 seconds passed
    ...41%, 0.30 MB, 1125 KB/s, 0 seconds passed
    ...42%, 0.30 MB, 1155 KB/s, 0 seconds passed
    ...43%, 0.31 MB, 1183 KB/s, 0 seconds passed
    ...44%, 0.32 MB, 1211 KB/s, 0 seconds passed
   ...45%, 0.33 MB, 1240 KB/s, 0 seconds passed
    ...46%, 0.34 MB, 1265 KB/s, 0 seconds passed
    ...47%, 0.34 MB, 1294 KB/s, 0 seconds passed
    ...48%, 0.35 MB, 1319 KB/s, 0 seconds passed
    ...49%, 0.36 MB, 1348 KB/s, 0 seconds passed
    ...51%, 0.37 MB, 1373 KB/s, 0 seconds passed
    ...52%, 0.38 MB, 1402 KB/s, 0 seconds passed
    ...53%, 0.38 MB, 1430 KB/s, 0 seconds passed
    ...54%, 0.39 MB, 1459 KB/s, 0 seconds passed
    ...55%, 0.40 MB, 1475 KB/s, 0 seconds passed
    ...56%, 0.41 MB, 1504 KB/s, 0 seconds passed
    ...57%, 0.41 MB, 1531 KB/s, 0 seconds passed
    ...58%, 0.42 MB, 1559 KB/s, 0 seconds passed
    ...59%, 0.43 MB, 1583 KB/s, 0 seconds passed
    ...60%, 0.44 MB, 1611 KB/s, 0 seconds passed
    ...61%, 0.45 MB, 1636 KB/s, 0 seconds passed
    ...63%, 0.45 MB, 1664 KB/s, 0 seconds passed
    ...64%, 0.46 MB, 1689 KB/s, 0 seconds passed
    ...65%, 0.47 MB, 1717 KB/s, 0 seconds passed
    ...66%, 0.48 MB, 1742 KB/s, 0 seconds passed
   ...67%, 0.48 MB, 1770 KB/s, 0 seconds passed
    ...68%, 0.49 MB, 1796 KB/s, 0 seconds passed
    ...69%, 0.50 MB, 1825 KB/s, 0 seconds passed
    ...70%, 0.51 MB, 1847 KB/s, 0 seconds passed
    ...71%, 0.52 MB, 1875 KB/s, 0 seconds passed
    ...72%, 0.52 MB, 1897 KB/s, 0 seconds passed
    ...73%, 0.53 MB, 1925 KB/s, 0 seconds passed
    ...74%, 0.54 MB, 1949 KB/s, 0 seconds passed
    ...76%, 0.55 MB, 1976 KB/s, 0 seconds passed
    ...77%, 0.55 MB, 1992 KB/s, 0 seconds passed
    ...78%, 0.56 MB, 2020 KB/s, 0 seconds passed
    ...79%, 0.57 MB, 2045 KB/s, 0 seconds passed
    ...80%, 0.58 MB, 2072 KB/s, 0 seconds passed
    ...81%, 0.59 MB, 2088 KB/s, 0 seconds passed
    ...82%, 0.59 MB, 2115 KB/s, 0 seconds passed
    ...83%, 0.60 MB, 2140 KB/s, 0 seconds passed
    ...84%, 0.61 MB, 2167 KB/s, 0 seconds passed
    ...85%, 0.62 MB, 2191 KB/s, 0 seconds passed
    ...86%, 0.62 MB, 2218 KB/s, 0 seconds passed
    ...87%, 0.63 MB, 2245 KB/s, 0 seconds passed
    ...89%, 0.64 MB, 2272 KB/s, 0 seconds passed
   ...90%, 0.65 MB, 2294 KB/s, 0 seconds passed
    ...91%, 0.66 MB, 2321 KB/s, 0 seconds passed
    ...92%, 0.66 MB, 2341 KB/s, 0 seconds passed
    ...93%, 0.67 MB, 2368 KB/s, 0 seconds passed
    ...94%, 0.68 MB, 2389 KB/s, 0 seconds passed
    ...95%, 0.69 MB, 2416 KB/s, 0 seconds passed
    ...96%, 0.70 MB, 2443 KB/s, 0 seconds passed
    ...97%, 0.70 MB, 2469 KB/s, 0 seconds passed
    ...98%, 0.71 MB, 2491 KB/s, 0 seconds passed
    ...99%, 0.72 MB, 2517 KB/s, 0 seconds passed
    ...100%, 0.73 MB, 2543 KB/s, 0 seconds passed
+
    ...1%, 0.01 MB, 41 KB/s, 0 seconds passed
    ...2%, 0.02 MB, 82 KB/s, 0 seconds passed
    ...3%, 0.02 MB, 122 KB/s, 0 seconds passed
    ...4%, 0.03 MB, 163 KB/s, 0 seconds passed
    ...5%, 0.04 MB, 198 KB/s, 0 seconds passed
    ...6%, 0.05 MB, 237 KB/s, 0 seconds passed
    ...7%, 0.05 MB, 276 KB/s, 0 seconds passed
    ...8%, 0.06 MB, 316 KB/s, 0 seconds passed
    ...9%, 0.07 MB, 354 KB/s, 0 seconds passed
    ...10%, 0.08 MB, 393 KB/s, 0 seconds passed
    ...11%, 0.09 MB, 432 KB/s, 0 seconds passed
    ...13%, 0.09 MB, 469 KB/s, 0 seconds passed
    ...14%, 0.10 MB, 508 KB/s, 0 seconds passed
    ...15%, 0.11 MB, 545 KB/s, 0 seconds passed
    ...16%, 0.12 MB, 574 KB/s, 0 seconds passed
    ...17%, 0.12 MB, 611 KB/s, 0 seconds passed
    ...18%, 0.13 MB, 649 KB/s, 0 seconds passed
    ...19%, 0.14 MB, 685 KB/s, 0 seconds passed
    ...20%, 0.15 MB, 722 KB/s, 0 seconds passed
    ...21%, 0.16 MB, 760 KB/s, 0 seconds passed
   ...22%, 0.16 MB, 798 KB/s, 0 seconds passed
   ...23%, 0.17 MB, 833 KB/s, 0 seconds passed
    ...24%, 0.18 MB, 869 KB/s, 0 seconds passed
    ...26%, 0.19 MB, 907 KB/s, 0 seconds passed
    ...27%, 0.20 MB, 944 KB/s, 0 seconds passed
    ...28%, 0.20 MB, 981 KB/s, 0 seconds passed
    ...29%, 0.21 MB, 1016 KB/s, 0 seconds passed
    ...30%, 0.22 MB, 1053 KB/s, 0 seconds passed
    ...31%, 0.23 MB, 1090 KB/s, 0 seconds passed
    ...32%, 0.23 MB, 1127 KB/s, 0 seconds passed
    ...33%, 0.24 MB, 1162 KB/s, 0 seconds passed
    ...34%, 0.25 MB, 1199 KB/s, 0 seconds passed
    ...35%, 0.26 MB, 1236 KB/s, 0 seconds passed
    ...36%, 0.27 MB, 1255 KB/s, 0 seconds passed
    ...38%, 0.27 MB, 1292 KB/s, 0 seconds passed
    ...39%, 0.28 MB, 1328 KB/s, 0 seconds passed
    ...40%, 0.29 MB, 1360 KB/s, 0 seconds passed
    ...41%, 0.30 MB, 1396 KB/s, 0 seconds passed
    ...42%, 0.30 MB, 1427 KB/s, 0 seconds passed
    ...43%, 0.31 MB, 1463 KB/s, 0 seconds passed
    ...44%, 0.32 MB, 1499 KB/s, 0 seconds passed
   ...45%, 0.33 MB, 1533 KB/s, 0 seconds passed
    ...46%, 0.34 MB, 1568 KB/s, 0 seconds passed
    ...47%, 0.34 MB, 1604 KB/s, 0 seconds passed
    ...48%, 0.35 MB, 1635 KB/s, 0 seconds passed
    ...49%, 0.36 MB, 1670 KB/s, 0 seconds passed
    ...51%, 0.37 MB, 1701 KB/s, 0 seconds passed
    ...52%, 0.38 MB, 1736 KB/s, 0 seconds passed
    ...53%, 0.38 MB, 1772 KB/s, 0 seconds passed
    ...54%, 0.39 MB, 1801 KB/s, 0 seconds passed
    ...55%, 0.40 MB, 1836 KB/s, 0 seconds passed
    ...56%, 0.41 MB, 1859 KB/s, 0 seconds passed
    ...57%, 0.41 MB, 1893 KB/s, 0 seconds passed
    ...58%, 0.42 MB, 1928 KB/s, 0 seconds passed
    ...59%, 0.43 MB, 1955 KB/s, 0 seconds passed
    ...60%, 0.44 MB, 1990 KB/s, 0 seconds passed
    ...61%, 0.45 MB, 2017 KB/s, 0 seconds passed
    ...63%, 0.45 MB, 2052 KB/s, 0 seconds passed
    ...64%, 0.46 MB, 2086 KB/s, 0 seconds passed
    ...65%, 0.47 MB, 2120 KB/s, 0 seconds passed
    ...66%, 0.48 MB, 2154 KB/s, 0 seconds passed
   ...67%, 0.48 MB, 2180 KB/s, 0 seconds passed
    ...68%, 0.49 MB, 2214 KB/s, 0 seconds passed
    ...69%, 0.50 MB, 2248 KB/s, 0 seconds passed
    ...70%, 0.51 MB, 2277 KB/s, 0 seconds passed
    ...71%, 0.52 MB, 2311 KB/s, 0 seconds passed
    ...72%, 0.52 MB, 2345 KB/s, 0 seconds passed
    ...73%, 0.53 MB, 2375 KB/s, 0 seconds passed
    ...74%, 0.54 MB, 2409 KB/s, 0 seconds passed
    ...76%, 0.55 MB, 2437 KB/s, 0 seconds passed
    ...77%, 0.55 MB, 2471 KB/s, 0 seconds passed
    ...78%, 0.56 MB, 2505 KB/s, 0 seconds passed
    ...79%, 0.57 MB, 2513 KB/s, 0 seconds passed
    ...80%, 0.58 MB, 2546 KB/s, 0 seconds passed
    ...81%, 0.59 MB, 2580 KB/s, 0 seconds passed
    ...82%, 0.59 MB, 2611 KB/s, 0 seconds passed
    ...83%, 0.60 MB, 2644 KB/s, 0 seconds passed
    ...84%, 0.61 MB, 2672 KB/s, 0 seconds passed
    ...85%, 0.62 MB, 2705 KB/s, 0 seconds passed
    ...86%, 0.62 MB, 2738 KB/s, 0 seconds passed
    ...87%, 0.63 MB, 2758 KB/s, 0 seconds passed
    ...89%, 0.64 MB, 2791 KB/s, 0 seconds passed
   ...90%, 0.65 MB, 2824 KB/s, 0 seconds passed
    ...91%, 0.66 MB, 2855 KB/s, 0 seconds passed
    ...92%, 0.66 MB, 2888 KB/s, 0 seconds passed
    ...93%, 0.67 MB, 2916 KB/s, 0 seconds passed
    ...94%, 0.68 MB, 2949 KB/s, 0 seconds passed
    ...95%, 0.69 MB, 2981 KB/s, 0 seconds passed
    ...96%, 0.70 MB, 3003 KB/s, 0 seconds passed
    ...97%, 0.70 MB, 3035 KB/s, 0 seconds passed
    ...98%, 0.71 MB, 3068 KB/s, 0 seconds passed
    ...99%, 0.72 MB, 3094 KB/s, 0 seconds passed
    ...100%, 0.73 MB, 3125 KB/s, 0 seconds passed
     Extracted 10 conv2d tasks:
     (1, 14, 14, 256, 512, 1, 1, 0, 0, 2, 2)
     (1, 28, 28, 128, 256, 1, 1, 0, 0, 2, 2)
diff --git a/docs/_sources/vta/tutorials/frontend/deploy_classification.rst.txt b/docs/_sources/vta/tutorials/frontend/deploy_classification.rst.txt
index 1bbfc4f..e7c602c 100644
--- a/docs/_sources/vta/tutorials/frontend/deploy_classification.rst.txt
+++ b/docs/_sources/vta/tutorials/frontend/deploy_classification.rst.txt
@@ -243,8 +243,8 @@ The compilation steps are:
 
  .. code-block:: none
 
-
    ...12%, 0.01 MB, 33 KB/s, 0 seconds passed
    ...25%, 0.02 MB, 66 KB/s, 0 seconds passed
    ...38%, 0.02 MB, 99 KB/s, 0 seconds passed
    ...51%, 0.03 MB, 131 KB/s, 0 seconds passed
    ...64%, 0.04 MB, 160 KB/s, 0 seconds passed
    ...77%, 0.05 MB, 193 KB/s, 0 seconds passed
    ...89%, 0.05 MB, 225 KB/s, 0 seconds passed
    ...100%, 0.06 MB, 256 KB/s, 0 seconds passed
-    resnet18_v1 inference graph built in 8.20s!
+
    ...12%, 0.01 MB, 42 KB/s, 0 seconds passed
    ...25%, 0.02 MB, 85 KB/s, 0 seconds passed
    ...38%, 0.02 MB, 127 KB/s, 0 seconds passed
    ...51%, 0.03 MB, 170 KB/s, 0 seconds passed
    ...64%, 0.04 MB, 205 KB/s, 0 seconds passed
    ...77%, 0.05 MB, 246 KB/s, 0 seconds passed
    ...89%, 0.05 MB, 286 KB/s, 0 seconds passed
    ...100%, 0.06 MB, 326 KB/s, 0 seconds passed
+    resnet18_v1 inference graph built in 8.22s!
 
 
 
diff --git a/docs/_sources/vta/tutorials/frontend/sg_execution_times.rst.txt b/docs/_sources/vta/tutorials/frontend/sg_execution_times.rst.txt
index cd9ab96..3497ff3 100644
--- a/docs/_sources/vta/tutorials/frontend/sg_execution_times.rst.txt
+++ b/docs/_sources/vta/tutorials/frontend/sg_execution_times.rst.txt
@@ -5,6 +5,6 @@
 
 Computation times
 =================
-**00:30.175** total execution time for **vta_tutorials_frontend** files:
+**00:29.299** total execution time for **vta_tutorials_frontend** files:
 
-- **00:30.175**: :ref:`sphx_glr_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``)
+- **00:29.299**: :ref:`sphx_glr_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``)
diff --git a/docs/_sources/vta/tutorials/matrix_multiply.rst.txt b/docs/_sources/vta/tutorials/matrix_multiply.rst.txt
index 993795b..aa5bdcc 100644
--- a/docs/_sources/vta/tutorials/matrix_multiply.rst.txt
+++ b/docs/_sources/vta/tutorials/matrix_multiply.rst.txt
@@ -304,8 +304,8 @@ After we construct the schedule, by default the schedule computes
 
     primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {B: Buffer(B_2: Pointer(int8), int8, [16, 16, 16, 16], []),
-                 C: Buffer(C_2: Pointer(int8), int8, [1, 16, 1, 16], []),
+      buffers = {C: Buffer(C_2: Pointer(int8), int8, [1, 16, 1, 16], []),
+                 B: Buffer(B_2: Pointer(int8), int8, [16, 16, 16, 16], []),
                  A: Buffer(A_2: Pointer(int8), int8, [1, 16, 1, 16], [])}
       buffer_map = {A_1: A, B_1: B, C_1: C} {
       attr [A_buf: Pointer(int8)] "storage_scope" = "global";
@@ -450,8 +450,8 @@ moving the copy operations into the matrix multiplication loop.
 
     primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {B: Buffer(B_2: Pointer(int8), int8, [16, 16, 16, 16], []),
-                 C: Buffer(C_2: Pointer(int8), int8, [1, 16, 1, 16], []),
+      buffers = {C: Buffer(C_2: Pointer(int8), int8, [1, 16, 1, 16], []),
+                 B: Buffer(B_2: Pointer(int8), int8, [16, 16, 16, 16], []),
                  A: Buffer(A_2: Pointer(int8), int8, [1, 16, 1, 16], [])}
       buffer_map = {A_1: A, B_1: B, C_1: C} {
       attr [C_buf: Pointer(int32)] "storage_scope" = "local.acc_buffer";
diff --git a/docs/_sources/vta/tutorials/optimize/convolution_opt.rst.txt b/docs/_sources/vta/tutorials/optimize/convolution_opt.rst.txt
index 08e1871..0ca92de 100644
--- a/docs/_sources/vta/tutorials/optimize/convolution_opt.rst.txt
+++ b/docs/_sources/vta/tutorials/optimize/convolution_opt.rst.txt
@@ -252,8 +252,8 @@ Those include:
 
     primfn(data_1: handle, kernel_1: handle, res_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {kernel: Buffer(kernel_2: Pointer(int8), int8, [16, 16, 3, 3, 16, 16], []),
-                 res: Buffer(res_2: Pointer(int8), int8, [1, 16, 14, 14, 1, 16], []),
+      buffers = {res: Buffer(res_2: Pointer(int8), int8, [1, 16, 14, 14, 1, 16], []),
+                 kernel: Buffer(kernel_2: Pointer(int8), int8, [16, 16, 3, 3, 16, 16], []),
                  data: Buffer(data_2: Pointer(int8), int8, [1, 16, 14, 14, 1, 16], [])}
       buffer_map = {data_1: data, kernel_1: kernel, res_1: res} {
       attr [data_buf: Pointer(int8)] "storage_scope" = "global";
@@ -448,8 +448,8 @@ below.
 
     primfn(data_1: handle, kernel_1: handle, res_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {kernel: Buffer(kernel_2: Pointer(int8), int8, [16, 16, 3, 3, 16, 16], []),
-                 res: Buffer(res_2: Pointer(int8), int8, [1, 16, 14, 14, 1, 16], []),
+      buffers = {res: Buffer(res_2: Pointer(int8), int8, [1, 16, 14, 14, 1, 16], []),
+                 kernel: Buffer(kernel_2: Pointer(int8), int8, [16, 16, 3, 3, 16, 16], []),
                  data: Buffer(data_2: Pointer(int8), int8, [1, 16, 14, 14, 1, 16], [])}
       buffer_map = {data_1: data, kernel_1: kernel, res_1: res} {
       attr [data_buf: Pointer(int8)] "storage_scope" = "global";
diff --git a/docs/_sources/vta/tutorials/optimize/sg_execution_times.rst.txt b/docs/_sources/vta/tutorials/optimize/sg_execution_times.rst.txt
index 0aa2d74..3a4ed1c 100644
--- a/docs/_sources/vta/tutorials/optimize/sg_execution_times.rst.txt
+++ b/docs/_sources/vta/tutorials/optimize/sg_execution_times.rst.txt
@@ -5,7 +5,7 @@
 
 Computation times
 =================
-**00:03.794** total execution time for **vta_tutorials_optimize** files:
+**00:03.782** total execution time for **vta_tutorials_optimize** files:
 
-- **00:03.265**: :ref:`sphx_glr_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)
-- **00:00.528**: :ref:`sphx_glr_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``)
+- **00:03.232**: :ref:`sphx_glr_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)
+- **00:00.550**: :ref:`sphx_glr_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``)
diff --git a/docs/_sources/vta/tutorials/sg_execution_times.rst.txt b/docs/_sources/vta/tutorials/sg_execution_times.rst.txt
index 23d3cc1..9ddf4a8 100644
--- a/docs/_sources/vta/tutorials/sg_execution_times.rst.txt
+++ b/docs/_sources/vta/tutorials/sg_execution_times.rst.txt
@@ -5,7 +5,7 @@
 
 Computation times
 =================
-**00:00.948** total execution time for **vta_tutorials** files:
+**00:00.989** total execution time for **vta_tutorials** files:
 
-- **00:00.481**: :ref:`sphx_glr_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``)
-- **00:00.467**: :ref:`sphx_glr_vta_tutorials_vta_get_started.py` (``vta_get_started.py``)
+- **00:00.504**: :ref:`sphx_glr_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``)
+- **00:00.485**: :ref:`sphx_glr_vta_tutorials_vta_get_started.py` (``vta_get_started.py``)
diff --git a/docs/_sources/vta/tutorials/vta_get_started.rst.txt b/docs/_sources/vta/tutorials/vta_get_started.rst.txt
index ec845bc..b90d68a 100644
--- a/docs/_sources/vta/tutorials/vta_get_started.rst.txt
+++ b/docs/_sources/vta/tutorials/vta_get_started.rst.txt
@@ -423,8 +423,8 @@ with an :code:`env.alu` pragma.
 
     primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {C: Buffer(C_2: Pointer(int8), int8, [1, 64, 1, 16], []),
-                 B: Buffer(B_2: Pointer(int32), int32, [1, 64, 1, 16], []),
+      buffers = {B: Buffer(B_2: Pointer(int32), int32, [1, 64, 1, 16], []),
+                 C: Buffer(C_2: Pointer(int8), int8, [1, 64, 1, 16], []),
                  A: Buffer(A_2: Pointer(int32), int32, [1, 64, 1, 16], [])}
       buffer_map = {A_1: A, B_1: B, C_1: C} {
       attr [A_buf: Pointer(int32)] "storage_scope" = "local.acc_buffer" {
diff --git a/docs/api/doxygen/crt_2packed__func_8h.html b/docs/api/doxygen/crt_2packed__func_8h.html
index 476cea0..5a0100c 100644
--- a/docs/api/doxygen/crt_2packed__func_8h.html
+++ b/docs/api/doxygen/crt_2packed__func_8h.html
@@ -107,7 +107,7 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 </div><div class="textblock"><div class="dynheader">
 Include dependency graph for packed_func.h:</div>
 <div class="dyncontent">
-<div class="center"><iframe scrolling="no" frameborder="0" src="crt_2packed__func_8h__incl.svg" width="764" height="470"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
+<div class="center"><iframe scrolling="no" frameborder="0" src="crt_2packed__func_8h__incl.svg" width="847" height="470"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
 </div>
 </div>
 </div><div class="textblock"><div class="dynheader">
diff --git a/docs/api/doxygen/crt_2packed__func_8h__incl.svg b/docs/api/doxygen/crt_2packed__func_8h__incl.svg
index 0c1c705..5b3ffa8 100644
--- a/docs/api/doxygen/crt_2packed__func_8h__incl.svg
+++ b/docs/api/doxygen/crt_2packed__func_8h__incl.svg
@@ -4,11 +4,11 @@
 <!-- Generated by graphviz version 2.38.0 (20140413.2041)
  -->
 <!-- Title: include/tvm/runtime/crt/packed_func.h Pages: 1 -->
-<svg width="573pt" height="352pt"
- viewBox="0.00 0.00 573.00 352.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<svg width="635pt" height="352pt"
+ viewBox="0.00 0.00 635.00 352.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
 <g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 348)">
 <title>include/tvm/runtime/crt/packed_func.h</title>
-<polygon fill="white" stroke="none" points="-4,4 -4,-348 569,-348 569,4 -4,4"/>
+<polygon fill="white" stroke="none" points="-4,4 -4,-348 631,-348 631,4 -4,4"/>
 <!-- Node1 -->
 <g id="node1" class="node"><title>Node1</title>
 <polygon fill="#bfbfbf" stroke="black" points="164.5,-313.5 164.5,-343.5 277.5,-343.5 277.5,-313.5 164.5,-313.5"/>
@@ -48,16 +48,16 @@
 <!-- Node5 -->
 <g id="node5" class="node"><title>Node5</title>
 <g id="a_node5"><a xlink:href="c__runtime__api_8h.html" target="_top" xlink:title="tvm/runtime/c_runtime\l_api.h">
-<polygon fill="white" stroke="black" points="203.5,-56.5 203.5,-86.5 330.5,-86.5 330.5,-56.5 203.5,-56.5"/>
-<text text-anchor="start" x="211.5" y="-74.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/c_runtime</text>
-<text text-anchor="middle" x="267" y="-63.5" font-family="Helvetica,sans-Serif" font-size="10.00">_api.h</text>
+<polygon fill="white" stroke="black" points="250.5,-56.5 250.5,-86.5 377.5,-86.5 377.5,-56.5 250.5,-56.5"/>
+<text text-anchor="start" x="258.5" y="-74.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/c_runtime</text>
+<text text-anchor="middle" x="314" y="-63.5" font-family="Helvetica,sans-Serif" font-size="10.00">_api.h</text>
 </a>
 </g>
 </g>
 <!-- Node1&#45;&gt;Node5 -->
 <g id="edge4" class="edge"><title>Node1&#45;&gt;Node5</title>
-<path fill="none" stroke="midnightblue" d="M220.414,-313.249C219.428,-279.427 219.279,-191.981 240,-123 242.826,-113.592 247.493,-103.922 252.159,-95.5956"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="255.253,-97.2397 257.328,-86.85 249.227,-93.6779 255.253,-97.2397"/>
+<path fill="none" stroke="midnightblue" d="M218.464,-313.373C212.938,-278.321 203.084,-185.675 240,-123 247.559,-110.167 259.817,-99.8685 272.201,-92.0042"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="274.343,-94.8041 281.175,-86.7065 270.784,-88.7761 274.343,-94.8041"/>
 </g>
 <!-- Node9 -->
 <g id="node9" class="node"><title>Node9</title>
@@ -75,55 +75,55 @@
 <!-- Node13 -->
 <g id="node13" class="node"><title>Node13</title>
 <g id="a_node13"><a xlink:href="platform_8h.html" target="_top" xlink:title="The virtual memory manager for micro&#45;controllers. ">
-<polygon fill="white" stroke="black" points="421,-196 421,-215 565,-215 565,-196 421,-196"/>
-<text text-anchor="middle" x="493" y="-203" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/crt/platform.h</text>
+<polygon fill="white" stroke="black" points="471,-196 471,-215 615,-215 615,-196 471,-196"/>
+<text text-anchor="middle" x="543" y="-203" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/crt/platform.h</text>
 </a>
 </g>
 </g>
 <!-- Node1&#45;&gt;Node13 -->
 <g id="edge14" class="edge"><title>Node1&#45;&gt;Node13</title>
-<path fill="none" stroke="midnightblue" d="M277.517,-316.838C313.249,-308.792 359.74,-295.869 398,-277 427.76,-262.323 457.794,-238.092 475.904,-222.212"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="478.662,-224.44 483.782,-215.165 473.995,-219.223 478.662,-224.44"/>
+<path fill="none" stroke="midnightblue" d="M277.795,-314.479C313.011,-305.629 358.847,-292.693 398,-277 415.715,-269.9 418.922,-265.519 436,-257 462.62,-243.721 493.364,-229.305 515.047,-219.288"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="516.706,-222.377 524.325,-215.015 513.778,-216.019 516.706,-222.377"/>
 </g>
-<!-- Node14 -->
-<g id="node14" class="node"><title>Node14</title>
+<!-- Node15 -->
+<g id="node15" class="node"><title>Node15</title>
 <polygon fill="white" stroke="#bfbfbf" points="445,-257.5 445,-276.5 517,-276.5 517,-257.5 445,-257.5"/>
 <text text-anchor="middle" x="481" y="-264.5" font-family="Helvetica,sans-Serif" font-size="10.00">crt_config.h</text>
 </g>
-<!-- Node1&#45;&gt;Node14 -->
-<g id="edge16" class="edge"><title>Node1&#45;&gt;Node14</title>
+<!-- Node1&#45;&gt;Node15 -->
+<g id="edge18" class="edge"><title>Node1&#45;&gt;Node15</title>
 <path fill="none" stroke="midnightblue" d="M277.737,-314.516C324.868,-303.73 391.074,-288.579 434.964,-278.535"/>
 <polygon fill="midnightblue" stroke="midnightblue" points="435.828,-281.928 444.795,-276.285 434.266,-275.105 435.828,-281.928"/>
 </g>
 <!-- Node6 -->
 <g id="node6" class="node"><title>Node6</title>
-<polygon fill="white" stroke="#bfbfbf" points="131,-0.5 131,-19.5 221,-19.5 221,-0.5 131,-0.5"/>
-<text text-anchor="middle" x="176" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00">dlpack/dlpack.h</text>
+<polygon fill="white" stroke="#bfbfbf" points="224,-0.5 224,-19.5 314,-19.5 314,-0.5 224,-0.5"/>
+<text text-anchor="middle" x="269" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00">dlpack/dlpack.h</text>
 </g>
 <!-- Node5&#45;&gt;Node6 -->
 <g id="edge5" class="edge"><title>Node5&#45;&gt;Node6</title>
-<path fill="none" stroke="midnightblue" d="M245.434,-56.3993C231.12,-47.0402 212.388,-34.7924 197.989,-25.3771"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="199.526,-22.2009 189.241,-19.6578 195.696,-28.0597 199.526,-22.2009"/>
+<path fill="none" stroke="midnightblue" d="M303.336,-56.3993C296.864,-47.8424 288.566,-36.8708 281.749,-27.857"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="284.372,-25.5223 275.548,-19.6578 278.789,-29.7449 284.372,-25.5223"/>
 </g>
 <!-- Node7 -->
 <g id="node7" class="node"><title>Node7</title>
-<polygon fill="white" stroke="#bfbfbf" points="239.5,-0.5 239.5,-19.5 294.5,-19.5 294.5,-0.5 239.5,-0.5"/>
-<text text-anchor="middle" x="267" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00">stddef.h</text>
+<polygon fill="white" stroke="#bfbfbf" points="459.5,-0.5 459.5,-19.5 514.5,-19.5 514.5,-0.5 459.5,-0.5"/>
+<text text-anchor="middle" x="487" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00">stddef.h</text>
 </g>
 <!-- Node5&#45;&gt;Node7 -->
 <g id="edge6" class="edge"><title>Node5&#45;&gt;Node7</title>
-<path fill="none" stroke="midnightblue" d="M267,-56.3993C267,-48.4664 267,-38.458 267,-29.8583"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="270.5,-29.6577 267,-19.6578 263.5,-29.6578 270.5,-29.6577"/>
+<path fill="none" stroke="midnightblue" d="M354.999,-56.3993C384.632,-46.2074 424.226,-32.5898 452.298,-22.935"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="453.761,-26.133 462.08,-19.5709 451.485,-19.5136 453.761,-26.133"/>
 </g>
 <!-- Node8 -->
 <g id="node8" class="node"><title>Node8</title>
-<polygon fill="white" stroke="#bfbfbf" points="312.5,-0.5 312.5,-19.5 365.5,-19.5 365.5,-0.5 312.5,-0.5"/>
-<text text-anchor="middle" x="339" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00">stdint.h</text>
+<polygon fill="white" stroke="#bfbfbf" points="332.5,-0.5 332.5,-19.5 385.5,-19.5 385.5,-0.5 332.5,-0.5"/>
+<text text-anchor="middle" x="359" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00">stdint.h</text>
 </g>
 <!-- Node5&#45;&gt;Node8 -->
 <g id="edge7" class="edge"><title>Node5&#45;&gt;Node8</title>
-<path fill="none" stroke="midnightblue" d="M284.063,-56.3993C295.065,-47.3076 309.365,-35.4899 320.618,-26.1909"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="323.044,-28.726 328.523,-19.6578 318.585,-23.33 323.044,-28.726"/>
+<path fill="none" stroke="midnightblue" d="M324.664,-56.3993C331.136,-47.8424 339.434,-36.8708 346.251,-27.857"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="349.211,-29.7449 352.452,-19.6578 343.628,-25.5223 349.211,-29.7449"/>
 </g>
 <!-- Node10 -->
 <g id="node10" class="node"><title>Node10</title>
@@ -136,51 +136,66 @@
 </g>
 <!-- Node9&#45;&gt;Node10 -->
 <g id="edge9" class="edge"><title>Node9&#45;&gt;Node10</title>
-<path fill="none" stroke="midnightblue" d="M307.734,-257.427C298.034,-249.275 284.721,-236.046 279,-221 274.104,-208.122 274.889,-203.15 279,-190 282.166,-179.875 288.154,-169.995 294.294,-161.698"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="297.177,-163.695 300.636,-153.681 291.687,-159.352 297.177,-163.695"/>
+<path fill="none" stroke="midnightblue" d="M318.656,-257.305C317.905,-238.298 316.094,-192.47 314.964,-163.9"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="318.461,-163.742 314.569,-153.888 311.466,-164.018 318.461,-163.742"/>
 </g>
 <!-- Node11 -->
 <g id="node11" class="node"><title>Node11</title>
 <g id="a_node11"><a xlink:href="func__registry_8h.html" target="_top" xlink:title="Defines generic string&#45;based function lookup structs. ">
-<polygon fill="white" stroke="black" points="287.5,-190.5 287.5,-220.5 402.5,-220.5 402.5,-190.5 287.5,-190.5"/>
-<text text-anchor="start" x="295.5" y="-208.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/crt/func</text>
-<text text-anchor="middle" x="345" y="-197.5" font-family="Helvetica,sans-Serif" font-size="10.00">_registry.h</text>
+<polygon fill="white" stroke="black" points="337.5,-190.5 337.5,-220.5 452.5,-220.5 452.5,-190.5 337.5,-190.5"/>
+<text text-anchor="start" x="345.5" y="-208.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/crt/func</text>
+<text text-anchor="middle" x="395" y="-197.5" font-family="Helvetica,sans-Serif" font-size="10.00">_registry.h</text>
 </a>
 </g>
 </g>
 <!-- Node9&#45;&gt;Node11 -->
 <g id="edge11" class="edge"><title>Node9&#45;&gt;Node11</title>
-<path fill="none" stroke="midnightblue" d="M322.725,-257.475C325.884,-250.245 330.569,-239.525 334.828,-229.779"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="338.075,-231.088 338.872,-220.523 331.661,-228.285 338.075,-231.088"/>
+<path fill="none" stroke="midnightblue" d="M329.889,-257.475C340.086,-249.492 355.717,-237.254 369.109,-226.77"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="371.372,-229.443 377.088,-220.523 367.057,-223.932 371.372,-229.443"/>
 </g>
 <!-- Node10&#45;&gt;Node5 -->
 <g id="edge10" class="edge"><title>Node10&#45;&gt;Node5</title>
-<path fill="none" stroke="midnightblue" d="M303.802,-123.396C297.766,-115.049 289.985,-104.287 283.133,-94.8113"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="285.874,-92.6292 277.178,-86.5765 280.202,-96.7309 285.874,-92.6292"/>
+<path fill="none" stroke="midnightblue" d="M314,-123.396C314,-115.645 314,-105.812 314,-96.8601"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="317.5,-96.5765 314,-86.5765 310.5,-96.5765 317.5,-96.5765"/>
 </g>
 <!-- Node11&#45;&gt;Node10 -->
 <g id="edge12" class="edge"><title>Node11&#45;&gt;Node10</title>
-<path fill="none" stroke="midnightblue" d="M338.273,-190.396C334.414,-182.304 329.473,-171.944 325.057,-162.685"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="328.177,-161.096 320.713,-153.577 321.859,-164.109 328.177,-161.096"/>
+<path fill="none" stroke="midnightblue" d="M377.424,-190.396C366.385,-181.538 351.958,-169.96 339.653,-160.086"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="341.531,-157.105 331.541,-153.577 337.15,-162.565 341.531,-157.105"/>
 </g>
 <!-- Node12 -->
 <g id="node12" class="node"><title>Node12</title>
 <g id="a_node12"><a xlink:href="error__codes_8h.html" target="_top" xlink:title="Defines integral error codes returned by the CRT. ">
-<polygon fill="white" stroke="black" points="415,-123.5 415,-153.5 533,-153.5 533,-123.5 415,-123.5"/>
-<text text-anchor="start" x="423" y="-141.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/crt/error</text>
-<text text-anchor="middle" x="474" y="-130.5" font-family="Helvetica,sans-Serif" font-size="10.00">_codes.h</text>
+<polygon fill="white" stroke="black" points="397,-123.5 397,-153.5 515,-153.5 515,-123.5 397,-123.5"/>
+<text text-anchor="start" x="405" y="-141.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/crt/error</text>
+<text text-anchor="middle" x="456" y="-130.5" font-family="Helvetica,sans-Serif" font-size="10.00">_codes.h</text>
 </a>
 </g>
 </g>
 <!-- Node11&#45;&gt;Node12 -->
 <g id="edge13" class="edge"><title>Node11&#45;&gt;Node12</title>
-<path fill="none" stroke="midnightblue" d="M372.991,-190.396C391.755,-180.941 416.665,-168.39 437.053,-158.117"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="438.708,-161.202 446.064,-153.577 435.558,-154.951 438.708,-161.202"/>
+<path fill="none" stroke="midnightblue" d="M408.236,-190.396C416.31,-181.793 426.789,-170.626 435.875,-160.945"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="438.499,-163.264 442.79,-153.577 433.394,-158.473 438.499,-163.264"/>
+</g>
+<!-- Node13&#45;&gt;Node7 -->
+<g id="edge16" class="edge"><title>Node13&#45;&gt;Node7</title>
+<path fill="none" stroke="midnightblue" d="M541.255,-195.87C538.174,-180.923 531.392,-149.295 524,-123 514.613,-89.6082 501.45,-51.3722 493.617,-29.3207"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="496.814,-27.8659 490.147,-19.6312 490.223,-30.2258 496.814,-27.8659"/>
 </g>
 <!-- Node13&#45;&gt;Node12 -->
-<g id="edge15" class="edge"><title>Node13&#45;&gt;Node12</title>
-<path fill="none" stroke="midnightblue" d="M490.438,-195.734C488.018,-187.456 484.278,-174.662 480.977,-163.367"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="484.283,-162.205 478.118,-153.589 477.565,-164.169 484.283,-162.205"/>
+<g id="edge17" class="edge"><title>Node13&#45;&gt;Node12</title>
+<path fill="none" stroke="midnightblue" d="M531.267,-195.734C518.944,-186.527 499.142,-171.733 482.942,-159.629"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="484.964,-156.77 474.858,-153.589 480.774,-162.378 484.964,-156.77"/>
+</g>
+<!-- Node14 -->
+<g id="node14" class="node"><title>Node14</title>
+<polygon fill="white" stroke="#bfbfbf" points="571,-129 571,-148 627,-148 627,-129 571,-129"/>
+<text text-anchor="middle" x="599" y="-136" font-family="Helvetica,sans-Serif" font-size="10.00">stdarg.h</text>
+</g>
+<!-- Node13&#45;&gt;Node14 -->
+<g id="edge15" class="edge"><title>Node13&#45;&gt;Node14</title>
+<path fill="none" stroke="midnightblue" d="M550.552,-195.734C559.285,-185.598 573.852,-168.69 584.737,-156.056"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="587.692,-157.988 591.567,-148.127 582.388,-153.419 587.692,-157.988"/>
 </g>
 </g>
 </svg>
diff --git a/docs/api/doxygen/device__api_8h.html b/docs/api/doxygen/device__api_8h.html
index 8e1e298..b26d7aa 100644
--- a/docs/api/doxygen/device__api_8h.html
+++ b/docs/api/doxygen/device__api_8h.html
@@ -150,8 +150,20 @@ Functions</h2></td></tr>
 <tr class="memitem:a9109e4efe269213052ed6a94853c0c00"><td class="memItemLeft" align="right" valign="top">const char *&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="namespacetvm_1_1runtime.html#a9109e4efe269213052ed6a94853c0c00">tvm::runtime::DeviceName</a> (int type)</td></tr>
 <tr class="memdesc:a9109e4efe269213052ed6a94853c0c00"><td class="mdescLeft">&#160;</td><td class="mdescRight">The name of Device API factory.  <a href="namespacetvm_1_1runtime.html#a9109e4efe269213052ed6a94853c0c00">More...</a><br /></td></tr>
 <tr class="separator:a9109e4efe269213052ed6a94853c0c00"><td class="memSeparator" colspan="2">&#160;</td></tr>
-<tr class="memitem:a0ce391c2492dfc73b5c6c6459693c6a6"><td class="memItemLeft" align="right" valign="top">std::ostream &amp;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="namespacetvm_1_1runtime.html#a0ce391c2492dfc73b5c6c6459693c6a6">tvm::runtime::operator&lt;&lt;</a> (std::ostream &amp;os, DLContext ctx)</td></tr>
-<tr class="separator:a0ce391c2492dfc73b5c6c6459693c6a6"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:af2a8f6198750ead46feeb72ef4f9de4c"><td class="memItemLeft" align="right" valign="top">bool&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="namespacetvm_1_1runtime.html#af2a8f6198750ead46feeb72ef4f9de4c">tvm::runtime::IsRPCSessionContext</a> (<a class="el" href="c__runtime__api_8h.html#a9363bb701f16ce5bbb381f2a013d25b4">TVMContext</a> ctx)</td></tr>
+<tr class="memdesc:af2a8f6198750ead46feeb72ef4f9de4c"><td class="mdescLeft">&#160;</td><td class="mdescRight">Return true if a TVMContext is owned by an RPC session.  <a href="namespacetvm_1_1runtime.html#af2a8f6198750ead46feeb72ef4f9de4c">More...</a><br /></td></tr>
+<tr class="separator:af2a8f6198750ead46feeb72ef4f9de4c"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:a9ac54b0d7a3e3c22fd0ddef0a731cfd5"><td class="memItemLeft" align="right" valign="top">int&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="namespacetvm_1_1runtime.html#a9ac54b0d7a3e3c22fd0ddef0a731cfd5">tvm::runtime::GetRPCSessionIndex</a> (<a class="el" href="c__runtime__api_8h.html#a9363bb701f16ce5bbb381f2a013d25b4">TVMContext</a> ctx)</td></tr>
+<tr class="memdesc:a9ac54b0d7a3e3c22fd0ddef0a731cfd5"><td class="mdescLeft">&#160;</td><td class="mdescRight">Return the RPCSessTable index of the RPC Session that owns this context.  <a href="namespacetvm_1_1runtime.html#a9ac54b0d7a3e3c22fd0ddef0a731cfd5">More...</a><br /></td></tr>
+<tr class="separator:a9ac54b0d7a3e3c22fd0ddef0a731cfd5"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:aea8fddcdd83b2bce46fbff699f43eee6"><td class="memItemLeft" align="right" valign="top"><a class="el" href="c__runtime__api_8h.html#a9363bb701f16ce5bbb381f2a013d25b4">TVMContext</a>&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="namespacetvm_1_1runtime.html#aea8fddcdd83b2bce46fbff699f43eee6">tvm::runtime::RemoveRPCSessionMask</a> (<a class="el" href="c__runtime__api_8h.html#a9363bb701f16ce5bbb381f2a013d25b4">TVMContext</a> ctx)</td></tr>
+<tr class="memdesc:aea8fddcdd83b2bce46fbff699f43eee6"><td class="mdescLeft">&#160;</td><td class="mdescRight">Remove the RPC session mask from a TVMContext. RPC clients typically do this when encoding a TVMContext for transmission to an RPC remote. On the wire, RPCContext are expected to be valid on the server without interpretation.  <a href="namespacetvm_1_1runtime.html#aea8fddcdd83b2bce46fbff699f43eee6">More...</a><br /></td></tr>
+<tr class="separator:aea8fddcdd83b2bce46fbff699f43eee6"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:a3578b5c107d5e8ee58b73a2a776e19f1"><td class="memItemLeft" align="right" valign="top">std::ostream &amp;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="namespacetvm_1_1runtime.html#a3578b5c107d5e8ee58b73a2a776e19f1">tvm::runtime::operator&lt;&lt;</a> (std::ostream &amp;os, DLContext ctx)</td></tr>
+<tr class="separator:a3578b5c107d5e8ee58b73a2a776e19f1"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:a409b50f5d118a11f7a9f234498be7c27"><td class="memItemLeft" align="right" valign="top"><a class="el" href="c__runtime__api_8h.html#a9363bb701f16ce5bbb381f2a013d25b4">TVMContext</a>&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="namespacetvm_1_1runtime.html#a409b50f5d118a11f7a9f234498be7c27">tvm::runtime::AddRPCSessionMask</a> (<a class="el" href="c__runtime__api_8h.html#a9363bb701f16ce5bbb381f2a013d25b4">TVMContext</a> ctx, int session_table_inde [...]
+<tr class="memdesc:a409b50f5d118a11f7a9f234498be7c27"><td class="mdescLeft">&#160;</td><td class="mdescRight">Add a RPC session mask to a TVMContext. RPC clients typically do this when decoding a TVMContext received from a RPC remote.  <a href="namespacetvm_1_1runtime.html#a409b50f5d118a11f7a9f234498be7c27">More...</a><br /></td></tr>
+<tr class="separator:a409b50f5d118a11f7a9f234498be7c27"><td class="memSeparator" colspan="2">&#160;</td></tr>
 </table><table class="memberdecls">
 <tr class="heading"><td colspan="2"><h2 class="groupheader"><a name="var-members"></a>
 Variables</h2></td></tr>
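
The four functions added to device_api.h above manage the RPC-session tag carried in a TVMContext: IsRPCSessionContext tests for it, GetRPCSessionIndex recovers the owning RPCSessTable slot, and Add/RemoveRPCSessionMask apply or strip the tag when a context crosses the RPC boundary. A minimal client-side sketch, assuming only the signatures listed in the table above; the wrapper names DecodeFromWire/EncodeForWire and the routing comment are illustrative, not part of the API:

    #include <tvm/runtime/c_runtime_api.h>
    #include <tvm/runtime/device_api.h>

    // Illustrative: when a context arrives from an RPC remote, record which
    // client-side session owns it by folding the session index into the context.
    TVMContext DecodeFromWire(TVMContext remote_ctx, int session_table_index) {
      return tvm::runtime::AddRPCSessionMask(remote_ctx, session_table_index);
    }

    // Illustrative: before sending a context back to the remote, strip the mask
    // so the server sees a plain device context it can use without interpretation.
    TVMContext EncodeForWire(TVMContext ctx) {
      if (tvm::runtime::IsRPCSessionContext(ctx)) {
        int session = tvm::runtime::GetRPCSessionIndex(ctx);
        (void)session;  // a real client would route the call through this session
        ctx = tvm::runtime::RemoveRPCSessionMask(ctx);
      }
      return ctx;
    }
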
diff --git a/docs/api/doxygen/device__api_8h_source.html b/docs/api/doxygen/device__api_8h_source.html
index 7e8ce56..f7fbf04 100644
--- a/docs/api/doxygen/device__api_8h_source.html
+++ b/docs/api/doxygen/device__api_8h_source.html
@@ -89,7 +89,7 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <div class="title">device_api.h</div>  </div>
 </div><!--header-->
 <div class="contents">
-<a href="device__api_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno">    1</span>&#160;<span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno">    2</span>&#160;<span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno">    3</span>&#160;<span class="comment"> * or more [...]
+<a href="device__api_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno">    1</span>&#160;<span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno">    2</span>&#160;<span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno">    3</span>&#160;<span class="comment"> * or more [...]
 <div class="ttc" id="classtvm_1_1runtime_1_1TVMRetValue_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1TVMRetValue.html">tvm::runtime::TVMRetValue</a></div><div class="ttdoc">Return Value container, Unlike TVMArgValue, which only holds reference and do not delete the underlyi...</div><div class="ttdef"><b>Definition:</b> packed_func.h:571</div></div>
 <div class="ttc" id="namespacetvm_1_1runtime_html_a46fef1ca0ccc05473e9bb0a8c6b66619a69fe0643750b0c49e8b8aefb1cada337"><div class="ttname"><a href="namespacetvm_1_1runtime.html#a46fef1ca0ccc05473e9bb0a8c6b66619a69fe0643750b0c49e8b8aefb1cada337">tvm::runtime::kApiVersion</a></div><div class="ttdef"><b>Definition:</b> device_api.h:49</div></div>
 <div class="ttc" id="c__runtime__api_8h_html_a57cbccb14c35a0e62dbc1b911188fcefacdc33f5efa9ddabe89e886c28d1ff65b"><div class="ttname"><a href="c__runtime__api_8h.html#a57cbccb14c35a0e62dbc1b911188fcefacdc33f5efa9ddabe89e886c28d1ff65b">kDLSDAccel</a></div><div class="ttdef"><b>Definition:</b> c_runtime_api.h:81</div></div>
@@ -97,7 +97,9 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <div class="ttc" id="namespacetvm_html"><div class="ttname"><a href="namespacetvm.html">tvm</a></div><div class="ttdef"><b>Definition:</b> analyzer.h:36</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1DeviceAPI_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1DeviceAPI.html">tvm::runtime::DeviceAPI</a></div><div class="ttdoc">TVM Runtime Device API, abstracts the device specific interface for memory management. </div><div class="ttdef"><b>Definition:</b> device_api.h:65</div></div>
 <div class="ttc" id="c__runtime__api_8h_html_a57cbccb14c35a0e62dbc1b911188fcefad77aa5af5411ed9f3719f48af1f04b02"><div class="ttname"><a href="c__runtime__api_8h.html#a57cbccb14c35a0e62dbc1b911188fcefad77aa5af5411ed9f3719f48af1f04b02">kDLAOCL</a></div><div class="ttdef"><b>Definition:</b> c_runtime_api.h:80</div></div>
+<div class="ttc" id="namespacetvm_1_1runtime_html_aea8fddcdd83b2bce46fbff699f43eee6"><div class="ttname"><a href="namespacetvm_1_1runtime.html#aea8fddcdd83b2bce46fbff699f43eee6">tvm::runtime::RemoveRPCSessionMask</a></div><div class="ttdeci">TVMContext RemoveRPCSessionMask(TVMContext ctx)</div><div class="ttdoc">Remove the RPC session mask from a TVMContext. RPC clients typically do this when encoding a TVMConte...</div><div class="ttdef"><b>Definition:</b> device_api.h:264</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1DeviceAPI_html_ab35f07aacd4717465f8912aeded7001c"><div class="ttname"><a href="classtvm_1_1runtime_1_1DeviceAPI.html#ab35f07aacd4717465f8912aeded7001c">tvm::runtime::DeviceAPI::NeedSetDeviceContext</a></div><div class="ttdeci">static bool NeedSetDeviceContext(int device_type)</div><div class="ttdoc">Whether a certian device type requires set device context before launching the kernel function...</div><div class="ttdef"><b>Definition:</b> device [...]
+<div class="ttc" id="namespacetvm_1_1runtime_html_af2a8f6198750ead46feeb72ef4f9de4c"><div class="ttname"><a href="namespacetvm_1_1runtime.html#af2a8f6198750ead46feeb72ef4f9de4c">tvm::runtime::IsRPCSessionContext</a></div><div class="ttdeci">bool IsRPCSessionContext(TVMContext ctx)</div><div class="ttdoc">Return true if a TVMContext is owned by an RPC session. </div><div class="ttdef"><b>Definition:</b> device_api.h:246</div></div>
 <div class="ttc" id="c__runtime__api_8h_html_ab1d5f6b7945e1410602a8a057fda5757"><div class="ttname"><a href="c__runtime__api_8h.html#ab1d5f6b7945e1410602a8a057fda5757">TVMStreamHandle</a></div><div class="ttdeci">void * TVMStreamHandle</div><div class="ttdoc">The stream that is specific to device can be NULL, which indicates the default one. </div><div class="ttdef"><b>Definition:</b> c_runtime_api.h:172</div></div>
 <div class="ttc" id="namespacetvm_1_1runtime_html_a48cbe06e6c95ca6fabc20dd1cbacc2c9"><div class="ttname"><a href="namespacetvm_1_1runtime.html#a48cbe06e6c95ca6fabc20dd1cbacc2c9">tvm::runtime::kRPCSessMask</a></div><div class="ttdeci">constexpr int kRPCSessMask</div><div class="ttdoc">The device type bigger than this is RPC device. </div><div class="ttdef"><b>Definition:</b> device_api.h:200</div></div>
 <div class="ttc" id="c__runtime__api_8h_html_a57cbccb14c35a0e62dbc1b911188fcefa54e2a159c9e3f421d9a3093bb5def358"><div class="ttname"><a href="c__runtime__api_8h.html#a57cbccb14c35a0e62dbc1b911188fcefa54e2a159c9e3f421d9a3093bb5def358">kDLWebGPU</a></div><div class="ttdef"><b>Definition:</b> c_runtime_api.h:85</div></div>
@@ -106,6 +108,7 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <div class="ttc" id="namespacetvm_1_1runtime_html_a8f5819cabea098a1818cf7cda40fdb1f"><div class="ttname"><a href="namespacetvm_1_1runtime.html#a8f5819cabea098a1818cf7cda40fdb1f">tvm::runtime::kTempAllocaAlignment</a></div><div class="ttdeci">constexpr int kTempAllocaAlignment</div><div class="ttdoc">Number of bytes each allocation must align to in temporary allocation. </div><div class="ttdef"><b>Definition:</b> device_api.h:56</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1DeviceAPI_html_a2bf972e88ccbb2f896b061730655cf46"><div class="ttname"><a href="classtvm_1_1runtime_1_1DeviceAPI.html#a2bf972e88ccbb2f896b061730655cf46">tvm::runtime::DeviceAPI::SetStream</a></div><div class="ttdeci">virtual void SetStream(TVMContext ctx, TVMStreamHandle stream)</div><div class="ttdoc">Set the stream. </div><div class="ttdef"><b>Definition:</b> device_api.h:141</div></div>
 <div class="ttc" id="namespacetvm_1_1runtime_html_a46fef1ca0ccc05473e9bb0a8c6b66619afae6abc73ecd8ccc7f556da2f56e40eb"><div class="ttname"><a href="namespacetvm_1_1runtime.html#a46fef1ca0ccc05473e9bb0a8c6b66619afae6abc73ecd8ccc7f556da2f56e40eb">tvm::runtime::kExist</a></div><div class="ttdef"><b>Definition:</b> device_api.h:38</div></div>
+<div class="ttc" id="namespacetvm_1_1runtime_html_a409b50f5d118a11f7a9f234498be7c27"><div class="ttname"><a href="namespacetvm_1_1runtime.html#a409b50f5d118a11f7a9f234498be7c27">tvm::runtime::AddRPCSessionMask</a></div><div class="ttdeci">TVMContext AddRPCSessionMask(TVMContext ctx, int session_table_index)</div><div class="ttdoc">Add a RPC session mask to a TVMContext. RPC clients typically do this when decoding a TVMContext rece...</div><div class="ttdef"><b>Definition:</b> device_api. [...]
 <div class="ttc" id="c__runtime__api_8h_html_a57cbccb14c35a0e62dbc1b911188fcefa3357ff71d095bc9bdbe5116599bade5f"><div class="ttname"><a href="c__runtime__api_8h.html#a57cbccb14c35a0e62dbc1b911188fcefa3357ff71d095bc9bdbe5116599bade5f">kDLMicroDev</a></div><div class="ttdef"><b>Definition:</b> c_runtime_api.h:83</div></div>
 <div class="ttc" id="namespacetvm_1_1runtime_html_a46fef1ca0ccc05473e9bb0a8c6b66619adff7742765a9f6f50973675bf34ad264"><div class="ttname"><a href="namespacetvm_1_1runtime.html#a46fef1ca0ccc05473e9bb0a8c6b66619adff7742765a9f6f50973675bf34ad264">tvm::runtime::kMaxSharedMemoryPerBlock</a></div><div class="ttdef"><b>Definition:</b> device_api.h:41</div></div>
 <div class="ttc" id="namespacetvm_1_1runtime_html_a46fef1ca0ccc05473e9bb0a8c6b66619a90ebfaf325917db841553c65ce2ae630"><div class="ttname"><a href="namespacetvm_1_1runtime.html#a46fef1ca0ccc05473e9bb0a8c6b66619a90ebfaf325917db841553c65ce2ae630">tvm::runtime::kMaxClockRate</a></div><div class="ttdef"><b>Definition:</b> device_api.h:44</div></div>
@@ -118,6 +121,7 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <div class="ttc" id="c__runtime__api_8h_html_a9363bb701f16ce5bbb381f2a013d25b4"><div class="ttname"><a href="c__runtime__api_8h.html#a9363bb701f16ce5bbb381f2a013d25b4">TVMContext</a></div><div class="ttdeci">DLContext TVMContext</div><div class="ttdoc">The Device information, abstract away common device types. </div><div class="ttdef"><b>Definition:</b> c_runtime_api.h:135</div></div>
 <div class="ttc" id="namespacetvm_1_1runtime_html_a46fef1ca0ccc05473e9bb0a8c6b66619"><div class="ttname"><a href="namespacetvm_1_1runtime.html#a46fef1ca0ccc05473e9bb0a8c6b66619">tvm::runtime::DeviceAttrKind</a></div><div class="ttdeci">DeviceAttrKind</div><div class="ttdoc">the query type into GetAttr </div><div class="ttdef"><b>Definition:</b> device_api.h:37</div></div>
 <div class="ttc" id="namespacetvm_1_1runtime_html_a46fef1ca0ccc05473e9bb0a8c6b66619a0ac04959bdda893a53c05024409de9ca"><div class="ttname"><a href="namespacetvm_1_1runtime.html#a46fef1ca0ccc05473e9bb0a8c6b66619a0ac04959bdda893a53c05024409de9ca">tvm::runtime::kDeviceName</a></div><div class="ttdef"><b>Definition:</b> device_api.h:43</div></div>
+<div class="ttc" id="namespacetvm_1_1runtime_html_a9ac54b0d7a3e3c22fd0ddef0a731cfd5"><div class="ttname"><a href="namespacetvm_1_1runtime.html#a9ac54b0d7a3e3c22fd0ddef0a731cfd5">tvm::runtime::GetRPCSessionIndex</a></div><div class="ttdeci">int GetRPCSessionIndex(TVMContext ctx)</div><div class="ttdoc">Return the RPCSessTable index of the RPC Session that owns this context. </div><div class="ttdef"><b>Definition:</b> device_api.h:252</div></div>
 <div class="ttc" id="namespacetvm_1_1runtime_html_a46fef1ca0ccc05473e9bb0a8c6b66619a463bdbf9ce7f9dc87a73d0b787da43cd"><div class="ttname"><a href="namespacetvm_1_1runtime.html#a46fef1ca0ccc05473e9bb0a8c6b66619a463bdbf9ce7f9dc87a73d0b787da43cd">tvm::runtime::kMultiProcessorCount</a></div><div class="ttdef"><b>Definition:</b> device_api.h:45</div></div>
 <div class="ttc" id="c__runtime__api_8h_html"><div class="ttname"><a href="c__runtime__api_8h.html">c_runtime_api.h</a></div></div>
 <div class="ttc" id="namespacetvm_1_1runtime_html_a46fef1ca0ccc05473e9bb0a8c6b66619ac1e3197d589b7cbc7464ea7269f34357"><div class="ttname"><a href="namespacetvm_1_1runtime.html#a46fef1ca0ccc05473e9bb0a8c6b66619ac1e3197d589b7cbc7464ea7269f34357">tvm::runtime::kMaxRegistersPerBlock</a></div><div class="ttdef"><b>Definition:</b> device_api.h:47</div></div>
diff --git a/docs/api/doxygen/globals_func.html b/docs/api/doxygen/globals_func.html
index 69d21c3..5efa335 100644
--- a/docs/api/doxygen/globals_func.html
+++ b/docs/api/doxygen/globals_func.html
@@ -281,6 +281,9 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <li>TVMPlatformAbort()
 : <a class="el" href="platform_8h.html#a47980e4ea2182978f94ca87cc15ca0c8">platform.h</a>
 </li>
+<li>TVMPlatformFormatMessage()
+: <a class="el" href="platform_8h.html#a6dfecb024ace62e724817f90b6407285">platform.h</a>
+</li>
 <li>TVMSetStream()
 : <a class="el" href="c__runtime__api_8h.html#ac414ed248ddb1bfb561685bba3de5e89">c_runtime_api.h</a>
 </li>
diff --git a/docs/api/doxygen/globals_t.html b/docs/api/doxygen/globals_t.html
index a2ea479..df43a3d 100644
--- a/docs/api/doxygen/globals_t.html
+++ b/docs/api/doxygen/globals_t.html
@@ -543,6 +543,9 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <li>TVMPlatformAbort()
 : <a class="el" href="platform_8h.html#a47980e4ea2182978f94ca87cc15ca0c8">platform.h</a>
 </li>
+<li>TVMPlatformFormatMessage()
+: <a class="el" href="platform_8h.html#a6dfecb024ace62e724817f90b6407285">platform.h</a>
+</li>
 <li>TVMRetValueHandle
 : <a class="el" href="c__runtime__api_8h.html#a6cd1076476117e74454f67931c2da1d4">c_runtime_api.h</a>
 </li>
diff --git a/docs/api/doxygen/graph__runtime_8h.html b/docs/api/doxygen/graph__runtime_8h.html
index 3226835..9690fd2 100644
--- a/docs/api/doxygen/graph__runtime_8h.html
+++ b/docs/api/doxygen/graph__runtime_8h.html
@@ -102,7 +102,7 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 </div><div class="textblock"><div class="dynheader">
 Include dependency graph for graph_runtime.h:</div>
 <div class="dyncontent">
-<div class="center"><iframe scrolling="no" frameborder="0" src="graph__runtime_8h__incl.svg" width="859" height="559"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
+<div class="center"><iframe scrolling="no" frameborder="0" src="graph__runtime_8h__incl.svg" width="847" height="559"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
 </div>
 </div>
 </div>
diff --git a/docs/api/doxygen/graph__runtime_8h__incl.svg b/docs/api/doxygen/graph__runtime_8h__incl.svg
index f8e7085..5377ad0 100644
--- a/docs/api/doxygen/graph__runtime_8h__incl.svg
+++ b/docs/api/doxygen/graph__runtime_8h__incl.svg
@@ -4,16 +4,16 @@
 <!-- Generated by graphviz version 2.38.0 (20140413.2041)
  -->
 <!-- Title: include/tvm/runtime/crt/graph_runtime.h Pages: 1 -->
-<svg width="644pt" height="419pt"
- viewBox="0.00 0.00 644.00 419.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<svg width="635pt" height="419pt"
+ viewBox="0.00 0.00 635.00 419.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
 <g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 415)">
 <title>include/tvm/runtime/crt/graph_runtime.h</title>
-<polygon fill="white" stroke="none" points="-4,4 -4,-415 640,-415 640,4 -4,4"/>
+<polygon fill="white" stroke="none" points="-4,4 -4,-415 631,-415 631,4 -4,4"/>
 <!-- Node1 -->
 <g id="node1" class="node"><title>Node1</title>
-<polygon fill="#bfbfbf" stroke="black" points="33.5,-380.5 33.5,-410.5 148.5,-410.5 148.5,-380.5 33.5,-380.5"/>
-<text text-anchor="start" x="41.5" y="-398.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
-<text text-anchor="middle" x="91" y="-387.5" font-family="Helvetica,sans-Serif" font-size="10.00">/crt/graph_runtime.h</text>
+<polygon fill="#bfbfbf" stroke="black" points="24.5,-380.5 24.5,-410.5 139.5,-410.5 139.5,-380.5 24.5,-380.5"/>
+<text text-anchor="start" x="32.5" y="-398.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
+<text text-anchor="middle" x="82" y="-387.5" font-family="Helvetica,sans-Serif" font-size="10.00">/crt/graph_runtime.h</text>
 </g>
 <!-- Node2 -->
 <g id="node2" class="node"><title>Node2</title>
@@ -22,189 +22,204 @@
 </g>
 <!-- Node1&#45;&gt;Node2 -->
 <g id="edge1" class="edge"><title>Node1&#45;&gt;Node2</title>
-<path fill="none" stroke="midnightblue" d="M82.1307,-380.313C68.7637,-357.537 45,-310.999 45,-268 45,-268 45,-268 45,-137.5 45,-99.2024 45,-54.3829 45,-29.6971"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="48.5001,-29.5894 45,-19.5894 41.5001,-29.5895 48.5001,-29.5894"/>
+<path fill="none" stroke="midnightblue" d="M74.8525,-380.496C63.8448,-357.594 44,-310.423 44,-268 44,-268 44,-268 44,-137.5 44,-99.2012 44.464,-54.3821 44.7599,-29.6968"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="48.2609,-29.6319 44.8853,-19.5892 41.2614,-29.545 48.2609,-29.6319"/>
 </g>
 <!-- Node3 -->
 <g id="node3" class="node"><title>Node3</title>
 <g id="a_node3"><a xlink:href="c__runtime__api_8h.html" target="_top" xlink:title="tvm/runtime/c_runtime\l_api.h">
-<polygon fill="white" stroke="black" points="73.5,-56.5 73.5,-86.5 200.5,-86.5 200.5,-56.5 73.5,-56.5"/>
-<text text-anchor="start" x="81.5" y="-74.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/c_runtime</text>
-<text text-anchor="middle" x="137" y="-63.5" font-family="Helvetica,sans-Serif" font-size="10.00">_api.h</text>
+<polygon fill="white" stroke="black" points="72.5,-56.5 72.5,-86.5 199.5,-86.5 199.5,-56.5 72.5,-56.5"/>
+<text text-anchor="start" x="80.5" y="-74.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/c_runtime</text>
+<text text-anchor="middle" x="136" y="-63.5" font-family="Helvetica,sans-Serif" font-size="10.00">_api.h</text>
 </a>
 </g>
 </g>
 <!-- Node1&#45;&gt;Node3 -->
 <g id="edge2" class="edge"><title>Node1&#45;&gt;Node3</title>
-<path fill="none" stroke="midnightblue" d="M91,-380.461C91,-357.118 91,-308.849 91,-268 91,-268 91,-268 91,-204.5 91,-164.474 110.093,-121.233 123.678,-95.5336"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="126.843,-97.0402 128.563,-86.5859 120.699,-93.6856 126.843,-97.0402"/>
+<path fill="none" stroke="midnightblue" d="M82,-380.461C82,-357.118 82,-308.849 82,-268 82,-268 82,-268 82,-204.5 82,-163.704 104.369,-120.742 120.313,-95.3016"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="123.347,-97.0547 125.843,-86.758 117.471,-93.251 123.347,-97.0547"/>
 </g>
 <!-- Node6 -->
 <g id="node6" class="node"><title>Node6</title>
 <g id="a_node6"><a xlink:href="crt_2packed__func_8h.html" target="_top" xlink:title="Type&#45;erased function used across TVM API. ">
-<polygon fill="white" stroke="black" points="225.5,-313.5 225.5,-343.5 354.5,-343.5 354.5,-313.5 225.5,-313.5"/>
-<text text-anchor="start" x="233.5" y="-331.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/crt/packed</text>
-<text text-anchor="middle" x="290" y="-320.5" font-family="Helvetica,sans-Serif" font-size="10.00">_func.h</text>
+<polygon fill="white" stroke="black" points="216.5,-313.5 216.5,-343.5 345.5,-343.5 345.5,-313.5 216.5,-313.5"/>
+<text text-anchor="start" x="224.5" y="-331.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/crt/packed</text>
+<text text-anchor="middle" x="281" y="-320.5" font-family="Helvetica,sans-Serif" font-size="10.00">_func.h</text>
 </a>
 </g>
 </g>
 <!-- Node1&#45;&gt;Node6 -->
 <g id="edge6" class="edge"><title>Node1&#45;&gt;Node6</title>
-<path fill="none" stroke="midnightblue" d="M134.18,-380.396C164.43,-370.515 205.035,-357.252 237.21,-346.743"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="238.485,-350.008 246.904,-343.577 236.312,-343.354 238.485,-350.008"/>
+<path fill="none" stroke="midnightblue" d="M125.18,-380.396C155.43,-370.515 196.035,-357.252 228.21,-346.743"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="229.485,-350.008 237.904,-343.577 227.312,-343.354 229.485,-350.008"/>
 </g>
 <!-- Node3&#45;&gt;Node2 -->
 <g id="edge3" class="edge"><title>Node3&#45;&gt;Node2</title>
-<path fill="none" stroke="midnightblue" d="M115.197,-56.3993C100.588,-46.9511 81.4271,-34.5589 66.8154,-25.1089"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="68.6845,-22.1495 58.3868,-19.6578 64.883,-28.0274 68.6845,-22.1495"/>
+<path fill="none" stroke="midnightblue" d="M114.434,-56.3993C100.12,-47.0402 81.3884,-34.7924 66.9885,-25.3771"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="68.5264,-22.2009 58.2413,-19.6578 64.6956,-28.0597 68.5264,-22.2009"/>
 </g>
 <!-- Node4 -->
 <g id="node4" class="node"><title>Node4</title>
-<polygon fill="white" stroke="#bfbfbf" points="108.5,-0.5 108.5,-19.5 163.5,-19.5 163.5,-0.5 108.5,-0.5"/>
-<text text-anchor="middle" x="136" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00">stddef.h</text>
+<polygon fill="white" stroke="#bfbfbf" points="324.5,-0.5 324.5,-19.5 379.5,-19.5 379.5,-0.5 324.5,-0.5"/>
+<text text-anchor="middle" x="352" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00">stddef.h</text>
 </g>
 <!-- Node3&#45;&gt;Node4 -->
 <g id="edge4" class="edge"><title>Node3&#45;&gt;Node4</title>
-<path fill="none" stroke="midnightblue" d="M136.763,-56.3993C136.63,-48.4664 136.461,-38.458 136.317,-29.8583"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="139.813,-29.5975 136.146,-19.6578 132.814,-29.7152 139.813,-29.5975"/>
+<path fill="none" stroke="midnightblue" d="M186.915,-56.4747C225.966,-45.7177 278.99,-31.1117 314.294,-21.3866"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="315.489,-24.6878 324.201,-18.6577 313.63,-17.9392 315.489,-24.6878"/>
 </g>
 <!-- Node5 -->
 <g id="node5" class="node"><title>Node5</title>
-<polygon fill="white" stroke="#bfbfbf" points="181.5,-0.5 181.5,-19.5 234.5,-19.5 234.5,-0.5 181.5,-0.5"/>
-<text text-anchor="middle" x="208" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00">stdint.h</text>
+<polygon fill="white" stroke="#bfbfbf" points="109.5,-0.5 109.5,-19.5 162.5,-19.5 162.5,-0.5 109.5,-0.5"/>
+<text text-anchor="middle" x="136" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00">stdint.h</text>
 </g>
 <!-- Node3&#45;&gt;Node5 -->
 <g id="edge5" class="edge"><title>Node3&#45;&gt;Node5</title>
-<path fill="none" stroke="midnightblue" d="M153.826,-56.3993C164.675,-47.3076 178.777,-35.4899 189.873,-26.1909"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="192.252,-28.7634 197.669,-19.6578 187.756,-23.3983 192.252,-28.7634"/>
+<path fill="none" stroke="midnightblue" d="M136,-56.3993C136,-48.4664 136,-38.458 136,-29.8583"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="139.5,-29.6577 136,-19.6578 132.5,-29.6578 139.5,-29.6577"/>
 </g>
 <!-- Node6&#45;&gt;Node3 -->
 <g id="edge10" class="edge"><title>Node6&#45;&gt;Node3</title>
-<path fill="none" stroke="midnightblue" d="M225.339,-320.529C197.019,-313.968 166.161,-301.319 148,-277 107.881,-223.276 120.901,-137.73 130.54,-96.3994"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="133.945,-97.2088 132.943,-86.6614 127.149,-95.5315 133.945,-97.2088"/>
+<path fill="none" stroke="midnightblue" d="M216.277,-320.927C187.761,-314.46 156.758,-301.776 139,-277 99.9903,-222.575 116.826,-137.361 128.399,-96.2537"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="131.782,-97.1527 131.261,-86.5708 125.07,-95.1684 131.782,-97.1527"/>
 </g>
 <!-- Node7 -->
 <g id="node7" class="node"><title>Node7</title>
-<polygon fill="white" stroke="#bfbfbf" points="157,-257.5 157,-276.5 213,-276.5 213,-257.5 157,-257.5"/>
-<text text-anchor="middle" x="185" y="-264.5" font-family="Helvetica,sans-Serif" font-size="10.00">assert.h</text>
+<polygon fill="white" stroke="#bfbfbf" points="148,-257.5 148,-276.5 204,-276.5 204,-257.5 148,-257.5"/>
+<text text-anchor="middle" x="176" y="-264.5" font-family="Helvetica,sans-Serif" font-size="10.00">assert.h</text>
 </g>
 <!-- Node6&#45;&gt;Node7 -->
 <g id="edge7" class="edge"><title>Node6&#45;&gt;Node7</title>
-<path fill="none" stroke="midnightblue" d="M265.116,-313.399C248.286,-303.862 226.161,-291.325 209.427,-281.842"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="210.704,-278.543 200.278,-276.658 207.253,-284.633 210.704,-278.543"/>
+<path fill="none" stroke="midnightblue" d="M256.116,-313.399C239.286,-303.862 217.161,-291.325 200.427,-281.842"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="201.704,-278.543 191.278,-276.658 198.253,-284.633 201.704,-278.543"/>
 </g>
 <!-- Node8 -->
 <g id="node8" class="node"><title>Node8</title>
-<polygon fill="white" stroke="#bfbfbf" points="231.5,-257.5 231.5,-276.5 280.5,-276.5 280.5,-257.5 231.5,-257.5"/>
-<text text-anchor="middle" x="256" y="-264.5" font-family="Helvetica,sans-Serif" font-size="10.00">stdio.h</text>
+<polygon fill="white" stroke="#bfbfbf" points="222.5,-257.5 222.5,-276.5 271.5,-276.5 271.5,-257.5 222.5,-257.5"/>
+<text text-anchor="middle" x="247" y="-264.5" font-family="Helvetica,sans-Serif" font-size="10.00">stdio.h</text>
 </g>
 <!-- Node6&#45;&gt;Node8 -->
 <g id="edge8" class="edge"><title>Node6&#45;&gt;Node8</title>
-<path fill="none" stroke="midnightblue" d="M281.942,-313.399C277.155,-305.021 271.044,-294.327 265.956,-285.423"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="268.948,-283.604 260.947,-276.658 262.87,-287.077 268.948,-283.604"/>
+<path fill="none" stroke="midnightblue" d="M272.942,-313.399C268.155,-305.021 262.044,-294.327 256.956,-285.423"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="259.948,-283.604 251.947,-276.658 253.87,-287.077 259.948,-283.604"/>
 </g>
 <!-- Node9 -->
 <g id="node9" class="node"><title>Node9</title>
-<polygon fill="white" stroke="#bfbfbf" points="298.5,-257.5 298.5,-276.5 349.5,-276.5 349.5,-257.5 298.5,-257.5"/>
-<text text-anchor="middle" x="324" y="-264.5" font-family="Helvetica,sans-Serif" font-size="10.00">stdlib.h</text>
+<polygon fill="white" stroke="#bfbfbf" points="289.5,-257.5 289.5,-276.5 340.5,-276.5 340.5,-257.5 289.5,-257.5"/>
+<text text-anchor="middle" x="315" y="-264.5" font-family="Helvetica,sans-Serif" font-size="10.00">stdlib.h</text>
 </g>
 <!-- Node6&#45;&gt;Node9 -->
 <g id="edge9" class="edge"><title>Node6&#45;&gt;Node9</title>
-<path fill="none" stroke="midnightblue" d="M298.058,-313.399C302.845,-305.021 308.956,-294.327 314.044,-285.423"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="317.13,-287.077 319.053,-276.658 311.052,-283.604 317.13,-287.077"/>
+<path fill="none" stroke="midnightblue" d="M289.058,-313.399C293.845,-305.021 299.956,-294.327 305.044,-285.423"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="308.13,-287.077 310.053,-276.658 302.052,-283.604 308.13,-287.077"/>
 </g>
 <!-- Node10 -->
 <g id="node10" class="node"><title>Node10</title>
 <g id="a_node10"><a xlink:href="runtime_2crt_2module_8h.html" target="_top" xlink:title="Runtime container of the functions. ">
-<polygon fill="white" stroke="black" points="368,-257.5 368,-276.5 508,-276.5 508,-257.5 368,-257.5"/>
-<text text-anchor="middle" x="438" y="-264.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/crt/module.h</text>
+<polygon fill="white" stroke="black" points="359,-257.5 359,-276.5 499,-276.5 499,-257.5 359,-257.5"/>
+<text text-anchor="middle" x="429" y="-264.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/crt/module.h</text>
 </a>
 </g>
 </g>
 <!-- Node6&#45;&gt;Node10 -->
 <g id="edge11" class="edge"><title>Node6&#45;&gt;Node10</title>
-<path fill="none" stroke="midnightblue" d="M325.074,-313.399C350.017,-303.372 383.208,-290.028 407.144,-280.405"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="408.492,-283.635 416.465,-276.658 405.881,-277.141 408.492,-283.635"/>
+<path fill="none" stroke="midnightblue" d="M316.074,-313.399C341.017,-303.372 374.208,-290.028 398.144,-280.405"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="399.492,-283.635 407.465,-276.658 396.881,-277.141 399.492,-283.635"/>
 </g>
 <!-- Node14 -->
 <g id="node14" class="node"><title>Node14</title>
 <g id="a_node14"><a xlink:href="platform_8h.html" target="_top" xlink:title="The virtual memory manager for micro&#45;controllers. ">
-<polygon fill="white" stroke="black" points="489,-196 489,-215 633,-215 633,-196 489,-196"/>
-<text text-anchor="middle" x="561" y="-203" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/crt/platform.h</text>
+<polygon fill="white" stroke="black" points="455,-196 455,-215 599,-215 599,-196 455,-196"/>
+<text text-anchor="middle" x="527" y="-203" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/crt/platform.h</text>
 </a>
 </g>
 </g>
 <!-- Node6&#45;&gt;Node14 -->
 <g id="edge17" class="edge"><title>Node6&#45;&gt;Node14</title>
-<path fill="none" stroke="midnightblue" d="M354.574,-326.284C403.194,-322.446 469.379,-310.701 517,-277 535.162,-264.147 547.41,-241.195 554.278,-224.976"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="557.616,-226.051 558.022,-215.464 551.103,-223.487 557.616,-226.051"/>
+<path fill="none" stroke="midnightblue" d="M345.754,-320.585C404.398,-313.027 485.025,-298.978 508,-277 521.857,-263.744 525.972,-241.535 527.019,-225.585"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="530.527,-225.396 527.336,-215.293 523.53,-225.181 530.527,-225.396"/>
 </g>
-<!-- Node15 -->
-<g id="node15" class="node"><title>Node15</title>
-<polygon fill="white" stroke="#bfbfbf" points="564,-257.5 564,-276.5 636,-276.5 636,-257.5 564,-257.5"/>
-<text text-anchor="middle" x="600" y="-264.5" font-family="Helvetica,sans-Serif" font-size="10.00">crt_config.h</text>
+<!-- Node16 -->
+<g id="node16" class="node"><title>Node16</title>
+<polygon fill="white" stroke="#bfbfbf" points="555,-257.5 555,-276.5 627,-276.5 627,-257.5 555,-257.5"/>
+<text text-anchor="middle" x="591" y="-264.5" font-family="Helvetica,sans-Serif" font-size="10.00">crt_config.h</text>
 </g>
-<!-- Node6&#45;&gt;Node15 -->
-<g id="edge19" class="edge"><title>Node6&#45;&gt;Node15</title>
-<path fill="none" stroke="midnightblue" d="M354.609,-315.099C414.164,-303.669 501.086,-286.985 553.904,-276.847"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="554.834,-280.233 563.995,-274.911 553.515,-273.358 554.834,-280.233"/>
+<!-- Node6&#45;&gt;Node16 -->
+<g id="edge21" class="edge"><title>Node6&#45;&gt;Node16</title>
+<path fill="none" stroke="midnightblue" d="M345.609,-315.099C405.164,-303.669 492.086,-286.985 544.904,-276.847"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="545.834,-280.233 554.995,-274.911 544.515,-273.358 545.834,-280.233"/>
 </g>
 <!-- Node11 -->
 <g id="node11" class="node"><title>Node11</title>
 <g id="a_node11"><a xlink:href="c__backend__api_8h.html" target="_top" xlink:title="TVM runtime backend API. ">
-<polygon fill="white" stroke="black" points="262,-123.5 262,-153.5 392,-153.5 392,-123.5 262,-123.5"/>
-<text text-anchor="start" x="270" y="-141.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/c_backend</text>
-<text text-anchor="middle" x="327" y="-130.5" font-family="Helvetica,sans-Serif" font-size="10.00">_api.h</text>
+<polygon fill="white" stroke="black" points="202,-123.5 202,-153.5 332,-153.5 332,-123.5 202,-123.5"/>
+<text text-anchor="start" x="210" y="-141.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/c_backend</text>
+<text text-anchor="middle" x="267" y="-130.5" font-family="Helvetica,sans-Serif" font-size="10.00">_api.h</text>
 </a>
 </g>
 </g>
 <!-- Node10&#45;&gt;Node11 -->
 <g id="edge12" class="edge"><title>Node10&#45;&gt;Node11</title>
-<path fill="none" stroke="midnightblue" d="M404.446,-257.478C384.623,-250.751 360.62,-239.361 346,-221 333.109,-204.811 328.681,-181.406 327.289,-163.828"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="330.78,-163.557 326.767,-153.752 323.79,-163.92 330.78,-163.557"/>
+<path fill="none" stroke="midnightblue" d="M387.094,-257.458C362.939,-250.813 333.249,-239.514 312,-221 294.192,-205.484 282.052,-181.125 274.911,-163.075"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="278.132,-161.696 271.367,-153.542 271.571,-164.135 278.132,-161.696"/>
 </g>
 <!-- Node12 -->
 <g id="node12" class="node"><title>Node12</title>
 <g id="a_node12"><a xlink:href="func__registry_8h.html" target="_top" xlink:title="Defines generic string&#45;based function lookup structs. ">
-<polygon fill="white" stroke="black" points="355.5,-190.5 355.5,-220.5 470.5,-220.5 470.5,-190.5 355.5,-190.5"/>
-<text text-anchor="start" x="363.5" y="-208.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/crt/func</text>
-<text text-anchor="middle" x="413" y="-197.5" font-family="Helvetica,sans-Serif" font-size="10.00">_registry.h</text>
+<polygon fill="white" stroke="black" points="321.5,-190.5 321.5,-220.5 436.5,-220.5 436.5,-190.5 321.5,-190.5"/>
+<text text-anchor="start" x="329.5" y="-208.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/crt/func</text>
+<text text-anchor="middle" x="379" y="-197.5" font-family="Helvetica,sans-Serif" font-size="10.00">_registry.h</text>
 </a>
 </g>
 </g>
 <!-- Node10&#45;&gt;Node12 -->
 <g id="edge14" class="edge"><title>Node10&#45;&gt;Node12</title>
-<path fill="none" stroke="midnightblue" d="M434.418,-257.475C431.38,-250.245 426.876,-239.525 422.781,-229.779"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="425.992,-228.386 418.892,-220.523 419.539,-231.098 425.992,-228.386"/>
+<path fill="none" stroke="midnightblue" d="M421.836,-257.475C415.444,-249.869 405.806,-238.399 397.29,-228.265"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="399.897,-225.927 390.784,-220.523 394.538,-230.43 399.897,-225.927"/>
 </g>
 <!-- Node11&#45;&gt;Node3 -->
 <g id="edge13" class="edge"><title>Node11&#45;&gt;Node3</title>
-<path fill="none" stroke="midnightblue" d="M285.773,-123.396C257.016,-113.558 218.457,-100.367 187.8,-89.879"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="188.741,-86.5019 178.147,-86.5765 186.475,-93.125 188.741,-86.5019"/>
+<path fill="none" stroke="midnightblue" d="M238.575,-123.396C219.52,-113.941 194.224,-101.39 173.52,-91.1168"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="174.883,-87.8861 164.37,-86.5765 171.772,-94.1566 174.883,-87.8861"/>
 </g>
 <!-- Node12&#45;&gt;Node11 -->
 <g id="edge15" class="edge"><title>Node12&#45;&gt;Node11</title>
-<path fill="none" stroke="midnightblue" d="M394.339,-190.396C382.506,-181.452 367.006,-169.737 353.861,-159.802"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="355.712,-156.814 345.624,-153.577 351.492,-162.398 355.712,-156.814"/>
+<path fill="none" stroke="midnightblue" d="M354.698,-190.396C338.7,-181.112 317.557,-168.841 300.039,-158.674"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="301.661,-155.569 291.255,-153.577 298.147,-161.623 301.661,-155.569"/>
 </g>
 <!-- Node13 -->
 <g id="node13" class="node"><title>Node13</title>
 <g id="a_node13"><a xlink:href="error__codes_8h.html" target="_top" xlink:title="Defines integral error codes returned by the CRT. ">
-<polygon fill="white" stroke="black" points="456,-123.5 456,-153.5 574,-153.5 574,-123.5 456,-123.5"/>
-<text text-anchor="start" x="464" y="-141.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/crt/error</text>
-<text text-anchor="middle" x="515" y="-130.5" font-family="Helvetica,sans-Serif" font-size="10.00">_codes.h</text>
+<polygon fill="white" stroke="black" points="350,-123.5 350,-153.5 468,-153.5 468,-123.5 350,-123.5"/>
+<text text-anchor="start" x="358" y="-141.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/crt/error</text>
+<text text-anchor="middle" x="409" y="-130.5" font-family="Helvetica,sans-Serif" font-size="10.00">_codes.h</text>
 </a>
 </g>
 </g>
 <!-- Node12&#45;&gt;Node13 -->
 <g id="edge16" class="edge"><title>Node12&#45;&gt;Node13</title>
-<path fill="none" stroke="midnightblue" d="M435.132,-190.396C449.568,-181.197 468.604,-169.066 484.471,-158.955"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="486.358,-161.902 492.911,-153.577 482.597,-155.999 486.358,-161.902"/>
+<path fill="none" stroke="midnightblue" d="M385.51,-190.396C389.244,-182.304 394.026,-171.944 398.299,-162.685"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="401.49,-164.123 402.503,-153.577 395.135,-161.189 401.49,-164.123"/>
+</g>
+<!-- Node14&#45;&gt;Node4 -->
+<g id="edge19" class="edge"><title>Node14&#45;&gt;Node4</title>
+<path fill="none" stroke="midnightblue" d="M522.772,-195.871C514.959,-180.382 497.203,-147.219 477,-123 444.3,-83.7996 397.814,-45.7988 371.726,-25.7149"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="373.787,-22.8855 363.71,-19.6149 369.548,-28.456 373.787,-22.8855"/>
 </g>
 <!-- Node14&#45;&gt;Node13 -->
-<g id="edge18" class="edge"><title>Node14&#45;&gt;Node13</title>
-<path fill="none" stroke="midnightblue" d="M554.796,-195.734C548.699,-187.118 539.139,-173.61 530.918,-161.992"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="533.604,-159.73 524.971,-153.589 527.89,-163.774 533.604,-159.73"/>
+<g id="edge20" class="edge"><title>Node14&#45;&gt;Node13</title>
+<path fill="none" stroke="midnightblue" d="M511.086,-195.734C493.835,-186.231 465.78,-170.777 443.441,-158.472"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="445.025,-155.348 434.577,-153.589 441.647,-161.48 445.025,-155.348"/>
+</g>
+<!-- Node15 -->
+<g id="node15" class="node"><title>Node15</title>
+<polygon fill="white" stroke="#bfbfbf" points="524,-129 524,-148 580,-148 580,-129 524,-129"/>
+<text text-anchor="middle" x="552" y="-136" font-family="Helvetica,sans-Serif" font-size="10.00">stdarg.h</text>
+</g>
+<!-- Node14&#45;&gt;Node15 -->
+<g id="edge18" class="edge"><title>Node14&#45;&gt;Node15</title>
+<path fill="none" stroke="midnightblue" d="M530.372,-195.734C534.12,-185.988 540.277,-169.981 545.065,-157.532"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="548.359,-158.717 548.682,-148.127 541.825,-156.204 548.359,-158.717"/>
 </g>
 </g>
 </svg>
diff --git a/docs/api/doxygen/namespacemembers.html b/docs/api/doxygen/namespacemembers.html
index ff798b1..0e27f51 100644
--- a/docs/api/doxygen/namespacemembers.html
+++ b/docs/api/doxygen/namespacemembers.html
@@ -149,6 +149,9 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <li>address_of()
 : <a class="el" href="namespacetvm_1_1tir_1_1builtin.html#a700b7018f2c1f1fba8b4e28f264d8bbb">tvm::tir::builtin</a>
 </li>
+<li>AddRPCSessionMask()
+: <a class="el" href="namespacetvm_1_1runtime.html#a409b50f5d118a11f7a9f234498be7c27">tvm::runtime</a>
+</li>
 <li>adv_index()
 : <a class="el" href="namespacetvm_1_1topi.html#a6d9189f6ceb05cf0a309dbe3f2730b16">tvm::topi</a>
 </li>
diff --git a/docs/api/doxygen/namespacemembers_func.html b/docs/api/doxygen/namespacemembers_func.html
index f23b1f7..50d6720 100644
--- a/docs/api/doxygen/namespacemembers_func.html
+++ b/docs/api/doxygen/namespacemembers_func.html
@@ -149,6 +149,9 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <li>address_of()
 : <a class="el" href="namespacetvm_1_1tir_1_1builtin.html#a700b7018f2c1f1fba8b4e28f264d8bbb">tvm::tir::builtin</a>
 </li>
+<li>AddRPCSessionMask()
+: <a class="el" href="namespacetvm_1_1runtime.html#a409b50f5d118a11f7a9f234498be7c27">tvm::runtime</a>
+</li>
 <li>adv_index()
 : <a class="el" href="namespacetvm_1_1topi.html#a6d9189f6ceb05cf0a309dbe3f2730b16">tvm::topi</a>
 </li>
diff --git a/docs/api/doxygen/namespacemembers_func_g.html b/docs/api/doxygen/namespacemembers_func_g.html
index 83c079c..9af16b1 100644
--- a/docs/api/doxygen/namespacemembers_func_g.html
+++ b/docs/api/doxygen/namespacemembers_func_g.html
@@ -173,6 +173,9 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <li>GetRef()
 : <a class="el" href="namespacetvm_1_1runtime.html#aa4a97de4fefd23aa5942c6a545544a05">tvm::runtime</a>
 </li>
+<li>GetRPCSessionIndex()
+: <a class="el" href="namespacetvm_1_1runtime.html#a9ac54b0d7a3e3c22fd0ddef0a731cfd5">tvm::runtime</a>
+</li>
 <li>GetRuntimeDataType()
 : <a class="el" href="namespacetvm.html#a0447e9aa45f6cab707f6dc9f9281b3f5">tvm</a>
 </li>
@@ -189,10 +192,10 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 : <a class="el" href="namespacetvm_1_1te.html#a0de1399717049f2b3582f0344b267d56">tvm::te</a>
 </li>
 <li>greater()
-: <a class="el" href="namespacetvm_1_1topi.html#ab27229573a7b77a21c9f7fbe7390a094">tvm::topi</a>
+: <a class="el" href="namespacetvm_1_1topi.html#a99c98f4fa4a36166f6da0985f77ec462">tvm::topi</a>
 </li>
 <li>greater_equal()
-: <a class="el" href="namespacetvm_1_1topi.html#a7690570d47d66ab60727a4a41ed2f78b">tvm::topi</a>
+: <a class="el" href="namespacetvm_1_1topi.html#a4ab87f8165493b3fa0acc00a83c0a2e4">tvm::topi</a>
 </li>
 <li>group_conv2d_ngchw()
 : <a class="el" href="namespacetvm_1_1topi.html#a4c2a0e74a45381e899f9ff788365eff0">tvm::topi</a>
diff --git a/docs/api/doxygen/namespacemembers_func_i.html b/docs/api/doxygen/namespacemembers_func_i.html
index e0acf91..e1d6935 100644
--- a/docs/api/doxygen/namespacemembers_func_i.html
+++ b/docs/api/doxygen/namespacemembers_func_i.html
@@ -229,6 +229,9 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <li>IsPrimitiveOp()
 : <a class="el" href="namespacetvm.html#a8259e23409eda017c6bde908e050b670">tvm</a>
 </li>
+<li>IsRPCSessionContext()
+: <a class="el" href="namespacetvm_1_1runtime.html#af2a8f6198750ead46feeb72ef4f9de4c">tvm::runtime</a>
+</li>
 <li>IsVoidType()
 : <a class="el" href="namespacetvm.html#a196edb73fc9f13d965b8de1c9287a2db">tvm</a>
 </li>
diff --git a/docs/api/doxygen/namespacemembers_func_r.html b/docs/api/doxygen/namespacemembers_func_r.html
index c17e095..3300b66 100644
--- a/docs/api/doxygen/namespacemembers_func_r.html
+++ b/docs/api/doxygen/namespacemembers_func_r.html
@@ -142,6 +142,9 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <li>RemoveNoOp()
 : <a class="el" href="namespacetvm_1_1tir_1_1transform.html#a8aad1159425e29be796562b2ec629b10">tvm::tir::transform</a>
 </li>
+<li>RemoveRPCSessionMask()
+: <a class="el" href="namespacetvm_1_1runtime.html#aea8fddcdd83b2bce46fbff699f43eee6">tvm::runtime</a>
+</li>
 <li>RemoveUnusedFunctions()
 : <a class="el" href="namespacetvm_1_1relay_1_1transform.html#afbbf5f3e5ffb775fafb9c48473dbfa24">tvm::relay::transform</a>
 </li>
@@ -168,7 +171,7 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 : <a class="el" href="namespacetvm_1_1tir_1_1transform.html#a4fe43327c4454dd05b6e925577443f49">tvm::tir::transform</a>
 </li>
 <li>right_shift()
-: <a class="el" href="namespacetvm_1_1topi.html#a9673b9caffb46404b566c3f04a492dfe">tvm::topi</a>
+: <a class="el" href="namespacetvm_1_1topi.html#aec8705eed0238733dc89e2a34465e9d0">tvm::topi</a>
 </li>
 <li>rocblas_batch_matmul()
 : <a class="el" href="namespacetvm_1_1topi_1_1contrib.html#abf1113dd429e1285752b48f62fe12848">tvm::topi::contrib</a>
diff --git a/docs/api/doxygen/namespacemembers_g.html b/docs/api/doxygen/namespacemembers_g.html
index 05f329d..bbe2d80 100644
--- a/docs/api/doxygen/namespacemembers_g.html
+++ b/docs/api/doxygen/namespacemembers_g.html
@@ -173,6 +173,9 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <li>GetRef()
 : <a class="el" href="namespacetvm_1_1runtime.html#aa4a97de4fefd23aa5942c6a545544a05">tvm::runtime</a>
 </li>
+<li>GetRPCSessionIndex()
+: <a class="el" href="namespacetvm_1_1runtime.html#a9ac54b0d7a3e3c22fd0ddef0a731cfd5">tvm::runtime</a>
+</li>
 <li>GetRuntimeDataType()
 : <a class="el" href="namespacetvm.html#a0447e9aa45f6cab707f6dc9f9281b3f5">tvm</a>
 </li>
@@ -201,7 +204,7 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 : <a class="el" href="namespacetvm_1_1te.html#a0de1399717049f2b3582f0344b267d56">tvm::te</a>
 </li>
 <li>greater()
-: <a class="el" href="namespacetvm_1_1topi.html#ab27229573a7b77a21c9f7fbe7390a094">tvm::topi</a>
+: <a class="el" href="namespacetvm_1_1topi.html#ab35dd0e7cf7caa332e34ac352253d88f">tvm::topi</a>
 </li>
 <li>greater_equal()
 : <a class="el" href="namespacetvm_1_1topi.html#a7690570d47d66ab60727a4a41ed2f78b">tvm::topi</a>
diff --git a/docs/api/doxygen/namespacemembers_i.html b/docs/api/doxygen/namespacemembers_i.html
index 4e89a48..ca1663f 100644
--- a/docs/api/doxygen/namespacemembers_i.html
+++ b/docs/api/doxygen/namespacemembers_i.html
@@ -244,6 +244,9 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <li>IsPrimitiveOp()
 : <a class="el" href="namespacetvm.html#a8259e23409eda017c6bde908e050b670">tvm</a>
 </li>
+<li>IsRPCSessionContext()
+: <a class="el" href="namespacetvm_1_1runtime.html#af2a8f6198750ead46feeb72ef4f9de4c">tvm::runtime</a>
+</li>
 <li>IsVoidType()
 : <a class="el" href="namespacetvm.html#a196edb73fc9f13d965b8de1c9287a2db">tvm</a>
 </li>
diff --git a/docs/api/doxygen/namespacemembers_r.html b/docs/api/doxygen/namespacemembers_r.html
index 1f92c7c..6eac128 100644
--- a/docs/api/doxygen/namespacemembers_r.html
+++ b/docs/api/doxygen/namespacemembers_r.html
@@ -160,6 +160,9 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <li>RemoveNoOp()
 : <a class="el" href="namespacetvm_1_1tir_1_1transform.html#a8aad1159425e29be796562b2ec629b10">tvm::tir::transform</a>
 </li>
+<li>RemoveRPCSessionMask()
+: <a class="el" href="namespacetvm_1_1runtime.html#aea8fddcdd83b2bce46fbff699f43eee6">tvm::runtime</a>
+</li>
 <li>RemoveUnusedFunctions()
 : <a class="el" href="namespacetvm_1_1relay_1_1transform.html#afbbf5f3e5ffb775fafb9c48473dbfa24">tvm::relay::transform</a>
 </li>
diff --git a/docs/api/doxygen/namespacetvm_1_1runtime.html b/docs/api/doxygen/namespacetvm_1_1runtime.html
index b8859c3..8b26f89 100644
--- a/docs/api/doxygen/namespacetvm_1_1runtime.html
+++ b/docs/api/doxygen/namespacetvm_1_1runtime.html
@@ -385,8 +385,20 @@ Functions</h2></td></tr>
 <tr class="memitem:a9109e4efe269213052ed6a94853c0c00"><td class="memItemLeft" align="right" valign="top">const char *&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="namespacetvm_1_1runtime.html#a9109e4efe269213052ed6a94853c0c00">DeviceName</a> (int type)</td></tr>
 <tr class="memdesc:a9109e4efe269213052ed6a94853c0c00"><td class="mdescLeft">&#160;</td><td class="mdescRight">The name of Device API factory.  <a href="#a9109e4efe269213052ed6a94853c0c00">More...</a><br /></td></tr>
 <tr class="separator:a9109e4efe269213052ed6a94853c0c00"><td class="memSeparator" colspan="2">&#160;</td></tr>
-<tr class="memitem:a0ce391c2492dfc73b5c6c6459693c6a6"><td class="memItemLeft" align="right" valign="top">std::ostream &amp;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="namespacetvm_1_1runtime.html#a0ce391c2492dfc73b5c6c6459693c6a6">operator&lt;&lt;</a> (std::ostream &amp;os, DLContext ctx)</td></tr>
-<tr class="separator:a0ce391c2492dfc73b5c6c6459693c6a6"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:af2a8f6198750ead46feeb72ef4f9de4c"><td class="memItemLeft" align="right" valign="top">bool&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="namespacetvm_1_1runtime.html#af2a8f6198750ead46feeb72ef4f9de4c">IsRPCSessionContext</a> (<a class="el" href="c__runtime__api_8h.html#a9363bb701f16ce5bbb381f2a013d25b4">TVMContext</a> ctx)</td></tr>
+<tr class="memdesc:af2a8f6198750ead46feeb72ef4f9de4c"><td class="mdescLeft">&#160;</td><td class="mdescRight">Return true if a TVMContext is owned by an RPC session.  <a href="#af2a8f6198750ead46feeb72ef4f9de4c">More...</a><br /></td></tr>
+<tr class="separator:af2a8f6198750ead46feeb72ef4f9de4c"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:a9ac54b0d7a3e3c22fd0ddef0a731cfd5"><td class="memItemLeft" align="right" valign="top">int&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="namespacetvm_1_1runtime.html#a9ac54b0d7a3e3c22fd0ddef0a731cfd5">GetRPCSessionIndex</a> (<a class="el" href="c__runtime__api_8h.html#a9363bb701f16ce5bbb381f2a013d25b4">TVMContext</a> ctx)</td></tr>
+<tr class="memdesc:a9ac54b0d7a3e3c22fd0ddef0a731cfd5"><td class="mdescLeft">&#160;</td><td class="mdescRight">Return the RPCSessTable index of the RPC Session that owns this context.  <a href="#a9ac54b0d7a3e3c22fd0ddef0a731cfd5">More...</a><br /></td></tr>
+<tr class="separator:a9ac54b0d7a3e3c22fd0ddef0a731cfd5"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:aea8fddcdd83b2bce46fbff699f43eee6"><td class="memItemLeft" align="right" valign="top"><a class="el" href="c__runtime__api_8h.html#a9363bb701f16ce5bbb381f2a013d25b4">TVMContext</a>&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="namespacetvm_1_1runtime.html#aea8fddcdd83b2bce46fbff699f43eee6">RemoveRPCSessionMask</a> (<a class="el" href="c__runtime__api_8h.html#a9363bb701f16ce5bbb381f2a013d25b4">TVMContext</a> ctx)</td></tr>
+<tr class="memdesc:aea8fddcdd83b2bce46fbff699f43eee6"><td class="mdescLeft">&#160;</td><td class="mdescRight">Remove the RPC session mask from a TVMContext. RPC clients typically do this when encoding a TVMContext for transmission to an RPC remote. On the wire, RPCContext are expected to be valid on the server without interpretation.  <a href="#aea8fddcdd83b2bce46fbff699f43eee6">More...</a><br /></td></tr>
+<tr class="separator:aea8fddcdd83b2bce46fbff699f43eee6"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:a3578b5c107d5e8ee58b73a2a776e19f1"><td class="memItemLeft" align="right" valign="top">std::ostream &amp;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="namespacetvm_1_1runtime.html#a3578b5c107d5e8ee58b73a2a776e19f1">operator&lt;&lt;</a> (std::ostream &amp;os, DLContext ctx)</td></tr>
+<tr class="separator:a3578b5c107d5e8ee58b73a2a776e19f1"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:a409b50f5d118a11f7a9f234498be7c27"><td class="memItemLeft" align="right" valign="top"><a class="el" href="c__runtime__api_8h.html#a9363bb701f16ce5bbb381f2a013d25b4">TVMContext</a>&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="namespacetvm_1_1runtime.html#a409b50f5d118a11f7a9f234498be7c27">AddRPCSessionMask</a> (<a class="el" href="c__runtime__api_8h.html#a9363bb701f16ce5bbb381f2a013d25b4">TVMContext</a> ctx, int session_table_index)</td></tr>
+<tr class="memdesc:a409b50f5d118a11f7a9f234498be7c27"><td class="mdescLeft">&#160;</td><td class="mdescRight">Add a RPC session mask to a TVMContext. RPC clients typically do this when decoding a TVMContext received from a RPC remote.  <a href="#a409b50f5d118a11f7a9f234498be7c27">More...</a><br /></td></tr>
+<tr class="separator:a409b50f5d118a11f7a9f234498be7c27"><td class="memSeparator" colspan="2">&#160;</td></tr>
 <tr class="memitem:a93466f4543eedc3925c66ed0e7ef2671"><td class="memTemplParams" colspan="2">template&lt;typename T , typename... Args&gt; </td></tr>
 <tr class="memitem:a93466f4543eedc3925c66ed0e7ef2671"><td class="memTemplItemLeft" align="right" valign="top"><a class="el" href="classtvm_1_1runtime_1_1ObjectPtr.html">ObjectPtr</a>&lt; T &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="namespacetvm_1_1runtime.html#a93466f4543eedc3925c66ed0e7ef2671">make_object</a> (Args &amp;&amp;...args)</td></tr>
 <tr class="memdesc:a93466f4543eedc3925c66ed0e7ef2671"><td class="mdescLeft">&#160;</td><td class="mdescRight">Allocate an object using default allocator.  <a href="#a93466f4543eedc3925c66ed0e7ef2671">More...</a><br /></td></tr>
@@ -490,6 +502,50 @@ Variables</h2></td></tr>
 </div>
 </div>
 <h2 class="groupheader">Function Documentation</h2>
+<a class="anchor" id="a409b50f5d118a11f7a9f234498be7c27"></a>
+<div class="memitem">
+<div class="memproto">
+<table class="mlabels">
+  <tr>
+  <td class="mlabels-left">
+      <table class="memname">
+        <tr>
+          <td class="memname"><a class="el" href="c__runtime__api_8h.html#a9363bb701f16ce5bbb381f2a013d25b4">TVMContext</a> tvm::runtime::AddRPCSessionMask </td>
+          <td>(</td>
+          <td class="paramtype"><a class="el" href="c__runtime__api_8h.html#a9363bb701f16ce5bbb381f2a013d25b4">TVMContext</a>&#160;</td>
+          <td class="paramname"><em>ctx</em>, </td>
+        </tr>
+        <tr>
+          <td class="paramkey"></td>
+          <td></td>
+          <td class="paramtype">int&#160;</td>
+          <td class="paramname"><em>session_table_index</em>&#160;</td>
+        </tr>
+        <tr>
+          <td></td>
+          <td>)</td>
+          <td></td><td></td>
+        </tr>
+      </table>
+  </td>
+  <td class="mlabels-right">
+<span class="mlabels"><span class="mlabel">inline</span></span>  </td>
+  </tr>
+</table>
+</div><div class="memdoc">
+
+<p>Add an RPC session mask to a TVMContext. RPC clients typically do this when decoding a TVMContext received from an RPC remote. </p>
+<dl class="params"><dt>Parameters</dt><dd>
+  <table class="params">
+    <tr><td class="paramname">ctx</td><td>A TVMContext without any RPC Session mask, valid on the RPC server. </td></tr>
+    <tr><td class="paramname">session_table_index</td><td>Numeric index of the RPC session in the session table. </td></tr>
+  </table>
+  </dd>
+</dl>
+<dl class="section return"><dt>Returns</dt><dd>A TVMContext with RPC session mask added, valid on the RPC client. </dd></dl>
+
+</div>
+</div>
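A minimal usage sketch for AddRPCSessionMask, assuming the helper is visible through tvm/runtime/device_api.h (the header location and the session slot value are assumptions for illustration): an RPC client that has just decoded a context from the wire tags it with the slot of the owning session in its local RPCSessTable.

    #include <tvm/runtime/device_api.h>  // assumed to declare the RPC session helpers

    TVMContext TagRemoteContext(TVMContext remote_ctx) {
      // remote_ctx is valid on the RPC server and carries no session mask yet.
      int session_table_index = 3;  // hypothetical slot in the client's RPCSessTable
      // The returned context is only meaningful on the RPC client side.
      return tvm::runtime::AddRPCSessionMask(remote_ctx, session_table_index);
    }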
 <a class="anchor" id="a129050a60cebb0bbe18f96b41a36a948"></a>
 <div class="memitem">
 <div class="memproto">
@@ -930,6 +986,33 @@ template&lt;typename RefType , typename ObjType &gt; </div>
 
 </div>
 </div>
+<a class="anchor" id="a9ac54b0d7a3e3c22fd0ddef0a731cfd5"></a>
+<div class="memitem">
+<div class="memproto">
+<table class="mlabels">
+  <tr>
+  <td class="mlabels-left">
+      <table class="memname">
+        <tr>
+          <td class="memname">int tvm::runtime::GetRPCSessionIndex </td>
+          <td>(</td>
+          <td class="paramtype"><a class="el" href="c__runtime__api_8h.html#a9363bb701f16ce5bbb381f2a013d25b4">TVMContext</a>&#160;</td>
+          <td class="paramname"><em>ctx</em></td><td>)</td>
+          <td></td>
+        </tr>
+      </table>
+  </td>
+  <td class="mlabels-right">
+<span class="mlabels"><span class="mlabel">inline</span></span>  </td>
+  </tr>
+</table>
+</div><div class="memdoc">
+
+<p>Return the RPCSessTable index of the RPC Session that owns this context. </p>
+<dl class="section return"><dt>Returns</dt><dd>the table index. </dd></dl>
+
+</div>
+</div>
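A short sketch of reading the index back with GetRPCSessionIndex, for example to pick the owning session before forwarding a call; the session-table lookup named in the comment is hypothetical and the header location is an assumption.

    #include <tvm/runtime/device_api.h>  // assumed to declare the RPC session helpers

    int OwningSessionSlot(TVMContext client_ctx) {
      // client_ctx must already carry an RPC session mask.
      // The result indexes the client's RPCSessTable (e.g. sessions[slot]).
      return tvm::runtime::GetRPCSessionIndex(client_ctx);
    }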
 <a class="anchor" id="ad01a53416152b68029d67190c3709d25"></a>
 <div class="memitem">
 <div class="memproto">
@@ -996,6 +1079,32 @@ template&lt;typename RefType , typename ObjType &gt; </div>
 
 </div>
 </div>
+<a class="anchor" id="af2a8f6198750ead46feeb72ef4f9de4c"></a>
+<div class="memitem">
+<div class="memproto">
+<table class="mlabels">
+  <tr>
+  <td class="mlabels-left">
+      <table class="memname">
+        <tr>
+          <td class="memname">bool tvm::runtime::IsRPCSessionContext </td>
+          <td>(</td>
+          <td class="paramtype"><a class="el" href="c__runtime__api_8h.html#a9363bb701f16ce5bbb381f2a013d25b4">TVMContext</a>&#160;</td>
+          <td class="paramname"><em>ctx</em></td><td>)</td>
+          <td></td>
+        </tr>
+      </table>
+  </td>
+  <td class="mlabels-right">
+<span class="mlabels"><span class="mlabel">inline</span></span>  </td>
+  </tr>
+</table>
+</div><div class="memdoc">
+
+<p>Return true if a TVMContext is owned by an RPC session. </p>
+
+</div>
+</div>
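A minimal sketch of using IsRPCSessionContext as a guard between local and remote code paths; the surrounding function and dispatch comments are illustrative only, and the header location is an assumption.

    #include <tvm/runtime/device_api.h>  // assumed to declare the RPC session helpers

    void DispatchByOwnership(TVMContext ctx) {
      if (tvm::runtime::IsRPCSessionContext(ctx)) {
        // Owned by an RPC session: route the operation through the owning session.
      } else {
        // Plain local context: talk to the local DeviceAPI directly.
      }
    }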
 <a class="anchor" id="a144496aaff68cd251b6bc0a7b24ca041"></a>
 <div class="memitem">
 <div class="memproto">
@@ -1667,7 +1776,7 @@ template&lt;&gt; </div>
 
 </div>
 </div>
-<a class="anchor" id="a0ce391c2492dfc73b5c6c6459693c6a6"></a>
+<a class="anchor" id="a3578b5c107d5e8ee58b73a2a776e19f1"></a>
 <div class="memitem">
 <div class="memproto">
 <table class="mlabels">
@@ -1675,7 +1784,7 @@ template&lt;&gt; </div>
   <td class="mlabels-left">
       <table class="memname">
         <tr>
-          <td class="memname">std::ostream&amp; tvm::runtime::operator&lt;&lt; </td>
+          <td class="memname">std::ostream &amp; tvm::runtime::operator&lt;&lt; </td>
           <td>(</td>
           <td class="paramtype">std::ostream &amp;&#160;</td>
           <td class="paramname"><em>os</em>, </td>
@@ -2519,6 +2628,39 @@ template&lt;&gt; </div>
 
 </div>
 </div>
+<a class="anchor" id="aea8fddcdd83b2bce46fbff699f43eee6"></a>
+<div class="memitem">
+<div class="memproto">
+<table class="mlabels">
+  <tr>
+  <td class="mlabels-left">
+      <table class="memname">
+        <tr>
+          <td class="memname"><a class="el" href="c__runtime__api_8h.html#a9363bb701f16ce5bbb381f2a013d25b4">TVMContext</a> tvm::runtime::RemoveRPCSessionMask </td>
+          <td>(</td>
+          <td class="paramtype"><a class="el" href="c__runtime__api_8h.html#a9363bb701f16ce5bbb381f2a013d25b4">TVMContext</a>&#160;</td>
+          <td class="paramname"><em>ctx</em></td><td>)</td>
+          <td></td>
+        </tr>
+      </table>
+  </td>
+  <td class="mlabels-right">
+<span class="mlabels"><span class="mlabel">inline</span></span>  </td>
+  </tr>
+</table>
+</div><div class="memdoc">
+
+<p>Remove the RPC session mask from a TVMContext. RPC clients typically do this when encoding a TVMContext for transmission to an RPC remote. On the wire, TVMContexts are expected to be valid on the server without interpretation. </p>
+<dl class="params"><dt>Parameters</dt><dd>
+  <table class="params">
+    <tr><td class="paramname">ctx</td><td>A TVMContext with non-zero RPC Session mask, valid on the RPC client. </td></tr>
+  </table>
+  </dd>
+</dl>
+<dl class="section return"><dt>Returns</dt><dd>A TVMContext without any RPC Session mask, valid on the RPC server. </dd></dl>
+
+</div>
+</div>
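A minimal sketch of the encoding side with RemoveRPCSessionMask: before putting a context on the wire, the client strips the mask so the server receives a context it can use without interpretation. The header location is an assumption.

    #include <tvm/runtime/device_api.h>  // assumed to declare the RPC session helpers

    TVMContext EncodeForWire(TVMContext client_ctx) {
      // client_ctx carries a non-zero RPC session mask on the client.
      // The returned context has the mask removed and is valid on the RPC server.
      return tvm::runtime::RemoveRPCSessionMask(client_ctx);
    }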
 <a class="anchor" id="abbea0c23882ae01431ac7fe6506b32a7"></a>
 <div class="memitem">
 <div class="memproto">
diff --git a/docs/api/doxygen/platform_8h.html b/docs/api/doxygen/platform_8h.html
index 3a03a0e..103fd81 100644
--- a/docs/api/doxygen/platform_8h.html
+++ b/docs/api/doxygen/platform_8h.html
@@ -94,11 +94,13 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 
 <p>The virtual memory manager for micro-controllers.  
 <a href="#details">More...</a></p>
-<div class="textblock"><code>#include &lt;<a class="el" href="error__codes_8h_source.html">tvm/runtime/crt/error_codes.h</a>&gt;</code><br />
+<div class="textblock"><code>#include &lt;stdarg.h&gt;</code><br />
+<code>#include &lt;stddef.h&gt;</code><br />
+<code>#include &lt;<a class="el" href="error__codes_8h_source.html">tvm/runtime/crt/error_codes.h</a>&gt;</code><br />
 </div><div class="textblock"><div class="dynheader">
 Include dependency graph for platform.h:</div>
 <div class="dyncontent">
-<div class="center"><iframe scrolling="no" frameborder="0" src="platform_8h__incl.svg" width="168" height="142"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
+<div class="center"><iframe scrolling="no" frameborder="0" src="platform_8h__incl.svg" width="366" height="142"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
 </div>
 </div>
 </div><div class="textblock"><div class="dynheader">
@@ -115,6 +117,9 @@ Functions</h2></td></tr>
 <tr class="memitem:a47980e4ea2182978f94ca87cc15ca0c8"><td class="memItemLeft" align="right" valign="top">void&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="platform_8h.html#a47980e4ea2182978f94ca87cc15ca0c8">TVMPlatformAbort</a> (<a class="el" href="error__codes_8h.html#a77b4da0131882f0c9b887a47dd34467a">tvm_crt_error_t</a> code)</td></tr>
 <tr class="memdesc:a47980e4ea2182978f94ca87cc15ca0c8"><td class="mdescLeft">&#160;</td><td class="mdescRight">Called when an internal error occurs and execution cannot continue.  <a href="#a47980e4ea2182978f94ca87cc15ca0c8">More...</a><br /></td></tr>
 <tr class="separator:a47980e4ea2182978f94ca87cc15ca0c8"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:a6dfecb024ace62e724817f90b6407285"><td class="memItemLeft" align="right" valign="top">size_t&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="platform_8h.html#a6dfecb024ace62e724817f90b6407285">TVMPlatformFormatMessage</a> (char *out_buf, size_t out_buf_size_bytes, const char *fmt, va_list args)</td></tr>
+<tr class="memdesc:a6dfecb024ace62e724817f90b6407285"><td class="mdescLeft">&#160;</td><td class="mdescRight">Called by the microTVM RPC server to implement TVMLogf.  <a href="#a6dfecb024ace62e724817f90b6407285">More...</a><br /></td></tr>
+<tr class="separator:a6dfecb024ace62e724817f90b6407285"><td class="memSeparator" colspan="2">&#160;</td></tr>
 </table>
 <a name="details" id="details"></a><h2 class="groupheader">Detailed Description</h2>
 <div class="textblock"><p>The virtual memory manager for micro-controllers. </p>
@@ -144,6 +149,57 @@ Functions</h2></td></tr>
 
 </div>
 </div>
+<a class="anchor" id="a6dfecb024ace62e724817f90b6407285"></a>
+<div class="memitem">
+<div class="memproto">
+      <table class="memname">
+        <tr>
+          <td class="memname">size_t TVMPlatformFormatMessage </td>
+          <td>(</td>
+          <td class="paramtype">char *&#160;</td>
+          <td class="paramname"><em>out_buf</em>, </td>
+        </tr>
+        <tr>
+          <td class="paramkey"></td>
+          <td></td>
+          <td class="paramtype">size_t&#160;</td>
+          <td class="paramname"><em>out_buf_size_bytes</em>, </td>
+        </tr>
+        <tr>
+          <td class="paramkey"></td>
+          <td></td>
+          <td class="paramtype">const char *&#160;</td>
+          <td class="paramname"><em>fmt</em>, </td>
+        </tr>
+        <tr>
+          <td class="paramkey"></td>
+          <td></td>
+          <td class="paramtype">va_list&#160;</td>
+          <td class="paramname"><em>args</em>&#160;</td>
+        </tr>
+        <tr>
+          <td></td>
+          <td>)</td>
+          <td></td><td></td>
+        </tr>
+      </table>
+</div><div class="memdoc">
+
+<p>Called by the microTVM RPC server to implement TVMLogf. </p>
+<p>Not required to be implemented when the RPC server is not linked into the binary. This function's signature matches that of vsnprintf, so trivial implementations can just call vsnprintf.</p>
+<dl class="params"><dt>Parameters</dt><dd>
+  <table class="params">
+    <tr><td class="paramname">out_buf</td><td>A char buffer where the formatted string should be written. </td></tr>
+    <tr><td class="paramname">out_buf_size_bytes</td><td>Number of bytes available for writing in out_buf. </td></tr>
+    <tr><td class="paramname">fmt</td><td>The printf-style formatstring. </td></tr>
+    <tr><td class="paramname">args</td><td>extra arguments to be formatted. </td></tr>
+  </table>
+  </dd>
+</dl>
+<dl class="section return"><dt>Returns</dt><dd>number of bytes written. </dd></dl>
+
+</div>
+</div>
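Since the description notes that the signature matches vsnprintf, a platform that links the microTVM RPC server can provide a near-trivial implementation. A hedged sketch (the extern "C" linkage and the exact return-value convention on truncation are assumptions):

    #include <cstdarg>
    #include <cstddef>
    #include <cstdio>

    extern "C" size_t TVMPlatformFormatMessage(char* out_buf, size_t out_buf_size_bytes,
                                               const char* fmt, va_list args) {
      // std::vsnprintf truncates to the buffer and returns the length the whole
      // message would need, or a negative value on an encoding error.
      int ret = std::vsnprintf(out_buf, out_buf_size_bytes, fmt, args);
      return ret < 0 ? 0 : static_cast<size_t>(ret);
    }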
 </div><!-- contents -->
 <!-- start footer part -->
 <hr class="footer"/><address class="footer"><small>
diff --git a/docs/api/doxygen/platform_8h__incl.svg b/docs/api/doxygen/platform_8h__incl.svg
index 23ec639..9a22fc8 100644
--- a/docs/api/doxygen/platform_8h__incl.svg
+++ b/docs/api/doxygen/platform_8h__incl.svg
@@ -4,30 +4,50 @@
 <!-- Generated by graphviz version 2.38.0 (20140413.2041)
  -->
 <!-- Title: include/tvm/runtime/crt/platform.h Pages: 1 -->
-<svg width="126pt" height="106pt"
- viewBox="0.00 0.00 126.00 106.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<svg width="274pt" height="106pt"
+ viewBox="0.00 0.00 274.00 106.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
 <g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 102)">
 <title>include/tvm/runtime/crt/platform.h</title>
-<polygon fill="white" stroke="none" points="-4,4 -4,-102 122,-102 122,4 -4,4"/>
+<polygon fill="white" stroke="none" points="-4,4 -4,-102 270,-102 270,4 -4,4"/>
 <!-- Node1 -->
 <g id="node1" class="node"><title>Node1</title>
-<polygon fill="#bfbfbf" stroke="black" points="2.5,-67.5 2.5,-97.5 115.5,-97.5 115.5,-67.5 2.5,-67.5"/>
-<text text-anchor="start" x="10.5" y="-85.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
-<text text-anchor="middle" x="59" y="-74.5" font-family="Helvetica,sans-Serif" font-size="10.00">/crt/platform.h</text>
+<polygon fill="#bfbfbf" stroke="black" points="45.5,-67.5 45.5,-97.5 158.5,-97.5 158.5,-67.5 45.5,-67.5"/>
+<text text-anchor="start" x="53.5" y="-85.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
+<text text-anchor="middle" x="102" y="-74.5" font-family="Helvetica,sans-Serif" font-size="10.00">/crt/platform.h</text>
 </g>
 <!-- Node2 -->
 <g id="node2" class="node"><title>Node2</title>
-<g id="a_node2"><a xlink:href="error__codes_8h.html" target="_top" xlink:title="Defines integral error codes returned by the CRT. ">
-<polygon fill="white" stroke="black" points="0,-0.5 0,-30.5 118,-30.5 118,-0.5 0,-0.5"/>
-<text text-anchor="start" x="8" y="-18.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/crt/error</text>
-<text text-anchor="middle" x="59" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00">_codes.h</text>
-</a>
-</g>
+<polygon fill="white" stroke="#bfbfbf" points="0,-6 0,-25 56,-25 56,-6 0,-6"/>
+<text text-anchor="middle" x="28" y="-13" font-family="Helvetica,sans-Serif" font-size="10.00">stdarg.h</text>
 </g>
 <!-- Node1&#45;&gt;Node2 -->
 <g id="edge1" class="edge"><title>Node1&#45;&gt;Node2</title>
-<path fill="none" stroke="midnightblue" d="M59,-67.396C59,-59.6448 59,-49.8122 59,-40.8601"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="62.5001,-40.5765 59,-30.5765 55.5001,-40.5765 62.5001,-40.5765"/>
+<path fill="none" stroke="midnightblue" d="M85.9431,-67.396C74.0622,-56.96 57.8855,-42.7507 45.6437,-31.9978"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="47.7214,-29.1644 37.8984,-25.1945 43.1018,-34.4236 47.7214,-29.1644"/>
+</g>
+<!-- Node3 -->
+<g id="node3" class="node"><title>Node3</title>
+<polygon fill="white" stroke="#bfbfbf" points="74.5,-6 74.5,-25 129.5,-25 129.5,-6 74.5,-6"/>
+<text text-anchor="middle" x="102" y="-13" font-family="Helvetica,sans-Serif" font-size="10.00">stddef.h</text>
+</g>
+<!-- Node1&#45;&gt;Node3 -->
+<g id="edge2" class="edge"><title>Node1&#45;&gt;Node3</title>
+<path fill="none" stroke="midnightblue" d="M102,-67.396C102,-58.0638 102,-45.7143 102,-35.5173"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="105.5,-35.1945 102,-25.1945 98.5001,-35.1946 105.5,-35.1945"/>
+</g>
+<!-- Node4 -->
+<g id="node4" class="node"><title>Node4</title>
+<g id="a_node4"><a xlink:href="error__codes_8h.html" target="_top" xlink:title="Defines integral error codes returned by the CRT. ">
+<polygon fill="white" stroke="black" points="148,-0.5 148,-30.5 266,-30.5 266,-0.5 148,-0.5"/>
+<text text-anchor="start" x="156" y="-18.5" font-family="Helvetica,sans-Serif" font-size="10.00">tvm/runtime/crt/error</text>
+<text text-anchor="middle" x="207" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00">_codes.h</text>
+</a>
+</g>
+</g>
+<!-- Node1&#45;&gt;Node4 -->
+<g id="edge3" class="edge"><title>Node1&#45;&gt;Node4</title>
+<path fill="none" stroke="midnightblue" d="M124.783,-67.396C139.644,-58.1968 159.24,-46.0658 175.573,-35.9546"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="177.601,-38.816 184.261,-30.5765 173.916,-32.8642 177.601,-38.816"/>
 </g>
 </g>
 </svg>
diff --git a/docs/api/doxygen/platform_8h_source.html b/docs/api/doxygen/platform_8h_source.html
index 8b9111a..1cc7585 100644
--- a/docs/api/doxygen/platform_8h_source.html
+++ b/docs/api/doxygen/platform_8h_source.html
@@ -89,9 +89,10 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <div class="title">platform.h</div>  </div>
 </div><!--header-->
 <div class="contents">
-<a href="platform_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno">    1</span>&#160;<span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno">    2</span>&#160;<span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno">    3</span>&#160;<span class="comment"> * or more co [...]
+<a href="platform_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno">    1</span>&#160;<span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno">    2</span>&#160;<span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno">    3</span>&#160;<span class="comment"> * or more co [...]
 <div class="ttc" id="error__codes_8h_html_a77b4da0131882f0c9b887a47dd34467a"><div class="ttname"><a href="error__codes_8h.html#a77b4da0131882f0c9b887a47dd34467a">tvm_crt_error_t</a></div><div class="ttdeci">tvm_crt_error_t</div><div class="ttdef"><b>Definition:</b> error_codes.h:46</div></div>
 <div class="ttc" id="platform_8h_html_a47980e4ea2182978f94ca87cc15ca0c8"><div class="ttname"><a href="platform_8h.html#a47980e4ea2182978f94ca87cc15ca0c8">TVMPlatformAbort</a></div><div class="ttdeci">void TVMPlatformAbort(tvm_crt_error_t code)</div><div class="ttdoc">Called when an internal error occurs and execution cannot continue. </div></div>
+<div class="ttc" id="platform_8h_html_a6dfecb024ace62e724817f90b6407285"><div class="ttname"><a href="platform_8h.html#a6dfecb024ace62e724817f90b6407285">TVMPlatformFormatMessage</a></div><div class="ttdeci">size_t TVMPlatformFormatMessage(char *out_buf, size_t out_buf_size_bytes, const char *fmt, va_list args)</div><div class="ttdoc">Called by the microTVM RPC server to implement TVMLogf. </div></div>
 </div><!-- fragment --></div><!-- contents -->
 <!-- start footer part -->
 <hr class="footer"/><address class="footer"><small>
diff --git a/docs/api/doxygen/search/all_1.js b/docs/api/doxygen/search/all_1.js
index 0c1c354..9aa1161 100644
--- a/docs/api/doxygen/search/all_1.js
+++ b/docs/api/doxygen/search/all_1.js
@@ -26,6 +26,7 @@ var searchData=
   ['addnode',['AddNode',['../classtvm_1_1tir_1_1AddNode.html',1,'tvm::tir']]],
   ['address_5fof',['address_of',['../namespacetvm_1_1tir_1_1builtin.html#a700b7018f2c1f1fba8b4e28f264d8bbb',1,'tvm::tir::builtin']]],
   ['addressof',['AddressOf',['../classtvm_1_1runtime_1_1InplaceArrayBase.html#ae4f845e2695ce301c6c3916a6e280c49',1,'tvm::runtime::InplaceArrayBase']]],
+  ['addrpcsessionmask',['AddRPCSessionMask',['../namespacetvm_1_1runtime.html#a409b50f5d118a11f7a9f234498be7c27',1,'tvm::runtime']]],
   ['addtag',['AddTag',['../classtvm_1_1TargetTag.html#a467b19e6f2764e006f6ed412e3a8b48c',1,'tvm::TargetTag']]],
   ['addtypedef',['AddTypeDef',['../classtvm_1_1IRModuleNode.html#a4284c66981befd976af5deadaca2b7f6',1,'tvm::IRModuleNode']]],
   ['addtypedefunchecked',['AddTypeDefUnchecked',['../classtvm_1_1IRModuleNode.html#a1c4aaf62ebed8952d523c3e832051299',1,'tvm::IRModuleNode']]],
@@ -93,8 +94,8 @@ var searchData=
   ['any',['Any',['../classtvm_1_1tir_1_1Any.html#afdb8854b3952dbfa4b02f151ead3bdfd',1,'tvm::tir::Any::Any()'],['../namespacetvm_1_1relay.html#abe473e7f103d7aa63b7b09fee09df932',1,'tvm::relay::Any()'],['../namespacetvm.html#a1ff6a87d9ea4b883f6ee2ea8611da94c',1,'tvm::any()'],['../namespacetvm_1_1topi.html#afb48d90f345698b1b3417bafa1911504',1,'tvm::topi::any()']]],
   ['anycodegenstrategy',['AnyCodegenStrategy',['../namespacetvm_1_1relay.html#adab76fedc831b249d1c80d69c4a620a3',1,'tvm::relay']]],
   ['anyerrors',['AnyErrors',['../classtvm_1_1ErrorReporter.html#a7ec11efb5e9680cfd57e05d573fc0927',1,'tvm::ErrorReporter']]],
-  ['anynode',['AnyNode',['../namespacetvm_1_1relay.html#a63c360628faf2eeb9de326634bc6e80e',1,'tvm::relay']]],
   ['anynode',['AnyNode',['../classtvm_1_1tir_1_1AnyNode.html',1,'tvm::tir']]],
+  ['anynode',['AnyNode',['../namespacetvm_1_1relay.html#a63c360628faf2eeb9de326634bc6e80e',1,'tvm::relay']]],
   ['applystageidoffset',['ApplyStageIdOffset',['../classtvm_1_1auto__scheduler_1_1AttachMap.html#a5be9edb9b3cbc816d6a30cf1429b1378',1,'tvm::auto_scheduler::AttachMap']]],
   ['applysteps',['ApplySteps',['../classtvm_1_1auto__scheduler_1_1ComputeDAG.html#a01db4ed6a9b7a9c8d58a5d9ae1dd530d',1,'tvm::auto_scheduler::ComputeDAG']]],
   ['applytoschedule',['ApplyToSchedule',['../classtvm_1_1auto__scheduler_1_1AnnotationStepNode.html#af468810a1d31012ccd34888d9d300021',1,'tvm::auto_scheduler::AnnotationStepNode::ApplyToSchedule()'],['../classtvm_1_1auto__scheduler_1_1FuseStepNode.html#a43c729be465fd7838e58ea2cbce6404b',1,'tvm::auto_scheduler::FuseStepNode::ApplyToSchedule()'],['../classtvm_1_1auto__scheduler_1_1PragmaStepNode.html#acb91d79dd01d23e0f3fa51733787a8f5',1,'tvm::auto_scheduler::PragmaStepNode::ApplyToSchedule [...]
@@ -287,12 +288,12 @@ var searchData=
   ['attrsnode_3c_20upsamplingattrs_20_3e',['AttrsNode&lt; UpSamplingAttrs &gt;',['../classtvm_1_1AttrsNode.html',1,'tvm']]],
   ['attrsnode_3c_20varianceattrs_20_3e',['AttrsNode&lt; VarianceAttrs &gt;',['../classtvm_1_1AttrsNode.html',1,'tvm']]],
   ['attrsnode_3c_20yoloreorgattrs_20_3e',['AttrsNode&lt; YoloReorgAttrs &gt;',['../classtvm_1_1AttrsNode.html',1,'tvm']]],
-  ['attrssequalvisitor',['AttrsSEqualVisitor',['../classtvm_1_1detail_1_1AttrsSEqualVisitor.html',1,'tvm::detail']]],
   ['attrssequalvisitor',['AttrsSEqualVisitor',['../classtvm_1_1detail_1_1AttrsSEqualVisitor.html#ac67ceda6a413da78e61fa91ca61fcf26',1,'tvm::detail::AttrsSEqualVisitor']]],
+  ['attrssequalvisitor',['AttrsSEqualVisitor',['../classtvm_1_1detail_1_1AttrsSEqualVisitor.html',1,'tvm::detail']]],
   ['attrsshashvisitor',['AttrsSHashVisitor',['../classtvm_1_1detail_1_1AttrsSHashVisitor.html',1,'tvm::detail']]],
   ['attrsshashvisitor',['AttrsSHashVisitor',['../classtvm_1_1detail_1_1AttrsSHashVisitor.html#af5b71e60c1383705d275a5087f3073bb',1,'tvm::detail::AttrsSHashVisitor']]],
-  ['attrstmt',['AttrStmt',['../classtvm_1_1tir_1_1AttrStmt.html#aa13c219c2fe4bfacc7493da15505e2c6',1,'tvm::tir::AttrStmt']]],
   ['attrstmt',['AttrStmt',['../classtvm_1_1tir_1_1AttrStmt.html',1,'tvm::tir']]],
+  ['attrstmt',['AttrStmt',['../classtvm_1_1tir_1_1AttrStmt.html#aa13c219c2fe4bfacc7493da15505e2c6',1,'tvm::tir::AttrStmt']]],
   ['attrstmtnode',['AttrStmtNode',['../classtvm_1_1tir_1_1AttrStmtNode.html',1,'tvm::tir']]],
   ['attrswithdefaultvalues',['AttrsWithDefaultValues',['../namespacetvm.html#a2e3193a20ee748b08d5a528275859dbe',1,'tvm']]],
   ['attrtriggernondefaultentry',['AttrTriggerNonDefaultEntry',['../structtvm_1_1detail_1_1AttrTriggerNonDefaultEntry.html#a572356cfd8d20c258b03f7a5c62d3909',1,'tvm::detail::AttrTriggerNonDefaultEntry']]],
diff --git a/docs/api/doxygen/search/all_12.js b/docs/api/doxygen/search/all_12.js
index ded9c24..f58271f 100644
--- a/docs/api/doxygen/search/all_12.js
+++ b/docs/api/doxygen/search/all_12.js
@@ -67,8 +67,8 @@ var searchData=
   ['reflection_2eh',['reflection.h',['../reflection_8h.html',1,'']]],
   ['reflectiontrait',['ReflectionTrait',['../structtvm_1_1detail_1_1ReflectionTrait.html',1,'tvm::detail']]],
   ['reflectionvtable',['ReflectionVTable',['../classtvm_1_1ReflectionVTable.html',1,'tvm']]],
-  ['refread',['RefRead',['../classtvm_1_1relay_1_1RefRead.html',1,'tvm::relay']]],
   ['refread',['RefRead',['../classtvm_1_1relay_1_1RefRead.html#ae00e55b7051c34f3f2a57f4566913071',1,'tvm::relay::RefRead']]],
+  ['refread',['RefRead',['../classtvm_1_1relay_1_1RefRead.html',1,'tvm::relay']]],
   ['refreadnode',['RefReadNode',['../classtvm_1_1relay_1_1RefReadNode.html',1,'tvm::relay']]],
   ['refvalue',['RefValue',['../classtvm_1_1relay_1_1RefValue.html',1,'tvm::relay']]],
   ['refvalue',['RefValue',['../classtvm_1_1relay_1_1RefValue.html#a00145f9fe1eaf86bfecdbf3c2aac0b0c',1,'tvm::relay::RefValue']]],
@@ -111,6 +111,7 @@ var searchData=
   ['remapthreadaxis',['RemapThreadAxis',['../namespacetvm_1_1tir_1_1transform.html#a25b5de58d543c6786325d87eaad83692',1,'tvm::tir::transform']]],
   ['remove',['Remove',['../classtvm_1_1IRModuleNode.html#a1350c7d68665605f9c4f10850f4a90b9',1,'tvm::IRModuleNode::Remove()'],['../classtvm_1_1runtime_1_1Registry.html#aad89aa915515019c59364b7b569c4648',1,'tvm::runtime::Registry::Remove()']]],
   ['removenoop',['RemoveNoOp',['../namespacetvm_1_1tir_1_1transform.html#a8aad1159425e29be796562b2ec629b10',1,'tvm::tir::transform']]],
+  ['removerpcsessionmask',['RemoveRPCSessionMask',['../namespacetvm_1_1runtime.html#aea8fddcdd83b2bce46fbff699f43eee6',1,'tvm::runtime']]],
   ['removeunusedfunctions',['RemoveUnusedFunctions',['../namespacetvm_1_1relay_1_1transform.html#afbbf5f3e5ffb775fafb9c48473dbfa24',1,'tvm::relay::transform']]],
   ['rend',['rend',['../classtvm_1_1runtime_1_1Array.html#ab9f93fb26aa3d08fd8665abde9d8bacf',1,'tvm::runtime::Array']]],
   ['render',['Render',['../classtvm_1_1DiagnosticRenderer.html#a186c087a55cedd9f55b56c2925f5a559',1,'tvm::DiagnosticRenderer::Render()'],['../classtvm_1_1DiagnosticContext.html#a118fc9eccb99eb0772013eca507d97eb',1,'tvm::DiagnosticContext::Render()']]],
@@ -191,8 +192,8 @@ var searchData=
   ['root_5fiter_5fvars',['root_iter_vars',['../classtvm_1_1te_1_1OperationNode.html#a8d15cfe7d0d721da305c1b36e9f5a914',1,'tvm::te::OperationNode::root_iter_vars()'],['../classtvm_1_1te_1_1PlaceholderOpNode.html#aed3620e14c76716f976ffec15a68f074',1,'tvm::te::PlaceholderOpNode::root_iter_vars()'],['../classtvm_1_1te_1_1BaseComputeOpNode.html#aab7b5b43122ee14bb00640906267361a',1,'tvm::te::BaseComputeOpNode::root_iter_vars()'],['../classtvm_1_1te_1_1ScanOpNode.html#a7a2670bdbf28281b2a8d977e4 [...]
   ['round',['round',['../namespacetvm.html#a660170263d6864b1caa60728619971be',1,'tvm::round()'],['../namespacetvm_1_1topi.html#ac8101cdce02816930697ab74213ff059',1,'tvm::topi::round()']]],
   ['rounding',['rounding',['../structtvm_1_1relay_1_1qnn_1_1RequantizeAttrs.html#ae786b4706ed872d99ad26d6c42467f87',1,'tvm::relay::qnn::RequantizeAttrs']]],
-  ['rpcrunner',['RPCRunner',['../classtvm_1_1auto__scheduler_1_1RPCRunner.html#a732414c413c56320bf1e98572c8517e5',1,'tvm::auto_scheduler::RPCRunner']]],
   ['rpcrunner',['RPCRunner',['../classtvm_1_1auto__scheduler_1_1RPCRunner.html',1,'tvm::auto_scheduler']]],
+  ['rpcrunner',['RPCRunner',['../classtvm_1_1auto__scheduler_1_1RPCRunner.html#a732414c413c56320bf1e98572c8517e5',1,'tvm::auto_scheduler::RPCRunner']]],
   ['rpcrunnernode',['RPCRunnerNode',['../classtvm_1_1auto__scheduler_1_1RPCRunnerNode.html',1,'tvm::auto_scheduler']]],
   ['rpcwrappedfunc',['RPCWrappedFunc',['../classtvm_1_1runtime_1_1NDArray_1_1Container.html#a6ccaf80c7bc6037e59b208845b20db11',1,'tvm::runtime::NDArray::Container']]],
   ['rpn_5fmin_5fsize',['rpn_min_size',['../structtvm_1_1relay_1_1ProposalAttrs.html#abee4a0809679e2a5a4f00e07e9650b5e',1,'tvm::relay::ProposalAttrs']]],
diff --git a/docs/api/doxygen/search/all_13.js b/docs/api/doxygen/search/all_13.js
index 8a1d750..beb798a 100644
--- a/docs/api/doxygen/search/all_13.js
+++ b/docs/api/doxygen/search/all_13.js
@@ -134,7 +134,7 @@ var searchData=
   ['setvalue_3c_20int_20_3e',['SetValue&lt; int &gt;',['../namespacetvm_1_1detail.html#a107ebbb0ef4a94f47cd25cb2213dcd96',1,'tvm::detail']]],
   ['setvalue_3c_20int64_5ft_20_3e',['SetValue&lt; int64_t &gt;',['../namespacetvm_1_1detail.html#ad20586749a52e831a52c20984a926d67',1,'tvm::detail']]],
   ['setvalue_3c_20uint64_5ft_20_3e',['SetValue&lt; uint64_t &gt;',['../namespacetvm_1_1detail.html#acb3382242cbf538f64edae13e4ec5a84',1,'tvm::detail']]],
-  ['shape',['Shape',['../classtvm_1_1runtime_1_1NDArray.html#a04129f44f5d17ab63a10e107a939f282',1,'tvm::runtime::NDArray::Shape()'],['../classtvm_1_1TensorTypeNode.html#a98fa347833e4504dd6f8056d9863a708',1,'tvm::TensorTypeNode::shape()'],['../structtvm_1_1relay_1_1InitOpAttrs.html#aaaec76cc5ea9a543c4ea174a6b38bf5e',1,'tvm::relay::InitOpAttrs::shape()'],['../classtvm_1_1relay_1_1ShapePatternNode.html#a749813cbbd38f8021a7df897d527d6e0',1,'tvm::relay::ShapePatternNode::shape()'],['../struct [...]
+  ['shape',['Shape',['../classtvm_1_1runtime_1_1NDArray.html#a04129f44f5d17ab63a10e107a939f282',1,'tvm::runtime::NDArray::Shape()'],['../classtvm_1_1TensorTypeNode.html#a98fa347833e4504dd6f8056d9863a708',1,'tvm::TensorTypeNode::shape()'],['../structtvm_1_1relay_1_1InitOpAttrs.html#aaaec76cc5ea9a543c4ea174a6b38bf5e',1,'tvm::relay::InitOpAttrs::shape()'],['../classtvm_1_1relay_1_1ShapePatternNode.html#a749813cbbd38f8021a7df897d527d6e0',1,'tvm::relay::ShapePatternNode::shape()'],['../struct [...]
   ['shape_5f',['shape_',['../classtvm_1_1runtime_1_1NDArray_1_1ContainerBase.html#a852a3d49f916098ea6012237dbd242fc',1,'tvm::runtime::NDArray::ContainerBase']]],
   ['shape_5fcount',['shape_count',['../structTVMGraphRuntimeGraphAttr.html#a6e889d9164cb5943b6acff98940b353b',1,'TVMGraphRuntimeGraphAttr']]],
   ['shape_5fof',['shape_of',['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#ab1743df34d06ff8073f4f5c565e0d7f7',1,'tvm::runtime::vm::Instruction']]],
diff --git a/docs/api/doxygen/search/all_14.js b/docs/api/doxygen/search/all_14.js
index 4e4a32a..6e9d818 100644
--- a/docs/api/doxygen/search/all_14.js
+++ b/docs/api/doxygen/search/all_14.js
@@ -365,6 +365,7 @@ var searchData=
   ['tvmpackedfunc_5fsetargs',['TVMPackedFunc_SetArgs',['../crt_2packed__func_8h.html#af145c1c723cc05360ab7b66bcf6f435e',1,'packed_func.h']]],
   ['tvmparallelgroupenv',['TVMParallelGroupEnv',['../structTVMParallelGroupEnv.html',1,'']]],
   ['tvmplatformabort',['TVMPlatformAbort',['../platform_8h.html#a47980e4ea2182978f94ca87cc15ca0c8',1,'platform.h']]],
+  ['tvmplatformformatmessage',['TVMPlatformFormatMessage',['../platform_8h.html#a6dfecb024ace62e724817f90b6407285',1,'platform.h']]],
   ['tvmpodvalue_5f',['TVMPODValue_',['../classtvm_1_1runtime_1_1TVMPODValue__.html',1,'tvm::runtime']]],
   ['tvmpodvalue_5f',['TVMPODValue_',['../classtvm_1_1runtime_1_1NDArray.html#a9a9fd94393cfd7d4b6e6029348e3e19a',1,'tvm::runtime::NDArray::TVMPODValue_()'],['../classtvm_1_1runtime_1_1ObjectPtr.html#a9a9fd94393cfd7d4b6e6029348e3e19a',1,'tvm::runtime::ObjectPtr::TVMPODValue_()'],['../classtvm_1_1runtime_1_1TVMPODValue__.html#a2f46b59a6c1d5eb4575d7f583b5f1a0c',1,'tvm::runtime::TVMPODValue_::TVMPODValue_()'],['../classtvm_1_1runtime_1_1TVMPODValue__.html#afe1837bdbafe8341c2031c5cebcf6e74',1, [...]
   ['tvmretvalue',['TVMRetValue',['../classtvm_1_1runtime_1_1TVMRetValue.html',1,'tvm::runtime']]],
@@ -379,8 +380,8 @@ var searchData=
   ['tvmsynchronize',['TVMSynchronize',['../c__runtime__api_8h.html#a386d7efd946bc750af8bf109f93f6ce2',1,'c_runtime_api.h']]],
   ['tvmsystemlibentrypoint',['TVMSystemLibEntryPoint',['../runtime_2crt_2module_8h.html#a32fdb5a1df93075a184a36d2549833fa',1,'module.h']]],
   ['tvmvalue',['TVMValue',['../unionTVMValue.html',1,'']]],
-  ['type',['Type',['../classtvm_1_1Type.html',1,'tvm']]],
   ['type',['type',['../structtvm_1_1detail_1_1is__specialized.html#a3ea7783c457d7ddc82100674292724f4',1,'tvm::detail::is_specialized::type()'],['../structtvm_1_1detail_1_1is__specialized_3_01Container_3_01Args_8_8_8_01_4_00_01Container_01_4.html#a8dee3a1604498d6bc64948f1c0d19dc2',1,'tvm::detail::is_specialized&lt; Container&lt; Args... &gt;, Container &gt;::type()'],['../classtvm_1_1relay_1_1TypePatternNode.html#aab5faa2a58862707b8dc18b59cccac19',1,'tvm::relay::TypePatternNode::type()'], [...]
+  ['type',['Type',['../classtvm_1_1Type.html',1,'tvm']]],
   ['type_2eh',['type.h',['../ir_2type_8h.html',1,'']]],
   ['type_2eh',['type.h',['../relay_2type_8h.html',1,'']]],
   ['type_5fannotation',['type_annotation',['../classtvm_1_1relay_1_1VarPatternNode.html#ac5c8070ca813620d9d01497292188657',1,'tvm::relay::VarPatternNode::type_annotation()'],['../classtvm_1_1relay_1_1VarNode.html#a79a56885eaf2a9326ff490164a5c1f0e',1,'tvm::relay::VarNode::type_annotation()'],['../classtvm_1_1tir_1_1VarNode.html#a7a84c6d137a79e9a5b9c4b6183f18353',1,'tvm::tir::VarNode::type_annotation()']]],
@@ -402,15 +403,15 @@ var searchData=
   ['type_5frelation_2eh',['type_relation.h',['../type__relation_8h.html',1,'']]],
   ['type_5fvars',['type_vars',['../classtvm_1_1TypeDataNode.html#a350a23efc88be1def5b93d27ac6fa88b',1,'tvm::TypeDataNode']]],
   ['typeannotation',['TypeAnnotation',['../namespacetvm_1_1tir.html#abf355a4fdeb063b1adb4946cad5fca68',1,'tvm::tir']]],
-  ['typecall',['TypeCall',['../classtvm_1_1TypeCall.html#a54ca5beebff2a428241cf7564b496e02',1,'tvm::TypeCall::TypeCall()'],['../namespacetvm_1_1relay.html#ab406a37acee11226e3e2e119beee439e',1,'tvm::relay::TypeCall()']]],
   ['typecall',['TypeCall',['../classtvm_1_1TypeCall.html',1,'tvm']]],
+  ['typecall',['TypeCall',['../classtvm_1_1TypeCall.html#a54ca5beebff2a428241cf7564b496e02',1,'tvm::TypeCall::TypeCall()'],['../namespacetvm_1_1relay.html#ab406a37acee11226e3e2e119beee439e',1,'tvm::relay::TypeCall()']]],
   ['typecallnode',['TypeCallNode',['../namespacetvm_1_1relay.html#af4dccabc877b8fd7db47cb73fb93883e',1,'tvm::relay']]],
   ['typecallnode',['TypeCallNode',['../classtvm_1_1TypeCallNode.html',1,'tvm']]],
   ['typecode',['TypeCode',['../classtvm_1_1runtime_1_1DataType.html#a3c9ce1627be2550f656cd37b6c698c7d',1,'tvm::runtime::DataType']]],
   ['typeconstraint',['TypeConstraint',['../classtvm_1_1TypeConstraint.html',1,'tvm']]],
   ['typeconstraint',['TypeConstraint',['../namespacetvm_1_1relay.html#a64e2e93fe04716efd8334ab4e39c92ce',1,'tvm::relay']]],
-  ['typeconstraintnode',['TypeConstraintNode',['../namespacetvm_1_1relay.html#a565e027589acded20ca38df22be098dc',1,'tvm::relay']]],
   ['typeconstraintnode',['TypeConstraintNode',['../classtvm_1_1TypeConstraintNode.html',1,'tvm']]],
+  ['typeconstraintnode',['TypeConstraintNode',['../namespacetvm_1_1relay.html#a565e027589acded20ca38df22be098dc',1,'tvm::relay']]],
   ['typedata',['TypeData',['../classtvm_1_1TypeData.html#a0a98fd1095812379d2bd1337db1511c1',1,'tvm::TypeData::TypeData()'],['../namespacetvm_1_1relay.html#a6e725a1cb4c83346e261eac7dc7292a8',1,'tvm::relay::TypeData()']]],
   ['typedata',['TypeData',['../classtvm_1_1TypeData.html',1,'tvm']]],
   ['typedatanode',['TypeDataNode',['../namespacetvm_1_1relay.html#a2b8c0d5920eaca88569907e92df6066f',1,'tvm::relay']]],
@@ -438,8 +439,8 @@ var searchData=
   ['typekind',['TypeKind',['../namespacetvm.html#acd267f8d7f55da6ac681239831963279',1,'tvm']]],
   ['typematch',['TypeMatch',['../namespacetvm_1_1runtime.html#adbabb7cfb79bfb6d802f65a9803e4eb6',1,'tvm::runtime']]],
   ['typemutator',['TypeMutator',['../classtvm_1_1TypeMutator.html',1,'tvm']]],
-  ['typename',['TypeName',['../structtvm_1_1runtime_1_1ObjectTypeChecker_3_01Array_3_01T_01_4_01_4.html#aab22b555cfe16d040c204527c73a3287',1,'tvm::runtime::ObjectTypeChecker&lt; Array&lt; T &gt; &gt;::TypeName()'],['../structtvm_1_1runtime_1_1ObjectTypeChecker_3_01Map_3_01K_00_01V_01_4_01_4.html#a9cb994fc6604ad2c287d9e824e67d2e2',1,'tvm::runtime::ObjectTypeChecker&lt; Map&lt; K, V &gt; &gt;::TypeName()'],['../structtvm_1_1runtime_1_1ObjectTypeChecker.html#a3498eb545b33e1c23a417fa58ec51dd [...]
   ['typename',['TypeName',['../structtvm_1_1detail_1_1TypeName.html',1,'tvm::detail']]],
+  ['typename',['TypeName',['../structtvm_1_1runtime_1_1ObjectTypeChecker_3_01Array_3_01T_01_4_01_4.html#aab22b555cfe16d040c204527c73a3287',1,'tvm::runtime::ObjectTypeChecker&lt; Array&lt; T &gt; &gt;::TypeName()'],['../structtvm_1_1runtime_1_1ObjectTypeChecker_3_01Map_3_01K_00_01V_01_4_01_4.html#a9cb994fc6604ad2c287d9e824e67d2e2',1,'tvm::runtime::ObjectTypeChecker&lt; Map&lt; K, V &gt; &gt;::TypeName()'],['../structtvm_1_1runtime_1_1ObjectTypeChecker.html#a3498eb545b33e1c23a417fa58ec51dd [...]
   ['typename_3c_20bool_20_3e',['TypeName&lt; bool &gt;',['../structtvm_1_1detail_1_1TypeName_3_01bool_01_4.html',1,'tvm::detail']]],
   ['typename_3c_20datatype_20_3e',['TypeName&lt; DataType &gt;',['../structtvm_1_1detail_1_1TypeName_3_01DataType_01_4.html',1,'tvm::detail']]],
   ['typename_3c_20double_20_3e',['TypeName&lt; double &gt;',['../structtvm_1_1detail_1_1TypeName_3_01double_01_4.html',1,'tvm::detail']]],
@@ -447,18 +448,18 @@ var searchData=
   ['typename_3c_20int64_5ft_20_3e',['TypeName&lt; int64_t &gt;',['../structtvm_1_1detail_1_1TypeName_3_01int64__t_01_4.html',1,'tvm::detail']]],
   ['typename_3c_20uint64_5ft_20_3e',['TypeName&lt; uint64_t &gt;',['../structtvm_1_1detail_1_1TypeName_3_01uint64__t_01_4.html',1,'tvm::detail']]],
   ['typename_3c_20void_20_2a_20_3e',['TypeName&lt; void * &gt;',['../structtvm_1_1detail_1_1TypeName_3_01void_01_5_01_4.html',1,'tvm::detail']]],
-  ['typenode',['TypeNode',['../classtvm_1_1TypeNode.html',1,'tvm']]],
   ['typenode',['TypeNode',['../namespacetvm_1_1relay.html#af6995f0c848d0d5cc4124a38f43aaf12',1,'tvm::relay']]],
+  ['typenode',['TypeNode',['../classtvm_1_1TypeNode.html',1,'tvm']]],
   ['typepattern',['TypePattern',['../classtvm_1_1relay_1_1TypePattern.html#a3364c4747a676e0e33e8127fe17632ea',1,'tvm::relay::TypePattern']]],
   ['typepattern',['TypePattern',['../classtvm_1_1relay_1_1TypePattern.html',1,'tvm::relay']]],
   ['typepatternnode',['TypePatternNode',['../classtvm_1_1relay_1_1TypePatternNode.html',1,'tvm::relay']]],
-  ['typerelation',['TypeRelation',['../classtvm_1_1TypeRelation.html',1,'tvm']]],
   ['typerelation',['TypeRelation',['../classtvm_1_1TypeRelation.html#ac26b1897eab8197ed26606ab81b7403b',1,'tvm::TypeRelation::TypeRelation()'],['../namespacetvm_1_1relay.html#adab0d56fd993df71df3068dea0cd5456',1,'tvm::relay::TypeRelation()']]],
+  ['typerelation',['TypeRelation',['../classtvm_1_1TypeRelation.html',1,'tvm']]],
   ['typerelationfn',['TypeRelationFn',['../namespacetvm.html#a72dcba4493adfcd8908663898ece3514',1,'tvm::TypeRelationFn()'],['../namespacetvm_1_1relay.html#af253112249297a6cfb2a9b94cde0f235',1,'tvm::relay::TypeRelationFn()']]],
-  ['typerelationnode',['TypeRelationNode',['../namespacetvm_1_1relay.html#a89d812eaf13520b04e89a9414c51748c',1,'tvm::relay']]],
   ['typerelationnode',['TypeRelationNode',['../classtvm_1_1TypeRelationNode.html',1,'tvm']]],
-  ['typereporter',['TypeReporter',['../classtvm_1_1TypeReporter.html#a8e7e05a07f9f7ad9bea91f27afac9051',1,'tvm::TypeReporter::TypeReporter()'],['../classtvm_1_1TypeReporter.html#aa3dc38a3c84d324d0b3a9f358460a091',1,'tvm::TypeReporter::TypeReporter(ObjectPtr&lt; Object &gt; n)'],['../namespacetvm_1_1relay.html#afa9be9990c2006832cbfc02ebb35e527',1,'tvm::relay::TypeReporter()']]],
+  ['typerelationnode',['TypeRelationNode',['../namespacetvm_1_1relay.html#a89d812eaf13520b04e89a9414c51748c',1,'tvm::relay']]],
   ['typereporter',['TypeReporter',['../classtvm_1_1TypeReporter.html',1,'tvm']]],
+  ['typereporter',['TypeReporter',['../classtvm_1_1TypeReporter.html#a8e7e05a07f9f7ad9bea91f27afac9051',1,'tvm::TypeReporter::TypeReporter()'],['../classtvm_1_1TypeReporter.html#aa3dc38a3c84d324d0b3a9f358460a091',1,'tvm::TypeReporter::TypeReporter(ObjectPtr&lt; Object &gt; n)'],['../namespacetvm_1_1relay.html#afa9be9990c2006832cbfc02ebb35e527',1,'tvm::relay::TypeReporter()']]],
   ['typereporternode',['TypeReporterNode',['../namespacetvm_1_1relay.html#aaa3b5700ea20db399f539cec1abcb12b',1,'tvm::relay']]],
   ['typereporternode',['TypeReporterNode',['../classtvm_1_1TypeReporterNode.html',1,'tvm']]],
   ['typevar',['TypeVar',['../classtvm_1_1TypeVar.html#adf5ef8e89d162735519b5d125c89e3e3',1,'tvm::TypeVar::TypeVar()'],['../namespacetvm_1_1relay.html#a63321eb51080f3f57dd7563a3ca0bfa6',1,'tvm::relay::TypeVar()']]],
diff --git a/docs/api/doxygen/search/all_7.js b/docs/api/doxygen/search/all_7.js
index a43ce73..d03cc7b 100644
--- a/docs/api/doxygen/search/all_7.js
+++ b/docs/api/doxygen/search/all_7.js
@@ -67,6 +67,7 @@ var searchData=
   ['getrealaxis',['GetRealAxis',['../namespacetvm_1_1topi.html#aa45cdc15f72e867eff29c74b2dffd185',1,'tvm::topi']]],
   ['getref',['GetRef',['../classtvm_1_1runtime_1_1ObjectPtr.html#a4365e69ddcc4d8c13904852391b99268',1,'tvm::runtime::ObjectPtr::GetRef()'],['../namespacetvm_1_1runtime.html#aa4a97de4fefd23aa5942c6a545544a05',1,'tvm::runtime::GetRef(const ObjectType *ptr)'],['../namespacetvm_1_1runtime.html#af63300957592e8991c18c54703123ef7',1,'tvm::runtime::GetRef(const ObjType *ptr)']]],
   ['getreprbytes',['GetReprBytes',['../classtvm_1_1ReflectionVTable.html#acc577dacd480beaee8f905bab8d2029c',1,'tvm::ReflectionVTable']]],
+  ['getrpcsessionindex',['GetRPCSessionIndex',['../namespacetvm_1_1runtime.html#a9ac54b0d7a3e3c22fd0ddef0a731cfd5',1,'tvm::runtime']]],
   ['getruntimedatatype',['GetRuntimeDataType',['../namespacetvm.html#a0447e9aa45f6cab707f6dc9f9281b3f5',1,'tvm']]],
   ['getshape',['GetShape',['../classtvm_1_1te_1_1TensorNode.html#a35df267780880731a051773b2dca7bbd',1,'tvm::te::TensorNode::GetShape()'],['../classtvm_1_1tir_1_1DataProducerNode.html#aa44d3146c9f543538a32dfd954512bf7',1,'tvm::tir::DataProducerNode::GetShape()']]],
   ['getsigntype',['GetSignType',['../classtvm_1_1arith_1_1IntSet.html#a651c7689d1da9b8f931312ffd6954dbd',1,'tvm::arith::IntSet']]],
@@ -84,10 +85,10 @@ var searchData=
   ['globalpool2dattrs',['GlobalPool2DAttrs',['../structtvm_1_1relay_1_1GlobalPool2DAttrs.html',1,'tvm::relay']]],
   ['globaltypevar',['GlobalTypeVar',['../classtvm_1_1GlobalTypeVar.html#a323a2269c3ab4edf67796d5d51fc5ebf',1,'tvm::GlobalTypeVar::GlobalTypeVar()'],['../namespacetvm_1_1relay.html#a2235e350f9cd1eac3aa0177034976043',1,'tvm::relay::GlobalTypeVar()']]],
   ['globaltypevar',['GlobalTypeVar',['../classtvm_1_1GlobalTypeVar.html',1,'tvm']]],
-  ['globaltypevarnode',['GlobalTypeVarNode',['../namespacetvm_1_1relay.html#a9a10e2305e3a50dd00e07b043b93b5e8',1,'tvm::relay']]],
   ['globaltypevarnode',['GlobalTypeVarNode',['../classtvm_1_1GlobalTypeVarNode.html',1,'tvm']]],
-  ['globalvar',['GlobalVar',['../classtvm_1_1GlobalVar.html',1,'tvm']]],
+  ['globaltypevarnode',['GlobalTypeVarNode',['../namespacetvm_1_1relay.html#a9a10e2305e3a50dd00e07b043b93b5e8',1,'tvm::relay']]],
   ['globalvar',['GlobalVar',['../classtvm_1_1GlobalVar.html#a245549e21b51742150a22bfbec80f53e',1,'tvm::GlobalVar::GlobalVar()'],['../namespacetvm_1_1relay.html#a81ac7c3d0824529fddce7849c9c66289',1,'tvm::relay::GlobalVar()']]],
+  ['globalvar',['GlobalVar',['../classtvm_1_1GlobalVar.html',1,'tvm']]],
   ['globalvarnode',['GlobalVarNode',['../namespacetvm_1_1relay.html#afe7144195dbbc914183189444ef6a347',1,'tvm::relay']]],
   ['globalvarnode',['GlobalVarNode',['../classtvm_1_1GlobalVarNode.html',1,'tvm']]],
   ['goto',['Goto',['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#a40b49fe5c05c5fe5f7a5c5c01bf651c8',1,'tvm::runtime::vm::Instruction::Goto()'],['../namespacetvm_1_1runtime_1_1vm.html#a8d8d95ce8d629c7213f2f595917870ecaae8fae3d74fdc80ef40995c3c6ca152e',1,'tvm::runtime::vm::Goto()']]],
diff --git a/docs/api/doxygen/search/all_9.js b/docs/api/doxygen/search/all_9.js
index 8574772..bae7431 100644
--- a/docs/api/doxygen/search/all_9.js
+++ b/docs/api/doxygen/search/all_9.js
@@ -177,6 +177,7 @@ var searchData=
   ['ispragmakey',['IsPragmaKey',['../namespacetvm_1_1tir_1_1attr.html#a385e883a7cecc309d063786e5fdf2c4b',1,'tvm::tir::attr']]],
   ['isprimal',['IsPrimal',['../classtvm_1_1tir_1_1LayoutAxis.html#a13e11bef75e29b71977779124f72e1b9',1,'tvm::tir::LayoutAxis']]],
   ['isprimitiveop',['IsPrimitiveOp',['../classtvm_1_1OpNode.html#a285c8dc0ccec2ca34386271d1b338506',1,'tvm::OpNode::IsPrimitiveOp() const '],['../classtvm_1_1OpNode.html#aee9090e54dff3e72ed272b981e036ae6',1,'tvm::OpNode::IsPrimitiveOp()'],['../namespacetvm.html#a8259e23409eda017c6bde908e050b670',1,'tvm::IsPrimitiveOp()']]],
+  ['isrpcsessioncontext',['IsRPCSessionContext',['../namespacetvm_1_1runtime.html#af2a8f6198750ead46feeb72ef4f9de4c',1,'tvm::runtime']]],
   ['issimpleaccess',['IsSimpleAccess',['../classtvm_1_1auto__scheduler_1_1AccessAnalyzer.html#a9a059e6b4d9a04a700c9bf5ed72db7d1',1,'tvm::auto_scheduler::AccessAnalyzer']]],
   ['issinglepoint',['IsSinglePoint',['../classtvm_1_1arith_1_1IntSet.html#a7422ed5fde1738b2930af58666f9a946',1,'tvm::arith::IntSet']]],
   ['isstrictlyinlineable',['IsStrictlyInlineable',['../classtvm_1_1auto__scheduler_1_1AccessAnalyzer.html#a77b7f4c645c1d5f1bbf64417d718a3ce',1,'tvm::auto_scheduler::AccessAnalyzer']]],
@@ -191,9 +192,9 @@ var searchData=
   ['iteradapter',['IterAdapter',['../classtvm_1_1runtime_1_1IterAdapter.html',1,'tvm::runtime']]],
   ['iteradapter',['IterAdapter',['../classtvm_1_1runtime_1_1IterAdapter.html#abd170b9025d87892f41780c2a4d8bbfc',1,'tvm::runtime::IterAdapter']]],
   ['iterator',['Iterator',['../classtvm_1_1auto__scheduler_1_1Iterator.html',1,'tvm::auto_scheduler']]],
-  ['iterator',['iterator',['../classtvm_1_1Map_1_1iterator.html',1,'tvm::Map']]],
-  ['iterator',['iterator',['../classtvm_1_1runtime_1_1Array.html#a98e5ad633b8195d954c98067213ae29f',1,'tvm::runtime::Array::iterator()'],['../classtvm_1_1MapNode_1_1iterator.html#a88e86aa2905755683efaa0e8cfff0a2c',1,'tvm::MapNode::iterator::iterator()'],['../classtvm_1_1MapNode_1_1iterator.html#ab8b919a83a0dd9a4bd386ad3425763ae',1,'tvm::MapNode::iterator::iterator(uint64_t index, const MapNode *self)'],['../classtvm_1_1Map_1_1iterator.html#a030cc3169e173281d8f75f37a127d2cd',1,'tvm::Map:: [...]
   ['iterator',['iterator',['../classtvm_1_1MapNode_1_1iterator.html',1,'tvm::MapNode']]],
+  ['iterator',['iterator',['../classtvm_1_1runtime_1_1Array.html#a98e5ad633b8195d954c98067213ae29f',1,'tvm::runtime::Array::iterator()'],['../classtvm_1_1MapNode_1_1iterator.html#a88e86aa2905755683efaa0e8cfff0a2c',1,'tvm::MapNode::iterator::iterator()'],['../classtvm_1_1MapNode_1_1iterator.html#ab8b919a83a0dd9a4bd386ad3425763ae',1,'tvm::MapNode::iterator::iterator(uint64_t index, const MapNode *self)'],['../classtvm_1_1Map_1_1iterator.html#a030cc3169e173281d8f75f37a127d2cd',1,'tvm::Map:: [...]
+  ['iterator',['iterator',['../classtvm_1_1Map_1_1iterator.html',1,'tvm::Map']]],
   ['iterator_5fcategory',['iterator_category',['../classtvm_1_1MapNode_1_1iterator.html#a38e872db5b6f2d7475ed91ff22254084',1,'tvm::MapNode::iterator::iterator_category()'],['../classtvm_1_1Map_1_1iterator.html#acec00aa74a93f8fcb1e4d7e439073312',1,'tvm::Map::iterator::iterator_category()'],['../classtvm_1_1runtime_1_1IterAdapter.html#ad3b2d03b6683bdaee0b524612b53d419',1,'tvm::runtime::IterAdapter::iterator_category()'],['../classtvm_1_1runtime_1_1ReverseIterAdapter.html#a4548877b6a33598c7 [...]
   ['iteratorannotation',['IteratorAnnotation',['../namespacetvm_1_1auto__scheduler.html#ad81bc395fc88957fbd33bf041adbe0ec',1,'tvm::auto_scheduler']]],
   ['iteratorannotationstring',['IteratorAnnotationString',['../namespacetvm_1_1auto__scheduler.html#a8eade4c9463a2502919ec5d2fdbd5b64',1,'tvm::auto_scheduler']]],
@@ -203,12 +204,12 @@ var searchData=
   ['iterkeyhash',['IterKeyHash',['../structtvm_1_1auto__scheduler_1_1AttachMapNode_1_1IterKeyHash.html',1,'tvm::auto_scheduler::AttachMapNode']]],
   ['itermapexpr',['IterMapExpr',['../classtvm_1_1arith_1_1IterMapExpr.html',1,'tvm::arith']]],
   ['itermapexprnode',['IterMapExprNode',['../classtvm_1_1arith_1_1IterMapExprNode.html',1,'tvm::arith']]],
-  ['itermark',['IterMark',['../classtvm_1_1arith_1_1IterMark.html#a7b46a2bc2460f43e529a6fc65a0a618d',1,'tvm::arith::IterMark']]],
   ['itermark',['IterMark',['../classtvm_1_1arith_1_1IterMark.html',1,'tvm::arith']]],
+  ['itermark',['IterMark',['../classtvm_1_1arith_1_1IterMark.html#a7b46a2bc2460f43e529a6fc65a0a618d',1,'tvm::arith::IterMark']]],
   ['itermarknode',['IterMarkNode',['../classtvm_1_1arith_1_1IterMarkNode.html',1,'tvm::arith']]],
   ['iters',['iters',['../classtvm_1_1auto__scheduler_1_1StageNode.html#a65304957db6f84d8d7c90ad553453bb9',1,'tvm::auto_scheduler::StageNode']]],
-  ['itersplitexpr',['IterSplitExpr',['../classtvm_1_1arith_1_1IterSplitExpr.html#a754a9d8338aa2d2b5fac9e10c95c9128',1,'tvm::arith::IterSplitExpr::IterSplitExpr(IterMark source)'],['../classtvm_1_1arith_1_1IterSplitExpr.html#af919631fd9bfb7726d0a867ee9f0e6f5',1,'tvm::arith::IterSplitExpr::IterSplitExpr(IterMark source, PrimExpr scale)'],['../classtvm_1_1arith_1_1IterSplitExpr.html#a59bd2fa8d07f4ad2c4ac09c8f7004cb8',1,'tvm::arith::IterSplitExpr::IterSplitExpr(IterMark source, PrimExpr lowe [...]
   ['itersplitexpr',['IterSplitExpr',['../classtvm_1_1arith_1_1IterSplitExpr.html',1,'tvm::arith']]],
+  ['itersplitexpr',['IterSplitExpr',['../classtvm_1_1arith_1_1IterSplitExpr.html#a754a9d8338aa2d2b5fac9e10c95c9128',1,'tvm::arith::IterSplitExpr::IterSplitExpr(IterMark source)'],['../classtvm_1_1arith_1_1IterSplitExpr.html#af919631fd9bfb7726d0a867ee9f0e6f5',1,'tvm::arith::IterSplitExpr::IterSplitExpr(IterMark source, PrimExpr scale)'],['../classtvm_1_1arith_1_1IterSplitExpr.html#a59bd2fa8d07f4ad2c4ac09c8f7004cb8',1,'tvm::arith::IterSplitExpr::IterSplitExpr(IterMark source, PrimExpr lowe [...]
   ['itersplitexprnode',['IterSplitExprNode',['../classtvm_1_1arith_1_1IterSplitExprNode.html',1,'tvm::arith']]],
   ['itersumexpr',['IterSumExpr',['../classtvm_1_1arith_1_1IterSumExpr.html',1,'tvm::arith']]],
   ['itersumexpr',['IterSumExpr',['../classtvm_1_1arith_1_1IterSumExpr.html#a1b9f8013f3978bafe4da3a6cad65fb36',1,'tvm::arith::IterSumExpr']]],
diff --git a/docs/api/doxygen/search/all_f.js b/docs/api/doxygen/search/all_f.js
index bac9e7d..eea38fd 100644
--- a/docs/api/doxygen/search/all_f.js
+++ b/docs/api/doxygen/search/all_f.js
@@ -80,7 +80,7 @@ var searchData=
   ['operator_2f',['operator/',['../namespacetvm.html#a18256ba1213ce5ff3cf8037a314354b7',1,'tvm::operator/(PrimExpr a, PrimExpr b)'],['../namespacetvm.html#a136427374941fbf8e50f53b1cab39e38',1,'tvm::operator/(const PrimExpr &amp;a, const TB &amp;b)']]],
   ['operator_2f_3d',['operator/=',['../namespacetvm.html#a51dc569142bf8ce8ea55f73029d3807d',1,'tvm']]],
   ['operator_3c',['operator&lt;',['../classtvm_1_1runtime_1_1ObjectRef.html#a17626209947c4a2f302422be451661c5',1,'tvm::runtime::ObjectRef::operator&lt;()'],['../namespacetvm_1_1runtime.html#a2865dffa2fddf5eff9d7ed397563ebd6',1,'tvm::runtime::operator&lt;(const String &amp;lhs, const std::string &amp;rhs)'],['../namespacetvm_1_1runtime.html#ad5305faaeefd679da62186dab423bdab',1,'tvm::runtime::operator&lt;(const std::string &amp;lhs, const String &amp;rhs)'],['../namespacetvm_1_1runtime.htm [...]
-  ['operator_3c_3c',['operator&lt;&lt;',['../classtvm_1_1DiagnosticBuilder.html#aa92a3f9039d464fbefaed90b0e255e84',1,'tvm::DiagnosticBuilder::operator&lt;&lt;()'],['../structtvm_1_1ErrorBuilder.html#ad40b754d2d8992b65d0bc5b116bd3f71',1,'tvm::ErrorBuilder::operator&lt;&lt;()'],['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#a7948440c8e6f670e3c782619415dc184',1,'tvm::runtime::vm::Instruction::operator&lt;&lt;()'],['../structtvm_1_1runtime_1_1vm_1_1VMFunction.html#a4dd5eae76553d1be115e7 [...]
+  ['operator_3c_3c',['operator&lt;&lt;',['../classtvm_1_1DiagnosticBuilder.html#aa92a3f9039d464fbefaed90b0e255e84',1,'tvm::DiagnosticBuilder::operator&lt;&lt;()'],['../structtvm_1_1ErrorBuilder.html#ad40b754d2d8992b65d0bc5b116bd3f71',1,'tvm::ErrorBuilder::operator&lt;&lt;()'],['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#a7948440c8e6f670e3c782619415dc184',1,'tvm::runtime::vm::Instruction::operator&lt;&lt;()'],['../structtvm_1_1runtime_1_1vm_1_1VMFunction.html#a4dd5eae76553d1be115e7 [...]
   ['operator_3c_3d',['operator&lt;=',['../namespacetvm_1_1runtime.html#a92428efae022d4982b2644f8960d4386',1,'tvm::runtime::operator&lt;=(const String &amp;lhs, const std::string &amp;rhs)'],['../namespacetvm_1_1runtime.html#a8daf39dc422f228fae2ec11a426bab28',1,'tvm::runtime::operator&lt;=(const std::string &amp;lhs, const String &amp;rhs)'],['../namespacetvm_1_1runtime.html#a9cf2e7e67fd12d69c5bce2be881c8296',1,'tvm::runtime::operator&lt;=(const String &amp;lhs, const String &amp;rhs)'],[ [...]
   ['operator_3d',['operator=',['../classtvm_1_1arith_1_1Analyzer.html#a9dccc7d98b8b9465390e10436d3a9178',1,'tvm::arith::Analyzer::operator=()'],['../classtvm_1_1TypedEnvFunc_3_01R_07Args_8_8_8_08_4.html#aab332907b9f98876f441f6403b801187',1,'tvm::TypedEnvFunc&lt; R(Args...)&gt;::operator=()'],['../classtvm_1_1Integer.html#ad538a2ae6f636b3ce38fb4162b1c2549',1,'tvm::Integer::operator=()'],['../classtvm_1_1Map.html#acf11920f50d9a6283cd2f1ed9985bfca',1,'tvm::Map::operator=(Map&lt; K, V &gt; & [...]
   ['operator_3d_3d',['operator==',['../classtvm_1_1Integer.html#ad71119ff42add763e11fafe4f4194f6a',1,'tvm::Integer::operator==(int other) const '],['../classtvm_1_1Integer.html#ae395bcab691d3bdd0fc5978fe5addea3',1,'tvm::Integer::operator==(Enum other) const '],['../classtvm_1_1MapNode_1_1iterator.html#a127d6726a9fd54602523f711725c4c3f',1,'tvm::MapNode::iterator::operator==()'],['../classtvm_1_1Map_1_1iterator.html#a1e2c5fa12e3cb81b3e5e3b999e6da4ec',1,'tvm::Map::iterator::operator==()'],[ [...]
diff --git a/docs/api/doxygen/search/functions_1.js b/docs/api/doxygen/search/functions_1.js
index 2f85952..1a51c96 100644
--- a/docs/api/doxygen/search/functions_1.js
+++ b/docs/api/doxygen/search/functions_1.js
@@ -15,6 +15,7 @@ var searchData=
   ['addimplementation',['AddImplementation',['../classtvm_1_1relay_1_1OpSpecialization.html#ac76d48fb032a2e732a4309dc0ce7a636',1,'tvm::relay::OpSpecialization::AddImplementation()'],['../classtvm_1_1relay_1_1OpStrategy.html#a3d389c44571d9b3e5a6135f22b8a7bf3',1,'tvm::relay::OpStrategy::AddImplementation()']]],
   ['address_5fof',['address_of',['../namespacetvm_1_1tir_1_1builtin.html#a700b7018f2c1f1fba8b4e28f264d8bbb',1,'tvm::tir::builtin']]],
   ['addressof',['AddressOf',['../classtvm_1_1runtime_1_1InplaceArrayBase.html#ae4f845e2695ce301c6c3916a6e280c49',1,'tvm::runtime::InplaceArrayBase']]],
+  ['addrpcsessionmask',['AddRPCSessionMask',['../namespacetvm_1_1runtime.html#a409b50f5d118a11f7a9f234498be7c27',1,'tvm::runtime']]],
   ['addtag',['AddTag',['../classtvm_1_1TargetTag.html#a467b19e6f2764e006f6ed412e3a8b48c',1,'tvm::TargetTag']]],
   ['addtypedef',['AddTypeDef',['../classtvm_1_1IRModuleNode.html#a4284c66981befd976af5deadaca2b7f6',1,'tvm::IRModuleNode']]],
   ['addtypedefunchecked',['AddTypeDefUnchecked',['../classtvm_1_1IRModuleNode.html#a1c4aaf62ebed8952d523c3e832051299',1,'tvm::IRModuleNode']]],
diff --git a/docs/api/doxygen/search/functions_12.js b/docs/api/doxygen/search/functions_12.js
index 6d0fdff..60d9cbe 100644
--- a/docs/api/doxygen/search/functions_12.js
+++ b/docs/api/doxygen/search/functions_12.js
@@ -38,6 +38,7 @@ var searchData=
   ['remapthreadaxis',['RemapThreadAxis',['../namespacetvm_1_1tir_1_1transform.html#a25b5de58d543c6786325d87eaad83692',1,'tvm::tir::transform']]],
   ['remove',['Remove',['../classtvm_1_1IRModuleNode.html#a1350c7d68665605f9c4f10850f4a90b9',1,'tvm::IRModuleNode::Remove()'],['../classtvm_1_1runtime_1_1Registry.html#aad89aa915515019c59364b7b569c4648',1,'tvm::runtime::Registry::Remove()']]],
   ['removenoop',['RemoveNoOp',['../namespacetvm_1_1tir_1_1transform.html#a8aad1159425e29be796562b2ec629b10',1,'tvm::tir::transform']]],
+  ['removerpcsessionmask',['RemoveRPCSessionMask',['../namespacetvm_1_1runtime.html#aea8fddcdd83b2bce46fbff699f43eee6',1,'tvm::runtime']]],
   ['removeunusedfunctions',['RemoveUnusedFunctions',['../namespacetvm_1_1relay_1_1transform.html#afbbf5f3e5ffb775fafb9c48473dbfa24',1,'tvm::relay::transform']]],
   ['rend',['rend',['../classtvm_1_1runtime_1_1Array.html#ab9f93fb26aa3d08fd8665abde9d8bacf',1,'tvm::runtime::Array']]],
   ['render',['Render',['../classtvm_1_1DiagnosticRenderer.html#a186c087a55cedd9f55b56c2925f5a559',1,'tvm::DiagnosticRenderer::Render()'],['../classtvm_1_1DiagnosticContext.html#a118fc9eccb99eb0772013eca507d97eb',1,'tvm::DiagnosticContext::Render()']]],
diff --git a/docs/api/doxygen/search/functions_14.js b/docs/api/doxygen/search/functions_14.js
index 0b67780..deb8b47 100644
--- a/docs/api/doxygen/search/functions_14.js
+++ b/docs/api/doxygen/search/functions_14.js
@@ -148,6 +148,7 @@ var searchData=
   ['tvmpackedfunc_5finitmodulefunc',['TVMPackedFunc_InitModuleFunc',['../crt_2packed__func_8h.html#a65f35e3b3f521d105d7aa71347135efd',1,'packed_func.h']]],
   ['tvmpackedfunc_5fsetargs',['TVMPackedFunc_SetArgs',['../crt_2packed__func_8h.html#af145c1c723cc05360ab7b66bcf6f435e',1,'packed_func.h']]],
   ['tvmplatformabort',['TVMPlatformAbort',['../platform_8h.html#a47980e4ea2182978f94ca87cc15ca0c8',1,'platform.h']]],
+  ['tvmplatformformatmessage',['TVMPlatformFormatMessage',['../platform_8h.html#a6dfecb024ace62e724817f90b6407285',1,'platform.h']]],
   ['tvmpodvalue_5f',['TVMPODValue_',['../classtvm_1_1runtime_1_1TVMPODValue__.html#a2f46b59a6c1d5eb4575d7f583b5f1a0c',1,'tvm::runtime::TVMPODValue_::TVMPODValue_()'],['../classtvm_1_1runtime_1_1TVMPODValue__.html#afe1837bdbafe8341c2031c5cebcf6e74',1,'tvm::runtime::TVMPODValue_::TVMPODValue_(TVMValue value, int type_code)']]],
   ['tvmretvalue',['TVMRetValue',['../classtvm_1_1runtime_1_1TVMRetValue.html#a77455a8fe7d27b90a01a64f1cd28e9ec',1,'tvm::runtime::TVMRetValue::TVMRetValue()'],['../classtvm_1_1runtime_1_1TVMRetValue.html#ac4a3850c0989e7c2d5cd8e0f096d0997',1,'tvm::runtime::TVMRetValue::TVMRetValue(TVMRetValue &amp;&amp;other)'],['../classtvm_1_1runtime_1_1TVMRetValue.html#ab86bf21f214fca72e73a7f6e20ffab8d',1,'tvm::runtime::TVMRetValue::TVMRetValue(const TVMRetValue &amp;other)']]],
   ['tvmsetstream',['TVMSetStream',['../c__runtime__api_8h.html#ac414ed248ddb1bfb561685bba3de5e89',1,'c_runtime_api.h']]],
diff --git a/docs/api/doxygen/search/functions_7.js b/docs/api/doxygen/search/functions_7.js
index 206ee54..673650a 100644
--- a/docs/api/doxygen/search/functions_7.js
+++ b/docs/api/doxygen/search/functions_7.js
@@ -57,6 +57,7 @@ var searchData=
   ['getrealaxis',['GetRealAxis',['../namespacetvm_1_1topi.html#aa45cdc15f72e867eff29c74b2dffd185',1,'tvm::topi']]],
   ['getref',['GetRef',['../namespacetvm_1_1runtime.html#aa4a97de4fefd23aa5942c6a545544a05',1,'tvm::runtime::GetRef(const ObjectType *ptr)'],['../namespacetvm_1_1runtime.html#af63300957592e8991c18c54703123ef7',1,'tvm::runtime::GetRef(const ObjType *ptr)']]],
   ['getreprbytes',['GetReprBytes',['../classtvm_1_1ReflectionVTable.html#acc577dacd480beaee8f905bab8d2029c',1,'tvm::ReflectionVTable']]],
+  ['getrpcsessionindex',['GetRPCSessionIndex',['../namespacetvm_1_1runtime.html#a9ac54b0d7a3e3c22fd0ddef0a731cfd5',1,'tvm::runtime']]],
   ['getruntimedatatype',['GetRuntimeDataType',['../namespacetvm.html#a0447e9aa45f6cab707f6dc9f9281b3f5',1,'tvm']]],
   ['getshape',['GetShape',['../classtvm_1_1te_1_1TensorNode.html#a35df267780880731a051773b2dca7bbd',1,'tvm::te::TensorNode::GetShape()'],['../classtvm_1_1tir_1_1DataProducerNode.html#aa44d3146c9f543538a32dfd954512bf7',1,'tvm::tir::DataProducerNode::GetShape()']]],
   ['getsigntype',['GetSignType',['../classtvm_1_1arith_1_1IntSet.html#a651c7689d1da9b8f931312ffd6954dbd',1,'tvm::arith::IntSet']]],
diff --git a/docs/api/doxygen/search/functions_9.js b/docs/api/doxygen/search/functions_9.js
index e0cdc66..38d175e 100644
--- a/docs/api/doxygen/search/functions_9.js
+++ b/docs/api/doxygen/search/functions_9.js
@@ -89,6 +89,7 @@ var searchData=
   ['ispragmakey',['IsPragmaKey',['../namespacetvm_1_1tir_1_1attr.html#a385e883a7cecc309d063786e5fdf2c4b',1,'tvm::tir::attr']]],
   ['isprimal',['IsPrimal',['../classtvm_1_1tir_1_1LayoutAxis.html#a13e11bef75e29b71977779124f72e1b9',1,'tvm::tir::LayoutAxis']]],
   ['isprimitiveop',['IsPrimitiveOp',['../classtvm_1_1OpNode.html#a285c8dc0ccec2ca34386271d1b338506',1,'tvm::OpNode::IsPrimitiveOp()'],['../namespacetvm.html#a8259e23409eda017c6bde908e050b670',1,'tvm::IsPrimitiveOp()']]],
+  ['isrpcsessioncontext',['IsRPCSessionContext',['../namespacetvm_1_1runtime.html#af2a8f6198750ead46feeb72ef4f9de4c',1,'tvm::runtime']]],
   ['issimpleaccess',['IsSimpleAccess',['../classtvm_1_1auto__scheduler_1_1AccessAnalyzer.html#a9a059e6b4d9a04a700c9bf5ed72db7d1',1,'tvm::auto_scheduler::AccessAnalyzer']]],
   ['issinglepoint',['IsSinglePoint',['../classtvm_1_1arith_1_1IntSet.html#a7422ed5fde1738b2930af58666f9a946',1,'tvm::arith::IntSet']]],
   ['isstrictlyinlineable',['IsStrictlyInlineable',['../classtvm_1_1auto__scheduler_1_1AccessAnalyzer.html#a77b7f4c645c1d5f1bbf64417d718a3ce',1,'tvm::auto_scheduler::AccessAnalyzer']]],
diff --git a/docs/api/doxygen/search/functions_f.js b/docs/api/doxygen/search/functions_f.js
index 70d253a..07e33d1 100644
--- a/docs/api/doxygen/search/functions_f.js
+++ b/docs/api/doxygen/search/functions_f.js
@@ -43,7 +43,7 @@ var searchData=
   ['operator_2f',['operator/',['../namespacetvm.html#a18256ba1213ce5ff3cf8037a314354b7',1,'tvm::operator/(PrimExpr a, PrimExpr b)'],['../namespacetvm.html#a136427374941fbf8e50f53b1cab39e38',1,'tvm::operator/(const PrimExpr &amp;a, const TB &amp;b)']]],
   ['operator_2f_3d',['operator/=',['../namespacetvm.html#a51dc569142bf8ce8ea55f73029d3807d',1,'tvm']]],
   ['operator_3c',['operator&lt;',['../classtvm_1_1runtime_1_1ObjectRef.html#a17626209947c4a2f302422be451661c5',1,'tvm::runtime::ObjectRef::operator&lt;()'],['../namespacetvm_1_1runtime.html#a2865dffa2fddf5eff9d7ed397563ebd6',1,'tvm::runtime::operator&lt;(const String &amp;lhs, const std::string &amp;rhs)'],['../namespacetvm_1_1runtime.html#ad5305faaeefd679da62186dab423bdab',1,'tvm::runtime::operator&lt;(const std::string &amp;lhs, const String &amp;rhs)'],['../namespacetvm_1_1runtime.htm [...]
-  ['operator_3c_3c',['operator&lt;&lt;',['../classtvm_1_1DiagnosticBuilder.html#aa92a3f9039d464fbefaed90b0e255e84',1,'tvm::DiagnosticBuilder::operator&lt;&lt;()'],['../structtvm_1_1ErrorBuilder.html#ad40b754d2d8992b65d0bc5b116bd3f71',1,'tvm::ErrorBuilder::operator&lt;&lt;()'],['../namespacetvm_1_1runtime.html#af22b89284299c81d0c1802199af446d7',1,'tvm::runtime::operator&lt;&lt;(std::ostream &amp;os, const ObjectRef &amp;n)'],['../namespacetvm_1_1runtime.html#a2c20920d4a09a6c022768b353ec8d [...]
+  ['operator_3c_3c',['operator&lt;&lt;',['../classtvm_1_1DiagnosticBuilder.html#aa92a3f9039d464fbefaed90b0e255e84',1,'tvm::DiagnosticBuilder::operator&lt;&lt;()'],['../structtvm_1_1ErrorBuilder.html#ad40b754d2d8992b65d0bc5b116bd3f71',1,'tvm::ErrorBuilder::operator&lt;&lt;()'],['../namespacetvm_1_1runtime.html#af22b89284299c81d0c1802199af446d7',1,'tvm::runtime::operator&lt;&lt;(std::ostream &amp;os, const ObjectRef &amp;n)'],['../namespacetvm_1_1runtime.html#a2c20920d4a09a6c022768b353ec8d [...]
   ['operator_3c_3d',['operator&lt;=',['../namespacetvm_1_1runtime.html#a92428efae022d4982b2644f8960d4386',1,'tvm::runtime::operator&lt;=(const String &amp;lhs, const std::string &amp;rhs)'],['../namespacetvm_1_1runtime.html#a8daf39dc422f228fae2ec11a426bab28',1,'tvm::runtime::operator&lt;=(const std::string &amp;lhs, const String &amp;rhs)'],['../namespacetvm_1_1runtime.html#a9cf2e7e67fd12d69c5bce2be881c8296',1,'tvm::runtime::operator&lt;=(const String &amp;lhs, const String &amp;rhs)'],[ [...]
   ['operator_3d',['operator=',['../classtvm_1_1arith_1_1Analyzer.html#a9dccc7d98b8b9465390e10436d3a9178',1,'tvm::arith::Analyzer::operator=()'],['../classtvm_1_1TypedEnvFunc_3_01R_07Args_8_8_8_08_4.html#aab332907b9f98876f441f6403b801187',1,'tvm::TypedEnvFunc&lt; R(Args...)&gt;::operator=()'],['../classtvm_1_1Integer.html#ad538a2ae6f636b3ce38fb4162b1c2549',1,'tvm::Integer::operator=()'],['../classtvm_1_1Map.html#acf11920f50d9a6283cd2f1ed9985bfca',1,'tvm::Map::operator=(Map&lt; K, V &gt; & [...]
   ['operator_3d_3d',['operator==',['../classtvm_1_1Integer.html#ad71119ff42add763e11fafe4f4194f6a',1,'tvm::Integer::operator==(int other) const '],['../classtvm_1_1Integer.html#ae395bcab691d3bdd0fc5978fe5addea3',1,'tvm::Integer::operator==(Enum other) const '],['../classtvm_1_1MapNode_1_1iterator.html#a127d6726a9fd54602523f711725c4c3f',1,'tvm::MapNode::iterator::operator==()'],['../classtvm_1_1Map_1_1iterator.html#a1e2c5fa12e3cb81b3e5e3b999e6da4ec',1,'tvm::Map::iterator::operator==()'],[ [...]
diff --git a/docs/api/python/auto_scheduler.html b/docs/api/python/auto_scheduler.html
index 5f39b03..609aae5 100644
--- a/docs/api/python/auto_scheduler.html
+++ b/docs/api/python/auto_scheduler.html
@@ -520,7 +520,7 @@ Can be a function or the function name.</p></li>
 
 <dl class="py function">
 <dt id="tvm.auto_scheduler.auto_schedule">
-<code class="sig-prename descclassname">tvm.auto_scheduler.</code><code class="sig-name descname">auto_schedule</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">task</span></em>, <em class="sig-param"><span class="n">search_policy</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">tuning_options</span><span class="o">=</span><span class="default_value">auto_scheduler.TuningOptions(34914256)</span></ [...]
+<code class="sig-prename descclassname">tvm.auto_scheduler.</code><code class="sig-name descname">auto_schedule</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">task</span></em>, <em class="sig-param"><span class="n">search_policy</span><span class="o">=</span><span class="default_value">None</span></em>, <em class="sig-param"><span class="n">tuning_options</span><span class="o">=</span><span class="default_value">auto_scheduler.TuningOptions(30792640)</span></ [...]
 <dd><p>Run auto scheduling search for a task</p>
 <dl class="field-list simple">
 <dt class="field-odd">Parameters</dt>
@@ -1381,7 +1381,7 @@ the initial naive schedule (state).</p>
 
 <dl class="py class">
 <dt id="tvm.auto_scheduler.SketchPolicy">
-<em class="property">class </em><code class="sig-prename descclassname">tvm.auto_scheduler.</code><code class="sig-name descname">SketchPolicy</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">task</span></em>, <em class="sig-param"><span class="n">program_cost_model</span><span class="o">=</span><span class="default_value">auto_scheduler.RandomModel(34911208)</span></em>, <em class="sig-param"><span class="n">params</span><span class="o">=</span><span class="de [...]
+<em class="property">class </em><code class="sig-prename descclassname">tvm.auto_scheduler.</code><code class="sig-name descname">SketchPolicy</code><span class="sig-paren">(</span><em class="sig-param"><span class="n">task</span></em>, <em class="sig-param"><span class="n">program_cost_model</span><span class="o">=</span><span class="default_value">auto_scheduler.RandomModel(30857048)</span></em>, <em class="sig-param"><span class="n">params</span><span class="o">=</span><span class="de [...]
 <dd><p>The search policy that searches in a hierarchical search space defined by sketches.
The policy randomly samples programs from the space defined by sketches and uses evolutionary
 search to fine-tune them.</p>
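
Note on the auto_scheduler.html changes above: the edited default_value strings (e.g. auto_scheduler.TuningOptions(34914256) -> auto_scheduler.TuningOptions(30792640)) appear to change only because the repr of the freshly constructed default object differs between doc builds; the documented API itself is unchanged. As a minimal sketch of how the auto_schedule entry point and SketchPolicy described on that page are typically driven from Python at the time of this build -- the workload, shapes, trial count, and log file name below are illustrative placeholders, not values taken from this commit:

    import tvm
    from tvm import te, auto_scheduler

    # Hypothetical workload used only to illustrate the API; any function
    # registered this way and returning te tensors works the same.
    @auto_scheduler.register_workload
    def matmul(N, L, M):
        A = te.placeholder((N, L), name="A")
        B = te.placeholder((L, M), name="B")
        k = te.reduce_axis((0, L), name="k")
        C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
        return [A, B, C]

    target = tvm.target.Target("llvm")
    task = auto_scheduler.create_task(matmul, (128, 128, 128), target)

    # Pass explicit TuningOptions instead of the default instance shown in the signature.
    tune_option = auto_scheduler.TuningOptions(
        num_measure_trials=10,
        measure_callbacks=[auto_scheduler.RecordToFile("matmul.json")],
    )

    # auto_schedule constructs a SketchPolicy internally when search_policy is None:
    # it samples programs from the sketch-defined space and fine-tunes them with
    # evolutionary search, as the docstring above describes.
    sch, args = auto_scheduler.auto_schedule(task, tuning_options=tune_option)
    print(tvm.lower(sch, args, simple_mode=True))
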
diff --git a/docs/api/rust/compiler_ext/fn.tvm_export.html b/docs/api/rust/compiler_ext/fn.tvm_export.html
index 0b49fcd..4a80ad4 100644
--- a/docs/api/rust/compiler_ext/fn.tvm_export.html
+++ b/docs/api/rust/compiler_ext/fn.tvm_export.html
@@ -1,2 +1,2 @@
 <!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><meta name="viewport" content="width=device-width, initial-scale=1.0"><meta name="generator" content="rustdoc"><meta name="description" content="API documentation for the Rust `tvm_export` fn in crate `compiler_ext`."><meta name="keywords" content="rust, rustlang, rust-lang, tvm_export"><title>compiler_ext::tvm_export - Rust</title><link rel="stylesheet" type="text/css" href="../normalize.css"><link rel="stylesheet" type="text/cs [...]
-                <a id="settings-menu" href="../settings.html"><img src="../wheel.svg" width="18" alt="Change settings"></a></div></form></nav><section id="main" class="content"><h1 class='fqn'><span class='out-of-band'><span id='render-detail'><a id="toggle-all-docs" href="javascript:void(0)" title="collapse all docs">[<span class='inner'>&#x2212;</span>]</a></span><a class='srclink' href='../src/tvm/lib.rs.html#54-60' title='goto source code'>[src]</a></span><span class='in-band'>Functi [...]
\ No newline at end of file
+                <a id="settings-menu" href="../settings.html"><img src="../wheel.svg" width="18" alt="Change settings"></a></div></form></nav><section id="main" class="content"><h1 class='fqn'><span class='out-of-band'><span id='render-detail'><a id="toggle-all-docs" href="javascript:void(0)" title="collapse all docs">[<span class='inner'>&#x2212;</span>]</a></span></span><span class='in-band'>Function <a href='index.html'>compiler_ext</a>::<wbr><a class="fn" href=''>tvm_export</a></span [...]
\ No newline at end of file
diff --git a/docs/api/typedoc/classes/bytestreamreader.html b/docs/api/typedoc/classes/bytestreamreader.html
index 63c5d36..68a18a8 100644
--- a/docs/api/typedoc/classes/bytestreamreader.html
+++ b/docs/api/typedoc/classes/bytestreamreader.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L43">rpc_server.ts:43</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L43">rpc_server.ts:43</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -141,7 +141,7 @@
 					<div class="tsd-signature tsd-kind-icon">bytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Uint8Array</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L43">rpc_server.ts:43</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L43">rpc_server.ts:43</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -151,7 +151,7 @@
 					<div class="tsd-signature tsd-kind-icon">offset<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L42">rpc_server.ts:42</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L42">rpc_server.ts:42</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -168,7 +168,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L63">rpc_server.ts:63</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L63">rpc_server.ts:63</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">Uint8Array</span></h4>
@@ -185,7 +185,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L49">rpc_server.ts:49</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L49">rpc_server.ts:49</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -202,7 +202,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L57">rpc_server.ts:57</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L57">rpc_server.ts:57</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
diff --git a/docs/api/typedoc/classes/cachedcallstack.html b/docs/api/typedoc/classes/cachedcallstack.html
index 70377fa..c1cc4c0 100644
--- a/docs/api/typedoc/classes/cachedcallstack.html
+++ b/docs/api/typedoc/classes/cachedcallstack.html
@@ -144,7 +144,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L223">memory.ts:223</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L223">memory.ts:223</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -172,7 +172,7 @@
 					<div class="tsd-signature tsd-kind-icon">temp<wbr>Args<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><a href="../interfaces/disposable.html" class="tsd-signature-type">Disposable</a><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = []</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L208">memory.ts:208</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L208">memory.ts:208</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -194,7 +194,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L312">memory.ts:312</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L312">memory.ts:312</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -226,7 +226,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L284">memory.ts:284</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L284">memory.ts:284</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -262,7 +262,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L388">memory.ts:388</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L388">memory.ts:388</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -300,7 +300,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L376">memory.ts:376</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L376">memory.ts:376</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -340,7 +340,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L267">memory.ts:267</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L267">memory.ts:267</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -373,7 +373,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L243">memory.ts:243</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L243">memory.ts:243</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -390,7 +390,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L321">memory.ts:321</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L321">memory.ts:321</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -422,7 +422,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L252">memory.ts:252</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L252">memory.ts:252</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -444,7 +444,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L359">memory.ts:359</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L359">memory.ts:359</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -470,7 +470,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L342">memory.ts:342</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L342">memory.ts:342</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -496,7 +496,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L350">memory.ts:350</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L350">memory.ts:350</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -522,7 +522,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L326">memory.ts:326</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L326">memory.ts:326</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -548,7 +548,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L363">memory.ts:363</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L363">memory.ts:363</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -574,7 +574,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L346">memory.ts:346</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L346">memory.ts:346</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -600,7 +600,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L334">memory.ts:334</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L334">memory.ts:334</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
diff --git a/docs/api/typedoc/classes/dlcontext.html b/docs/api/typedoc/classes/dlcontext.html
index 694ed74..262d44f 100644
--- a/docs/api/typedoc/classes/dlcontext.html
+++ b/docs/api/typedoc/classes/dlcontext.html
@@ -118,7 +118,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L203">runtime.ts:203</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L203">runtime.ts:203</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -146,7 +146,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<wbr>Id<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L201">runtime.ts:201</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L201">runtime.ts:201</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -161,7 +161,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L199">runtime.ts:199</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L199">runtime.ts:199</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -183,7 +183,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L224">runtime.ts:224</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L224">runtime.ts:224</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -205,7 +205,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L231">runtime.ts:231</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L231">runtime.ts:231</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">string</span></h4>
diff --git a/docs/api/typedoc/classes/dldatatype.html b/docs/api/typedoc/classes/dldatatype.html
index 5aee49c..9d44de3 100644
--- a/docs/api/typedoc/classes/dldatatype.html
+++ b/docs/api/typedoc/classes/dldatatype.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L263">runtime.ts:263</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L263">runtime.ts:263</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -147,7 +147,7 @@
 					<div class="tsd-signature tsd-kind-icon">bits<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L261">runtime.ts:261</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L261">runtime.ts:261</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -162,7 +162,7 @@
 					<div class="tsd-signature tsd-kind-icon">code<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L259">runtime.ts:259</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L259">runtime.ts:259</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -177,7 +177,7 @@
 					<div class="tsd-signature tsd-kind-icon">lanes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L263">runtime.ts:263</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L263">runtime.ts:263</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -199,7 +199,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L280">runtime.ts:280</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L280">runtime.ts:280</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -216,7 +216,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L271">runtime.ts:271</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L271">runtime.ts:271</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">string</span></h4>
diff --git a/docs/api/typedoc/classes/environment.html b/docs/api/typedoc/classes/environment.html
index 5424753..d839741 100644
--- a/docs/api/typedoc/classes/environment.html
+++ b/docs/api/typedoc/classes/environment.html
@@ -125,7 +125,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/environment.ts#L86">environment.ts:86</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/environment.ts#L86">environment.ts:86</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -169,7 +169,7 @@
 					<aside class="tsd-sources">
 						<p>Implementation of <a href="../interfaces/libraryprovider.html">LibraryProvider</a>.<a href="../interfaces/libraryprovider.html#imports">imports</a></p>
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/environment.ts#L70">environment.ts:70</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/environment.ts#L70">environment.ts:70</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -179,7 +179,7 @@
 					<div class="tsd-signature tsd-kind-icon">logger<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>msg<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/environment.ts#L69">environment.ts:69</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/environment.ts#L69">environment.ts:69</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -210,7 +210,7 @@
 					<div class="tsd-signature tsd-kind-icon">packedCFunc<wbr>Table<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">ctypes.FTVMWasmPackedCFunc</span><span class="tsd-signature-symbol"> | </span><span class="tsd-signature-type">undefined</span><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = [undefined,]</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/environment.ts#L78">environment.ts:78</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/environment.ts#L78">environment.ts:78</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -228,7 +228,7 @@
 					<div class="tsd-signature tsd-kind-icon">packedCFunc<wbr>Table<wbr>Free<wbr>Id<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = []</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/environment.ts#L84">environment.ts:84</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/environment.ts#L84">environment.ts:84</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -250,7 +250,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/environment.ts#L105">environment.ts:105</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/environment.ts#L105">environment.ts:105</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/api/typedoc/classes/ffilibrary.html b/docs/api/typedoc/classes/ffilibrary.html
index 5a4211c..f05f3c6 100644
--- a/docs/api/typedoc/classes/ffilibrary.html
+++ b/docs/api/typedoc/classes/ffilibrary.html
@@ -131,7 +131,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L49">runtime.ts:49</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L49">runtime.ts:49</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -156,7 +156,7 @@
 					<div class="tsd-signature tsd-kind-icon">exports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">Function</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L46">runtime.ts:46</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L46">runtime.ts:46</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -166,7 +166,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L45">runtime.ts:45</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L45">runtime.ts:45</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">wasm32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">boolean</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L44">runtime.ts:44</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L44">runtime.ts:44</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -186,7 +186,7 @@
 					<div class="tsd-signature tsd-kind-icon">webGPUContext<span class="tsd-signature-symbol">:</span> <a href="webgpucontext.html" class="tsd-signature-type">WebGPUContext</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L47">runtime.ts:47</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L47">runtime.ts:47</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -203,7 +203,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L76">runtime.ts:76</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L76">runtime.ts:76</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -226,7 +226,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L66">runtime.ts:66</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L66">runtime.ts:66</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -243,7 +243,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L84">runtime.ts:84</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L84">runtime.ts:84</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <a href="cachedcallstack.html" class="tsd-signature-type">CachedCallStack</a></h4>
@@ -260,7 +260,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L95">runtime.ts:95</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L95">runtime.ts:95</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -283,7 +283,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L72">runtime.ts:72</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L72">runtime.ts:72</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
diff --git a/docs/api/typedoc/classes/graphruntime.html b/docs/api/typedoc/classes/graphruntime.html
index 29dc1f4..363162a 100644
--- a/docs/api/typedoc/classes/graphruntime.html
+++ b/docs/api/typedoc/classes/graphruntime.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L584">runtime.ts:584</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L584">runtime.ts:584</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -162,7 +162,7 @@
 					<div class="tsd-signature tsd-kind-icon">module<span class="tsd-signature-symbol">:</span> <a href="module.html" class="tsd-signature-type">Module</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L580">runtime.ts:580</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L580">runtime.ts:580</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -179,7 +179,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L655">runtime.ts:655</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L655">runtime.ts:655</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -224,7 +224,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L598">runtime.ts:598</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L598">runtime.ts:598</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -241,7 +241,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L632">runtime.ts:632</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L632">runtime.ts:632</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -279,7 +279,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L645">runtime.ts:645</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L645">runtime.ts:645</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -310,7 +310,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L622">runtime.ts:622</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L622">runtime.ts:622</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -332,7 +332,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L610">runtime.ts:610</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L610">runtime.ts:610</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/api/typedoc/classes/instance.html b/docs/api/typedoc/classes/instance.html
index 2293b77..e1c46ac 100644
--- a/docs/api/typedoc/classes/instance.html
+++ b/docs/api/typedoc/classes/instance.html
@@ -139,7 +139,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L693">runtime.ts:693</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L693">runtime.ts:693</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -202,7 +202,7 @@
 					<div class="tsd-signature tsd-kind-icon">exports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">Function</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L685">runtime.ts:685</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L685">runtime.ts:685</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -212,7 +212,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L684">runtime.ts:684</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L684">runtime.ts:684</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -229,7 +229,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L925">runtime.ts:925</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L925">runtime.ts:925</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -267,7 +267,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L933">runtime.ts:933</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L933">runtime.ts:933</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -298,7 +298,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L995">runtime.ts:995</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L995">runtime.ts:995</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -341,7 +341,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L733">runtime.ts:733</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L733">runtime.ts:733</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -358,7 +358,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L953">runtime.ts:953</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L953">runtime.ts:953</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -402,7 +402,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L817">runtime.ts:817</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L817">runtime.ts:817</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -434,7 +434,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L1038">runtime.ts:1038</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L1038">runtime.ts:1038</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -465,7 +465,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L847">runtime.ts:847</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L847">runtime.ts:847</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -497,7 +497,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L751">runtime.ts:751</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L751">runtime.ts:751</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -520,7 +520,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L1018">runtime.ts:1018</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L1018">runtime.ts:1018</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -568,7 +568,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L790">runtime.ts:790</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L790">runtime.ts:790</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -608,7 +608,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L915">runtime.ts:915</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L915">runtime.ts:915</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -646,7 +646,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L1139">runtime.ts:1139</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L1139">runtime.ts:1139</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -698,7 +698,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L741">runtime.ts:741</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L741">runtime.ts:741</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -722,7 +722,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L869">runtime.ts:869</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L869">runtime.ts:869</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -754,7 +754,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L858">runtime.ts:858</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L858">runtime.ts:858</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -786,7 +786,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L941">runtime.ts:941</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L941">runtime.ts:941</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/api/typedoc/classes/memory.html b/docs/api/typedoc/classes/memory.html
index 2a9f32a..44e8ddd 100644
--- a/docs/api/typedoc/classes/memory.html
+++ b/docs/api/typedoc/classes/memory.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L40">memory.ts:40</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L40">memory.ts:40</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -152,7 +152,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Memory</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L32">memory.ts:32</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L32">memory.ts:32</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -162,7 +162,7 @@
 					<div class="tsd-signature tsd-kind-icon">wasm32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">boolean</span><span class="tsd-signature-symbol"> = true</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L33">memory.ts:33</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L33">memory.ts:33</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -179,7 +179,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L154">memory.ts:154</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L154">memory.ts:154</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -210,7 +210,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L90">memory.ts:90</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L90">memory.ts:90</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -233,7 +233,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L97">memory.ts:97</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L97">memory.ts:97</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -256,7 +256,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L74">memory.ts:74</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L74">memory.ts:74</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -279,7 +279,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L81">memory.ts:81</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L81">memory.ts:81</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -302,7 +302,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L104">memory.ts:104</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L104">memory.ts:104</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -325,7 +325,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L132">memory.ts:132</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L132">memory.ts:132</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -362,7 +362,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L145">memory.ts:145</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L145">memory.ts:145</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -393,7 +393,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L60">memory.ts:60</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L60">memory.ts:60</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -416,7 +416,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L67">memory.ts:67</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L67">memory.ts:67</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -439,7 +439,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L53">memory.ts:53</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L53">memory.ts:53</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -462,7 +462,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L114">memory.ts:114</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L114">memory.ts:114</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -485,7 +485,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L124">memory.ts:124</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L124">memory.ts:124</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -502,7 +502,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/memory.ts#L175">memory.ts:175</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/memory.ts#L175">memory.ts:175</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/api/typedoc/classes/module.html b/docs/api/typedoc/classes/module.html
index 577d75b..9e7aaff 100644
--- a/docs/api/typedoc/classes/module.html
+++ b/docs/api/typedoc/classes/module.html
@@ -124,7 +124,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L505">runtime.ts:505</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L505">runtime.ts:505</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -170,7 +170,7 @@
 					<div class="tsd-signature tsd-kind-icon">handle<span class="tsd-signature-symbol">:</span> <a href="../index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L503">runtime.ts:503</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L503">runtime.ts:503</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -187,7 +187,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L517">runtime.ts:517</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L517">runtime.ts:517</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -204,7 +204,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L531">runtime.ts:531</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L531">runtime.ts:531</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -236,7 +236,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L562">runtime.ts:562</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L562">runtime.ts:562</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/api/typedoc/classes/ndarray.html b/docs/api/typedoc/classes/ndarray.html
index 7d8ee7a..772843c 100644
--- a/docs/api/typedoc/classes/ndarray.html
+++ b/docs/api/typedoc/classes/ndarray.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L305">runtime.ts:305</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L305">runtime.ts:305</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -158,7 +158,7 @@
 					<div class="tsd-signature tsd-kind-icon">context<span class="tsd-signature-symbol">:</span> <a href="dlcontext.html" class="tsd-signature-type">DLContext</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L298">runtime.ts:298</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L298">runtime.ts:298</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -173,7 +173,7 @@
 					<div class="tsd-signature tsd-kind-icon">dtype<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L294">runtime.ts:294</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L294">runtime.ts:294</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -188,7 +188,7 @@
 					<div class="tsd-signature tsd-kind-icon">handle<span class="tsd-signature-symbol">:</span> <a href="../index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L290">runtime.ts:290</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L290">runtime.ts:290</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -203,7 +203,7 @@
 					<div class="tsd-signature tsd-kind-icon">ndim<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L292">runtime.ts:292</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L292">runtime.ts:292</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -218,7 +218,7 @@
 					<div class="tsd-signature tsd-kind-icon">shape<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L296">runtime.ts:296</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L296">runtime.ts:296</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -240,7 +240,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L371">runtime.ts:371</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L371">runtime.ts:371</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -273,7 +273,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L415">runtime.ts:415</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L415">runtime.ts:415</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -305,7 +305,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L356">runtime.ts:356</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L356">runtime.ts:356</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -322,7 +322,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L475">runtime.ts:475</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L475">runtime.ts:475</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -346,7 +346,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L444">runtime.ts:444</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L444">runtime.ts:444</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/api/typedoc/classes/packedfunccell.html b/docs/api/typedoc/classes/packedfunccell.html
index 453c797..d2db889 100644
--- a/docs/api/typedoc/classes/packedfunccell.html
+++ b/docs/api/typedoc/classes/packedfunccell.html
@@ -122,7 +122,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L158">runtime.ts:158</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L158">runtime.ts:158</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -147,7 +147,7 @@
 					<div class="tsd-signature tsd-kind-icon">handle<span class="tsd-signature-symbol">:</span> <a href="../index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L157">runtime.ts:157</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L157">runtime.ts:157</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -164,7 +164,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L165">runtime.ts:165</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L165">runtime.ts:165</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
diff --git a/docs/api/typedoc/classes/rpcserver.html b/docs/api/typedoc/classes/rpcserver.html
index c94abce..5f79040 100644
--- a/docs/api/typedoc/classes/rpcserver.html
+++ b/docs/api/typedoc/classes/rpcserver.html
@@ -115,7 +115,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L92">rpc_server.ts:92</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L92">rpc_server.ts:92</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">get<wbr>Imports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">unknown</span><span class="tsd-signat [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L82">rpc_server.ts:82</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L82">rpc_server.ts:82</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -201,7 +201,7 @@
 					<div class="tsd-signature tsd-kind-icon">key<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L78">rpc_server.ts:78</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L78">rpc_server.ts:78</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -211,7 +211,7 @@
 					<div class="tsd-signature tsd-kind-icon">logger<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>msg<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L81">rpc_server.ts:81</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L81">rpc_server.ts:81</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -242,7 +242,7 @@
 					<div class="tsd-signature tsd-kind-icon">socket<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">WebSocket</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L79">rpc_server.ts:79</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L79">rpc_server.ts:79</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -252,7 +252,7 @@
 					<div class="tsd-signature tsd-kind-icon">state<span class="tsd-signature-symbol">:</span> <a href="../enums/rpcserverstate.html" class="tsd-signature-type">RPCServerState</a><span class="tsd-signature-symbol"> = RPCServerState.InitHeader</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L80">rpc_server.ts:80</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L80">rpc_server.ts:80</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -262,7 +262,7 @@
 					<div class="tsd-signature tsd-kind-icon">url<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L77">rpc_server.ts:77</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L77">rpc_server.ts:77</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/api/typedoc/classes/scalar.html b/docs/api/typedoc/classes/scalar.html
index 8a57d1a..f1f548e 100644
--- a/docs/api/typedoc/classes/scalar.html
+++ b/docs/api/typedoc/classes/scalar.html
@@ -112,7 +112,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L145">runtime.ts:145</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L145">runtime.ts:145</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -137,7 +137,7 @@
 					<div class="tsd-signature tsd-kind-icon">dtype<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L145">runtime.ts:145</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L145">runtime.ts:145</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -152,7 +152,7 @@
 					<div class="tsd-signature tsd-kind-icon">value<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L143">runtime.ts:143</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L143">runtime.ts:143</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/api/typedoc/classes/webgpucontext.html b/docs/api/typedoc/classes/webgpucontext.html
index c0decf2..a7c26f4 100644
--- a/docs/api/typedoc/classes/webgpucontext.html
+++ b/docs/api/typedoc/classes/webgpucontext.html
@@ -120,7 +120,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/webgpu.ts#L57">webgpu.ts:57</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/webgpu.ts#L57">webgpu.ts:57</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -145,7 +145,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">GPUDevice</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/webgpu.ts#L50">webgpu.ts:50</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/webgpu.ts#L50">webgpu.ts:50</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -155,7 +155,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/webgpu.ts#L51">webgpu.ts:51</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/webgpu.ts#L51">webgpu.ts:51</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -172,7 +172,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/webgpu.ts#L84">webgpu.ts:84</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/webgpu.ts#L84">webgpu.ts:84</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -209,7 +209,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/webgpu.ts#L170">webgpu.ts:170</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/webgpu.ts#L170">webgpu.ts:170</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -238,7 +238,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/webgpu.ts#L67">webgpu.ts:67</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/webgpu.ts#L67">webgpu.ts:67</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/api/typedoc/enums/argtypecode.html b/docs/api/typedoc/enums/argtypecode.html
index a45c6df..7c1e9b9 100644
--- a/docs/api/typedoc/enums/argtypecode.html
+++ b/docs/api/typedoc/enums/argtypecode.html
@@ -106,7 +106,7 @@
 					<div class="tsd-signature tsd-kind-icon">Float<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L216">ctypes.ts:216</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L216">ctypes.ts:216</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -116,7 +116,7 @@
 					<div class="tsd-signature tsd-kind-icon">Int<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L214">ctypes.ts:214</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L214">ctypes.ts:214</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -126,7 +126,7 @@
 					<div class="tsd-signature tsd-kind-icon">Null<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L218">ctypes.ts:218</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L218">ctypes.ts:218</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -136,7 +136,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMBytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 12</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L226">ctypes.ts:226</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L226">ctypes.ts:226</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -146,7 +146,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMContext<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 6</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L220">ctypes.ts:220</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L220">ctypes.ts:220</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -156,7 +156,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMDLTensor<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 7</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L221">ctypes.ts:221</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L221">ctypes.ts:221</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -166,7 +166,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMData<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 5</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L219">ctypes.ts:219</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L219">ctypes.ts:219</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMModule<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 9</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L223">ctypes.ts:223</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L223">ctypes.ts:223</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -186,7 +186,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMNDArray<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 13</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L227">ctypes.ts:227</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L227">ctypes.ts:227</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -196,7 +196,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMObject<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L222">ctypes.ts:222</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L222">ctypes.ts:222</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -206,7 +206,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMObjectRValue<wbr>Ref<wbr>Arg<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 14</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L228">ctypes.ts:228</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L228">ctypes.ts:228</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -216,7 +216,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMOpaque<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 3</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L217">ctypes.ts:217</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L217">ctypes.ts:217</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -226,7 +226,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMPacked<wbr>Func<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 10</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L224">ctypes.ts:224</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L224">ctypes.ts:224</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -236,7 +236,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 11</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L225">ctypes.ts:225</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L225">ctypes.ts:225</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -246,7 +246,7 @@
 					<div class="tsd-signature tsd-kind-icon">UInt<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L215">ctypes.ts:215</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L215">ctypes.ts:215</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/api/typedoc/enums/aynccallbackcode.html b/docs/api/typedoc/enums/aynccallbackcode.html
index fc3b20a..4b261bd 100644
--- a/docs/api/typedoc/enums/aynccallbackcode.html
+++ b/docs/api/typedoc/enums/aynccallbackcode.html
@@ -93,7 +93,7 @@
 					<div class="tsd-signature tsd-kind-icon">k<wbr>Exception<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 5</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L677">runtime.ts:677</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L677">runtime.ts:677</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -103,7 +103,7 @@
 					<div class="tsd-signature tsd-kind-icon">k<wbr>Return<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L676">runtime.ts:676</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L676">runtime.ts:676</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/api/typedoc/enums/dldatatypecode.html b/docs/api/typedoc/enums/dldatatypecode.html
index cbce0a1..6907480 100644
--- a/docs/api/typedoc/enums/dldatatypecode.html
+++ b/docs/api/typedoc/enums/dldatatypecode.html
@@ -95,7 +95,7 @@
 					<div class="tsd-signature tsd-kind-icon">Float<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L243">runtime.ts:243</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L243">runtime.ts:243</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -105,7 +105,7 @@
 					<div class="tsd-signature tsd-kind-icon">Int<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L241">runtime.ts:241</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L241">runtime.ts:241</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -115,7 +115,7 @@
 					<div class="tsd-signature tsd-kind-icon">Opaque<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 3</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L244">runtime.ts:244</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L244">runtime.ts:244</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -125,7 +125,7 @@
 					<div class="tsd-signature tsd-kind-icon">UInt<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L242">runtime.ts:242</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L242">runtime.ts:242</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/api/typedoc/enums/rpcserverstate.html b/docs/api/typedoc/enums/rpcserverstate.html
index a2b68e2..07646a9 100644
--- a/docs/api/typedoc/enums/rpcserverstate.html
+++ b/docs/api/typedoc/enums/rpcserverstate.html
@@ -90,7 +90,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Header<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L27">rpc_server.ts:27</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L27">rpc_server.ts:27</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -100,7 +100,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Header<wbr>Key<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L28">rpc_server.ts:28</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L28">rpc_server.ts:28</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -110,7 +110,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Server<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L29">rpc_server.ts:29</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L29">rpc_server.ts:29</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -120,7 +120,7 @@
 					<div class="tsd-signature tsd-kind-icon">Receive<wbr>Packet<wbr>Body<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L32">rpc_server.ts:32</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L32">rpc_server.ts:32</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -130,7 +130,7 @@
 					<div class="tsd-signature tsd-kind-icon">Receive<wbr>Packet<wbr>Header<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L31">rpc_server.ts:31</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L31">rpc_server.ts:31</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -140,7 +140,7 @@
 					<div class="tsd-signature tsd-kind-icon">Wait<wbr>For<wbr>Callback<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L30">rpc_server.ts:30</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L30">rpc_server.ts:30</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/api/typedoc/enums/sizeof.html b/docs/api/typedoc/enums/sizeof.html
index 963ac5b..eaa872b 100644
--- a/docs/api/typedoc/enums/sizeof.html
+++ b/docs/api/typedoc/enums/sizeof.html
@@ -100,7 +100,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLContext<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = I32 + I32</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L207">ctypes.ts:207</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L207">ctypes.ts:207</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -110,7 +110,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLData<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = I32</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L206">ctypes.ts:206</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L206">ctypes.ts:206</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -120,7 +120,7 @@
 					<div class="tsd-signature tsd-kind-icon">F32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L203">ctypes.ts:203</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L203">ctypes.ts:203</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -130,7 +130,7 @@
 					<div class="tsd-signature tsd-kind-icon">F64<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L204">ctypes.ts:204</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L204">ctypes.ts:204</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -140,7 +140,7 @@
 					<div class="tsd-signature tsd-kind-icon">I32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L201">ctypes.ts:201</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L201">ctypes.ts:201</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -150,7 +150,7 @@
 					<div class="tsd-signature tsd-kind-icon">I64<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L202">ctypes.ts:202</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L202">ctypes.ts:202</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -160,7 +160,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMValue<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L205">ctypes.ts:205</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L205">ctypes.ts:205</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -170,7 +170,7 @@
 					<div class="tsd-signature tsd-kind-icon">U16<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L200">ctypes.ts:200</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L200">ctypes.ts:200</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -180,7 +180,7 @@
 					<div class="tsd-signature tsd-kind-icon">U8<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L199">ctypes.ts:199</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L199">ctypes.ts:199</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/api/typedoc/index.html b/docs/api/typedoc/index.html
index 2f35b11..3a5e93c 100644
--- a/docs/api/typedoc/index.html
+++ b/docs/api/typedoc/index.html
@@ -174,7 +174,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Alloc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>shape<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, ndim<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, dtypeCode<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, dtypeBits<span class="tsd [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L112">ctypes.ts:112</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L112">ctypes.ts:112</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -238,7 +238,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>From<wbr>Bytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, data<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nbytes<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">num [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L128">ctypes.ts:128</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L128">ctypes.ts:128</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -282,7 +282,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>From<wbr>To<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>from<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, to<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, stream<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-sig [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L144">ctypes.ts:144</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L144">ctypes.ts:144</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -326,7 +326,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>ToBytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, data<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nbytes<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</sp [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L136">ctypes.ts:136</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L136">ctypes.ts:136</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -370,7 +370,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L121">ctypes.ts:121</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L121">ctypes.ts:121</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -406,7 +406,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMBackend<wbr>PackedCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>argValues<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, argCodes<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nargs<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number< [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L160">ctypes.ts:160</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L160">ctypes.ts:160</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -458,7 +458,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMCFunc<wbr>Set<wbr>Return<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>ret<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, value<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCode<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signa [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L77">ctypes.ts:77</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L77">ctypes.ts:77</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -506,7 +506,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMCb<wbr>Arg<wbr>ToReturn<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>value<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, code<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span c [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L83">ctypes.ts:83</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L83">ctypes.ts:83</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -545,7 +545,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Call<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>func<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, argValues<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCode<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-t [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L67">ctypes.ts:67</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L67">ctypes.ts:67</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -601,7 +601,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>func<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L57">ctypes.ts:57</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L57">ctypes.ts:57</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -637,7 +637,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Get<wbr>Global<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>name<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span cla [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L100">ctypes.ts:100</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L100">ctypes.ts:100</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -676,7 +676,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>List<wbr>Global<wbr>Names<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>outSize<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, outArray<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&g [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L88">ctypes.ts:88</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L88">ctypes.ts:88</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -715,7 +715,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Register<wbr>Global<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>name<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, f<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, override<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</spa [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L94">ctypes.ts:94</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L94">ctypes.ts:94</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -758,7 +758,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMGet<wbr>Last<wbr>Error<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L34">ctypes.ts:34</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L34">ctypes.ts:34</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -788,7 +788,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L52">ctypes.ts:52</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L52">ctypes.ts:52</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -824,7 +824,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Get<wbr>Function<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, funcName<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, queryImports<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">numbe [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L42">ctypes.ts:42</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L42">ctypes.ts:42</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -872,7 +872,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Import<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, dep<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-si [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L48">ctypes.ts:48</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L48">ctypes.ts:48</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -912,7 +912,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMSynchronize<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>deviceType<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, deviceId<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, stream<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signatur [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L150">ctypes.ts:150</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L150">ctypes.ts:150</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -954,7 +954,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Alloc<wbr>Space<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>size<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L167">ctypes.ts:167</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L167">ctypes.ts:167</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -990,7 +990,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Free<wbr>Space<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>ptr<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L170">ctypes.ts:170</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L170">ctypes.ts:170</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1026,7 +1026,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Func<wbr>Create<wbr>FromCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>resource<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&g [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L187">ctypes.ts:187</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L187">ctypes.ts:187</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1066,7 +1066,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>PackedCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>args<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCodes<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nargs<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L179">ctypes.ts:179</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L179">ctypes.ts:179</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1118,7 +1118,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>PackedCFunc<wbr>Finalizer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>resourceHandle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L193">ctypes.ts:193</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L193">ctypes.ts:193</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1154,7 +1154,7 @@
 					<div class="tsd-signature tsd-kind-icon">GPUPointer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/webgpu.ts#L25">webgpu.ts:25</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/webgpu.ts#L25">webgpu.ts:25</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1169,7 +1169,7 @@
 					<div class="tsd-signature tsd-kind-icon">Packed<wbr>Func<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">...</span>args<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol"> &amp; </span><a href="interfaces/disp [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L36">runtime.ts:36</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L36">runtime.ts:36</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1184,7 +1184,7 @@
 					<div class="tsd-signature tsd-kind-icon">Pointer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L25">ctypes.ts:25</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L25">ctypes.ts:25</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1199,7 +1199,7 @@
 					<div class="tsd-signature tsd-kind-icon">Ptr<wbr>Offset<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/ctypes.ts#L28">ctypes.ts:28</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/ctypes.ts#L28">ctypes.ts:28</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1217,7 +1217,7 @@
 					<div class="tsd-signature tsd-kind-icon">RPC_<wbr>MAGIC<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">1045105</span><span class="tsd-signature-symbol"> = 1045105</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/rpc_server.ts#L36">rpc_server.ts:36</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/rpc_server.ts#L36">rpc_server.ts:36</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1239,7 +1239,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/support.ts#L25">support.ts:25</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/support.ts#L25">support.ts:25</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1271,7 +1271,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/support.ts#L39">support.ts:39</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/support.ts#L39">support.ts:39</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1300,7 +1300,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/support.ts#L52">support.ts:52</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/support.ts#L52">support.ts:52</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1337,7 +1337,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/compact.ts#L38">compact.ts:38</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/compact.ts#L38">compact.ts:38</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1368,7 +1368,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/webgpu.ts#L30">webgpu.ts:30</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/webgpu.ts#L30">webgpu.ts:30</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1390,7 +1390,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/environment.ts#L32">environment.ts:32</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/environment.ts#L32">environment.ts:32</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1421,7 +1421,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/compact.ts#L24">compact.ts:24</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/compact.ts#L24">compact.ts:24</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1443,7 +1443,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L1361">runtime.ts:1361</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L1361">runtime.ts:1361</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1508,7 +1508,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/support.ts#L62">support.ts:62</a></li>
+									<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/support.ts#L62">support.ts:62</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1530,7 +1530,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLData<wbr>Type<wbr>Code<wbr>ToStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L247">runtime.ts:247</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L247">runtime.ts:247</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1539,7 +1539,7 @@
 						<div class="tsd-signature tsd-kind-icon">0<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;int&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L248">runtime.ts:248</a></li>
+								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L248">runtime.ts:248</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1549,7 +1549,7 @@
 						<div class="tsd-signature tsd-kind-icon">1<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;uint&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L249">runtime.ts:249</a></li>
+								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L249">runtime.ts:249</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1559,7 +1559,7 @@
 						<div class="tsd-signature tsd-kind-icon">2<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;float&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L250">runtime.ts:250</a></li>
+								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L250">runtime.ts:250</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1569,7 +1569,7 @@
 						<div class="tsd-signature tsd-kind-icon">3<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;handle&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L251">runtime.ts:251</a></li>
+								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L251">runtime.ts:251</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1580,7 +1580,7 @@
 					<div class="tsd-signature tsd-kind-icon">Device<wbr>Enum<wbr>ToStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L175">runtime.ts:175</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L175">runtime.ts:175</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1589,7 +1589,7 @@
 						<div class="tsd-signature tsd-kind-icon">1<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;cpu&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L176">runtime.ts:176</a></li>
+								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L176">runtime.ts:176</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1599,7 +1599,7 @@
 						<div class="tsd-signature tsd-kind-icon">15<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;webgpu&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L180">runtime.ts:180</a></li>
+								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L180">runtime.ts:180</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1609,7 +1609,7 @@
 						<div class="tsd-signature tsd-kind-icon">2<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;gpu&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L177">runtime.ts:177</a></li>
+								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L177">runtime.ts:177</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1619,7 +1619,7 @@
 						<div class="tsd-signature tsd-kind-icon">4<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;opencl&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L178">runtime.ts:178</a></li>
+								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L178">runtime.ts:178</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1629,7 +1629,7 @@
 						<div class="tsd-signature tsd-kind-icon">8<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;metal&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L179">runtime.ts:179</a></li>
+								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L179">runtime.ts:179</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1640,7 +1640,7 @@
 					<div class="tsd-signature tsd-kind-icon">Device<wbr>Str<wbr>ToEnum<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L183">runtime.ts:183</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L183">runtime.ts:183</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1649,7 +1649,7 @@
 						<div class="tsd-signature tsd-kind-icon">cl<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 4</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L187">runtime.ts:187</a></li>
+								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L187">runtime.ts:187</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1659,7 +1659,7 @@
 						<div class="tsd-signature tsd-kind-icon">cpu<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 1</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L184">runtime.ts:184</a></li>
+								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L184">runtime.ts:184</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1669,7 +1669,7 @@
 						<div class="tsd-signature tsd-kind-icon">cuda<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 2</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L186">runtime.ts:186</a></li>
+								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L186">runtime.ts:186</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1679,7 +1679,7 @@
 						<div class="tsd-signature tsd-kind-icon">gpu<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 2</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L185">runtime.ts:185</a></li>
+								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L185">runtime.ts:185</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1689,7 +1689,7 @@
 						<div class="tsd-signature tsd-kind-icon">metal<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 8</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L190">runtime.ts:190</a></li>
+								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L190">runtime.ts:190</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1699,7 +1699,7 @@
 						<div class="tsd-signature tsd-kind-icon">opencl<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 4</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L188">runtime.ts:188</a></li>
+								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L188">runtime.ts:188</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1709,7 +1709,7 @@
 						<div class="tsd-signature tsd-kind-icon">vulkan<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 7</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L189">runtime.ts:189</a></li>
+								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L189">runtime.ts:189</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1719,7 +1719,7 @@
 						<div class="tsd-signature tsd-kind-icon">webgpu<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 15</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/runtime.ts#L191">runtime.ts:191</a></li>
+								<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/runtime.ts#L191">runtime.ts:191</a></li>
 							</ul>
 						</aside>
 					</section>
diff --git a/docs/api/typedoc/interfaces/disposable.html b/docs/api/typedoc/interfaces/disposable.html
index bbe288f..61c2619 100644
--- a/docs/api/typedoc/interfaces/disposable.html
+++ b/docs/api/typedoc/interfaces/disposable.html
@@ -113,7 +113,7 @@
 					<div class="tsd-signature tsd-kind-icon">dispose<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/types.ts#L52">types.ts:52</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/types.ts#L52">types.ts:52</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/api/typedoc/interfaces/functioninfo.html b/docs/api/typedoc/interfaces/functioninfo.html
index bd076f5..012605a 100644
--- a/docs/api/typedoc/interfaces/functioninfo.html
+++ b/docs/api/typedoc/interfaces/functioninfo.html
@@ -95,7 +95,7 @@
 					<div class="tsd-signature tsd-kind-icon">arg_<wbr>types<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/webgpu.ts#L41">webgpu.ts:41</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/webgpu.ts#L41">webgpu.ts:41</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -105,7 +105,7 @@
 					<div class="tsd-signature tsd-kind-icon">name<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/webgpu.ts#L40">webgpu.ts:40</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/webgpu.ts#L40">webgpu.ts:40</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -115,7 +115,7 @@
 					<div class="tsd-signature tsd-kind-icon">thread_<wbr>axis_<wbr>tags<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/webgpu.ts#L42">webgpu.ts:42</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/webgpu.ts#L42">webgpu.ts:42</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/api/typedoc/interfaces/libraryprovider.html b/docs/api/typedoc/interfaces/libraryprovider.html
index f692e29..bc0a4ce 100644
--- a/docs/api/typedoc/interfaces/libraryprovider.html
+++ b/docs/api/typedoc/interfaces/libraryprovider.html
@@ -112,7 +112,7 @@
 					<div class="tsd-signature tsd-kind-icon">imports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/types.ts#L34">types.ts:34</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/types.ts#L34">types.ts:34</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -127,7 +127,7 @@
 					<div class="tsd-signature tsd-kind-icon">start<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>inst<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">Instance</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/050a836/web/src/types.ts#L39">types.ts:39</a></li>
+							<li>Defined in <a href="https://github.com/apache/incubator-tvm/blob/3950639/web/src/types.ts#L39">types.ts:39</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/index.html b/docs/index.html
index 8777437..7a2018f 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -310,7 +310,7 @@
 <h2>Get Started<a class="headerlink" href="#get-started" title="Permalink to this headline">¶</a></h2>
 <ul class="simple">
 <li><p>Follow the <a class="reference internal" href="install/index.html"><span class="doc">instructions</span></a> to install TVM.</p></li>
-<li><p>Checkout the <a class="reference internal" href="tutorials/index.html"><span class="doc">Tutorials</span></a>.</p></li>
+<li><p>Checkout the <a class="reference internal" href="tutorials/index.html"><span class="doc">tutorials</span></a>.</p></li>
 </ul>
 </div>
 <div class="section" id="for-developers">
diff --git a/docs/objects.inv b/docs/objects.inv
index 3a89b6a..1916d98 100644
Binary files a/docs/objects.inv and b/docs/objects.inv differ
diff --git a/docs/searchindex.js b/docs/searchindex.js
index f016899..d141363 100644
--- a/docs/searchindex.js
+++ b/docs/searchindex.js
@@ -1 +1 @@
-Search.setIndex({docnames:["api/links","api/python/auto_scheduler","api/python/autotvm","api/python/contrib","api/python/driver","api/python/error","api/python/graph_runtime","api/python/index","api/python/ir","api/python/micro","api/python/ndarray","api/python/relay/analysis","api/python/relay/backend","api/python/relay/dataflow_pattern","api/python/relay/frontend","api/python/relay/image","api/python/relay/index","api/python/relay/nn","api/python/relay/testing","api/python/relay/transf [...]
\ No newline at end of file
+Search.setIndex({docnames:["api/links","api/python/auto_scheduler","api/python/autotvm","api/python/contrib","api/python/driver","api/python/error","api/python/graph_runtime","api/python/index","api/python/ir","api/python/micro","api/python/ndarray","api/python/relay/analysis","api/python/relay/backend","api/python/relay/dataflow_pattern","api/python/relay/frontend","api/python/relay/image","api/python/relay/index","api/python/relay/nn","api/python/relay/testing","api/python/relay/transf [...]
\ No newline at end of file
diff --git a/docs/tutorials/auto_scheduler/sg_execution_times.html b/docs/tutorials/auto_scheduler/sg_execution_times.html
index 3c66fd7..e0b9124 100644
--- a/docs/tutorials/auto_scheduler/sg_execution_times.html
+++ b/docs/tutorials/auto_scheduler/sg_execution_times.html
@@ -304,11 +304,11 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-tutorials-auto-scheduler-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>05:05.432</strong> total execution time for <strong>tutorials_auto_scheduler</strong> files:</p>
+<p><strong>05:00.161</strong> total execution time for <strong>tutorials_auto_scheduler</strong> files:</p>
 <ul class="simple">
-<li><p><strong>02:46.397</strong>: <a class="reference internal" href="tune_conv2d_layer_cuda.html#sphx-glr-tutorials-auto-scheduler-tune-conv2d-layer-cuda-py"><span class="std std-ref">Auto-scheduling a convolution layer for GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_layer_cuda.py</span></code>)</p></li>
-<li><p><strong>01:54.000</strong>: <a class="reference internal" href="tune_matmul_x86.html#sphx-glr-tutorials-auto-scheduler-tune-matmul-x86-py"><span class="std std-ref">Auto-scheduling matrix multiplication for CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_matmul_x86.py</span></code>)</p></li>
-<li><p><strong>00:25.034</strong>: <a class="reference internal" href="tune_network_cuda.html#sphx-glr-tutorials-auto-scheduler-tune-network-cuda-py"><span class="std std-ref">Auto-tuning a Neural Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_cuda.py</span></code>)</p></li>
+<li><p><strong>02:42.357</strong>: <a class="reference internal" href="tune_conv2d_layer_cuda.html#sphx-glr-tutorials-auto-scheduler-tune-conv2d-layer-cuda-py"><span class="std std-ref">Auto-scheduling a Convolution Layer for GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_layer_cuda.py</span></code>)</p></li>
+<li><p><strong>01:53.558</strong>: <a class="reference internal" href="tune_matmul_x86.html#sphx-glr-tutorials-auto-scheduler-tune-matmul-x86-py"><span class="std std-ref">Auto-scheduling Matrix Multiplication for CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_matmul_x86.py</span></code>)</p></li>
+<li><p><strong>00:24.246</strong>: <a class="reference internal" href="tune_network_cuda.html#sphx-glr-tutorials-auto-scheduler-tune-network-cuda-py"><span class="std std-ref">Auto-scheduling a Neural Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_cuda.py</span></code>)</p></li>
 </ul>
 </div>
 
diff --git a/docs/tutorials/auto_scheduler/tune_conv2d_layer_cuda.html b/docs/tutorials/auto_scheduler/tune_conv2d_layer_cuda.html
index c372b1b..963ecfc 100644
--- a/docs/tutorials/auto_scheduler/tune_conv2d_layer_cuda.html
+++ b/docs/tutorials/auto_scheduler/tune_conv2d_layer_cuda.html
@@ -11,7 +11,7 @@
   
   <meta name="viewport" content="width=device-width, initial-scale=1.0">
   
-  <title>Auto-scheduling a convolution layer for GPU &mdash; tvm 0.8.dev0 documentation</title>
+  <title>Auto-scheduling a Convolution Layer for GPU &mdash; tvm 0.8.dev0 documentation</title>
   
 
   
@@ -44,8 +44,8 @@
     <script type="text/javascript" src="../../_static/js/tlcpack_theme.js"></script>
     <link rel="index" title="Index" href="../../genindex.html" />
     <link rel="search" title="Search" href="../../search.html" />
-    <link rel="next" title="Auto-tuning a Neural Network for NVIDIA GPU" href="tune_network_cuda.html" />
-    <link rel="prev" title="Auto-scheduling matrix multiplication for CPU" href="tune_matmul_x86.html" /> 
+    <link rel="next" title="Auto-scheduling a Neural Network for NVIDIA GPU" href="tune_network_cuda.html" />
+    <link rel="prev" title="Auto-scheduling Matrix Multiplication for CPU" href="tune_matmul_x86.html" /> 
 </head>
 
 <body class="wy-body-for-nav">
@@ -208,8 +208,8 @@
 <li class="toctree-l1"><a class="reference internal" href="../index.html#optimize-tensor-operators">Optimize Tensor Operators</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../index.html#autotvm-template-based-auto-tuning">AutoTVM : Template-based Auto Tuning</a></li>
 <li class="toctree-l1 current"><a class="reference internal" href="../index.html#autoscheduler-template-free-auto-scheduling">AutoScheduler : Template-free Auto Scheduling</a><ul class="current">
-<li class="toctree-l2"><a class="reference internal" href="tune_matmul_x86.html">Auto-scheduling matrix multiplication for CPU</a></li>
-<li class="toctree-l2 current"><a class="current reference internal" href="#">Auto-scheduling a convolution layer for GPU</a><ul>
+<li class="toctree-l2"><a class="reference internal" href="tune_matmul_x86.html">Auto-scheduling Matrix Multiplication for CPU</a></li>
+<li class="toctree-l2 current"><a class="current reference internal" href="#">Auto-scheduling a Convolution Layer for GPU</a><ul>
 <li class="toctree-l3"><a class="reference internal" href="#define-the-computation">Define the computation</a></li>
 <li class="toctree-l3"><a class="reference internal" href="#create-the-search-task">Create the search task</a></li>
 <li class="toctree-l3"><a class="reference internal" href="#run-the-search">Run the search</a></li>
@@ -217,7 +217,7 @@
 <li class="toctree-l3"><a class="reference internal" href="#using-the-record-file">Using the record file</a></li>
 </ul>
 </li>
-<li class="toctree-l2"><a class="reference internal" href="tune_network_cuda.html">Auto-tuning a Neural Network for NVIDIA GPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_network_cuda.html">Auto-scheduling a Neural Network for NVIDIA GPU</a></li>
 </ul>
 </li>
 <li class="toctree-l1"><a class="reference internal" href="../index.html#developer-tutorials">Developer Tutorials</a></li>
@@ -299,7 +299,7 @@
         
           <li><a href="../index.html">Get Started Tutorials</a> <span class="br-arrow">></span></li>
         
-      <li>Auto-scheduling a convolution layer for GPU</li>
+      <li>Auto-scheduling a Convolution Layer for GPU</li>
     
     
       <li class="wy-breadcrumbs-aside">
@@ -323,8 +323,9 @@
 <p>Click <a class="reference internal" href="#sphx-glr-download-tutorials-auto-scheduler-tune-conv2d-layer-cuda-py"><span class="std std-ref">here</span></a> to download the full example code</p>
 </div>
 <div class="sphx-glr-example-title section" id="auto-scheduling-a-convolution-layer-for-gpu">
-<span id="auto-scheduler-conv-gpu"></span><span id="sphx-glr-tutorials-auto-scheduler-tune-conv2d-layer-cuda-py"></span><h1>Auto-scheduling a convolution layer for GPU<a class="headerlink" href="#auto-scheduling-a-convolution-layer-for-gpu" title="Permalink to this headline">¶</a></h1>
+<span id="auto-scheduler-conv-gpu"></span><span id="sphx-glr-tutorials-auto-scheduler-tune-conv2d-layer-cuda-py"></span><h1>Auto-scheduling a Convolution Layer for GPU<a class="headerlink" href="#auto-scheduling-a-convolution-layer-for-gpu" title="Permalink to this headline">¶</a></h1>
 <p><strong>Author</strong>: <a class="reference external" href="https://github.com/merrymercy">Lianmin Zheng</a>,             <a class="reference external" href="https://github.com/jcf94/">Chengfan Jia</a></p>
+<p>This is a tutorial on how to use the auto-scheduler for GPUs.</p>
 <p>Different from the template-based <a class="reference internal" href="../index.html#tutorials-autotvm-sec"><span class="std std-ref">autotvm</span></a> which relies on
 manual templates to define the search space, the auto-scheduler does not require any templates.
 Users only need to write the computation declaration without any schedule commands or templates.
@@ -405,6 +406,7 @@ and do more analyses later.</p></li>
     <span class="n">num_measure_trials</span><span class="o">=</span><span class="mi">10</span><span class="p">,</span>  <span class="c1"># change this to 1000 to achieve the best performance</span>
     <span class="n">runner</span><span class="o">=</span><span class="n">measure_ctx</span><span class="o">.</span><span class="n">runner</span><span class="p">,</span>
     <span class="n">measure_callbacks</span><span class="o">=</span><span class="p">[</span><a href="../../api/python/auto_scheduler.html#tvm.auto_scheduler.RecordToFile" title="View documentation for tvm.auto_scheduler.RecordToFile"><span class="n">auto_scheduler</span><span class="o">.</span><span class="n">RecordToFile</span></a><span class="p">(</span><span class="n">log_file</span><span class="p">)],</span>
+    <span class="n">verbose</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span>
 <span class="p">)</span>
 </pre></div>
 </div>
@@ -442,92 +444,58 @@ cooperative fetching, unrolling and operator fusion.</p>
              kernel: Buffer(kernel_2: Pointer(float32), float32, [512, 512, 3, 3], []),
              data: Buffer(data_2: Pointer(float32), float32, [1, 512, 7, 7], [])}
   buffer_map = {data_1: data, kernel_1: kernel, bias_1: bias, compute_1: compute} {
-  attr [IterVar(blockIdx.x: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;blockIdx.x&quot;)] &quot;thread_extent&quot; = 224;
+  attr [IterVar(blockIdx.x: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;blockIdx.x&quot;)] &quot;thread_extent&quot; = 16;
   attr [compute_3: Pointer(float32)] &quot;storage_scope&quot; = &quot;local&quot;;
-  allocate(compute_3, float32, [2]);
+  allocate(compute_3, float32, [14]);
   attr [pad_temp.shared: Pointer(float32)] &quot;storage_scope&quot; = &quot;shared&quot;;
-  allocate(pad_temp.shared, float32, [72]);
+  allocate(pad_temp.shared, float32, [81]);
   attr [kernel.shared: Pointer(float32)] &quot;storage_scope&quot; = &quot;shared&quot;;
-  allocate(kernel.shared, float32, [384]);
-  attr [IterVar(threadIdx.x: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 56 {
-    compute_3[0] = 0f32
-    compute_3[1] = 0f32
-    for (rc.outer.outer: int32, 0, 64) {
-      for (rx.outer.outer: int32, 0, 3) {
-        attr [IterVar(threadIdx.x_1: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 56;
-        pad_temp.shared[threadIdx.x_1] = @tir.if_then_else(((((1 &lt;= floormod(threadIdx.x_1, 9)) &amp;&amp; (floormod(threadIdx.x_1, 9) &lt; 8)) &amp;&amp; (1 &lt;= (rx.outer.outer + floormod(blockIdx.x, 7)))) &amp;&amp; ((rx.outer.outer + floormod(blockIdx.x, 7)) &lt; 8)), (float32*)data_2[((((((rc.outer.outer*392) + (floordiv(threadIdx.x_1, 9)*49)) + (floormod(threadIdx.x_1, 9)*7)) + rx.outer.outer) + floormod(blockIdx.x, 7)) - 8)], 0f32, dtype=float32)
-        attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 56;
-        if @tir.likely((threadIdx.x_1 &lt; 16), dtype=bool) {
-          pad_temp.shared[(threadIdx.x_1 + 56)] = @tir.if_then_else(((((1 &lt;= floormod((threadIdx.x_1 + 2), 9)) &amp;&amp; (floormod((threadIdx.x_1 + 2), 9) &lt; 8)) &amp;&amp; (1 &lt;= (rx.outer.outer + floormod(blockIdx.x, 7)))) &amp;&amp; ((rx.outer.outer + floormod(blockIdx.x, 7)) &lt; 8)), (float32*)data_2[((((((rc.outer.outer*392) + (floordiv((threadIdx.x_1 + 56), 9)*49)) + (floormod((threadIdx.x_1 + 2), 9)*7)) + rx.outer.outer) + floormod(blockIdx.x, 7)) - 8)], 0f32, dtype=float32)
+  allocate(kernel.shared, float32, [288]);
+  attr [IterVar(threadIdx.x: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 112 {
+    for (ff.inner.init: int32, 0, 2) {
+      compute_3[ff.inner.init] = 0f32
+      compute_3[(ff.inner.init + 2)] = 0f32
+      compute_3[(ff.inner.init + 4)] = 0f32
+      compute_3[(ff.inner.init + 6)] = 0f32
+      compute_3[(ff.inner.init + 8)] = 0f32
+      compute_3[(ff.inner.init + 10)] = 0f32
+      compute_3[(ff.inner.init + 12)] = 0f32
+    }
+    for (rc.outer.outer: int32, 0, 512) {
+      attr [IterVar(threadIdx.x_1: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 112;
+      if @tir.likely((threadIdx.x_1 &lt; 9), dtype=bool) {
+        for (ax0.ax1.fused.ax2.fused.ax3.fused.inner.s: int32, 0, 9) {
+          pad_temp.shared[((threadIdx.x_1*9) + ax0.ax1.fused.ax2.fused.ax3.fused.inner.s)] = @tir.if_then_else(((((1 &lt;= threadIdx.x_1) &amp;&amp; (threadIdx.x_1 &lt; 8)) &amp;&amp; (1 &lt;= ax0.ax1.fused.ax2.fused.ax3.fused.inner.s)) &amp;&amp; (ax0.ax1.fused.ax2.fused.ax3.fused.inner.s &lt; 8)), (float32*)data_2[((((rc.outer.outer*49) + (threadIdx.x_1*7)) + ax0.ax1.fused.ax2.fused.ax3.fused.inner.s) - 8)], 0f32, dtype=float32)
+        }
+      }
+      for (ax0.ax1.fused.ax2.fused.ax3.fused.outer.outer: int32, 0, 3) {
+        attr [IterVar(threadIdx.x_2: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 112;
+        if @tir.likely((((ax0.ax1.fused.ax2.fused.ax3.fused.outer.outer*112) + threadIdx.x_2) &lt; 288), dtype=bool) {
+          kernel.shared[((ax0.ax1.fused.ax2.fused.ax3.fused.outer.outer*112) + threadIdx.x_2)] = (float32*)kernel_2[((((blockIdx.x*147456) + (floordiv(((ax0.ax1.fused.ax2.fused.ax3.fused.outer.outer*112) + threadIdx.x_2), 9)*4608)) + (rc.outer.outer*9)) + floormod(((ax0.ax1.fused.ax2.fused.ax3.fused.outer.outer*112) + threadIdx.x_2), 9))]
         }
-        attr [IterVar(threadIdx.x_2: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 56;
-        kernel.shared[threadIdx.x_2] = (float32*)kernel_2[(((((floordiv(blockIdx.x, 7)*73728) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floormod(threadIdx.x_2, 24)*3)) + rx.outer.outer)]
-        attr [IterVar(threadIdx.x_2, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 56;
-        kernel.shared[(threadIdx.x_2 + 56)] = (float32*)kernel_2[(((((floordiv(blockIdx.x, 7)*73728) + (floordiv((threadIdx.x_2 + 56), 24)*4608)) + (rc.outer.outer*72)) + (floormod((threadIdx.x_2 + 8), 24)*3)) + rx.outer.outer)]
-        attr [IterVar(threadIdx.x_2, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 56;
-        kernel.shared[(threadIdx.x_2 + 112)] = (float32*)kernel_2[(((((floordiv(blockIdx.x, 7)*73728) + (floordiv((threadIdx.x_2 + 112), 24)*4608)) + (rc.outer.outer*72)) + (floormod((threadIdx.x_2 + 16), 24)*3)) + rx.outer.outer)]
-        attr [IterVar(threadIdx.x_2, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 56;
-        kernel.shared[(threadIdx.x_2 + 168)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*73728) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floormod(threadIdx.x_2, 24)*3)) + rx.outer.outer) + 32256)]
-        attr [IterVar(threadIdx.x_2, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 56;
-        kernel.shared[(threadIdx.x_2 + 224)] = (float32*)kernel_2[(((((floordiv(blockIdx.x, 7)*73728) + (floordiv((threadIdx.x_2 + 224), 24)*4608)) + (rc.outer.outer*72)) + (floormod((threadIdx.x_2 + 8), 24)*3)) + rx.outer.outer)]
-        attr [IterVar(threadIdx.x_2, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 56;
-        kernel.shared[(threadIdx.x_2 + 280)] = (float32*)kernel_2[(((((floordiv(blockIdx.x, 7)*73728) + (floordiv((threadIdx.x_2 + 280), 24)*4608)) + (rc.outer.outer*72)) + (floormod((threadIdx.x_2 + 16), 24)*3)) + rx.outer.outer)]
-        attr [IterVar(threadIdx.x_2, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 56;
-        if @tir.likely((threadIdx.x_2 &lt; 48), dtype=bool) {
-          kernel.shared[(threadIdx.x_2 + 336)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*73728) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floormod(threadIdx.x_2, 24)*3)) + rx.outer.outer) + 64512)]
+      }
+      for (ry.outer.inner: int32, 0, 3) {
+        for (rx.inner: int32, 0, 3) {
+          for (ff.inner: int32, 0, 2) {
+            compute_3[ff.inner] = ((float32*)compute_3[ff.inner] + ((float32*)pad_temp.shared[(((ry.outer.inner*9) + (floormod(threadIdx.x, 7)*9)) + rx.inner)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*18) + (ff.inner*9)) + (ry.outer.inner*3)) + rx.inner)]))
+            compute_3[(ff.inner + 2)] = ((float32*)compute_3[(ff.inner + 2)] + ((float32*)pad_temp.shared[((((ry.outer.inner*9) + (floormod(threadIdx.x, 7)*9)) + rx.inner) + 1)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*18) + (ff.inner*9)) + (ry.outer.inner*3)) + rx.inner)]))
+            compute_3[(ff.inner + 4)] = ((float32*)compute_3[(ff.inner + 4)] + ((float32*)pad_temp.shared[((((ry.outer.inner*9) + (floormod(threadIdx.x, 7)*9)) + rx.inner) + 2)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*18) + (ff.inner*9)) + (ry.outer.inner*3)) + rx.inner)]))
+            compute_3[(ff.inner + 6)] = ((float32*)compute_3[(ff.inner + 6)] + ((float32*)pad_temp.shared[((((ry.outer.inner*9) + (floormod(threadIdx.x, 7)*9)) + rx.inner) + 3)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*18) + (ff.inner*9)) + (ry.outer.inner*3)) + rx.inner)]))
+            compute_3[(ff.inner + 8)] = ((float32*)compute_3[(ff.inner + 8)] + ((float32*)pad_temp.shared[((((ry.outer.inner*9) + (floormod(threadIdx.x, 7)*9)) + rx.inner) + 4)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*18) + (ff.inner*9)) + (ry.outer.inner*3)) + rx.inner)]))
+            compute_3[(ff.inner + 10)] = ((float32*)compute_3[(ff.inner + 10)] + ((float32*)pad_temp.shared[((((ry.outer.inner*9) + (floormod(threadIdx.x, 7)*9)) + rx.inner) + 5)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*18) + (ff.inner*9)) + (ry.outer.inner*3)) + rx.inner)]))
+            compute_3[(ff.inner + 12)] = ((float32*)compute_3[(ff.inner + 12)] + ((float32*)pad_temp.shared[((((ry.outer.inner*9) + (floormod(threadIdx.x, 7)*9)) + rx.inner) + 6)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*18) + (ff.inner*9)) + (ry.outer.inner*3)) + rx.inner)]))
+          }
         }
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[floormod(threadIdx.x, 7)]*(float32*)kernel.shared[(floordiv(threadIdx.x, 7)*48)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 9)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 3)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[floormod(threadIdx.x, 7)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 24)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 9)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 27)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 1)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 1)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 10)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 4)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 1)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 25)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 10)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 28)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 2)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 2)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 11)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 5)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 2)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 26)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 11)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 29)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 18)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 6)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 27)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 9)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 18)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 30)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 27)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 33)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 19)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 7)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 28)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 10)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 19)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 31)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 28)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 34)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 20)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 8)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 29)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 11)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 20)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 32)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 29)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 35)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 36)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 12)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 45)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 15)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 36)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 36)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 45)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 39)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 37)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 13)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 46)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 16)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 37)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 37)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 46)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 40)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 38)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 14)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 47)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 17)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 38)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 38)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 47)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 41)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 54)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 18)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 63)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 21)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 54)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 42)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 63)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 45)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 55)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 19)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 64)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 22)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 55)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 43)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 64)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 46)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 56)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 20)]))
-        compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 65)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 23)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 56)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 44)]))
-        compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(floormod(threadIdx.x, 7) + 65)]*(float32*)kernel.shared[((floordiv(threadIdx.x, 7)*48) + 47)]))
       }
     }
     for (i1.inner: int32, 0, 2) {
-      compute_2[(((((floordiv(blockIdx.x, 7)*784) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(threadIdx.x, 7)*7)) + floormod(blockIdx.x, 7))] = max(((float32*)compute_3[i1.inner] + (float32*)bias_2[(((floordiv(blockIdx.x, 7)*16) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
+      compute_2[((((blockIdx.x*1568) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(threadIdx.x, 7)*7))] = max(((float32*)compute_3[i1.inner] + (float32*)bias_2[(((blockIdx.x*32) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
+      compute_2[(((((blockIdx.x*1568) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(threadIdx.x, 7)*7)) + 1)] = max(((float32*)compute_3[(i1.inner + 2)] + (float32*)bias_2[(((blockIdx.x*32) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
+      compute_2[(((((blockIdx.x*1568) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(threadIdx.x, 7)*7)) + 2)] = max(((float32*)compute_3[(i1.inner + 4)] + (float32*)bias_2[(((blockIdx.x*32) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
+      compute_2[(((((blockIdx.x*1568) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(threadIdx.x, 7)*7)) + 3)] = max(((float32*)compute_3[(i1.inner + 6)] + (float32*)bias_2[(((blockIdx.x*32) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
+      compute_2[(((((blockIdx.x*1568) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(threadIdx.x, 7)*7)) + 4)] = max(((float32*)compute_3[(i1.inner + 8)] + (float32*)bias_2[(((blockIdx.x*32) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
+      compute_2[(((((blockIdx.x*1568) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(threadIdx.x, 7)*7)) + 5)] = max(((float32*)compute_3[(i1.inner + 10)] + (float32*)bias_2[(((blockIdx.x*32) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
+      compute_2[(((((blockIdx.x*1568) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(threadIdx.x, 7)*7)) + 6)] = max(((float32*)compute_3[(i1.inner + 12)] + (float32*)bias_2[(((blockIdx.x*32) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
     }
   }
 }
@@ -565,7 +533,7 @@ cooperative fetching, unrolling and operator fusion.</p>
 </pre></div>
 </div>
 <p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre>Execution time of this operator: 0.364 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre>Execution time of this operator: 0.394 ms
 </pre></div>
 </div>
 </div>
@@ -601,9 +569,9 @@ compute_nn_o_i, compute_nn_i = s[compute].split(compute_nn, factor=1)
 compute_nn_o_o_i, compute_nn_o_i = s[compute].split(compute_nn_o_i, factor=1)
 compute_nn_o_o_o_i, compute_nn_o_o_i = s[compute].split(compute_nn_o_o_i, factor=1)
 compute_nn_o_o_o_o, compute_nn_o_o_o_i = s[compute].split(compute_nn_o_o_o_i, factor=1)
-compute_ff_o_i, compute_ff_i = s[compute].split(compute_ff, factor=1)
-compute_ff_o_o_i, compute_ff_o_i = s[compute].split(compute_ff_o_i, factor=2)
-compute_ff_o_o_o_i, compute_ff_o_o_i = s[compute].split(compute_ff_o_o_i, factor=8)
+compute_ff_o_i, compute_ff_i = s[compute].split(compute_ff, factor=2)
+compute_ff_o_o_i, compute_ff_o_i = s[compute].split(compute_ff_o_i, factor=1)
+compute_ff_o_o_o_i, compute_ff_o_o_i = s[compute].split(compute_ff_o_o_i, factor=16)
 compute_ff_o_o_o_o, compute_ff_o_o_o_i = s[compute].split(compute_ff_o_o_o_i, factor=1)
 compute_yy_o_i, compute_yy_i = s[compute].split(compute_yy, factor=1)
 compute_yy_o_o_i, compute_yy_o_i = s[compute].split(compute_yy_o_i, factor=1)
@@ -612,26 +580,26 @@ compute_yy_o_o_o_o, compute_yy_o_o_o_i = s[compute].split(compute_yy_o_o_o_i, fa
 compute_xx_o_i, compute_xx_i = s[compute].split(compute_xx, factor=1)
 compute_xx_o_o_i, compute_xx_o_i = s[compute].split(compute_xx_o_i, factor=1)
 compute_xx_o_o_o_i, compute_xx_o_o_i = s[compute].split(compute_xx_o_o_i, factor=1)
-compute_xx_o_o_o_o, compute_xx_o_o_o_i = s[compute].split(compute_xx_o_o_o_i, factor=1)
-compute_rc_o_i, compute_rc_i = s[compute].split(compute_rc, factor=2)
-compute_rc_o_o, compute_rc_o_i = s[compute].split(compute_rc_o_i, factor=4)
+compute_xx_o_o_o_o, compute_xx_o_o_o_i = s[compute].split(compute_xx_o_o_o_i, factor=7)
+compute_rc_o_i, compute_rc_i = s[compute].split(compute_rc, factor=1)
+compute_rc_o_o, compute_rc_o_i = s[compute].split(compute_rc_o_i, factor=1)
 compute_ry_o_i, compute_ry_i = s[compute].split(compute_ry, factor=1)
 compute_ry_o_o, compute_ry_o_i = s[compute].split(compute_ry_o_i, factor=3)
-compute_rx_o_i, compute_rx_i = s[compute].split(compute_rx, factor=1)
+compute_rx_o_i, compute_rx_i = s[compute].split(compute_rx, factor=3)
 compute_rx_o_o, compute_rx_o_i = s[compute].split(compute_rx_o_i, factor=1)
 s[compute].reorder(compute_nn_o_o_o_o, compute_ff_o_o_o_o, compute_yy_o_o_o_o, compute_xx_o_o_o_o, compute_nn_o_o_o_i, compute_ff_o_o_o_i, compute_yy_o_o_o_i, compute_xx_o_o_o_i, compute_nn_o_o_i, compute_ff_o_o_i, compute_yy_o_o_i, compute_xx_o_o_i, compute_rc_o_o, compute_ry_o_o, compute_rx_o_o, compute_rc_o_i, compute_ry_o_i, compute_rx_o_i, compute_nn_o_i, compute_ff_o_i, compute_yy_o_i, compute_xx_o_i, compute_rc_i, compute_ry_i, compute_rx_i, compute_nn_i, compute_ff_i, compute_yy_ [...]
 compute_i0_o_i, compute_i0_i = s[compute].split(compute_i0, factor=1)
 compute_i0_o_o_i, compute_i0_o_i = s[compute].split(compute_i0_o_i, factor=1)
 compute_i0_o_o_o, compute_i0_o_o_i = s[compute].split(compute_i0_o_o_i, factor=1)
 compute_i1_o_i, compute_i1_i = s[compute].split(compute_i1, factor=2)
-compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=8)
+compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=16)
 compute_i1_o_o_o, compute_i1_o_o_i = s[compute].split(compute_i1_o_o_i, factor=1)
 compute_i2_o_i, compute_i2_i = s[compute].split(compute_i2, factor=1)
 compute_i2_o_o_i, compute_i2_o_i = s[compute].split(compute_i2_o_i, factor=7)
 compute_i2_o_o_o, compute_i2_o_o_i = s[compute].split(compute_i2_o_o_i, factor=1)
 compute_i3_o_i, compute_i3_i = s[compute].split(compute_i3, factor=1)
 compute_i3_o_o_i, compute_i3_o_i = s[compute].split(compute_i3_o_i, factor=1)
-compute_i3_o_o_o, compute_i3_o_o_i = s[compute].split(compute_i3_o_o_i, factor=1)
+compute_i3_o_o_o, compute_i3_o_o_i = s[compute].split(compute_i3_o_o_i, factor=7)
 s[compute].reorder(compute_i0_o_o_o, compute_i1_o_o_o, compute_i2_o_o_o, compute_i3_o_o_o, compute_i0_o_o_i, compute_i1_o_o_i, compute_i2_o_o_i, compute_i3_o_o_i, compute_i0_o_i, compute_i1_o_i, compute_i2_o_i, compute_i3_o_i, compute_i0_i, compute_i1_i, compute_i2_i, compute_i3_i)
 s[compute].compute_at(s[compute], compute_i3_o_i)
 kernel_shared = s.cache_read(kernel, &quot;shared&quot;, [compute])
@@ -650,14 +618,14 @@ s[compute].bind(compute_i0_o_i_i1_o_i_fused_i2_o_i_fused_i3_o_i_fused, te.thread
 kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[kernel_shared].fuse(kernel_shared_ax0, kernel_shared_ax1, kernel_shared_ax2, kernel_shared_ax3)
 kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
 s[kernel_shared].vectorize(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
-kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=56)
+kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=112)
 s[kernel_shared].bind(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis(&quot;threadIdx.x&quot;))
 pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[pad_temp_shared].fuse(pad_temp_shared_ax0, pad_temp_shared_ax1, pad_temp_shared_ax2, pad_temp_shared_ax3)
-pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
+pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=9)
 s[pad_temp_shared].vectorize(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
-pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=56)
+pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=112)
 s[pad_temp_shared].bind(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis(&quot;threadIdx.x&quot;))
-s[compute].pragma(compute_nn_o_o_o_o, &quot;auto_unroll_max_step&quot;, 64)
+s[compute].pragma(compute_nn_o_o_o_o, &quot;auto_unroll_max_step&quot;, 0)
 s[compute].pragma(compute_nn_o_o_o_o, &quot;unroll_explicit&quot;, True)
 </pre></div>
 </div>
@@ -686,7 +654,7 @@ In the example below we resume the status and do 5 more trials.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre>Get devices for measurement successfully!
 </pre></div>
 </div>
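The "resume the status and do 5 more trials" step referenced in this hunk reloads the record file into a cost model and search policy before launching a short follow-up search. A minimal sketch of that pattern, assuming the XGBModel / SketchPolicy / PreloadMeasuredStates / auto_schedule names present in this 0.8.dev0 build (log_file and task are the objects created earlier in the tutorial):

    from tvm import auto_scheduler

    # Rebuild the cost model from the measured records so the search resumes
    # from the previously explored states instead of starting from scratch.
    cost_model = auto_scheduler.XGBModel()
    cost_model.update_from_file(log_file)
    search_policy = auto_scheduler.SketchPolicy(
        task,
        cost_model,
        init_search_callbacks=[auto_scheduler.PreloadMeasuredStates(log_file)],
    )

    # Only a handful of extra trials on top of the loaded history.
    tune_option = auto_scheduler.TuningOptions(
        num_measure_trials=5,
        measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
    )
    sch, args = auto_scheduler.auto_schedule(task, search_policy, tuning_options=tune_option)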
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  46.397 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  42.357 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorials-auto-scheduler-tune-conv2d-layer-cuda-py">
 <div class="sphx-glr-download docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/678f3c372a599a18d909aed0fefb30be/tune_conv2d_layer_cuda.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_conv2d_layer_cuda.py</span></code></a></p>
@@ -709,10 +677,10 @@ In the example below we resume the status and do more 5 trials.</p>
 
     <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
       
-        <a href="tune_network_cuda.html" class="btn btn-neutral float-right" title="Auto-tuning a Neural Network for NVIDIA GPU" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
+        <a href="tune_network_cuda.html" class="btn btn-neutral float-right" title="Auto-scheduling a Neural Network for NVIDIA GPU" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
       
       
-        <a href="tune_matmul_x86.html" class="btn btn-neutral float-left" title="Auto-scheduling matrix multiplication for CPU" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
+        <a href="tune_matmul_x86.html" class="btn btn-neutral float-left" title="Auto-scheduling Matrix Multiplication for CPU" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
       
     </div>
 
diff --git a/docs/tutorials/auto_scheduler/tune_matmul_x86.html b/docs/tutorials/auto_scheduler/tune_matmul_x86.html
index 8727fcc..e9fb7ff 100644
--- a/docs/tutorials/auto_scheduler/tune_matmul_x86.html
+++ b/docs/tutorials/auto_scheduler/tune_matmul_x86.html
@@ -11,7 +11,7 @@
   
   <meta name="viewport" content="width=device-width, initial-scale=1.0">
   
-  <title>Auto-scheduling matrix multiplication for CPU &mdash; tvm 0.8.dev0 documentation</title>
+  <title>Auto-scheduling Matrix Multiplication for CPU &mdash; tvm 0.8.dev0 documentation</title>
   
 
   
@@ -44,8 +44,8 @@
     <script type="text/javascript" src="../../_static/js/tlcpack_theme.js"></script>
     <link rel="index" title="Index" href="../../genindex.html" />
     <link rel="search" title="Search" href="../../search.html" />
-    <link rel="next" title="Auto-scheduling a convolution layer for GPU" href="tune_conv2d_layer_cuda.html" />
-    <link rel="prev" title="Auto-tuning a convolutional network for Mobile GPU" href="../autotvm/tune_relay_mobile_gpu.html" /> 
+    <link rel="next" title="Auto-scheduling a Convolution Layer for GPU" href="tune_conv2d_layer_cuda.html" />
+    <link rel="prev" title="Auto-tuning a Convolutional Network for Mobile GPU" href="../autotvm/tune_relay_mobile_gpu.html" /> 
 </head>
 
 <body class="wy-body-for-nav">
@@ -208,7 +208,7 @@
 <li class="toctree-l1"><a class="reference internal" href="../index.html#optimize-tensor-operators">Optimize Tensor Operators</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../index.html#autotvm-template-based-auto-tuning">AutoTVM : Template-based Auto Tuning</a></li>
 <li class="toctree-l1 current"><a class="reference internal" href="../index.html#autoscheduler-template-free-auto-scheduling">AutoScheduler : Template-free Auto Scheduling</a><ul class="current">
-<li class="toctree-l2 current"><a class="current reference internal" href="#">Auto-scheduling matrix multiplication for CPU</a><ul>
+<li class="toctree-l2 current"><a class="current reference internal" href="#">Auto-scheduling Matrix Multiplication for CPU</a><ul>
 <li class="toctree-l3"><a class="reference internal" href="#define-the-computation">Define the computation</a></li>
 <li class="toctree-l3"><a class="reference internal" href="#create-the-search-task">Create the search task</a></li>
 <li class="toctree-l3"><a class="reference internal" href="#run-the-search">Run the search</a></li>
@@ -216,8 +216,8 @@
 <li class="toctree-l3"><a class="reference internal" href="#using-the-record-file">Using the record file</a></li>
 </ul>
 </li>
-<li class="toctree-l2"><a class="reference internal" href="tune_conv2d_layer_cuda.html">Auto-scheduling a convolution layer for GPU</a></li>
-<li class="toctree-l2"><a class="reference internal" href="tune_network_cuda.html">Auto-tuning a Neural Network for NVIDIA GPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_conv2d_layer_cuda.html">Auto-scheduling a Convolution Layer for GPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_network_cuda.html">Auto-scheduling a Neural Network for NVIDIA GPU</a></li>
 </ul>
 </li>
 <li class="toctree-l1"><a class="reference internal" href="../index.html#developer-tutorials">Developer Tutorials</a></li>
@@ -299,7 +299,7 @@
         
           <li><a href="../index.html">Get Started Tutorials</a> <span class="br-arrow">></span></li>
         
-      <li>Auto-scheduling matrix multiplication for CPU</li>
+      <li>Auto-scheduling Matrix Multiplication for CPU</li>
     
     
       <li class="wy-breadcrumbs-aside">
@@ -323,8 +323,9 @@
 <p>Click <a class="reference internal" href="#sphx-glr-download-tutorials-auto-scheduler-tune-matmul-x86-py"><span class="std std-ref">here</span></a> to download the full example code</p>
 </div>
 <div class="sphx-glr-example-title section" id="auto-scheduling-matrix-multiplication-for-cpu">
-<span id="sphx-glr-tutorials-auto-scheduler-tune-matmul-x86-py"></span><h1>Auto-scheduling matrix multiplication for CPU<a class="headerlink" href="#auto-scheduling-matrix-multiplication-for-cpu" title="Permalink to this headline">¶</a></h1>
+<span id="sphx-glr-tutorials-auto-scheduler-tune-matmul-x86-py"></span><h1>Auto-scheduling Matrix Multiplication for CPU<a class="headerlink" href="#auto-scheduling-matrix-multiplication-for-cpu" title="Permalink to this headline">¶</a></h1>
 <p><strong>Author</strong>: <a class="reference external" href="https://github.com/merrymercy">Lianmin Zheng</a>,             <a class="reference external" href="https://github.com/jcf94/">Chengfan Jia</a></p>
+<p>This is a tutorial on how to use the auto-scheduler for CPUs.</p>
 <p>Different from the template-based <a class="reference internal" href="../index.html#tutorials-autotvm-sec"><span class="std std-ref">autotvm</span></a> which relies on
 manual templates to define the search space, the auto-scheduler does not require any templates.
 Users only need to write the computation declaration without any schedule commands or templates.
@@ -397,7 +398,9 @@ and do more analyses later.</p></li>
 </ul>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span class="n">log_file</span> <span class="o">=</span> <span class="s2">&quot;matmul.json&quot;</span>
 <span class="n">tune_option</span> <span class="o">=</span> <a href="../../api/python/auto_scheduler.html#tvm.auto_scheduler.TuningOptions" title="View documentation for tvm.auto_scheduler.TuningOptions"><span class="n">auto_scheduler</span><span class="o">.</span><span class="n">TuningOptions</span></a><span class="p">(</span>
-    <span class="n">num_measure_trials</span><span class="o">=</span><span class="mi">10</span><span class="p">,</span> <span class="n">measure_callbacks</span><span class="o">=</span><span class="p">[</span><a href="../../api/python/auto_scheduler.html#tvm.auto_scheduler.RecordToFile" title="View documentation for tvm.auto_scheduler.RecordToFile"><span class="n">auto_scheduler</span><span class="o">.</span><span class="n">RecordToFile</span></a><span class="p">(</span><span class="n">lo [...]
+    <span class="n">num_measure_trials</span><span class="o">=</span><span class="mi">10</span><span class="p">,</span>
+    <span class="n">measure_callbacks</span><span class="o">=</span><span class="p">[</span><a href="../../api/python/auto_scheduler.html#tvm.auto_scheduler.RecordToFile" title="View documentation for tvm.auto_scheduler.RecordToFile"><span class="n">auto_scheduler</span><span class="o">.</span><span class="n">RecordToFile</span></a><span class="p">(</span><span class="n">log_file</span><span class="p">)],</span>
+    <span class="n">verbose</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span>
 <span class="p">)</span>
 </pre></div>
 </div>
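The TuningOptions shown above only configure the search; in the surrounding tutorial they are applied to a registered workload and a search task. A minimal sketch of that flow, assuming the register_workload / create_task / auto_schedule entry points that ship with this 0.8.dev0 build (the matmul_add name and the 1024 sizes are illustrative, not taken from this diff):

    import tvm
    from tvm import te, auto_scheduler

    @auto_scheduler.register_workload
    def matmul_add(N, L, M, dtype):
        # Plain compute declaration -- no schedule primitives or templates.
        A = te.placeholder((N, L), name="A", dtype=dtype)
        B = te.placeholder((L, M), name="B", dtype=dtype)
        C = te.placeholder((N, M), name="C", dtype=dtype)
        k = te.reduce_axis((0, L), name="k")
        matmul = te.compute(
            (N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="matmul"
        )
        out = te.compute((N, M), lambda i, j: matmul[i, j] + C[i, j], name="out")
        return [A, B, C, out]

    target = tvm.target.Target("llvm")
    task = auto_scheduler.create_task(matmul_add, (1024, 1024, 1024, "float32"), target)

    # tune_option is the TuningOptions object constructed above.
    sch, args = auto_scheduler.auto_schedule(task, tuning_options=tune_option)
    func = tvm.build(sch, args, target)

The record file written by RecordToFile can later be replayed to rebuild the same schedule without re-running the search (see the tutorial's "Using the record file" section).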
@@ -476,7 +479,7 @@ parallelization, vectorization, unrolling and operator fusion.</p>
 </pre></div>
 </div>
 <p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre>Execution time of this operator: 2.209 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre>Execution time of this operator: 2.243 ms
 </pre></div>
 </div>
 </div>
@@ -560,9 +563,9 @@ For example, you can start a new thread/process (with the built-in Python libraries
 threading or multiprocessing) and run the tvm binaries in the new thread/process.
 This provides isolation and avoids conflicts with the main thread/process.
 You can also use <a class="reference internal" href="../../api/python/auto_scheduler.html#tvm.auto_scheduler.LocalRPCMeasureContext" title="tvm.auto_scheduler.LocalRPCMeasureContext"><code class="xref any py py-class docutils literal notranslate"><span class="pre">auto_scheduler.LocalRPCMeasureContext</span></code></a> for auto-scheduler,
-as shown in the GPU tutorial (<a class="reference internal" href="tune_conv2d_layer_cuda.html#auto-scheduler-conv-gpu"><span class="std std-ref">Auto-scheduling a convolution layer for GPU</span></a>).</p>
+as shown in the GPU tutorial (<a class="reference internal" href="tune_conv2d_layer_cuda.html#auto-scheduler-conv-gpu"><span class="std std-ref">Auto-scheduling a Convolution Layer for GPU</span></a>).</p>
 </div>
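The note above points to auto_scheduler.LocalRPCMeasureContext as the way to isolate measurement from the tuning process. A minimal sketch of that pattern, assuming the runner attribute and the min_repeat_ms argument used elsewhere in these tutorials:

    from tvm import auto_scheduler

    # Start a local RPC tracker/server pair so each measurement runs in a
    # separate process instead of inside the tuning process itself.
    measure_ctx = auto_scheduler.LocalRPCMeasureContext(min_repeat_ms=300)

    tune_option = auto_scheduler.TuningOptions(
        num_measure_trials=10,
        runner=measure_ctx.runner,  # isolated RPC runner for measurement
        measure_callbacks=[auto_scheduler.RecordToFile("matmul.json")],
    )

    # ... run the search with this tune_option ...

    del measure_ctx  # shut down the RPC measurement processes when finished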
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  54.000 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  53.558 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorials-auto-scheduler-tune-matmul-x86-py">
 <div class="sphx-glr-download docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/91b0339c8f3cc2594cee580dc450149a/tune_matmul_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_matmul_x86.py</span></code></a></p>
@@ -585,10 +588,10 @@ as shown in the GPU tutorial (<a class="reference internal" href="tune_conv2d_la
 
     <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
       
-        <a href="tune_conv2d_layer_cuda.html" class="btn btn-neutral float-right" title="Auto-scheduling a convolution layer for GPU" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
+        <a href="tune_conv2d_layer_cuda.html" class="btn btn-neutral float-right" title="Auto-scheduling a Convolution Layer for GPU" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
       
       
-        <a href="../autotvm/tune_relay_mobile_gpu.html" class="btn btn-neutral float-left" title="Auto-tuning a convolutional network for Mobile GPU" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
+        <a href="../autotvm/tune_relay_mobile_gpu.html" class="btn btn-neutral float-left" title="Auto-tuning a Convolutional Network for Mobile GPU" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
       
     </div>
 
diff --git a/docs/tutorials/auto_scheduler/tune_network_cuda.html b/docs/tutorials/auto_scheduler/tune_network_cuda.html
index 4bd7f11..02ac0bd 100644
--- a/docs/tutorials/auto_scheduler/tune_network_cuda.html
+++ b/docs/tutorials/auto_scheduler/tune_network_cuda.html
@@ -11,7 +11,7 @@
   
   <meta name="viewport" content="width=device-width, initial-scale=1.0">
   
-  <title>Auto-tuning a Neural Network for NVIDIA GPU &mdash; tvm 0.8.dev0 documentation</title>
+  <title>Auto-scheduling a Neural Network for NVIDIA GPU &mdash; tvm 0.8.dev0 documentation</title>
   
 
   
@@ -45,7 +45,7 @@
     <link rel="index" title="Index" href="../../genindex.html" />
     <link rel="search" title="Search" href="../../search.html" />
     <link rel="next" title="Bring Your Own Datatypes to TVM" href="../dev/bring_your_own_datatypes.html" />
-    <link rel="prev" title="Auto-scheduling a convolution layer for GPU" href="tune_conv2d_layer_cuda.html" /> 
+    <link rel="prev" title="Auto-scheduling a Convolution Layer for GPU" href="tune_conv2d_layer_cuda.html" /> 
 </head>
 
 <body class="wy-body-for-nav">
@@ -208,9 +208,9 @@
 <li class="toctree-l1"><a class="reference internal" href="../index.html#optimize-tensor-operators">Optimize Tensor Operators</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../index.html#autotvm-template-based-auto-tuning">AutoTVM : Template-based Auto Tuning</a></li>
 <li class="toctree-l1 current"><a class="reference internal" href="../index.html#autoscheduler-template-free-auto-scheduling">AutoScheduler : Template-free Auto Scheduling</a><ul class="current">
-<li class="toctree-l2"><a class="reference internal" href="tune_matmul_x86.html">Auto-scheduling matrix multiplication for CPU</a></li>
-<li class="toctree-l2"><a class="reference internal" href="tune_conv2d_layer_cuda.html">Auto-scheduling a convolution layer for GPU</a></li>
-<li class="toctree-l2 current"><a class="current reference internal" href="#">Auto-tuning a Neural Network for NVIDIA GPU</a><ul>
+<li class="toctree-l2"><a class="reference internal" href="tune_matmul_x86.html">Auto-scheduling Matrix Multiplication for CPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_conv2d_layer_cuda.html">Auto-scheduling a Convolution Layer for GPU</a></li>
+<li class="toctree-l2 current"><a class="current reference internal" href="#">Auto-scheduling a Neural Network for NVIDIA GPU</a><ul>
 <li class="toctree-l3"><a class="reference internal" href="#define-a-network">Define a Network</a></li>
 <li class="toctree-l3"><a class="reference internal" href="#extract-search-tasks">Extract Search Tasks</a></li>
 <li class="toctree-l3"><a class="reference internal" href="#begin-tuning">Begin Tuning</a></li>
@@ -299,7 +299,7 @@
         
           <li><a href="../index.html">Get Started Tutorials</a> <span class="br-arrow">></span></li>
         
-      <li>Auto-tuning a Neural Network for NVIDIA GPU</li>
+      <li>Auto-scheduling a Neural Network for NVIDIA GPU</li>
     
     
       <li class="wy-breadcrumbs-aside">
@@ -322,8 +322,8 @@
 <p class="admonition-title">Note</p>
 <p>Click <a class="reference internal" href="#sphx-glr-download-tutorials-auto-scheduler-tune-network-cuda-py"><span class="std std-ref">here</span></a> to download the full example code</p>
 </div>
-<div class="sphx-glr-example-title section" id="auto-tuning-a-neural-network-for-nvidia-gpu">
-<span id="sphx-glr-tutorials-auto-scheduler-tune-network-cuda-py"></span><h1>Auto-tuning a Neural Network for NVIDIA GPU<a class="headerlink" href="#auto-tuning-a-neural-network-for-nvidia-gpu" title="Permalink to this headline">¶</a></h1>
+<div class="sphx-glr-example-title section" id="auto-scheduling-a-neural-network-for-nvidia-gpu">
+<span id="sphx-glr-tutorials-auto-scheduler-tune-network-cuda-py"></span><h1>Auto-scheduling a Neural Network for NVIDIA GPU<a class="headerlink" href="#auto-scheduling-a-neural-network-for-nvidia-gpu" title="Permalink to this headline">¶</a></h1>
 <p><strong>Author</strong>: <a class="reference external" href="https://github.com/merrymercy">Lianmin Zheng</a></p>
 <p>Auto-tuning for specific devices and workloads is critical for getting the
 best performance. This is a tutorial on how to tune a whole neural
@@ -451,10 +451,266 @@ The task scheduler will just optimize this objective.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Extract tasks...&quot;</span><span class="p">)</span>
 <span class="n">mod</span><span class="p">,</span> <span class="n">params</span><span class="p">,</span> <span class="n">input_shape</span><span class="p">,</span> <span class="n">output_shape</span> <span class="o">=</span> <span class="n">get_network</span><span class="p">(</span><span class="n">network</span><span class="p">,</span> <span class="n">batch_size</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span> <span class="n">dtype</span><span class="o [...]
 <span class="n">tasks</span><span class="p">,</span> <span class="n">task_weights</span> <span class="o">=</span> <a href="../../api/python/auto_scheduler.html#tvm.auto_scheduler.extract_tasks" title="View documentation for tvm.auto_scheduler.extract_tasks"><span class="n">auto_scheduler</span><span class="o">.</span><span class="n">extract_tasks</span></a><span class="p">(</span><span class="n">mod</span><span class="p">[</span><span class="s2">&quot;main&quot;</span><span class="p">],< [...]
+
+<span class="k">for</span> <span class="n">idx</span><span class="p">,</span> <span class="n">task</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">tasks</span><span class="p">):</span>
+    <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;========== Task </span><span class="si">%d</span><span class="s2">  (workload key: </span><span class="si">%s</span><span class="s2">) ==========&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="n">idx</span><span class="p">,</span> <span class="n">task</span><span class="o">.</span><span class="n">workload_key</span><span class="p">))</span>
+    <span class="nb">print</span><span class="p">(</span><span class="n">task</span><span class="o">.</span><span class="n">compute_dag</span><span class="p">)</span>
 </pre></div>
 </div>
 <p class="sphx-glr-script-out">Out:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre>Extract tasks...
+========== Task 0  (workload key: [&quot;d09dc1a6bb90d59c91b68989ad3492ff&quot;]) ==========
+placeholder = PLACEHOLDER [1, 512]
+placeholder = PLACEHOLDER [1000, 512]
+T_dense(i, j) += (placeholder[i, k]*placeholder[j, k])
+placeholder = PLACEHOLDER [1000]
+T_add(ax0, ax1) = (T_dense[ax0, ax1] + placeholder[ax1])
+
+========== Task 1  (workload key: [&quot;8d5a93959138dc7b2ee1f1b3219dfa14&quot;]) ==========
+placeholder = PLACEHOLDER [1, 7, 7, 512]
+data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 &gt;= 1) &amp;&amp; (i1 &lt; 8)) &amp;&amp; (i2 &gt;= 1)) &amp;&amp; (i2 &lt; 8)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 16), ((floormod(floordiv(p, 4), 4)*2) + eps), ((floormod(p, 4)*2) + nu), ci]
+B(i, j) = select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 4) == 3)), 1f, select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 4) == 2)),  ..(OMITTED).. ormod(i, 4) == 0) &amp;&amp; (floormod(j, 4) == 1)), 0f, select(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))
+data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+placeholder = PLACEHOLDER [4, 4, 512, 512]
+bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+A(i, j) = select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 2) == 1)), 1f, select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 2) == 0)),  ..(OMITTED).. ct(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 2) == 1)), 0f, select(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 2) == 0)), 1f, 0f))))))))
+inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+conv2d_winograd(n, h, w, co) = inverse[floormod(h, 2), floormod(w, 2), ((((n*4)*4) + (floordiv(h, 2)*4)) + floordiv(w, 2)), co]
+placeholder = PLACEHOLDER [1, 7, 7, 512]
+T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
+placeholder = PLACEHOLDER [1, 1, 1, 512]
+T_multiply(ax0, ax1, ax2, ax3) = (T_add[ax0, ax1, ax2, ax3]*placeholder[ax0, 0, 0, ax3])
+placeholder = PLACEHOLDER [1, 1, 1, 512]
+T_add(ax0, ax1, ax2, ax3) = (T_multiply[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+========== Task 2  (workload key: [&quot;ac6920940de3797cc3f9f9c260675e5d&quot;]) ==========
+placeholder = PLACEHOLDER [1, 7, 7, 512]
+data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 &gt;= 1) &amp;&amp; (i1 &lt; 8)) &amp;&amp; (i2 &gt;= 1)) &amp;&amp; (i2 &lt; 8)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 16), ((floormod(floordiv(p, 4), 4)*2) + eps), ((floormod(p, 4)*2) + nu), ci]
+B(i, j) = select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 4) == 3)), 1f, select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 4) == 2)),  ..(OMITTED).. ormod(i, 4) == 0) &amp;&amp; (floormod(j, 4) == 1)), 0f, select(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))
+data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+placeholder = PLACEHOLDER [4, 4, 512, 512]
+bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+A(i, j) = select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 2) == 1)), 1f, select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 2) == 0)),  ..(OMITTED).. ct(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 2) == 1)), 0f, select(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 2) == 0)), 1f, 0f))))))))
+inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+conv2d_winograd(n, h, w, co) = inverse[floormod(h, 2), floormod(w, 2), ((((n*4)*4) + (floordiv(h, 2)*4)) + floordiv(w, 2)), co]
+placeholder = PLACEHOLDER [1, 1, 1, 512]
+T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+========== Task 3  (workload key: [&quot;7e83a2ee5cd5d50282ed19310700046a&quot;]) ==========
+placeholder = PLACEHOLDER [1, 7, 7, 512]
+data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 &gt;= 1) &amp;&amp; (i1 &lt; 8)) &amp;&amp; (i2 &gt;= 1)) &amp;&amp; (i2 &lt; 8)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 16), ((floormod(floordiv(p, 4), 4)*2) + eps), ((floormod(p, 4)*2) + nu), ci]
+B(i, j) = select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 4) == 3)), 1f, select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 4) == 2)),  ..(OMITTED).. ormod(i, 4) == 0) &amp;&amp; (floormod(j, 4) == 1)), 0f, select(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))
+data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+placeholder = PLACEHOLDER [4, 4, 512, 512]
+bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+A(i, j) = select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 2) == 1)), 1f, select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 2) == 0)),  ..(OMITTED).. ct(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 2) == 1)), 0f, select(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 2) == 0)), 1f, 0f))))))))
+inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+conv2d_winograd(n, h, w, co) = inverse[floormod(h, 2), floormod(w, 2), ((((n*4)*4) + (floordiv(h, 2)*4)) + floordiv(w, 2)), co]
+placeholder = PLACEHOLDER [1, 7, 7, 512]
+T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
+
+========== Task 4  (workload key: [&quot;1f6cd3637ec856bf5cf5010a623eed05&quot;]) ==========
+placeholder = PLACEHOLDER [1, 14, 14, 256]
+PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 &gt;= 1) &amp;&amp; (i1 &lt; 15)) &amp;&amp; (i2 &gt;= 1)) &amp;&amp; (i2 &lt; 15)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+placeholder = PLACEHOLDER [3, 3, 256, 512]
+Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
+placeholder = PLACEHOLDER [1, 1, 1, 512]
+T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+========== Task 5  (workload key: [&quot;424ba83160af31badc0b098136e1a3b0&quot;]) ==========
+placeholder = PLACEHOLDER [1, 14, 14, 256]
+data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 &gt;= 1) &amp;&amp; (i1 &lt; 15)) &amp;&amp; (i2 &gt;= 1)) &amp;&amp; (i2 &lt; 15)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 49), ((floormod(floordiv(p, 7), 7)*2) + eps), ((floormod(p, 7)*2) + nu), ci]
+B(i, j) = select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 4) == 3)), 1f, select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 4) == 2)),  ..(OMITTED).. ormod(i, 4) == 0) &amp;&amp; (floormod(j, 4) == 1)), 0f, select(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))
+data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+placeholder = PLACEHOLDER [4, 4, 256, 256]
+bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+A(i, j) = select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 2) == 1)), 1f, select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 2) == 0)),  ..(OMITTED).. ct(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 2) == 1)), 0f, select(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 2) == 0)), 1f, 0f))))))))
+inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+conv2d_winograd(n, h, w, co) = inverse[floormod(h, 2), floormod(w, 2), ((((n*7)*7) + (floordiv(h, 2)*7)) + floordiv(w, 2)), co]
+placeholder = PLACEHOLDER [1, 14, 14, 256]
+T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
+placeholder = PLACEHOLDER [1, 1, 1, 256]
+T_add(ax0, ax1, ax2, ax3) = (T_add[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+========== Task 6  (workload key: [&quot;a169cd0053d3a7ca82998fcb62e42c58&quot;]) ==========
+placeholder = PLACEHOLDER [1, 14, 14, 256]
+data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 &gt;= 1) &amp;&amp; (i1 &lt; 15)) &amp;&amp; (i2 &gt;= 1)) &amp;&amp; (i2 &lt; 15)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 49), ((floormod(floordiv(p, 7), 7)*2) + eps), ((floormod(p, 7)*2) + nu), ci]
+B(i, j) = select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 4) == 3)), 1f, select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 4) == 2)),  ..(OMITTED).. ormod(i, 4) == 0) &amp;&amp; (floormod(j, 4) == 1)), 0f, select(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))
+data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+placeholder = PLACEHOLDER [4, 4, 256, 256]
+bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+A(i, j) = select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 2) == 1)), 1f, select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 2) == 0)),  ..(OMITTED).. ct(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 2) == 1)), 0f, select(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 2) == 0)), 1f, 0f))))))))
+inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+conv2d_winograd(n, h, w, co) = inverse[floormod(h, 2), floormod(w, 2), ((((n*7)*7) + (floordiv(h, 2)*7)) + floordiv(w, 2)), co]
+placeholder = PLACEHOLDER [1, 1, 1, 256]
+T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+========== Task 7  (workload key: [&quot;0141ffc4fbabc10cc5a94c954419055b&quot;]) ==========
+placeholder = PLACEHOLDER [1, 14, 14, 256]
+data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 &gt;= 1) &amp;&amp; (i1 &lt; 15)) &amp;&amp; (i2 &gt;= 1)) &amp;&amp; (i2 &lt; 15)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 49), ((floormod(floordiv(p, 7), 7)*2) + eps), ((floormod(p, 7)*2) + nu), ci]
+B(i, j) = select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 4) == 3)), 1f, select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 4) == 2)),  ..(OMITTED).. ormod(i, 4) == 0) &amp;&amp; (floormod(j, 4) == 1)), 0f, select(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))
+data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+placeholder = PLACEHOLDER [4, 4, 256, 256]
+bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+A(i, j) = select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 2) == 1)), 1f, select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 2) == 0)),  ..(OMITTED).. ct(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 2) == 1)), 0f, select(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 2) == 0)), 1f, 0f))))))))
+inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+conv2d_winograd(n, h, w, co) = inverse[floormod(h, 2), floormod(w, 2), ((((n*7)*7) + (floordiv(h, 2)*7)) + floordiv(w, 2)), co]
+placeholder = PLACEHOLDER [1, 14, 14, 256]
+T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
+
+========== Task 8  (workload key: [&quot;81aae4b8e2c076a4014d403e8a2c70a1&quot;]) ==========
+placeholder = PLACEHOLDER [1, 28, 28, 128]
+PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 &gt;= 1) &amp;&amp; (i1 &lt; 29)) &amp;&amp; (i2 &gt;= 1)) &amp;&amp; (i2 &lt; 29)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+placeholder = PLACEHOLDER [3, 3, 128, 256]
+Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
+placeholder = PLACEHOLDER [1, 1, 1, 256]
+T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+========== Task 9  (workload key: [&quot;c7a6b56bdc04b94c829fb2ef9874019e&quot;]) ==========
+placeholder = PLACEHOLDER [1, 28, 28, 128]
+data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 &gt;= 1) &amp;&amp; (i1 &lt; 29)) &amp;&amp; (i2 &gt;= 1)) &amp;&amp; (i2 &lt; 29)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 196), ((floormod(floordiv(p, 14), 14)*2) + eps), ((floormod(p, 14)*2) + nu), ci]
+B(i, j) = select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 4) == 3)), 1f, select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 4) == 2)),  ..(OMITTED).. ormod(i, 4) == 0) &amp;&amp; (floormod(j, 4) == 1)), 0f, select(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))
+data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+placeholder = PLACEHOLDER [4, 4, 128, 128]
+bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+A(i, j) = select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 2) == 1)), 1f, select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 2) == 0)),  ..(OMITTED).. ct(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 2) == 1)), 0f, select(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 2) == 0)), 1f, 0f))))))))
+inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+conv2d_winograd(n, h, w, co) = inverse[floormod(h, 2), floormod(w, 2), ((((n*14)*14) + (floordiv(h, 2)*14)) + floordiv(w, 2)), co]
+placeholder = PLACEHOLDER [1, 28, 28, 128]
+T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
+placeholder = PLACEHOLDER [1, 1, 1, 128]
+T_add(ax0, ax1, ax2, ax3) = (T_add[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+========== Task 10  (workload key: [&quot;c035cc8b0568a8e054d06bd7f4950550&quot;]) ==========
+placeholder = PLACEHOLDER [1, 28, 28, 128]
+data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 &gt;= 1) &amp;&amp; (i1 &lt; 29)) &amp;&amp; (i2 &gt;= 1)) &amp;&amp; (i2 &lt; 29)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 196), ((floormod(floordiv(p, 14), 14)*2) + eps), ((floormod(p, 14)*2) + nu), ci]
+B(i, j) = select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 4) == 3)), 1f, select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 4) == 2)),  ..(OMITTED).. ormod(i, 4) == 0) &amp;&amp; (floormod(j, 4) == 1)), 0f, select(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))
+data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+placeholder = PLACEHOLDER [4, 4, 128, 128]
+bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+A(i, j) = select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 2) == 1)), 1f, select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 2) == 0)),  ..(OMITTED).. ct(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 2) == 1)), 0f, select(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 2) == 0)), 1f, 0f))))))))
+inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+conv2d_winograd(n, h, w, co) = inverse[floormod(h, 2), floormod(w, 2), ((((n*14)*14) + (floordiv(h, 2)*14)) + floordiv(w, 2)), co]
+placeholder = PLACEHOLDER [1, 1, 1, 128]
+T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+========== Task 11  (workload key: [&quot;c5ee3e05edd9754492d0763aa41fd025&quot;]) ==========
+placeholder = PLACEHOLDER [1, 28, 28, 128]
+data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 &gt;= 1) &amp;&amp; (i1 &lt; 29)) &amp;&amp; (i2 &gt;= 1)) &amp;&amp; (i2 &lt; 29)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 196), ((floormod(floordiv(p, 14), 14)*2) + eps), ((floormod(p, 14)*2) + nu), ci]
+B(i, j) = select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 4) == 3)), 1f, select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 4) == 2)),  ..(OMITTED).. ormod(i, 4) == 0) &amp;&amp; (floormod(j, 4) == 1)), 0f, select(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))
+data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+placeholder = PLACEHOLDER [4, 4, 128, 128]
+bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+A(i, j) = select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 2) == 1)), 1f, select(((floormod(i, 4) == 3) &amp;&amp; (floormod(j, 2) == 0)),  ..(OMITTED).. ct(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 2) == 1)), 0f, select(((floormod(i, 4) == 0) &amp;&amp; (floormod(j, 2) == 0)), 1f, 0f))))))))
+inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+conv2d_winograd(n, h, w, co) = inverse[floormod(h, 2), floormod(w, 2), ((((n*14)*14) + (floordiv(h, 2)*14)) + floordiv(w, 2)), co]
+placeholder = PLACEHOLDER [1, 28, 28, 128]
+T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
+
+========== Task 12  (workload key: [&quot;022ebb6b7c55c5ed030421380ec83a04&quot;]) ==========
+placeholder = PLACEHOLDER [1, 56, 56, 64]
+PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 &gt;= 1) &amp;&amp; (i1 &lt; 57)) &amp;&amp; (i2 &gt;= 1)) &amp;&amp; (i2 &lt; 57)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+placeholder = PLACEHOLDER [3, 3, 64, 128]
+Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
+placeholder = PLACEHOLDER [1, 1, 1, 128]
+T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+========== Task 13  (workload key: [&quot;de0df0893e01892cfe69f7bc2c24111f&quot;]) ==========
+placeholder = PLACEHOLDER [1, 56, 56, 64]
+data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 &gt;= 1) &amp;&amp; (i1 &lt; 57)) &amp;&amp; (i2 &gt;= 1)) &amp;&amp; (i2 &lt; 57)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 196), ((floormod(floordiv(p, 14), 14)*4) + eps), ((floormod(p, 14)*4) + nu), ci]
+B(i, j) = select(((floormod(i, 6) == 5) &amp;&amp; (floormod(j, 6) == 5)), 1f, select(((floormod(i, 6) == 5) &amp;&amp; (floormod(j, 6) == 4)),  ..(OMITTED)..  (floormod(j, 6) == 1)), 0f, select(((floormod(i, 6) == 0) &amp;&amp; (floormod(j, 6) == 0)), 1f, 0f))))))))))))))))))))))))))))))))))))
+data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+placeholder = PLACEHOLDER [6, 6, 64, 64]
+bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+A(i, j) = select(((floormod(i, 6) == 5) &amp;&amp; (floormod(j, 4) == 3)), 1f, select(((floormod(i, 6) == 5) &amp;&amp; (floormod(j, 4) == 2)),  ..(OMITTED)..  6) == 0) &amp;&amp; (floormod(j, 4) == 1)), 0f, select(((floormod(i, 6) == 0) &amp;&amp; (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))))))))))
+inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+conv2d_winograd(n, h, w, co) = inverse[floormod(h, 4), floormod(w, 4), ((((n*14)*14) + (floordiv(h, 4)*14)) + floordiv(w, 4)), co]
+placeholder = PLACEHOLDER [1, 56, 56, 64]
+T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
+placeholder = PLACEHOLDER [1, 1, 1, 64]
+T_add(ax0, ax1, ax2, ax3) = (T_add[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+========== Task 14  (workload key: [&quot;f2e3c09a00e7d0a9897f70497e089f1e&quot;]) ==========
+placeholder = PLACEHOLDER [1, 56, 56, 64]
+data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 &gt;= 1) &amp;&amp; (i1 &lt; 57)) &amp;&amp; (i2 &gt;= 1)) &amp;&amp; (i2 &lt; 57)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 196), ((floormod(floordiv(p, 14), 14)*4) + eps), ((floormod(p, 14)*4) + nu), ci]
+B(i, j) = select(((floormod(i, 6) == 5) &amp;&amp; (floormod(j, 6) == 5)), 1f, select(((floormod(i, 6) == 5) &amp;&amp; (floormod(j, 6) == 4)),  ..(OMITTED)..  (floormod(j, 6) == 1)), 0f, select(((floormod(i, 6) == 0) &amp;&amp; (floormod(j, 6) == 0)), 1f, 0f))))))))))))))))))))))))))))))))))))
+data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+placeholder = PLACEHOLDER [6, 6, 64, 64]
+bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+A(i, j) = select(((floormod(i, 6) == 5) &amp;&amp; (floormod(j, 4) == 3)), 1f, select(((floormod(i, 6) == 5) &amp;&amp; (floormod(j, 4) == 2)),  ..(OMITTED)..  6) == 0) &amp;&amp; (floormod(j, 4) == 1)), 0f, select(((floormod(i, 6) == 0) &amp;&amp; (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))))))))))
+inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+conv2d_winograd(n, h, w, co) = inverse[floormod(h, 4), floormod(w, 4), ((((n*14)*14) + (floordiv(h, 4)*14)) + floordiv(w, 4)), co]
+placeholder = PLACEHOLDER [1, 1, 1, 64]
+T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+========== Task 15  (workload key: [&quot;fa26946d7ac51126bfa859cb183f9ca1&quot;]) ==========
+placeholder = PLACEHOLDER [1, 56, 56, 64]
+data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 &gt;= 1) &amp;&amp; (i1 &lt; 57)) &amp;&amp; (i2 &gt;= 1)) &amp;&amp; (i2 &lt; 57)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 196), ((floormod(floordiv(p, 14), 14)*4) + eps), ((floormod(p, 14)*4) + nu), ci]
+B(i, j) = select(((floormod(i, 6) == 5) &amp;&amp; (floormod(j, 6) == 5)), 1f, select(((floormod(i, 6) == 5) &amp;&amp; (floormod(j, 6) == 4)),  ..(OMITTED)..  (floormod(j, 6) == 1)), 0f, select(((floormod(i, 6) == 0) &amp;&amp; (floormod(j, 6) == 0)), 1f, 0f))))))))))))))))))))))))))))))))))))
+data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
+placeholder = PLACEHOLDER [6, 6, 64, 64]
+bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
+A(i, j) = select(((floormod(i, 6) == 5) &amp;&amp; (floormod(j, 4) == 3)), 1f, select(((floormod(i, 6) == 5) &amp;&amp; (floormod(j, 4) == 2)),  ..(OMITTED)..  6) == 0) &amp;&amp; (floormod(j, 4) == 1)), 0f, select(((floormod(i, 6) == 0) &amp;&amp; (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))))))))))
+inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
+conv2d_winograd(n, h, w, co) = inverse[floormod(h, 4), floormod(w, 4), ((((n*14)*14) + (floordiv(h, 4)*14)) + floordiv(w, 4)), co]
+placeholder = PLACEHOLDER [1, 56, 56, 64]
+T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
+
+========== Task 16  (workload key: [&quot;a0eb8d6048282a4a0986cc2ccf14eaa2&quot;]) ==========
+placeholder = PLACEHOLDER [1, 224, 224, 3]
+PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 &gt;= 3) &amp;&amp; (i1 &lt; 227)) &amp;&amp; (i2 &gt;= 3)) &amp;&amp; (i2 &lt; 227)), placeholder[i0, (i1 - 3), (i2 - 3), i3], 0f)
+placeholder = PLACEHOLDER [7, 7, 3, 64]
+Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
+placeholder = PLACEHOLDER [1, 1, 1, 64]
+T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+========== Task 17  (workload key: [&quot;bf78a7bf0209980f72953637dfd14a6f&quot;]) ==========
+placeholder = PLACEHOLDER [1, 56, 56, 64]
+PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
+placeholder = PLACEHOLDER [1, 1, 64, 64]
+Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
+
+========== Task 18  (workload key: [&quot;6630936c26852f2b89dbfa2ff37fbb9c&quot;]) ==========
+placeholder = PLACEHOLDER [1, 56, 56, 64]
+PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
+placeholder = PLACEHOLDER [1, 1, 64, 128]
+Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
+
+========== Task 19  (workload key: [&quot;ba5f918733ccbbd4a1d7fd3724665a2f&quot;]) ==========
+placeholder = PLACEHOLDER [1, 28, 28, 128]
+PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
+placeholder = PLACEHOLDER [1, 1, 128, 256]
+Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
+
+========== Task 20  (workload key: [&quot;21ad409d72953de188314010134e3acd&quot;]) ==========
+placeholder = PLACEHOLDER [1, 14, 14, 256]
+PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
+placeholder = PLACEHOLDER [1, 1, 256, 512]
+Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
 </pre></div>
 </div>
 </div>
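Editor's note: the "Task N (workload key: ...)" blocks shown in the diff above are the compute DAGs that the auto-scheduler extracts from the network before tuning. As a minimal sketch only (assuming the Relay module `mod`, its `params`, and the CUDA `target` defined earlier in the tune_network_cuda tutorial), that printout is typically produced along these lines:

    import tvm
    from tvm import relay, auto_scheduler

    # `mod`, `params`, and `target` are assumed from earlier in the tutorial,
    # e.g. target = tvm.target.Target("cuda"). This only sketches the extraction step.
    tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target)

    for idx, task in enumerate(tasks):
        # Each task corresponds to one "========== Task N ==========" block above.
        print("========== Task %d  (workload key: %s) ==========" % (idx, task.workload_key))
        print(task.compute_dag)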
@@ -546,7 +802,7 @@ total time spent on auto-tuning and the id of the next task to tune.</p>
 <p>There will also be some “dmlc::Error”s and CUDA errors, because the
 auto-scheduler will try some invalid schedules.
 You can safely ignore them if the tuning can continue, because these
-errors are isolated from the master process.</p>
+errors are isolated from the main process.</p>
 </div>
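Editor's note: the progress lines (and the occasional harmless errors mentioned in the note above) come from the task scheduler driving measurement workers in separate processes, which is why failed trials do not take down the tuning run. A minimal sketch of how such a run is usually launched, assuming the `tasks` and `task_weights` from the extraction step and an arbitrary log file name:

    from tvm import auto_scheduler

    log_file = "resnet-18-NHWC-B1.json"  # any path works; this name is only an example

    # Measurements run in worker processes, so errors from invalid schedules
    # stay isolated from the main tuning process.
    measure_ctx = auto_scheduler.LocalRPCMeasureContext(repeat=1, min_repeat_ms=300, timeout=10)
    tune_option = auto_scheduler.TuningOptions(
        num_measure_trials=200,  # small demonstration budget; real runs use far more
        runner=measure_ctx.runner,
        measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
    )

    tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
    tuner.tune(tune_option)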
 <div class="admonition note">
 <p class="admonition-title">Note</p>
@@ -583,7 +839,7 @@ so we can read the log file and load the best schedules.</p>
 <p class="sphx-glr-script-out">Out:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre>Compile...
 Evaluate inference time cost...
-Mean inference time (std dev): 3.15 ms (0.01 ms)
+Mean inference time (std dev): 3.14 ms (0.01 ms)
 </pre></div>
 </div>
 </div>
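Editor's note: the "Compile... / Evaluate inference time cost..." output above corresponds to replaying the best schedules from the log and timing the compiled module. A hedged sketch of that step, assuming the same `mod`, `params`, `target`, and `log_file` as before (the input name and shape are taken from the tutorial's ResNet-18 NHWC example and are assumptions here):

    import numpy as np
    import tvm
    from tvm import relay, auto_scheduler
    from tvm.contrib import graph_runtime

    # Apply the best schedules recorded during tuning while building the module.
    with auto_scheduler.ApplyHistoryBest(log_file):
        with tvm.transform.PassContext(opt_level=3,
                                       config={"relay.backend.use_auto_scheduler": True}):
            lib = relay.build(mod, target=target, params=params)

    ctx = tvm.gpu()
    module = graph_runtime.GraphModule(lib["default"](ctx))
    # Input name/shape assumed from the ResNet-18 NHWC example above.
    module.set_input("data", np.random.uniform(size=(1, 224, 224, 3)).astype("float32"))

    # Time the whole network; the mean/std correspond to the numbers reported above.
    ftimer = module.module.time_evaluator("run", ctx, repeat=3, min_repeat_ms=500)
    prof_res = np.array(ftimer().results) * 1000  # convert to milliseconds
    print("Mean inference time (std dev): %.2f ms (%.2f ms)"
          % (np.mean(prof_res), np.std(prof_res)))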
@@ -624,7 +880,7 @@ with <a class="reference internal" href="../../api/python/auto_scheduler.html#tv
         <a href="../dev/bring_your_own_datatypes.html" class="btn btn-neutral float-right" title="Bring Your Own Datatypes to TVM" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
       
       
-        <a href="tune_conv2d_layer_cuda.html" class="btn btn-neutral float-left" title="Auto-scheduling a convolution layer for GPU" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
+        <a href="tune_conv2d_layer_cuda.html" class="btn btn-neutral float-left" title="Auto-scheduling a Convolution Layer for GPU" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
       
     </div>
 
diff --git a/docs/tutorials/autotvm/sg_execution_times.html b/docs/tutorials/autotvm/sg_execution_times.html
index eb6d82b..7eb8dcb 100644
--- a/docs/tutorials/autotvm/sg_execution_times.html
+++ b/docs/tutorials/autotvm/sg_execution_times.html
@@ -304,14 +304,14 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-tutorials-autotvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:59.076</strong> total execution time for <strong>tutorials_autotvm</strong> files:</p>
+<p><strong>01:07.514</strong> total execution time for <strong>tutorials_autotvm</strong> files:</p>
 <ul class="simple">
-<li><p><strong>00:30.069</strong>: <a class="reference internal" href="tune_conv2d_cuda.html#sphx-glr-tutorials-autotvm-tune-conv2d-cuda-py"><span class="std std-ref">Tuning High Performance Convolution on NVIDIA GPUs</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_cuda.py</span></code>)</p></li>
-<li><p><strong>00:28.300</strong>: <a class="reference internal" href="tune_simple_template.html#sphx-glr-tutorials-autotvm-tune-simple-template-py"><span class="std std-ref">Writing tunable template and Using auto-tuner</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_simple_template.py</span></code>)</p></li>
-<li><p><strong>00:00.205</strong>: <a class="reference internal" href="tune_relay_cuda.html#sphx-glr-tutorials-autotvm-tune-relay-cuda-py"><span class="std std-ref">Auto-tuning a convolutional network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_cuda.py</span></code>)</p></li>
-<li><p><strong>00:00.176</strong>: <a class="reference internal" href="tune_relay_x86.html#sphx-glr-tutorials-autotvm-tune-relay-x86-py"><span class="std std-ref">Auto-tuning a convolutional network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_x86.py</span></code>)</p></li>
-<li><p><strong>00:00.163</strong>: <a class="reference internal" href="tune_relay_mobile_gpu.html#sphx-glr-tutorials-autotvm-tune-relay-mobile-gpu-py"><span class="std std-ref">Auto-tuning a convolutional network for Mobile GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_mobile_gpu.py</span></code>)</p></li>
-<li><p><strong>00:00.162</strong>: <a class="reference internal" href="tune_relay_arm.html#sphx-glr-tutorials-autotvm-tune-relay-arm-py"><span class="std std-ref">Auto-tuning a convolutional network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_arm.py</span></code>)</p></li>
+<li><p><strong>00:35.925</strong>: <a class="reference internal" href="tune_conv2d_cuda.html#sphx-glr-tutorials-autotvm-tune-conv2d-cuda-py"><span class="std std-ref">Tuning High Performance Convolution on NVIDIA GPUs</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_cuda.py</span></code>)</p></li>
+<li><p><strong>00:30.897</strong>: <a class="reference internal" href="tune_simple_template.html#sphx-glr-tutorials-autotvm-tune-simple-template-py"><span class="std std-ref">Writing Tunable Templates and Using the Auto-tuner</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_simple_template.py</span></code>)</p></li>
+<li><p><strong>00:00.204</strong>: <a class="reference internal" href="tune_relay_cuda.html#sphx-glr-tutorials-autotvm-tune-relay-cuda-py"><span class="std std-ref">Auto-tuning a Convolutional Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_cuda.py</span></code>)</p></li>
+<li><p><strong>00:00.169</strong>: <a class="reference internal" href="tune_relay_x86.html#sphx-glr-tutorials-autotvm-tune-relay-x86-py"><span class="std std-ref">Auto-tuning a Convolutional Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_x86.py</span></code>)</p></li>
+<li><p><strong>00:00.160</strong>: <a class="reference internal" href="tune_relay_arm.html#sphx-glr-tutorials-autotvm-tune-relay-arm-py"><span class="std std-ref">Auto-tuning a Convolutional Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_arm.py</span></code>)</p></li>
+<li><p><strong>00:00.159</strong>: <a class="reference internal" href="tune_relay_mobile_gpu.html#sphx-glr-tutorials-autotvm-tune-relay-mobile-gpu-py"><span class="std std-ref">Auto-tuning a Convolutional Network for Mobile GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_mobile_gpu.py</span></code>)</p></li>
 </ul>
 </div>
 
diff --git a/docs/tutorials/autotvm/tune_conv2d_cuda.html b/docs/tutorials/autotvm/tune_conv2d_cuda.html
index f3df1be..91b1399 100644
--- a/docs/tutorials/autotvm/tune_conv2d_cuda.html
+++ b/docs/tutorials/autotvm/tune_conv2d_cuda.html
@@ -44,8 +44,8 @@
     <script type="text/javascript" src="../../_static/js/tlcpack_theme.js"></script>
     <link rel="index" title="Index" href="../../genindex.html" />
     <link rel="search" title="Search" href="../../search.html" />
-    <link rel="next" title="Auto-tuning a convolutional network for NVIDIA GPU" href="tune_relay_cuda.html" />
-    <link rel="prev" title="Writing tunable template and Using auto-tuner" href="tune_simple_template.html" /> 
+    <link rel="next" title="Auto-tuning a Convolutional Network for NVIDIA GPU" href="tune_relay_cuda.html" />
+    <link rel="prev" title="Writing Tunable Templates and Using the Auto-tuner" href="tune_simple_template.html" /> 
 </head>
 
 <body class="wy-body-for-nav">
@@ -207,17 +207,17 @@
 <li class="toctree-l1"><a class="reference internal" href="../index.html#tensor-expression-and-schedules">Tensor Expression and Schedules</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../index.html#optimize-tensor-operators">Optimize Tensor Operators</a></li>
 <li class="toctree-l1 current"><a class="reference internal" href="../index.html#autotvm-template-based-auto-tuning">AutoTVM : Template-based Auto Tuning</a><ul class="current">
-<li class="toctree-l2"><a class="reference internal" href="tune_simple_template.html">Writing tunable template and Using auto-tuner</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_simple_template.html">Writing Tunable Templates and Using the Auto-tuner</a></li>
 <li class="toctree-l2 current"><a class="current reference internal" href="#">Tuning High Performance Convolution on NVIDIA GPUs</a><ul>
 <li class="toctree-l3"><a class="reference internal" href="#install-dependencies">Install dependencies</a></li>
 <li class="toctree-l3"><a class="reference internal" href="#step-1-define-the-search-space">Step 1:  Define the search space</a></li>
 <li class="toctree-l3"><a class="reference internal" href="#step-2-search-through-the-space">Step 2:  Search through the space</a></li>
 </ul>
 </li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_cuda.html">Auto-tuning a convolutional network for NVIDIA GPU</a></li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_x86.html">Auto-tuning a convolutional network for x86 CPU</a></li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_arm.html">Auto-tuning a convolutional network for ARM CPU</a></li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_mobile_gpu.html">Auto-tuning a convolutional network for Mobile GPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_cuda.html">Auto-tuning a Convolutional Network for NVIDIA GPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_x86.html">Auto-tuning a Convolutional Network for x86 CPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_arm.html">Auto-tuning a Convolutional Network for ARM CPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_mobile_gpu.html">Auto-tuning a Convolutional Network for Mobile GPU</a></li>
 </ul>
 </li>
 <li class="toctree-l1"><a class="reference internal" href="../index.html#autoscheduler-template-free-auto-scheduling">AutoScheduler : Template-free Auto Scheduling</a></li>
@@ -513,26 +513,26 @@ for this template</p>
    7 unroll_explicit: OtherOption([0, 1]) len=2
 )
 Get devices for measurement successfully!
-No: 1   GFLOPS: 226.07/226.07   result: MeasureResult(costs=(0.0010240150306122448,), error_no=0, all_cost=1.4391686916351318, timestamp=1605262522.444662)     [(&#39;tile_f&#39;, [-1, 2, 64, 1]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 2, 2]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,4881186
-No: 2   GFLOPS: 0.00/226.07     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
-No: 3   GFLOPS: 179.21/226.07   result: MeasureResult(costs=(0.0012917972661290324,), error_no=0, all_cost=1.6224138736724854, timestamp=1605262523.8775072)    [(&#39;tile_f&#39;, [-1, 4, 32, 1]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 1, 1]), (&#39;tile_rc&#39;, [-1, 1, 16]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,3605182
-No: 4   GFLOPS: 0.00/226.07     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
-No: 5   GFLOPS: 0.00/226.07     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
-No: 6   GFLOPS: 0.00/226.07     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
-No: 7   GFLOPS: 0.00/226.07     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
-No: 8   GFLOPS: 1.75/226.07     result: MeasureResult(costs=(0.13202702,), error_no=0, all_cost=3.336221933364868, timestamp=1605262527.2192101)        [(&#39;tile_f&#39;, [-1, 2, 4, 64]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 2, 1]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 0)],None,2716108
-No: 9   GFLOPS: 12.08/226.07    result: MeasureResult(costs=(0.019164146333333333,), error_no=0, all_cost=1.751448392868042, timestamp=1605262530.169132)       [(&#39;tile_f&#39;, [-1, 1, 4, 2]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 2, 8]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 0)],None,1263092
-No: 10  GFLOPS: 228.40/228.40   result: MeasureResult(costs=(0.0010135667474747475,), error_no=0, all_cost=1.4332818984985352, timestamp=1605262531.0474083)    [(&#39;tile_f&#39;, [-1, 1, 32, 4]), (&#39;tile_y&#39;, [-1, 1, 7, 1]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 16, 1]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 1)],None,8921130
-No: 11  GFLOPS: 0.00/228.40     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
-No: 12  GFLOPS: 120.00/228.40   result: MeasureResult(costs=(0.0019292541346153846,), error_no=0, all_cost=1.344985008239746, timestamp=1605262532.1955059)     [(&#39;tile_f&#39;, [-1, 2, 32, 4]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 1, 1]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,5036371
-No: 13  GFLOPS: 0.00/228.40     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
-No: 14  GFLOPS: 0.00/228.40     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
-No: 15  GFLOPS: 82.26/228.40    result: MeasureResult(costs=(0.0028143660526315792,), error_no=0, all_cost=1.4765589237213135, timestamp=1605262533.614049)     [(&#39;tile_f&#39;, [-1, 1, 1, 4]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 1, 8]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,3582580
-No: 16  GFLOPS: 0.00/228.40     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
-No: 17  GFLOPS: 0.00/228.40     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
-No: 18  GFLOPS: 0.00/228.40     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
-No: 19  GFLOPS: 18.26/228.40    result: MeasureResult(costs=(0.012675726555555555,), error_no=0, all_cost=1.667898178100586, timestamp=1605262536.8822658)      [(&#39;tile_f&#39;, [-1, 8, 64, 1]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 2, 2]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,4107668
-No: 20  GFLOPS: 0.00/228.40     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f902ac24901]\n  [bt] (3) /workspace/build/libtvm.so(+0x6d54a7) [0x7f902a0564a7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7f902a0558be]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
+No: 1   GFLOPS: 309.56/309.56   result: MeasureResult(costs=(0.0007478322784810127,), error_no=0, all_cost=1.6631748676300049, timestamp=1605451988.403362)     [(&#39;tile_f&#39;, [-1, 2, 64, 1]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 2, 2]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,4881186
+No: 2   GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
+No: 3   GFLOPS: 180.09/309.56   result: MeasureResult(costs=(0.0012854856451612903,), error_no=0, all_cost=1.655426025390625, timestamp=1605451990.2436233)     [(&#39;tile_f&#39;, [-1, 4, 32, 1]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 1, 1]), (&#39;tile_rc&#39;, [-1, 1, 16]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,3605182
+No: 4   GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
+No: 5   GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
+No: 6   GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
+No: 7   GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
+No: 8   GFLOPS: 1.75/309.56     result: MeasureResult(costs=(0.1320260235,), error_no=0, all_cost=3.5821640491485596, timestamp=1605451994.7318807)     [(&#39;tile_f&#39;, [-1, 2, 4, 64]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 2, 1]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 0)],None,2716108
+No: 9   GFLOPS: 12.11/309.56    result: MeasureResult(costs=(0.019121657333333333,), error_no=0, all_cost=1.8620920181274414, timestamp=1605451998.4005475)     [(&#39;tile_f&#39;, [-1, 1, 4, 2]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 2, 8]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 0)],None,1263092
+No: 10  GFLOPS: 228.36/309.56   result: MeasureResult(costs=(0.0010137748686868686,), error_no=0, all_cost=1.599214792251587, timestamp=1605451999.5098352)     [(&#39;tile_f&#39;, [-1, 1, 32, 4]), (&#39;tile_y&#39;, [-1, 1, 7, 1]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 16, 1]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 1)],None,8921130
+No: 11  GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
+No: 12  GFLOPS: 120.00/309.56   result: MeasureResult(costs=(0.0019292001346153846,), error_no=0, all_cost=1.4698126316070557, timestamp=1605452001.1850023)    [(&#39;tile_f&#39;, [-1, 2, 32, 4]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 1, 1]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,5036371
+No: 13  GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
+No: 14  GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
+No: 15  GFLOPS: 70.72/309.56    result: MeasureResult(costs=(0.003273344657894737,), error_no=0, all_cost=1.608949899673462, timestamp=1605452003.2869494)      [(&#39;tile_f&#39;, [-1, 1, 1, 4]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 1, 8]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,3582580
+No: 16  GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
+No: 17  GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
+No: 18  GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
+No: 19  GFLOPS: 17.38/309.56    result: MeasureResult(costs=(0.013322496,), error_no=0, all_cost=1.7255771160125732, timestamp=1605452007.9485376)      [(&#39;tile_f&#39;, [-1, 8, 64, 1]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 2, 2]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,4107668
+No: 20  GFLOPS: 0.00/309.56     result: MeasureResult(costs=(InstantiationError(&#39;Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fed2105a3c1]\n  [bt] (3) /workspace/build/libtvm.so(+0x69fb87) [0x7fed2045ab87]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const+0x40e) [0x7fed20459f9e]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::Prim [...]
 </pre></div>
 </div>
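Editor's note: the per-trial log above (GFLOPS per configuration, plus InstantiationErrors for configurations that exceed hardware limits) is what an AutoTVM search over this template typically prints. A rough sketch of the driving loop, assuming the conv2d `task` object defined earlier in that tutorial and a log file named "conv2d.log":

    from tvm import autotvm

    measure_option = autotvm.measure_option(
        builder=autotvm.LocalBuilder(),
        runner=autotvm.LocalRunner(repeat=3, min_repeat_ms=100, timeout=4),
    )

    # Invalid configurations show up as "GFLOPS: 0.00" with an InstantiationError;
    # they are recorded but simply skipped when the best entry is selected later.
    tuner = autotvm.tuner.XGBTuner(task)
    tuner.tune(
        n_trial=20,
        measure_option=measure_option,
        callbacks=[autotvm.callback.log_to_file("conv2d.log")],
    )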
 <p>Finally we can inspect the best config from log file, check correctness,
@@ -570,8 +570,8 @@ and measure running time.</p>
 </div>
 <p class="sphx-glr-script-out">Out:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre>Best config:
-[(&#39;tile_f&#39;, [-1, 1, 32, 4]), (&#39;tile_y&#39;, [-1, 1, 7, 1]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 16, 1]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 1)],None,8921130
-Time cost of this operator: 0.001455
+[(&#39;tile_f&#39;, [-1, 2, 64, 1]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 2, 2]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,4881186
+Time cost of this operator: 0.001034
 </pre></div>
 </div>
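Editor's note: the "Best config" and "Time cost of this operator" lines above come from replaying the log: the fastest valid record is picked, the template is re-instantiated with it, and the resulting kernel is checked and timed. A sketch under the tutorial's assumptions (the template function, here called `conv2d_no_batching`, the workload parameters N/H/W/CO/CI/KH/KW/strides/padding, and the reference arrays `a_np`, `w_np`, `c_np` are all defined earlier in that tutorial):

    import numpy as np
    import tvm
    from tvm import autotvm

    # Pick the fastest valid record in the log and instantiate the template with it.
    with autotvm.apply_history_best("conv2d.log"):
        with tvm.target.Target("cuda"):
            s, arg_bufs = conv2d_no_batching(N, H, W, CO, CI, KH, KW, strides, padding)
            func = tvm.build(s, arg_bufs)

    # a_np, w_np, c_np: reference input/weight/output arrays computed earlier with numpy.
    ctx = tvm.gpu()
    a_tvm = tvm.nd.array(a_np, ctx=ctx)
    w_tvm = tvm.nd.array(w_np, ctx=ctx)
    c_tvm = tvm.nd.empty(c_np.shape, ctx=ctx)
    func(a_tvm, w_tvm, c_tvm)
    np.testing.assert_allclose(c_np, c_tvm.asnumpy(), rtol=1e-2)

    # Timing the compiled kernel gives the "Time cost of this operator" line above.
    evaluator = func.time_evaluator(func.entry_name, ctx, number=400)
    print("Time cost of this operator: %f" % evaluator(a_tvm, w_tvm, c_tvm).mean)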
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorials-autotvm-tune-conv2d-cuda-py">
@@ -596,10 +596,10 @@ Time cost of this operator: 0.001455
 
     <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
       
-        <a href="tune_relay_cuda.html" class="btn btn-neutral float-right" title="Auto-tuning a convolutional network for NVIDIA GPU" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
+        <a href="tune_relay_cuda.html" class="btn btn-neutral float-right" title="Auto-tuning a Convolutional Network for NVIDIA GPU" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
       
       
-        <a href="tune_simple_template.html" class="btn btn-neutral float-left" title="Writing tunable template and Using auto-tuner" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
+        <a href="tune_simple_template.html" class="btn btn-neutral float-left" title="Writing Tunable Templates and Using the Auto-tuner" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
       
     </div>
 
diff --git a/docs/tutorials/autotvm/tune_relay_arm.html b/docs/tutorials/autotvm/tune_relay_arm.html
index c95b563..59c69ab 100644
--- a/docs/tutorials/autotvm/tune_relay_arm.html
+++ b/docs/tutorials/autotvm/tune_relay_arm.html
@@ -11,7 +11,7 @@
   
   <meta name="viewport" content="width=device-width, initial-scale=1.0">
   
-  <title>Auto-tuning a convolutional network for ARM CPU &mdash; tvm 0.8.dev0 documentation</title>
+  <title>Auto-tuning a Convolutional Network for ARM CPU &mdash; tvm 0.8.dev0 documentation</title>
   
 
   
@@ -44,8 +44,8 @@
     <script type="text/javascript" src="../../_static/js/tlcpack_theme.js"></script>
     <link rel="index" title="Index" href="../../genindex.html" />
     <link rel="search" title="Search" href="../../search.html" />
-    <link rel="next" title="Auto-tuning a convolutional network for Mobile GPU" href="tune_relay_mobile_gpu.html" />
-    <link rel="prev" title="Auto-tuning a convolutional network for x86 CPU" href="tune_relay_x86.html" /> 
+    <link rel="next" title="Auto-tuning a Convolutional Network for Mobile GPU" href="tune_relay_mobile_gpu.html" />
+    <link rel="prev" title="Auto-tuning a Convolutional Network for x86 CPU" href="tune_relay_x86.html" /> 
 </head>
 
 <body class="wy-body-for-nav">
@@ -207,11 +207,11 @@
 <li class="toctree-l1"><a class="reference internal" href="../index.html#tensor-expression-and-schedules">Tensor Expression and Schedules</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../index.html#optimize-tensor-operators">Optimize Tensor Operators</a></li>
 <li class="toctree-l1 current"><a class="reference internal" href="../index.html#autotvm-template-based-auto-tuning">AutoTVM : Template-based Auto Tuning</a><ul class="current">
-<li class="toctree-l2"><a class="reference internal" href="tune_simple_template.html">Writing tunable template and Using auto-tuner</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_simple_template.html">Writing Tunable Templates and Using the Auto-tuner</a></li>
 <li class="toctree-l2"><a class="reference internal" href="tune_conv2d_cuda.html">Tuning High Performance Convolution on NVIDIA GPUs</a></li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_cuda.html">Auto-tuning a convolutional network for NVIDIA GPU</a></li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_x86.html">Auto-tuning a convolutional network for x86 CPU</a></li>
-<li class="toctree-l2 current"><a class="current reference internal" href="#">Auto-tuning a convolutional network for ARM CPU</a><ul>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_cuda.html">Auto-tuning a Convolutional Network for NVIDIA GPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_x86.html">Auto-tuning a Convolutional Network for x86 CPU</a></li>
+<li class="toctree-l2 current"><a class="current reference internal" href="#">Auto-tuning a Convolutional Network for ARM CPU</a><ul>
 <li class="toctree-l3"><a class="reference internal" href="#install-dependencies">Install dependencies</a></li>
 <li class="toctree-l3"><a class="reference internal" href="#define-network">Define network</a></li>
 <li class="toctree-l3"><a class="reference internal" href="#start-rpc-tracker">Start RPC Tracker</a></li>
@@ -221,7 +221,7 @@
 <li class="toctree-l3"><a class="reference internal" href="#sample-output">Sample Output</a></li>
 </ul>
 </li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_mobile_gpu.html">Auto-tuning a convolutional network for Mobile GPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_mobile_gpu.html">Auto-tuning a Convolutional Network for Mobile GPU</a></li>
 </ul>
 </li>
 <li class="toctree-l1"><a class="reference internal" href="../index.html#autoscheduler-template-free-auto-scheduling">AutoScheduler : Template-free Auto Scheduling</a></li>
@@ -304,7 +304,7 @@
         
           <li><a href="../index.html">Get Started Tutorials</a> <span class="br-arrow">></span></li>
         
-      <li>Auto-tuning a convolutional network for ARM CPU</li>
+      <li>Auto-tuning a Convolutional Network for ARM CPU</li>
     
     
       <li class="wy-breadcrumbs-aside">
@@ -328,7 +328,7 @@
 <p>Click <a class="reference internal" href="#sphx-glr-download-tutorials-autotvm-tune-relay-arm-py"><span class="std std-ref">here</span></a> to download the full example code</p>
 </div>
 <div class="sphx-glr-example-title section" id="auto-tuning-a-convolutional-network-for-arm-cpu">
-<span id="tune-relay-arm"></span><span id="sphx-glr-tutorials-autotvm-tune-relay-arm-py"></span><h1>Auto-tuning a convolutional network for ARM CPU<a class="headerlink" href="#auto-tuning-a-convolutional-network-for-arm-cpu" title="Permalink to this headline">¶</a></h1>
+<span id="tune-relay-arm"></span><span id="sphx-glr-tutorials-autotvm-tune-relay-arm-py"></span><h1>Auto-tuning a Convolutional Network for ARM CPU<a class="headerlink" href="#auto-tuning-a-convolutional-network-for-arm-cpu" title="Permalink to this headline">¶</a></h1>
 <p><strong>Author</strong>: <a class="reference external" href="https://github.com/merrymercy">Lianmin Zheng</a>, <a class="reference external" href="https://github.com/FrozenGene">Zhao Wu</a>, <a class="reference external" href="https://github.com/eqy">Eddie Yan</a></p>
 <p>Auto-tuning for a specific ARM device is critical for getting the best
 performance. This is a tutorial about how to tune a whole convolutional
@@ -711,10 +711,10 @@ error messages.</p>
 
     <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
       
-        <a href="tune_relay_mobile_gpu.html" class="btn btn-neutral float-right" title="Auto-tuning a convolutional network for Mobile GPU" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
+        <a href="tune_relay_mobile_gpu.html" class="btn btn-neutral float-right" title="Auto-tuning a Convolutional Network for Mobile GPU" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
       
       
-        <a href="tune_relay_x86.html" class="btn btn-neutral float-left" title="Auto-tuning a convolutional network for x86 CPU" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
+        <a href="tune_relay_x86.html" class="btn btn-neutral float-left" title="Auto-tuning a Convolutional Network for x86 CPU" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
       
     </div>
 
diff --git a/docs/tutorials/autotvm/tune_relay_cuda.html b/docs/tutorials/autotvm/tune_relay_cuda.html
index 7308907..ce201ea 100644
--- a/docs/tutorials/autotvm/tune_relay_cuda.html
+++ b/docs/tutorials/autotvm/tune_relay_cuda.html
@@ -11,7 +11,7 @@
   
   <meta name="viewport" content="width=device-width, initial-scale=1.0">
   
-  <title>Auto-tuning a convolutional network for NVIDIA GPU &mdash; tvm 0.8.dev0 documentation</title>
+  <title>Auto-tuning a Convolutional Network for NVIDIA GPU &mdash; tvm 0.8.dev0 documentation</title>
   
 
   
@@ -44,7 +44,7 @@
     <script type="text/javascript" src="../../_static/js/tlcpack_theme.js"></script>
     <link rel="index" title="Index" href="../../genindex.html" />
     <link rel="search" title="Search" href="../../search.html" />
-    <link rel="next" title="Auto-tuning a convolutional network for x86 CPU" href="tune_relay_x86.html" />
+    <link rel="next" title="Auto-tuning a Convolutional Network for x86 CPU" href="tune_relay_x86.html" />
     <link rel="prev" title="Tuning High Performance Convolution on NVIDIA GPUs" href="tune_conv2d_cuda.html" /> 
 </head>
 
@@ -207,9 +207,9 @@
 <li class="toctree-l1"><a class="reference internal" href="../index.html#tensor-expression-and-schedules">Tensor Expression and Schedules</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../index.html#optimize-tensor-operators">Optimize Tensor Operators</a></li>
 <li class="toctree-l1 current"><a class="reference internal" href="../index.html#autotvm-template-based-auto-tuning">AutoTVM : Template-based Auto Tuning</a><ul class="current">
-<li class="toctree-l2"><a class="reference internal" href="tune_simple_template.html">Writing tunable template and Using auto-tuner</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_simple_template.html">Writing Tunable Templates and Using the Auto-tuner</a></li>
 <li class="toctree-l2"><a class="reference internal" href="tune_conv2d_cuda.html">Tuning High Performance Convolution on NVIDIA GPUs</a></li>
-<li class="toctree-l2 current"><a class="current reference internal" href="#">Auto-tuning a convolutional network for NVIDIA GPU</a><ul>
+<li class="toctree-l2 current"><a class="current reference internal" href="#">Auto-tuning a Convolutional Network for NVIDIA GPU</a><ul>
 <li class="toctree-l3"><a class="reference internal" href="#install-dependencies">Install dependencies</a></li>
 <li class="toctree-l3"><a class="reference internal" href="#define-network">Define Network</a></li>
 <li class="toctree-l3"><a class="reference internal" href="#set-tuning-options">Set Tuning Options</a></li>
@@ -218,9 +218,9 @@
 <li class="toctree-l3"><a class="reference internal" href="#scale-up-measurement-by-using-multiple-devices">Scale up measurement by using multiple devices</a></li>
 </ul>
 </li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_x86.html">Auto-tuning a convolutional network for x86 CPU</a></li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_arm.html">Auto-tuning a convolutional network for ARM CPU</a></li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_mobile_gpu.html">Auto-tuning a convolutional network for Mobile GPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_x86.html">Auto-tuning a Convolutional Network for x86 CPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_arm.html">Auto-tuning a Convolutional Network for ARM CPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_mobile_gpu.html">Auto-tuning a Convolutional Network for Mobile GPU</a></li>
 </ul>
 </li>
 <li class="toctree-l1"><a class="reference internal" href="../index.html#autoscheduler-template-free-auto-scheduling">AutoScheduler : Template-free Auto Scheduling</a></li>
@@ -303,7 +303,7 @@
         
           <li><a href="../index.html">Get Started Tutorials</a> <span class="br-arrow">></span></li>
         
-      <li>Auto-tuning a convolutional network for NVIDIA GPU</li>
+      <li>Auto-tuning a Convolutional Network for NVIDIA GPU</li>
     
     
       <li class="wy-breadcrumbs-aside">
@@ -327,7 +327,7 @@
 <p>Click <a class="reference internal" href="#sphx-glr-download-tutorials-autotvm-tune-relay-cuda-py"><span class="std std-ref">here</span></a> to download the full example code</p>
 </div>
 <div class="sphx-glr-example-title section" id="auto-tuning-a-convolutional-network-for-nvidia-gpu">
-<span id="sphx-glr-tutorials-autotvm-tune-relay-cuda-py"></span><h1>Auto-tuning a convolutional network for NVIDIA GPU<a class="headerlink" href="#auto-tuning-a-convolutional-network-for-nvidia-gpu" title="Permalink to this headline">¶</a></h1>
+<span id="sphx-glr-tutorials-autotvm-tune-relay-cuda-py"></span><h1>Auto-tuning a Convolutional Network for NVIDIA GPU<a class="headerlink" href="#auto-tuning-a-convolutional-network-for-nvidia-gpu" title="Permalink to this headline">¶</a></h1>
 <p><strong>Author</strong>: <a class="reference external" href="https://github.com/merrymercy">Lianmin Zheng</a>, <a class="reference external" href="https://github.com/eqy/">Eddie Yan</a></p>
 <p>Auto-tuning for specific devices and workloads is critical for getting the
 best performance. This is a tutorial on how to tune a whole convolutional
@@ -680,7 +680,7 @@ to replace the corresponding part above.</p>
 
     <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
       
-        <a href="tune_relay_x86.html" class="btn btn-neutral float-right" title="Auto-tuning a convolutional network for x86 CPU" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
+        <a href="tune_relay_x86.html" class="btn btn-neutral float-right" title="Auto-tuning a Convolutional Network for x86 CPU" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
       
       
         <a href="tune_conv2d_cuda.html" class="btn btn-neutral float-left" title="Tuning High Performance Convolution on NVIDIA GPUs" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
diff --git a/docs/tutorials/autotvm/tune_relay_mobile_gpu.html b/docs/tutorials/autotvm/tune_relay_mobile_gpu.html
index a19e25a..aaa475d 100644
--- a/docs/tutorials/autotvm/tune_relay_mobile_gpu.html
+++ b/docs/tutorials/autotvm/tune_relay_mobile_gpu.html
@@ -11,7 +11,7 @@
   
   <meta name="viewport" content="width=device-width, initial-scale=1.0">
   
-  <title>Auto-tuning a convolutional network for Mobile GPU &mdash; tvm 0.8.dev0 documentation</title>
+  <title>Auto-tuning a Convolutional Network for Mobile GPU &mdash; tvm 0.8.dev0 documentation</title>
   
 
   
@@ -44,8 +44,8 @@
     <script type="text/javascript" src="../../_static/js/tlcpack_theme.js"></script>
     <link rel="index" title="Index" href="../../genindex.html" />
     <link rel="search" title="Search" href="../../search.html" />
-    <link rel="next" title="Auto-scheduling matrix multiplication for CPU" href="../auto_scheduler/tune_matmul_x86.html" />
-    <link rel="prev" title="Auto-tuning a convolutional network for ARM CPU" href="tune_relay_arm.html" /> 
+    <link rel="next" title="Auto-scheduling Matrix Multiplication for CPU" href="../auto_scheduler/tune_matmul_x86.html" />
+    <link rel="prev" title="Auto-tuning a Convolutional Network for ARM CPU" href="tune_relay_arm.html" /> 
 </head>
 
 <body class="wy-body-for-nav">
@@ -207,12 +207,12 @@
 <li class="toctree-l1"><a class="reference internal" href="../index.html#tensor-expression-and-schedules">Tensor Expression and Schedules</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../index.html#optimize-tensor-operators">Optimize Tensor Operators</a></li>
 <li class="toctree-l1 current"><a class="reference internal" href="../index.html#autotvm-template-based-auto-tuning">AutoTVM : Template-based Auto Tuning</a><ul class="current">
-<li class="toctree-l2"><a class="reference internal" href="tune_simple_template.html">Writing tunable template and Using auto-tuner</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_simple_template.html">Writing Tunable Templates and Using the Auto-tuner</a></li>
 <li class="toctree-l2"><a class="reference internal" href="tune_conv2d_cuda.html">Tuning High Performance Convolution on NVIDIA GPUs</a></li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_cuda.html">Auto-tuning a convolutional network for NVIDIA GPU</a></li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_x86.html">Auto-tuning a convolutional network for x86 CPU</a></li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_arm.html">Auto-tuning a convolutional network for ARM CPU</a></li>
-<li class="toctree-l2 current"><a class="current reference internal" href="#">Auto-tuning a convolutional network for Mobile GPU</a><ul>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_cuda.html">Auto-tuning a Convolutional Network for NVIDIA GPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_x86.html">Auto-tuning a Convolutional Network for x86 CPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_arm.html">Auto-tuning a Convolutional Network for ARM CPU</a></li>
+<li class="toctree-l2 current"><a class="current reference internal" href="#">Auto-tuning a Convolutional Network for Mobile GPU</a><ul>
 <li class="toctree-l3"><a class="reference internal" href="#install-dependencies">Install dependencies</a></li>
 <li class="toctree-l3"><a class="reference internal" href="#define-network">Define network</a></li>
 <li class="toctree-l3"><a class="reference internal" href="#start-rpc-tracker">Start RPC Tracker</a></li>
@@ -304,7 +304,7 @@
         
           <li><a href="../index.html">Get Started Tutorials</a> <span class="br-arrow">></span></li>
         
-      <li>Auto-tuning a convolutional network for Mobile GPU</li>
+      <li>Auto-tuning a Convolutional Network for Mobile GPU</li>
     
     
       <li class="wy-breadcrumbs-aside">
@@ -328,7 +328,7 @@
 <p>Click <a class="reference internal" href="#sphx-glr-download-tutorials-autotvm-tune-relay-mobile-gpu-py"><span class="std std-ref">here</span></a> to download the full example code</p>
 </div>
 <div class="sphx-glr-example-title section" id="auto-tuning-a-convolutional-network-for-mobile-gpu">
-<span id="sphx-glr-tutorials-autotvm-tune-relay-mobile-gpu-py"></span><h1>Auto-tuning a convolutional network for Mobile GPU<a class="headerlink" href="#auto-tuning-a-convolutional-network-for-mobile-gpu" title="Permalink to this headline">¶</a></h1>
+<span id="sphx-glr-tutorials-autotvm-tune-relay-mobile-gpu-py"></span><h1>Auto-tuning a Convolutional Network for Mobile GPU<a class="headerlink" href="#auto-tuning-a-convolutional-network-for-mobile-gpu" title="Permalink to this headline">¶</a></h1>
 <p><strong>Author</strong>: <a class="reference external" href="https://github.com/merrymercy">Lianmin Zheng</a>, <a class="reference external" href="https://github.com/eqy">Eddie Yan</a></p>
 <p>Auto-tuning for a specific device is critical for getting the best
 performance. This is a tutorial about how to tune a whole convolutional
@@ -717,10 +717,10 @@ error messages.</p>
 
     <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
       
-        <a href="../auto_scheduler/tune_matmul_x86.html" class="btn btn-neutral float-right" title="Auto-scheduling matrix multiplication for CPU" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
+        <a href="../auto_scheduler/tune_matmul_x86.html" class="btn btn-neutral float-right" title="Auto-scheduling Matrix Multiplication for CPU" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
       
       
-        <a href="tune_relay_arm.html" class="btn btn-neutral float-left" title="Auto-tuning a convolutional network for ARM CPU" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
+        <a href="tune_relay_arm.html" class="btn btn-neutral float-left" title="Auto-tuning a Convolutional Network for ARM CPU" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
       
     </div>
 
diff --git a/docs/tutorials/autotvm/tune_relay_x86.html b/docs/tutorials/autotvm/tune_relay_x86.html
index 7c92251..1e30a48 100644
--- a/docs/tutorials/autotvm/tune_relay_x86.html
+++ b/docs/tutorials/autotvm/tune_relay_x86.html
@@ -11,7 +11,7 @@
   
   <meta name="viewport" content="width=device-width, initial-scale=1.0">
   
-  <title>Auto-tuning a convolutional network for x86 CPU &mdash; tvm 0.8.dev0 documentation</title>
+  <title>Auto-tuning a Convolutional Network for x86 CPU &mdash; tvm 0.8.dev0 documentation</title>
   
 
   
@@ -44,8 +44,8 @@
     <script type="text/javascript" src="../../_static/js/tlcpack_theme.js"></script>
     <link rel="index" title="Index" href="../../genindex.html" />
     <link rel="search" title="Search" href="../../search.html" />
-    <link rel="next" title="Auto-tuning a convolutional network for ARM CPU" href="tune_relay_arm.html" />
-    <link rel="prev" title="Auto-tuning a convolutional network for NVIDIA GPU" href="tune_relay_cuda.html" /> 
+    <link rel="next" title="Auto-tuning a Convolutional Network for ARM CPU" href="tune_relay_arm.html" />
+    <link rel="prev" title="Auto-tuning a Convolutional Network for NVIDIA GPU" href="tune_relay_cuda.html" /> 
 </head>
 
 <body class="wy-body-for-nav">
@@ -207,17 +207,17 @@
 <li class="toctree-l1"><a class="reference internal" href="../index.html#tensor-expression-and-schedules">Tensor Expression and Schedules</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../index.html#optimize-tensor-operators">Optimize Tensor Operators</a></li>
 <li class="toctree-l1 current"><a class="reference internal" href="../index.html#autotvm-template-based-auto-tuning">AutoTVM : Template-based Auto Tuning</a><ul class="current">
-<li class="toctree-l2"><a class="reference internal" href="tune_simple_template.html">Writing tunable template and Using auto-tuner</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_simple_template.html">Writing Tunable Templates and Using the Auto-tuner</a></li>
 <li class="toctree-l2"><a class="reference internal" href="tune_conv2d_cuda.html">Tuning High Performance Convolution on NVIDIA GPUs</a></li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_cuda.html">Auto-tuning a convolutional network for NVIDIA GPU</a></li>
-<li class="toctree-l2 current"><a class="current reference internal" href="#">Auto-tuning a convolutional network for x86 CPU</a><ul>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_cuda.html">Auto-tuning a Convolutional Network for NVIDIA GPU</a></li>
+<li class="toctree-l2 current"><a class="current reference internal" href="#">Auto-tuning a Convolutional Network for x86 CPU</a><ul>
 <li class="toctree-l3"><a class="reference internal" href="#define-network">Define network</a></li>
 <li class="toctree-l3"><a class="reference internal" href="#configure-tensor-tuning-settings-and-create-tasks">Configure tensor tuning settings and create tasks</a></li>
 <li class="toctree-l3"><a class="reference internal" href="#sample-output">Sample Output</a></li>
 </ul>
 </li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_arm.html">Auto-tuning a convolutional network for ARM CPU</a></li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_mobile_gpu.html">Auto-tuning a convolutional network for Mobile GPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_arm.html">Auto-tuning a Convolutional Network for ARM CPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_mobile_gpu.html">Auto-tuning a Convolutional Network for Mobile GPU</a></li>
 </ul>
 </li>
 <li class="toctree-l1"><a class="reference internal" href="../index.html#autoscheduler-template-free-auto-scheduling">AutoScheduler : Template-free Auto Scheduling</a></li>
@@ -300,7 +300,7 @@
         
           <li><a href="../index.html">Get Started Tutorials</a> <span class="br-arrow">></span></li>
         
-      <li>Auto-tuning a convolutional network for x86 CPU</li>
+      <li>Auto-tuning a Convolutional Network for x86 CPU</li>
     
     
       <li class="wy-breadcrumbs-aside">
@@ -324,7 +324,7 @@
 <p>Click <a class="reference internal" href="#sphx-glr-download-tutorials-autotvm-tune-relay-x86-py"><span class="std std-ref">here</span></a> to download the full example code</p>
 </div>
 <div class="sphx-glr-example-title section" id="auto-tuning-a-convolutional-network-for-x86-cpu">
-<span id="tune-relay-x86"></span><span id="sphx-glr-tutorials-autotvm-tune-relay-x86-py"></span><h1>Auto-tuning a convolutional network for x86 CPU<a class="headerlink" href="#auto-tuning-a-convolutional-network-for-x86-cpu" title="Permalink to this headline">¶</a></h1>
+<span id="tune-relay-x86"></span><span id="sphx-glr-tutorials-autotvm-tune-relay-x86-py"></span><h1>Auto-tuning a Convolutional Network for x86 CPU<a class="headerlink" href="#auto-tuning-a-convolutional-network-for-x86-cpu" title="Permalink to this headline">¶</a></h1>
 <p><strong>Author</strong>: <a class="reference external" href="https://github.com/kevinthesun">Yao Wang</a>, <a class="reference external" href="https://github.com/eqy">Eddie Yan</a></p>
 <p>This is a tutorial about how to tune a convolutional neural network
 for x86 CPUs.</p>
@@ -423,7 +423,7 @@ conv2d_NCHWc operator in topi. We will tune this operator
 instead of plain conv2d.</p>
 <p>We will use local mode for the tuning configuration. The RPC tracker
 mode can be set up similarly to the approach in
-<a class="reference internal" href="tune_relay_arm.html#tune-relay-arm"><span class="std std-ref">Auto-tuning a convolutional network for ARM CPU</span></a> tutorial.</p>
+<a class="reference internal" href="tune_relay_arm.html#tune-relay-arm"><span class="std std-ref">Auto-tuning a Convolutional Network for ARM CPU</span></a> tutorial.</p>
 <p>To perform a precise measurement, we should repeat the measurement several
 times and use the average of the results. In addition, we need to flush the cache
 for the weight tensors between repeated measurements. This can make the measured
@@ -575,10 +575,10 @@ Mean inference <span class="nb">time</span> <span class="o">(</span>std dev<span
 
     <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
       
-        <a href="tune_relay_arm.html" class="btn btn-neutral float-right" title="Auto-tuning a convolutional network for ARM CPU" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
+        <a href="tune_relay_arm.html" class="btn btn-neutral float-right" title="Auto-tuning a Convolutional Network for ARM CPU" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
       
       
-        <a href="tune_relay_cuda.html" class="btn btn-neutral float-left" title="Auto-tuning a convolutional network for NVIDIA GPU" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
+        <a href="tune_relay_cuda.html" class="btn btn-neutral float-left" title="Auto-tuning a Convolutional Network for NVIDIA GPU" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
       
     </div>
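The tune_relay_x86 hunk above notes that precise measurements repeat each trial, average the results, and flush the CPU cache for the weight tensors between runs. A minimal sketch of such a local measurement setup, assuming TVM's autotvm API as of this 0.8.dev docs build (the parameter values are illustrative, and enable_cpu_cache_flush is assumed to be supported by LocalRunner in this version):

    from tvm import autotvm

    # Illustrative only: time each candidate several times, average the results,
    # and flush the CPU cache between runs so cached weight tensors do not make
    # the kernel look faster than it really is.
    measure_option = autotvm.measure_option(
        builder=autotvm.LocalBuilder(),
        runner=autotvm.LocalRunner(
            number=1,                      # runs per timed measurement
            repeat=10,                     # repeat and average
            min_repeat_ms=0,               # no lower bound on a single run
            enable_cpu_cache_flush=True,   # flush cache between repeated runs
        ),
    )

The resulting measure_option can then be passed to a tuner's tune() call, as the tutorial does.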
 
diff --git a/docs/tutorials/autotvm/tune_simple_template.html b/docs/tutorials/autotvm/tune_simple_template.html
index bbd1610..316b0fe 100644
--- a/docs/tutorials/autotvm/tune_simple_template.html
+++ b/docs/tutorials/autotvm/tune_simple_template.html
@@ -11,7 +11,7 @@
   
   <meta name="viewport" content="width=device-width, initial-scale=1.0">
   
-  <title>Writing tunable template and Using auto-tuner &mdash; tvm 0.8.dev0 documentation</title>
+  <title>Writing Tunable Templates and Using the Auto-tuner &mdash; tvm 0.8.dev0 documentation</title>
   
 
   
@@ -207,7 +207,7 @@
 <li class="toctree-l1"><a class="reference internal" href="../index.html#tensor-expression-and-schedules">Tensor Expression and Schedules</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../index.html#optimize-tensor-operators">Optimize Tensor Operators</a></li>
 <li class="toctree-l1 current"><a class="reference internal" href="../index.html#autotvm-template-based-auto-tuning">AutoTVM : Template-based Auto Tuning</a><ul class="current">
-<li class="toctree-l2 current"><a class="current reference internal" href="#">Writing tunable template and Using auto-tuner</a><ul>
+<li class="toctree-l2 current"><a class="current reference internal" href="#">Writing Tunable Templates and Using the Auto-tuner</a><ul>
 <li class="toctree-l3"><a class="reference internal" href="#install-dependencies">Install dependencies</a></li>
 <li class="toctree-l3"><a class="reference internal" href="#step-1-define-the-search-space">Step 1:  Define the search space</a><ul>
 <li class="toctree-l4"><a class="reference internal" href="#parametrize-the-schedule">Parametrize the schedule</a></li>
@@ -222,10 +222,10 @@
 </ul>
 </li>
 <li class="toctree-l2"><a class="reference internal" href="tune_conv2d_cuda.html">Tuning High Performance Convolution on NVIDIA GPUs</a></li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_cuda.html">Auto-tuning a convolutional network for NVIDIA GPU</a></li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_x86.html">Auto-tuning a convolutional network for x86 CPU</a></li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_arm.html">Auto-tuning a convolutional network for ARM CPU</a></li>
-<li class="toctree-l2"><a class="reference internal" href="tune_relay_mobile_gpu.html">Auto-tuning a convolutional network for Mobile GPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_cuda.html">Auto-tuning a Convolutional Network for NVIDIA GPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_x86.html">Auto-tuning a Convolutional Network for x86 CPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_arm.html">Auto-tuning a Convolutional Network for ARM CPU</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tune_relay_mobile_gpu.html">Auto-tuning a Convolutional Network for Mobile GPU</a></li>
 </ul>
 </li>
 <li class="toctree-l1"><a class="reference internal" href="../index.html#autoscheduler-template-free-auto-scheduling">AutoScheduler : Template-free Auto Scheduling</a></li>
@@ -308,7 +308,7 @@
         
           <li><a href="../index.html">Get Started Tutorials</a> <span class="br-arrow">></span></li>
         
-      <li>Writing tunable template and Using auto-tuner</li>
+      <li>Writing Tunable Templates and Using the Auto-tuner</li>
     
     
       <li class="wy-breadcrumbs-aside">
@@ -331,8 +331,8 @@
 <p class="admonition-title">Note</p>
 <p>Click <a class="reference internal" href="#sphx-glr-download-tutorials-autotvm-tune-simple-template-py"><span class="std std-ref">here</span></a> to download the full example code</p>
 </div>
-<div class="sphx-glr-example-title section" id="writing-tunable-template-and-using-auto-tuner">
-<span id="sphx-glr-tutorials-autotvm-tune-simple-template-py"></span><h1>Writing tunable template and Using auto-tuner<a class="headerlink" href="#writing-tunable-template-and-using-auto-tuner" title="Permalink to this headline">¶</a></h1>
+<div class="sphx-glr-example-title section" id="writing-tunable-templates-and-using-the-auto-tuner">
+<span id="sphx-glr-tutorials-autotvm-tune-simple-template-py"></span><h1>Writing Tunable Templates and Using the Auto-tuner<a class="headerlink" href="#writing-tunable-templates-and-using-the-auto-tuner" title="Permalink to this headline">¶</a></h1>
 <p><strong>Author</strong>: <a class="reference external" href="https://github.com/merrymercy">Lianmin Zheng</a></p>
 <p>This is an introductory tutorial to the auto-tuning module in TVM.</p>
 <p>There are two steps in auto-tuning.
@@ -607,16 +607,16 @@ used to get the best config later.</p>
 </div>
 <p class="sphx-glr-script-out">Out:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre>Get devices for measurement successfully!
-No: 1   GFLOPS: 0.52/0.52       result: MeasureResult(costs=(0.519133092,), error_no=0, all_cost=8.710088014602661, timestamp=1605262499.8931446)       [(&#39;tile_y&#39;, [-1, 64]), (&#39;tile_x&#39;, [-1, 1])],None,6
-No: 2   GFLOPS: 2.19/2.19       result: MeasureResult(costs=(0.122798191,), error_no=0, all_cost=2.4234249591827393, timestamp=1605262502.3358595)      [(&#39;tile_y&#39;, [-1, 512]), (&#39;tile_x&#39;, [-1, 8])],None,39
-No: 3   GFLOPS: 2.68/2.68       result: MeasureResult(costs=(0.1002148718,), error_no=0, all_cost=2.024742603302002, timestamp=1605262504.4139025)      [(&#39;tile_y&#39;, [-1, 2]), (&#39;tile_x&#39;, [-1, 8])],None,31
-No: 4   GFLOPS: 7.24/7.24       result: MeasureResult(costs=(0.0370866816,), error_no=0, all_cost=1.0611913204193115, timestamp=1605262505.483117)      [(&#39;tile_y&#39;, [-1, 1]), (&#39;tile_x&#39;, [-1, 32])],None,50
-No: 5   GFLOPS: 13.37/13.37     result: MeasureResult(costs=(0.020077077,), error_no=0, all_cost=0.7708723545074463, timestamp=1605262506.2793317)      [(&#39;tile_y&#39;, [-1, 256]), (&#39;tile_x&#39;, [-1, 64])],None,68
-No: 6   GFLOPS: 12.17/13.37     result: MeasureResult(costs=(0.0220493612,), error_no=0, all_cost=0.7993049621582031, timestamp=1605262507.1112614)     [(&#39;tile_y&#39;, [-1, 256]), (&#39;tile_x&#39;, [-1, 512])],None,98
-No: 7   GFLOPS: 0.92/13.37      result: MeasureResult(costs=(0.29137312579999997,), error_no=0, all_cost=5.066913843154907, timestamp=1605262512.2570298)       [(&#39;tile_y&#39;, [-1, 128]), (&#39;tile_x&#39;, [-1, 2])],None,17
-No: 8   GFLOPS: 2.61/13.37      result: MeasureResult(costs=(0.102951418,), error_no=0, all_cost=2.0490610599517822, timestamp=1605262514.3929913)      [(&#39;tile_y&#39;, [-1, 8]), (&#39;tile_x&#39;, [-1, 4])],None,23
-No: 9   GFLOPS: 11.68/13.37     result: MeasureResult(costs=(0.0229774654,), error_no=0, all_cost=0.7303047180175781, timestamp=1605262515.9335515)     [(&#39;tile_y&#39;, [-1, 256]), (&#39;tile_x&#39;, [-1, 32])],None,58
-No: 10  GFLOPS: 14.79/14.79     result: MeasureResult(costs=(0.018150249,), error_no=0, all_cost=0.760230541229248, timestamp=1605262516.7134416)       [(&#39;tile_y&#39;, [-1, 64]), (&#39;tile_x&#39;, [-1, 128])],None,76
+No: 1   GFLOPS: 0.52/0.52       result: MeasureResult(costs=(0.5180510666,), error_no=0, all_cost=8.781435012817383, timestamp=1605451963.079987)       [(&#39;tile_y&#39;, [-1, 64]), (&#39;tile_x&#39;, [-1, 1])],None,6
+No: 2   GFLOPS: 2.14/2.14       result: MeasureResult(costs=(0.125252425,), error_no=0, all_cost=2.5115718841552734, timestamp=1605451965.740256)       [(&#39;tile_y&#39;, [-1, 512]), (&#39;tile_x&#39;, [-1, 8])],None,39
+No: 3   GFLOPS: 2.71/2.71       result: MeasureResult(costs=(0.099166239,), error_no=0, all_cost=2.10246205329895, timestamp=1605451967.9825048)        [(&#39;tile_y&#39;, [-1, 2]), (&#39;tile_x&#39;, [-1, 8])],None,31
+No: 4   GFLOPS: 7.80/7.80       result: MeasureResult(costs=(0.0344016246,), error_no=0, all_cost=1.0503571033477783, timestamp=1605451969.1972082)     [(&#39;tile_y&#39;, [-1, 1]), (&#39;tile_x&#39;, [-1, 32])],None,50
+No: 5   GFLOPS: 13.09/13.09     result: MeasureResult(costs=(0.020505473599999997,), error_no=0, all_cost=0.8432226181030273, timestamp=1605451970.1748426)     [(&#39;tile_y&#39;, [-1, 256]), (&#39;tile_x&#39;, [-1, 64])],None,68
+No: 6   GFLOPS: 12.21/13.09     result: MeasureResult(costs=(0.0219806834,), error_no=0, all_cost=0.8422861099243164, timestamp=1605451971.1708436)     [(&#39;tile_y&#39;, [-1, 256]), (&#39;tile_x&#39;, [-1, 512])],None,98
+No: 7   GFLOPS: 0.92/13.09      result: MeasureResult(costs=(0.29195622499999996,), error_no=0, all_cost=5.095428705215454, timestamp=1605451976.5535822)       [(&#39;tile_y&#39;, [-1, 128]), (&#39;tile_x&#39;, [-1, 2])],None,17
+No: 8   GFLOPS: 2.40/13.09      result: MeasureResult(costs=(0.11178283959999999,), error_no=0, all_cost=2.2193515300750732, timestamp=1605451978.9914048)      [(&#39;tile_y&#39;, [-1, 8]), (&#39;tile_x&#39;, [-1, 4])],None,23
+No: 9   GFLOPS: 11.22/13.09     result: MeasureResult(costs=(0.0239272356,), error_no=0, all_cost=0.7559356689453125, timestamp=1605451980.961141)      [(&#39;tile_y&#39;, [-1, 256]), (&#39;tile_x&#39;, [-1, 32])],None,58
+No: 10  GFLOPS: 14.58/14.58     result: MeasureResult(costs=(0.0184163582,), error_no=0, all_cost=0.790762186050415, timestamp=1605451981.9231868)      [(&#39;tile_y&#39;, [-1, 64]), (&#39;tile_x&#39;, [-1, 128])],None,76
 </pre></div>
 </div>
 <p>Finally, we apply the history best from the cache file and check its correctness.
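A minimal Python sketch of the "apply history best" step the line above describes, assuming the matmul template, the shape variables N, L, M, and the log file name "matmul.log" defined earlier in this tutorial (none of them appear in this commit excerpt):

    import tvm
    from tvm import autotvm

    # Illustrative only: compile the operator with the best configuration that
    # the tuner recorded in the log file, then build it for the local CPU.
    with autotvm.apply_history_best("matmul.log"):
        with tvm.target.Target("llvm"):
            # matmul, N, L, M are the tunable template and shapes from the
            # tutorial above; they are assumed here, not defined in this snippet.
            s, arg_bufs = matmul(N, L, M, "float32")
            func = tvm.build(s, arg_bufs)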
diff --git a/docs/tutorials/dev/bring_your_own_datatypes.html b/docs/tutorials/dev/bring_your_own_datatypes.html
index e182767..232bb7b 100644
--- a/docs/tutorials/dev/bring_your_own_datatypes.html
+++ b/docs/tutorials/dev/bring_your_own_datatypes.html
@@ -45,7 +45,7 @@
     <link rel="index" title="Index" href="../../genindex.html" />
     <link rel="search" title="Search" href="../../search.html" />
     <link rel="next" title="Writing a Customized Pass" href="low_level_custom_pass.html" />
-    <link rel="prev" title="Auto-tuning a Neural Network for NVIDIA GPU" href="../auto_scheduler/tune_network_cuda.html" /> 
+    <link rel="prev" title="Auto-scheduling a Neural Network for NVIDIA GPU" href="../auto_scheduler/tune_network_cuda.html" /> 
 </head>
 
 <body class="wy-body-for-nav">
@@ -637,7 +637,7 @@ In this alpha state of the Bring Your Own Datatypes framework, we have not imple
 </pre></div>
 </div>
 <p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre>Check failed: lower == false: FloatImm lowering function for target llvm type 150 not found
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre>Check failed: lower == false: Intrinsic lowering function for target llvm, intrinsic name tir.sqrt, type 150 not found
 </pre></div>
 </div>
 <p>When we attempt to run the model, we get a familiar error telling us that more functions need to be registered for myfloat.</p>
@@ -759,7 +759,7 @@ where the minimum representable custom datatype value is implemented using calls
         <a href="low_level_custom_pass.html" class="btn btn-neutral float-right" title="Writing a Customized Pass" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
       
       
-        <a href="../auto_scheduler/tune_network_cuda.html" class="btn btn-neutral float-left" title="Auto-tuning a Neural Network for NVIDIA GPU" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
+        <a href="../auto_scheduler/tune_network_cuda.html" class="btn btn-neutral float-left" title="Auto-scheduling a Neural Network for NVIDIA GPU" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
       
     </div>
 
diff --git a/docs/tutorials/dev/sg_execution_times.html b/docs/tutorials/dev/sg_execution_times.html
index 91fc56a..5462783 100644
--- a/docs/tutorials/dev/sg_execution_times.html
+++ b/docs/tutorials/dev/sg_execution_times.html
@@ -304,11 +304,11 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-tutorials-dev-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:31.889</strong> total execution time for <strong>tutorials_dev</strong> files:</p>
+<p><strong>00:32.590</strong> total execution time for <strong>tutorials_dev</strong> files:</p>
 <ul class="simple">
-<li><p><strong>00:31.318</strong>: <a class="reference internal" href="bring_your_own_datatypes.html#sphx-glr-tutorials-dev-bring-your-own-datatypes-py"><span class="std std-ref">Bring Your Own Datatypes to TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">bring_your_own_datatypes.py</span></code>)</p></li>
-<li><p><strong>00:00.391</strong>: <a class="reference internal" href="use_pass_infra.html#sphx-glr-tutorials-dev-use-pass-infra-py"><span class="std std-ref">How to Use TVM Pass Infra</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_infra.py</span></code>)</p></li>
-<li><p><strong>00:00.180</strong>: <a class="reference internal" href="low_level_custom_pass.html#sphx-glr-tutorials-dev-low-level-custom-pass-py"><span class="std std-ref">Writing a Customized Pass</span></a> (<code class="docutils literal notranslate"><span class="pre">low_level_custom_pass.py</span></code>)</p></li>
+<li><p><strong>00:32.007</strong>: <a class="reference internal" href="bring_your_own_datatypes.html#sphx-glr-tutorials-dev-bring-your-own-datatypes-py"><span class="std std-ref">Bring Your Own Datatypes to TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">bring_your_own_datatypes.py</span></code>)</p></li>
+<li><p><strong>00:00.400</strong>: <a class="reference internal" href="use_pass_infra.html#sphx-glr-tutorials-dev-use-pass-infra-py"><span class="std std-ref">How to Use TVM Pass Infra</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_infra.py</span></code>)</p></li>
+<li><p><strong>00:00.184</strong>: <a class="reference internal" href="low_level_custom_pass.html#sphx-glr-tutorials-dev-low-level-custom-pass-py"><span class="std std-ref">Writing a Customized Pass</span></a> (<code class="docutils literal notranslate"><span class="pre">low_level_custom_pass.py</span></code>)</p></li>
 </ul>
 </div>
 
diff --git a/docs/tutorials/frontend/deploy_model_on_android.html b/docs/tutorials/frontend/deploy_model_on_android.html
index 238bf31..0232b8a 100644
--- a/docs/tutorials/frontend/deploy_model_on_android.html
+++ b/docs/tutorials/frontend/deploy_model_on_android.html
@@ -645,7 +645,7 @@ to the remote android device.</p>
 <p class="sphx-glr-script-out">Out:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre>TVM prediction top-1: tiger cat
 Evaluate inference time cost...
-Mean inference time (std dev): 5.41 ms (0.17 ms)
+Mean inference time (std dev): 5.88 ms (0.13 ms)
 </pre></div>
 </div>
 </div>
diff --git a/docs/tutorials/frontend/deploy_object_detection_pytorch.html b/docs/tutorials/frontend/deploy_object_detection_pytorch.html
index bf3ffc2..61bb530 100644
--- a/docs/tutorials/frontend/deploy_object_detection_pytorch.html
+++ b/docs/tutorials/frontend/deploy_object_detection_pytorch.html
@@ -499,7 +499,7 @@ torchvision rcnn models.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre>Get 9 valid boxes
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  5.919 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  5.731 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorials-frontend-deploy-object-detection-pytorch-py">
 <div class="sphx-glr-download docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/ec94e7a109437cf90cddcc60a7b5aaea/deploy_object_detection_pytorch.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_object_detection_pytorch.py</span></code></a></p>
diff --git a/docs/tutorials/frontend/deploy_prequantized.html b/docs/tutorials/frontend/deploy_prequantized.html
index b140236..91443bf 100644
--- a/docs/tutorials/frontend/deploy_prequantized.html
+++ b/docs/tutorials/frontend/deploy_prequantized.html
@@ -547,7 +547,7 @@ output values are identical out of 1000 outputs from mobilenet v2.</p>
 </pre></div>
 </div>
 <p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre>Elapsed average ms: 19.227042330000003
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre>Elapsed average ms: 20.06191538
 </pre></div>
 </div>
 <div class="admonition note">
diff --git a/docs/tutorials/frontend/deploy_prequantized_tflite.html b/docs/tutorials/frontend/deploy_prequantized_tflite.html
index 96e1cc1..0a1fd93 100644
--- a/docs/tutorials/frontend/deploy_prequantized_tflite.html
+++ b/docs/tutorials/frontend/deploy_prequantized_tflite.html
@@ -557,7 +557,7 @@ TFLite Top-5 labels: [387 102 386 341 880]
 </pre></div>
 </div>
 <p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre>Elapsed average ms: 36.272248340000004
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre>Elapsed average ms: 36.5043426
 </pre></div>
 </div>
 <div class="admonition note">
@@ -584,7 +584,7 @@ device and follow <a class="reference external" href="https://tvm.apache.org/doc
 </ul>
 </div></blockquote>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  37.496 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  38.431 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorials-frontend-deploy-prequantized-tflite-py">
 <div class="sphx-glr-download docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/5c443f88ea44ce77c5ccade429af6e74/deploy_prequantized_tflite.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_prequantized_tflite.py</span></code></a></p>
diff --git a/docs/tutorials/frontend/deploy_ssd_gluoncv.html b/docs/tutorials/frontend/deploy_ssd_gluoncv.html
index 2c43007..7d96293 100644
--- a/docs/tutorials/frontend/deploy_ssd_gluoncv.html
+++ b/docs/tutorials/frontend/deploy_ssd_gluoncv.html
@@ -357,8 +357,8 @@ We will use GluonCV pre-trained SSD model and convert it to Relay IR</p>
 <p>We support compiling SSD on both CPUs and GPUs now.</p>
 <p>To get the best inference performance on CPU, change the
 target argument according to your device and
-follow the <a class="reference internal" href="../autotvm/tune_relay_x86.html#tune-relay-x86"><span class="std std-ref">Auto-tuning a convolutional network for x86 CPU</span></a> to tune x86 CPU and
-<a class="reference internal" href="../autotvm/tune_relay_arm.html#tune-relay-arm"><span class="std std-ref">Auto-tuning a convolutional network for ARM CPU</span></a> for arm CPU.</p>
+follow the <a class="reference internal" href="../autotvm/tune_relay_x86.html#tune-relay-x86"><span class="std std-ref">Auto-tuning a Convolutional Network for x86 CPU</span></a> to tune x86 CPU and
+<a class="reference internal" href="../autotvm/tune_relay_arm.html#tune-relay-arm"><span class="std std-ref">Auto-tuning a Convolutional Network for ARM CPU</span></a> for arm CPU.</p>
 <p>To get the best inference performance on Intel graphics,
 change the target argument to <code class="code docutils literal notranslate"><span class="pre">opencl</span> <span class="pre">-device=intel_graphics</span></code>.
 But when using Intel graphics on a Mac, the target needs to
@@ -445,7 +445,7 @@ to your device.</p>
 </pre></div>
 </div>
 <img alt="../../_images/sphx_glr_deploy_ssd_gluoncv_001.png" class="sphx-glr-single-img" src="../../_images/sphx_glr_deploy_ssd_gluoncv_001.png" />
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  54.583 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  55.744 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorials-frontend-deploy-ssd-gluoncv-py">
 <div class="sphx-glr-download docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/ca08de6c440df207921d807474d26f06/deploy_ssd_gluoncv.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_ssd_gluoncv.py</span></code></a></p>
diff --git a/docs/tutorials/frontend/from_pytorch.html b/docs/tutorials/frontend/from_pytorch.html
index c6b718e..4ed7fea 100644
--- a/docs/tutorials/frontend/from_pytorch.html
+++ b/docs/tutorials/frontend/from_pytorch.html
@@ -431,8 +431,8 @@ be unstable.</p>
 </div>
 <p class="sphx-glr-script-out">Out:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre>...47%, 0.01 MB, 40 KB/s, 0 seconds passed
-...94%, 0.02 MB, 81 KB/s, 0 seconds passed
-...100%, 0.02 MB, 121 KB/s, 0 seconds passed
+...94%, 0.02 MB, 80 KB/s, 0 seconds passed
+...100%, 0.02 MB, 120 KB/s, 0 seconds passed
 Cannot find config for target=llvm -keys=cpu, workload=(&#39;dense_nopack.x86&#39;, (&#39;TENSOR&#39;, (1, 512), &#39;float32&#39;), (&#39;TENSOR&#39;, (1000, 512), &#39;float32&#39;), None, &#39;float32&#39;). A fallback configuration is used, which may bring great performance regression.
 </pre></div>
 </div>
diff --git a/docs/tutorials/frontend/from_tensorflow.html b/docs/tutorials/frontend/from_tensorflow.html
index 2f1ff3a..6068f45 100644
--- a/docs/tutorials/frontend/from_tensorflow.html
+++ b/docs/tutorials/frontend/from_tensorflow.html
@@ -473,28 +473,28 @@ params: params converted from tensorflow params (tensor protobuf).</p>
   &quot;will be used for operator %s.&quot; % node.name
 /workspace/docs/../python/tvm/relay/frontend/tensorflow.py:735: UserWarning: DecodeJpeg: It&#39;s a pass through, please handle preprocessing before input
   warnings.warn(&quot;DecodeJpeg: It&#39;s a pass through, please handle preprocessing before input&quot;)
-WARNING:root:Attribute Tdim is ignored in relay.sym.expand_dims
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.expand_dims
+WARNING:root:Attribute Tdim is ignored in relay.sym.expand_dims
 WARNING:root:Attribute T is ignored in relay.sym.expand_dims
 WARNING:root:Attribute _node_name is ignored in relay.sym.expand_dims
 WARNING:root:Attribute _target_layout is ignored in relay.sym.expand_dims
+WARNING:root:Attribute T is ignored in relay.sym.resize
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.resize
 WARNING:root:Attribute half_pixel_centers is ignored in relay.sym.resize
-WARNING:root:Attribute T is ignored in relay.sym.resize
 WARNING:root:Attribute _node_name is ignored in relay.sym.resize
 WARNING:root:Attribute _target_layout is ignored in relay.sym.resize
-WARNING:root:Attribute T is ignored in relay.sym.conv2d
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
 WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
 WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+WARNING:root:Attribute T is ignored in relay.sym.conv2d
 WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
 WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
 WARNING:root:Attribute T is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-WARNING:root:Attribute T is ignored in relay.sym.copy
 WARNING:root:Attribute message is ignored in relay.sym.copy
+WARNING:root:Attribute T is ignored in relay.sym.copy
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
 WARNING:root:Attribute _node_name is ignored in relay.sym.copy
 WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
@@ -502,19 +502,19 @@ WARNING:root:Attribute T is ignored in relay.sym.relu
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
 WARNING:root:Attribute _node_name is ignored in relay.sym.relu
 WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
 WARNING:root:Attribute T is ignored in relay.sym.conv2d
 WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
 WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
 WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-WARNING:root:Attribute T is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+WARNING:root:Attribute T is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-WARNING:root:Attribute T is ignored in relay.sym.copy
 WARNING:root:Attribute message is ignored in relay.sym.copy
+WARNING:root:Attribute T is ignored in relay.sym.copy
+WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
 WARNING:root:Attribute _node_name is ignored in relay.sym.copy
 WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
 WARNING:root:Attribute T is ignored in relay.sym.relu
@@ -522,13 +522,13 @@ WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
 WARNING:root:Attribute _node_name is ignored in relay.sym.relu
 WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
 WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
 WARNING:root:Attribute T is ignored in relay.sym.conv2d
+WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
 WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
 WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
 WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
@@ -545,42 +545,42 @@ WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
 WARNING:root:Attribute message is ignored in relay.sym.copy
 WARNING:root:Attribute _node_name is ignored in relay.sym.copy
 WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-WARNING:root:Attribute T is ignored in relay.sym.max_pool2d
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.max_pool2d
 WARNING:root:Attribute ksize is ignored in relay.sym.max_pool2d
+WARNING:root:Attribute T is ignored in relay.sym.max_pool2d
 WARNING:root:Attribute _node_name is ignored in relay.sym.max_pool2d
 WARNING:root:Attribute _target_layout is ignored in relay.sym.max_pool2d
-WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
 WARNING:root:Attribute T is ignored in relay.sym.conv2d
-WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
 WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
 WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
 WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-WARNING:root:Attribute T is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+WARNING:root:Attribute T is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
+WARNING:root:Attribute T is ignored in relay.sym.copy
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
 WARNING:root:Attribute message is ignored in relay.sym.copy
-WARNING:root:Attribute T is ignored in relay.sym.copy
 WARNING:root:Attribute _node_name is ignored in relay.sym.copy
 WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
 WARNING:root:Attribute T is ignored in relay.sym.relu
+WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
 WARNING:root:Attribute _node_name is ignored in relay.sym.relu
 WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-WARNING:root:Attribute T is ignored in relay.sym.conv2d
 WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+WARNING:root:Attribute T is ignored in relay.sym.conv2d
 WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
 WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
 WARNING:root:Attribute T is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
 WARNING:root:Attribute message is ignored in relay.sym.copy
+WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
 WARNING:root:Attribute T is ignored in relay.sym.copy
 WARNING:root:Attribute _node_name is ignored in relay.sym.copy
 WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
@@ -593,99 +593,99 @@ WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
 WARNING:root:Attribute T is ignored in relay.sym.copy
 WARNING:root:Attribute _node_name is ignored in relay.sym.copy
 WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-WARNING:root:Attribute _output_shapes is ignored in relay.sym.max_pool2d
 WARNING:root:Attribute T is ignored in relay.sym.max_pool2d
+WARNING:root:Attribute _output_shapes is ignored in relay.sym.max_pool2d
 WARNING:root:Attribute ksize is ignored in relay.sym.max_pool2d
 WARNING:root:Attribute _node_name is ignored in relay.sym.max_pool2d
 WARNING:root:Attribute _target_layout is ignored in relay.sym.max_pool2d
-WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-WARNING:root:Attribute T is ignored in relay.sym.conv2d
 WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+WARNING:root:Attribute T is ignored in relay.sym.conv2d
 WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
 WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
 WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
 WARNING:root:Attribute T is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-WARNING:root:Attribute message is ignored in relay.sym.copy
 WARNING:root:Attribute T is ignored in relay.sym.copy
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+WARNING:root:Attribute message is ignored in relay.sym.copy
 WARNING:root:Attribute _node_name is ignored in relay.sym.copy
 WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
 WARNING:root:Attribute T is ignored in relay.sym.relu
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
 WARNING:root:Attribute _node_name is ignored in relay.sym.relu
 WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
 WARNING:root:Attribute T is ignored in relay.sym.conv2d
 WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
 WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
 WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-WARNING:root:Attribute T is ignored in relay.sym.copy
-WARNING:root:Attribute message is ignored in relay.sym.copy
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+WARNING:root:Attribute message is ignored in relay.sym.copy
+WARNING:root:Attribute T is ignored in relay.sym.copy
 WARNING:root:Attribute _node_name is ignored in relay.sym.copy
 WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
 WARNING:root:Attribute T is ignored in relay.sym.relu
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
 WARNING:root:Attribute _node_name is ignored in relay.sym.relu
 WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
 WARNING:root:Attribute T is ignored in relay.sym.conv2d
+WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
 WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
 WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
 WARNING:root:Attribute T is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-WARNING:root:Attribute message is ignored in relay.sym.copy
-WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
 WARNING:root:Attribute T is ignored in relay.sym.copy
+WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+WARNING:root:Attribute message is ignored in relay.sym.copy
 WARNING:root:Attribute _node_name is ignored in relay.sym.copy
 WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
 WARNING:root:Attribute T is ignored in relay.sym.relu
+WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
 WARNING:root:Attribute _node_name is ignored in relay.sym.relu
 WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
 WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-WARNING:root:Attribute T is ignored in relay.sym.conv2d
 WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+WARNING:root:Attribute T is ignored in relay.sym.conv2d
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
 WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
 WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-WARNING:root:Attribute T is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+WARNING:root:Attribute T is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-WARNING:root:Attribute message is ignored in relay.sym.copy
 WARNING:root:Attribute T is ignored in relay.sym.copy
+WARNING:root:Attribute message is ignored in relay.sym.copy
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
 WARNING:root:Attribute _node_name is ignored in relay.sym.copy
 WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
 WARNING:root:Attribute T is ignored in relay.sym.relu
+WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
 WARNING:root:Attribute _node_name is ignored in relay.sym.relu
 WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
 WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
 WARNING:root:Attribute T is ignored in relay.sym.conv2d
+WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
 WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
 WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
 WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-WARNING:root:Attribute message is ignored in relay.sym.copy
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+WARNING:root:Attribute message is ignored in relay.sym.copy
 WARNING:root:Attribute T is ignored in relay.sym.copy
 WARNING:root:Attribute _node_name is ignored in relay.sym.copy
 WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
@@ -693,19 +693,19 @@ WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
 WARNING:root:Attribute T is ignored in relay.sym.relu
 WARNING:root:Attribute _node_name is ignored in relay.sym.relu
 WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
 WARNING:root:Attribute T is ignored in relay.sym.conv2d
-WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
+WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
 WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
 WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
 WARNING:root:Attribute T is ignored in relay.sym.batch_norm
+WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
+WARNING:root:Attribute T is ignored in relay.sym.copy
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
 WARNING:root:Attribute message is ignored in relay.sym.copy
-WARNING:root:Attribute T is ignored in relay.sym.copy
 WARNING:root:Attribute _node_name is ignored in relay.sym.copy
 WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
@@ -713,13 +713,13 @@ WARNING:root:Attribute T is ignored in relay.sym.relu
 WARNING:root:Attribute _node_name is ignored in relay.sym.relu
 WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
 WARNING:root:Attribute ksize is ignored in relay.sym.avg_pool2d
-WARNING:root:Attribute T is ignored in relay.sym.avg_pool2d
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.avg_pool2d
+WARNING:root:Attribute T is ignored in relay.sym.avg_pool2d
 WARNING:root:Attribute _node_name is ignored in relay.sym.avg_pool2d
 WARNING:root:Attribute _target_layout is ignored in relay.sym.avg_pool2d
-WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
 WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
 WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
 WARNING:root:Attribute T is ignored in relay.sym.conv2d
 WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
 WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
@@ -727,43 +727,43 @@ WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
 WARNING:root:Attribute T is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-WARNING:root:Attribute T is ignored in relay.sym.copy
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+WARNING:root:Attribute T is ignored in relay.sym.copy
 WARNING:root:Attribute message is ignored in relay.sym.copy
 WARNING:root:Attribute _node_name is ignored in relay.sym.copy
 WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-WARNING:root:Attribute T is ignored in relay.sym.relu
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
+WARNING:root:Attribute T is ignored in relay.sym.relu
 WARNING:root:Attribute _node_name is ignored in relay.sym.relu
 WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-WARNING:root:Attribute T is ignored in relay.sym.concatenate
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
+WARNING:root:Attribute T is ignored in relay.sym.concatenate
 WARNING:root:Attribute N is ignored in relay.sym.concatenate
 WARNING:root:Attribute _node_name is ignored in relay.sym.concatenate
 WARNING:root:Attribute _target_layout is ignored in relay.sym.concatenate
-WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
 WARNING:root:Attribute T is ignored in relay.sym.conv2d
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
+WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
 WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
 WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-WARNING:root:Attribute T is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
+WARNING:root:Attribute T is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-WARNING:root:Attribute T is ignored in relay.sym.copy
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
+WARNING:root:Attribute T is ignored in relay.sym.copy
 WARNING:root:Attribute message is ignored in relay.sym.copy
 WARNING:root:Attribute _node_name is ignored in relay.sym.copy
 WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
 WARNING:root:Attribute T is ignored in relay.sym.relu
+WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
 WARNING:root:Attribute _node_name is ignored in relay.sym.relu
 WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
+WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
 WARNING:root:Attribute T is ignored in relay.sym.conv2d
-WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
 WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
+WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
 WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
 WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
@@ -771,75 +771,75 @@ WARNING:root:Attribute T is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
 WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-WARNING:root:Attribute T is ignored in relay.sym.copy
... 2877 lines suppressed ...