Posted to commits@tvm.apache.org by tq...@apache.org on 2023/01/17 23:17:37 UTC

[tvm-site] branch asf-site updated: deploying docs (apache/tvm@328122675da7800944211e7ac0b21b3ed9398060)

This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/tvm-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 8223225db7 deploying docs (apache/tvm@328122675da7800944211e7ac0b21b3ed9398060)
8223225db7 is described below

commit 8223225db785dcb20f33cefc752706fbc84f43f5
Author: tvm-bot <95...@users.noreply.github.com>
AuthorDate: Tue Jan 17 23:17:29 2023 +0000

    deploying docs (apache/tvm@328122675da7800944211e7ac0b21b3ed9398060)
---
 docs/_images/sphx_glr_micro_train_001.png          | Bin 330120 -> 298784 bytes
 docs/_images/sphx_glr_micro_train_thumb.png        | Bin 23754 -> 22856 bytes
 .../how_to/compile_models/from_darknet.rst.txt     |   2 +-
 .../how_to/compile_models/from_keras.rst.txt       |   2 +-
 .../how_to/compile_models/from_mxnet.rst.txt       |   2 +-
 .../how_to/compile_models/from_oneflow.rst.txt     |   2 +-
 .../how_to/compile_models/from_pytorch.rst.txt     |   2 +-
 .../how_to/compile_models/from_tensorflow.rst.txt  |   2 +-
 .../compile_models/sg_execution_times.rst.txt      |  22 +-
 .../deploy_models/deploy_model_on_adreno.rst.txt   |   2 +-
 .../deploy_models/deploy_model_on_android.rst.txt  |   2 +-
 .../deploy_object_detection_pytorch.rst.txt        |   4 +-
 .../deploy_models/deploy_prequantized.rst.txt      |   6 +-
 .../deploy_prequantized_tflite.rst.txt             |   4 +-
 .../how_to/deploy_models/deploy_quantized.rst.txt  |   2 +-
 .../deploy_models/deploy_ssd_gluoncv.rst.txt       |   4 +-
 .../deploy_models/sg_execution_times.rst.txt       |  20 +-
 .../extend_tvm/bring_your_own_datatypes.rst.txt    |   2 +-
 .../how_to/extend_tvm/sg_execution_times.rst.txt   |   8 +-
 .../how_to/extend_tvm/use_pass_instrument.rst.txt  |  16 +-
 .../optimize_operators/opt_conv_cuda.rst.txt       |   2 +-
 .../optimize_operators/opt_conv_tensorcore.rst.txt |   2 +-
 .../how_to/optimize_operators/opt_gemm.rst.txt     |  16 +-
 .../optimize_operators/sg_execution_times.rst.txt  |   8 +-
 .../sg_execution_times.rst.txt                     |  14 +-
 .../tune_conv2d_layer_cuda.rst.txt                 | 349 +++++----------------
 .../tune_network_cuda.rst.txt                      |   4 +-
 .../tune_network_x86.rst.txt                       |   4 +-
 .../tune_sparse_x86.rst.txt                        |  88 ++----
 .../tune_with_autotvm/sg_execution_times.rst.txt   |   4 +-
 .../tune_with_autotvm/tune_conv2d_cuda.rst.txt     | 335 ++++++++++----------
 .../work_with_microtvm/micro_autotune.rst.txt      |  16 +-
 .../work_with_microtvm/micro_pytorch.rst.txt       |   4 +-
 .../how_to/work_with_microtvm/micro_train.rst.txt  |  18 +-
 .../work_with_microtvm/sg_execution_times.rst.txt  |  16 +-
 .../work_with_relay/sg_execution_times.rst.txt     |   8 +-
 .../how_to/work_with_schedules/intrin_math.rst.txt |   2 +-
 .../work_with_schedules/sg_execution_times.rst.txt |  14 +-
 .../how_to/work_with_schedules/tensorize.rst.txt   |   2 +-
 .../tutorials/autotvm/sg_execution_times.rst.txt   |   4 +-
 .../frontend/deploy_classification.rst.txt         |   2 +-
 .../tutorials/frontend/deploy_detection.rst.txt    |   2 +-
 .../tutorials/frontend/sg_execution_times.rst.txt  |   6 +-
 .../tutorials/optimize/sg_execution_times.rst.txt  |   6 +-
 .../topic/vta/tutorials/sg_execution_times.rst.txt |   6 +-
 .../tutorial/auto_scheduler_matmul_x86.rst.txt     |   4 +-
 docs/_sources/tutorial/autotvm_matmul_x86.rst.txt  |  20 +-
 docs/_sources/tutorial/autotvm_relay_x86.rst.txt   |  56 ++--
 .../tutorial/cross_compilation_and_rpc.rst.txt     |   2 +-
 docs/_sources/tutorial/intro_topi.rst.txt          |   2 +-
 docs/_sources/tutorial/sg_execution_times.rst.txt  |  18 +-
 .../tutorial/tensor_expr_get_started.rst.txt       |  42 +--
 docs/commit_hash                                   |   2 +-
 docs/how_to/compile_models/from_darknet.html       |   2 +-
 docs/how_to/compile_models/from_keras.html         |   2 +-
 docs/how_to/compile_models/from_mxnet.html         |   2 +-
 docs/how_to/compile_models/from_oneflow.html       |  14 +-
 docs/how_to/compile_models/from_pytorch.html       |  11 +-
 docs/how_to/compile_models/from_tensorflow.html    |   2 +-
 docs/how_to/compile_models/sg_execution_times.html |  26 +-
 .../deploy_models/deploy_model_on_adreno.html      |   2 +-
 .../deploy_models/deploy_model_on_android.html     |   2 +-
 .../deploy_object_detection_pytorch.html           |  40 ++-
 docs/how_to/deploy_models/deploy_prequantized.html |   8 +-
 .../deploy_models/deploy_prequantized_tflite.html  |   4 +-
 docs/how_to/deploy_models/deploy_quantized.html    |   2 +-
 docs/how_to/deploy_models/deploy_ssd_gluoncv.html  |  36 +--
 docs/how_to/deploy_models/sg_execution_times.html  |  20 +-
 .../extend_tvm/bring_your_own_datatypes.html       |   2 +-
 docs/how_to/extend_tvm/sg_execution_times.html     |   8 +-
 docs/how_to/extend_tvm/use_pass_instrument.html    |  16 +-
 docs/how_to/optimize_operators/opt_conv_cuda.html  |   2 +-
 .../optimize_operators/opt_conv_tensorcore.html    |   2 +-
 docs/how_to/optimize_operators/opt_gemm.html       |  16 +-
 .../optimize_operators/sg_execution_times.html     |   8 +-
 .../sg_execution_times.html                        |  14 +-
 .../tune_conv2d_layer_cuda.html                    | 349 +++++----------------
 .../tune_with_autoscheduler/tune_network_cuda.html |   4 +-
 .../tune_with_autoscheduler/tune_network_x86.html  |   4 +-
 .../tune_with_autoscheduler/tune_sparse_x86.html   |  88 ++----
 .../tune_with_autotvm/sg_execution_times.html      |   4 +-
 .../how_to/tune_with_autotvm/tune_conv2d_cuda.html | 335 ++++++++++----------
 docs/how_to/work_with_microtvm/micro_autotune.html |  16 +-
 docs/how_to/work_with_microtvm/micro_pytorch.html  |   5 +-
 docs/how_to/work_with_microtvm/micro_train.html    |  16 +-
 .../work_with_microtvm/sg_execution_times.html     |  16 +-
 .../how_to/work_with_relay/sg_execution_times.html |   8 +-
 docs/how_to/work_with_schedules/intrin_math.html   |   2 +-
 .../work_with_schedules/sg_execution_times.html    |  14 +-
 docs/how_to/work_with_schedules/tensorize.html     |   2 +-
 docs/install/nnpack.html                           |  12 +-
 ...sstvm_1_1meta__schedule_1_1Mutator-members.html |  53 ++--
 .../classtvm_1_1meta__schedule_1_1Mutator.html     |  34 +-
 ...m_1_1meta__schedule_1_1Mutator__coll__graph.svg | 113 ++++---
 ..._1meta__schedule_1_1Mutator__inherit__graph.svg |  85 +++--
 ...stvm_1_1meta__schedule_1_1Postproc-members.html |  12 +-
 .../classtvm_1_1meta__schedule_1_1Postproc.html    |  54 ++--
 ..._1_1meta__schedule_1_1ScheduleRule-members.html |   2 +-
 ...classtvm_1_1meta__schedule_1_1ScheduleRule.html |  17 +-
 docs/reference/api/doxygen/functions_d.html        |  13 +-
 docs/reference/api/doxygen/functions_func_d.html   |   9 +-
 docs/reference/api/doxygen/functions_m.html        |   2 +-
 docs/reference/api/doxygen/functions_s.html        |   4 +-
 docs/reference/api/doxygen/functions_t.html        |   4 +-
 docs/reference/api/doxygen/functions_u.html        |   2 +-
 docs/reference/api/doxygen/mutator_8h_source.html  |  14 +-
 docs/reference/api/doxygen/postproc_8h_source.html |   2 +-
 .../api/doxygen/schedule__rule_8h_source.html      |   2 +-
 docs/reference/api/doxygen/search/all_10.js        |   2 +-
 docs/reference/api/doxygen/search/all_11.js        |   2 +-
 docs/reference/api/doxygen/search/all_13.js        |   8 +-
 docs/reference/api/doxygen/search/all_14.js        |   8 +-
 docs/reference/api/doxygen/search/all_15.js        |   4 +-
 docs/reference/api/doxygen/search/all_16.js        |   4 +-
 docs/reference/api/doxygen/search/all_5.js         |   3 +-
 docs/reference/api/doxygen/search/all_e.js         |   6 +-
 docs/reference/api/doxygen/search/functions_10.js  |   2 +-
 docs/reference/api/doxygen/search/functions_12.js  |   4 +-
 docs/reference/api/doxygen/search/functions_13.js  |   4 +-
 docs/reference/api/doxygen/search/functions_15.js  |   2 +-
 docs/reference/api/doxygen/search/functions_4.js   |   3 +-
 docs/reference/api/doxygen/search/functions_d.js   |   4 +-
 docs/reference/api/python/auto_scheduler.html      |   4 +-
 .../api/typedoc/classes/bytestreamreader.html      |  12 +-
 .../api/typedoc/classes/cachedcallstack.html       |  34 +-
 docs/reference/api/typedoc/classes/dldatatype.html |  12 +-
 docs/reference/api/typedoc/classes/dldevice.html   |  10 +-
 .../reference/api/typedoc/classes/environment.html |  12 +-
 docs/reference/api/typedoc/classes/ffilibrary.html |  20 +-
 .../api/typedoc/classes/graphexecutor.html         |  16 +-
 docs/reference/api/typedoc/classes/instance.html   |  40 +--
 docs/reference/api/typedoc/classes/memory.html     |  34 +-
 docs/reference/api/typedoc/classes/module.html     |  10 +-
 docs/reference/api/typedoc/classes/ndarray.html    |  22 +-
 .../api/typedoc/classes/packedfunccell.html        |   6 +-
 docs/reference/api/typedoc/classes/rpcserver.html  |  14 +-
 docs/reference/api/typedoc/classes/scalar.html     |   6 +-
 .../api/typedoc/classes/webgpucontext.html         |  12 +-
 docs/reference/api/typedoc/enums/argtypecode.html  |  30 +-
 .../api/typedoc/enums/aynccallbackcode.html        |   4 +-
 .../api/typedoc/enums/dldatatypecode.html          |   8 +-
 .../api/typedoc/enums/rpcserverstate.html          |  12 +-
 docs/reference/api/typedoc/enums/sizeof.html       |  18 +-
 docs/reference/api/typedoc/index.html              | 112 +++----
 .../api/typedoc/interfaces/disposable.html         |   2 +-
 .../api/typedoc/interfaces/functioninfo.html       |   6 +-
 .../api/typedoc/interfaces/libraryprovider.html    |   4 +-
 docs/searchindex.js                                |   2 +-
 .../vta/tutorials/autotvm/sg_execution_times.html  |   4 +-
 .../tutorials/frontend/deploy_classification.html  |   2 +-
 .../vta/tutorials/frontend/deploy_detection.html   |   2 +-
 .../vta/tutorials/frontend/sg_execution_times.html |   6 +-
 .../vta/tutorials/optimize/sg_execution_times.html |   6 +-
 docs/topic/vta/tutorials/sg_execution_times.html   |   6 +-
 docs/tutorial/auto_scheduler_matmul_x86.html       |   4 +-
 docs/tutorial/autotvm_matmul_x86.html              |  20 +-
 docs/tutorial/autotvm_relay_x86.html               | 270 ++++++++--------
 docs/tutorial/cross_compilation_and_rpc.html       |   2 +-
 docs/tutorial/intro_topi.html                      |   2 +-
 docs/tutorial/sg_execution_times.html              |  22 +-
 docs/tutorial/tensor_expr_get_started.html         |  42 +--
 161 files changed, 1554 insertions(+), 2072 deletions(-)

diff --git a/docs/_images/sphx_glr_micro_train_001.png b/docs/_images/sphx_glr_micro_train_001.png
index 749f250b96..fb3c2850a3 100644
Binary files a/docs/_images/sphx_glr_micro_train_001.png and b/docs/_images/sphx_glr_micro_train_001.png differ
diff --git a/docs/_images/sphx_glr_micro_train_thumb.png b/docs/_images/sphx_glr_micro_train_thumb.png
index eb961b1b9c..86defffe09 100644
Binary files a/docs/_images/sphx_glr_micro_train_thumb.png and b/docs/_images/sphx_glr_micro_train_thumb.png differ
diff --git a/docs/_sources/how_to/compile_models/from_darknet.rst.txt b/docs/_sources/how_to/compile_models/from_darknet.rst.txt
index a01a7643e7..98153cec60 100644
--- a/docs/_sources/how_to/compile_models/from_darknet.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_darknet.rst.txt
@@ -318,7 +318,7 @@ The process is no different from other examples.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  16.919 seconds)
+   **Total running time of the script:** ( 1 minutes  17.012 seconds)
 
 
 .. _sphx_glr_download_how_to_compile_models_from_darknet.py:
diff --git a/docs/_sources/how_to/compile_models/from_keras.rst.txt b/docs/_sources/how_to/compile_models/from_keras.rst.txt
index c27b36a73f..8c47d5b13e 100644
--- a/docs/_sources/how_to/compile_models/from_keras.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_keras.rst.txt
@@ -232,7 +232,7 @@ Look up prediction top 1 index in 1000 class synset.
  .. code-block:: none
 
     Relay top-1 id: 285, class name: Egyptian cat
-    1/1 [==============================] - ETA: 0s
-    1/1 [==============================] - 1s 976ms/step
+    1/1 [==============================] - ETA: 0s
+    1/1 [==============================] - 1s 966ms/step
     Keras top-1 id: 285, class name: Egyptian cat
 
 
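For reference, the matching "Relay top-1 id" / "Keras top-1 id" lines in this hunk come from an argmax over the model outputs, roughly as in the from_keras tutorial. A sketch; `tvm_out`, `keras_out`, and `synset` are assumed to exist as in that script:

    import numpy as np

    # Sketch: tvm_out is the Relay module's output tensor, keras_out the Keras
    # prediction, and synset the id -> class-name mapping loaded earlier.
    top1_tvm = np.argmax(tvm_out.numpy()[0])
    print("Relay top-1 id: {}, class name: {}".format(top1_tvm, synset[top1_tvm]))
    top1_keras = np.argmax(keras_out)
    print("Keras top-1 id: {}, class name: {}".format(top1_keras, synset[top1_keras]))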
diff --git a/docs/_sources/how_to/compile_models/from_mxnet.rst.txt b/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
index 861fee367c..d760a64bfd 100644
--- a/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
@@ -116,7 +116,7 @@ In this section, we download a pretrained imagenet model and classify an image.
 
  .. code-block:: none
 
-    Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zipfc4a67b1-b7a5-45f2-9f14-a82fa2495f34 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
+    Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip46eab057-cce9-4764-a640-e8011944cde5 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
     x (1, 3, 224, 224)
 
 
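The download line above is triggered by Gluon's model zoo; a minimal sketch of the call behind it, matching the resnet18_v1 weights fetched in the log:

    from mxnet.gluon.model_zoo import vision

    # pretrained=True is what triggers the .zip download into
    # ~/.mxnet/models seen in the output above.
    block = vision.get_model("resnet18_v1", pretrained=True)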
diff --git a/docs/_sources/how_to/compile_models/from_oneflow.rst.txt b/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
index 536534b94d..b3605f6d18 100644
--- a/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
@@ -121,7 +121,7 @@ Load a pretrained OneFlow model and save model
  .. code-block:: none
 
     Downloading: "https://oneflow-public.oss-cn-beijing.aliyuncs.com/model_zoo/flowvision/classification/ResNet/resnet18.zip" to /workspace/.oneflow/flowvision_cache/resnet18.zip
-      0%|          | 0.00/41.5M [00:00<?, ?B/s]
-     15%|#5        | 6.33M/41.5M [00:00<00:00, 40.0MB/s]
-     24%|##4       | 10.1M/41.5M [00:00<00:00, 35.3MB/s]
-     35%|###4      | 14.3M/41.5M [00:00<00:00, 34.0MB/s]
-     42%|####2     | 17.5M/41.5M [00:00<00:00, 33.4MB/s]
-     58%|#####7    | 24.0M/41.5M [00:00<00:00, 35.5MB/s]
-     78%|#######7  | 32.3M/41.5M [00:00<00:00, 49.0MB/s]
-     96%|#########6| 40.0M/41.5M [00:00<00:00, 53.3MB/s]
-    100%|##########| 41.5M/41.5M [00:00<00:00, 45.6MB/s]
+      0%|          | 0.00/41.5M [00:00<?, ?B/s]
+     19%|#9        | 7.99M/41.5M [00:00<00:00, 81.9MB/s]
+     39%|###8      | 16.0M/41.5M [00:00<00:00, 63.0MB/s]
+     58%|#####7    | 24.0M/41.5M [00:00<00:00, 58.1MB/s]
+     77%|#######7  | 32.0M/41.5M [00:00<00:00, 58.8MB/s]
+     96%|#########6| 40.0M/41.5M [00:00<00:00, 62.0MB/s]
+    100%|##########| 41.5M/41.5M [00:00<00:00, 63.7MB/s]
 
 
 
diff --git a/docs/_sources/how_to/compile_models/from_pytorch.rst.txt b/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
index f1a41ae186..5ead52bb44 100644
--- a/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
@@ -101,7 +101,7 @@ Load a pretrained PyTorch model
     /venv/apache-tvm-py3.7/lib/python3.7/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=ResNet18_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet18_Weights.DEFAULT` to get the most up-to-date weights.
       warnings.warn(msg)
     Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
-      0%|          | 0.00/44.7M [00:00<?, ?B/s]
-     32%|###1      | 14.2M/44.7M [00:00<00:00, 148MB/s]
-     63%|######3   | 28.3M/44.7M [00:00<00:00, 115MB/s]
-     89%|########9 | 39.8M/44.7M [00:00<00:00, 109MB/s]
-    100%|##########| 44.7M/44.7M [00:00<00:00, 111MB/s]
+      0%|          | 0.00/44.7M [00:00<?, ?B/s]
+     18%|#7        | 7.99M/44.7M [00:00<00:00, 64.5MB/s]
+     36%|###5      | 16.0M/44.7M [00:00<00:00, 62.6MB/s]
+     54%|#####3    | 24.0M/44.7M [00:00<00:00, 67.8MB/s]
+     68%|######8   | 30.5M/44.7M [00:00<00:00, 60.0MB/s]
+     81%|########1 | 36.3M/44.7M [00:00<00:00, 52.9MB/s]
+     93%|#########2| 41.5M/44.7M [00:00<00:00, 49.3MB/s]
+    100%|##########| 44.7M/44.7M [00:00<00:00, 58.4MB/s]
 
 
 
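The UserWarning quoted in this hunk suggests the newer weights-enum API; a sketch of the torchvision >= 0.13 form, with the model name and input shape assumed to match the tutorial:

    import torch
    import torchvision
    from torchvision.models import ResNet18_Weights

    # Passing the weights enum instead of pretrained=True avoids the deprecation
    # warning; IMAGENET1K_V1 corresponds to the resnet18-f37072fd.pth checkpoint
    # downloaded above.
    model = torchvision.models.resnet18(weights=ResNet18_Weights.IMAGENET1K_V1).eval()
    # The tutorial then traces the model for TVM's PyTorch frontend.
    scripted_model = torch.jit.trace(model, torch.randn(1, 3, 224, 224)).eval()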
diff --git a/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt b/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
index e3d30eb04d..ae668f3db2 100644
--- a/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
@@ -424,7 +424,7 @@ Run the corresponding model on tensorflow
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  20.542 seconds)
+   **Total running time of the script:** ( 1 minutes  20.509 seconds)
 
 
 .. _sphx_glr_download_how_to_compile_models_from_tensorflow.py:
diff --git a/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt b/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
index 7a8b096a8f..b9e556922d 100644
--- a/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
@@ -5,26 +5,26 @@
 
 Computation times
 =================
-**06:18.591** total execution time for **how_to_compile_models** files:
+**06:17.313** total execution time for **how_to_compile_models** files:
 
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_tensorflow.py` (``from_tensorflow.py``) | 01:20.542 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_tensorflow.py` (``from_tensorflow.py``) | 01:20.509 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_darknet.py` (``from_darknet.py``)       | 01:16.919 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_darknet.py` (``from_darknet.py``)       | 01:17.012 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_paddle.py` (``from_paddle.py``)         | 00:51.878 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_paddle.py` (``from_paddle.py``)         | 00:51.153 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_oneflow.py` (``from_oneflow.py``)       | 00:34.968 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_oneflow.py` (``from_oneflow.py``)       | 00:34.958 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_coreml.py` (``from_coreml.py``)         | 00:30.560 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_mxnet.py` (``from_mxnet.py``)           | 00:30.365 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_mxnet.py` (``from_mxnet.py``)           | 00:29.497 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_coreml.py` (``from_coreml.py``)         | 00:29.779 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_tflite.py` (``from_tflite.py``)         | 00:26.858 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_tflite.py` (``from_tflite.py``)         | 00:26.352 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_pytorch.py` (``from_pytorch.py``)       | 00:24.030 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_pytorch.py` (``from_pytorch.py``)       | 00:24.552 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_keras.py` (``from_keras.py``)           | 00:20.703 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_keras.py` (``from_keras.py``)           | 00:20.019 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_onnx.py` (``from_onnx.py``)             | 00:02.637 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_onnx.py` (``from_onnx.py``)             | 00:02.614 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
index 564afc0054..429ee905a2 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
@@ -727,7 +727,7 @@ well as provides information about the model's performance
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-     2685.1657    2684.8620    2688.5736    2683.1320      1.4054   
+     2682.9338    2681.1330    2690.0914    2679.2744      3.6731   
                
 
 
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
index 569a8e6a8c..7b6782026b 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
@@ -437,7 +437,7 @@ Execute on TVM
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      16.1415      16.1502      16.2217      16.0335       0.0557   
+      16.4251      16.3604      17.1089      15.8652       0.4475   
                
 
 
diff --git a/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt b/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
index b62c475cb6..74d6f56b84 100644
--- a/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
@@ -130,7 +130,7 @@ Load pre-trained maskrcnn from torchvision and do tracing
     /venv/apache-tvm-py3.7/lib/python3.7/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=MaskRCNN_ResNet50_FPN_Weights.COCO_V1`. You can also use `weights=MaskRCNN_ResNet50_FPN_Weights.DEFAULT` to get the most up-to-date weights.
       warnings.warn(msg)
     Downloading: "https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth" to /workspace/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
-      0%|          | 0.00/170M [00:00<?, ?B/s]
-      8%|7         | 13.3M/170M [00:00<00:01, 139MB/s]
-     16%|#5        | 26.6M/170M [00:00<00:01, 100MB/s]
-     23%|##2       | 38.8M/170M [00:00<00:01, 111MB/s]
-     29%|##9       | 49.9M/170M [00:00<00:01, 96.6MB/s]
-     37%|###6      | 62.4M/170M [00:00<00:01, 107MB/s]
-     43%|####3     | 73.0M/170M [00:00<00:01, 98.5MB/s]
-     50%|####9     | 84.6M/170M [00:00<00:00, 105MB/s]
-     56%|#####5    | 94.9M/170M [00:00<00:00, 103MB/s]
-     62%|######1   | 105M/170M [00:01<00:00, 93.7MB/s]
-     67%|######7   | 114M/170M [00:01<00:00, 90.4MB/s]
-     75%|#######5  | 128M/170M [00:01<00:00, 97.9MB/s]
-     81%|########  | 137M/170M [00:01<00:00, 65.3MB/s]
-     89%|########9 | 152M/170M [00:01<00:00, 82.4MB/s]
-     95%|#########5| 161M/170M [00:01<00:00, 67.2MB/s]
-    100%|##########| 170M/170M [00:02<00:00, 88.5MB/s]
+      0%|          | 0.00/170M [00:00<?, ?B/s]
+      5%|4         | 7.99M/170M [00:00<00:03, 49.4MB/s]
+      9%|9         | 16.0M/170M [00:00<00:03, 49.6MB/s]
+     14%|#4        | 24.0M/170M [00:00<00:02, 53.1MB/s]
+     19%|#8        | 32.0M/170M [00:00<00:02, 54.5MB/s]
+     24%|##3       | 40.0M/170M [00:00<00:02, 59.5MB/s]
+     28%|##8       | 48.0M/170M [00:00<00:01, 66.0MB/s]
+     33%|###2      | 56.0M/170M [00:00<00:01, 65.4MB/s]
+     38%|###7      | 64.0M/170M [00:01<00:01, 58.5MB/s]
+     42%|####2     | 72.0M/170M [00:01<00:02, 50.4MB/s]
+     47%|####7     | 80.0M/170M [00:01<00:01, 54.5MB/s]
+     52%|#####1    | 88.0M/170M [00:01<00:01, 53.6MB/s]
+     57%|#####6    | 96.1M/170M [00:01<00:01, 60.5MB/s]
+     61%|######1   | 104M/170M [00:01<00:01, 54.3MB/s]
+     66%|######5   | 112M/170M [00:02<00:01, 55.4MB/s]
+     71%|#######   | 120M/170M [00:02<00:00, 61.4MB/s]
+     75%|#######5  | 128M/170M [00:02<00:00, 60.1MB/s]
+     80%|########  | 136M/170M [00:02<00:00, 65.8MB/s]
+     85%|########4 | 144M/170M [00:02<00:00, 59.6MB/s]
+     88%|########8 | 150M/170M [00:02<00:00, 60.7MB/s]
+     92%|#########2| 156M/170M [00:02<00:00, 60.0MB/s]
+     96%|#########5| 162M/170M [00:02<00:00, 55.9MB/s]
+     99%|#########8| 168M/170M [00:03<00:00, 55.2MB/s]
+    100%|##########| 170M/170M [00:03<00:00, 57.8MB/s]
     /venv/apache-tvm-py3.7/lib/python3.7/site-packages/torch/nn/functional.py:3897: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
       for i in range(dim)
     /venv/apache-tvm-py3.7/lib/python3.7/site-packages/torchvision/models/detection/anchor_utils.py:124: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
@@ -299,7 +299,7 @@ Get boxes with score larger than 0.9
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 3 minutes  33.185 seconds)
+   **Total running time of the script:** ( 3 minutes  27.164 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_object_detection_pytorch.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt b/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
index e192359b29..795f4229b3 100644
--- a/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
@@ -227,7 +227,7 @@ training. Other models require a full post training calibration.
     /venv/apache-tvm-py3.7/lib/python3.7/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=MobileNet_V2_Weights.IMAGENET1K_V1`. You can also use `weights=MobileNet_V2_Weights.DEFAULT` to get the most up-to-date weights.
       warnings.warn(msg)
     Downloading: "https://download.pytorch.org/models/mobilenet_v2-b0353104.pth" to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth
-      0%|          | 0.00/13.6M [00:00<?, ?B/s]
-     97%|#########6| 13.1M/13.6M [00:00<00:00, 137MB/s]
-    100%|##########| 13.6M/13.6M [00:00<00:00, 134MB/s]
+      0%|          | 0.00/13.6M [00:00<?, ?B/s]
+     59%|#####8    | 7.99M/13.6M [00:00<00:00, 72.4MB/s]
+    100%|##########| 13.6M/13.6M [00:00<00:00, 79.0MB/s]
 
 
 
@@ -409,7 +409,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      90.4488      90.3820      92.8051      90.1836       0.2949   
+      90.3653      90.2098      96.1734      90.0266       0.6337   
                
 
 
@@ -458,7 +458,7 @@ TODO
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  14.137 seconds)
+   **Total running time of the script:** ( 1 minutes  12.494 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_prequantized.py:
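The "Execution time summary" tables in these hunks are the repr of a BenchmarkResult. A self-contained sketch of how such a table is produced, using a toy softmax graph rather than the tutorial's quantized MobileNet:

    import numpy as np
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor

    # Toy graph standing in for the tutorial's model.
    x = relay.var("x", shape=(1, 64), dtype="float32")
    lib = relay.build(tvm.IRModule.from_expr(relay.nn.softmax(x)), target="llvm")
    dev = tvm.cpu(0)
    rt_mod = graph_executor.GraphModule(lib["default"](dev))
    rt_mod.set_input("x", np.random.rand(1, 64).astype("float32"))
    # benchmark() returns a BenchmarkResult; printing it yields the
    # mean/median/max/min/std table seen above (values in ms).
    print(rt_mod.benchmark(dev, number=10, repeat=100))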
diff --git a/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt b/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
index 8c76c5054a..ed14666f97 100644
--- a/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
@@ -423,7 +423,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      120.9158     120.9112     122.3274     120.2007      0.3495   
+      120.6118     120.5122     126.7901     119.8926      0.7046   
                
 
 
@@ -460,7 +460,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 2 minutes  27.049 seconds)
+   **Total running time of the script:** ( 2 minutes  28.601 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_prequantized_tflite.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt b/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
index 1548e66340..2c5fa58178 100644
--- a/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
@@ -257,7 +257,7 @@ We create a Relay VM to build and execute the model.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  38.317 seconds)
+   **Total running time of the script:** ( 1 minutes  33.686 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_quantized.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_ssd_gluoncv.rst.txt b/docs/_sources/how_to/deploy_models/deploy_ssd_gluoncv.rst.txt
index 88a78af281..8032a32c1f 100644
--- a/docs/_sources/how_to/deploy_models/deploy_ssd_gluoncv.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_ssd_gluoncv.rst.txt
@@ -170,7 +170,7 @@ Convert and compile model for CPU.
             data: None
       input_sym_arg_type = in_param.infer_type()[0]
     Downloading /workspace/.mxnet/models/ssd_512_resnet50_v1_voc-9c8b225a.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/ssd_512_resnet50_v1_voc-9c8b225a.zip...
-      0%|          | 0/132723 [00:00<?, ?KB/s]
-      3%|3         | 4248/132723 [00:00<00:03, 42477.00KB/s]
-      9%|8         | 11356/132723 [00:00<00:02, 59295.52KB/s]
-     14%|#4        | 18901/132723 [00:00<00:01, 66669.23KB/s]
-     20%|##        | 26959/132723 [00:00<00:01, 72152.77KB/s]
-     26%|##6       | 34901/132723 [00:00<00:01, 74768.82KB/s]
-     32%|###2      | 42529/132723 [00:00<00:01, 75279.90KB/s]
-     38%|###8      | 50503/132723 [00:00<00:01, 76734.48KB/s]
-     44%|####3     | 58177/132723 [00:00<00:01, 71912.88KB/s]
-     50%|####9     | 66231/132723 [00:00<00:00, 74489.13KB/s]
-     56%|#####5    | 74150/132723 [00:01<00:00, 75890.46KB/s]
-     62%|######1   | 82100/132723 [00:01<00:00, 76968.71KB/s]
-     68%|######7   | 90145/132723 [00:01<00:00, 78010.19KB/s]
-     74%|#######3  | 98119/132723 [00:01<00:00, 78525.55KB/s]
-     80%|#######9  | 106157/132723 [00:01<00:00, 79080.03KB/s]
-     86%|########5 | 114077/132723 [00:01<00:00, 79004.03KB/s]
-     92%|#########1| 122025/132723 [00:01<00:00, 79145.57KB/s]
-     98%|#########7| 130053/132723 [00:01<00:00, 79483.27KB/s]
-    100%|##########| 132723/132723 [00:01<00:00, 75634.13KB/s]
+      0%|          | 0/132723 [00:00<?, ?KB/s]
+      5%|5         | 6908/132723 [00:00<00:01, 69072.49KB/s]
+     12%|#1        | 15541/132723 [00:00<00:01, 79220.89KB/s]
+     18%|#8        | 24166/132723 [00:00<00:01, 82427.11KB/s]
+     25%|##4       | 32822/132723 [00:00<00:01, 84055.67KB/s]
+     31%|###1      | 41482/132723 [00:00<00:01, 84971.77KB/s]
+     38%|###7      | 50069/132723 [00:00<00:00, 85274.69KB/s]
+     44%|####4     | 58672/132723 [00:00<00:00, 85520.03KB/s]
+     51%|#####     | 67345/132723 [00:00<00:00, 85903.41KB/s]
+     57%|#####7    | 75952/132723 [00:00<00:00, 85953.46KB/s]
+     64%|######3   | 84594/132723 [00:01<00:00, 86094.91KB/s]
+     70%|#######   | 93204/132723 [00:01<00:00, 86024.63KB/s]
+     77%|#######6  | 101836/132723 [00:01<00:00, 86111.27KB/s]
+     83%|########3 | 110455/132723 [00:01<00:00, 86132.27KB/s]
+     90%|########9 | 119089/132723 [00:01<00:00, 86189.93KB/s]
+     96%|#########6| 127787/132723 [00:01<00:00, 86425.06KB/s]
+    100%|##########| 132723/132723 [00:01<00:00, 85092.56KB/s]
 
 
 
@@ -246,7 +246,7 @@ Display result
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 3 minutes  34.278 seconds)
+   **Total running time of the script:** ( 3 minutes  28.157 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_ssd_gluoncv.py:
diff --git a/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt b/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
index 29769b3d57..aba729698d 100644
--- a/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
@@ -5,26 +5,26 @@
 
 Computation times
 =================
-**14:57.528** total execution time for **how_to_deploy_models** files:
+**14:37.448** total execution time for **how_to_deploy_models** files:
 
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_ssd_gluoncv.py` (``deploy_ssd_gluoncv.py``)                           | 03:34.278 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_ssd_gluoncv.py` (``deploy_ssd_gluoncv.py``)                           | 03:28.157 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``) | 03:33.185 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``) | 03:27.164 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)           | 02:27.049 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)           | 02:28.601 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_quantized.py` (``deploy_quantized.py``)                               | 01:38.317 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_quantized.py` (``deploy_quantized.py``)                               | 01:33.686 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized.py` (``deploy_prequantized.py``)                         | 01:14.137 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized.py` (``deploy_prequantized.py``)                         | 01:12.494 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno.py` (``deploy_model_on_adreno.py``)                   | 00:55.568 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno.py` (``deploy_model_on_adreno.py``)                   | 00:54.877 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_android.py` (``deploy_model_on_android.py``)                 | 00:40.702 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_android.py` (``deploy_model_on_android.py``)                 | 00:39.597 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_nano.py` (``deploy_model_on_nano.py``)                       | 00:27.354 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_nano.py` (``deploy_model_on_nano.py``)                       | 00:26.609 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)                       | 00:26.932 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)                       | 00:26.256 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_deploy_models_deploy_sparse.py` (``deploy_sparse.py``)                                     | 00:00.006 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt b/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
index c5f27da226..94c7c4bf3c 100644
--- a/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
@@ -463,7 +463,7 @@ First let us define two helper functions to get the mobilenet model and a cat im
 
  .. code-block:: none
 
-    Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip1cb584b8-1b90-4e72-b317-e4be25ccc26f from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
+    Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip90cae528-ebda-4fec-944e-cf32c7f47d93 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
 
 
 
diff --git a/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt b/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
index d58e2aec58..ac209422cb 100644
--- a/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
@@ -5,14 +5,14 @@
 
 Computation times
 =================
-**00:53.102** total execution time for **how_to_extend_tvm** files:
+**00:53.770** total execution time for **how_to_extend_tvm** files:
 
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``) | 00:49.281 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``) | 00:49.968 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_use_pass_instrument.py` (``use_pass_instrument.py``)           | 00:02.719 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_use_pass_instrument.py` (``use_pass_instrument.py``)           | 00:02.708 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_use_pass_infra.py` (``use_pass_infra.py``)                     | 00:01.094 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_use_pass_infra.py` (``use_pass_infra.py``)                     | 00:01.088 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_extend_tvm_low_level_custom_pass.py` (``low_level_custom_pass.py``)       | 00:00.007 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt b/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
index a94af69973..827e3f5219 100644
--- a/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
@@ -220,10 +220,10 @@ profile the execution time of each pass.
  .. code-block:: none
 
     Printing results of timing profile...
-    InferType: 18180us [18180us] (47.86%; 47.86%)
-    FoldScaleAxis: 19805us [10us] (52.14%; 52.14%)
-            FoldConstant: 19795us [1760us] (52.11%; 99.95%)
-                    InferType: 18035us [18035us] (47.48%; 91.11%)
+    InferType: 18090us [18090us] (47.42%; 47.42%)
+    FoldScaleAxis: 20062us [8us] (52.58%; 52.58%)
+            FoldConstant: 20054us [1774us] (52.56%; 99.96%)
+                    InferType: 18280us [18280us] (47.91%; 91.15%)
 
 
 
@@ -262,10 +262,10 @@ Refer to following sections and :py:func:`tvm.instrument.pass_instrument` for th
  .. code-block:: none
 
     Printing results of timing profile...
-    InferType: 17575us [17575us] (47.88%; 47.88%)
-    FoldScaleAxis: 19128us [8us] (52.12%; 52.12%)
-            FoldConstant: 19120us [1767us] (52.09%; 99.96%)
-                    InferType: 17353us [17353us] (47.28%; 90.76%)
+    InferType: 17406us [17406us] (47.82%; 47.82%)
+    FoldScaleAxis: 18991us [6us] (52.18%; 52.18%)
+            FoldConstant: 18985us [1772us] (52.16%; 99.97%)
+                    InferType: 17213us [17213us] (47.29%; 90.67%)
 
 
 
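For context, the timing profiles in these hunks come from TVM's PassTimingInstrument; a condensed sketch of the tutorial's usage, where `relay_mod` is assumed to be the tutorial's Relay IRModule:

    import tvm
    from tvm import relay
    from tvm.ir.instrument import PassTimingInstrument

    timing_inst = PassTimingInstrument()
    with tvm.transform.PassContext(instruments=[timing_inst]):
        relay_mod = relay.transform.InferType()(relay_mod)
        relay_mod = relay.transform.FoldScaleAxis()(relay_mod)
        # render() must be called while the PassContext is still active.
        profiles = timing_inst.render()
    print("Printing results of timing profile...")
    print(profiles)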
diff --git a/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt b/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
index 8743e881d2..dd5b87b18c 100644
--- a/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
@@ -331,7 +331,7 @@ latency of convolution.
 
  .. code-block:: none
 
-    Convolution: 33.708736 ms
+    Convolution: 52.678657 ms
 
 
 
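The "Convolution: ... ms" line is measured with time_evaluator. A sketch, assuming `func` is the compiled CUDA kernel and `a`, `w`, `b` are the device buffers from the opt_conv_cuda tutorial; numbers like the 33.7 ms vs 52.7 ms swing above vary with the docs build machine:

    # time_evaluator runs the kernel and reports mean wall-clock time.
    evaluator = func.time_evaluator(func.entry_name, dev, number=1)
    print("Convolution: %f ms" % (evaluator(a, w, b).mean * 1e3))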
diff --git a/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt b/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
index 03e4530892..aa6fa9b80c 100644
--- a/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
@@ -660,7 +660,7 @@ be able to run on our build server
 
  .. code-block:: none
 
-    conv2d with tensor core: 13.357292 ms
+    conv2d with tensor core: 6.611427 ms
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt b/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
index 94b1bd8a14..f9fe9b7b2c 100644
--- a/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
@@ -134,8 +134,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
 
  .. code-block:: none
 
-    Numpy running time: 0.019340
-    Baseline: 3.366329
+    Numpy running time: 0.018607
+    Baseline: 3.388530
 
 
 
@@ -229,7 +229,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
 
  .. code-block:: none
 
-    Opt1: 0.306912
+    Opt1: 0.295865
 
 
 
@@ -331,7 +331,7 @@ In this tutorial, we chose to vectorize the inner loop row data since it is cach
 
  .. code-block:: none
 
-    Opt2: 0.341984
+    Opt2: 0.337846
 
 
 
@@ -426,7 +426,7 @@ the access pattern for A matrix is more cache friendly.
 
  .. code-block:: none
 
-    Opt3: 0.120675
+    Opt3: 0.116724
 
 
 
@@ -550,7 +550,7 @@ flattening.
 
  .. code-block:: none
 
-    Opt4: 0.109358
+    Opt4: 0.109615
 
 
 
@@ -671,7 +671,7 @@ write to C when all the block results are ready.
 
  .. code-block:: none
 
-    Opt5: 0.111938
+    Opt5: 0.111480
 
 
 
@@ -795,7 +795,7 @@ Furthermore, we can also utilize multi-core processors to do the thread-level pa
 
  .. code-block:: none
 
-    Opt6: 0.147480
+    Opt6: 0.146769
 
 
 
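The "Numpy running time" / "Baseline" pair at the top of this hunk compares numpy's dot against TVM's naive schedule at the tutorial's sizes (M = K = N = 1024, float32). A sketch of the numpy half of that measurement:

    import timeit
    import numpy as np

    # Matrix sizes match the opt_gemm tutorial.
    M = K = N = 1024
    a = np.random.rand(M, K).astype("float32")
    b = np.random.rand(K, N).astype("float32")
    np_runtime = timeit.timeit(lambda: np.dot(a, b), number=10) / 10
    print("Numpy running time: %f" % np_runtime)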
diff --git a/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt b/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
index 12b8ae59d3..8f1ba7ebeb 100644
--- a/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
@@ -5,12 +5,12 @@
 
 Computation times
 =================
-**00:35.329** total execution time for **how_to_optimize_operators** files:
+**00:34.700** total execution time for **how_to_optimize_operators** files:
 
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_gemm.py` (``opt_gemm.py``)                       | 00:32.539 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_gemm.py` (``opt_gemm.py``)                       | 00:32.211 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``) | 00:01.629 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``) | 00:01.432 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_cuda.py` (``opt_conv_cuda.py``)             | 00:01.162 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_cuda.py` (``opt_conv_cuda.py``)             | 00:01.056 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
index ff9037e4d1..d5d4e90d9c 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
@@ -5,18 +5,18 @@
 
 Computation times
 =================
-**09:27.615** total execution time for **how_to_tune_with_autoscheduler** files:
+**09:16.639** total execution time for **how_to_tune_with_autoscheduler** files:
 
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``) | 05:46.501 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``) | 05:35.119 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_x86.py` (``tune_network_x86.py``)             | 01:39.774 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_x86.py` (``tune_network_x86.py``)             | 01:39.534 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_cuda.py` (``tune_network_cuda.py``)           | 01:05.461 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_cuda.py` (``tune_network_cuda.py``)           | 01:05.720 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_sparse_x86.py` (``tune_sparse_x86.py``)               | 00:28.834 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_sparse_x86.py` (``tune_sparse_x86.py``)               | 00:29.292 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_arm.py` (``tune_network_arm.py``)             | 00:14.112 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_arm.py` (``tune_network_arm.py``)             | 00:14.013 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_mali.py` (``tune_network_mali.py``)           | 00:12.933 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_mali.py` (``tune_network_mali.py``)           | 00:12.960 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
index 9e4690b73a..10901b7dcc 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
@@ -242,172 +242,55 @@ cooperative fetching, unrolling and operator fusion.
                  bias: Buffer(bias_2: Pointer(float32), float32, [1, 512, 1, 1], []),
                  compute: Buffer(compute_2: Pointer(float32), float32, [1, 512, 7, 7], [])}
       buffer_map = {data_1: data, kernel_1: kernel, bias_1: bias, compute_1: compute} {
-      attr [IterVar(blockIdx.x: int32, (nullptr), "ThreadIndex", "blockIdx.x")] "thread_extent" = 128;
-      allocate(conv2d_nchw: Pointer(local float32), float32, [14]), storage_scope = local;
-      allocate(pad_temp.shared: Pointer(shared float32), float32, [504]), storage_scope = shared;
-      allocate(kernel.shared: Pointer(shared float32), float32, [96]), storage_scope = shared;
-      attr [IterVar(threadIdx.x: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14 {
-        conv2d_nchw_1: Buffer(conv2d_nchw, float32, [49], [], scope="local", align=16)[0] = 0f32
-        conv2d_nchw_1[7] = 0f32
+      attr [IterVar(blockIdx.x: int32, (nullptr), "ThreadIndex", "blockIdx.x")] "thread_extent" = 56;
+      allocate(conv2d_nchw: Pointer(local float32), float32, [7]), storage_scope = local;
+      allocate(pad_temp.shared: Pointer(shared float32), float32, [72]), storage_scope = shared;
+      allocate(kernel.shared: Pointer(shared float32), float32, [1536]), storage_scope = shared;
+      attr [IterVar(threadIdx.x: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64 {
+        conv2d_nchw_1: Buffer(conv2d_nchw, float32, [7], [], scope="local", align=16)[0] = 0f32
         conv2d_nchw_1[1] = 0f32
-        conv2d_nchw_1[8] = 0f32
         conv2d_nchw_1[2] = 0f32
-        conv2d_nchw_1[9] = 0f32
         conv2d_nchw_1[3] = 0f32
-        conv2d_nchw_1[10] = 0f32
         conv2d_nchw_1[4] = 0f32
-        conv2d_nchw_1[11] = 0f32
         conv2d_nchw_1[5] = 0f32
-        conv2d_nchw_1[12] = 0f32
         conv2d_nchw_1[6] = 0f32
-        conv2d_nchw_1[13] = 0f32
         for (rc.outer.outer: int32, 0, 64) {
           for (ry.outer.outer: int32, 0, 3) {
-            let cse_var_4: int32 = (rc.outer.outer*392)
-            let cse_var_3: int32 = (ry.outer.outer*7)
-            let cse_var_2: int32 = (rc.outer.outer*72)
-            let cse_var_1: int32 = (ry.outer.outer*3)
-             {
-              attr [IterVar(threadIdx.x_1: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1: Buffer(pad_temp.shared, float32, [504], [], scope="shared")[threadIdx.x_1] = @tir.if_then_else((((1 <= (floordiv(threadIdx.x_1, 9) + ry.outer.outer)) && (1 <= floormod(threadIdx.x_1, 9))) && (floormod(threadIdx.x_1, 9) < 8)), data_3: Buffer(data_2, float32, [25088], [])[((((cse_var_4 + (floordiv(threadIdx.x_1, 9)*7)) + cse_var_3) + floormod(threadIdx.x_1, 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 14)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 5), 9)) && (floormod((threadIdx.x_1 + 5), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 14), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 5), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 28)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 1), 9)) && (floormod((threadIdx.x_1 + 1), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 28), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 1), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 42)] = @tir.if_then_else(((((floordiv((threadIdx.x_1 + 42), 9) + ry.outer.outer) < 8) && (1 <= floormod((threadIdx.x_1 + 6), 9))) && (floormod((threadIdx.x_1 + 6), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 42), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 6), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 56)] = @tir.if_then_else(((((1 <= (floordiv(floormod((threadIdx.x_1 + 56), 63), 9) + ry.outer.outer)) && ((floordiv(floormod((threadIdx.x_1 + 56), 63), 9) + ry.outer.outer) < 8)) && (1 <= floormod((threadIdx.x_1 + 2), 9))) && (floormod((threadIdx.x_1 + 2), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 56), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 2), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 70)] = @tir.if_then_else((((1 <= (floordiv(floormod((threadIdx.x_1 + 7), 63), 9) + ry.outer.outer)) && (1 <= floormod((threadIdx.x_1 + 7), 9))) && (floormod((threadIdx.x_1 + 7), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 70), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 7), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 84)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 3), 9)) && (floormod((threadIdx.x_1 + 3), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 84), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 3), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 98)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 8), 9)) && (floormod((threadIdx.x_1 + 8), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 98), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 8), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 112)] = @tir.if_then_else(((((floordiv(floormod((threadIdx.x_1 + 49), 63), 9) + ry.outer.outer) < 8) && (1 <= floormod((threadIdx.x_1 + 4), 9))) && (floormod((threadIdx.x_1 + 4), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 112), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 4), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 126)] = @tir.if_then_else((((1 <= (floordiv(threadIdx.x_1, 9) + ry.outer.outer)) && (1 <= floormod(threadIdx.x_1, 9))) && (floormod(threadIdx.x_1, 9) < 8)), data_3[((((cse_var_4 + (floordiv(threadIdx.x_1, 9)*7)) + cse_var_3) + floormod(threadIdx.x_1, 9)) + 90)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 140)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 5), 9)) && (floormod((threadIdx.x_1 + 5), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 140), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 5), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 154)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 1), 9)) && (floormod((threadIdx.x_1 + 1), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 154), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 1), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 168)] = @tir.if_then_else(((((floordiv(floormod((threadIdx.x_1 + 42), 63), 9) + ry.outer.outer) < 8) && (1 <= floormod((threadIdx.x_1 + 6), 9))) && (floormod((threadIdx.x_1 + 6), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 168), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 6), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 182)] = @tir.if_then_else(((((1 <= (floordiv(floormod((threadIdx.x_1 + 56), 63), 9) + ry.outer.outer)) && ((floordiv(floormod((threadIdx.x_1 + 56), 63), 9) + ry.outer.outer) < 8)) && (1 <= floormod((threadIdx.x_1 + 2), 9))) && (floormod((threadIdx.x_1 + 2), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 182), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 2), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 196)] = @tir.if_then_else((((1 <= (floordiv(floormod((threadIdx.x_1 + 7), 63), 9) + ry.outer.outer)) && (1 <= floormod((threadIdx.x_1 + 7), 9))) && (floormod((threadIdx.x_1 + 7), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 196), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 7), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 210)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 3), 9)) && (floormod((threadIdx.x_1 + 3), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 210), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 3), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 224)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 8), 9)) && (floormod((threadIdx.x_1 + 8), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 224), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 8), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 238)] = @tir.if_then_else(((((floordiv(floormod((threadIdx.x_1 + 49), 63), 9) + ry.outer.outer) < 8) && (1 <= floormod((threadIdx.x_1 + 4), 9))) && (floormod((threadIdx.x_1 + 4), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 238), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 4), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 252)] = @tir.if_then_else((((1 <= (floordiv(threadIdx.x_1, 9) + ry.outer.outer)) && (1 <= floormod(threadIdx.x_1, 9))) && (floormod(threadIdx.x_1, 9) < 8)), data_3[((((cse_var_4 + (floordiv(threadIdx.x_1, 9)*7)) + cse_var_3) + floormod(threadIdx.x_1, 9)) + 188)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 266)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 5), 9)) && (floormod((threadIdx.x_1 + 5), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 266), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 5), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 280)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 1), 9)) && (floormod((threadIdx.x_1 + 1), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 280), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 1), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 294)] = @tir.if_then_else(((((floordiv(floormod((threadIdx.x_1 + 42), 63), 9) + ry.outer.outer) < 8) && (1 <= floormod((threadIdx.x_1 + 6), 9))) && (floormod((threadIdx.x_1 + 6), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 294), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 6), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 308)] = @tir.if_then_else(((((1 <= (floordiv(floormod((threadIdx.x_1 + 56), 63), 9) + ry.outer.outer)) && ((floordiv(floormod((threadIdx.x_1 + 56), 63), 9) + ry.outer.outer) < 8)) && (1 <= floormod((threadIdx.x_1 + 2), 9))) && (floormod((threadIdx.x_1 + 2), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 308), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 2), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 322)] = @tir.if_then_else((((1 <= (floordiv(floormod((threadIdx.x_1 + 7), 63), 9) + ry.outer.outer)) && (1 <= floormod((threadIdx.x_1 + 7), 9))) && (floormod((threadIdx.x_1 + 7), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 322), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 7), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 336)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 3), 9)) && (floormod((threadIdx.x_1 + 3), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 336), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 3), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 350)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 8), 9)) && (floormod((threadIdx.x_1 + 8), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 350), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 8), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 364)] = @tir.if_then_else(((((floordiv(floormod((threadIdx.x_1 + 49), 63), 9) + ry.outer.outer) < 8) && (1 <= floormod((threadIdx.x_1 + 4), 9))) && (floormod((threadIdx.x_1 + 4), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 364), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 4), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 378)] = @tir.if_then_else((((1 <= (floordiv(threadIdx.x_1, 9) + ry.outer.outer)) && (1 <= floormod(threadIdx.x_1, 9))) && (floormod(threadIdx.x_1, 9) < 8)), data_3[((((cse_var_4 + (floordiv(threadIdx.x_1, 9)*7)) + cse_var_3) + floormod(threadIdx.x_1, 9)) + 286)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 392)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 5), 9)) && (floormod((threadIdx.x_1 + 5), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 392), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 5), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 406)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 1), 9)) && (floormod((threadIdx.x_1 + 1), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 406), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 1), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 420)] = @tir.if_then_else(((((floordiv(floormod((threadIdx.x_1 + 42), 63), 9) + ry.outer.outer) < 8) && (1 <= floormod((threadIdx.x_1 + 6), 9))) && (floormod((threadIdx.x_1 + 6), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 420), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 6), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 434)] = @tir.if_then_else(((((1 <= (floordiv(floormod((threadIdx.x_1 + 56), 63), 9) + ry.outer.outer)) && ((floordiv(floormod((threadIdx.x_1 + 56), 63), 9) + ry.outer.outer) < 8)) && (1 <= floormod((threadIdx.x_1 + 2), 9))) && (floormod((threadIdx.x_1 + 2), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 434), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 2), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 448)] = @tir.if_then_else((((1 <= (floordiv(floormod((threadIdx.x_1 + 7), 63), 9) + ry.outer.outer)) && (1 <= floormod((threadIdx.x_1 + 7), 9))) && (floormod((threadIdx.x_1 + 7), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 448), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 7), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 462)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 3), 9)) && (floormod((threadIdx.x_1 + 3), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 462), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 3), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 476)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 8), 9)) && (floormod((threadIdx.x_1 + 8), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 476), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 8), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              pad_temp.shared_1[(threadIdx.x_1 + 490)] = @tir.if_then_else(((((floordiv(floormod((threadIdx.x_1 + 49), 63), 9) + ry.outer.outer) < 8) && (1 <= floormod((threadIdx.x_1 + 4), 9))) && (floormod((threadIdx.x_1 + 4), 9) < 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 490), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 4), 9)) - 8)], 0f32, dtype=float32)
-              attr [IterVar(threadIdx.x_2: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              kernel.shared_1: Buffer(kernel.shared, float32, [96], [], scope="shared")[threadIdx.x_2] = kernel_3: Buffer(kernel_2, float32, [2359296], [])[(((((blockIdx.x*18432) + cse_var_2) + (floordiv(threadIdx.x_2, 3)*9)) + cse_var_1) + floormod(threadIdx.x_2, 3))]
-              attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              kernel.shared_1[(threadIdx.x_2 + 14)] = kernel_3[((((((blockIdx.x*18432) + (floordiv((threadIdx.x_2 + 14), 24)*4608)) + cse_var_2) + (floordiv(floormod((threadIdx.x_2 + 14), 24), 3)*9)) + cse_var_1) + floormod((threadIdx.x_2 + 2), 3))]
-              attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              kernel.shared_1[(threadIdx.x_2 + 28)] = kernel_3[((((((blockIdx.x*18432) + (floordiv((threadIdx.x_2 + 28), 24)*4608)) + cse_var_2) + (floordiv(floormod((threadIdx.x_2 + 4), 24), 3)*9)) + cse_var_1) + floormod((threadIdx.x_2 + 1), 3))]
-              attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              kernel.shared_1[(threadIdx.x_2 + 42)] = kernel_3[((((((blockIdx.x*18432) + (floordiv((threadIdx.x_2 + 42), 24)*4608)) + cse_var_2) + (floormod((floordiv(threadIdx.x_2, 3) + 6), 8)*9)) + cse_var_1) + floormod(threadIdx.x_2, 3))]
-              attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              kernel.shared_1[(threadIdx.x_2 + 56)] = kernel_3[((((((blockIdx.x*18432) + (floordiv((threadIdx.x_2 + 56), 24)*4608)) + cse_var_2) + (floordiv(floormod((threadIdx.x_2 + 8), 24), 3)*9)) + cse_var_1) + floormod((threadIdx.x_2 + 2), 3))]
-              attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              kernel.shared_1[(threadIdx.x_2 + 70)] = kernel_3[((((((blockIdx.x*18432) + (floordiv((threadIdx.x_2 + 70), 24)*4608)) + cse_var_2) + (floordiv(floormod((threadIdx.x_2 + 22), 24), 3)*9)) + cse_var_1) + floormod((threadIdx.x_2 + 1), 3))]
-              attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 14;
-              if @tir.likely((threadIdx.x_2 < 12), dtype=bool) {
-                kernel.shared_1[(threadIdx.x_2 + 84)] = kernel_3[((((((blockIdx.x*18432) + (floordiv((threadIdx.x_2 + 84), 24)*4608)) + cse_var_2) + ((floordiv(threadIdx.x_2, 3) + 4)*9)) + cse_var_1) + floormod(threadIdx.x_2, 3))]
-              }
-              for (rc.outer.inner: int32, 0, 2) {
-                for (rc.inner: int32, 0, 4) {
-                  conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9))]*kernel.shared_1[(((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3))]))
-                  conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9))]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 48)]))
-                  conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 1)]*kernel.shared_1[(((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3))]))
-                  conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 1)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 48)]))
-                  conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 2)]*kernel.shared_1[(((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3))]))
-                  conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 2)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 48)]))
-                  conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 3)]*kernel.shared_1[(((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3))]))
-                  conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 3)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 48)]))
-                  conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 4)]*kernel.shared_1[(((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3))]))
-                  conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 4)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 48)]))
-                  conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 5)]*kernel.shared_1[(((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3))]))
-                  conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 5)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 48)]))
-                  conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 6)]*kernel.shared_1[(((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3))]))
-                  conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 6)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 48)]))
-                  conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 1)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 1)]))
-                  conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 1)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 49)]))
-                  conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 2)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 1)]))
-                  conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 2)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 49)]))
-                  conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 3)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 1)]))
-                  conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 3)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 49)]))
-                  conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 4)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 1)]))
-                  conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 4)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 49)]))
-                  conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 5)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 1)]))
-                  conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 5)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 49)]))
-                  conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 6)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 1)]))
-                  conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 6)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 49)]))
-                  conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 7)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 1)]))
-                  conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 7)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 49)]))
-                  conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 2)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 2)]))
-                  conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 2)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 50)]))
-                  conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 3)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 2)]))
-                  conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 3)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 50)]))
-                  conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 4)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 2)]))
-                  conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 4)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 50)]))
-                  conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 5)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 2)]))
-                  conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 5)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 50)]))
-                  conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 6)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 2)]))
-                  conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 6)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 50)]))
-                  conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 7)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 2)]))
-                  conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 7)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 50)]))
-                  conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 8)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 2)]))
-                  conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 8)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 50)]))
+            attr [IterVar(threadIdx.x_1: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
+            pad_temp.shared_1: Buffer(pad_temp.shared, float32, [72], [], scope="shared")[threadIdx.x_1] = @tir.if_then_else(((((1 <= (ry.outer.outer + floormod(blockIdx.x, 7))) && ((ry.outer.outer + floormod(blockIdx.x, 7)) < 8)) && (1 <= floormod(threadIdx.x_1, 9))) && (floormod(threadIdx.x_1, 9) < 8)), data_3: Buffer(data_2, float32, [25088], [])[((((((rc.outer.outer*392) + (floordiv(threadIdx.x_1, 9)*49)) + (ry.outer.outer*7)) + (floormod(blockIdx.x, 7)*7)) + floormod(threadIdx.x_1,  [...]
+            attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
+            if @tir.likely((threadIdx.x_1 < 8), dtype=bool) {
+              pad_temp.shared_1[(threadIdx.x_1 + 64)] = @tir.if_then_else((((1 <= (ry.outer.outer + floormod(blockIdx.x, 7))) && ((ry.outer.outer + floormod(blockIdx.x, 7)) < 8)) && (threadIdx.x_1 < 7)), data_3[((((((rc.outer.outer*392) + (floordiv((threadIdx.x_1 + 64), 9)*49)) + (ry.outer.outer*7)) + (floormod(blockIdx.x, 7)*7)) + threadIdx.x_1) - 7)], 0f32, dtype=float32)
+            }
+            for (ax0.ax1.fused.ax2.fused.ax3.fused.outer.outer: int32, 0, 24) {
+              attr [IterVar(threadIdx.x_2: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
+              kernel.shared_1: Buffer(kernel.shared, float32, [1536], [], scope="shared")[((ax0.ax1.fused.ax2.fused.ax3.fused.outer.outer*64) + threadIdx.x_2)] = kernel_3: Buffer(kernel_2, float32, [2359296], [])[((((((floordiv(blockIdx.x, 7)*294912) + (floordiv(((ax0.ax1.fused.ax2.fused.ax3.fused.outer.outer*8) + floordiv(threadIdx.x_2, 8)), 3)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod(((ax0.ax1.fused.ax2.fused.ax3.fused.outer.outer*16) + threadIdx.x_2), 24), 3)*9)) + (ry.outer [...]
+            }
+            for (rc.outer.inner: int32, 0, 4) {
+              for (rx.outer.inner: int32, 0, 3) {
+                let cse_var_1: int32 = ((rc.outer.inner*18) + rx.outer.inner)
+                 {
+                  conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[cse_var_1]*kernel.shared_1[(((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner)]))
+                  conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(cse_var_1 + 9)]*kernel.shared_1[((((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner) + 3)]))
+                  conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(cse_var_1 + 1)]*kernel.shared_1[(((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner)]))
+                  conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(cse_var_1 + 10)]*kernel.shared_1[((((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner) + 3)]))
+                  conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(cse_var_1 + 2)]*kernel.shared_1[(((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner)]))
+                  conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(cse_var_1 + 11)]*kernel.shared_1[((((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner) + 3)]))
+                  conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(cse_var_1 + 3)]*kernel.shared_1[(((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner)]))
+                  conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(cse_var_1 + 12)]*kernel.shared_1[((((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner) + 3)]))
+                  conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(cse_var_1 + 4)]*kernel.shared_1[(((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner)]))
+                  conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(cse_var_1 + 13)]*kernel.shared_1[((((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner) + 3)]))
+                  conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(cse_var_1 + 5)]*kernel.shared_1[(((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner)]))
+                  conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(cse_var_1 + 14)]*kernel.shared_1[((((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner) + 3)]))
+                  conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(cse_var_1 + 6)]*kernel.shared_1[(((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner)]))
+                  conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(cse_var_1 + 15)]*kernel.shared_1[((((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner) + 3)]))
                 }
               }
             }
           }
         }
         for (i3.inner: int32, 0, 7) {
-          compute_3: Buffer(compute_2, float32, [25088], [])[(((blockIdx.x*196) + (threadIdx.x*7)) + i3.inner)] = max((conv2d_nchw_1[i3.inner] + bias_3: Buffer(bias_2, float32, [512], [])[((blockIdx.x*4) + floordiv(threadIdx.x, 7))]), 0f32)
-          compute_3[((((blockIdx.x*196) + (threadIdx.x*7)) + i3.inner) + 98)] = max((conv2d_nchw_1[(i3.inner + 7)] + bias_3[(((blockIdx.x*4) + floordiv(threadIdx.x, 7)) + 2)]), 0f32)
+          compute_3: Buffer(compute_2, float32, [25088], [])[((((floordiv(blockIdx.x, 7)*3136) + (threadIdx.x*49)) + (floormod(blockIdx.x, 7)*7)) + i3.inner)] = max((conv2d_nchw_1[i3.inner] + bias_3: Buffer(bias_2, float32, [512], [])[((floordiv(blockIdx.x, 7)*64) + threadIdx.x)]), 0f32)
         }
       }
     }
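
The lowered TIR in this hunk is the listing the tutorial prints after applying the best tuning record. A minimal sketch of how such a listing is produced, assuming ``sch`` and ``args`` come from ``task.apply_best(log_file)`` earlier in the tutorial:

.. code-block:: python

    import tvm

    # Lower the auto-scheduled compute to TIR and print it in simplified
    # form; this is the kind of listing shown in the diff above.
    print(tvm.lower(sch, args, simple_mode=True))
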
@@ -462,7 +345,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 0.274 ms
+    Execution time of this operator: 0.468 ms
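
The timing line above comes from TVM's device time evaluator; the regenerated docs simply record a different measurement (0.468 ms instead of 0.274 ms) for the newly tuned schedule. A minimal sketch of the measurement, assuming ``sch``, ``args``, and a CUDA ``target`` as in the tutorial, with hypothetical device arrays ``data_tvm``, ``weight_tvm``, and ``out_tvm``:

.. code-block:: python

    import numpy as np
    import tvm

    # Build the tuned schedule and time it on the GPU; min_repeat_ms keeps
    # each measurement long enough to amortize kernel launch overhead.
    func = tvm.build(sch, args, target)
    dev = tvm.cuda()
    evaluator = func.time_evaluator(func.entry_name, dev, min_repeat_ms=500)
    ms = np.median(evaluator(data_tvm, weight_tvm, out_tvm).results) * 1000
    print("Execution time of this operator: %.3f ms" % ms)
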
 
 
 
@@ -512,31 +395,31 @@ They can be used for debugging and learning the behavior of the auto-scheduler.
     conv2d_nchw_nn_o_o_o_o, conv2d_nchw_nn_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_nn_o_o_o_i, factor=1)
     conv2d_nchw_ff_o_i, conv2d_nchw_ff_i = s[conv2d_nchw].split(conv2d_nchw_ff, factor=1)
     conv2d_nchw_ff_o_o_i, conv2d_nchw_ff_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_i, factor=1)
-    conv2d_nchw_ff_o_o_o_i, conv2d_nchw_ff_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_i, factor=2)
-    conv2d_nchw_ff_o_o_o_o, conv2d_nchw_ff_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_o_i, factor=2)
+    conv2d_nchw_ff_o_o_o_i, conv2d_nchw_ff_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_i, factor=64)
+    conv2d_nchw_ff_o_o_o_o, conv2d_nchw_ff_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_o_i, factor=1)
     conv2d_nchw_yy_o_i, conv2d_nchw_yy_i = s[conv2d_nchw].split(conv2d_nchw_yy, factor=1)
     conv2d_nchw_yy_o_o_i, conv2d_nchw_yy_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_i, factor=1)
-    conv2d_nchw_yy_o_o_o_i, conv2d_nchw_yy_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_i, factor=7)
+    conv2d_nchw_yy_o_o_o_i, conv2d_nchw_yy_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_i, factor=1)
     conv2d_nchw_yy_o_o_o_o, conv2d_nchw_yy_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_o_i, factor=1)
-    conv2d_nchw_xx_o_i, conv2d_nchw_xx_i = s[conv2d_nchw].split(conv2d_nchw_xx, factor=7)
-    conv2d_nchw_xx_o_o_i, conv2d_nchw_xx_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_i, factor=1)
+    conv2d_nchw_xx_o_i, conv2d_nchw_xx_i = s[conv2d_nchw].split(conv2d_nchw_xx, factor=1)
+    conv2d_nchw_xx_o_o_i, conv2d_nchw_xx_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_i, factor=7)
     conv2d_nchw_xx_o_o_o_i, conv2d_nchw_xx_o_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_o_i, factor=1)
     conv2d_nchw_xx_o_o_o_o, conv2d_nchw_xx_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_o_o_i, factor=1)
-    conv2d_nchw_rc_o_i, conv2d_nchw_rc_i = s[conv2d_nchw].split(conv2d_nchw_rc, factor=4)
-    conv2d_nchw_rc_o_o, conv2d_nchw_rc_o_i = s[conv2d_nchw].split(conv2d_nchw_rc_o_i, factor=2)
+    conv2d_nchw_rc_o_i, conv2d_nchw_rc_i = s[conv2d_nchw].split(conv2d_nchw_rc, factor=2)
+    conv2d_nchw_rc_o_o, conv2d_nchw_rc_o_i = s[conv2d_nchw].split(conv2d_nchw_rc_o_i, factor=4)
     conv2d_nchw_ry_o_i, conv2d_nchw_ry_i = s[conv2d_nchw].split(conv2d_nchw_ry, factor=1)
     conv2d_nchw_ry_o_o, conv2d_nchw_ry_o_i = s[conv2d_nchw].split(conv2d_nchw_ry_o_i, factor=1)
-    conv2d_nchw_rx_o_i, conv2d_nchw_rx_i = s[conv2d_nchw].split(conv2d_nchw_rx, factor=3)
-    conv2d_nchw_rx_o_o, conv2d_nchw_rx_o_i = s[conv2d_nchw].split(conv2d_nchw_rx_o_i, factor=1)
+    conv2d_nchw_rx_o_i, conv2d_nchw_rx_i = s[conv2d_nchw].split(conv2d_nchw_rx, factor=1)
+    conv2d_nchw_rx_o_o, conv2d_nchw_rx_o_i = s[conv2d_nchw].split(conv2d_nchw_rx_o_i, factor=3)
     s[conv2d_nchw].reorder(conv2d_nchw_nn_o_o_o_o, conv2d_nchw_ff_o_o_o_o, conv2d_nchw_yy_o_o_o_o, conv2d_nchw_xx_o_o_o_o, conv2d_nchw_nn_o_o_o_i, conv2d_nchw_ff_o_o_o_i, conv2d_nchw_yy_o_o_o_i, conv2d_nchw_xx_o_o_o_i, conv2d_nchw_nn_o_o_i, conv2d_nchw_ff_o_o_i, conv2d_nchw_yy_o_o_i, conv2d_nchw_xx_o_o_i, conv2d_nchw_rc_o_o, conv2d_nchw_ry_o_o, conv2d_nchw_rx_o_o, conv2d_nchw_rc_o_i, conv2d_nchw_ry_o_i, conv2d_nchw_rx_o_i, conv2d_nchw_nn_o_i, conv2d_nchw_ff_o_i, conv2d_nchw_yy_o_i, conv2 [...]
     compute_i0_o_i, compute_i0_i = s[compute].split(compute_i0, factor=1)
     compute_i0_o_o_i, compute_i0_o_i = s[compute].split(compute_i0_o_i, factor=1)
     compute_i0_o_o_o, compute_i0_o_o_i = s[compute].split(compute_i0_o_o_i, factor=1)
     compute_i1_o_i, compute_i1_i = s[compute].split(compute_i1, factor=1)
-    compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=2)
-    compute_i1_o_o_o, compute_i1_o_o_i = s[compute].split(compute_i1_o_o_i, factor=2)
+    compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=64)
+    compute_i1_o_o_o, compute_i1_o_o_i = s[compute].split(compute_i1_o_o_i, factor=1)
     compute_i2_o_i, compute_i2_i = s[compute].split(compute_i2, factor=1)
-    compute_i2_o_o_i, compute_i2_o_i = s[compute].split(compute_i2_o_i, factor=7)
+    compute_i2_o_o_i, compute_i2_o_i = s[compute].split(compute_i2_o_i, factor=1)
     compute_i2_o_o_o, compute_i2_o_o_i = s[compute].split(compute_i2_o_o_i, factor=1)
     compute_i3_o_i, compute_i3_i = s[compute].split(compute_i3, factor=7)
     compute_i3_o_o_i, compute_i3_o_i = s[compute].split(compute_i3_o_i, factor=1)
@@ -559,14 +442,14 @@ They can be used for debugging and learning the behavior of the auto-scheduler.
     kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[kernel_shared].fuse(kernel_shared_ax0, kernel_shared_ax1, kernel_shared_ax2, kernel_shared_ax3)
     kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
     s[kernel_shared].vectorize(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
-    kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=14)
+    kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=64)
     s[kernel_shared].bind(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis("threadIdx.x"))
     pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[pad_temp_shared].fuse(pad_temp_shared_ax0, pad_temp_shared_ax1, pad_temp_shared_ax2, pad_temp_shared_ax3)
     pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
     s[pad_temp_shared].vectorize(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
-    pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=14)
+    pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=64)
     s[pad_temp_shared].bind(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis("threadIdx.x"))
-    s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, "auto_unroll_max_step", 64)
+    s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, "auto_unroll_max_step", 16)
     s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, "unroll_explicit", True)
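
Both this Python schedule and the CUDA kernel that follows are alternative printouts of the same best tuning record. A minimal sketch, assuming ``task`` and ``log_file`` are the search task and tuning log from earlier in the tutorial:

.. code-block:: python

    # Print the best record found during tuning in two human-readable forms.
    print(task.print_best(log_file, print_mode="schedule"))
    print(task.print_best(log_file, print_mode="cuda"))
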
 
     CUDA source code:
@@ -584,124 +467,50 @@ They can be used for debugging and learning the behavior of the auto-scheduler.
       #define int64_t long long
       #define uint64_t unsigned long long
     #endif
-    extern "C" __global__ void __launch_bounds__(14) default_function_kernel0(float* __restrict__ data, float* __restrict__ kernel, float* __restrict__ compute, float* __restrict__ bias) {
-      float conv2d_nchw[14];
-      __shared__ float pad_temp_shared[504];
-      __shared__ float kernel_shared[96];
+    extern "C" __global__ void __launch_bounds__(64) default_function_kernel0(float* __restrict__ data, float* __restrict__ kernel, float* __restrict__ compute, float* __restrict__ bias) {
+      float conv2d_nchw[7];
+      __shared__ float pad_temp_shared[72];
+      __shared__ float kernel_shared[1536];
       conv2d_nchw[0] = 0.000000e+00f;
-      conv2d_nchw[7] = 0.000000e+00f;
       conv2d_nchw[1] = 0.000000e+00f;
-      conv2d_nchw[8] = 0.000000e+00f;
       conv2d_nchw[2] = 0.000000e+00f;
-      conv2d_nchw[9] = 0.000000e+00f;
       conv2d_nchw[3] = 0.000000e+00f;
-      conv2d_nchw[10] = 0.000000e+00f;
       conv2d_nchw[4] = 0.000000e+00f;
-      conv2d_nchw[11] = 0.000000e+00f;
       conv2d_nchw[5] = 0.000000e+00f;
-      conv2d_nchw[12] = 0.000000e+00f;
       conv2d_nchw[6] = 0.000000e+00f;
-      conv2d_nchw[13] = 0.000000e+00f;
       for (int rc_outer_outer = 0; rc_outer_outer < 64; ++rc_outer_outer) {
         for (int ry_outer_outer = 0; ry_outer_outer < 3; ++ry_outer_outer) {
           __syncthreads();
-          pad_temp_shared[((int)threadIdx.x)] = ((((1 <= ((((int)threadIdx.x) / 9) + ry_outer_outer)) && (1 <= (((int)threadIdx.x) % 9))) && ((((int)threadIdx.x) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + ((((int)threadIdx.x) / 9) * 7)) + (ry_outer_outer * 7)) + (((int)threadIdx.x) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 14)] = (((1 <= ((((int)threadIdx.x) + 5) % 9)) && (((((int)threadIdx.x) + 5) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 14) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 5) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 28)] = (((1 <= ((((int)threadIdx.x) + 1) % 9)) && (((((int)threadIdx.x) + 1) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 28) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 1) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 42)] = (((((((((int)threadIdx.x) + 42) / 9) + ry_outer_outer) < 8) && (1 <= ((((int)threadIdx.x) + 6) % 9))) && (((((int)threadIdx.x) + 6) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 42) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 6) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 56)] = (((((1 <= ((((((int)threadIdx.x) + 56) % 63) / 9) + ry_outer_outer)) && (((((((int)threadIdx.x) + 56) % 63) / 9) + ry_outer_outer) < 8)) && (1 <= ((((int)threadIdx.x) + 2) % 9))) && (((((int)threadIdx.x) + 2) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 56) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 2) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 70)] = ((((1 <= (((((int)threadIdx.x) + 7) / 9) + ry_outer_outer)) && (1 <= ((((int)threadIdx.x) + 7) % 9))) && (((((int)threadIdx.x) + 7) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 70) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 7) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 84)] = (((1 <= ((((int)threadIdx.x) + 3) % 9)) && (((((int)threadIdx.x) + 3) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 84) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 3) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 98)] = (((1 <= ((((int)threadIdx.x) + 8) % 9)) && (((((int)threadIdx.x) + 8) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 98) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 8) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 112)] = (((((((((int)threadIdx.x) + 49) / 9) + ry_outer_outer) < 8) && (1 <= ((((int)threadIdx.x) + 4) % 9))) && (((((int)threadIdx.x) + 4) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 112) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 4) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 126)] = ((((1 <= ((((int)threadIdx.x) / 9) + ry_outer_outer)) && (1 <= (((int)threadIdx.x) % 9))) && ((((int)threadIdx.x) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + ((((int)threadIdx.x) / 9) * 7)) + (ry_outer_outer * 7)) + (((int)threadIdx.x) % 9)) + 90)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 140)] = (((1 <= ((((int)threadIdx.x) + 5) % 9)) && (((((int)threadIdx.x) + 5) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 140) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 5) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 154)] = (((1 <= ((((int)threadIdx.x) + 1) % 9)) && (((((int)threadIdx.x) + 1) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 154) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 1) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 168)] = (((((((((int)threadIdx.x) + 42) / 9) + ry_outer_outer) < 8) && (1 <= ((((int)threadIdx.x) + 6) % 9))) && (((((int)threadIdx.x) + 6) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 168) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 6) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 182)] = (((((1 <= ((((((int)threadIdx.x) + 56) % 63) / 9) + ry_outer_outer)) && (((((((int)threadIdx.x) + 56) % 63) / 9) + ry_outer_outer) < 8)) && (1 <= ((((int)threadIdx.x) + 2) % 9))) && (((((int)threadIdx.x) + 2) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 182) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 2) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 196)] = ((((1 <= (((((int)threadIdx.x) + 7) / 9) + ry_outer_outer)) && (1 <= ((((int)threadIdx.x) + 7) % 9))) && (((((int)threadIdx.x) + 7) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 196) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 7) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 210)] = (((1 <= ((((int)threadIdx.x) + 3) % 9)) && (((((int)threadIdx.x) + 3) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 210) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 3) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 224)] = (((1 <= ((((int)threadIdx.x) + 8) % 9)) && (((((int)threadIdx.x) + 8) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 224) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 8) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 238)] = (((((((((int)threadIdx.x) + 49) / 9) + ry_outer_outer) < 8) && (1 <= ((((int)threadIdx.x) + 4) % 9))) && (((((int)threadIdx.x) + 4) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 238) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 4) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 252)] = ((((1 <= ((((int)threadIdx.x) / 9) + ry_outer_outer)) && (1 <= (((int)threadIdx.x) % 9))) && ((((int)threadIdx.x) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + ((((int)threadIdx.x) / 9) * 7)) + (ry_outer_outer * 7)) + (((int)threadIdx.x) % 9)) + 188)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 266)] = (((1 <= ((((int)threadIdx.x) + 5) % 9)) && (((((int)threadIdx.x) + 5) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 266) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 5) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 280)] = (((1 <= ((((int)threadIdx.x) + 1) % 9)) && (((((int)threadIdx.x) + 1) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 280) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 1) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 294)] = (((((((((int)threadIdx.x) + 42) / 9) + ry_outer_outer) < 8) && (1 <= ((((int)threadIdx.x) + 6) % 9))) && (((((int)threadIdx.x) + 6) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 294) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 6) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 308)] = (((((1 <= ((((((int)threadIdx.x) + 56) % 63) / 9) + ry_outer_outer)) && (((((((int)threadIdx.x) + 56) % 63) / 9) + ry_outer_outer) < 8)) && (1 <= ((((int)threadIdx.x) + 2) % 9))) && (((((int)threadIdx.x) + 2) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 308) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 2) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 322)] = ((((1 <= (((((int)threadIdx.x) + 7) / 9) + ry_outer_outer)) && (1 <= ((((int)threadIdx.x) + 7) % 9))) && (((((int)threadIdx.x) + 7) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 322) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 7) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 336)] = (((1 <= ((((int)threadIdx.x) + 3) % 9)) && (((((int)threadIdx.x) + 3) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 336) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 3) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 350)] = (((1 <= ((((int)threadIdx.x) + 8) % 9)) && (((((int)threadIdx.x) + 8) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 350) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 8) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 364)] = (((((((((int)threadIdx.x) + 49) / 9) + ry_outer_outer) < 8) && (1 <= ((((int)threadIdx.x) + 4) % 9))) && (((((int)threadIdx.x) + 4) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 364) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 4) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 378)] = ((((1 <= ((((int)threadIdx.x) / 9) + ry_outer_outer)) && (1 <= (((int)threadIdx.x) % 9))) && ((((int)threadIdx.x) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + ((((int)threadIdx.x) / 9) * 7)) + (ry_outer_outer * 7)) + (((int)threadIdx.x) % 9)) + 286)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 392)] = (((1 <= ((((int)threadIdx.x) + 5) % 9)) && (((((int)threadIdx.x) + 5) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 392) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 5) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 406)] = (((1 <= ((((int)threadIdx.x) + 1) % 9)) && (((((int)threadIdx.x) + 1) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 406) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 1) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 420)] = (((((((((int)threadIdx.x) + 42) / 9) + ry_outer_outer) < 8) && (1 <= ((((int)threadIdx.x) + 6) % 9))) && (((((int)threadIdx.x) + 6) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 420) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 6) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 434)] = (((((1 <= ((((((int)threadIdx.x) + 56) % 63) / 9) + ry_outer_outer)) && (((((((int)threadIdx.x) + 56) % 63) / 9) + ry_outer_outer) < 8)) && (1 <= ((((int)threadIdx.x) + 2) % 9))) && (((((int)threadIdx.x) + 2) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 434) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 2) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 448)] = ((((1 <= (((((int)threadIdx.x) + 7) / 9) + ry_outer_outer)) && (1 <= ((((int)threadIdx.x) + 7) % 9))) && (((((int)threadIdx.x) + 7) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 448) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 7) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 462)] = (((1 <= ((((int)threadIdx.x) + 3) % 9)) && (((((int)threadIdx.x) + 3) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 462) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 3) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 476)] = (((1 <= ((((int)threadIdx.x) + 8) % 9)) && (((((int)threadIdx.x) + 8) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 476) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 8) % 9)) - 8)] : 0.000000e+00f);
-          pad_temp_shared[(((int)threadIdx.x) + 490)] = (((((((((int)threadIdx.x) + 49) / 9) + ry_outer_outer) < 8) && (1 <= ((((int)threadIdx.x) + 4) % 9))) && (((((int)threadIdx.x) + 4) % 9) < 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 490) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 4) % 9)) - 8)] : 0.000000e+00f);
-          kernel_shared[((int)threadIdx.x)] = kernel[(((((((int)blockIdx.x) * 18432) + (rc_outer_outer * 72)) + ((((int)threadIdx.x) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3))];
-          kernel_shared[(((int)threadIdx.x) + 14)] = kernel[((((((((int)blockIdx.x) * 18432) + (((((int)threadIdx.x) + 14) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 14) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
-          kernel_shared[(((int)threadIdx.x) + 28)] = kernel[((((((((int)blockIdx.x) * 18432) + (((((int)threadIdx.x) + 28) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) + 4) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
-          kernel_shared[(((int)threadIdx.x) + 42)] = kernel[((((((((int)blockIdx.x) * 18432) + (((((int)threadIdx.x) + 42) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) / 3) + 6) & 7) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3))];
-          kernel_shared[(((int)threadIdx.x) + 56)] = kernel[((((((((int)blockIdx.x) * 18432) + (((((int)threadIdx.x) + 56) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) + 8) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
-          kernel_shared[(((int)threadIdx.x) + 70)] = kernel[((((((((int)blockIdx.x) * 18432) + (((((int)threadIdx.x) + 70) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 22) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
-          if (((int)threadIdx.x) < 12) {
-            kernel_shared[(((int)threadIdx.x) + 84)] = kernel[(((((((((int)blockIdx.x) * 18432) + (((((int)threadIdx.x) + 84) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((int)threadIdx.x) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 36)];
+          pad_temp_shared[((int)threadIdx.x)] = (((((1 <= (ry_outer_outer + (((int)blockIdx.x) % 7))) && ((ry_outer_outer + (((int)blockIdx.x) % 7)) < 8)) && (1 <= (((int)threadIdx.x) % 9))) && ((((int)threadIdx.x) % 9) < 8)) ? data[((((((rc_outer_outer * 392) + ((((int)threadIdx.x) / 9) * 49)) + (ry_outer_outer * 7)) + ((((int)blockIdx.x) % 7) * 7)) + (((int)threadIdx.x) % 9)) - 8)] : 0.000000e+00f);
+          if (((int)threadIdx.x) < 8) {
+            pad_temp_shared[(((int)threadIdx.x) + 64)] = ((((1 <= (ry_outer_outer + (((int)blockIdx.x) % 7))) && ((ry_outer_outer + (((int)blockIdx.x) % 7)) < 8)) && (((int)threadIdx.x) < 7)) ? data[((((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 64) / 9) * 49)) + (ry_outer_outer * 7)) + ((((int)blockIdx.x) % 7) * 7)) + ((int)threadIdx.x)) - 7)] : 0.000000e+00f);
+          }
+          for (int ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer = 0; ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer < 24; ++ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer) {
+            kernel_shared[((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 64) + ((int)threadIdx.x))] = kernel[(((((((((int)blockIdx.x) / 7) * 294912) + ((((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 8) + (((int)threadIdx.x) >> 3)) / 3) * 4608)) + (rc_outer_outer * 72)) + (((((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 16) + ((int)threadIdx.x)) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer) % 3))];
           }
           __syncthreads();
-          for (int rc_outer_inner = 0; rc_outer_inner < 2; ++rc_outer_inner) {
-            for (int rc_inner = 0; rc_inner < 4; ++rc_inner) {
-              conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[(((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9))] * kernel_shared[((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3))]));
-              conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[(((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9))] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 48)]));
-              conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 1)] * kernel_shared[((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3))]));
-              conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 1)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 48)]));
-              conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 2)] * kernel_shared[((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3))]));
-              conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 2)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 48)]));
-              conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 3)] * kernel_shared[((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3))]));
-              conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 3)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 48)]));
-              conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 4)] * kernel_shared[((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3))]));
-              conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 4)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 48)]));
-              conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 5)] * kernel_shared[((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3))]));
-              conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 5)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 48)]));
-              conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 6)] * kernel_shared[((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3))]));
-              conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 6)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 48)]));
-              conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 1)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 1)]));
-              conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 1)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 49)]));
-              conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 2)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 1)]));
-              conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 2)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 49)]));
-              conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 3)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 1)]));
-              conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 3)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 49)]));
-              conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 4)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 1)]));
-              conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 4)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 49)]));
-              conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 5)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 1)]));
-              conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 5)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 49)]));
-              conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 6)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 1)]));
-              conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 6)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 49)]));
-              conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 7)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 1)]));
-              conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 7)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 49)]));
-              conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 2)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 2)]));
-              conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 2)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 50)]));
-              conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 3)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 2)]));
-              conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 3)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 50)]));
-              conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 4)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 2)]));
-              conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 4)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 50)]));
-              conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 5)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 2)]));
-              conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 5)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 50)]));
-              conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 6)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 2)]));
-              conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 6)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 50)]));
-              conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 7)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 2)]));
-              conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 7)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 50)]));
-              conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 8)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 2)]));
-              conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 8)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 50)]));
+          for (int rc_outer_inner = 0; rc_outer_inner < 4; ++rc_outer_inner) {
+            for (int rx_outer_inner = 0; rx_outer_inner < 3; ++rx_outer_inner) {
+              conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((rc_outer_inner * 18) + rx_outer_inner)] * kernel_shared[(((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner)]));
+              conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 9)] * kernel_shared[((((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner) + 3)]));
+              conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 1)] * kernel_shared[(((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner)]));
+              conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 10)] * kernel_shared[((((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner) + 3)]));
+              conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 2)] * kernel_shared[(((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner)]));
+              conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 11)] * kernel_shared[((((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner) + 3)]));
+              conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 3)] * kernel_shared[(((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner)]));
+              conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 12)] * kernel_shared[((((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner) + 3)]));
+              conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 4)] * kernel_shared[(((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner)]));
+              conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 13)] * kernel_shared[((((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner) + 3)]));
+              conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 5)] * kernel_shared[(((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner)]));
+              conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 14)] * kernel_shared[((((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner) + 3)]));
+              conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 6)] * kernel_shared[(((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner)]));
+              conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 15)] * kernel_shared[((((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner) + 3)]));
             }
           }
         }
       }
       for (int i3_inner = 0; i3_inner < 7; ++i3_inner) {
-        compute[(((((int)blockIdx.x) * 196) + (((int)threadIdx.x) * 7)) + i3_inner)] = max((conv2d_nchw[i3_inner] + bias[((((int)blockIdx.x) * 4) + (((int)threadIdx.x) / 7))]), 0.000000e+00f);
-        compute[((((((int)blockIdx.x) * 196) + (((int)threadIdx.x) * 7)) + i3_inner) + 98)] = max((conv2d_nchw[(i3_inner + 7)] + bias[(((((int)blockIdx.x) * 4) + (((int)threadIdx.x) / 7)) + 2)]), 0.000000e+00f);
+        compute[(((((((int)blockIdx.x) / 7) * 3136) + (((int)threadIdx.x) * 49)) + ((((int)blockIdx.x) % 7) * 7)) + i3_inner)] = max((conv2d_nchw[i3_inner] + bias[(((((int)blockIdx.x) / 7) * 64) + ((int)threadIdx.x))]), 0.000000e+00f);
       }
     }
 
@@ -763,7 +572,7 @@ In the example below we resume the status and run 5 more trials.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 5 minutes  46.501 seconds)
+   **Total running time of the script:** ( 5 minutes  35.119 seconds)
 
 
 .. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py:
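
For reference, the "resume" step that the tuned kernel above comes out of can be
sketched with the public ``tvm.auto_scheduler`` API. This is a minimal sketch,
assuming ``task`` and ``log_file`` are the search task and record file created
earlier in the tutorial:

.. code-block:: python

    from tvm import auto_scheduler

    def resume_search(task, log_file):
        # Rebuild the cost model from the measurement records gathered so far.
        cost_model = auto_scheduler.XGBModel()
        cost_model.update_from_file(log_file)
        # Preload the measured states so the search continues where it stopped.
        search_policy = auto_scheduler.SketchPolicy(
            task,
            cost_model,
            init_search_callbacks=[auto_scheduler.PreloadMeasuredStates(log_file)],
        )
        tune_option = auto_scheduler.TuningOptions(
            num_measure_trials=5,  # the 5 extra trials mentioned above
            measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
        )
        task.tune(tune_option, search_policy=search_policy)
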
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
index e8dfef5f2e..17077e1990 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
@@ -647,7 +647,7 @@ so we can read the log file and load the best schedules.
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-       7.9208       7.9243       7.9282       7.9099       0.0079   
+       7.8772       7.8747       7.8854       7.8715       0.0059   
                
 
 
@@ -675,7 +675,7 @@ Other Tips
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  5.461 seconds)
+   **Total running time of the script:** ( 1 minutes  5.720 seconds)
 
 
 .. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_network_cuda.py:
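
The "read the log file and load the best schedules" step that produces the
timing summary above can be sketched as follows (a minimal sketch, assuming
``mod``, ``params``, ``target``, and ``log_file`` from the tutorial):

.. code-block:: python

    import tvm
    from tvm import auto_scheduler, relay

    # Apply the best schedules found during tuning while compiling the network.
    with auto_scheduler.ApplyHistoryBest(log_file):
        with tvm.transform.PassContext(
            opt_level=3, config={"relay.backend.use_auto_scheduler": True}
        ):
            lib = relay.build(mod, target=target, params=params)
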
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
index a46e457ca8..b8dd36eb5f 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
@@ -666,7 +666,7 @@ so we can read the log file and load the best schedules.
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      761.4538     761.2513     762.2114     760.8987      0.5547   
+      767.7132     768.2357     768.4878     766.4161      0.9229   
                
 
 
@@ -694,7 +694,7 @@ Other Tips
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  39.774 seconds)
+   **Total running time of the script:** ( 1 minutes  39.534 seconds)
 
 
 .. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_network_x86.py:
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_sparse_x86.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_sparse_x86.rst.txt
index 52b2d2eb76..2938afe4b6 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_sparse_x86.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_sparse_x86.rst.txt
@@ -390,79 +390,29 @@ layout transformation, parallelization, vectorization, unrolling, and operator f
                  placeholder_4: Buffer(placeholder_14: Pointer(float32), float32, [128, 512], []),
                  compute: Buffer(compute_2: Pointer(float32), float32, [128, 512], [])}
       buffer_map = {placeholder_5: placeholder, placeholder_6: placeholder_1, placeholder_7: placeholder_2, placeholder_8: placeholder_3, placeholder_9: placeholder_4, compute_1: compute} {
-      for (i0.outer.i1.outer.fused: int32, 0, 32) "parallel" {
-        allocate(compute_3: Pointer(global float32), float32, [2048]), storage_scope = global {
-          for (i.outer.inner: int32, 0, 8) {
-            for (nb_j.inner: int32, 0, 2) {
-              for (i.inner.init: int32, 0, 8) {
-                let cse_var_1: int32 = (((i.outer.inner*256) + (i.inner.init*32)) + (nb_j.inner*16))
-                 {
-                  compute_4: Buffer(compute_3, float32, [2048], [])[cse_var_1] = 0f32
-                  compute_4[(cse_var_1 + 1)] = 0f32
-                  compute_4[(cse_var_1 + 2)] = 0f32
-                  compute_4[(cse_var_1 + 3)] = 0f32
-                  compute_4[(cse_var_1 + 4)] = 0f32
-                  compute_4[(cse_var_1 + 5)] = 0f32
-                  compute_4[(cse_var_1 + 6)] = 0f32
-                  compute_4[(cse_var_1 + 7)] = 0f32
-                  compute_4[(cse_var_1 + 8)] = 0f32
-                  compute_4[(cse_var_1 + 9)] = 0f32
-                  compute_4[(cse_var_1 + 10)] = 0f32
-                  compute_4[(cse_var_1 + 11)] = 0f32
-                  compute_4[(cse_var_1 + 12)] = 0f32
-                  compute_4[(cse_var_1 + 13)] = 0f32
-                  compute_4[(cse_var_1 + 14)] = 0f32
-                  compute_4[(cse_var_1 + 15)] = 0f32
-                }
+      for (i0.outer: int32, 0, 8) "parallel" {
+        allocate(compute_3: Pointer(global float32), float32, [256]), storage_scope = global;
+        for (i1.outer: int32, 0, 64) {
+          for (i.outer.inner: int32, 0, 2) {
+            for (i.inner.init: int32, 0, 8) {
+              for (j.init: int32, 0, 16) {
+                compute_4: Buffer(compute_3, float32, [256], [])[(((i.outer.inner*128) + (i.inner.init*16)) + j.init)] = 0f32
               }
-              for (elem_idx: int32, 0, let cse_var_2: int32 = ((floormod(i0.outer.i1.outer.fused, 16)*2) + nb_j.inner) in (placeholder_15: Buffer(placeholder_13, int32, [33], [])[(cse_var_2 + 1)] - placeholder_15[cse_var_2])) {
-                for (i.inner: int32, 0, 8) {
-                  let cse_var_21: int32 = (elem_idx*16)
-                  let cse_var_20: int32 = ((floormod(i0.outer.i1.outer.fused, 16)*2) + nb_j.inner)
-                  let cse_var_19: int32 = (((i.outer.inner*256) + (i.inner*32)) + (nb_j.inner*16))
-                  let cse_var_18: int32 = (((floordiv(i0.outer.i1.outer.fused, 16)*16384) + (i.outer.inner*2048)) + (i.inner*256))
-                  let cse_var_17: int32 = (cse_var_19 + 9)
-                  let cse_var_16: int32 = (cse_var_19 + 8)
-                  let cse_var_15: int32 = (cse_var_19 + 7)
-                  let cse_var_14: int32 = (cse_var_19 + 6)
-                  let cse_var_13: int32 = (cse_var_19 + 5)
-                  let cse_var_12: int32 = (cse_var_19 + 4)
-                  let cse_var_11: int32 = (cse_var_19 + 3)
-                  let cse_var_10: int32 = (cse_var_19 + 2)
-                  let cse_var_9: int32 = (cse_var_19 + 15)
-                  let cse_var_8: int32 = (cse_var_19 + 14)
-                  let cse_var_7: int32 = (cse_var_19 + 13)
-                  let cse_var_6: int32 = (cse_var_19 + 12)
-                  let cse_var_5: int32 = (cse_var_19 + 11)
-                  let cse_var_4: int32 = (cse_var_19 + 10)
-                  let cse_var_3: int32 = (cse_var_19 + 1)
-                   {
-                    compute_4[cse_var_19] = (compute_4[cse_var_19] + (placeholder_16: Buffer(placeholder_11, float32, [78656], [])[((placeholder_15[cse_var_20]*16) + cse_var_21)]*max(placeholder_17: Buffer(placeholder_10, float32, [32768], [])[(cse_var_18 + placeholder_18: Buffer(placeholder_12, int32, [4916], [])[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                    compute_4[cse_var_3] = (compute_4[cse_var_3] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 1)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                    compute_4[cse_var_10] = (compute_4[cse_var_10] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 2)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                    compute_4[cse_var_11] = (compute_4[cse_var_11] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 3)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                    compute_4[cse_var_12] = (compute_4[cse_var_12] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 4)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                    compute_4[cse_var_13] = (compute_4[cse_var_13] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 5)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                    compute_4[cse_var_14] = (compute_4[cse_var_14] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 6)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                    compute_4[cse_var_15] = (compute_4[cse_var_15] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 7)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                    compute_4[cse_var_16] = (compute_4[cse_var_16] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 8)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                    compute_4[cse_var_17] = (compute_4[cse_var_17] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 9)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                    compute_4[cse_var_4] = (compute_4[cse_var_4] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 10)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                    compute_4[cse_var_5] = (compute_4[cse_var_5] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 11)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                    compute_4[cse_var_6] = (compute_4[cse_var_6] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 12)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                    compute_4[cse_var_7] = (compute_4[cse_var_7] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 13)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                    compute_4[cse_var_8] = (compute_4[cse_var_8] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 14)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                    compute_4[cse_var_9] = (compute_4[cse_var_9] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 15)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                  }
+            }
+            for (elem_idx: int32, 0, let cse_var_1: int32 = floordiv(i1.outer, 2) in (placeholder_15: Buffer(placeholder_13, int32, [33], [])[(cse_var_1 + 1)] - placeholder_15[cse_var_1])) {
+              for (i.inner: int32, 0, 8) {
+                for (j: int32, 0, 16) {
+                  let cse_var_3: int32 = floordiv(i1.outer, 2)
+                  let cse_var_2: int32 = (((i.outer.inner*128) + (i.inner*16)) + j)
+                  compute_4[cse_var_2] = (compute_4[cse_var_2] + (placeholder_16: Buffer(placeholder_11, float32, [78656], [])[(((placeholder_15[cse_var_3]*16) + (elem_idx*16)) + j)]*max(placeholder_17: Buffer(placeholder_10, float32, [32768], [])[((((i0.outer*4096) + (i.outer.inner*2048)) + (i.inner*256)) + placeholder_18: Buffer(placeholder_12, int32, [4916], [])[(placeholder_15[cse_var_3] + elem_idx)])], 0f32)))
                 }
               }
             }
           }
-          for (i0.inner: int32, 0, 64) {
-            for (i1.inner: int32, 0, 32) {
-              let cse_var_22: int32 = ((((floordiv(i0.outer.i1.outer.fused, 16)*32768) + (i0.inner*512)) + (floormod(i0.outer.i1.outer.fused, 16)*32)) + i1.inner)
-              compute_5: Buffer(compute_2, float32, [65536], [])[cse_var_22] = max((compute_4[((i0.inner*32) + i1.inner)] + placeholder_19: Buffer(placeholder_14, float32, [65536], [])[cse_var_22]), 0f32)
-            }
+          for (i0.inner: int32, 0, 16) {
+            let cse_var_5: int32 = (i1.outer*8)
+            let cse_var_4: int32 = (((i0.outer*8192) + (i0.inner*512)) + cse_var_5)
+            compute_5: Buffer(compute_2, float32, [65536], [])[ramp(cse_var_4, 1, 8)] = max((compute_4[ramp((((i0.inner*16) + cse_var_5) - (floordiv(i1.outer, 2)*16)), 1, 8)] + placeholder_19: Buffer(placeholder_14, float32, [65536], [])[ramp(cse_var_4, 1, 8)]), broadcast(0f32, 8))
           }
         }
       }
@@ -518,7 +468,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 1.847 ms
+    Execution time of this operator: 3.041 ms
 
 
 
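The correctness-and-performance check that reports the number above boils down
to a ``time_evaluator`` call. A minimal sketch, assuming ``func``, ``dev``, the
NumPy reference ``Y_np``, and the input/output buffers (the ``*_tvm`` names
below are placeholders for the tutorial's arrays):

.. code-block:: python

    import numpy as np
    import tvm.testing

    # Verify the result against the NumPy reference before timing it.
    tvm.testing.assert_allclose(Y_tvm.numpy(), Y_np, atol=1e-4, rtol=1e-4)

    # The median of several timed runs gives the "Execution time" line above.
    evaluator = func.time_evaluator(func.entry_name, dev, repeat=10, min_repeat_ms=500)
    results = evaluator(X_tvm, W_data_tvm, W_indices_tvm, W_indptr_tvm, B_tvm, Y_tvm).results
    print("Execution time of this operator: %.3f ms" % (np.median(results) * 1000))
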
diff --git a/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt b/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
index da105c83b5..7175cdaf43 100644
--- a/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:48.521** total execution time for **how_to_tune_with_autotvm** files:
+**00:47.871** total execution time for **how_to_tune_with_autotvm** files:
 
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)           | 00:48.487 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)           | 00:47.838 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``)               | 00:00.020 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
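
The per-trial records that follow ("No: N  GFLOPS: ...") come from an AutoTVM
tuning loop: the second GFLOPS number tracks the best result so far, and the
``InstantiationError`` tracebacks are configs that the GPU verification pass
rejects as invalid kernels rather than genuine failures. A minimal sketch of
the loop, assuming ``task`` and the ``conv2d.log`` file name from the tutorial:

.. code-block:: python

    from tvm import autotvm

    # Build locally, measure on the local GPU; slow or invalid configs time out.
    measure_option = autotvm.measure_option(
        builder=autotvm.LocalBuilder(),
        runner=autotvm.LocalRunner(repeat=3, min_repeat_ms=100, timeout=4),
    )
    tuner = autotvm.tuner.XGBTuner(task)
    tuner.tune(
        n_trial=20,
        measure_option=measure_option,
        callbacks=[autotvm.callback.log_to_file("conv2d.log")],
    )
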
diff --git a/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt b/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
index 5a15068be8..be73a249ef 100644
--- a/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
@@ -268,8 +268,7 @@ for this template
     waiting for device...
     device available
     Get devices for measurement successfully!
-    No: 1   GFLOPS: 6.13/6.13       result: MeasureResult(costs=(0.037739070750000006,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.1840627193450928, timestamp=1673986430.7595634)       [('tile_f', [-1, 1, 4, 16]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 8, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1175635
-    No: 2   GFLOPS: 0.00/6.13       result: Traceback (most recent call last):
+    No: 1   GFLOPS: 0.00/0.00       result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -391,8 +390,9 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 512, 1, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 256, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3514729
-    No: 3   GFLOPS: 0.00/6.13       result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 2, 128, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 32, 2]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6250690
+    No: 2   GFLOPS: 1.83/1.83       result: MeasureResult(costs=(0.12647833249999998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.213264465332031, timestamp=1673995932.0615451) [('tile_f', [-1, 16, 4, 8]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 1, 2]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2941333
+    No: 3   GFLOPS: 0.00/1.83       result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -514,8 +514,9 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 32, 2, 4]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 2, 4]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2587313
-    No: 4   GFLOPS: 0.00/6.13       result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 4, 4, 32]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 256, 2]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7036456
+    No: 4   GFLOPS: 27.41/27.41     result: MeasureResult(costs=(0.008446498285714286,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.635021686553955, timestamp=1673995935.0071206)        [('tile_f', [-1, 8, 1, 4]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 16, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,5338403
+    No: 5   GFLOPS: 0.00/27.41      result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -637,8 +638,8 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 128, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 2, 4]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7429889
-    No: 5   GFLOPS: 0.00/6.13       result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 64, 4, 2]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 16, 16]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6524398
+    No: 6   GFLOPS: 0.00/27.41      result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -760,9 +761,8 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 256, 2]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 1, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6484379
-    No: 6   GFLOPS: 49.72/49.72     result: MeasureResult(costs=(0.004656136590909092,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.4881622791290283, timestamp=1673986438.2100587)       [('tile_f', [-1, 8, 1, 8]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 1, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,8807619
-    No: 7   GFLOPS: 0.00/49.72      result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 32, 8]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 128, 2]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3934421
+    No: 7   GFLOPS: 0.00/27.41      result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -884,161 +884,151 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 128, 1, 4]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 2, 2]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7784147
-    No: 8   GFLOPS: 0.00/49.72      result: Traceback (most recent call last):
-      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 742, in __call__
-        yield remote, remote.load_module(os.path.split(build_result.filename)[1])
-      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 706, in run_through_rpc
-        costs = time_f(*args).results
-      File "/workspace/python/tvm/runtime/module.py", line 357, in evaluator
-        blob = feval(*args)
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 8, 4]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 256, 2]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,259721
+    No: 8   GFLOPS: 2.51/27.41      result: MeasureResult(costs=(0.09205633525000001,), error_no=MeasureErrorNo.NO_ERROR, all_cost=10.663654088973999, timestamp=1673995946.8162959)        [('tile_f', [-1, 16, 4, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 8, 2]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3531736
+    No: 9   GFLOPS: 0.00/27.41      result: Traceback (most recent call last):
+      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
+        func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
+      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
+        func = build(s, args, target_host=task.target_host, runtime=runtime)
+      File "/workspace/python/tvm/driver/build_module.py", line 227, in build
+        input_mod = lower(inputs, args, name=name, binds=binds)
+      File "/workspace/python/tvm/driver/build_module.py", line 134, in lower
+        return ffi.lower_schedule(inp, args, name, binds, simple_mode)
       File "tvm/_ffi/_cython/./packed_func.pxi", line 331, in tvm._ffi._cy3.core.PackedFuncBase.__call__
-      File "tvm/_ffi/_cython/./packed_func.pxi", line 262, in tvm._ffi._cy3.core.FuncCall
-      File "tvm/_ffi/_cython/./packed_func.pxi", line 251, in tvm._ffi._cy3.core.FuncCall3
+      File "tvm/_ffi/_cython/./packed_func.pxi", line 276, in tvm._ffi._cy3.core.FuncCall
       File "tvm/_ffi/_cython/./base.pxi", line 181, in tvm._ffi._cy3.core.CHECK_CALL
     tvm._ffi.base.TVMError: Traceback (most recent call last):
-      4: TVMFuncCall
+      24: TVMFuncCall
             at ../src/runtime/c_runtime_api.cc:477
-      3: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+      23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
             at ../include/tvm/runtime/packed_func.h:1217
-      2: tvm::runtime::RPCWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
-            at ../src/runtime/rpc/rpc_module.cc:129
-      1: tvm::runtime::RPCClientSession::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)> const&)
-            at ../src/runtime/rpc/rpc_endpoint.cc:1012
-      0: tvm::runtime::RPCEndpoint::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)>)
-            at ../src/runtime/rpc/rpc_endpoint.cc:804
-      File "../src/runtime/rpc/rpc_endpoint.cc", line 804
-    TVMError: 
-    ---------------------------------------------------------------
-    An error occurred during the execution of TVM.
-    For more information, please see: https://tvm.apache.org/docs/errors.html
-    ---------------------------------------------------------------
-      Check failed: (code == RPCCode::kReturn) is false: code=kShutdown
-
-    During handling of the above exception, another exception occurred:
-
-    Traceback (most recent call last):
-      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 706, in run_through_rpc
-        costs = time_f(*args).results
-      File "/usr/lib/python3.7/contextlib.py", line 130, in __exit__
-        self.gen.throw(type, value, traceback)
-      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 746, in __call__
-        remote.remove(build_result.filename)
-      File "/workspace/python/tvm/rpc/client.py", line 144, in remove
-        self._remote_funcs["remove"] = self.get_function("tvm.rpc.server.remove")
-      File "/workspace/python/tvm/rpc/client.py", line 72, in get_function
-        return self._sess.get_function(name)
-      File "/workspace/python/tvm/runtime/module.py", line 171, in get_function
-        self.handle, c_str(name), ctypes.c_int(query_imports), ctypes.byref(ret_handle)
-      File "/workspace/python/tvm/_ffi/base.py", line 348, in check_call
-        raise get_last_ffi_error()
-    tvm._ffi.base.TVMError: Traceback (most recent call last):
-      52: 0xffffffffffffffff
-      51: _start
-      50: __libc_start_main
-      49: _Py_UnixMain
-      48: 0x0000000000650da0
-      47: 0x0000000000650afa
-      46: _PyFunction_FastCallDict
-      45: _PyEval_EvalCodeWithName
-      44: _PyEval_EvalFrameDefault
-      43: _PyFunction_FastCallKeywords
-      42: _PyEval_EvalCodeWithName
-      41: _PyEval_EvalFrameDefault
-      40: _PyMethodDef_RawFastCallKeywords
-      39: 0x0000000000546369
-      38: _PyEval_EvalCodeWithName
-      37: _PyEval_EvalFrameDefault
-      36: _PyFunction_FastCallKeywords
-      35: _PyEval_EvalCodeWithName
-      34: _PyEval_EvalFrameDefault
-      33: _PyFunction_FastCallDict
-      32: _PyEval_EvalCodeWithName
-      31: _PyEval_EvalFrameDefault
-      30: _PyObject_FastCallDict
-      29: 0x00000000004c06e1
-      28: _PyFunction_FastCallDict
-      27: _PyEval_EvalFrameDefault
-      26: _PyMethodDescr_FastCallKeywords
-      25: 0x00000000005dcb58
-      24: 0x00000000005dc83f
-      23: 0x00000000004ba127
-      22: _PyEval_EvalFrameDefault
-      21: _PyFunction_FastCallKeywords
-      20: _PyEval_EvalFrameDefault
-      19: _PyFunction_FastCallKeywords
-      18: _PyEval_EvalFrameDefault
-      17: _PyFunction_FastCallKeywords
-      16: _PyEval_EvalCodeWithName
-      15: _PyEval_EvalFrameDefault
-      14: 0x0000000000537c30
-      13: _PyObject_FastCallKeywords
-      12: 0x00007fd1dce0efa2
-      11: _ctypes_callproc
-      10: ffi_call
-      9: ffi_call_unix64
-      8: TVMModGetFunction
-            at ../src/runtime/c_runtime_api.cc:408
-      7: tvm::runtime::ModuleNode::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool)
-            at ../src/runtime/module.cc:66
-      6: tvm::runtime::RPCModuleNode::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)
-            at ../src/runtime/rpc/rpc_module.cc:185
-      5: tvm::runtime::RPCClientSession::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
-            at ../src/runtime/rpc/rpc_endpoint.cc:1007
-      4: tvm::runtime::TVMRetValue tvm::runtime::RPCEndpoint::SysCallRemote<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>(tvm::runtime::RPCCode, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
-            at ../src/runtime/rpc/rpc_endpoint.h:223
-      3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>(int&&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
+      22: Call
+            at ../include/tvm/runtime/packed_func.h:1213
+      21: operator()
+            at ../include/tvm/runtime/packed_func.h:1730
+      20: unpack_call<tvm::IRModule, 5, tvm::<lambda(tvm::te::Schedule, const tvm::runtime::Array<tvm::runtime::ObjectRef>&, const tvm::runtime::String&, const tvm::runtime::Map<tvm::te::Tensor, tvm::tir::Buffer>&, bool)> >
+            at ../include/tvm/runtime/packed_func.h:1670
+      19: run<>
+            at ../include/tvm/runtime/packed_func.h:1630
+      18: run<tvm::runtime::TVMMovableArgValueWithContext_>
+            at ../include/tvm/runtime/packed_func.h:1630
+      17: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
+            at ../include/tvm/runtime/packed_func.h:1630
+      16: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
+            at ../include/tvm/runtime/packed_func.h:1630
+      15: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
+            at ../include/tvm/runtime/packed_func.h:1630
+      14: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
+            at ../include/tvm/runtime/packed_func.h:1645
+      13: operator()
+            at ../src/driver/driver_api.cc:395
+      12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array<tvm::runtime::ObjectRef, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<tvm::te::Tensor, tvm::tir::Buffer, std::hash<tvm::te::Tensor>, std::equal_to<tvm::te::Tensor>, std::allocator<std::pair<tvm::te::Tensor const, tvm::tir::Buffer> > > const&, tvm::GlobalVarSupply, bool)
+            at ../src/driver/driver_api.cc:381
+      11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array<tvm::transform::Pass, void>)
+            at ../src/driver/driver_api.cc:276
+      10: tvm::transform::Pass::operator()(tvm::IRModule) const
+            at ../src/ir/transform.cc:258
+      9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
+            at ../src/ir/transform.cc:274
+      8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
+            at ../src/ir/transform.cc:454
+      7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
+            at ../src/ir/transform.cc:274
+      6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
+            at ../src/tir/ir/transform.cc:100
+      5: tvm::runtime::TypedPackedFunc<tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)>::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
+            at ../include/tvm/runtime/packed_func.h:1749
+      4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher<tvm::tir::PrimFunc>::run<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::runtime::PackedFunc const&, tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&)
+            at ../include/tvm/runtime/packed_func.h:1693
+      3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&) const
             at ../include/tvm/runtime/packed_func.h:1617
       2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
             at ../include/tvm/runtime/packed_func.h:1217
       1: Call
             at ../include/tvm/runtime/packed_func.h:1213
       0: operator()
-            at ../src/runtime/rpc/rpc_endpoint.cc:684
-      File "../src/runtime/rpc/rpc_endpoint.cc", line 684
-    TVMError: 
-    ---------------------------------------------------------------
-    An error occurred during the execution of TVM.
-    For more information, please see: https://tvm.apache.org/docs/errors.html
-    ---------------------------------------------------------------
-      Check failed: (code == RPCCode::kReturn) is false: code=1
+            at ../src/runtime/c_runtime_api.cc:534
+      File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
+      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
+        raise InstantiationError("Skipped because of invalid gpu kernel")
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel
 
     Traceback (most recent call last):
-      52: 0xffffffffffffffff
-      51: _start
-      50: __libc_start_main
-      49: _Py_UnixMain
-      48: 0x0000000000650da0
-      47: 0x0000000000650afa
-      46: _PyFunction_FastCallDict
-      45: _PyEval_EvalCodeWithName
-      44: _PyEval_EvalFrameDefault
-      43: _PyFunction_FastCallKeywords
-      42: _PyEval_EvalCodeWithName
-      41: _PyEval_EvalFrameDefault
-      40: _PyMethodDef_RawFastCallKeywords
-      39: 0x0000000000546369
-      38: _PyEval_EvalCodeWithName
-      37: _PyEval_EvalFrameDefault
-      36: _PyFunction_FastCallKeywords
-      35: _PyEval_EvalCodeWithName
-      34: _PyEval_EvalFrameDefault
-      33: _PyFunction_FastCallDict
-      32: _PyEval_EvalCodeWithName
-      31: _PyEval_EvalFrameDefault
-      30: _PyObject_FastCallDict
-      29: 0x00000000004c06e1
-      28: _PyFunction_FastCallDict
-      27: _PyEval_EvalFrameDefault
-      26: _PyMethodDescr_FastCallKeywords
-      25: 0x00000000005dcb58
-      24: 0x00000000005dc83f
-      23: 0x00000000004ba127
-      22: _PyEval_EvalFrameDefault
-      21: _PyFunction_FastCallKeywords
-      20: _PyEval_EvalFrameDefault
-      19: _PyFunction_FastCall      [('tile_f', [-1, 64, 2, 4]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 4, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2333214
-    No: 9   GFLOPS: 0.00/49.72      result: Traceback (most recent call last):
+      24: TVMFuncCall
+            at ../src/runtime/c_runtime_api.cc:477
+      23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+            at ../include/tvm/runtime/packed_func.h:1217
+      22: Call
+            at ../include/tvm/runtime/packed_func.h:1213
+      21: operator()
+            at ../include/tvm/runtime/packed_func.h:1730
+      20: unpack_call<tvm::IRModule, 5, tvm::<lambda(tvm::te::Schedule, const tvm::runtime::Array<tvm::runtime::ObjectRef>&, const tvm::runtime::String&, const tvm::runtime::Map<tvm::te::Tensor, tvm::tir::Buffer>&, bool)> >
+            at ../include/tvm/runtime/packed_func.h:1670
+      19: run<>
+            at ../include/tvm/runtime/packed_func.h:1630
+      18: run<tvm::runtime::TVMMovableArgValueWithContext_>
+            at ../include/tvm/runtime/packed_func.h:1630
+      17: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
+            at ../include/tvm/runtime/packed_func.h:1630
+      16: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
+            at ../include/tvm/runtime/packed_func.h:1630
+      15: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
+            at ../include/tvm/runtime/packed_func.h:1630
+      14: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
+            at ../include/tvm/runtime/packed_func.h:1645
+      13: operator()
+            at ../src/driver/driver_api.cc:395
+      12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array<tvm::runtime::ObjectRef, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<tvm::te::Tensor, tvm::tir::Buffer, std::hash<tvm::te::Tensor>, std::equal_to<tvm::te::Tensor>, std::allocator<std::pair<tvm::te::Tensor const, tvm::tir::Buffer> > > const&, tvm::GlobalVarSupply, bool)
+            at ../src/driver/driver_api.cc:381
+      11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array<tvm::transform::Pass, void>)
+            at ../src/driver/driver_api.cc:276
+      10: tvm::transform::Pass::operator()(tvm::IRModule) const
+            at ../src/ir/transform.cc:258
+      9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
+            at ../src/ir/transform.cc:274
+      8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
+            at ../src/ir/transform.cc:454
+      7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
+            at ../src/ir/transform.cc:274
+      6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
+            at ../src/tir/ir/transform.cc:100
+      5: tvm::runtime::TypedPackedFunc<tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)>::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
+            at ../include/tvm/runtime/packed_func.h:1749
+      4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher<tvm::tir::PrimFunc>::run<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::runtime::PackedFunc const&, tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&)
+            at ../include/tvm/runtime/packed_func.h:1693
+      3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&) const
+            at ../include/tvm/runtime/packed_func.h:1617
+      2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+            at ../include/tvm/runtime/packed_func.h:1217
+      1: Call
+            at ../include/tvm/runtime/packed_func.h:1213
+      0: operator()
+            at ../src/runtime/c_runtime_api.cc:534
+      File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
+      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
+        raise InstantiationError("Skipped because of invalid gpu kernel")
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 8, 8, 4]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 4, 64]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7136924
+    No: 10  GFLOPS: 0.00/27.41      result: Traceback (most recent call last):
+      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 142, in build
+        res = future.result()
+      File "/usr/lib/python3.7/concurrent/futures/_base.py", line 435, in result
+        return self.__get_result()
+      File "/usr/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
+        raise self._exception
+      File "/usr/lib/python3.7/concurrent/futures/thread.py", line 57, in run
+        result = self.fn(*self.args, **self.kwargs)
+      File "/workspace/python/tvm/contrib/popen_pool.py", line 432, in <lambda>
+        worker = lambda *args: self._worker_run(*args)
+      File "/workspace/python/tvm/contrib/popen_pool.py", line 401, in _worker_run
+        return proc.recv()
+      File "/workspace/python/tvm/contrib/popen_pool.py", line 309, in recv
+        raise TimeoutError()
+    TimeoutError
+
+            [('tile_f', [-1, 256, 1, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 4, 2]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7013388
+    No: 11  GFLOPS: 189.68/189.68   result: MeasureResult(costs=(0.0012204934777777779,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.4920103549957275, timestamp=1673995957.8767965)      [('tile_f', [-1, 1, 16, 4]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 4, 8]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2233566
+    No: 12  GFLOPS: 0.00/189.68     result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -1160,8 +1150,8 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 8, 2, 16]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 256, 2]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2196213
-    No: 10  GFLOPS: 0.00/49.72      result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 16, 2, 4]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 1, 128]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,3077472
+    No: 13  GFLOPS: 0.00/189.68     result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -1283,9 +1273,8 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 1, 128]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 64, 2]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7607810
-    No: 11  GFLOPS: 3.85/49.72      result: MeasureResult(costs=(0.060141681499999995,), error_no=MeasureErrorNo.NO_ERROR, all_cost=5.4033591747283936, timestamp=1673986448.3987403)       [('tile_f', [-1, 2, 2, 16]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 4, 4]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7240151
-    No: 12  GFLOPS: 0.00/49.72      result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 32, 4, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 64, 2]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7995484
+    No: 14  GFLOPS: 0.00/189.68     result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -1407,10 +1396,8 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 128, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 1, 2]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9908849
-    No: 13  GFLOPS: 2.32/49.72      result: MeasureResult(costs=(0.09972912575,), error_no=MeasureErrorNo.NO_ERROR, all_cost=11.010488986968994, timestamp=1673986459.577202)       [('tile_f', [-1, 8, 1, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9390758
-    No: 14  GFLOPS: 63.21/63.21     result: MeasureResult(costs=(0.003662555392857143,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.0892367362976074, timestamp=1673986460.3195736)       [('tile_f', [-1, 16, 4, 2]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 16, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,1759196
-    No: 15  GFLOPS: 0.00/63.21      result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 128, 2]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 4]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,69617
+    No: 15  GFLOPS: 0.00/189.68     result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -1532,8 +1519,9 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 16, 16]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 1, 256]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,8897642
-    No: 16  GFLOPS: 0.00/63.21      result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 2, 1, 8]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 1, 512]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9678157
+    No: 16  GFLOPS: 96.04/189.68    result: MeasureResult(costs=(0.002410495928571429,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.7908122539520264, timestamp=1673995960.8565032)       [('tile_f', [-1, 1, 32, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 2, 4]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6075340
+    No: 17  GFLOPS: 0.00/189.68     result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -1655,9 +1643,8 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 512, 1]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 4, 2]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9338174
-    No: 17  GFLOPS: 148.02/148.02   result: MeasureResult(costs=(0.001563994515625,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.1416263580322266, timestamp=1673986461.6723034)  [('tile_f', [-1, 2, 32, 4]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 1, 1]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4259991
-    No: 18  GFLOPS: 0.00/148.02     result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 4, 2, 64]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 16, 8]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2822366
+    No: 18  GFLOPS: 0.00/189.68     result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -1779,8 +1766,8 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 64, 4, 2]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 1, 32]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,10403438
-    No: 19  GFLOPS: 0.00/148.02     result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 4, 32, 2]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 256, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,1773072
+    No: 19  GFLOPS: 0.00/189.68     result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -1902,8 +1889,8 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 2, 8, 4]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 32, 8]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9601142
-    No: 20  GFLOPS: 0.00/148.02     result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 2, 4, 64]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 2, 128]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,8889308
+    No: 20  GFLOPS: 0.00/189.68     result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -2025,7 +2012,7 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 64, 8]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 4, 32]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6538563
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 2, 8]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 2, 128]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,3082123
 
 
 
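The "Skipped because of invalid gpu kernel" records above are produced before anything runs on the device: autotvm lowers each candidate schedule and rejects those whose resource usage exceeds the GPU's limits. A minimal sketch of that check, assuming a lowered PrimFunc ``prim_func`` and illustrative limits (the real limits are queried from the target device):

.. code-block:: python

    # Hedged sketch of the verification behind the skipped records above.
    # `prim_func` and the numeric limits are assumptions for illustration.
    from tvm.tir.analysis import verify_gpu_code

    valid = verify_gpu_code(
        prim_func,
        {
            "max_shared_memory_per_block": 49152,
            "max_threads_per_block": 1024,
        },
    )
    if not valid:
        raise RuntimeError("Skipped because of invalid gpu kernel")
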
@@ -2080,9 +2067,9 @@ and measure running time.
     Finish loading 20 records
 
     Best config:
-    [('tile_f', [-1, 2, 32, 4]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 1, 1]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4259991
+    [('tile_f', [-1, 1, 16, 4]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 4, 8]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2233566
     Finish loading 20 records
-    Time cost of this operator: 0.001954
+    Time cost of this operator: 0.001633
 
 
 
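Once the 20 records above are collected, picking the winner is a scan over the measured costs. A self-contained sketch that reproduces the "Best config" line, assuming the records were written to a log file named ``conv2d.log``:

.. code-block:: python

    # Minimal sketch, assuming an autotvm log "conv2d.log" holding the
    # records summarized above. Records with error_no != 0 (timeouts,
    # invalid kernels) are filtered out before taking the cheapest one.
    from tvm import autotvm

    records = list(autotvm.record.load_from_file("conv2d.log"))
    ok = [(inp, res) for inp, res in records if res.error_no == 0]
    best_inp, best_res = min(ok, key=lambda p: sum(p[1].costs) / len(p[1].costs))
    print("Best config:", best_inp.config)
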
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
index e84f3282f6..8b4d45792e 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
@@ -363,10 +363,10 @@ Timing the untuned program
     ########## Build without Autotuning ##########
     Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)  
     ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------  
-    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  309.6     98.712   (1, 2, 10, 10, 3)  2       1        [309.6]           
-    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       3.078     0.981    (1, 6, 10, 10)     1       1        [3.078]           
-    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.961     0.307    (1, 1, 10, 10, 3)  1       1        [0.961]           
-    Total_time                                    -                                             313.639   -        -                  -       -        -                 
+    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  310.7     98.738   (1, 2, 10, 10, 3)  2       1        [310.7]           
+    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       3.019     0.959    (1, 6, 10, 10)     1       1        [3.019]           
+    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.952     0.303    (1, 1, 10, 10, 3)  1       1        [0.952]           
+    Total_time                                    -                                             314.671   -        -                  -       -        -                 
 
 
 
@@ -431,10 +431,10 @@ Timing the tuned program
     ########## Build with Autotuning ##########
     Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)  
     ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------  
-    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  135.9     97.966   (1, 6, 10, 10, 1)  2       1        [135.9]           
-    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.853     1.336    (1, 6, 10, 10)     1       1        [1.853]           
-    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.968     0.698    (1, 1, 10, 10, 3)  1       1        [0.968]           
-    Total_time                                    -                                             138.721   -        -                  -       -        -                 
+    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  102.9     97.425   (1, 6, 10, 10, 1)  2       1        [102.9]           
+    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.766     1.672    (1, 6, 10, 10)     1       1        [1.766]           
+    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.953     0.903    (1, 1, 10, 10, 3)  1       1        [0.953]           
+    Total_time                                    -                                             105.62    -        -                  -       -        -                 
 
 
 
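The Time(%) column in both tables is each operator's share of Total_time, which can be checked directly from the numbers above:

.. code-block:: python

    # Self-contained check of the "Build without Autotuning" table:
    # Time(%) = 100 * Time(us) / Total_time.
    times_us = {
        "fused_nn_contrib_conv2d_NCHWc": 310.7,
        "fused_layout_transform_1": 3.019,
        "fused_layout_transform": 0.952,
    }
    total = sum(times_us.values())  # 314.671, matching Total_time
    for name, t in times_us.items():
        print(f"{name}: {100 * t / total:.3f}%")  # 98.738 / 0.959 / 0.303
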
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
index a74ec742f2..515cfc5b54 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
@@ -117,7 +117,7 @@ download a cat image and preprocess it to use as the model input.
     /venv/apache-tvm-py3.7/lib/python3.7/site-packages/torch/ao/quantization/utils.py:281: UserWarning: must run observer before calling calculate_qparams. Returning default values.
       "must run observer before calling calculate_qparams. " +
     Downloading: "https://download.pytorch.org/models/quantized/mobilenet_v2_qnnpack_37f702c5.pth" to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2_qnnpack_37f702c5.pth
-
      0%|          | 0.00/3.42M [00:00<?, ?B/s]
    100%|##########| 3.42M/3.42M [00:00<00:00, 83.4MB/s]
+
      0%|          | 0.00/3.42M [00:00<?, ?B/s]
     61%|######    | 2.09M/3.42M [00:00<00:00, 20.0MB/s]
    100%|##########| 3.42M/3.42M [00:00<00:00, 31.2MB/s]
     /workspace/python/tvm/relay/frontend/pytorch_utils.py:47: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
       return LooseVersion(torch_ver) > ver
     /venv/apache-tvm-py3.7/lib/python3.7/site-packages/setuptools/_distutils/version.py:346: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
@@ -322,7 +322,7 @@ Look up prediction top 1 index in 1000 class synset.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  10.184 seconds)
+   **Total running time of the script:** ( 1 minutes  8.976 seconds)
 
 
 .. _sphx_glr_download_how_to_work_with_microtvm_micro_pytorch.py:
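The checkpoint downloaded above is torchvision's QNNPACK-quantized MobileNetV2. A sketch of how such a model is typically obtained and traced for a compiler frontend (the tutorial's exact code may differ):

.. code-block:: python

    # Hedged sketch: load the pretrained quantized MobileNetV2 (the
    # mobilenet_v2_qnnpack checkpoint above) and TorchScript-trace it.
    import torch
    import torchvision

    model = torchvision.models.quantization.mobilenet_v2(pretrained=True, quantize=True)
    model.eval()
    example = torch.rand(1, 3, 224, 224)
    scripted = torch.jit.trace(model, example).eval()
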
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
index 53a03d3ef3..6c8830a0b7 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
@@ -218,7 +218,7 @@ take about **2 minutes** to download the Stanford Cars, while COCO 2017 validati
  .. code-block:: none
 
 
-    '/tmp/tmpvq_95_9v/images/random'
+    '/tmp/tmppqdyvab_/images/random'
 
 
 
@@ -309,7 +309,7 @@ objects to other stuff? We can display some examples from our datasets using ``m
 
 
 .. image-sg:: /how_to/work_with_microtvm/images/sphx_glr_micro_train_001.png
-   :alt: [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 0.0]
+   :alt: [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0]
    :srcset: /how_to/work_with_microtvm/images/sphx_glr_micro_train_001.png
    :class: sphx-glr-single-img
 
@@ -318,8 +318,8 @@ objects to other stuff? We can display some examples from our datasets using ``m
 
  .. code-block:: none
 
-    /tmp/tmpvq_95_9v/images/target contains 8144 images
-    /tmp/tmpvq_95_9v/images/random contains 5000 images
+    /tmp/tmppqdyvab_/images/target contains 8144 images
+    /tmp/tmppqdyvab_/images/random contains 5000 images
 
 
 
@@ -494,13 +494,13 @@ the time on our validation set).
  .. code-block:: none
 
     Epoch 1/3
-    328/328 - 48s - loss: 0.2171 - accuracy: 0.9241 - val_loss: 0.1121 - val_accuracy: 0.9607 - 48s/epoch - 145ms/step
+    328/328 - 47s - loss: 0.2152 - accuracy: 0.9245 - val_loss: 0.2124 - val_accuracy: 0.9177 - 47s/epoch - 143ms/step
     Epoch 2/3
-    328/328 - 44s - loss: 0.0964 - accuracy: 0.9655 - val_loss: 0.1093 - val_accuracy: 0.9664 - 44s/epoch - 134ms/step
+    328/328 - 43s - loss: 0.0930 - accuracy: 0.9663 - val_loss: 0.1170 - val_accuracy: 0.9566 - 43s/epoch - 132ms/step
     Epoch 3/3
-    328/328 - 44s - loss: 0.0665 - accuracy: 0.9755 - val_loss: 0.1519 - val_accuracy: 0.9562 - 44s/epoch - 134ms/step
+    328/328 - 43s - loss: 0.0655 - accuracy: 0.9754 - val_loss: 0.1277 - val_accuracy: 0.9547 - 43s/epoch - 133ms/step
 
-    <keras.callbacks.History object at 0x7fc25315b4d0>
+    <keras.callbacks.History object at 0x7ff441784690>
 
 
 
@@ -857,7 +857,7 @@ Arduino tutorial for how to do that `on GitHub <https://github.com/guberti/tvm-a
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 4 minutes  47.189 seconds)
+   **Total running time of the script:** ( 5 minutes  16.733 seconds)
 
 
 .. _sphx_glr_download_how_to_work_with_microtvm_micro_train.py:
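The per-epoch lines above are standard Keras ``model.fit`` output with ``verbose=2``. A self-contained toy reproduction of that format (synthetic data and model, not the tutorial's MobileNet transfer-learning setup):

.. code-block:: python

    # Toy stand-in: shows where the loss/accuracy/val_loss/val_accuracy
    # lines and the History object come from. Data and model are synthetic.
    import numpy as np
    import tensorflow as tf

    x = np.random.rand(128, 8).astype("float32")
    y = np.random.randint(0, 2, size=(128,))
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(x, y, validation_split=0.2, epochs=3, verbose=2)
    print(history)  # <keras.callbacks.History object at 0x...>
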
diff --git a/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
index afe4c15f08..e7c70da34d 100644
--- a/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
@@ -5,22 +5,22 @@
 
 Computation times
 =================
-**07:03.423** total execution time for **how_to_work_with_microtvm** files:
+**07:31.018** total execution time for **how_to_work_with_microtvm** files:
 
 +---------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_train.py` (``micro_train.py``)               | 04:47.189 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_train.py` (``micro_train.py``)               | 05:16.733 | 0.0 MB |
 +---------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_pytorch.py` (``micro_pytorch.py``)           | 01:10.184 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_pytorch.py` (``micro_pytorch.py``)           | 01:08.976 | 0.0 MB |
 +---------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_autotune.py` (``micro_autotune.py``)         | 00:53.168 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_autotune.py` (``micro_autotune.py``)         | 00:52.269 | 0.0 MB |
 +---------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_aot.py` (``micro_aot.py``)                   | 00:08.906 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_aot.py` (``micro_aot.py``)                   | 00:09.140 | 0.0 MB |
 +---------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_tflite.py` (``micro_tflite.py``)             | 00:03.976 | 0.0 MB |
-+---------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_ethosu.py` (``micro_ethosu.py``)             | 00:00.000 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_tflite.py` (``micro_tflite.py``)             | 00:03.900 | 0.0 MB |
 +---------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_work_with_microtvm_micro_tvmc.py` (``micro_tvmc.py``)                 | 00:00.000 | 0.0 MB |
 +---------------------------------------------------------------------------------------------+-----------+--------+
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_ethosu.py` (``micro_ethosu.py``)             | 00:00.000 | 0.0 MB |
++---------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_work_with_microtvm_micro_reference_vm.py` (``micro_reference_vm.py``) | 00:00.000 | 0.0 MB |
 +---------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
index 6ba7fe2c15..d4937c13f7 100644
--- a/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
@@ -5,14 +5,14 @@
 
 Computation times
 =================
-**00:46.014** total execution time for **how_to_work_with_relay** files:
+**00:45.127** total execution time for **how_to_work_with_relay** files:
 
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_using_pipeline_executor.py` (``using_pipeline_executor.py``) | 00:33.664 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_using_pipeline_executor.py` (``using_pipeline_executor.py``) | 00:32.834 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_using_external_lib.py` (``using_external_lib.py``)           | 00:10.785 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_using_external_lib.py` (``using_external_lib.py``)           | 00:10.520 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_build_gcn.py` (``build_gcn.py``)                             | 00:01.560 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_build_gcn.py` (``build_gcn.py``)                             | 00:01.767 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_work_with_relay_using_relay_viz.py` (``using_relay_viz.py``)                 | 00:00.006 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt b/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
index 9f72e295f2..8a6356c3f6 100644
--- a/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
@@ -264,7 +264,7 @@ The following example customizes CUDA lowering rule for :code:`exp`.
  .. code-block:: none
 
 
-    <function my_cuda_math_rule at 0x7fc03de62680>
+    <function my_cuda_math_rule at 0x7ff4419b33b0>
 
 
 
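The ``my_cuda_math_rule`` shown above is the tutorial's custom lowering rule for ``exp``. The general registration pattern looks like this; the rule body below is a sketch of the usual dtype dispatch, not a verbatim copy:

.. code-block:: python

    # Sketch of a custom CUDA lowering rule: dispatch exp by dtype to the
    # matching C math function (expf for float32).
    import tvm
    from tvm import tir

    def my_cuda_math_rule(op):
        assert isinstance(op, tir.Call)
        name = op.op.name[4:]  # drop the "tir." prefix, e.g. "exp"
        if op.dtype == "float32":
            return tir.call_pure_extern("float32", "%sf" % name, op.args[0])
        return op  # leave other dtypes to the default lowering

    tvm.target.register_intrin_rule("cuda", "exp", my_cuda_math_rule, override=True)
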
diff --git a/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
index 7a8ff36a25..f0aaded029 100644
--- a/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
@@ -5,20 +5,20 @@
 
 Computation times
 =================
-**00:07.829** total execution time for **how_to_work_with_schedules** files:
+**00:06.213** total execution time for **how_to_work_with_schedules** files:
 
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_intrin_math.py` (``intrin_math.py``)                 | 00:05.217 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_intrin_math.py` (``intrin_math.py``)                 | 00:03.706 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tensorize.py` (``tensorize.py``)                     | 00:01.220 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tensorize.py` (``tensorize.py``)                     | 00:01.141 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_reduction.py` (``reduction.py``)                     | 00:00.594 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_reduction.py` (``reduction.py``)                     | 00:00.586 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_scan.py` (``scan.py``)                               | 00:00.572 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_scan.py` (``scan.py``)                               | 00:00.561 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_extern_op.py` (``extern_op.py``)                     | 00:00.119 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_extern_op.py` (``extern_op.py``)                     | 00:00.115 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_schedule_primitives.py` (``schedule_primitives.py``) | 00:00.052 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_schedule_primitives.py` (``schedule_primitives.py``) | 00:00.049 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_work_with_schedules_tedd.py` (``tedd.py``)                               | 00:00.032 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/work_with_schedules/tensorize.rst.txt b/docs/_sources/how_to/work_with_schedules/tensorize.rst.txt
index f0b556ac12..79b5641b5b 100644
--- a/docs/_sources/how_to/work_with_schedules/tensorize.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/tensorize.rst.txt
@@ -347,7 +347,7 @@ The importing needs to happen before the tensorized GEMV being executed.
                  B: Buffer(B_2: Pointer(float32), float32, [512, 64], []),
                  C: Buffer(C_2: Pointer(float32), float32, [1024, 512], [])}
       buffer_map = {A_1: A, B_1: B, C_1: C} {
-      attr [IterVar(i: int32, (nullptr), "DataPar", "")] "pragma_import_llvm" = "; ModuleID = '/tmp/tmpu8koxk36/input0.cc'\nsource_filename = \"/tmp/tmpu8koxk36/input0.cc\"\ntarget datalayout = \"e-m:e-i64:64-f80:128-n8:16:32:64-S128\"\ntarget triple = \"x86_64-pc-linux-gnu\"\n\n; Function Attrs: noinline nounwind optnone uwtable\ndefine dso_local i32 @gemv_update(float*, float*, float*, i32, i32, i32) #0 {\n  %7 = alloca float*, align 8\n  %8 = alloca float*, align 8\n  %9 = alloca floa [...]
+      attr [IterVar(i: int32, (nullptr), "DataPar", "")] "pragma_import_llvm" = "; ModuleID = '/tmp/tmpl9766ior/input0.cc'\nsource_filename = \"/tmp/tmpl9766ior/input0.cc\"\ntarget datalayout = \"e-m:e-i64:64-f80:128-n8:16:32:64-S128\"\ntarget triple = \"x86_64-pc-linux-gnu\"\n\n; Function Attrs: noinline nounwind optnone uwtable\ndefine dso_local i32 @gemv_update(float*, float*, float*, i32, i32, i32) #0 {\n  %7 = alloca float*, align 8\n  %8 = alloca float*, align 8\n  %9 = alloca floa [...]
       for (i, 0, 1024) {
         for (j.outer: int32, 0, 32) {
           @tir.call_extern("gemv_update", @tir.tvm_access_ptr(@tir.type_annotation(, dtype=float32), C_2, ((i*512) + (j.outer*16)), 16, 2, dtype=handle), @tir.tvm_access_ptr(@tir.type_annotation(, dtype=float32), A_2, (i*64), 64, 1, dtype=handle), @tir.tvm_access_ptr(@tir.type_annotation(, dtype=float32), B_2, (j.outer*1024), 1024, 1, dtype=handle), 16, 64, 64, dtype=int32)
diff --git a/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
index 1c6d1bf4b7..40e8f6700f 100644
--- a/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:30.622** total execution time for **topic_vta_tutorials_autotvm** files:
+**00:29.740** total execution time for **topic_vta_tutorials_autotvm** files:
 
 +---------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``) | 00:30.615 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``) | 00:29.734 | 0.0 MB |
 +---------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_alu_vta.py` (``tune_alu_vta.py``)     | 00:00.007 | 0.0 MB |
 +---------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
index 8d968cd5eb..8de365c6b0 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
@@ -293,7 +293,7 @@ The compilation steps are:
       DeprecationWarning,
     /workspace/vta/tutorials/frontend/deploy_classification.py:213: DeprecationWarning: legacy graph executor behavior of producing json / lib / params will be removed in the next release. Please see documents of tvm.contrib.graph_executor.GraphModule for the  new recommended usage.
       relay_prog, target=tvm.target.Target(target, host=env.target_host), params=params
-    resnet18_v1 inference graph built in 32.94s!
+    resnet18_v1 inference graph built in 32.07s!
 
 
 
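The "inference graph built in ...s" line above is wall-clock time around the Relay build. A sketch assuming ``relay_prog``, ``params``, and ``target`` are in scope as in the tutorial:

.. code-block:: python

    # Hedged sketch: relay_prog/params/target are the tutorial's variables,
    # assumed in scope here.
    import time
    import tvm
    from tvm import relay

    start = time.time()
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(relay_prog, target=target, params=params)
    print("resnet18_v1 inference graph built in {0:.2f}s!".format(time.time() - start))
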
diff --git a/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
index 5f4ca67e42..4a1746ad4c 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
@@ -337,7 +337,7 @@ The compilation steps are:
 
     /workspace/python/tvm/relay/build_module.py:348: DeprecationWarning: Please use input parameter mod (tvm.IRModule) instead of deprecated parameter mod (tvm.relay.function.Function)
       DeprecationWarning,
-    yolov3-tiny inference graph built in 22.43s!
+    yolov3-tiny inference graph built in 21.69s!
 
 
 
diff --git a/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
index d4735de598..795d131f0b 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**01:39.276** total execution time for **topic_vta_tutorials_frontend** files:
+**01:37.533** total execution time for **topic_vta_tutorials_frontend** files:
 
 +------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``) | 00:49.885 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``) | 00:48.897 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_detection.py` (``deploy_detection.py``)           | 00:49.390 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_detection.py` (``deploy_detection.py``)           | 00:48.636 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
index 60d208fa47..8e3966a5ff 100644
--- a/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:03.159** total execution time for **topic_vta_tutorials_optimize** files:
+**00:03.128** total execution time for **topic_vta_tutorials_optimize** files:
 
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)         | 00:02.676 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)         | 00:02.673 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``) | 00:00.483 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``) | 00:00.455 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
index 6b23a8f5ec..e97311e08a 100644
--- a/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:00.868** total execution time for **topic_vta_tutorials** files:
+**00:00.830** total execution time for **topic_vta_tutorials** files:
 
 +---------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``) | 00:00.466 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``) | 00:00.447 | 0.0 MB |
 +---------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_vta_get_started.py` (``vta_get_started.py``) | 00:00.402 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_vta_get_started.py` (``vta_get_started.py``) | 00:00.383 | 0.0 MB |
 +---------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt b/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
index 9f9cb2a2fb..c55e9f797b 100644
--- a/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
+++ b/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
@@ -329,7 +329,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 94.114 ms
+    Execution time of this operator: 93.924 ms
 
 
 
@@ -447,7 +447,7 @@ operations.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  10.690 seconds)
+   **Total running time of the script:** ( 1 minutes  29.634 seconds)
 
 
 .. _sphx_glr_download_tutorial_auto_scheduler_matmul_x86.py:
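The "Execution time of this operator" figure above comes from TVM's repeated-run timer. A sketch assuming a built module ``func``, a device ``dev``, and argument arrays ``a``, ``b``, ``c`` from the tutorial's build step:

.. code-block:: python

    # Hedged sketch: func/dev/a/b/c are assumed from the preceding build.
    import numpy as np

    evaluator = func.time_evaluator(func.entry_name, dev, min_repeat_ms=500)
    costs = evaluator(a, b, c).results
    print("Execution time of this operator: %.3f ms" % (np.median(costs) * 1000))
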
diff --git a/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt b/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
index 00e9f9df64..a906559afe 100644
--- a/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
+++ b/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
@@ -454,16 +454,16 @@ reduce variance, we take 5 measurements and average them.
     waiting for device...
     device available
     Get devices for measurement successfully!
-    No: 1   GFLOPS: 9.47/9.47       result: MeasureResult(costs=(0.0283574308,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7383923530578613, timestamp=1673984925.7289474)       [('tile_y', [-1, 8]), ('tile_x', [-1, 32])],None,53
-    No: 2   GFLOPS: 8.80/9.47       result: MeasureResult(costs=(0.0305009988,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.9284043312072754, timestamp=1673984926.473074)        [('tile_y', [-1, 16]), ('tile_x', [-1, 64])],None,64
-    No: 3   GFLOPS: 11.57/11.57     result: MeasureResult(costs=(0.023199505199999998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6386911869049072, timestamp=1673984927.8991182)       [('tile_y', [-1, 32]), ('tile_x', [-1, 32])],None,55
-    No: 4   GFLOPS: 2.10/11.57      result: MeasureResult(costs=(0.1275317764,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.3018863201141357, timestamp=1673984930.9908652)       [('tile_y', [-1, 128]), ('tile_x', [-1, 4])],None,27
-    No: 5   GFLOPS: 10.32/11.57     result: MeasureResult(costs=(0.026002647,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7220709323883057, timestamp=1673984931.8265462)        [('tile_y', [-1, 8]), ('tile_x', [-1, 64])],None,63
-    No: 6   GFLOPS: 9.12/11.57      result: MeasureResult(costs=(0.0294320272,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7722988128662109, timestamp=1673984932.554135)        [('tile_y', [-1, 16]), ('tile_x', [-1, 32])],None,54
-    No: 7   GFLOPS: 10.18/11.57     result: MeasureResult(costs=(0.0263612358,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6615927219390869, timestamp=1673984934.0353885)       [('tile_y', [-1, 512]), ('tile_x', [-1, 512])],None,99
-    No: 8   GFLOPS: 9.87/11.57      result: MeasureResult(costs=(0.027185856400000003,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7084567546844482, timestamp=1673984934.724381)        [('tile_y', [-1, 4]), ('tile_x', [-1, 64])],None,62
-    No: 9   GFLOPS: 3.87/11.57      result: MeasureResult(costs=(0.0692972198,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.3387506008148193, timestamp=1673984936.176239)        [('tile_y', [-1, 32]), ('tile_x', [-1, 16])],None,45
-    No: 10  GFLOPS: 3.04/11.57      result: MeasureResult(costs=(0.088355355,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.6279051303863525, timestamp=1673984937.8458364)        [('tile_y', [-1, 256]), ('tile_x', [-1, 8])],None,38
+    No: 1   GFLOPS: 3.27/3.27       result: MeasureResult(costs=(0.0821269278,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.5673408508300781, timestamp=1673994419.8678765)       [('tile_y', [-1, 32]), ('tile_x', [-1, 8])],None,35
+    No: 2   GFLOPS: 3.86/3.86       result: MeasureResult(costs=(0.06947809320000001,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.3771319389343262, timestamp=1673994422.0002797)        [('tile_y', [-1, 32]), ('tile_x', [-1, 16])],None,45
+    No: 3   GFLOPS: 9.85/9.85       result: MeasureResult(costs=(0.027254431599999997,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6701366901397705, timestamp=1673994422.6914155)       [('tile_y', [-1, 1]), ('tile_x', [-1, 256])],None,80
+    No: 4   GFLOPS: 1.18/9.85       result: MeasureResult(costs=(0.2271749152,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.8723807334899902, timestamp=1673994427.3629255)       [('tile_y', [-1, 16]), ('tile_x', [-1, 1])],None,4
+    No: 5   GFLOPS: 1.77/9.85       result: MeasureResult(costs=(0.1514958012,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.6639575958251953, timestamp=1673994430.1423552)       [('tile_y', [-1, 16]), ('tile_x', [-1, 2])],None,14
+    No: 6   GFLOPS: 2.02/9.85       result: MeasureResult(costs=(0.13314590980000002,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.3672876358032227, timestamp=1673994432.5257761)        [('tile_y', [-1, 8]), ('tile_x', [-1, 2])],None,13
+    No: 7   GFLOPS: 1.25/9.85       result: MeasureResult(costs=(0.21483905599999997,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.659679412841797, timestamp=1673994436.9883332) [('tile_y', [-1, 1]), ('tile_x', [-1, 2])],None,10
+    No: 8   GFLOPS: 7.96/9.85       result: MeasureResult(costs=(0.033740898199999994,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7652373313903809, timestamp=1673994437.7808642)       [('tile_y', [-1, 1]), ('tile_x', [-1, 32])],None,50
+    No: 9   GFLOPS: 1.35/9.85       result: MeasureResult(costs=(0.1989227634,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.379838228225708, timestamp=1673994441.512279) [('tile_y', [-1, 1]), ('tile_x', [-1, 1])],None,0
+    No: 10  GFLOPS: 9.96/9.96       result: MeasureResult(costs=(0.026950177600000003,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.8734245300292969, timestamp=1673994442.2069173)       [('tile_y', [-1, 2]), ('tile_x', [-1, 128])],None,71
 
 
 
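Each record above encodes a point in a two-knob search space (``tile_y``, ``tile_x``). A compact template that defines such a space, essentially the tutorial's matmul; the task name is illustrative:

.. code-block:: python

    # Sketch of a tunable matmul template with the tile_y/tile_x knobs
    # seen in the records above.
    from tvm import te, autotvm

    @autotvm.template("tutorial/matmul")
    def matmul(N, L, M, dtype):
        A = te.placeholder((N, L), name="A", dtype=dtype)
        B = te.placeholder((L, M), name="B", dtype=dtype)
        k = te.reduce_axis((0, L), name="k")
        C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
        s = te.create_schedule(C.op)
        y, x = s[C].op.axis
        cfg = autotvm.get_config()
        cfg.define_split("tile_y", y, num_outputs=2)
        cfg.define_split("tile_x", x, num_outputs=2)
        yo, yi = cfg["tile_y"].apply(s, C, y)
        xo, xi = cfg["tile_x"].apply(s, C, x)
        s[C].reorder(yo, xo, k, yi, xi)
        return s, [A, B, C]
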
diff --git a/docs/_sources/tutorial/autotvm_relay_x86.rst.txt b/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
index 6c5daeca5f..7f43dc1cd6 100644
--- a/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
+++ b/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
@@ -311,7 +311,7 @@ standard deviation.
 
  .. code-block:: none
 
-    {'mean': 516.0710022699914, 'median': 516.3282544500362, 'std': 1.3528910766007418}
+    {'mean': 515.8998823199988, 'median': 515.666321499998, 'std': 1.6769409329523433}
 
 
 
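The mean/median/std dictionary above is produced with ``timeit`` over repeated graph-executor runs. A sketch assuming a built GraphModule named ``module``:

.. code-block:: python

    # Hedged sketch: `module` is assumed to be the tutorial's GraphModule.
    import timeit
    import numpy as np

    timing_number = 10
    timing_repeat = 10
    t = (
        np.array(timeit.Timer(lambda: module.run()).repeat(repeat=timing_repeat,
                                                           number=timing_number))
        * 1000 / timing_number  # ms per run
    )
    print({"mean": np.mean(t), "median": np.median(t), "std": np.std(t)})
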
@@ -545,30 +545,30 @@ the tuning data to.
 
  .. code-block:: none
 
-
    [Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  1/25]  Current/Best:   15.61/  15.61 GFLOPS | Progress: (4/20) | 9.47 s
    [Task  1/25]  Current/Best:   12.41/  22.05 GFLOPS | Progress: (8/20) | 12.53 s
    [Task  1/25]  Current/Best:   15.58/  22.05 GFLOPS | Progress: (12/20) | 14.93 s
    [Task  1/25]  Current/Best:   23.41/  23.41 GFLOPS | Progress: (16/20) | 17.95 s
    [Task  1/25]  Current/Best:   11.31/  23.41 GFLOPS | Progress: (20/20) | 20.61 s Done.
-
    [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  2/25]  Current/Best:   11.38/  19.14 GFLOPS | Progress: (4/20) | 3.42 s
    [Task  2/25]  Current/Best:   13.91/  19.54 GFLOPS | Progress: (8/20) | 5.54 s
    [Task  2/25]  Current/Best:    8.50/  19.54 GFLOPS | Progress: (12/20) | 7.96 s
    [Task  2/25]  Current/Best:   12.74/  19.54 GFLOPS | Progress: (16/20) | 9.47 s
    [Task  2/25]  Current/Best:   15.68/  19.54 GFLOPS | Progress: (20/20) | 11.35 s Done.
-
    [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  3/25]  Current/Best:   10.68/  16.15 GFLOPS | Progress: (4/20) | 4.37 s
    [Task  3/25]  Current/Best:   17.13/  22.38 GFLOPS | Progress: (8/20) | 6.48 s
    [Task  3/25]  Current/Best:   11.10/  22.38 GFLOPS | Progress: (12/20) | 9.15 s
    [Task  3/25]  Current/Best:   10.98/  22.38 GFLOPS | Progress: (16/20) | 12.32 s
    [Task  3/25]  Current/Best:    8.43/  22.38 GFLOPS | Progress: (20/20) | 14.63 s Done.
-
    [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  4/25]  Current/Best:   18.30/  18.30 GFLOPS | Progress: (4/20) | 8.13 s
    [Task  4/25]  Current/Best:    5.05/  18.30 GFLOPS | Progress: (8/20) | 13.41 s
    [Task  4/25]  Current/Best:    7.49/  18.30 GFLOPS | Progress: (12/20) | 19.01 s
    [Task  4/25]  Current/Best:    5.99/  18.30 GFLOPS | Progress: (16/20) | 22.02 s
    [Task  4/25]  Current/Best:   14.47/  18.30 GFLOPS | Progress: (20/20) | 23.88 s Done.
-
    [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  5/25]  Current/Best:    8.56/  15.51 GFLOPS | Progress: (4/20) | 3.86 s
    [Task  5/25]  Current/Best:    1.69/  15.72 GFLOPS | Progress: (8/20) | 6.37 s
    [Task  5/25]  Current/Best:    5.33/  20.66 GFLOPS | Progress: (12/20) | 8.94 s
    [Task  5/25]  Current/Best:    6.64/  20.66 GFLOPS | Progress: (16/20) | 11.40 s
    [Task  5/25]  Current/Best:   12.47/  20.66 GFLOPS | Progress: (20/20) | 13.43 s Done.
-
    [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  6/25]  Current/Best:   14.42/  15.00 GFLOPS | Progress: (4/20) | 4.47 s
    [Task  6/25]  Current/Best:   10.92/  15.00 GFLOPS | Progress: (8/20) | 9.27 s
    [Task  6/25]  Current/Best:   14.10/  23.15 GFLOPS | Progress: (12/20) | 12.87 s
    [Task  6/25]  Current/Best:    6.08/  23.15 GFLOPS | Progress: (16/20) | 15.35 s
    [Task  6/25]  Current/Best:    6.48/  23.15 GFLOPS | Progress: (20/20) | 18.44 s Done.
-
    [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  7/25]  Current/Best:   13.04/  13.15 GFLOPS | Progress: (4/20) | 4.95 s
    [Task  7/25]  Current/Best:    3.04/  15.55 GFLOPS | Progress: (8/20) | 8.05 s
    [Task  7/25]  Current/Best:    6.11/  15.55 GFLOPS | Progress: (12/20) | 11.00 s
    [Task  7/25]  Current/Best:   16.78/  16.78 GFLOPS | Progress: (16/20) | 15.06 s
    [Task  7/25]  Current/Best:    6.91/  16.78 GFLOPS | Progress: (20/20) | 18.03 s Done.
-
    [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  8/25]  Current/Best:    8.14/  21.95 GFLOPS | Progress: (4/20) | 8.68 s
    [Task  8/25]  Current/Best:    5.71/  21.95 GFLOPS | Progress: (8/20) | 20.40 s
    [Task  8/25]  Current/Best:   13.99/  21.95 GFLOPS | Progress: (12/20) | 28.60 s
    [Task  8/25]  Current/Best:   16.14/  21.95 GFLOPS | Progress: (16/20) | 34.51 s
    [Task  8/25]  Current/Best:    6.28/  21.95 GFLOPS | Progress: (20/20) | 37.01 s
    [Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  9/25]  Current/Best:   16.33/  17.59 GFLOPS | Progress: (4/20) | 4.63 s
    [Task  9/25]  Current/Best:   21.95/  21.95 GFLOPS | Progress: (8/20) | 12.22 s
    [Task  9/25]  Current/Best:   10.06/  21.95 GFLOPS | Progress: (12/20) | 17.61 s
    [Task  9/25]  Current/Best:    6.61/  21.95 GFLOPS | Progress: (16/20) | 19.92 s
   [Task  9/25]  Current/Best:   11.78/  21.95 GFLOPS | Progress: (20/20) | 28.84 s Done.
-
    [Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 10/25]  Current/Best:   14.42/  17.81 GFLOPS | Progress: (4/20) | 3.64 s
    [Task 10/25]  Current/Best:    5.61/  17.81 GFLOPS | Progress: (8/20) | 5.76 s
    [Task 10/25]  Current/Best:   12.81/  17.81 GFLOPS | Progress: (12/20) | 7.43 s
    [Task 10/25]  Current/Best:   16.50/  17.81 GFLOPS | Progress: (16/20) | 9.69 s
    [Task 10/25]  Current/Best:   13.01/  18.30 GFLOPS | Progress: (20/20) | 11.86 s Done.
-
    [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 11/25]  Current/Best:   18.52/  18.52 GFLOPS | Progress: (4/20) | 4.22 s
    [Task 11/25]  Current/Best:   21.47/  23.97 GFLOPS | Progress: (8/20) | 6.59 s
    [Task 11/25]  Current/Best:   10.97/  23.97 GFLOPS | Progress: (12/20) | 9.39 s
    [Task 11/25]  Current/Best:   11.97/  23.97 GFLOPS | Progress: (16/20) | 11.89 s
    [Task 11/25]  Current/Best:    6.93/  23.97 GFLOPS | Progress: (20/20) | 14.60 s Done.
-
    [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 12/25]  Current/Best:   10.70/  17.06 GFLOPS | Progress: (4/20) | 6.08 s
    [Task 12/25]  Current/Best:    9.28/  17.06 GFLOPS | Progress: (8/20) | 10.34 s
    [Task 12/25]  Current/Best:    6.45/  17.06 GFLOPS | Progress: (12/20) | 14.73 s
    [Task 12/25]  Current/Best:   12.89/  21.49 GFLOPS | Progress: (16/20) | 17.00 s
    [Task 12/25]  Current/Best:   13.86/  21.49 GFLOPS | Progress: (20/20) | 21.05 s Done.
-
    [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 13/25]  Current/Best:    9.81/  12.87 GFLOPS | Progress: (4/20) | 5.74 s
    [Task 13/25]  Current/Best:   16.19/  19.41 GFLOPS | Progress: (8/20) | 8.11 s
    [Task 13/25]  Current/Best:   21.76/  21.76 GFLOPS | Progress: (12/20) | 11.19 s
    [Task 13/25]  Current/Best:   17.65/  21.76 GFLOPS | Progress: (16/20) | 14.83 s
    [Task 13/25]  Current/Best:   12.23/  21.76 GFLOPS | Progress: (20/20) | 18.40 s Done.
-
    [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 14/25]  Current/Best:   13.73/  14.59 GFLOPS | Progress: (4/20) | 7.24 s
    [Task 14/25]  Current/Best:   10.69/  14.59 GFLOPS | Progress: (8/20) | 13.97 s
    [Task 14/25]  Current/Best:   14.83/  14.83 GFLOPS | Progress: (12/20) | 16.28 s
    [Task 14/25]  Current/Best:    8.22/  20.41 GFLOPS | Progress: (16/20) | 23.51 s
    [Task 14/25]  Current/Best:    8.00/  20.41 GFLOPS | Progress: (20/20) | 30.51 s
    [Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 15/25]  Current/Best:   18.43/  18.43 GFLOPS | Progress: (4/20) | 7.21 s
    [Task 15/25]  Current/Best:   22.08/  22.08 GFLOPS | Progress: (8/20) | 8.92 s Done.
+
    [Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  1/25]  Current/Best:   18.98/  18.98 GFLOPS | Progress: (4/20) | 9.43 s
    [Task  1/25]  Current/Best:   13.57/  18.98 GFLOPS | Progress: (8/20) | 12.46 s
    [Task  1/25]  Current/Best:   23.81/  23.81 GFLOPS | Progress: (12/20) | 14.81 s
    [Task  1/25]  Current/Best:    9.50/  23.81 GFLOPS | Progress: (16/20) | 18.82 s
    [Task  1/25]  Current/Best:   12.93/  23.81 GFLOPS | Progress: (20/20) | 21.27 s Done.
+
    [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  2/25]  Current/Best:   17.92/  17.92 GFLOPS | Progress: (4/20) | 3.63 s
    [Task  2/25]  Current/Best:   12.95/  17.99 GFLOPS | Progress: (8/20) | 5.22 s
    [Task  2/25]  Current/Best:   13.06/  17.99 GFLOPS | Progress: (12/20) | 7.39 s
    [Task  2/25]  Current/Best:    5.52/  17.99 GFLOPS | Progress: (16/20) | 8.93 s
    [Task  2/25]  Current/Best:   18.49/  18.49 GFLOPS | Progress: (20/20) | 10.39 s Done.
+
    [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  3/25]  Current/Best:   22.72/  22.72 GFLOPS | Progress: (4/20) | 4.15 s
    [Task  3/25]  Current/Best:   12.62/  22.72 GFLOPS | Progress: (8/20) | 7.08 s
    [Task  3/25]  Current/Best:   11.64/  22.72 GFLOPS | Progress: (12/20) | 9.75 s
    [Task  3/25]  Current/Best:   21.78/  22.72 GFLOPS | Progress: (16/20) | 11.70 s
    [Task  3/25]  Current/Best:   16.39/  24.10 GFLOPS | Progress: (20/20) | 13.79 s Done.
+
    [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  4/25]  Current/Best:    7.13/  15.81 GFLOPS | Progress: (4/20) | 4.33 s
    [Task  4/25]  Current/Best:    8.65/  15.81 GFLOPS | Progress: (8/20) | 6.30 s
    [Task  4/25]  Current/Best:   20.81/  20.81 GFLOPS | Progress: (12/20) | 8.88 s
    [Task  4/25]  Current/Best:   21.42/  21.42 GFLOPS | Progress: (16/20) | 11.42 s
    [Task  4/25]  Current/Best:   10.38/  21.42 GFLOPS | Progress: (20/20) | 13.90 s Done.
+
    [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  5/25]  Current/Best:   13.84/  13.88 GFLOPS | Progress: (4/20) | 3.97 s
    [Task  5/25]  Current/Best:   13.14/  22.62 GFLOPS | Progress: (8/20) | 6.45 s
    [Task  5/25]  Current/Best:   15.79/  22.62 GFLOPS | Progress: (12/20) | 8.54 s
    [Task  5/25]  Current/Best:   13.15/  22.62 GFLOPS | Progress: (16/20) | 11.05 s
    [Task  5/25]  Current/Best:   12.38/  22.62 GFLOPS | Progress: (20/20) | 13.45 s Done.
+
    [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  6/25]  Current/Best:   13.63/  16.94 GFLOPS | Progress: (4/20) | 4.74 s
    [Task  6/25]  Current/Best:   21.18/  21.18 GFLOPS | Progress: (8/20) | 6.69 s
    [Task  6/25]  Current/Best:    2.96/  21.18 GFLOPS | Progress: (12/20) | 10.61 s
    [Task  6/25]  Current/Best:   20.55/  21.18 GFLOPS | Progress: (16/20) | 13.46 s
    [Task  6/25]  Current/Best:   10.76/  22.40 GFLOPS | Progress: (20/20) | 17.15 s Done.
+
    [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  7/25]  Current/Best:   16.89/  16.89 GFLOPS | Progress: (4/20) | 4.25 s
    [Task  7/25]  Current/Best:    5.55/  16.89 GFLOPS | Progress: (8/20) | 6.87 s
    [Task  7/25]  Current/Best:   18.85/  18.85 GFLOPS | Progress: (12/20) | 9.68 s
    [Task  7/25]  Current/Best:   19.15/  19.15 GFLOPS | Progress: (16/20) | 12.53 s
    [Task  7/25]  Current/Best:    6.05/  19.15 GFLOPS | Progress: (20/20) | 14.96 s Done.
+
    [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  8/25]  Current/Best:   19.96/  19.96 GFLOPS | Progress: (4/20) | 13.49 s
    [Task  8/25]  Current/Best:   13.79/  19.96 GFLOPS | Progress: (8/20) | 16.61 s
    [Task  8/25]  Current/Best:   12.87/  19.96 GFLOPS | Progress: (12/20) | 19.45 s
    [Task  8/25]  Current/Best:   12.15/  19.96 GFLOPS | Progress: (16/20) | 22.70 s
    [Task  8/25]  Current/Best:    8.28/  19.96 GFLOPS | Progress: (20/20) | 34.20 s
    [Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  9/25]  Current/Best:   15.51/  15.51 GFLOPS | Progress: (4/20) | 4.41 s
    [Task  9/25]  Current/Best:   16.76/  16.76 GFLOPS | Progress: (8/20) | 11.63 s
    [Task  9/25]  Current/Best:   16.46/  16.76 GFLOPS | Progress: (12/20) | 14.19 s
    [Task  9/25]  Current/Best:   17.41/  17.97 GFLOPS | Progress: (16/20) | 18.26 s
    [Task  9/25]  Current/Best:   16.64/  17.97 GFLOPS | Progress: (20/20) | 21.38 s Done.
+
    [Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 10/25]  Current/Best:   10.90/  18.90 GFLOPS | Progress: (4/20) | 3.98 s
    [Task 10/25]  Current/Best:   14.85/  18.90 GFLOPS | Progress: (8/20) | 6.22 s
    [Task 10/25]  Current/Best:   20.42/  21.15 GFLOPS | Progress: (12/20) | 8.41 s
    [Task 10/25]  Current/Best:   10.99/  21.15 GFLOPS | Progress: (16/20) | 10.17 s
    [Task 10/25]  Current/Best:   14.07/  21.15 GFLOPS | Progress: (20/20) | 13.46 s Done.
+
    [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 11/25]  Current/Best:   11.66/  17.08 GFLOPS | Progress: (4/20) | 4.50 s
    [Task 11/25]  Current/Best:   11.18/  18.89 GFLOPS | Progress: (8/20) | 7.72 s
    [Task 11/25]  Current/Best:    8.76/  18.89 GFLOPS | Progress: (12/20) | 10.69 s
    [Task 11/25]  Current/Best:   21.83/  21.83 GFLOPS | Progress: (16/20) | 13.34 s
    [Task 11/25]  Current/Best:   19.55/  21.83 GFLOPS | Progress: (20/20) | 15.99 s Done.
+
    [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 12/25]  Current/Best:    5.23/  14.62 GFLOPS | Progress: (4/20) | 4.56 s
    [Task 12/25]  Current/Best:   12.42/  21.87 GFLOPS | Progress: (8/20) | 9.28 s
    [Task 12/25]  Current/Best:   15.91/  21.87 GFLOPS | Progress: (12/20) | 12.80 s
    [Task 12/25]  Current/Best:   10.48/  21.87 GFLOPS | Progress: (16/20) | 15.61 s
    [Task 12/25]  Current/Best:   21.38/  21.87 GFLOPS | Progress: (20/20) | 17.81 s Done.
+
    [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 13/25]  Current/Best:    5.98/  15.95 GFLOPS | Progress: (4/20) | 5.58 s
    [Task 13/25]  Current/Best:   18.73/  18.73 GFLOPS | Progress: (8/20) | 8.91 s
    [Task 13/25]  Current/Best:    8.42/  18.73 GFLOPS | Progress: (12/20) | 11.49 s
    [Task 13/25]  Current/Best:   17.28/  18.73 GFLOPS | Progress: (16/20) | 14.86 s
    [Task 13/25]  Current/Best:   18.48/  22.00 GFLOPS | Progress: (20/20) | 17.66 s Done.
+
    [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 14/25]  Current/Best:   10.42/  13.56 GFLOPS | Progress: (4/20) | 4.17 s
    [Task 14/25]  Current/Best:    6.07/  16.88 GFLOPS | Progress: (8/20) | 7.04 s
    [Task 14/25]  Current/Best:    5.31/  16.88 GFLOPS | Progress: (12/20) | 10.11 s
    [Task 14/25]  Current/Best:   15.66/  16.88 GFLOPS | Progress: (16/20) | 15.53 s
    [Task 14/25]  Current/Best:    9.13/  16.88 GFLOPS | Progress: (20/20) | 17.93 s
    [Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
      Done.
-
    [Task 15/25]  Current/Best:   12.41/  22.08 GFLOPS | Progress: (12/20) | 12.20 s
    [Task 15/25]  Current/Best:   13.56/  22.08 GFLOPS | Progress: (16/20) | 16.07 s
    [Task 15/25]  Current/Best:   20.89/  22.08 GFLOPS | Progress: (20/20) | 24.55 s Done.
-
    [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 16/25]  Current/Best:    5.11/  14.77 GFLOPS | Progress: (4/20) | 4.19 s
    [Task 16/25]  Current/Best:   17.68/  22.02 GFLOPS | Progress: (8/20) | 5.80 s
    [Task 16/25]  Current/Best:   20.44/  22.02 GFLOPS | Progress: (12/20) | 7.26 s
    [Task 16/25]  Current/Best:    9.79/  22.02 GFLOPS | Progress: (16/20) | 9.47 s
    [Task 16/25]  Current/Best:   16.68/  22.02 GFLOPS | Progress: (20/20) | 11.29 s Done.
-
    [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 17/25]  Current/Best:    6.13/  19.59 GFLOPS | Progress: (4/20) | 5.64 s
    [Task 17/25]  Current/Best:   22.02/  22.02 GFLOPS | Progress: (8/20) | 7.94 s
    [Task 17/25]  Current/Best:   16.57/  22.02 GFLOPS | Progress: (12/20) | 10.70 s
    [Task 17/25]  Current/Best:   15.68/  22.02 GFLOPS | Progress: (16/20) | 13.45 s
    [Task 17/25]  Current/Best:   17.95/  22.02 GFLOPS | Progress: (20/20) | 15.69 s Done.
-
    [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 18/25]  Current/Best:   11.68/  19.25 GFLOPS | Progress: (4/20) | 5.47 s
    [Task 18/25]  Current/Best:   12.82/  19.25 GFLOPS | Progress: (8/20) | 9.31 s
    [Task 18/25]  Current/Best:   11.73/  21.99 GFLOPS | Progress: (12/20) | 11.97 s
    [Task 18/25]  Current/Best:   12.20/  21.99 GFLOPS | Progress: (16/20) | 20.30 s
    [Task 18/25]  Current/Best:   14.97/  22.17 GFLOPS | Progress: (20/20) | 22.66 s Done.
-
    [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 19/25]  Current/Best:   11.40/  19.30 GFLOPS | Progress: (4/20) | 4.77 s
    [Task 19/25]  Current/Best:    7.32/  19.30 GFLOPS | Progress: (8/20) | 10.23 s
    [Task 19/25]  Current/Best:   17.22/  20.42 GFLOPS | Progress: (12/20) | 13.39 s
    [Task 19/25]  Current/Best:   19.93/  20.42 GFLOPS | Progress: (16/20) | 20.59 s
    [Task 19/25]  Current/Best:    1.55/  20.42 GFLOPS | Progress: (20/20) | 26.65 s Done.
-
    [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 20/25]  Current/Best:    8.92/  16.07 GFLOPS | Progress: (4/20) | 4.30 s
    [Task 20/25]  Current/Best:   10.41/  16.07 GFLOPS | Progress: (8/20) | 6.30 s
    [Task 20/25]  Current/Best:   10.13/  16.07 GFLOPS | Progress: (12/20) | 9.48 s
    [Task 20/25]  Current/Best:   14.34/  16.07 GFLOPS | Progress: (16/20) | 11.68 s
    [Task 20/25]  Current/Best:   14.39/  16.07 GFLOPS | Progress: (20/20) | 14.57 s
    [Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 21/25]  Current/Best:   14.29/  19.56 GFLOPS | Progress: (4/20) | 4.16 s
    [Task 21/25]  Current/Best:    7.24/  19.56 GFLOPS | Progress: (8/20) | 6.07 s
    [Task 21/25]  Current/Best:   12.13/  19.56 GFLOPS | Progress: (12/20) | 8.41 s Done.
-
    [Task 21/25]  Current/Best:   12.88/  19.56 GFLOPS | Progress: (16/20) | 11.09 s
    [Task 21/25]  Current/Best:   11.76/  19.56 GFLOPS | Progress: (20/20) | 14.00 s
    [Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 22/25]  Current/Best:    2.68/  10.80 GFLOPS | Progress: (4/20) | 4.76 s
    [Task 22/25]  Current/Best:    2.69/  21.23 GFLOPS | Progress: (8/20) | 6.87 s
    [Task 22/25]  Current/Best:   14.46/  21.23 GFLOPS | Progress: (12/20) | 9.05 s
    [Task 22/25]  Current/Best:   10.20/  21.23 GFLOPS | Progress: (16/20) | 11.21 s
    [Task 22/25]  Current/Best:    4.44/  21.23 GFLOPS | Progress: (20/20) | 13.71 s Done.
-
    [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 23/25]  Current/Best:    9.55/  18.29 GFLOPS | Progress: (4/20) | 4.54 s
    [Task 23/25]  Current/Best:   12.29/  18.29 GFLOPS | Progress: (8/20) | 7.62 s
    [Task 23/25]  Current/Best:   21.95/  21.95 GFLOPS | Progress: (12/20) | 10.14 s
    [Task 23/25]  Current/Best:   12.91/  22.63 GFLOPS | Progress: (16/20) | 13.19 s
    [Task 23/25]  Current/Best:   16.29/  22.63 GFLOPS | Progress: (20/20) | 16.32 s Done.
-
    [Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 24/25]  Current/Best:    3.14/  10.03 GFLOPS | Progress: (4/20) | 7.79 s
    [Task 24/25]  Current/Best:    7.49/  10.03 GFLOPS | Progress: (8/20) | 18.51 s
    [Task 24/25]  Current/Best:    6.92/  10.07 GFLOPS | Progress: (12/20) | 20.96 s
    [Task 24/25]  Current/Best:    8.26/  10.07 GFLOPS | Progress: (16/20) | 29.10 s
    [Task 24/25]  Current/Best:    3.30/  10.07 GFLOPS | Progress: (20/20) | 31.44 s
    [Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 25/25]  Current/Best:    3.48/   4.18 GFLOPS | Progress: (4/20) | 12.86 s
    [Task 25/25]  Current/Best:    3.58/   8.27 GFLOPS | Progress: (8/20) | 23.80 s Done.
-
    [Task 25/25]  Current/Best:    8.43/   8.73 GFLOPS | Progress: (12/20) | 34.74 s
    [Task 25/25]  Current/Best:    4.42/   9.20 GFLOPS | Progress: (16/20) | 37.62 s
    [Task 25/25]  Current/Best:    6.81/   9.20 GFLOPS | Progress: (20/20) | 48.55 s
+
    [Task 15/25]  Current/Best:   13.48/  14.40 GFLOPS | Progress: (4/20) | 3.96 s
    [Task 15/25]  Current/Best:   10.87/  22.01 GFLOPS | Progress: (8/20) | 8.74 s
    [Task 15/25]  Current/Best:   10.84/  22.01 GFLOPS | Progress: (12/20) | 11.03 s
    [Task 15/25]  Current/Best:   19.88/  22.01 GFLOPS | Progress: (16/20) | 12.90 s
    [Task 15/25]  Current/Best:    6.33/  22.01 GFLOPS | Progress: (20/20) | 15.25 s
    [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 16/25]  Current/Best:   13.76/  18.07 GFLOPS | Progress: (4/20) | 4.30 s
    [Task 16/25]  Current/Best:    6.58/  18.07 GFLOPS | Progress: (8/20) | 6.02 s
    [Task 16/25]  Current/Best:    6.18/  18.07 GFLOPS | Progress: (12/20) | 9.00 s
    [Task 16/25]  Current/Best:    5.77/  18.07 GFLOPS | Progress: (16/20) | 10.75 s
    [Task 16/25]  Current/Best:   11.01/  18.97 GFLOPS | Progress: (20/20) | 13.60 s Done.
+
    [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 17/25]  Current/Best:   11.64/  21.25 GFLOPS | Progress: (4/20) | 4.98 s
    [Task 17/25]  Current/Best:   11.12/  21.25 GFLOPS | Progress: (8/20) | 7.98 s
    [Task 17/25]  Current/Best:   15.83/  21.25 GFLOPS | Progress: (12/20) | 11.60 s
    [Task 17/25]  Current/Best:   12.20/  21.25 GFLOPS | Progress: (16/20) | 14.25 s
    [Task 17/25]  Current/Best:   22.01/  22.01 GFLOPS | Progress: (20/20) | 16.26 s Done.
+
    [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 18/25]  Current/Best:   16.77/  18.95 GFLOPS | Progress: (4/20) | 3.86 s
    [Task 18/25]  Current/Best:   16.07/  18.95 GFLOPS | Progress: (8/20) | 6.15 s
    [Task 18/25]  Current/Best:   11.02/  18.95 GFLOPS | Progress: (12/20) | 10.45 s
    [Task 18/25]  Current/Best:   11.61/  18.95 GFLOPS | Progress: (16/20) | 13.11 s
    [Task 18/25]  Current/Best:   13.83/  18.95 GFLOPS | Progress: (20/20) | 16.86 s Done.
+
    [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 19/25]  Current/Best:    2.65/   9.71 GFLOPS | Progress: (4/20) | 6.64 s
    [Task 19/25]  Current/Best:   11.82/  18.09 GFLOPS | Progress: (8/20) | 10.61 s
    [Task 19/25]  Current/Best:   12.92/  18.09 GFLOPS | Progress: (12/20) | 14.17 s
    [Task 19/25]  Current/Best:   13.61/  18.09 GFLOPS | Progress: (16/20) | 17.26 s
    [Task 19/25]  Current/Best:   11.82/  18.76 GFLOPS | Progress: (20/20) | 20.35 s Done.
+
    [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 20/25]  Current/Best:   16.68/  16.68 GFLOPS | Progress: (4/20) | 3.88 s
    [Task 20/25]  Current/Best:   17.80/  17.80 GFLOPS | Progress: (8/20) | 6.96 s
    [Task 20/25]  Current/Best:   18.13/  19.21 GFLOPS | Progress: (12/20) | 8.78 s
    [Task 20/25]  Current/Best:   18.93/  19.21 GFLOPS | Progress: (16/20) | 12.06 s
    [Task 20/25]  Current/Best:    9.81/  19.21 GFLOPS | Progress: (20/20) | 14.32 s
    [Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
+     Done.
+
    [Task 21/25]  Current/Best:    5.25/  10.03 GFLOPS | Progress: (4/20) | 4.30 s
    [Task 21/25]  Current/Best:    8.15/  10.03 GFLOPS | Progress: (8/20) | 6.22 s
    [Task 21/25]  Current/Best:   22.52/  22.52 GFLOPS | Progress: (12/20) | 8.51 s
    [Task 21/25]  Current/Best:   12.28/  22.52 GFLOPS | Progress: (16/20) | 12.05 s
    [Task 21/25]  Current/Best:    9.83/  22.52 GFLOPS | Progress: (20/20) | 13.97 s
    [Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 22/25]  Current/Best:    9.29/  19.91 GFLOPS | Progress: (4/20) | 5.40 s
    [Task 22/25]  Current/Best:   15.70/  19.91 GFLOPS | Progress: (8/20) | 7.18 s
    [Task 22/25]  Current/Best:    6.89/  19.91 GFLOPS | Progress: (12/20) | 9.94 s
    [Task 22/25]  Current/Best:   16.01/  19.91 GFLOPS | Progress: (16/20) | 12.09 s
    [Task 22/25]  Current/Best:   10.84/  19.91 GFLOPS | Progress: (20/20) | 13.91 s Done.
+
    [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 23/25]  Current/Best:   18.37/  18.37 GFLOPS | Progress: (4/20) | 5.12 s
    [Task 23/25]  Current/Best:   11.62/  18.37 GFLOPS | Progress: (8/20) | 12.77 s
    [Task 23/25]  Current/Best:   21.25/  21.25 GFLOPS | Progress: (12/20) | 15.28 s
    [Task 23/25]  Current/Best:   23.43/  23.43 GFLOPS | Progress: (16/20) | 17.50 s
    [Task 23/25]  Current/Best:   20.16/  23.43 GFLOPS | Progress: (20/20) | 19.88 s Done.
+
    [Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 24/25]  Current/Best:    3.05/   4.82 GFLOPS | Progress: (4/20) | 12.78 s
    [Task 24/25]  Current/Best:    2.17/   4.82 GFLOPS | Progress: (8/20) | 23.74 s
    [Task 24/25]  Current/Best:    3.70/   4.82 GFLOPS | Progress: (12/20) | 34.41 s
    [Task 24/25]  Current/Best:    6.80/  10.35 GFLOPS | Progress: (16/20) | 46.36 s Done.
+
    [Task 24/25]  Current/Best:    2.87/  10.35 GFLOPS | Progress: (20/20) | 58.02 s
    [Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 25/25]  Current/Best:    5.64/   8.08 GFLOPS | Progress: (4/20) | 4.16 s
    [Task 25/25]  Current/Best:    4.53/   8.08 GFLOPS | Progress: (8/20) | 5.86 s
    [Task 25/25]  Current/Best:    5.90/   8.08 GFLOPS | Progress: (12/20) | 16.82 s
    [Task 25/25]  Current/Best:    3.02/   8.08 GFLOPS | Progress: (16/20) | 18.94 s
    [Task 25/25]  Current/Best:    9.24/   9.24 GFLOPS | Progress: (20/20) | 28.99 s
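
The per-task progress lines above are emitted by AutoTVM's tuner callbacks while each of the 25 kernel tasks is explored. A minimal sketch of the loop that produces them, assuming `tasks` and `measure_option` were prepared earlier; the trial count and log file name here are illustrative placeholders, not values recovered from this run:

.. code-block:: python

   # Hedged sketch: the tuning loop behind the "[Task i/25] Current/Best ..."
   # progress lines. `tasks` and `measure_option` are assumed to exist.
   from tvm import autotvm

   for i, task in enumerate(tasks):
       prefix = "[Task %2d/%2d] " % (i + 1, len(tasks))
       tuner = autotvm.tuner.XGBTuner(task, loss_type="rank")
       tuner.tune(
           n_trial=20,  # 20 trials per task, matching the (x/20) progress counts
           measure_option=measure_option,
           callbacks=[
               autotvm.callback.progress_bar(20, prefix=prefix),
               autotvm.callback.log_to_file("resnet-50-v2-autotuning.json"),
           ],
       )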
 
 
 
@@ -664,7 +664,7 @@ Verify that the optimized model runs and produces the same results:
 
  .. code-block:: none
 
-    class='n02123045 tabby, tabby cat' with probability=0.621102
+    class='n02123045 tabby, tabby cat' with probability=0.621103
     class='n02123159 tiger cat' with probability=0.356379
     class='n02124075 Egyptian cat' with probability=0.019712
     class='n02129604 tiger, Panthera tigris' with probability=0.001215
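
The labels come from re-running inference with the optimized module and ranking the softmaxed scores; a minimal sketch, assuming `module`, `input_name`, `img_data`, and a `labels` list are already in scope:

.. code-block:: python

   # Hedged sketch: reproduce the top-5 printout above from a compiled module.
   # `module`, `input_name`, `img_data`, and `labels` are assumptions here.
   import numpy as np
   from scipy.special import softmax

   module.set_input(input_name, img_data)
   module.run()
   scores = softmax(module.get_output(0).numpy()[0])
   for rank in np.argsort(scores)[::-1][0:5]:
       print("class='%s' with probability=%f" % (labels[rank], scores[rank]))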
@@ -722,8 +722,8 @@ improvement in comparing the optimized model to the unoptimized model.
 
  .. code-block:: none
 
-    optimized: {'mean': 409.3763455399767, 'median': 409.3127982999249, 'std': 1.043979226518125}
-    unoptimized: {'mean': 516.0710022699914, 'median': 516.3282544500362, 'std': 1.3528910766007418}
+    optimized: {'mean': 409.63730430999703, 'median': 408.2600837999962, 'std': 2.9548387924640633}
+    unoptimized: {'mean': 515.8998823199988, 'median': 515.666321499998, 'std': 1.6769409329523433}
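
Mean/median/std dictionaries of this shape can be gathered with the standard-library timer; a minimal sketch, assuming `module` is the compiled graph module being measured:

.. code-block:: python

   # Hedged sketch: collect per-run latency statistics (ms) like those above.
   import timeit
   import numpy as np

   timing_number, timing_repeat = 10, 10
   raw = (
       np.array(
           timeit.Timer(lambda: module.run()).repeat(
               repeat=timing_repeat, number=timing_number
           )
       )
       * 1000 / timing_number
   )
   print({"mean": np.mean(raw), "median": np.median(raw), "std": np.std(raw)})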
 
 
 
@@ -746,7 +746,7 @@ profiling/benchmarking.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 12 minutes  26.333 seconds)
+   **Total running time of the script:** ( 11 minutes  36.214 seconds)
 
 
 .. _sphx_glr_download_tutorial_autotvm_relay_x86.py:
diff --git a/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt b/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
index 8eff72f5fd..28eeebf443 100644
--- a/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
+++ b/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
@@ -274,7 +274,7 @@ device and returns the measured cost. Network overhead is excluded.
 
  .. code-block:: none
 
-    1.252e-07 secs/op
+    1.248e-07 secs/op
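
The secs/op figure is produced by a remote time evaluator; a minimal sketch, assuming `func` is the module uploaded to the RPC session `remote` and `a`, `b` are device arrays:

.. code-block:: python

   # Hedged sketch: time a kernel over RPC; network overhead is excluded
   # because the evaluator loops on the remote device itself.
   time_f = func.time_evaluator(func.entry_name, remote.cpu(), number=10)
   print("%g secs/op" % time_f(a, b).mean)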
 
 
 
diff --git a/docs/_sources/tutorial/intro_topi.rst.txt b/docs/_sources/tutorial/intro_topi.rst.txt
index f6e620fd77..793a12ab06 100644
--- a/docs/_sources/tutorial/intro_topi.rst.txt
+++ b/docs/_sources/tutorial/intro_topi.rst.txt
@@ -263,7 +263,7 @@ As you can see, scheduled stages of computation have been accumulated and we can
 
  .. code-block:: none
 
-    [stage(a, placeholder(a, 0xe56b460)), stage(b, placeholder(b, 0x223a2f30)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T.Range(0, 10), "DataPar", ""), T.iter_var(ax2, T.Range(0, 10), "DataPar", "")], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[a[ax0, ax1, ax2] * b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T. [...]
+    [stage(a, placeholder(a, 0x20168730)), stage(b, placeholder(b, 0x216a1830)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T.Range(0, 10), "DataPar", ""), T.iter_var(ax2, T.Range(0, 10), "DataPar", "")], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[a[ax0, ax1, ax2] * b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T [...]
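
The pointer values differ between runs, but the stage list itself is deterministic; a minimal sketch of how such a list arises from fused TOPI operators (shapes and names are assumptions for illustration):

.. code-block:: python

   # Hedged sketch: broadcast add/multiply via TOPI accumulate schedule stages.
   from tvm import te, topi

   a = te.placeholder((100, 10, 10), name="a")
   b = te.placeholder((10, 10), name="b")
   c = topi.add(a, b)        # broadcast T_add stage
   d = topi.multiply(a, b)   # broadcast T_multiply stage
   sched = te.create_schedule([c.op, d.op])
   print(sched.stages)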
 
 
 
diff --git a/docs/_sources/tutorial/sg_execution_times.rst.txt b/docs/_sources/tutorial/sg_execution_times.rst.txt
index 354195960b..b18ebbae87 100644
--- a/docs/_sources/tutorial/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorial/sg_execution_times.rst.txt
@@ -5,24 +5,24 @@
 
 Computation times
 =================
-**15:33.061** total execution time for **tutorial** files:
+**15:12.816** total execution time for **tutorial** files:
 
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_autotvm_relay_x86.py` (``autotvm_relay_x86.py``)                 | 12:26.333 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_autotvm_relay_x86.py` (``autotvm_relay_x86.py``)                 | 11:36.214 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_auto_scheduler_matmul_x86.py` (``auto_scheduler_matmul_x86.py``) | 01:10.690 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_auto_scheduler_matmul_x86.py` (``auto_scheduler_matmul_x86.py``) | 01:29.634 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)     | 01:01.321 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)     | 01:00.638 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_relay_quick_start.py` (``relay_quick_start.py``)                 | 00:35.710 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_relay_quick_start.py` (``relay_quick_start.py``)                 | 00:35.638 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_autotvm_matmul_x86.py` (``autotvm_matmul_x86.py``)               | 00:17.376 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_autotvm_matmul_x86.py` (``autotvm_matmul_x86.py``)               | 00:28.372 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_intro_topi.py` (``intro_topi.py``)                               | 00:00.828 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_tensor_ir_blitz_course.py` (``tensor_ir_blitz_course.py``)       | 00:01.324 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_tensor_ir_blitz_course.py` (``tensor_ir_blitz_course.py``)       | 00:00.620 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_intro_topi.py` (``intro_topi.py``)                               | 00:00.827 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``) | 00:00.183 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``) | 00:00.169 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_tutorial_uma.py` (``uma.py``)                                             | 00:00.000 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/tutorial/tensor_expr_get_started.rst.txt b/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
index b972bf1e54..4d0d0925a6 100644
--- a/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
+++ b/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
@@ -286,7 +286,7 @@ helper function to run a profile of the TVM generated code.
  .. code-block:: none
 
     Numpy running time: 0.000007
-    naive: 0.000008
+    naive: 0.000007
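
Both numbers come from the same profiling helper applied to different builds; a minimal sketch of such a helper, assuming `a`, `b`, `c` are device arrays and `log` is a result list:

.. code-block:: python

   # Hedged sketch: a profiling helper in the shape of the one timed above.
   import tvm

   def evaluate_addition(func, target, optimization, log):
       dev = tvm.device(target.kind.name, 0)
       mean_time = func.time_evaluator(func.entry_name, dev, number=10)(a, b, c).mean
       print("%s: %f" % (optimization, mean_time))
       log.append((optimization, mean_time))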
 
 
 
@@ -490,10 +490,10 @@ We can now compare the different schedules
  .. code-block:: none
 
                 Operator                  Timing             Performance
-                   numpy    6.819860000177869e-06                    1.0
-                   naive              7.7823e-06      1.1411231315301238
-                parallel              6.9705e-06      1.0220884299411135
-                  vector    2.4512300000000002e-05     3.594252667849589
+                   numpy    7.4744799985637655e-06                   1.0
+                   naive              6.7377e-06      0.9014272566512535
+                parallel              6.9822e-06      0.9341385623269635
+                  vector             2.46861e-05      3.3027180492480386
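
The table normalizes each schedule's timing against the first (numpy) entry; a minimal sketch, assuming `log` holds (name, seconds) pairs with the baseline first:

.. code-block:: python

   # Hedged sketch: print the Operator/Timing/Performance table from a log.
   baseline = log[0][1]
   print("%s\t%s\t%s" % ("Operator".rjust(20), "Timing".rjust(20), "Performance".rjust(20)))
   for name, seconds in log:
       print("%s\t%s\t%s" % (name.rjust(20), str(seconds).rjust(20), str(seconds / baseline).rjust(20)))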
 
 
 
@@ -914,7 +914,7 @@ matrix multiplication.
 
  .. code-block:: none
 
-    Numpy running time: 0.019000
+    Numpy running time: 0.018132
 
 
 
@@ -972,7 +972,7 @@ optimizations.
 
  .. code-block:: none
 
-    none: 3.394787
+    none: 3.369933
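
The "none" row measures the default schedule of a plain TE matmul; a minimal sketch of that baseline, using the 1024-cubed sizes visible in the printed buffers:

.. code-block:: python

   # Hedged sketch: the unoptimized GEMM definition behind the "none" timing.
   from tvm import te

   M = K = N = 1024
   k = te.reduce_axis((0, K), "k")
   A = te.placeholder((M, K), name="A")
   B = te.placeholder((K, N), name="B")
   C = te.compute((M, N), lambda m, n: te.sum(A[m, k] * B[k, n], axis=k), name="C")
   s = te.create_schedule(C.op)  # default schedule: no blocking, no vectorization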
 
 
 
@@ -1074,7 +1074,7 @@ schedule.
 
  .. code-block:: none
 
-    blocking: 0.320045
+    blocking: 0.300298
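
Blocking tiles the output into cache-sized chunks; a minimal sketch on top of the baseline schedule from the previous sketch (`s` and `C` assumed in scope, 32x32 tiles as an illustrative choice):

.. code-block:: python

   # Hedged sketch: 32x32 output tiling with a split reduction axis.
   bn = 32
   mo, no, mi, ni = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
   ko, ki = s[C].split(s[C].op.reduce_axis[0], factor=4)
   s[C].reorder(mo, no, ko, ki, mi, ni)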
 
 
 
@@ -1169,7 +1169,7 @@ already cache friendly from our previous optimizations.
 
  .. code-block:: none
 
-    vectorization: 0.350142
+    vectorization: 0.343557
     @main = primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
       buffers = {A: Buffer(A_2: Pointer(float32), float32, [1024, 1024], []),
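
Vectorization adds a single scheduling directive on the innermost axis of the blocked nest; a minimal sketch, reusing `s`, `C`, and `ni` from the blocking sketch above:

.. code-block:: python

   # Hedged sketch: emit SIMD code for the innermost (unit-stride) axis.
   s[C].vectorize(ni)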
@@ -1242,7 +1242,7 @@ more cache friendly.
 
  .. code-block:: none
 
-    loop permutation: 0.122443
+    loop permutation: 0.116184
     @main = primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
       buffers = {A: Buffer(A_2: Pointer(float32), float32, [1024, 1024], []),
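
Loop permutation hoists the row axis above the inner reduction so A is walked with better locality; a minimal sketch, again reusing the names from the blocking sketch:

.. code-block:: python

   # Hedged sketch: reorder so `mi` sits above `ki`, then re-vectorize.
   s[C].reorder(mo, no, ko, mi, ki, ni)
   s[C].vectorize(ni)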
@@ -1340,7 +1340,7 @@ optimized schedule.
 
  .. code-block:: none
 
-    array packing: 0.108446
+    array packing: 0.107706
     @main = primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
       buffers = {A: Buffer(A_2: Pointer(float32), float32, [1024, 1024], []),
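
Array packing rewrites B into a block-major layout so the innermost reads are sequential; a minimal sketch under the same 1024/bn assumptions (`k` is the reduction axis from the baseline sketch):

.. code-block:: python

   # Hedged sketch: pack B as [N//bn, K, bn] and index it from the matmul.
   import tvm
   from tvm import te

   packedB = te.compute(
       (N // bn, K, bn), lambda bigN, kk, littleN: B[kk, bigN * bn + littleN], name="packedB"
   )
   C = te.compute(
       (M, N),
       lambda m, n: te.sum(A[m, k] * packedB[n // bn, k, tvm.tir.indexmod(n, bn)], axis=k),
       name="C",
   )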
@@ -1432,7 +1432,7 @@ to `C` when all the block results are ready.
 
  .. code-block:: none
 
-    block caching: 0.110250
+    block caching: 0.110915
     @main = primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
       buffers = {A: Buffer(A_2: Pointer(float32), float32, [1024, 1024], []),
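
Block caching writes each tile into a small intermediate buffer and commits it to C once the tile is complete; a minimal sketch, assuming the packed `C` and the names from the previous sketches:

.. code-block:: python

   # Hedged sketch: accumulate per-block results in a write cache.
   s = te.create_schedule(C.op)
   CC = s.cache_write(C, "global")
   mo, no, mi, ni = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
   s[CC].compute_at(s[C], no)  # compute the cached block at the inner tile loop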
@@ -1517,7 +1517,7 @@ of thread-level parallelization.
 
  .. code-block:: none
 
-    parallelization: 0.146611
+    parallelization: 0.145955
     @main = primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
       buffers = {A: Buffer(A_2: Pointer(float32), float32, [1024, 1024], []),
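
Parallelization distributes the outer block loop across threads; a minimal sketch, reusing `s`, `C`, and `mo` from the sketches above:

.. code-block:: python

   # Hedged sketch: run outer output blocks on separate threads.
   s[C].parallel(mo)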
@@ -1597,13 +1597,13 @@ working, we can compare the results.
  .. code-block:: none
 
                 Operator                  Timing             Performance
-                    none      3.3947865387999996                     1.0
-                blocking     0.32004532770000005     0.09427553810588948
-           vectorization            0.3501424029      0.1031412134159603
-        loop permutation     0.12244309140000001    0.036067979532899176
-           array packing            0.1084455223     0.03194472496592781
-           block caching            0.1102499436    0.032476252141311814
-         parallelization            0.1466109915     0.04318710169972117
+                    none      3.3699329131999995                     1.0
+                blocking            0.3002977844     0.08911090877320911
+           vectorization            0.3435569755     0.10194771953895287
+        loop permutation            0.1161838557    0.034476607900682174
+           array packing     0.10770565089999999     0.03196077004326046
+           block caching     0.11091495809999999     0.03291310567802315
+         parallelization     0.14595452820000002     0.04331081121178921
 
 
 
@@ -1645,7 +1645,7 @@ the computation for specific platforms.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  1.321 seconds)
+   **Total running time of the script:** ( 1 minutes  0.638 seconds)
 
 
 .. _sphx_glr_download_tutorial_tensor_expr_get_started.py:
diff --git a/docs/commit_hash b/docs/commit_hash
index 8a4e7ed872..8c0c9dd529 100644
--- a/docs/commit_hash
+++ b/docs/commit_hash
@@ -1 +1 @@
-c9b4016000af34ac8dda765ba4f97cef45d587e6
+328122675da7800944211e7ac0b21b3ed9398060
diff --git a/docs/how_to/compile_models/from_darknet.html b/docs/how_to/compile_models/from_darknet.html
index fcd0d0045c..0bf11df38e 100644
--- a/docs/how_to/compile_models/from_darknet.html
+++ b/docs/how_to/compile_models/from_darknet.html
@@ -585,7 +585,7 @@ class:[&#39;truck 0.9266&#39;] left:471 top:83 right:689 bottom:169
 class:[&#39;bicycle 0.9984&#39;] left:111 top:113 right:577 bottom:447
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  16.919 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  17.012 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-darknet-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7716f96385bd5abb6e822041e285be54/from_darknet.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_darknet.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/from_keras.html b/docs/how_to/compile_models/from_keras.html
index 0c9b4b3a32..fbdbec985b 100644
--- a/docs/how_to/compile_models/from_keras.html
+++ b/docs/how_to/compile_models/from_keras.html
@@ -506,7 +506,7 @@ Tensorflow is also required since it’s used as the default backend of keras.</
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Relay top-1 id: 285, class name: Egyptian cat
 
 1/1 [==============================] - ETA: 0s
-1/1 [==============================] - 1s 976ms/step
+1/1 [==============================] - 1s 966ms/step
 Keras top-1 id: 285, class name: Egyptian cat
 </pre></div>
 </div>
diff --git a/docs/how_to/compile_models/from_mxnet.html b/docs/how_to/compile_models/from_mxnet.html
index 7af39325ff..72b3c86fde 100644
--- a/docs/how_to/compile_models/from_mxnet.html
+++ b/docs/how_to/compile_models/from_mxnet.html
@@ -439,7 +439,7 @@
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;x&quot;</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.html#tuple" title="builtins.tuple" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">x</span><span class="o">.</span><span class="n">shape</span></a><span class="p">)</span>
 </pre></div>
 </div>
-<img src="../../_images/sphx_glr_from_mxnet_001.png" srcset="../../_images/sphx_glr_from_mxnet_001.png" alt="from mxnet" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zipfc4a67b1-b7a5-45f2-9f14-a82fa2495f34 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
+<img src="../../_images/sphx_glr_from_mxnet_001.png" srcset="../../_images/sphx_glr_from_mxnet_001.png" alt="from mxnet" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip46eab057-cce9-4764-a640-e8011944cde5 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
 x (1, 3, 224, 224)
 </pre></div>
 </div>
diff --git a/docs/how_to/compile_models/from_oneflow.html b/docs/how_to/compile_models/from_oneflow.html
index 86a08137dd..a7d1f7ee43 100644
--- a/docs/how_to/compile_models/from_oneflow.html
+++ b/docs/how_to/compile_models/from_oneflow.html
@@ -449,14 +449,12 @@ Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdo
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading: &quot;https://oneflow-public.oss-cn-beijing.aliyuncs.com/model_zoo/flowvision/classification/ResNet/resnet18.zip&quot; to /workspace/.oneflow/flowvision_cache/resnet18.zip
 
   0%|          | 0.00/41.5M [00:00&lt;?, ?B/s]
- 15%|#5        | 6.33M/41.5M [00:00&lt;00:00, 40.0MB/s]
- 24%|##4       | 10.1M/41.5M [00:00&lt;00:00, 35.3MB/s]
- 35%|###4      | 14.3M/41.5M [00:00&lt;00:00, 34.0MB/s]
- 42%|####2     | 17.5M/41.5M [00:00&lt;00:00, 33.4MB/s]
- 58%|#####7    | 24.0M/41.5M [00:00&lt;00:00, 35.5MB/s]
- 78%|#######7  | 32.3M/41.5M [00:00&lt;00:00, 49.0MB/s]
- 96%|#########6| 40.0M/41.5M [00:00&lt;00:00, 53.3MB/s]
-100%|##########| 41.5M/41.5M [00:00&lt;00:00, 45.6MB/s]
+ 19%|#9        | 7.99M/41.5M [00:00&lt;00:00, 81.9MB/s]
+ 39%|###8      | 16.0M/41.5M [00:00&lt;00:00, 63.0MB/s]
+ 58%|#####7    | 24.0M/41.5M [00:00&lt;00:00, 58.1MB/s]
+ 77%|#######7  | 32.0M/41.5M [00:00&lt;00:00, 58.8MB/s]
+ 96%|#########6| 40.0M/41.5M [00:00&lt;00:00, 62.0MB/s]
+100%|##########| 41.5M/41.5M [00:00&lt;00:00, 63.7MB/s]
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/compile_models/from_pytorch.html b/docs/how_to/compile_models/from_pytorch.html
index 07e2577e21..c779f9dde2 100644
--- a/docs/how_to/compile_models/from_pytorch.html
+++ b/docs/how_to/compile_models/from_pytorch.html
@@ -432,10 +432,13 @@ be unstable.</p>
 Downloading: &quot;https://download.pytorch.org/models/resnet18-f37072fd.pth&quot; to /workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
 
   0%|          | 0.00/44.7M [00:00&lt;?, ?B/s]
- 32%|###1      | 14.2M/44.7M [00:00&lt;00:00, 148MB/s]
- 63%|######3   | 28.3M/44.7M [00:00&lt;00:00, 115MB/s]
- 89%|########9 | 39.8M/44.7M [00:00&lt;00:00, 109MB/s]
-100%|##########| 44.7M/44.7M [00:00&lt;00:00, 111MB/s]
+ 18%|#7        | 7.99M/44.7M [00:00&lt;00:00, 64.5MB/s]
+ 36%|###5      | 16.0M/44.7M [00:00&lt;00:00, 62.6MB/s]
+ 54%|#####3    | 24.0M/44.7M [00:00&lt;00:00, 67.8MB/s]
+ 68%|######8   | 30.5M/44.7M [00:00&lt;00:00, 60.0MB/s]
+ 81%|########1 | 36.3M/44.7M [00:00&lt;00:00, 52.9MB/s]
+ 93%|#########2| 41.5M/44.7M [00:00&lt;00:00, 49.3MB/s]
+100%|##########| 44.7M/44.7M [00:00&lt;00:00, 58.4MB/s]
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/compile_models/from_tensorflow.html b/docs/how_to/compile_models/from_tensorflow.html
index 390cf1295d..67eea6b5bf 100644
--- a/docs/how_to/compile_models/from_tensorflow.html
+++ b/docs/how_to/compile_models/from_tensorflow.html
@@ -649,7 +649,7 @@ banana (score = 0.00022)
 desk (score = 0.00019)
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  20.542 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  20.509 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-tensorflow-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7f1d3d1b878694c201c614c807cdebc8/from_tensorflow.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_tensorflow.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/sg_execution_times.html b/docs/how_to/compile_models/sg_execution_times.html
index 05cd325930..0fb8a0dc63 100644
--- a/docs/how_to/compile_models/sg_execution_times.html
+++ b/docs/how_to/compile_models/sg_execution_times.html
@@ -340,7 +340,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-compile-models-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>06:18.591</strong> total execution time for <strong>how_to_compile_models</strong> files:</p>
+<p><strong>06:17.313</strong> total execution time for <strong>how_to_compile_models</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 81%" />
@@ -349,43 +349,43 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_tensorflow.html#sphx-glr-how-to-compile-models-from-tensorflow-py"><span class="std std-ref">Compile Tensorflow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tensorflow.py</span></code>)</p></td>
-<td><p>01:20.542</p></td>
+<td><p>01:20.509</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_darknet.html#sphx-glr-how-to-compile-models-from-darknet-py"><span class="std std-ref">Compile YOLO-V2 and YOLO-V3 in DarkNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_darknet.py</span></code>)</p></td>
-<td><p>01:16.919</p></td>
+<td><p>01:17.012</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_paddle.html#sphx-glr-how-to-compile-models-from-paddle-py"><span class="std std-ref">Compile PaddlePaddle Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_paddle.py</span></code>)</p></td>
-<td><p>00:51.878</p></td>
+<td><p>00:51.153</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_oneflow.html#sphx-glr-how-to-compile-models-from-oneflow-py"><span class="std std-ref">Compile OneFlow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_oneflow.py</span></code>)</p></td>
-<td><p>00:34.968</p></td>
+<td><p>00:34.958</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="from_coreml.html#sphx-glr-how-to-compile-models-from-coreml-py"><span class="std std-ref">Compile CoreML Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_coreml.py</span></code>)</p></td>
-<td><p>00:30.560</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="from_mxnet.html#sphx-glr-how-to-compile-models-from-mxnet-py"><span class="std std-ref">Compile MXNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_mxnet.py</span></code>)</p></td>
+<td><p>00:30.365</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="from_mxnet.html#sphx-glr-how-to-compile-models-from-mxnet-py"><span class="std std-ref">Compile MXNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_mxnet.py</span></code>)</p></td>
-<td><p>00:29.497</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="from_coreml.html#sphx-glr-how-to-compile-models-from-coreml-py"><span class="std std-ref">Compile CoreML Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_coreml.py</span></code>)</p></td>
+<td><p>00:29.779</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_tflite.html#sphx-glr-how-to-compile-models-from-tflite-py"><span class="std std-ref">Compile TFLite Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tflite.py</span></code>)</p></td>
-<td><p>00:26.858</p></td>
+<td><p>00:26.352</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_pytorch.html#sphx-glr-how-to-compile-models-from-pytorch-py"><span class="std std-ref">Compile PyTorch Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_pytorch.py</span></code>)</p></td>
-<td><p>00:24.030</p></td>
+<td><p>00:24.552</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_keras.html#sphx-glr-how-to-compile-models-from-keras-py"><span class="std std-ref">Compile Keras Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_keras.py</span></code>)</p></td>
-<td><p>00:20.703</p></td>
+<td><p>00:20.019</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_onnx.html#sphx-glr-how-to-compile-models-from-onnx-py"><span class="std std-ref">Compile ONNX Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_onnx.py</span></code>)</p></td>
-<td><p>00:02.637</p></td>
+<td><p>00:02.614</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/deploy_models/deploy_model_on_adreno.html b/docs/how_to/deploy_models/deploy_model_on_adreno.html
index ed3f8433ed..13faaa42ce 100644
--- a/docs/how_to/deploy_models/deploy_model_on_adreno.html
+++ b/docs/how_to/deploy_models/deploy_model_on_adreno.html
@@ -920,7 +920,7 @@ Top5 predictions:
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
- 2685.1657    2684.8620    2688.5736    2683.1320      1.4054
+ 2682.9338    2681.1330    2690.0914    2679.2744      3.6731
 </pre></div>
 </div>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-model-on-adreno-py">
diff --git a/docs/how_to/deploy_models/deploy_model_on_android.html b/docs/how_to/deploy_models/deploy_model_on_android.html
index 646c130e10..9cb4300500 100644
--- a/docs/how_to/deploy_models/deploy_model_on_android.html
+++ b/docs/how_to/deploy_models/deploy_model_on_android.html
@@ -662,7 +662,7 @@ to the remote android device.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  16.1415      16.1502      16.2217      16.0335       0.0557
+  16.4251      16.3604      17.1089      15.8652       0.4475
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/deploy_models/deploy_object_detection_pytorch.html b/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
index 843449a4d1..5caf1bdabb 100644
--- a/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
+++ b/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
@@ -454,21 +454,29 @@ be unstable.</p>
 Downloading: &quot;https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth&quot; to /workspace/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
 
   0%|          | 0.00/170M [00:00&lt;?, ?B/s]
-  8%|7         | 13.3M/170M [00:00&lt;00:01, 139MB/s]
- 16%|#5        | 26.6M/170M [00:00&lt;00:01, 100MB/s]
- 23%|##2       | 38.8M/170M [00:00&lt;00:01, 111MB/s]
- 29%|##9       | 49.9M/170M [00:00&lt;00:01, 96.6MB/s]
- 37%|###6      | 62.4M/170M [00:00&lt;00:01, 107MB/s]
- 43%|####3     | 73.0M/170M [00:00&lt;00:01, 98.5MB/s]
- 50%|####9     | 84.6M/170M [00:00&lt;00:00, 105MB/s]
- 56%|#####5    | 94.9M/170M [00:00&lt;00:00, 103MB/s]
- 62%|######1   | 105M/170M [00:01&lt;00:00, 93.7MB/s]
- 67%|######7   | 114M/170M [00:01&lt;00:00, 90.4MB/s]
- 75%|#######5  | 128M/170M [00:01&lt;00:00, 97.9MB/s]
- 81%|########  | 137M/170M [00:01&lt;00:00, 65.3MB/s]
- 89%|########9 | 152M/170M [00:01&lt;00:00, 82.4MB/s]
- 95%|#########5| 161M/170M [00:01&lt;00:00, 67.2MB/s]
-100%|##########| 170M/170M [00:02&lt;00:00, 88.5MB/s]
+  5%|4         | 7.99M/170M [00:00&lt;00:03, 49.4MB/s]
+  9%|9         | 16.0M/170M [00:00&lt;00:03, 49.6MB/s]
+ 14%|#4        | 24.0M/170M [00:00&lt;00:02, 53.1MB/s]
+ 19%|#8        | 32.0M/170M [00:00&lt;00:02, 54.5MB/s]
+ 24%|##3       | 40.0M/170M [00:00&lt;00:02, 59.5MB/s]
+ 28%|##8       | 48.0M/170M [00:00&lt;00:01, 66.0MB/s]
+ 33%|###2      | 56.0M/170M [00:00&lt;00:01, 65.4MB/s]
+ 38%|###7      | 64.0M/170M [00:01&lt;00:01, 58.5MB/s]
+ 42%|####2     | 72.0M/170M [00:01&lt;00:02, 50.4MB/s]
+ 47%|####7     | 80.0M/170M [00:01&lt;00:01, 54.5MB/s]
+ 52%|#####1    | 88.0M/170M [00:01&lt;00:01, 53.6MB/s]
+ 57%|#####6    | 96.1M/170M [00:01&lt;00:01, 60.5MB/s]
+ 61%|######1   | 104M/170M [00:01&lt;00:01, 54.3MB/s]
+ 66%|######5   | 112M/170M [00:02&lt;00:01, 55.4MB/s]
+ 71%|#######   | 120M/170M [00:02&lt;00:00, 61.4MB/s]
+ 75%|#######5  | 128M/170M [00:02&lt;00:00, 60.1MB/s]
+ 80%|########  | 136M/170M [00:02&lt;00:00, 65.8MB/s]
+ 85%|########4 | 144M/170M [00:02&lt;00:00, 59.6MB/s]
+ 88%|########8 | 150M/170M [00:02&lt;00:00, 60.7MB/s]
+ 92%|#########2| 156M/170M [00:02&lt;00:00, 60.0MB/s]
+ 96%|#########5| 162M/170M [00:02&lt;00:00, 55.9MB/s]
+ 99%|#########8| 168M/170M [00:03&lt;00:00, 55.2MB/s]
+100%|##########| 170M/170M [00:03&lt;00:00, 57.8MB/s]
 /venv/apache-tvm-py3.7/lib/python3.7/site-packages/torch/nn/functional.py:3897: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
   for i in range(dim)
 /venv/apache-tvm-py3.7/lib/python3.7/site-packages/torchvision/models/detection/anchor_utils.py:124: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the &#39;trunc&#39; function NOT &#39;floor&#39;). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode=&#39;trunc&#39;), or for actual floor division, use torch.div(a, b, rounding_mode=& [...]
@@ -566,7 +574,7 @@ torchvision rcnn models.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Get 9 valid boxes
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  33.185 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  27.164 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-object-detection-pytorch-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7795da4b258c8feff986668b95ef57ad/deploy_object_detection_pytorch.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_object_detection_pytorch.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_prequantized.html b/docs/how_to/deploy_models/deploy_prequantized.html
index 622b036fc0..dabf320906 100644
--- a/docs/how_to/deploy_models/deploy_prequantized.html
+++ b/docs/how_to/deploy_models/deploy_prequantized.html
@@ -495,8 +495,8 @@ training. Other models require a full post training calibration.</p>
 Downloading: &quot;https://download.pytorch.org/models/mobilenet_v2-b0353104.pth&quot; to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth
 
   0%|          | 0.00/13.6M [00:00&lt;?, ?B/s]
- 97%|#########6| 13.1M/13.6M [00:00&lt;00:00, 137MB/s]
-100%|##########| 13.6M/13.6M [00:00&lt;00:00, 134MB/s]
+ 59%|#####8    | 7.99M/13.6M [00:00&lt;00:00, 72.4MB/s]
+100%|##########| 13.6M/13.6M [00:00&lt;00:00, 79.0MB/s]
 </pre></div>
 </div>
 </div>
@@ -587,7 +587,7 @@ output values are identical out of 1000 outputs from mobilenet v2.</p>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  90.4488      90.3820      92.8051      90.1836       0.2949
+  90.3653      90.2098      96.1734      90.0266       0.6337
 </pre></div>
 </div>
 <div class="admonition note">
@@ -626,7 +626,7 @@ This includes support for the VNNI 8 bit dot product instruction (CascadeLake or
 <div class="section" id="deploy-a-quantized-tflite-model">
 <h2>Deploy a quantized TFLite Model<a class="headerlink" href="#deploy-a-quantized-tflite-model" title="Permalink to this headline">¶</a></h2>
 <p>TODO</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  14.137 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  12.494 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-prequantized-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/fb8217c13f4351224c6cf3aacf1a87fc/deploy_prequantized.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_prequantized.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_prequantized_tflite.html b/docs/how_to/deploy_models/deploy_prequantized_tflite.html
index a3c21685a0..b6ee9cf61c 100644
--- a/docs/how_to/deploy_models/deploy_prequantized_tflite.html
+++ b/docs/how_to/deploy_models/deploy_prequantized_tflite.html
@@ -580,7 +580,7 @@ TFLite Top-5 labels: [387 102 386 341 349]
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  120.9158     120.9112     122.3274     120.2007      0.3495
+  120.6118     120.5122     126.7901     119.8926      0.7046
 </pre></div>
 </div>
 <div class="admonition note">
@@ -608,7 +608,7 @@ network for ARM CPU</span></a>.</p></li>
 </ul>
 </div></blockquote>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  27.049 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  28.601 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-prequantized-tflite-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/56691c7a27d45da61d112276334640d3/deploy_prequantized_tflite.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_prequantized_tflite.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_quantized.html b/docs/how_to/deploy_models/deploy_quantized.html
index 01486685ec..4d6c136779 100644
--- a/docs/how_to/deploy_models/deploy_quantized.html
+++ b/docs/how_to/deploy_models/deploy_quantized.html
@@ -521,7 +521,7 @@ for calibration. But the accuracy might be impacted.</p>
   DeprecationWarning,
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  38.317 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  33.686 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-quantized-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7810ecf51bfc05f7d5e8a400ac3e815d/deploy_quantized.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_quantized.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_ssd_gluoncv.html b/docs/how_to/deploy_models/deploy_ssd_gluoncv.html
index 0d24aeea84..be8026c4af 100644
--- a/docs/how_to/deploy_models/deploy_ssd_gluoncv.html
+++ b/docs/how_to/deploy_models/deploy_ssd_gluoncv.html
@@ -463,24 +463,22 @@ to your device.</p>
 Downloading /workspace/.mxnet/models/ssd_512_resnet50_v1_voc-9c8b225a.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/ssd_512_resnet50_v1_voc-9c8b225a.zip...
 
   0%|          | 0/132723 [00:00&lt;?, ?KB/s]
-  3%|3         | 4248/132723 [00:00&lt;00:03, 42477.00KB/s]
-  9%|8         | 11356/132723 [00:00&lt;00:02, 59295.52KB/s]
- 14%|#4        | 18901/132723 [00:00&lt;00:01, 66669.23KB/s]
- 20%|##        | 26959/132723 [00:00&lt;00:01, 72152.77KB/s]
- 26%|##6       | 34901/132723 [00:00&lt;00:01, 74768.82KB/s]
- 32%|###2      | 42529/132723 [00:00&lt;00:01, 75279.90KB/s]
- 38%|###8      | 50503/132723 [00:00&lt;00:01, 76734.48KB/s]
- 44%|####3     | 58177/132723 [00:00&lt;00:01, 71912.88KB/s]
- 50%|####9     | 66231/132723 [00:00&lt;00:00, 74489.13KB/s]
- 56%|#####5    | 74150/132723 [00:01&lt;00:00, 75890.46KB/s]
- 62%|######1   | 82100/132723 [00:01&lt;00:00, 76968.71KB/s]
- 68%|######7   | 90145/132723 [00:01&lt;00:00, 78010.19KB/s]
- 74%|#######3  | 98119/132723 [00:01&lt;00:00, 78525.55KB/s]
- 80%|#######9  | 106157/132723 [00:01&lt;00:00, 79080.03KB/s]
- 86%|########5 | 114077/132723 [00:01&lt;00:00, 79004.03KB/s]
- 92%|#########1| 122025/132723 [00:01&lt;00:00, 79145.57KB/s]
- 98%|#########7| 130053/132723 [00:01&lt;00:00, 79483.27KB/s]
-100%|##########| 132723/132723 [00:01&lt;00:00, 75634.13KB/s]
+  5%|5         | 6908/132723 [00:00&lt;00:01, 69072.49KB/s]
+ 12%|#1        | 15541/132723 [00:00&lt;00:01, 79220.89KB/s]
+ 18%|#8        | 24166/132723 [00:00&lt;00:01, 82427.11KB/s]
+ 25%|##4       | 32822/132723 [00:00&lt;00:01, 84055.67KB/s]
+ 31%|###1      | 41482/132723 [00:00&lt;00:01, 84971.77KB/s]
+ 38%|###7      | 50069/132723 [00:00&lt;00:00, 85274.69KB/s]
+ 44%|####4     | 58672/132723 [00:00&lt;00:00, 85520.03KB/s]
+ 51%|#####     | 67345/132723 [00:00&lt;00:00, 85903.41KB/s]
+ 57%|#####7    | 75952/132723 [00:00&lt;00:00, 85953.46KB/s]
+ 64%|######3   | 84594/132723 [00:01&lt;00:00, 86094.91KB/s]
+ 70%|#######   | 93204/132723 [00:01&lt;00:00, 86024.63KB/s]
+ 77%|#######6  | 101836/132723 [00:01&lt;00:00, 86111.27KB/s]
+ 83%|########3 | 110455/132723 [00:01&lt;00:00, 86132.27KB/s]
+ 90%|########9 | 119089/132723 [00:01&lt;00:00, 86189.93KB/s]
+ 96%|#########6| 127787/132723 [00:01&lt;00:00, 86425.06KB/s]
+100%|##########| 132723/132723 [00:01&lt;00:00, 85092.56KB/s]
 </pre></div>
 </div>
 <p>Create TVM runtime and do inference
@@ -519,7 +517,7 @@ Downloading /workspace/.mxnet/models/ssd_512_resnet50_v1_voc-9c8b225a.zip from h
 <span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
 </pre></div>
 </div>
-<img src="../../_images/sphx_glr_deploy_ssd_gluoncv_001.png" srcset="../../_images/sphx_glr_deploy_ssd_gluoncv_001.png" alt="deploy ssd gluoncv" class = "sphx-glr-single-img"/><p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  34.278 seconds)</p>
+<img src="../../_images/sphx_glr_deploy_ssd_gluoncv_001.png" srcset="../../_images/sphx_glr_deploy_ssd_gluoncv_001.png" alt="deploy ssd gluoncv" class = "sphx-glr-single-img"/><p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  28.157 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-ssd-gluoncv-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/cccb17d28e5e8b2e94ea8cd5ec59f6ed/deploy_ssd_gluoncv.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_ssd_gluoncv.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/sg_execution_times.html b/docs/how_to/deploy_models/sg_execution_times.html
index 53519386a8..fecae9fff0 100644
--- a/docs/how_to/deploy_models/sg_execution_times.html
+++ b/docs/how_to/deploy_models/sg_execution_times.html
@@ -340,7 +340,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-deploy-models-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>14:57.528</strong> total execution time for <strong>how_to_deploy_models</strong> files:</p>
+<p><strong>14:37.448</strong> total execution time for <strong>how_to_deploy_models</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 86%" />
@@ -349,39 +349,39 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_ssd_gluoncv.html#sphx-glr-how-to-deploy-models-deploy-ssd-gluoncv-py"><span class="std std-ref">Deploy Single Shot Multibox Detector(SSD) model</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_ssd_gluoncv.py</span></code>)</p></td>
-<td><p>03:34.278</p></td>
+<td><p>03:28.157</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_object_detection_pytorch.html#sphx-glr-how-to-deploy-models-deploy-object-detection-pytorch-py"><span class="std std-ref">Compile PyTorch Object Detection Models</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_object_detection_pytorch.py</span></code>)</p></td>
-<td><p>03:33.185</p></td>
+<td><p>03:27.164</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_prequantized_tflite.html#sphx-glr-how-to-deploy-models-deploy-prequantized-tflite-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM - Part 3 (TFLite)</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized_tflite.py</span></code>)</p></td>
-<td><p>02:27.049</p></td>
+<td><p>02:28.601</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_quantized.html#sphx-glr-how-to-deploy-models-deploy-quantized-py"><span class="std std-ref">Deploy a Quantized Model on Cuda</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_quantized.py</span></code>)</p></td>
-<td><p>01:38.317</p></td>
+<td><p>01:33.686</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_prequantized.html#sphx-glr-how-to-deploy-models-deploy-prequantized-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized.py</span></code>)</p></td>
-<td><p>01:14.137</p></td>
+<td><p>01:12.494</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_adreno.html#sphx-glr-how-to-deploy-models-deploy-model-on-adreno-py"><span class="std std-ref">Deploy the Pretrained Model on Adreno</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_adreno.py</span></code>)</p></td>
-<td><p>00:55.568</p></td>
+<td><p>00:54.877</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_model_on_android.html#sphx-glr-how-to-deploy-models-deploy-model-on-android-py"><span class="std std-ref">Deploy the Pretrained Model on Android</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_android.py</span></code>)</p></td>
-<td><p>00:40.702</p></td>
+<td><p>00:39.597</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_nano.html#sphx-glr-how-to-deploy-models-deploy-model-on-nano-py"><span class="std std-ref">Deploy the Pretrained Model on Jetson Nano</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_nano.py</span></code>)</p></td>
-<td><p>00:27.354</p></td>
+<td><p>00:26.609</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_model_on_rasp.html#sphx-glr-how-to-deploy-models-deploy-model-on-rasp-py"><span class="std std-ref">Deploy the Pretrained Model on Raspberry Pi</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_rasp.py</span></code>)</p></td>
-<td><p>00:26.932</p></td>
+<td><p>00:26.256</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_sparse.html#sphx-glr-how-to-deploy-models-deploy-sparse-py"><span class="std std-ref">Deploy a Hugging Face Pruned Model on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_sparse.py</span></code>)</p></td>
diff --git a/docs/how_to/extend_tvm/bring_your_own_datatypes.html b/docs/how_to/extend_tvm/bring_your_own_datatypes.html
index 3d42185683..32bf19ef16 100644
--- a/docs/how_to/extend_tvm/bring_your_own_datatypes.html
+++ b/docs/how_to/extend_tvm/bring_your_own_datatypes.html
@@ -619,7 +619,7 @@ In this alpha state of the Bring Your Own Datatypes framework, we have not imple
 <span class="n">module</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">params</span></a> <span class="o">=</span> <span class="n">get_mobilenet</span><span class="p">()</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip1cb584b8-1b90-4e72-b317-e4be25ccc26f from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip90cae528-ebda-4fec-944e-cf32c7f47d93 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
 </pre></div>
 </div>
 <p>It’s easy to execute MobileNet with native TVM:</p>
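(For reference, the "native TVM" execution that this tutorial line refers to is the standard Relay build-and-run path. A minimal sketch, assuming module and params are the objects returned by get_mobilenet() above, and assuming the model takes a 1x3x224x224 float32 input named "data" — the input name and shape are assumptions here, not confirmed by this diff:

    import numpy as np
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor

    # Compile the Relay module for the local CPU.
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(module, target="llvm", params=params)

    # Run one inference on random input.
    dev = tvm.cpu(0)
    runtime = graph_executor.GraphModule(lib["default"](dev))
    runtime.set_input("data", np.random.uniform(size=(1, 3, 224, 224)).astype("float32"))
    runtime.run()
    out = runtime.get_output(0).numpy()

The Bring Your Own Datatypes flow then swaps the float32 type for the custom datatype before this same build step.)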
diff --git a/docs/how_to/extend_tvm/sg_execution_times.html b/docs/how_to/extend_tvm/sg_execution_times.html
index 8aea95dc64..4d5b284a36 100644
--- a/docs/how_to/extend_tvm/sg_execution_times.html
+++ b/docs/how_to/extend_tvm/sg_execution_times.html
@@ -340,7 +340,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-extend-tvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:53.102</strong> total execution time for <strong>how_to_extend_tvm</strong> files:</p>
+<p><strong>00:53.770</strong> total execution time for <strong>how_to_extend_tvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -349,15 +349,15 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="bring_your_own_datatypes.html#sphx-glr-how-to-extend-tvm-bring-your-own-datatypes-py"><span class="std std-ref">Bring Your Own Datatypes to TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">bring_your_own_datatypes.py</span></code>)</p></td>
-<td><p>00:49.281</p></td>
+<td><p>00:49.968</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="use_pass_instrument.html#sphx-glr-how-to-extend-tvm-use-pass-instrument-py"><span class="std std-ref">How to Use TVM Pass Instrument</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_instrument.py</span></code>)</p></td>
-<td><p>00:02.719</p></td>
+<td><p>00:02.708</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="use_pass_infra.html#sphx-glr-how-to-extend-tvm-use-pass-infra-py"><span class="std std-ref">How to Use TVM Pass Infra</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_infra.py</span></code>)</p></td>
-<td><p>00:01.094</p></td>
+<td><p>00:01.088</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="low_level_custom_pass.html#sphx-glr-how-to-extend-tvm-low-level-custom-pass-py"><span class="std std-ref">Writing a Customized Pass</span></a> (<code class="docutils literal notranslate"><span class="pre">low_level_custom_pass.py</span></code>)</p></td>
diff --git a/docs/how_to/extend_tvm/use_pass_instrument.html b/docs/how_to/extend_tvm/use_pass_instrument.html
index f43f238571..232235b470 100644
--- a/docs/how_to/extend_tvm/use_pass_instrument.html
+++ b/docs/how_to/extend_tvm/use_pass_instrument.html
@@ -526,10 +526,10 @@ profile the execution time of each pass.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Printing results of timing profile...
-InferType: 18180us [18180us] (47.86%; 47.86%)
-FoldScaleAxis: 19805us [10us] (52.14%; 52.14%)
-        FoldConstant: 19795us [1760us] (52.11%; 99.95%)
-                InferType: 18035us [18035us] (47.48%; 91.11%)
+InferType: 18090us [18090us] (47.42%; 47.42%)
+FoldScaleAxis: 20062us [8us] (52.58%; 52.58%)
+        FoldConstant: 20054us [1774us] (52.56%; 99.96%)
+                InferType: 18280us [18280us] (47.91%; 91.15%)
 </pre></div>
 </div>
 </div>
@@ -551,10 +551,10 @@ Refer to following sections and <a class="reference internal" href="../../refere
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Printing results of timing profile...
-InferType: 17575us [17575us] (47.88%; 47.88%)
-FoldScaleAxis: 19128us [8us] (52.12%; 52.12%)
-        FoldConstant: 19120us [1767us] (52.09%; 99.96%)
-                InferType: 17353us [17353us] (47.28%; 90.76%)
+InferType: 17406us [17406us] (47.82%; 47.82%)
+FoldScaleAxis: 18991us [6us] (52.18%; 52.18%)
+        FoldConstant: 18985us [1772us] (52.16%; 99.97%)
+                InferType: 17213us [17213us] (47.29%; 90.67%)
 </pre></div>
 </div>
 <p>Register empty list to clear existing instruments.</p>
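(The timing profiles shown in the two hunks above are produced by TVM's PassTimingInstrument. A minimal sketch of the pattern, assuming a Relay module relay_mod is already in scope — the variable name is illustrative:

    import tvm
    from tvm import relay
    from tvm.ir.instrument import PassTimingInstrument

    timing_inst = PassTimingInstrument()
    with tvm.transform.PassContext(instruments=[timing_inst]):
        relay_mod = relay.transform.InferType()(relay_mod)
        relay_mod = relay.transform.FoldScaleAxis()(relay_mod)
        # render() must be called while the PassContext is still active.
        profiles = timing_inst.render()
    print("Printing results of timing profile...")
    print(profiles)

    # Registering an empty list clears previously installed instruments.
    tvm.transform.PassContext.current().override_instruments([])

Because FoldScaleAxis internally invokes FoldConstant, which in turn re-runs InferType, the nested percentages in the profile sum the way they do above.)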
diff --git a/docs/how_to/optimize_operators/opt_conv_cuda.html b/docs/how_to/optimize_operators/opt_conv_cuda.html
index 90b935c097..2d4aeeba44 100644
--- a/docs/how_to/optimize_operators/opt_conv_cuda.html
+++ b/docs/how_to/optimize_operators/opt_conv_cuda.html
@@ -575,7 +575,7 @@ latency of convolution.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Convolution: </span><span class="si">%f</span><span class="s2"> ms&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">b</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span> <span class="o">*</span> <span cl [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Convolution: 33.708736 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Convolution: 52.678657 ms
 </pre></div>
 </div>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-optimize-operators-opt-conv-cuda-py">
diff --git a/docs/how_to/optimize_operators/opt_conv_tensorcore.html b/docs/how_to/optimize_operators/opt_conv_tensorcore.html
index c15126d86f..b1936487d3 100644
--- a/docs/how_to/optimize_operators/opt_conv_tensorcore.html
+++ b/docs/how_to/optimize_operators/opt_conv_tensorcore.html
@@ -915,7 +915,7 @@ be able to run on our build server</p>
     <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;conv2d with tensor core: </span><span class="si">%f</span><span class="s2"> ms&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span> <span class="o">* [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>conv2d with tensor core: 13.357292 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>conv2d with tensor core: 6.611427 ms
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/optimize_operators/opt_gemm.html b/docs/how_to/optimize_operators/opt_gemm.html
index dfbe1682e8..18b61e87a2 100644
--- a/docs/how_to/optimize_operators/opt_gemm.html
+++ b/docs/how_to/optimize_operators/opt_gemm.html
@@ -472,8 +472,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Baseline: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.019340
-Baseline: 3.366329
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.018607
+Baseline: 3.388530
 </pre></div>
 </div>
 <p>In TVM, we can always inspect lower level IR to debug or optimize our schedule.
@@ -532,7 +532,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt1: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt1: 0.306912
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt1: 0.295865
 </pre></div>
 </div>
 <p>Here is the generated IR after blocking.</p>
@@ -598,7 +598,7 @@ vastly.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt2: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt2: 0.341984
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt2: 0.337846
 </pre></div>
 </div>
 <p>Here is the generated IR after vectorization.</p>
@@ -658,7 +658,7 @@ the access pattern for A matrix is more cache friendly.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt3: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt3: 0.120675
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt3: 0.116724
 </pre></div>
 </div>
 <p>Here is the generated IR after loop permutation.</p>
@@ -740,7 +740,7 @@ flattening.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt4: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt4: 0.109358
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt4: 0.109615
 </pre></div>
 </div>
 <p>Here is the generated IR after array packing.</p>
@@ -825,7 +825,7 @@ write to C when all the block results are ready.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt5: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt5: 0.111938
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt5: 0.111480
 </pre></div>
 </div>
 <p>Here is the generated IR after blocking.</p>
@@ -914,7 +914,7 @@ write to C when all the block results are ready.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt6: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">opt6_time</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt6: 0.147480
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt6: 0.146769
 </pre></div>
 </div>
 <p>Here is the generated IR after parallelization.</p>
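(The Opt1–Opt6 numbers in this file track the opt_gemm tutorial's successive schedule rewrites. The blocking step works because one 32x32 float32 tile occupies 32 * 32 * 4 B = 4 KB, comfortably within a 32 KB L1 data cache. A minimal sketch combining the blocking, vectorization, and parallelization steps in TVM's tensor expression API — matrix sizes and split factors are illustrative:

    import tvm
    from tvm import te

    M = N = K = 1024
    bn = 32  # block size: one 32x32 float32 tile is 4 KB

    A = te.placeholder((M, K), name="A")
    B = te.placeholder((K, N), name="B")
    k = te.reduce_axis((0, K), name="k")
    C = te.compute((M, N), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")

    s = te.create_schedule(C.op)
    # Block the spatial loops, split the reduction, and hoist the outer loops.
    xo, yo, xi, yi = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
    ko, ki = s[C].split(s[C].op.reduce_axis[0], factor=4)
    s[C].reorder(xo, yo, ko, ki, xi, yi)
    s[C].vectorize(yi)  # vectorize the innermost spatial loop
    s[C].parallel(xo)   # parallelize across blocks on the outermost loop

    func = tvm.build(s, [A, B, C], target="llvm")
    print(tvm.lower(s, [A, B, C], simple_mode=True))  # inspect the lowered IR

The tvm.lower call at the end is how the "generated IR after ..." listings referenced throughout this file are obtained.)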
diff --git a/docs/how_to/optimize_operators/sg_execution_times.html b/docs/how_to/optimize_operators/sg_execution_times.html
index f777d59c41..c808e05c6a 100644
--- a/docs/how_to/optimize_operators/sg_execution_times.html
+++ b/docs/how_to/optimize_operators/sg_execution_times.html
@@ -340,7 +340,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-optimize-operators-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:35.329</strong> total execution time for <strong>how_to_optimize_operators</strong> files:</p>
+<p><strong>00:34.700</strong> total execution time for <strong>how_to_optimize_operators</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -349,15 +349,15 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="opt_gemm.html#sphx-glr-how-to-optimize-operators-opt-gemm-py"><span class="std std-ref">How to optimize GEMM on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_gemm.py</span></code>)</p></td>
-<td><p>00:32.539</p></td>
+<td><p>00:32.211</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="opt_conv_tensorcore.html#sphx-glr-how-to-optimize-operators-opt-conv-tensorcore-py"><span class="std std-ref">How to optimize convolution using TensorCores</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_tensorcore.py</span></code>)</p></td>
-<td><p>00:01.629</p></td>
+<td><p>00:01.432</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="opt_conv_cuda.html#sphx-glr-how-to-optimize-operators-opt-conv-cuda-py"><span class="std std-ref">How to optimize convolution on GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_cuda.py</span></code>)</p></td>
-<td><p>00:01.162</p></td>
+<td><p>00:01.056</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/tune_with_autoscheduler/sg_execution_times.html b/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
index 77234708c3..b3458f6a6e 100644
--- a/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
+++ b/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
@@ -340,7 +340,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-tune-with-autoscheduler-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>09:27.615</strong> total execution time for <strong>how_to_tune_with_autoscheduler</strong> files:</p>
+<p><strong>09:16.639</strong> total execution time for <strong>how_to_tune_with_autoscheduler</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 85%" />
@@ -349,27 +349,27 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_conv2d_layer_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-conv2d-layer-cuda-py"><span class="std std-ref">Auto-scheduling a Convolution Layer for GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_layer_cuda.py</span></code>)</p></td>
-<td><p>05:46.501</p></td>
+<td><p>05:35.119</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_network_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-x86-py"><span class="std std-ref">Auto-scheduling a Neural Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_x86.py</span></code>)</p></td>
-<td><p>01:39.774</p></td>
+<td><p>01:39.534</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_network_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-cuda-py"><span class="std std-ref">Auto-scheduling a Neural Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_cuda.py</span></code>)</p></td>
-<td><p>01:05.461</p></td>
+<td><p>01:05.720</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_sparse_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-sparse-x86-py"><span class="std std-ref">Auto-scheduling Sparse Matrix Multiplication on CPU with Custom Sketch Rule</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_sparse_x86.py</span></code>)</p></td>
-<td><p>00:28.834</p></td>
+<td><p>00:29.292</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_network_arm.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-arm-py"><span class="std std-ref">Auto-scheduling a Neural Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_arm.py</span></code>)</p></td>
-<td><p>00:14.112</p></td>
+<td><p>00:14.013</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_network_mali.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-mali-py"><span class="std std-ref">Auto-scheduling a Neural Network for mali GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_mali.py</span></code>)</p></td>
-<td><p>00:12.933</p></td>
+<td><p>00:12.960</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html b/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
index bc97561011..fb0d4cd03f 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
@@ -504,172 +504,55 @@ cooperative fetching, unrolling and operator fusion.</p>
              bias: Buffer(bias_2: Pointer(float32), float32, [1, 512, 1, 1], []),
              compute: Buffer(compute_2: Pointer(float32), float32, [1, 512, 7, 7], [])}
   buffer_map = {data_1: data, kernel_1: kernel, bias_1: bias, compute_1: compute} {
-  attr [IterVar(blockIdx.x: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;blockIdx.x&quot;)] &quot;thread_extent&quot; = 128;
-  allocate(conv2d_nchw: Pointer(local float32), float32, [14]), storage_scope = local;
-  allocate(pad_temp.shared: Pointer(shared float32), float32, [504]), storage_scope = shared;
-  allocate(kernel.shared: Pointer(shared float32), float32, [96]), storage_scope = shared;
-  attr [IterVar(threadIdx.x: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14 {
-    conv2d_nchw_1: Buffer(conv2d_nchw, float32, [49], [], scope=&quot;local&quot;, align=16)[0] = 0f32
-    conv2d_nchw_1[7] = 0f32
+  attr [IterVar(blockIdx.x: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;blockIdx.x&quot;)] &quot;thread_extent&quot; = 56;
+  allocate(conv2d_nchw: Pointer(local float32), float32, [7]), storage_scope = local;
+  allocate(pad_temp.shared: Pointer(shared float32), float32, [72]), storage_scope = shared;
+  allocate(kernel.shared: Pointer(shared float32), float32, [1536]), storage_scope = shared;
+  attr [IterVar(threadIdx.x: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 64 {
+    conv2d_nchw_1: Buffer(conv2d_nchw, float32, [7], [], scope=&quot;local&quot;, align=16)[0] = 0f32
     conv2d_nchw_1[1] = 0f32
-    conv2d_nchw_1[8] = 0f32
     conv2d_nchw_1[2] = 0f32
-    conv2d_nchw_1[9] = 0f32
     conv2d_nchw_1[3] = 0f32
-    conv2d_nchw_1[10] = 0f32
     conv2d_nchw_1[4] = 0f32
-    conv2d_nchw_1[11] = 0f32
     conv2d_nchw_1[5] = 0f32
-    conv2d_nchw_1[12] = 0f32
     conv2d_nchw_1[6] = 0f32
-    conv2d_nchw_1[13] = 0f32
     for (rc.outer.outer: int32, 0, 64) {
       for (ry.outer.outer: int32, 0, 3) {
-        let cse_var_4: int32 = (rc.outer.outer*392)
-        let cse_var_3: int32 = (ry.outer.outer*7)
-        let cse_var_2: int32 = (rc.outer.outer*72)
-        let cse_var_1: int32 = (ry.outer.outer*3)
-         {
-          attr [IterVar(threadIdx.x_1: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1: Buffer(pad_temp.shared, float32, [504], [], scope=&quot;shared&quot;)[threadIdx.x_1] = @tir.if_then_else((((1 &lt;= (floordiv(threadIdx.x_1, 9) + ry.outer.outer)) &amp;&amp; (1 &lt;= floormod(threadIdx.x_1, 9))) &amp;&amp; (floormod(threadIdx.x_1, 9) &lt; 8)), data_3: Buffer(data_2, float32, [25088], [])[((((cse_var_4 + (floordiv(threadIdx.x_1, 9)*7)) + cse_var_3) + floormod(threadIdx.x_1, 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 14)] = @tir.if_then_else(((1 &lt;= floormod((threadIdx.x_1 + 5), 9)) &amp;&amp; (floormod((threadIdx.x_1 + 5), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 14), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 5), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 28)] = @tir.if_then_else(((1 &lt;= floormod((threadIdx.x_1 + 1), 9)) &amp;&amp; (floormod((threadIdx.x_1 + 1), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 28), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 1), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 42)] = @tir.if_then_else(((((floordiv((threadIdx.x_1 + 42), 9) + ry.outer.outer) &lt; 8) &amp;&amp; (1 &lt;= floormod((threadIdx.x_1 + 6), 9))) &amp;&amp; (floormod((threadIdx.x_1 + 6), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 42), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 6), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 56)] = @tir.if_then_else(((((1 &lt;= (floordiv(floormod((threadIdx.x_1 + 56), 63), 9) + ry.outer.outer)) &amp;&amp; ((floordiv(floormod((threadIdx.x_1 + 56), 63), 9) + ry.outer.outer) &lt; 8)) &amp;&amp; (1 &lt;= floormod((threadIdx.x_1 + 2), 9))) &amp;&amp; (floormod((threadIdx.x_1 + 2), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 56), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 2), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 70)] = @tir.if_then_else((((1 &lt;= (floordiv(floormod((threadIdx.x_1 + 7), 63), 9) + ry.outer.outer)) &amp;&amp; (1 &lt;= floormod((threadIdx.x_1 + 7), 9))) &amp;&amp; (floormod((threadIdx.x_1 + 7), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 70), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 7), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 84)] = @tir.if_then_else(((1 &lt;= floormod((threadIdx.x_1 + 3), 9)) &amp;&amp; (floormod((threadIdx.x_1 + 3), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 84), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 3), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 98)] = @tir.if_then_else(((1 &lt;= floormod((threadIdx.x_1 + 8), 9)) &amp;&amp; (floormod((threadIdx.x_1 + 8), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 98), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 8), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 112)] = @tir.if_then_else(((((floordiv(floormod((threadIdx.x_1 + 49), 63), 9) + ry.outer.outer) &lt; 8) &amp;&amp; (1 &lt;= floormod((threadIdx.x_1 + 4), 9))) &amp;&amp; (floormod((threadIdx.x_1 + 4), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 112), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 4), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 126)] = @tir.if_then_else((((1 &lt;= (floordiv(threadIdx.x_1, 9) + ry.outer.outer)) &amp;&amp; (1 &lt;= floormod(threadIdx.x_1, 9))) &amp;&amp; (floormod(threadIdx.x_1, 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv(threadIdx.x_1, 9)*7)) + cse_var_3) + floormod(threadIdx.x_1, 9)) + 90)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 140)] = @tir.if_then_else(((1 &lt;= floormod((threadIdx.x_1 + 5), 9)) &amp;&amp; (floormod((threadIdx.x_1 + 5), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 140), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 5), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 154)] = @tir.if_then_else(((1 &lt;= floormod((threadIdx.x_1 + 1), 9)) &amp;&amp; (floormod((threadIdx.x_1 + 1), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 154), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 1), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 168)] = @tir.if_then_else(((((floordiv(floormod((threadIdx.x_1 + 42), 63), 9) + ry.outer.outer) &lt; 8) &amp;&amp; (1 &lt;= floormod((threadIdx.x_1 + 6), 9))) &amp;&amp; (floormod((threadIdx.x_1 + 6), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 168), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 6), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 182)] = @tir.if_then_else(((((1 &lt;= (floordiv(floormod((threadIdx.x_1 + 56), 63), 9) + ry.outer.outer)) &amp;&amp; ((floordiv(floormod((threadIdx.x_1 + 56), 63), 9) + ry.outer.outer) &lt; 8)) &amp;&amp; (1 &lt;= floormod((threadIdx.x_1 + 2), 9))) &amp;&amp; (floormod((threadIdx.x_1 + 2), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 182), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 2), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 196)] = @tir.if_then_else((((1 &lt;= (floordiv(floormod((threadIdx.x_1 + 7), 63), 9) + ry.outer.outer)) &amp;&amp; (1 &lt;= floormod((threadIdx.x_1 + 7), 9))) &amp;&amp; (floormod((threadIdx.x_1 + 7), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 196), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 7), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 210)] = @tir.if_then_else(((1 &lt;= floormod((threadIdx.x_1 + 3), 9)) &amp;&amp; (floormod((threadIdx.x_1 + 3), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 210), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 3), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 224)] = @tir.if_then_else(((1 &lt;= floormod((threadIdx.x_1 + 8), 9)) &amp;&amp; (floormod((threadIdx.x_1 + 8), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 224), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 8), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 238)] = @tir.if_then_else(((((floordiv(floormod((threadIdx.x_1 + 49), 63), 9) + ry.outer.outer) &lt; 8) &amp;&amp; (1 &lt;= floormod((threadIdx.x_1 + 4), 9))) &amp;&amp; (floormod((threadIdx.x_1 + 4), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 238), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 4), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 252)] = @tir.if_then_else((((1 &lt;= (floordiv(threadIdx.x_1, 9) + ry.outer.outer)) &amp;&amp; (1 &lt;= floormod(threadIdx.x_1, 9))) &amp;&amp; (floormod(threadIdx.x_1, 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv(threadIdx.x_1, 9)*7)) + cse_var_3) + floormod(threadIdx.x_1, 9)) + 188)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 266)] = @tir.if_then_else(((1 &lt;= floormod((threadIdx.x_1 + 5), 9)) &amp;&amp; (floormod((threadIdx.x_1 + 5), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 266), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 5), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 280)] = @tir.if_then_else(((1 &lt;= floormod((threadIdx.x_1 + 1), 9)) &amp;&amp; (floormod((threadIdx.x_1 + 1), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 280), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 1), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 294)] = @tir.if_then_else(((((floordiv(floormod((threadIdx.x_1 + 42), 63), 9) + ry.outer.outer) &lt; 8) &amp;&amp; (1 &lt;= floormod((threadIdx.x_1 + 6), 9))) &amp;&amp; (floormod((threadIdx.x_1 + 6), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 294), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 6), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 308)] = @tir.if_then_else(((((1 &lt;= (floordiv(floormod((threadIdx.x_1 + 56), 63), 9) + ry.outer.outer)) &amp;&amp; ((floordiv(floormod((threadIdx.x_1 + 56), 63), 9) + ry.outer.outer) &lt; 8)) &amp;&amp; (1 &lt;= floormod((threadIdx.x_1 + 2), 9))) &amp;&amp; (floormod((threadIdx.x_1 + 2), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 308), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 2), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 322)] = @tir.if_then_else((((1 &lt;= (floordiv(floormod((threadIdx.x_1 + 7), 63), 9) + ry.outer.outer)) &amp;&amp; (1 &lt;= floormod((threadIdx.x_1 + 7), 9))) &amp;&amp; (floormod((threadIdx.x_1 + 7), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 322), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 7), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 336)] = @tir.if_then_else(((1 &lt;= floormod((threadIdx.x_1 + 3), 9)) &amp;&amp; (floormod((threadIdx.x_1 + 3), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 336), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 3), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 350)] = @tir.if_then_else(((1 &lt;= floormod((threadIdx.x_1 + 8), 9)) &amp;&amp; (floormod((threadIdx.x_1 + 8), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 350), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 8), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 364)] = @tir.if_then_else(((((floordiv(floormod((threadIdx.x_1 + 49), 63), 9) + ry.outer.outer) &lt; 8) &amp;&amp; (1 &lt;= floormod((threadIdx.x_1 + 4), 9))) &amp;&amp; (floormod((threadIdx.x_1 + 4), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 364), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 4), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 378)] = @tir.if_then_else((((1 &lt;= (floordiv(threadIdx.x_1, 9) + ry.outer.outer)) &amp;&amp; (1 &lt;= floormod(threadIdx.x_1, 9))) &amp;&amp; (floormod(threadIdx.x_1, 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv(threadIdx.x_1, 9)*7)) + cse_var_3) + floormod(threadIdx.x_1, 9)) + 286)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 392)] = @tir.if_then_else(((1 &lt;= floormod((threadIdx.x_1 + 5), 9)) &amp;&amp; (floormod((threadIdx.x_1 + 5), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 392), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 5), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 406)] = @tir.if_then_else(((1 &lt;= floormod((threadIdx.x_1 + 1), 9)) &amp;&amp; (floormod((threadIdx.x_1 + 1), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 406), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 1), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 420)] = @tir.if_then_else(((((floordiv(floormod((threadIdx.x_1 + 42), 63), 9) + ry.outer.outer) &lt; 8) &amp;&amp; (1 &lt;= floormod((threadIdx.x_1 + 6), 9))) &amp;&amp; (floormod((threadIdx.x_1 + 6), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 420), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 6), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 434)] = @tir.if_then_else(((((1 &lt;= (floordiv(floormod((threadIdx.x_1 + 56), 63), 9) + ry.outer.outer)) &amp;&amp; ((floordiv(floormod((threadIdx.x_1 + 56), 63), 9) + ry.outer.outer) &lt; 8)) &amp;&amp; (1 &lt;= floormod((threadIdx.x_1 + 2), 9))) &amp;&amp; (floormod((threadIdx.x_1 + 2), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 434), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 2), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 448)] = @tir.if_then_else((((1 &lt;= (floordiv(floormod((threadIdx.x_1 + 7), 63), 9) + ry.outer.outer)) &amp;&amp; (1 &lt;= floormod((threadIdx.x_1 + 7), 9))) &amp;&amp; (floormod((threadIdx.x_1 + 7), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 448), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 7), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 462)] = @tir.if_then_else(((1 &lt;= floormod((threadIdx.x_1 + 3), 9)) &amp;&amp; (floormod((threadIdx.x_1 + 3), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 462), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 3), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 476)] = @tir.if_then_else(((1 &lt;= floormod((threadIdx.x_1 + 8), 9)) &amp;&amp; (floormod((threadIdx.x_1 + 8), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 476), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 8), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          pad_temp.shared_1[(threadIdx.x_1 + 490)] = @tir.if_then_else(((((floordiv(floormod((threadIdx.x_1 + 49), 63), 9) + ry.outer.outer) &lt; 8) &amp;&amp; (1 &lt;= floormod((threadIdx.x_1 + 4), 9))) &amp;&amp; (floormod((threadIdx.x_1 + 4), 9) &lt; 8)), data_3[((((cse_var_4 + (floordiv((threadIdx.x_1 + 490), 9)*7)) + cse_var_3) + floormod((threadIdx.x_1 + 4), 9)) - 8)], 0f32, dtype=float32)
-          attr [IterVar(threadIdx.x_2: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          kernel.shared_1: Buffer(kernel.shared, float32, [96], [], scope=&quot;shared&quot;)[threadIdx.x_2] = kernel_3: Buffer(kernel_2, float32, [2359296], [])[(((((blockIdx.x*18432) + cse_var_2) + (floordiv(threadIdx.x_2, 3)*9)) + cse_var_1) + floormod(threadIdx.x_2, 3))]
-          attr [IterVar(threadIdx.x_2, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          kernel.shared_1[(threadIdx.x_2 + 14)] = kernel_3[((((((blockIdx.x*18432) + (floordiv((threadIdx.x_2 + 14), 24)*4608)) + cse_var_2) + (floordiv(floormod((threadIdx.x_2 + 14), 24), 3)*9)) + cse_var_1) + floormod((threadIdx.x_2 + 2), 3))]
-          attr [IterVar(threadIdx.x_2, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          kernel.shared_1[(threadIdx.x_2 + 28)] = kernel_3[((((((blockIdx.x*18432) + (floordiv((threadIdx.x_2 + 28), 24)*4608)) + cse_var_2) + (floordiv(floormod((threadIdx.x_2 + 4), 24), 3)*9)) + cse_var_1) + floormod((threadIdx.x_2 + 1), 3))]
-          attr [IterVar(threadIdx.x_2, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          kernel.shared_1[(threadIdx.x_2 + 42)] = kernel_3[((((((blockIdx.x*18432) + (floordiv((threadIdx.x_2 + 42), 24)*4608)) + cse_var_2) + (floormod((floordiv(threadIdx.x_2, 3) + 6), 8)*9)) + cse_var_1) + floormod(threadIdx.x_2, 3))]
-          attr [IterVar(threadIdx.x_2, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          kernel.shared_1[(threadIdx.x_2 + 56)] = kernel_3[((((((blockIdx.x*18432) + (floordiv((threadIdx.x_2 + 56), 24)*4608)) + cse_var_2) + (floordiv(floormod((threadIdx.x_2 + 8), 24), 3)*9)) + cse_var_1) + floormod((threadIdx.x_2 + 2), 3))]
-          attr [IterVar(threadIdx.x_2, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          kernel.shared_1[(threadIdx.x_2 + 70)] = kernel_3[((((((blockIdx.x*18432) + (floordiv((threadIdx.x_2 + 70), 24)*4608)) + cse_var_2) + (floordiv(floormod((threadIdx.x_2 + 22), 24), 3)*9)) + cse_var_1) + floormod((threadIdx.x_2 + 1), 3))]
-          attr [IterVar(threadIdx.x_2, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 14;
-          if @tir.likely((threadIdx.x_2 &lt; 12), dtype=bool) {
-            kernel.shared_1[(threadIdx.x_2 + 84)] = kernel_3[((((((blockIdx.x*18432) + (floordiv((threadIdx.x_2 + 84), 24)*4608)) + cse_var_2) + ((floordiv(threadIdx.x_2, 3) + 4)*9)) + cse_var_1) + floormod(threadIdx.x_2, 3))]
-          }
-          for (rc.outer.inner: int32, 0, 2) {
-            for (rc.inner: int32, 0, 4) {
-              conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9))]*kernel.shared_1[(((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3))]))
-              conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9))]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 48)]))
-              conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 1)]*kernel.shared_1[(((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3))]))
-              conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 1)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 48)]))
-              conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 2)]*kernel.shared_1[(((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3))]))
-              conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 2)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 48)]))
-              conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 3)]*kernel.shared_1[(((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3))]))
-              conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 3)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 48)]))
-              conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 4)]*kernel.shared_1[(((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3))]))
-              conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 4)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 48)]))
-              conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 5)]*kernel.shared_1[(((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3))]))
-              conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 5)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 48)]))
-              conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 6)]*kernel.shared_1[(((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3))]))
-              conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 6)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 48)]))
-              conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 1)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 1)]))
-              conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 1)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 49)]))
-              conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 2)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 1)]))
-              conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 2)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 49)]))
-              conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 3)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 1)]))
-              conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 3)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 49)]))
-              conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 4)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 1)]))
-              conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 4)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 49)]))
-              conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 5)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 1)]))
-              conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 5)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 49)]))
-              conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 6)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 1)]))
-              conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 6)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 49)]))
-              conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 7)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 1)]))
-              conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 7)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 49)]))
-              conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 2)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 2)]))
-              conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 2)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 50)]))
-              conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 3)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 2)]))
-              conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 3)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 50)]))
-              conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 4)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 2)]))
-              conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 4)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 50)]))
-              conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 5)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 2)]))
-              conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 5)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 50)]))
-              conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 6)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 2)]))
-              conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 6)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 50)]))
-              conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 7)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 2)]))
-              conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 7)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 50)]))
-              conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 8)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 2)]))
-              conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[((((rc.outer.inner*252) + (rc.inner*63)) + (floormod(threadIdx.x, 7)*9)) + 8)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*24) + (rc.outer.inner*12)) + (rc.inner*3)) + 50)]))
+        attr [IterVar(threadIdx.x_1: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 64;
+        pad_temp.shared_1: Buffer(pad_temp.shared, float32, [72], [], scope=&quot;shared&quot;)[threadIdx.x_1] = @tir.if_then_else(((((1 &lt;= (ry.outer.outer + floormod(blockIdx.x, 7))) &amp;&amp; ((ry.outer.outer + floormod(blockIdx.x, 7)) &lt; 8)) &amp;&amp; (1 &lt;= floormod(threadIdx.x_1, 9))) &amp;&amp; (floormod(threadIdx.x_1, 9) &lt; 8)), data_3: Buffer(data_2, float32, [25088], [])[((((((rc.outer.outer*392) + (floordiv(threadIdx.x_1, 9)*49)) + (ry.outer.outer*7)) + (floormod(blo [...]
+        attr [IterVar(threadIdx.x_1, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 64;
+        if @tir.likely((threadIdx.x_1 &lt; 8), dtype=bool) {
+          pad_temp.shared_1[(threadIdx.x_1 + 64)] = @tir.if_then_else((((1 &lt;= (ry.outer.outer + floormod(blockIdx.x, 7))) &amp;&amp; ((ry.outer.outer + floormod(blockIdx.x, 7)) &lt; 8)) &amp;&amp; (threadIdx.x_1 &lt; 7)), data_3[((((((rc.outer.outer*392) + (floordiv((threadIdx.x_1 + 64), 9)*49)) + (ry.outer.outer*7)) + (floormod(blockIdx.x, 7)*7)) + threadIdx.x_1) - 7)], 0f32, dtype=float32)
+        }
+        for (ax0.ax1.fused.ax2.fused.ax3.fused.outer.outer: int32, 0, 24) {
+          attr [IterVar(threadIdx.x_2: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 64;
+          kernel.shared_1: Buffer(kernel.shared, float32, [1536], [], scope=&quot;shared&quot;)[((ax0.ax1.fused.ax2.fused.ax3.fused.outer.outer*64) + threadIdx.x_2)] = kernel_3: Buffer(kernel_2, float32, [2359296], [])[((((((floordiv(blockIdx.x, 7)*294912) + (floordiv(((ax0.ax1.fused.ax2.fused.ax3.fused.outer.outer*8) + floordiv(threadIdx.x_2, 8)), 3)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod(((ax0.ax1.fused.ax2.fused.ax3.fused.outer.outer*16) + threadIdx.x_2), 24), 3)*9)) + (ry [...]
+        }
+        for (rc.outer.inner: int32, 0, 4) {
+          for (rx.outer.inner: int32, 0, 3) {
+            let cse_var_1: int32 = ((rc.outer.inner*18) + rx.outer.inner)
+             {
+              conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[cse_var_1]*kernel.shared_1[(((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner)]))
+              conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(cse_var_1 + 9)]*kernel.shared_1[((((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner) + 3)]))
+              conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(cse_var_1 + 1)]*kernel.shared_1[(((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner)]))
+              conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(cse_var_1 + 10)]*kernel.shared_1[((((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner) + 3)]))
+              conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(cse_var_1 + 2)]*kernel.shared_1[(((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner)]))
+              conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(cse_var_1 + 11)]*kernel.shared_1[((((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner) + 3)]))
+              conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(cse_var_1 + 3)]*kernel.shared_1[(((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner)]))
+              conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(cse_var_1 + 12)]*kernel.shared_1[((((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner) + 3)]))
+              conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(cse_var_1 + 4)]*kernel.shared_1[(((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner)]))
+              conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(cse_var_1 + 13)]*kernel.shared_1[((((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner) + 3)]))
+              conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(cse_var_1 + 5)]*kernel.shared_1[(((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner)]))
+              conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(cse_var_1 + 14)]*kernel.shared_1[((((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner) + 3)]))
+              conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(cse_var_1 + 6)]*kernel.shared_1[(((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner)]))
+              conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(cse_var_1 + 15)]*kernel.shared_1[((((threadIdx.x*24) + (rc.outer.inner*6)) + rx.outer.inner) + 3)]))
             }
           }
         }
       }
     }
     for (i3.inner: int32, 0, 7) {
-      compute_3: Buffer(compute_2, float32, [25088], [])[(((blockIdx.x*196) + (threadIdx.x*7)) + i3.inner)] = max((conv2d_nchw_1[i3.inner] + bias_3: Buffer(bias_2, float32, [512], [])[((blockIdx.x*4) + floordiv(threadIdx.x, 7))]), 0f32)
-      compute_3[((((blockIdx.x*196) + (threadIdx.x*7)) + i3.inner) + 98)] = max((conv2d_nchw_1[(i3.inner + 7)] + bias_3[(((blockIdx.x*4) + floordiv(threadIdx.x, 7)) + 2)]), 0f32)
+      compute_3: Buffer(compute_2, float32, [25088], [])[((((floordiv(blockIdx.x, 7)*3136) + (threadIdx.x*49)) + (floormod(blockIdx.x, 7)*7)) + i3.inner)] = max((conv2d_nchw_1[i3.inner] + bias_3: Buffer(bias_2, float32, [512], [])[((floordiv(blockIdx.x, 7)*64) + threadIdx.x)]), 0f32)
     }
   }
 }
@@ -706,7 +589,7 @@ cooperative fetching, unrolling and operator fusion.</p>
 <span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 0.274 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 0.468 ms
 </pre></div>
 </div>
 </div>
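The equivalent Python schedule printed below is built almost entirely from the te split/bind idiom. A minimal, self-contained sketch of that idiom, assuming only the public tvm.te API and illustrative names (A, B, n are not taken from the tuned schedule):

    import tvm
    from tvm import te

    n = 1024
    A = te.placeholder((n,), name="A")
    B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
    s = te.create_schedule(B.op)
    # split one loop axis into an outer/inner pair, as the dump does repeatedly
    xo, xi = s[B].split(B.op.axis[0], factor=64)
    # bind the resulting axes to GPU blocks and threads
    s[B].bind(xo, te.thread_axis("blockIdx.x"))
    s[B].bind(xi, te.thread_axis("threadIdx.x"))
    # inspect the lowered TIR, analogous to the dumps shown on this page
    print(tvm.lower(s, [A, B], simple_mode=True))

Each split(..., factor=k) call in the dump refines one loop the same way; the chain of splits followed by reorder and the thread-axis bindings is what yields the tiling and launch configuration seen above.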
@@ -737,31 +620,31 @@ conv2d_nchw_nn_o_o_o_i, conv2d_nchw_nn_o_o_i = s[conv2d_nchw].split(conv2d_nchw_
 conv2d_nchw_nn_o_o_o_o, conv2d_nchw_nn_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_nn_o_o_o_i, factor=1)
 conv2d_nchw_ff_o_i, conv2d_nchw_ff_i = s[conv2d_nchw].split(conv2d_nchw_ff, factor=1)
 conv2d_nchw_ff_o_o_i, conv2d_nchw_ff_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_i, factor=1)
-conv2d_nchw_ff_o_o_o_i, conv2d_nchw_ff_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_i, factor=2)
-conv2d_nchw_ff_o_o_o_o, conv2d_nchw_ff_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_o_i, factor=2)
+conv2d_nchw_ff_o_o_o_i, conv2d_nchw_ff_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_i, factor=64)
+conv2d_nchw_ff_o_o_o_o, conv2d_nchw_ff_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_o_i, factor=1)
 conv2d_nchw_yy_o_i, conv2d_nchw_yy_i = s[conv2d_nchw].split(conv2d_nchw_yy, factor=1)
 conv2d_nchw_yy_o_o_i, conv2d_nchw_yy_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_i, factor=1)
-conv2d_nchw_yy_o_o_o_i, conv2d_nchw_yy_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_i, factor=7)
+conv2d_nchw_yy_o_o_o_i, conv2d_nchw_yy_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_i, factor=1)
 conv2d_nchw_yy_o_o_o_o, conv2d_nchw_yy_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_o_i, factor=1)
-conv2d_nchw_xx_o_i, conv2d_nchw_xx_i = s[conv2d_nchw].split(conv2d_nchw_xx, factor=7)
-conv2d_nchw_xx_o_o_i, conv2d_nchw_xx_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_i, factor=1)
+conv2d_nchw_xx_o_i, conv2d_nchw_xx_i = s[conv2d_nchw].split(conv2d_nchw_xx, factor=1)
+conv2d_nchw_xx_o_o_i, conv2d_nchw_xx_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_i, factor=7)
 conv2d_nchw_xx_o_o_o_i, conv2d_nchw_xx_o_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_o_i, factor=1)
 conv2d_nchw_xx_o_o_o_o, conv2d_nchw_xx_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_o_o_i, factor=1)
-conv2d_nchw_rc_o_i, conv2d_nchw_rc_i = s[conv2d_nchw].split(conv2d_nchw_rc, factor=4)
-conv2d_nchw_rc_o_o, conv2d_nchw_rc_o_i = s[conv2d_nchw].split(conv2d_nchw_rc_o_i, factor=2)
+conv2d_nchw_rc_o_i, conv2d_nchw_rc_i = s[conv2d_nchw].split(conv2d_nchw_rc, factor=2)
+conv2d_nchw_rc_o_o, conv2d_nchw_rc_o_i = s[conv2d_nchw].split(conv2d_nchw_rc_o_i, factor=4)
 conv2d_nchw_ry_o_i, conv2d_nchw_ry_i = s[conv2d_nchw].split(conv2d_nchw_ry, factor=1)
 conv2d_nchw_ry_o_o, conv2d_nchw_ry_o_i = s[conv2d_nchw].split(conv2d_nchw_ry_o_i, factor=1)
-conv2d_nchw_rx_o_i, conv2d_nchw_rx_i = s[conv2d_nchw].split(conv2d_nchw_rx, factor=3)
-conv2d_nchw_rx_o_o, conv2d_nchw_rx_o_i = s[conv2d_nchw].split(conv2d_nchw_rx_o_i, factor=1)
+conv2d_nchw_rx_o_i, conv2d_nchw_rx_i = s[conv2d_nchw].split(conv2d_nchw_rx, factor=1)
+conv2d_nchw_rx_o_o, conv2d_nchw_rx_o_i = s[conv2d_nchw].split(conv2d_nchw_rx_o_i, factor=3)
 s[conv2d_nchw].reorder(conv2d_nchw_nn_o_o_o_o, conv2d_nchw_ff_o_o_o_o, conv2d_nchw_yy_o_o_o_o, conv2d_nchw_xx_o_o_o_o, conv2d_nchw_nn_o_o_o_i, conv2d_nchw_ff_o_o_o_i, conv2d_nchw_yy_o_o_o_i, conv2d_nchw_xx_o_o_o_i, conv2d_nchw_nn_o_o_i, conv2d_nchw_ff_o_o_i, conv2d_nchw_yy_o_o_i, conv2d_nchw_xx_o_o_i, conv2d_nchw_rc_o_o, conv2d_nchw_ry_o_o, conv2d_nchw_rx_o_o, conv2d_nchw_rc_o_i, conv2d_nchw_ry_o_i, conv2d_nchw_rx_o_i, conv2d_nchw_nn_o_i, conv2d_nchw_ff_o_i, conv2d_nchw_yy_o_i, conv2d_nc [...]
 compute_i0_o_i, compute_i0_i = s[compute].split(compute_i0, factor=1)
 compute_i0_o_o_i, compute_i0_o_i = s[compute].split(compute_i0_o_i, factor=1)
 compute_i0_o_o_o, compute_i0_o_o_i = s[compute].split(compute_i0_o_o_i, factor=1)
 compute_i1_o_i, compute_i1_i = s[compute].split(compute_i1, factor=1)
-compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=2)
-compute_i1_o_o_o, compute_i1_o_o_i = s[compute].split(compute_i1_o_o_i, factor=2)
+compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=64)
+compute_i1_o_o_o, compute_i1_o_o_i = s[compute].split(compute_i1_o_o_i, factor=1)
 compute_i2_o_i, compute_i2_i = s[compute].split(compute_i2, factor=1)
-compute_i2_o_o_i, compute_i2_o_i = s[compute].split(compute_i2_o_i, factor=7)
+compute_i2_o_o_i, compute_i2_o_i = s[compute].split(compute_i2_o_i, factor=1)
 compute_i2_o_o_o, compute_i2_o_o_i = s[compute].split(compute_i2_o_o_i, factor=1)
 compute_i3_o_i, compute_i3_i = s[compute].split(compute_i3, factor=7)
 compute_i3_o_o_i, compute_i3_o_i = s[compute].split(compute_i3_o_i, factor=1)
@@ -784,14 +667,14 @@ s[compute].bind(compute_i0_o_i_i1_o_i_fused_i2_o_i_fused_i3_o_i_fused, te.thread
 kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[kernel_shared].fuse(kernel_shared_ax0, kernel_shared_ax1, kernel_shared_ax2, kernel_shared_ax3)
 kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
 s[kernel_shared].vectorize(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
-kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=14)
+kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=64)
 s[kernel_shared].bind(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis(&quot;threadIdx.x&quot;))
 pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[pad_temp_shared].fuse(pad_temp_shared_ax0, pad_temp_shared_ax1, pad_temp_shared_ax2, pad_temp_shared_ax3)
 pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
 s[pad_temp_shared].vectorize(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
-pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=14)
+pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=64)
 s[pad_temp_shared].bind(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis(&quot;threadIdx.x&quot;))
-s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, &quot;auto_unroll_max_step&quot;, 64)
+s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, &quot;auto_unroll_max_step&quot;, 16)
 s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, &quot;unroll_explicit&quot;, True)
 
 CUDA source code:
@@ -809,124 +692,50 @@ CUDA source code:
   #define int64_t long long
   #define uint64_t unsigned long long
 #endif
-extern &quot;C&quot; __global__ void __launch_bounds__(14) default_function_kernel0(float* __restrict__ data, float* __restrict__ kernel, float* __restrict__ compute, float* __restrict__ bias) {
-  float conv2d_nchw[14];
-  __shared__ float pad_temp_shared[504];
-  __shared__ float kernel_shared[96];
+extern &quot;C&quot; __global__ void __launch_bounds__(64) default_function_kernel0(float* __restrict__ data, float* __restrict__ kernel, float* __restrict__ compute, float* __restrict__ bias) {
+  float conv2d_nchw[7];
+  __shared__ float pad_temp_shared[72];
+  __shared__ float kernel_shared[1536];
   conv2d_nchw[0] = 0.000000e+00f;
-  conv2d_nchw[7] = 0.000000e+00f;
   conv2d_nchw[1] = 0.000000e+00f;
-  conv2d_nchw[8] = 0.000000e+00f;
   conv2d_nchw[2] = 0.000000e+00f;
-  conv2d_nchw[9] = 0.000000e+00f;
   conv2d_nchw[3] = 0.000000e+00f;
-  conv2d_nchw[10] = 0.000000e+00f;
   conv2d_nchw[4] = 0.000000e+00f;
-  conv2d_nchw[11] = 0.000000e+00f;
   conv2d_nchw[5] = 0.000000e+00f;
-  conv2d_nchw[12] = 0.000000e+00f;
   conv2d_nchw[6] = 0.000000e+00f;
-  conv2d_nchw[13] = 0.000000e+00f;
   for (int rc_outer_outer = 0; rc_outer_outer &lt; 64; ++rc_outer_outer) {
     for (int ry_outer_outer = 0; ry_outer_outer &lt; 3; ++ry_outer_outer) {
       __syncthreads();
-      pad_temp_shared[((int)threadIdx.x)] = ((((1 &lt;= ((((int)threadIdx.x) / 9) + ry_outer_outer)) &amp;&amp; (1 &lt;= (((int)threadIdx.x) % 9))) &amp;&amp; ((((int)threadIdx.x) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + ((((int)threadIdx.x) / 9) * 7)) + (ry_outer_outer * 7)) + (((int)threadIdx.x) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 14)] = (((1 &lt;= ((((int)threadIdx.x) + 5) % 9)) &amp;&amp; (((((int)threadIdx.x) + 5) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 14) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 5) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 28)] = (((1 &lt;= ((((int)threadIdx.x) + 1) % 9)) &amp;&amp; (((((int)threadIdx.x) + 1) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 28) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 1) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 42)] = (((((((((int)threadIdx.x) + 42) / 9) + ry_outer_outer) &lt; 8) &amp;&amp; (1 &lt;= ((((int)threadIdx.x) + 6) % 9))) &amp;&amp; (((((int)threadIdx.x) + 6) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 42) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 6) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 56)] = (((((1 &lt;= ((((((int)threadIdx.x) + 56) % 63) / 9) + ry_outer_outer)) &amp;&amp; (((((((int)threadIdx.x) + 56) % 63) / 9) + ry_outer_outer) &lt; 8)) &amp;&amp; (1 &lt;= ((((int)threadIdx.x) + 2) % 9))) &amp;&amp; (((((int)threadIdx.x) + 2) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 56) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 2) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 70)] = ((((1 &lt;= (((((int)threadIdx.x) + 7) / 9) + ry_outer_outer)) &amp;&amp; (1 &lt;= ((((int)threadIdx.x) + 7) % 9))) &amp;&amp; (((((int)threadIdx.x) + 7) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 70) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 7) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 84)] = (((1 &lt;= ((((int)threadIdx.x) + 3) % 9)) &amp;&amp; (((((int)threadIdx.x) + 3) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 84) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 3) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 98)] = (((1 &lt;= ((((int)threadIdx.x) + 8) % 9)) &amp;&amp; (((((int)threadIdx.x) + 8) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 98) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 8) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 112)] = (((((((((int)threadIdx.x) + 49) / 9) + ry_outer_outer) &lt; 8) &amp;&amp; (1 &lt;= ((((int)threadIdx.x) + 4) % 9))) &amp;&amp; (((((int)threadIdx.x) + 4) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 112) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 4) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 126)] = ((((1 &lt;= ((((int)threadIdx.x) / 9) + ry_outer_outer)) &amp;&amp; (1 &lt;= (((int)threadIdx.x) % 9))) &amp;&amp; ((((int)threadIdx.x) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + ((((int)threadIdx.x) / 9) * 7)) + (ry_outer_outer * 7)) + (((int)threadIdx.x) % 9)) + 90)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 140)] = (((1 &lt;= ((((int)threadIdx.x) + 5) % 9)) &amp;&amp; (((((int)threadIdx.x) + 5) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 140) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 5) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 154)] = (((1 &lt;= ((((int)threadIdx.x) + 1) % 9)) &amp;&amp; (((((int)threadIdx.x) + 1) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 154) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 1) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 168)] = (((((((((int)threadIdx.x) + 42) / 9) + ry_outer_outer) &lt; 8) &amp;&amp; (1 &lt;= ((((int)threadIdx.x) + 6) % 9))) &amp;&amp; (((((int)threadIdx.x) + 6) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 168) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 6) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 182)] = (((((1 &lt;= ((((((int)threadIdx.x) + 56) % 63) / 9) + ry_outer_outer)) &amp;&amp; (((((((int)threadIdx.x) + 56) % 63) / 9) + ry_outer_outer) &lt; 8)) &amp;&amp; (1 &lt;= ((((int)threadIdx.x) + 2) % 9))) &amp;&amp; (((((int)threadIdx.x) + 2) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 182) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 2) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 196)] = ((((1 &lt;= (((((int)threadIdx.x) + 7) / 9) + ry_outer_outer)) &amp;&amp; (1 &lt;= ((((int)threadIdx.x) + 7) % 9))) &amp;&amp; (((((int)threadIdx.x) + 7) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 196) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 7) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 210)] = (((1 &lt;= ((((int)threadIdx.x) + 3) % 9)) &amp;&amp; (((((int)threadIdx.x) + 3) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 210) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 3) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 224)] = (((1 &lt;= ((((int)threadIdx.x) + 8) % 9)) &amp;&amp; (((((int)threadIdx.x) + 8) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 224) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 8) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 238)] = (((((((((int)threadIdx.x) + 49) / 9) + ry_outer_outer) &lt; 8) &amp;&amp; (1 &lt;= ((((int)threadIdx.x) + 4) % 9))) &amp;&amp; (((((int)threadIdx.x) + 4) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 238) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 4) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 252)] = ((((1 &lt;= ((((int)threadIdx.x) / 9) + ry_outer_outer)) &amp;&amp; (1 &lt;= (((int)threadIdx.x) % 9))) &amp;&amp; ((((int)threadIdx.x) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + ((((int)threadIdx.x) / 9) * 7)) + (ry_outer_outer * 7)) + (((int)threadIdx.x) % 9)) + 188)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 266)] = (((1 &lt;= ((((int)threadIdx.x) + 5) % 9)) &amp;&amp; (((((int)threadIdx.x) + 5) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 266) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 5) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 280)] = (((1 &lt;= ((((int)threadIdx.x) + 1) % 9)) &amp;&amp; (((((int)threadIdx.x) + 1) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 280) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 1) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 294)] = (((((((((int)threadIdx.x) + 42) / 9) + ry_outer_outer) &lt; 8) &amp;&amp; (1 &lt;= ((((int)threadIdx.x) + 6) % 9))) &amp;&amp; (((((int)threadIdx.x) + 6) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 294) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 6) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 308)] = (((((1 &lt;= ((((((int)threadIdx.x) + 56) % 63) / 9) + ry_outer_outer)) &amp;&amp; (((((((int)threadIdx.x) + 56) % 63) / 9) + ry_outer_outer) &lt; 8)) &amp;&amp; (1 &lt;= ((((int)threadIdx.x) + 2) % 9))) &amp;&amp; (((((int)threadIdx.x) + 2) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 308) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 2) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 322)] = ((((1 &lt;= (((((int)threadIdx.x) + 7) / 9) + ry_outer_outer)) &amp;&amp; (1 &lt;= ((((int)threadIdx.x) + 7) % 9))) &amp;&amp; (((((int)threadIdx.x) + 7) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 322) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 7) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 336)] = (((1 &lt;= ((((int)threadIdx.x) + 3) % 9)) &amp;&amp; (((((int)threadIdx.x) + 3) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 336) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 3) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 350)] = (((1 &lt;= ((((int)threadIdx.x) + 8) % 9)) &amp;&amp; (((((int)threadIdx.x) + 8) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 350) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 8) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 364)] = (((((((((int)threadIdx.x) + 49) / 9) + ry_outer_outer) &lt; 8) &amp;&amp; (1 &lt;= ((((int)threadIdx.x) + 4) % 9))) &amp;&amp; (((((int)threadIdx.x) + 4) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 364) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 4) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 378)] = ((((1 &lt;= ((((int)threadIdx.x) / 9) + ry_outer_outer)) &amp;&amp; (1 &lt;= (((int)threadIdx.x) % 9))) &amp;&amp; ((((int)threadIdx.x) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + ((((int)threadIdx.x) / 9) * 7)) + (ry_outer_outer * 7)) + (((int)threadIdx.x) % 9)) + 286)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 392)] = (((1 &lt;= ((((int)threadIdx.x) + 5) % 9)) &amp;&amp; (((((int)threadIdx.x) + 5) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 392) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 5) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 406)] = (((1 &lt;= ((((int)threadIdx.x) + 1) % 9)) &amp;&amp; (((((int)threadIdx.x) + 1) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 406) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 1) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 420)] = (((((((((int)threadIdx.x) + 42) / 9) + ry_outer_outer) &lt; 8) &amp;&amp; (1 &lt;= ((((int)threadIdx.x) + 6) % 9))) &amp;&amp; (((((int)threadIdx.x) + 6) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 420) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 6) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 434)] = (((((1 &lt;= ((((((int)threadIdx.x) + 56) % 63) / 9) + ry_outer_outer)) &amp;&amp; (((((((int)threadIdx.x) + 56) % 63) / 9) + ry_outer_outer) &lt; 8)) &amp;&amp; (1 &lt;= ((((int)threadIdx.x) + 2) % 9))) &amp;&amp; (((((int)threadIdx.x) + 2) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 434) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 2) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 448)] = ((((1 &lt;= (((((int)threadIdx.x) + 7) / 9) + ry_outer_outer)) &amp;&amp; (1 &lt;= ((((int)threadIdx.x) + 7) % 9))) &amp;&amp; (((((int)threadIdx.x) + 7) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 448) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 7) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 462)] = (((1 &lt;= ((((int)threadIdx.x) + 3) % 9)) &amp;&amp; (((((int)threadIdx.x) + 3) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 462) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 3) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 476)] = (((1 &lt;= ((((int)threadIdx.x) + 8) % 9)) &amp;&amp; (((((int)threadIdx.x) + 8) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 476) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 8) % 9)) - 8)] : 0.000000e+00f);
-      pad_temp_shared[(((int)threadIdx.x) + 490)] = (((((((((int)threadIdx.x) + 49) / 9) + ry_outer_outer) &lt; 8) &amp;&amp; (1 &lt;= ((((int)threadIdx.x) + 4) % 9))) &amp;&amp; (((((int)threadIdx.x) + 4) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 490) / 9) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) + 4) % 9)) - 8)] : 0.000000e+00f);
-      kernel_shared[((int)threadIdx.x)] = kernel[(((((((int)blockIdx.x) * 18432) + (rc_outer_outer * 72)) + ((((int)threadIdx.x) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3))];
-      kernel_shared[(((int)threadIdx.x) + 14)] = kernel[((((((((int)blockIdx.x) * 18432) + (((((int)threadIdx.x) + 14) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 14) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
-      kernel_shared[(((int)threadIdx.x) + 28)] = kernel[((((((((int)blockIdx.x) * 18432) + (((((int)threadIdx.x) + 28) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) + 4) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
-      kernel_shared[(((int)threadIdx.x) + 42)] = kernel[((((((((int)blockIdx.x) * 18432) + (((((int)threadIdx.x) + 42) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) / 3) + 6) &amp; 7) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3))];
-      kernel_shared[(((int)threadIdx.x) + 56)] = kernel[((((((((int)blockIdx.x) * 18432) + (((((int)threadIdx.x) + 56) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) + 8) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
-      kernel_shared[(((int)threadIdx.x) + 70)] = kernel[((((((((int)blockIdx.x) * 18432) + (((((int)threadIdx.x) + 70) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 22) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
-      if (((int)threadIdx.x) &lt; 12) {
-        kernel_shared[(((int)threadIdx.x) + 84)] = kernel[(((((((((int)blockIdx.x) * 18432) + (((((int)threadIdx.x) + 84) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((int)threadIdx.x) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 36)];
+      pad_temp_shared[((int)threadIdx.x)] = (((((1 &lt;= (ry_outer_outer + (((int)blockIdx.x) % 7))) &amp;&amp; ((ry_outer_outer + (((int)blockIdx.x) % 7)) &lt; 8)) &amp;&amp; (1 &lt;= (((int)threadIdx.x) % 9))) &amp;&amp; ((((int)threadIdx.x) % 9) &lt; 8)) ? data[((((((rc_outer_outer * 392) + ((((int)threadIdx.x) / 9) * 49)) + (ry_outer_outer * 7)) + ((((int)blockIdx.x) % 7) * 7)) + (((int)threadIdx.x) % 9)) - 8)] : 0.000000e+00f);
+      if (((int)threadIdx.x) &lt; 8) {
+        pad_temp_shared[(((int)threadIdx.x) + 64)] = ((((1 &lt;= (ry_outer_outer + (((int)blockIdx.x) % 7))) &amp;&amp; ((ry_outer_outer + (((int)blockIdx.x) % 7)) &lt; 8)) &amp;&amp; (((int)threadIdx.x) &lt; 7)) ? data[((((((rc_outer_outer * 392) + (((((int)threadIdx.x) + 64) / 9) * 49)) + (ry_outer_outer * 7)) + ((((int)blockIdx.x) % 7) * 7)) + ((int)threadIdx.x)) - 7)] : 0.000000e+00f);
+      }
+      for (int ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer = 0; ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer &lt; 24; ++ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer) {
+        kernel_shared[((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 64) + ((int)threadIdx.x))] = kernel[(((((((((int)blockIdx.x) / 7) * 294912) + ((((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 8) + (((int)threadIdx.x) &gt;&gt; 3)) / 3) * 4608)) + (rc_outer_outer * 72)) + (((((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 16) + ((int)threadIdx.x)) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer) % 3))];
       }
       __syncthreads();
-      for (int rc_outer_inner = 0; rc_outer_inner &lt; 2; ++rc_outer_inner) {
-        for (int rc_inner = 0; rc_inner &lt; 4; ++rc_inner) {
-          conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[(((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9))] * kernel_shared[((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3))]));
-          conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[(((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9))] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 48)]));
-          conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 1)] * kernel_shared[((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3))]));
-          conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 1)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 48)]));
-          conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 2)] * kernel_shared[((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3))]));
-          conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 2)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 48)]));
-          conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 3)] * kernel_shared[((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3))]));
-          conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 3)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 48)]));
-          conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 4)] * kernel_shared[((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3))]));
-          conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 4)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 48)]));
-          conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 5)] * kernel_shared[((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3))]));
-          conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 5)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 48)]));
-          conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 6)] * kernel_shared[((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3))]));
-          conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 6)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 48)]));
-          conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 1)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 1)]));
-          conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 1)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 49)]));
-          conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 2)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 1)]));
-          conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 2)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 49)]));
-          conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 3)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 1)]));
-          conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 3)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 49)]));
-          conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 4)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 1)]));
-          conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 4)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 49)]));
-          conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 5)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 1)]));
-          conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 5)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 49)]));
-          conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 6)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 1)]));
-          conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 6)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 49)]));
-          conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 7)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 1)]));
-          conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 7)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 49)]));
-          conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 2)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 2)]));
-          conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 2)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 50)]));
-          conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 3)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 2)]));
-          conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 3)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 50)]));
-          conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 4)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 2)]));
-          conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 4)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 50)]));
-          conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 5)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 2)]));
-          conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 5)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 50)]));
-          conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 6)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 2)]));
-          conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 6)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 50)]));
-          conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 7)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 2)]));
-          conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 7)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 50)]));
-          conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 8)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 2)]));
-          conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((rc_outer_inner * 252) + (rc_inner * 63)) + ((((int)threadIdx.x) % 7) * 9)) + 8)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 24) + (rc_outer_inner * 12)) + (rc_inner * 3)) + 50)]));
+      for (int rc_outer_inner = 0; rc_outer_inner &lt; 4; ++rc_outer_inner) {
+        for (int rx_outer_inner = 0; rx_outer_inner &lt; 3; ++rx_outer_inner) {
+          conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((rc_outer_inner * 18) + rx_outer_inner)] * kernel_shared[(((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner)]));
+          conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 9)] * kernel_shared[((((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner) + 3)]));
+          conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 1)] * kernel_shared[(((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner)]));
+          conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 10)] * kernel_shared[((((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner) + 3)]));
+          conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 2)] * kernel_shared[(((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner)]));
+          conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 11)] * kernel_shared[((((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner) + 3)]));
+          conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 3)] * kernel_shared[(((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner)]));
+          conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 12)] * kernel_shared[((((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner) + 3)]));
+          conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 4)] * kernel_shared[(((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner)]));
+          conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 13)] * kernel_shared[((((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner) + 3)]));
+          conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 5)] * kernel_shared[(((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner)]));
+          conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 14)] * kernel_shared[((((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner) + 3)]));
+          conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 6)] * kernel_shared[(((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner)]));
+          conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[(((rc_outer_inner * 18) + rx_outer_inner) + 15)] * kernel_shared[((((((int)threadIdx.x) * 24) + (rc_outer_inner * 6)) + rx_outer_inner) + 3)]));
         }
       }
     }
   }
   for (int i3_inner = 0; i3_inner &lt; 7; ++i3_inner) {
-    compute[(((((int)blockIdx.x) * 196) + (((int)threadIdx.x) * 7)) + i3_inner)] = max((conv2d_nchw[i3_inner] + bias[((((int)blockIdx.x) * 4) + (((int)threadIdx.x) / 7))]), 0.000000e+00f);
-    compute[((((((int)blockIdx.x) * 196) + (((int)threadIdx.x) * 7)) + i3_inner) + 98)] = max((conv2d_nchw[(i3_inner + 7)] + bias[(((((int)blockIdx.x) * 4) + (((int)threadIdx.x) / 7)) + 2)]), 0.000000e+00f);
+    compute[(((((((int)blockIdx.x) / 7) * 3136) + (((int)threadIdx.x) * 49)) + ((((int)blockIdx.x) % 7) * 7)) + i3_inner)] = max((conv2d_nchw[i3_inner] + bias[(((((int)blockIdx.x) / 7) * 64) + ((int)threadIdx.x))]), 0.000000e+00f);
   }
 }
 </pre></div>
@@ -963,7 +772,7 @@ In the example below we resume the status and do 5 more trials.</p>
 Get devices for measurement successfully!
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 5 minutes  46.501 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 5 minutes  35.119 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-conv2d-layer-cuda-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/e3e540f3b477c0c52d8eb73e674e8ffd/tune_conv2d_layer_cuda.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_conv2d_layer_cuda.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html b/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
index 647a4d174d..d7e77e6b5d 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
@@ -916,7 +916,7 @@ so we can read the log file and load the best schedules.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-   7.9208       7.9243       7.9282       7.9099       0.0079
+   7.8772       7.8747       7.8854       7.8715       0.0059
 </pre></div>
 </div>
 </div>
@@ -938,7 +938,7 @@ to learn how to use the RPC Tracker and RPC Server.
 To use the RPC Tracker in auto-scheduler, replace the runner in <code class="code docutils literal notranslate"><span class="pre">TuningOptions</span></code>
 with <a class="reference internal" href="../../reference/api/python/auto_scheduler.html#tvm.auto_scheduler.RPCRunner" title="tvm.auto_scheduler.RPCRunner"><code class="xref any py py-class docutils literal notranslate"><span class="pre">auto_scheduler.RPCRunner</span></code></a>.</p></li>
 </ol>
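A concrete sketch of that replacement, assuming placeholder tracker coordinates and a placeholder device key (none of these values come from this run):

    from tvm import auto_scheduler

    # hypothetical tracker host/port and device key -- adjust to your setup
    runner = auto_scheduler.RPCRunner(
        key="my-device-key",
        host="127.0.0.1",
        port=9190,
        repeat=3,
        min_repeat_ms=300,
    )
    tune_option = auto_scheduler.TuningOptions(
        num_measure_trials=200,
        runner=runner,  # replaces the default local runner
        measure_callbacks=[auto_scheduler.RecordToFile("network_tuning.json")],
    )

The rest of the tuning flow is unchanged; only the runner argument differs from the local-measurement setup used for the timings above.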
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  5.461 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  5.720 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-network-cuda-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/eafe360d52540634c9eea0fa89e804bd/tune_network_cuda.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_network_cuda.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_network_x86.html b/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
index e609c5affc..b570ad55b2 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
@@ -935,7 +935,7 @@ so we can read the log file and load the best schedules.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  761.4538     761.2513     762.2114     760.8987      0.5547
+  767.7132     768.2357     768.4878     766.4161      0.9229
 </pre></div>
 </div>
 </div>
@@ -957,7 +957,7 @@ to learn how to use the RPC Tracker and RPC Server.
 To use the RPC Tracker in auto-scheduler, replace the runner in <code class="code docutils literal notranslate"><span class="pre">TuningOptions</span></code>
 with <a class="reference internal" href="../../reference/api/python/auto_scheduler.html#tvm.auto_scheduler.RPCRunner" title="tvm.auto_scheduler.RPCRunner"><code class="xref any py py-class docutils literal notranslate"><span class="pre">auto_scheduler.RPCRunner</span></code></a>.</p></li>
 </ol>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  39.774 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  39.534 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-network-x86-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/e416b94ca1090b0897c0f6e0df95b911/tune_network_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_network_x86.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_sparse_x86.html b/docs/how_to/tune_with_autoscheduler/tune_sparse_x86.html
index bfa572f40c..2ec807fffc 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_sparse_x86.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_sparse_x86.html
@@ -633,79 +633,29 @@ layout transformation, parallelization, vectorization, unrolling, and operator f
              placeholder_4: Buffer(placeholder_14: Pointer(float32), float32, [128, 512], []),
              compute: Buffer(compute_2: Pointer(float32), float32, [128, 512], [])}
   buffer_map = {placeholder_5: placeholder, placeholder_6: placeholder_1, placeholder_7: placeholder_2, placeholder_8: placeholder_3, placeholder_9: placeholder_4, compute_1: compute} {
-  for (i0.outer.i1.outer.fused: int32, 0, 32) &quot;parallel&quot; {
-    allocate(compute_3: Pointer(global float32), float32, [2048]), storage_scope = global {
-      for (i.outer.inner: int32, 0, 8) {
-        for (nb_j.inner: int32, 0, 2) {
-          for (i.inner.init: int32, 0, 8) {
-            let cse_var_1: int32 = (((i.outer.inner*256) + (i.inner.init*32)) + (nb_j.inner*16))
-             {
-              compute_4: Buffer(compute_3, float32, [2048], [])[cse_var_1] = 0f32
-              compute_4[(cse_var_1 + 1)] = 0f32
-              compute_4[(cse_var_1 + 2)] = 0f32
-              compute_4[(cse_var_1 + 3)] = 0f32
-              compute_4[(cse_var_1 + 4)] = 0f32
-              compute_4[(cse_var_1 + 5)] = 0f32
-              compute_4[(cse_var_1 + 6)] = 0f32
-              compute_4[(cse_var_1 + 7)] = 0f32
-              compute_4[(cse_var_1 + 8)] = 0f32
-              compute_4[(cse_var_1 + 9)] = 0f32
-              compute_4[(cse_var_1 + 10)] = 0f32
-              compute_4[(cse_var_1 + 11)] = 0f32
-              compute_4[(cse_var_1 + 12)] = 0f32
-              compute_4[(cse_var_1 + 13)] = 0f32
-              compute_4[(cse_var_1 + 14)] = 0f32
-              compute_4[(cse_var_1 + 15)] = 0f32
-            }
+  for (i0.outer: int32, 0, 8) &quot;parallel&quot; {
+    allocate(compute_3: Pointer(global float32), float32, [256]), storage_scope = global;
+    for (i1.outer: int32, 0, 64) {
+      for (i.outer.inner: int32, 0, 2) {
+        for (i.inner.init: int32, 0, 8) {
+          for (j.init: int32, 0, 16) {
+            compute_4: Buffer(compute_3, float32, [256], [])[(((i.outer.inner*128) + (i.inner.init*16)) + j.init)] = 0f32
           }
-          for (elem_idx: int32, 0, let cse_var_2: int32 = ((floormod(i0.outer.i1.outer.fused, 16)*2) + nb_j.inner) in (placeholder_15: Buffer(placeholder_13, int32, [33], [])[(cse_var_2 + 1)] - placeholder_15[cse_var_2])) {
-            for (i.inner: int32, 0, 8) {
-              let cse_var_21: int32 = (elem_idx*16)
-              let cse_var_20: int32 = ((floormod(i0.outer.i1.outer.fused, 16)*2) + nb_j.inner)
-              let cse_var_19: int32 = (((i.outer.inner*256) + (i.inner*32)) + (nb_j.inner*16))
-              let cse_var_18: int32 = (((floordiv(i0.outer.i1.outer.fused, 16)*16384) + (i.outer.inner*2048)) + (i.inner*256))
-              let cse_var_17: int32 = (cse_var_19 + 9)
-              let cse_var_16: int32 = (cse_var_19 + 8)
-              let cse_var_15: int32 = (cse_var_19 + 7)
-              let cse_var_14: int32 = (cse_var_19 + 6)
-              let cse_var_13: int32 = (cse_var_19 + 5)
-              let cse_var_12: int32 = (cse_var_19 + 4)
-              let cse_var_11: int32 = (cse_var_19 + 3)
-              let cse_var_10: int32 = (cse_var_19 + 2)
-              let cse_var_9: int32 = (cse_var_19 + 15)
-              let cse_var_8: int32 = (cse_var_19 + 14)
-              let cse_var_7: int32 = (cse_var_19 + 13)
-              let cse_var_6: int32 = (cse_var_19 + 12)
-              let cse_var_5: int32 = (cse_var_19 + 11)
-              let cse_var_4: int32 = (cse_var_19 + 10)
-              let cse_var_3: int32 = (cse_var_19 + 1)
-               {
-                compute_4[cse_var_19] = (compute_4[cse_var_19] + (placeholder_16: Buffer(placeholder_11, float32, [78656], [])[((placeholder_15[cse_var_20]*16) + cse_var_21)]*max(placeholder_17: Buffer(placeholder_10, float32, [32768], [])[(cse_var_18 + placeholder_18: Buffer(placeholder_12, int32, [4916], [])[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                compute_4[cse_var_3] = (compute_4[cse_var_3] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 1)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                compute_4[cse_var_10] = (compute_4[cse_var_10] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 2)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                compute_4[cse_var_11] = (compute_4[cse_var_11] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 3)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                compute_4[cse_var_12] = (compute_4[cse_var_12] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 4)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                compute_4[cse_var_13] = (compute_4[cse_var_13] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 5)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                compute_4[cse_var_14] = (compute_4[cse_var_14] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 6)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                compute_4[cse_var_15] = (compute_4[cse_var_15] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 7)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                compute_4[cse_var_16] = (compute_4[cse_var_16] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 8)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                compute_4[cse_var_17] = (compute_4[cse_var_17] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 9)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                compute_4[cse_var_4] = (compute_4[cse_var_4] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 10)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                compute_4[cse_var_5] = (compute_4[cse_var_5] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 11)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                compute_4[cse_var_6] = (compute_4[cse_var_6] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 12)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                compute_4[cse_var_7] = (compute_4[cse_var_7] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 13)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                compute_4[cse_var_8] = (compute_4[cse_var_8] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 14)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-                compute_4[cse_var_9] = (compute_4[cse_var_9] + (placeholder_16[(((placeholder_15[cse_var_20]*16) + cse_var_21) + 15)]*max(placeholder_17[(cse_var_18 + placeholder_18[(placeholder_15[cse_var_20] + elem_idx)])], 0f32)))
-              }
+        }
+        for (elem_idx: int32, 0, let cse_var_1: int32 = floordiv(i1.outer, 2) in (placeholder_15: Buffer(placeholder_13, int32, [33], [])[(cse_var_1 + 1)] - placeholder_15[cse_var_1])) {
+          for (i.inner: int32, 0, 8) {
+            for (j: int32, 0, 16) {
+              let cse_var_3: int32 = floordiv(i1.outer, 2)
+              let cse_var_2: int32 = (((i.outer.inner*128) + (i.inner*16)) + j)
+              compute_4[cse_var_2] = (compute_4[cse_var_2] + (placeholder_16: Buffer(placeholder_11, float32, [78656], [])[(((placeholder_15[cse_var_3]*16) + (elem_idx*16)) + j)]*max(placeholder_17: Buffer(placeholder_10, float32, [32768], [])[((((i0.outer*4096) + (i.outer.inner*2048)) + (i.inner*256)) + placeholder_18: Buffer(placeholder_12, int32, [4916], [])[(placeholder_15[cse_var_3] + elem_idx)])], 0f32)))
             }
           }
         }
       }
-      for (i0.inner: int32, 0, 64) {
-        for (i1.inner: int32, 0, 32) {
-          let cse_var_22: int32 = ((((floordiv(i0.outer.i1.outer.fused, 16)*32768) + (i0.inner*512)) + (floormod(i0.outer.i1.outer.fused, 16)*32)) + i1.inner)
-          compute_5: Buffer(compute_2, float32, [65536], [])[cse_var_22] = max((compute_4[((i0.inner*32) + i1.inner)] + placeholder_19: Buffer(placeholder_14, float32, [65536], [])[cse_var_22]), 0f32)
-        }
+      for (i0.inner: int32, 0, 16) {
+        let cse_var_5: int32 = (i1.outer*8)
+        let cse_var_4: int32 = (((i0.outer*8192) + (i0.inner*512)) + cse_var_5)
+        compute_5: Buffer(compute_2, float32, [65536], [])[ramp(cse_var_4, 1, 8)] = max((compute_4[ramp((((i0.inner*16) + cse_var_5) - (floordiv(i1.outer, 2)*16)), 1, 8)] + placeholder_19: Buffer(placeholder_14, float32, [65536], [])[ramp(cse_var_4, 1, 8)]), broadcast(0f32, 8))
       }
     }
   }
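
For orientation, the lowered TIR above is the block-sparse dense kernel from
TVM's sparse tuning tutorial: it computes Y = relu(relu(X) @ W.T + B) with W
stored in BSR form using (16, 1) blocks. A minimal NumPy sketch of the same
computation, using illustrative names of our own rather than the generated
placeholder_* buffer names:

    import numpy as np

    def bsr_sparse_dense_relu(x, w_data, w_indices, w_indptr, b, bs_r=16):
        """Reference for the kernel above: relu(relu(x) @ W.T + b), with W
        block-sparse (BSR); w_data[e] is one (bs_r, 1) block, flattened."""
        m = x.shape[0]
        num_block_rows = w_indptr.shape[0] - 1   # 32 block rows * 16 = 512 cols
        y = np.zeros((m, num_block_rows * bs_r), dtype=x.dtype)
        xr = np.maximum(x, 0.0)                  # ReLU fused into the read of x
        for br in range(num_block_rows):
            for e in range(w_indptr[br], w_indptr[br + 1]):
                col = w_indices[e]               # dense column this block reads
                y[:, br * bs_r:(br + 1) * bs_r] += np.outer(xr[:, col], w_data[e])
        return np.maximum(y + b, 0.0)            # bias add + final ReLU

The indptr/indices/data triple plays the role of placeholder_15/placeholder_18/
placeholder_16 in the TIR; the schedule above only tiles and vectorizes these
same loops.
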
@@ -743,7 +693,7 @@ layout transformation, parallelization, vectorization, unrolling, and operator f
 <span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 1.847 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 3.041 ms
 </pre></div>
 </div>
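
The execution time reported above is measured with TVM's time_evaluator. A
self-contained sketch of the measurement pattern, with a trivial kernel
standing in for the tuned sparse operator:

    import numpy as np
    import tvm
    from tvm import te

    # Build a trivial elementwise kernel, then time it the way the tutorials do.
    n = 1 << 16
    A = te.placeholder((n,), name="A")
    B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
    s = te.create_schedule(B.op)
    func = tvm.build(s, [A, B], target="llvm")

    dev = tvm.cpu()
    a = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
    b = tvm.nd.array(np.zeros(n, dtype="float32"), dev)

    # Average over several runs to smooth out timer noise.
    evaluator = func.time_evaluator(func.entry_name, dev, number=10)
    print("Execution time: %.3f ms" % (evaluator(a, b).mean * 1e3))
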
 <div class="admonition note">
diff --git a/docs/how_to/tune_with_autotvm/sg_execution_times.html b/docs/how_to/tune_with_autotvm/sg_execution_times.html
index 748b77b827..8e71af909f 100644
--- a/docs/how_to/tune_with_autotvm/sg_execution_times.html
+++ b/docs/how_to/tune_with_autotvm/sg_execution_times.html
@@ -340,7 +340,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-tune-with-autotvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:48.521</strong> total execution time for <strong>how_to_tune_with_autotvm</strong> files:</p>
+<p><strong>00:47.871</strong> total execution time for <strong>how_to_tune_with_autotvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -349,7 +349,7 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_conv2d_cuda.html#sphx-glr-how-to-tune-with-autotvm-tune-conv2d-cuda-py"><span class="std std-ref">Tuning High Performance Convolution on NVIDIA GPUs</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_cuda.py</span></code>)</p></td>
-<td><p>00:48.487</p></td>
+<td><p>00:47.838</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_relay_x86.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-x86-py"><span class="std std-ref">Auto-tuning a Convolutional Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_x86.py</span></code>)</p></td>
diff --git a/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html b/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
index 87b2517d2f..6942ea4644 100644
--- a/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
+++ b/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
@@ -568,8 +568,7 @@ for this template</p>
 waiting for device...
 device available
 Get devices for measurement successfully!
-No: 1   GFLOPS: 6.13/6.13       result: MeasureResult(costs=(0.037739070750000006,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.1840627193450928, timestamp=1673986430.7595634)       [(&#39;tile_f&#39;, [-1, 1, 4, 16]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 8, 1]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 0)],None,1175635
-No: 2   GFLOPS: 0.00/6.13       result: Traceback (most recent call last):
+No: 1   GFLOPS: 0.00/0.00       result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 592, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 544, in _build_func_common
@@ -691,8 +690,9 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 875, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 512, 1, 1]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 256, 1]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,3514729
-No: 3   GFLOPS: 0.00/6.13       result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 2, 128, 1]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 32, 2]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 1)],None,6250690
+No: 2   GFLOPS: 1.83/1.83       result: MeasureResult(costs=(0.12647833249999998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.213264465332031, timestamp=1673995932.0615451) [(&#39;tile_f&#39;, [-1, 16, 4, 8]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 1, 2]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 0)],None,2941333
+No: 3   GFLOPS: 0.00/1.83       result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 592, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 544, in _build_func_common
@@ -814,8 +814,9 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 875, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 32, 2, 4]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 1]), (&#39;tile_rc&#39;, [-1, 2, 4]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 0)],None,2587313
-No: 4   GFLOPS: 0.00/6.13       result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 4, 4, 32]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 256, 2]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 1)],None,7036456
+No: 4   GFLOPS: 27.41/27.41     result: MeasureResult(costs=(0.008446498285714286,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.635021686553955, timestamp=1673995935.0071206)        [(&#39;tile_f&#39;, [-1, 8, 1, 4]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 16, 8]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 1)],None,5338403
+No: 5   GFLOPS: 0.00/27.41      result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 592, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 544, in _build_func_common
@@ -937,8 +938,8 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 875, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 128, 1]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 2, 4]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 1)],None,7429889
-No: 5   GFLOPS: 0.00/6.13       result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 64, 4, 2]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 16, 16]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 1)],None,6524398
+No: 6   GFLOPS: 0.00/27.41      result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 592, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 544, in _build_func_common
@@ -1060,9 +1061,8 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 875, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 256, 2]), (&#39;tile_y&#39;, [-1, 1, 7, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 1]), (&#39;tile_rc&#39;, [-1, 1, 8]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 1)],None,6484379
-No: 6   GFLOPS: 49.72/49.72     result: MeasureResult(costs=(0.004656136590909092,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.4881622791290283, timestamp=1673986438.2100587)       [(&#39;tile_f&#39;, [-1, 8, 1, 8]), (&#39;tile_y&#39;, [-1, 1, 7, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 1]), (&#39;tile_rc&#39;, [-1, 1, 8]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 1)],None,8807619
-No: 7   GFLOPS: 0.00/49.72      result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 32, 8]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 128, 2]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,3934421
+No: 7   GFLOPS: 0.00/27.41      result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 592, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 544, in _build_func_common
@@ -1184,161 +1184,151 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 875, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 128, 1, 4]), (&#39;tile_y&#39;, [-1, 1, 7, 1]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 2, 2]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 1)],None,7784147
-No: 8   GFLOPS: 0.00/49.72      result: Traceback (most recent call last):
-  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 742, in __call__
-    yield remote, remote.load_module(os.path.split(build_result.filename)[1])
-  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 706, in run_through_rpc
-    costs = time_f(*args).results
-  File &quot;/workspace/python/tvm/runtime/module.py&quot;, line 357, in evaluator
-    blob = feval(*args)
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 8, 4]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 256, 2]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 0)],None,259721
+No: 8   GFLOPS: 2.51/27.41      result: MeasureResult(costs=(0.09205633525000001,), error_no=MeasureErrorNo.NO_ERROR, all_cost=10.663654088973999, timestamp=1673995946.8162959)        [(&#39;tile_f&#39;, [-1, 16, 4, 2]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 8, 2]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,3531736
+No: 9   GFLOPS: 0.00/27.41      result: Traceback (most recent call last):
+  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 592, in __call__
+    func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
+  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 544, in _build_func_common
+    func = build(s, args, target_host=task.target_host, runtime=runtime)
+  File &quot;/workspace/python/tvm/driver/build_module.py&quot;, line 227, in build
+    input_mod = lower(inputs, args, name=name, binds=binds)
+  File &quot;/workspace/python/tvm/driver/build_module.py&quot;, line 134, in lower
+    return ffi.lower_schedule(inp, args, name, binds, simple_mode)
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 331, in tvm._ffi._cy3.core.PackedFuncBase.__call__
-  File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 262, in tvm._ffi._cy3.core.FuncCall
-  File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 251, in tvm._ffi._cy3.core.FuncCall3
+  File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 276, in tvm._ffi._cy3.core.FuncCall
   File &quot;tvm/_ffi/_cython/./base.pxi&quot;, line 181, in tvm._ffi._cy3.core.CHECK_CALL
 tvm._ffi.base.TVMError: Traceback (most recent call last):
-  4: TVMFuncCall
+  24: TVMFuncCall
         at ../src/runtime/c_runtime_api.cc:477
-  3: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+  23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
         at ../include/tvm/runtime/packed_func.h:1217
-  2: tvm::runtime::RPCWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
-        at ../src/runtime/rpc/rpc_module.cc:129
-  1: tvm::runtime::RPCClientSession::CallFunc(void*, TVMValue const*, int const*, int, std::function&lt;void (tvm::runtime::TVMArgs)&gt; const&amp;)
-        at ../src/runtime/rpc/rpc_endpoint.cc:1012
-  0: tvm::runtime::RPCEndpoint::CallFunc(void*, TVMValue const*, int const*, int, std::function&lt;void (tvm::runtime::TVMArgs)&gt;)
-        at ../src/runtime/rpc/rpc_endpoint.cc:804
-  File &quot;../src/runtime/rpc/rpc_endpoint.cc&quot;, line 804
-TVMError:
----------------------------------------------------------------
-An error occurred during the execution of TVM.
-For more information, please see: https://tvm.apache.org/docs/errors.html
----------------------------------------------------------------
-  Check failed: (code == RPCCode::kReturn) is false: code=kShutdown
-
-During handling of the above exception, another exception occurred:
-
-Traceback (most recent call last):
-  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 706, in run_through_rpc
-    costs = time_f(*args).results
-  File &quot;/usr/lib/python3.7/contextlib.py&quot;, line 130, in __exit__
-    self.gen.throw(type, value, traceback)
-  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 746, in __call__
-    remote.remove(build_result.filename)
-  File &quot;/workspace/python/tvm/rpc/client.py&quot;, line 144, in remove
-    self._remote_funcs[&quot;remove&quot;] = self.get_function(&quot;tvm.rpc.server.remove&quot;)
-  File &quot;/workspace/python/tvm/rpc/client.py&quot;, line 72, in get_function
-    return self._sess.get_function(name)
-  File &quot;/workspace/python/tvm/runtime/module.py&quot;, line 171, in get_function
-    self.handle, c_str(name), ctypes.c_int(query_imports), ctypes.byref(ret_handle)
-  File &quot;/workspace/python/tvm/_ffi/base.py&quot;, line 348, in check_call
-    raise get_last_ffi_error()
-tvm._ffi.base.TVMError: Traceback (most recent call last):
-  52: 0xffffffffffffffff
-  51: _start
-  50: __libc_start_main
-  49: _Py_UnixMain
-  48: 0x0000000000650da0
-  47: 0x0000000000650afa
-  46: _PyFunction_FastCallDict
-  45: _PyEval_EvalCodeWithName
-  44: _PyEval_EvalFrameDefault
-  43: _PyFunction_FastCallKeywords
-  42: _PyEval_EvalCodeWithName
-  41: _PyEval_EvalFrameDefault
-  40: _PyMethodDef_RawFastCallKeywords
-  39: 0x0000000000546369
-  38: _PyEval_EvalCodeWithName
-  37: _PyEval_EvalFrameDefault
-  36: _PyFunction_FastCallKeywords
-  35: _PyEval_EvalCodeWithName
-  34: _PyEval_EvalFrameDefault
-  33: _PyFunction_FastCallDict
-  32: _PyEval_EvalCodeWithName
-  31: _PyEval_EvalFrameDefault
-  30: _PyObject_FastCallDict
-  29: 0x00000000004c06e1
-  28: _PyFunction_FastCallDict
-  27: _PyEval_EvalFrameDefault
-  26: _PyMethodDescr_FastCallKeywords
-  25: 0x00000000005dcb58
-  24: 0x00000000005dc83f
-  23: 0x00000000004ba127
-  22: _PyEval_EvalFrameDefault
-  21: _PyFunction_FastCallKeywords
-  20: _PyEval_EvalFrameDefault
-  19: _PyFunction_FastCallKeywords
-  18: _PyEval_EvalFrameDefault
-  17: _PyFunction_FastCallKeywords
-  16: _PyEval_EvalCodeWithName
-  15: _PyEval_EvalFrameDefault
-  14: 0x0000000000537c30
-  13: _PyObject_FastCallKeywords
-  12: 0x00007fd1dce0efa2
-  11: _ctypes_callproc
-  10: ffi_call
-  9: ffi_call_unix64
-  8: TVMModGetFunction
-        at ../src/runtime/c_runtime_api.cc:408
-  7: tvm::runtime::ModuleNode::GetFunction(std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;, bool)
-        at ../src/runtime/module.cc:66
-  6: tvm::runtime::RPCModuleNode::GetFunction(std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;, tvm::runtime::ObjectPtr&lt;tvm::runtime::Object&gt; const&amp;)
-        at ../src/runtime/rpc/rpc_module.cc:185
-  5: tvm::runtime::RPCClientSession::GetFunction(std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;)
-        at ../src/runtime/rpc/rpc_endpoint.cc:1007
-  4: tvm::runtime::TVMRetValue tvm::runtime::RPCEndpoint::SysCallRemote&lt;std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;&gt;(tvm::runtime::RPCCode, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;)
-        at ../src/runtime/rpc/rpc_endpoint.h:223
-  3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()&lt;int, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;&gt;(int&amp;&amp;, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;) const
+  22: Call
+        at ../include/tvm/runtime/packed_func.h:1213
+  21: operator()
+        at ../include/tvm/runtime/packed_func.h:1730
+  20: unpack_call&lt;tvm::IRModule, 5, tvm::&lt;lambda(tvm::te::Schedule, const tvm::runtime::Array&lt;tvm::runtime::ObjectRef&gt;&amp;, const tvm::runtime::String&amp;, const tvm::runtime::Map&lt;tvm::te::Tensor, tvm::tir::Buffer&gt;&amp;, bool)&gt; &gt;
+        at ../include/tvm/runtime/packed_func.h:1670
+  19: run&lt;&gt;
+        at ../include/tvm/runtime/packed_func.h:1630
+  18: run&lt;tvm::runtime::TVMMovableArgValueWithContext_&gt;
+        at ../include/tvm/runtime/packed_func.h:1630
+  17: run&lt;tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_&gt;
+        at ../include/tvm/runtime/packed_func.h:1630
+  16: run&lt;tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_&gt;
+        at ../include/tvm/runtime/packed_func.h:1630
+  15: run&lt;tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_&gt;
+        at ../include/tvm/runtime/packed_func.h:1630
+  14: run&lt;tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_&gt;
+        at ../include/tvm/runtime/packed_func.h:1645
+  13: operator()
+        at ../src/driver/driver_api.cc:395
+  12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array&lt;tvm::runtime::ObjectRef, void&gt; const&amp;, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;, std::unordered_map&lt;tvm::te::Tensor, tvm::tir::Buffer, std::hash&lt;tvm::te::Tensor&gt;, std::equal_to&lt;tvm::te::Tensor&gt;, std::allocator&lt;std::pair&lt;tvm::te::Tensor const, tvm::tir::Buffer&gt; &gt; &gt; const&amp;, tvm::GlobalVarSupply, bool)
+        at ../src/driver/driver_api.cc:381
+  11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array&lt;tvm::transform::Pass, void&gt;)
+        at ../src/driver/driver_api.cc:276
+  10: tvm::transform::Pass::operator()(tvm::IRModule) const
+        at ../src/ir/transform.cc:258
+  9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const
+        at ../src/ir/transform.cc:274
+  8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const
+        at ../src/ir/transform.cc:454
+  7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const
+        at ../src/ir/transform.cc:274
+  6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const
+        at ../src/tir/ir/transform.cc:100
+  5: tvm::runtime::TypedPackedFunc&lt;tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)&gt;::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
+        at ../include/tvm/runtime/packed_func.h:1749
+  4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher&lt;tvm::tir::PrimFunc&gt;::run&lt;tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext&gt;(tvm::runtime::PackedFunc const&amp;, tvm::tir::PrimFunc&amp;&amp;, tvm::IRModule&amp;&amp;, tvm::transform::PassContext&amp;&amp;)
+        at ../include/tvm/runtime/packed_func.h:1693
+  3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()&lt;tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext&gt;(tvm::tir::PrimFunc&amp;&amp;, tvm::IRModule&amp;&amp;, tvm::transform::PassContext&amp;&amp;) const
         at ../include/tvm/runtime/packed_func.h:1617
   2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
         at ../include/tvm/runtime/packed_func.h:1217
   1: Call
         at ../include/tvm/runtime/packed_func.h:1213
   0: operator()
-        at ../src/runtime/rpc/rpc_endpoint.cc:684
-  File &quot;../src/runtime/rpc/rpc_endpoint.cc&quot;, line 684
-TVMError:
----------------------------------------------------------------
-An error occurred during the execution of TVM.
-For more information, please see: https://tvm.apache.org/docs/errors.html
----------------------------------------------------------------
-  Check failed: (code == RPCCode::kReturn) is false: code=1
+        at ../src/runtime/c_runtime_api.cc:534
+  File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
+  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 875, in verify_pass
+    raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel
 
 Traceback (most recent call last):
-  52: 0xffffffffffffffff
-  51: _start
-  50: __libc_start_main
-  49: _Py_UnixMain
-  48: 0x0000000000650da0
-  47: 0x0000000000650afa
-  46: _PyFunction_FastCallDict
-  45: _PyEval_EvalCodeWithName
-  44: _PyEval_EvalFrameDefault
-  43: _PyFunction_FastCallKeywords
-  42: _PyEval_EvalCodeWithName
-  41: _PyEval_EvalFrameDefault
-  40: _PyMethodDef_RawFastCallKeywords
-  39: 0x0000000000546369
-  38: _PyEval_EvalCodeWithName
-  37: _PyEval_EvalFrameDefault
-  36: _PyFunction_FastCallKeywords
-  35: _PyEval_EvalCodeWithName
-  34: _PyEval_EvalFrameDefault
-  33: _PyFunction_FastCallDict
-  32: _PyEval_EvalCodeWithName
-  31: _PyEval_EvalFrameDefault
-  30: _PyObject_FastCallDict
-  29: 0x00000000004c06e1
-  28: _PyFunction_FastCallDict
-  27: _PyEval_EvalFrameDefault
-  26: _PyMethodDescr_FastCallKeywords
-  25: 0x00000000005dcb58
-  24: 0x00000000005dc83f
-  23: 0x00000000004ba127
-  22: _PyEval_EvalFrameDefault
-  21: _PyFunction_FastCallKeywords
-  20: _PyEval_EvalFrameDefault
-  19: _PyFunction_FastCall      [(&#39;tile_f&#39;, [-1, 64, 2, 4]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 4, 1]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 0)],None,2333214
-No: 9   GFLOPS: 0.00/49.72      result: Traceback (most recent call last):
+  24: TVMFuncCall
+        at ../src/runtime/c_runtime_api.cc:477
+  23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+        at ../include/tvm/runtime/packed_func.h:1217
+  22: Call
+        at ../include/tvm/runtime/packed_func.h:1213
+  21: operator()
+        at ../include/tvm/runtime/packed_func.h:1730
+  20: unpack_call&lt;tvm::IRModule, 5, tvm::&lt;lambda(tvm::te::Schedule, const tvm::runtime::Array&lt;tvm::runtime::ObjectRef&gt;&amp;, const tvm::runtime::String&amp;, const tvm::runtime::Map&lt;tvm::te::Tensor, tvm::tir::Buffer&gt;&amp;, bool)&gt; &gt;
+        at ../include/tvm/runtime/packed_func.h:1670
+  19: run&lt;&gt;
+        at ../include/tvm/runtime/packed_func.h:1630
+  18: run&lt;tvm::runtime::TVMMovableArgValueWithContext_&gt;
+        at ../include/tvm/runtime/packed_func.h:1630
+  17: run&lt;tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_&gt;
+        at ../include/tvm/runtime/packed_func.h:1630
+  16: run&lt;tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_&gt;
+        at ../include/tvm/runtime/packed_func.h:1630
+  15: run&lt;tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_&gt;
+        at ../include/tvm/runtime/packed_func.h:1630
+  14: run&lt;tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_&gt;
+        at ../include/tvm/runtime/packed_func.h:1645
+  13: operator()
+        at ../src/driver/driver_api.cc:395
+  12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array&lt;tvm::runtime::ObjectRef, void&gt; const&amp;, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;, std::unordered_map&lt;tvm::te::Tensor, tvm::tir::Buffer, std::hash&lt;tvm::te::Tensor&gt;, std::equal_to&lt;tvm::te::Tensor&gt;, std::allocator&lt;std::pair&lt;tvm::te::Tensor const, tvm::tir::Buffer&gt; &gt; &gt; const&amp;, tvm::GlobalVarSupply, bool)
+        at ../src/driver/driver_api.cc:381
+  11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array&lt;tvm::transform::Pass, void&gt;)
+        at ../src/driver/driver_api.cc:276
+  10: tvm::transform::Pass::operator()(tvm::IRModule) const
+        at ../src/ir/transform.cc:258
+  9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const
+        at ../src/ir/transform.cc:274
+  8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const
+        at ../src/ir/transform.cc:454
+  7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const
+        at ../src/ir/transform.cc:274
+  6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const
+        at ../src/tir/ir/transform.cc:100
+  5: tvm::runtime::TypedPackedFunc&lt;tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)&gt;::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
+        at ../include/tvm/runtime/packed_func.h:1749
+  4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher&lt;tvm::tir::PrimFunc&gt;::run&lt;tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext&gt;(tvm::runtime::PackedFunc const&amp;, tvm::tir::PrimFunc&amp;&amp;, tvm::IRModule&amp;&amp;, tvm::transform::PassContext&amp;&amp;)
+        at ../include/tvm/runtime/packed_func.h:1693
+  3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()&lt;tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext&gt;(tvm::tir::PrimFunc&amp;&amp;, tvm::IRModule&amp;&amp;, tvm::transform::PassContext&amp;&amp;) const
+        at ../include/tvm/runtime/packed_func.h:1617
+  2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+        at ../include/tvm/runtime/packed_func.h:1217
+  1: Call
+        at ../include/tvm/runtime/packed_func.h:1213
+  0: operator()
+        at ../src/runtime/c_runtime_api.cc:534
+  File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
+  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 875, in verify_pass
+    raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 8, 8, 4]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 4, 64]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 1)],None,7136924
+No: 10  GFLOPS: 0.00/27.41      result: Traceback (most recent call last):
+  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 142, in build
+    res = future.result()
+  File &quot;/usr/lib/python3.7/concurrent/futures/_base.py&quot;, line 435, in result
+    return self.__get_result()
+  File &quot;/usr/lib/python3.7/concurrent/futures/_base.py&quot;, line 384, in __get_result
+    raise self._exception
+  File &quot;/usr/lib/python3.7/concurrent/futures/thread.py&quot;, line 57, in run
+    result = self.fn(*self.args, **self.kwargs)
+  File &quot;/workspace/python/tvm/contrib/popen_pool.py&quot;, line 432, in &lt;lambda&gt;
+    worker = lambda *args: self._worker_run(*args)
+  File &quot;/workspace/python/tvm/contrib/popen_pool.py&quot;, line 401, in _worker_run
+    return proc.recv()
+  File &quot;/workspace/python/tvm/contrib/popen_pool.py&quot;, line 309, in recv
+    raise TimeoutError()
+TimeoutError
+
+        [(&#39;tile_f&#39;, [-1, 256, 1, 1]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 4, 2]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 1)],None,7013388
+No: 11  GFLOPS: 189.68/189.68   result: MeasureResult(costs=(0.0012204934777777779,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.4920103549957275, timestamp=1673995957.8767965)      [(&#39;tile_f&#39;, [-1, 1, 16, 4]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 4, 8]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 0)],None,2233566
+No: 12  GFLOPS: 0.00/189.68     result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 592, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 544, in _build_func_common
@@ -1460,8 +1450,8 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 875, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 8, 2, 16]), (&#39;tile_y&#39;, [-1, 1, 7, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 256, 2]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 0)],None,2196213
-No: 10  GFLOPS: 0.00/49.72      result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 16, 2, 4]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 1, 128]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 0)],None,3077472
+No: 13  GFLOPS: 0.00/189.68     result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 592, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 544, in _build_func_common
@@ -1583,9 +1573,8 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 875, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 1, 128]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 64, 2]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 1)],None,7607810
-No: 11  GFLOPS: 3.85/49.72      result: MeasureResult(costs=(0.060141681499999995,), error_no=MeasureErrorNo.NO_ERROR, all_cost=5.4033591747283936, timestamp=1673986448.3987403)       [(&#39;tile_f&#39;, [-1, 2, 2, 16]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 4, 4]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 1)],None,7240151
-No: 12  GFLOPS: 0.00/49.72      result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 32, 4, 1]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 64, 2]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 1)],None,7995484
+No: 14  GFLOPS: 0.00/189.68     result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 592, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 544, in _build_func_common
@@ -1707,10 +1696,8 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 875, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 128, 1]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 1]), (&#39;tile_rc&#39;, [-1, 1, 2]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 1)],None,9908849
-No: 13  GFLOPS: 2.32/49.72      result: MeasureResult(costs=(0.09972912575,), error_no=MeasureErrorNo.NO_ERROR, all_cost=11.010488986968994, timestamp=1673986459.577202)       [(&#39;tile_f&#39;, [-1, 8, 1, 2]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 1, 8]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 1)],None,9390758
-No: 14  GFLOPS: 63.21/63.21     result: MeasureResult(costs=(0.003662555392857143,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.0892367362976074, timestamp=1673986460.3195736)       [(&#39;tile_f&#39;, [-1, 16, 4, 2]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 16, 1]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 0)],None,1759196
-No: 15  GFLOPS: 0.00/63.21      result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 128, 2]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 1, 4]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 0)],None,69617
+No: 15  GFLOPS: 0.00/189.68     result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 592, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 544, in _build_func_common
@@ -1832,8 +1819,9 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 875, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 16, 16]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 1, 256]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 1)],None,8897642
-No: 16  GFLOPS: 0.00/63.21      result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 2, 1, 8]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 1, 512]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 1)],None,9678157
+No: 16  GFLOPS: 96.04/189.68    result: MeasureResult(costs=(0.002410495928571429,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.7908122539520264, timestamp=1673995960.8565032)       [(&#39;tile_f&#39;, [-1, 1, 32, 1]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 2, 4]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 1)],None,6075340
+No: 17  GFLOPS: 0.00/189.68     result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 592, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 544, in _build_func_common
@@ -1955,9 +1943,8 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 875, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 512, 1]), (&#39;tile_y&#39;, [-1, 1, 7, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 4, 2]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 1)],None,9338174
-No: 17  GFLOPS: 148.02/148.02   result: MeasureResult(costs=(0.001563994515625,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.1416263580322266, timestamp=1673986461.6723034)  [(&#39;tile_f&#39;, [-1, 2, 32, 4]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 1, 1]), (&#39;tile_rc&#39;, [-1, 1, 1]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,4259991
-No: 18  GFLOPS: 0.00/148.02     result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 4, 2, 64]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 16, 8]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 0)],None,2822366
+No: 18  GFLOPS: 0.00/189.68     result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 592, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 544, in _build_func_common
@@ -2079,8 +2066,8 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 875, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 64, 4, 2]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 1, 32]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 1)],None,10403438
-No: 19  GFLOPS: 0.00/148.02     result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 4, 32, 2]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 256, 1]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 0)],None,1773072
+No: 19  GFLOPS: 0.00/189.68     result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 592, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 544, in _build_func_common
@@ -2202,8 +2189,8 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 875, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 2, 8, 4]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 32, 8]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 1)],None,9601142
-No: 20  GFLOPS: 0.00/148.02     result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 2, 4, 64]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 2, 128]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 1)],None,8889308
+No: 20  GFLOPS: 0.00/189.68     result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 592, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 544, in _build_func_common
@@ -2325,7 +2312,7 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 875, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 64, 8]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 4, 32]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 1)],None,6538563
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 2, 8]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 2, 128]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 0)],None,3082123
 </pre></div>
 </div>
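
Each "No: N GFLOPS ..." line above is one measured trial from the tuner. A
condensed sketch of the loop that emits such logs, assuming `task` is the
conv2d autotvm task created earlier in this tutorial:

    from tvm import autotvm

    # Build locally, run on the local GPU, and log every trial to a file.
    measure_option = autotvm.measure_option(
        builder=autotvm.LocalBuilder(),
        runner=autotvm.LocalRunner(repeat=3, min_repeat_ms=100, timeout=4),
    )
    tuner = autotvm.tuner.XGBTuner(task)
    tuner.tune(
        n_trial=20,
        measure_option=measure_option,
        callbacks=[autotvm.callback.log_to_file("conv2d.log")],
    )

Configurations that cannot produce a valid GPU kernel are rejected by
verify_pass, which is why many trials above end in InstantiationError rather
than a crash.
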
 <p>Finally we can inspect the best config from the log file, check correctness,
@@ -2364,9 +2351,9 @@ and measure running time.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Finish loading 20 records
 
 Best config:
-[(&#39;tile_f&#39;, [-1, 2, 32, 4]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 1, 1]), (&#39;tile_rc&#39;, [-1, 1, 1]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,4259991
+[(&#39;tile_f&#39;, [-1, 1, 16, 4]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 4, 8]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 0)],None,2233566
 Finish loading 20 records
-Time cost of this operator: 0.001954
+Time cost of this operator: 0.001633
 </pre></div>
 </div>
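
The "Best config" block above is read back from the tuning log with
apply_history_best. A sketch of the inspect-and-rebuild step, assuming the
same `task` and log file name as in the tuning sketch:

    import tvm
    from tvm import autotvm

    # Query the best record for this workload from the log.
    dispatch_context = autotvm.apply_history_best("conv2d.log")
    best_config = dispatch_context.query(task.target, task.workload)
    print("Best config:", best_config)

    # Re-apply the chosen config to the schedule template and build.
    with tvm.target.Target("cuda"):
        s, arg_bufs = task.instantiate(best_config)
        func = tvm.build(s, arg_bufs)
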
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autotvm-tune-conv2d-cuda-py">
diff --git a/docs/how_to/work_with_microtvm/micro_autotune.html b/docs/how_to/work_with_microtvm/micro_autotune.html
index 6e8f0ad2fc..c6d10799bc 100644
--- a/docs/how_to/work_with_microtvm/micro_autotune.html
+++ b/docs/how_to/work_with_microtvm/micro_autotune.html
@@ -646,10 +646,10 @@ the tuned operator.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>########## Build without Autotuning ##########
 Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)
 ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------
-tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  309.6     98.712   (1, 2, 10, 10, 3)  2       1        [309.6]
-tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       3.078     0.981    (1, 6, 10, 10)     1       1        [3.078]
-tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.961     0.307    (1, 1, 10, 10, 3)  1       1        [0.961]
-Total_time                                    -                                             313.639   -        -                  -       -        -
+tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  310.7     98.738   (1, 2, 10, 10, 3)  2       1        [310.7]
+tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       3.019     0.959    (1, 6, 10, 10)     1       1        [3.019]
+tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.952     0.303    (1, 1, 10, 10, 3)  1       1        [0.952]
+Total_time                                    -                                             314.671   -        -                  -       -        -
 </pre></div>
 </div>
 </div>
@@ -701,10 +701,10 @@ Total_time                                    -
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>########## Build with Autotuning ##########
 Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)
 ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------
-tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  135.9     97.966   (1, 6, 10, 10, 1)  2       1        [135.9]
-tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.853     1.336    (1, 6, 10, 10)     1       1        [1.853]
-tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.968     0.698    (1, 1, 10, 10, 3)  1       1        [0.968]
-Total_time                                    -                                             138.721   -        -                  -       -        -
+tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  102.9     97.425   (1, 6, 10, 10, 1)  2       1        [102.9]
+tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.766     1.672    (1, 6, 10, 10)     1       1        [1.766]
+tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.953     0.903    (1, 1, 10, 10, 3)  1       1        [0.953]
+Total_time                                    -                                             105.62    -        -                  -       -        -
 </pre></div>
 </div>
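
Both tables above are printed by running the model under a debug executor
inside a microTVM session. A condensed sketch of that pattern, assuming
`project` and `lowered` come from the tutorial's earlier build steps:

    import tvm
    import tvm.micro

    with tvm.micro.Session(project.transport()) as session:
        debug_module = tvm.micro.create_local_debug_executor(
            lowered.get_graph_json(), session.get_system_lib(), session.device
        )
        debug_module.set_input(**lowered.get_params())
        debug_module.run()   # prints the Node Name / Time(us) / Time(%) table
        del debug_module
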
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-autotune-py">
diff --git a/docs/how_to/work_with_microtvm/micro_pytorch.html b/docs/how_to/work_with_microtvm/micro_pytorch.html
index ec162d3dc5..05c745d21b 100644
--- a/docs/how_to/work_with_microtvm/micro_pytorch.html
+++ b/docs/how_to/work_with_microtvm/micro_pytorch.html
@@ -453,7 +453,8 @@ download a cat image and preprocess it to use as the model input.</p>
 Downloading: &quot;https://download.pytorch.org/models/quantized/mobilenet_v2_qnnpack_37f702c5.pth&quot; to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2_qnnpack_37f702c5.pth
 
   0%|          | 0.00/3.42M [00:00&lt;?, ?B/s]
-100%|##########| 3.42M/3.42M [00:00&lt;00:00, 83.4MB/s]
+ 61%|######    | 2.09M/3.42M [00:00&lt;00:00, 20.0MB/s]
+100%|##########| 3.42M/3.42M [00:00&lt;00:00, 31.2MB/s]
 /workspace/python/tvm/relay/frontend/pytorch_utils.py:47: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
   return LooseVersion(torch_ver) &gt; ver
 /venv/apache-tvm-py3.7/lib/python3.7/site-packages/setuptools/_distutils/version.py:346: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
@@ -577,7 +578,7 @@ via the host <cite>main.cc`</cite> or if a Zephyr emulated board is selected as
 Torch top-1 id: 282, class name: tiger cat
 </pre></div>
 </div>
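
The top-1 lines above are plain argmax readouts over the logits. A small
sketch, assuming `tvm_output` (a NumPy array), `torch_output` (a torch tensor
of shape (1, 1000)), and an ImageNet label list `class_names`:

    import numpy as np
    import torch

    # Both frameworks agree when the highest-scoring class index matches.
    tvm_top1 = int(np.argmax(tvm_output))
    torch_top1 = int(torch.argmax(torch_output, dim=1))
    print("TVM   top-1 id: %d, class name: %s" % (tvm_top1, class_names[tvm_top1]))
    print("Torch top-1 id: %d, class name: %s" % (torch_top1, class_names[torch_top1]))
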
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  10.184 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  8.976 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-pytorch-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/12b9ecc04c41abaa12022061771821d1/micro_pytorch.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_pytorch.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/micro_train.html b/docs/how_to/work_with_microtvm/micro_train.html
index 20f1359270..97eb9ac35a 100644
--- a/docs/how_to/work_with_microtvm/micro_train.html
+++ b/docs/how_to/work_with_microtvm/micro_train.html
@@ -523,7 +523,7 @@ take about <strong>2 minutes</strong> to download the Stanford Cars, while COCO
 <a href="https://docs.python.org/3/library/shutil.html#shutil.move" title="shutil.move" class="sphx-glr-backref-module-shutil sphx-glr-backref-type-py-function"><span class="n">shutil</span><span class="o">.</span><span class="n">move</span></a><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;</span><span class="si">{</span><a href="https://docs.python.org/3/library/stdtypes.html#str" title="builtins.str" class="sphx-glr-backref-module-builtins sphx-glr-backref-typ [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&#39;/tmp/tmpvq_95_9v/images/random&#39;
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&#39;/tmp/tmppqdyvab_/images/random&#39;
 </pre></div>
 </div>
 </div>
@@ -583,8 +583,8 @@ objects to other stuff? We can display some examples from our datasets using <co
     <span class="n">plt</span><span class="o">.</span><span class="n">axis</span><span class="p">(</span><span class="s2">&quot;off&quot;</span><span class="p">)</span>
 </pre></div>
 </div>
-<img src="../../_images/sphx_glr_micro_train_001.png" srcset="../../_images/sphx_glr_micro_train_001.png" alt="[0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 0.0]" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/tmp/tmpvq_95_9v/images/target contains 8144 images
-/tmp/tmpvq_95_9v/images/random contains 5000 images
+<img src="../../_images/sphx_glr_micro_train_001.png" srcset="../../_images/sphx_glr_micro_train_001.png" alt="[0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0]" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/tmp/tmppqdyvab_/images/target contains 8144 images
+/tmp/tmppqdyvab_/images/random contains 5000 images
 </pre></div>
 </div>
 </div>
@@ -696,13 +696,13 @@ the time on our validation set).</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Epoch 1/3
-328/328 - 48s - loss: 0.2171 - accuracy: 0.9241 - val_loss: 0.1121 - val_accuracy: 0.9607 - 48s/epoch - 145ms/step
+328/328 - 47s - loss: 0.2152 - accuracy: 0.9245 - val_loss: 0.2124 - val_accuracy: 0.9177 - 47s/epoch - 143ms/step
 Epoch 2/3
-328/328 - 44s - loss: 0.0964 - accuracy: 0.9655 - val_loss: 0.1093 - val_accuracy: 0.9664 - 44s/epoch - 134ms/step
+328/328 - 43s - loss: 0.0930 - accuracy: 0.9663 - val_loss: 0.1170 - val_accuracy: 0.9566 - 43s/epoch - 132ms/step
 Epoch 3/3
-328/328 - 44s - loss: 0.0665 - accuracy: 0.9755 - val_loss: 0.1519 - val_accuracy: 0.9562 - 44s/epoch - 134ms/step
+328/328 - 43s - loss: 0.0655 - accuracy: 0.9754 - val_loss: 0.1277 - val_accuracy: 0.9547 - 43s/epoch - 133ms/step
 
-&lt;keras.callbacks.History object at 0x7fc25315b4d0&gt;
+&lt;keras.callbacks.History object at 0x7ff441784690&gt;
 </pre></div>
 </div>
 </div>
@@ -962,7 +962,7 @@ as intended.</p>
 <p>From here, we could modify the model to read live images from the camera; we have another
 Arduino tutorial showing how to do that <a class="reference external" href="https://github.com/guberti/tvm-arduino-demos/tree/master/examples/person_detection">on GitHub</a>. Alternatively, we could also
 <a class="reference external" href="https://tvm.apache.org/docs/how_to/work_with_microtvm/micro_autotune.html">use TVM’s autotuning capabilities</a> to dramatically improve the model’s performance.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 4 minutes  47.189 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 5 minutes  16.733 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-train-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/b52cec46baf4f78d6bcd94cbe269c8a6/micro_train.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_train.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/sg_execution_times.html b/docs/how_to/work_with_microtvm/sg_execution_times.html
index 6e6e392026..f8836d59ad 100644
--- a/docs/how_to/work_with_microtvm/sg_execution_times.html
+++ b/docs/how_to/work_with_microtvm/sg_execution_times.html
@@ -340,7 +340,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-microtvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>07:03.423</strong> total execution time for <strong>how_to_work_with_microtvm</strong> files:</p>
+<p><strong>07:31.018</strong> total execution time for <strong>how_to_work_with_microtvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -349,30 +349,30 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_train.html#sphx-glr-how-to-work-with-microtvm-micro-train-py"><span class="std std-ref">Training Vision Models for microTVM on Arduino</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_train.py</span></code>)</p></td>
-<td><p>04:47.189</p></td>
+<td><p>05:16.733</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="micro_pytorch.html#sphx-glr-how-to-work-with-microtvm-micro-pytorch-py"><span class="std std-ref">microTVM PyTorch Tutorial</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_pytorch.py</span></code>)</p></td>
-<td><p>01:10.184</p></td>
+<td><p>01:08.976</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_autotune.html#sphx-glr-how-to-work-with-microtvm-micro-autotune-py"><span class="std std-ref">Autotuning with microTVM</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_autotune.py</span></code>)</p></td>
-<td><p>00:53.168</p></td>
+<td><p>00:52.269</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="micro_aot.html#sphx-glr-how-to-work-with-microtvm-micro-aot-py"><span class="std std-ref">microTVM Host-Driven AoT</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_aot.py</span></code>)</p></td>
-<td><p>00:08.906</p></td>
+<td><p>00:09.140</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_tflite.html#sphx-glr-how-to-work-with-microtvm-micro-tflite-py"><span class="std std-ref">microTVM with TFLite Models</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_tflite.py</span></code>)</p></td>
-<td><p>00:03.976</p></td>
+<td><p>00:03.900</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="micro_ethosu.html#sphx-glr-how-to-work-with-microtvm-micro-ethosu-py"><span class="std std-ref">Running TVM on bare metal Arm(R) Cortex(R)-M55 CPU and Ethos(TM)-U55 NPU with CMSIS-NN</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_ethosu.py</span></code>)</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="micro_tvmc.html#sphx-glr-how-to-work-with-microtvm-micro-tvmc-py"><span class="std std-ref">Executing a Tiny Model with TVMC Micro</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_tvmc.py</span></code>)</p></td>
 <td><p>00:00.000</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="micro_tvmc.html#sphx-glr-how-to-work-with-microtvm-micro-tvmc-py"><span class="std std-ref">Executing a Tiny Model with TVMC Micro</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_tvmc.py</span></code>)</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="micro_ethosu.html#sphx-glr-how-to-work-with-microtvm-micro-ethosu-py"><span class="std std-ref">Running TVM on bare metal Arm(R) Cortex(R)-M55 CPU and Ethos(TM)-U55 NPU with CMSIS-NN</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_ethosu.py</span></code>)</p></td>
 <td><p>00:00.000</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
diff --git a/docs/how_to/work_with_relay/sg_execution_times.html b/docs/how_to/work_with_relay/sg_execution_times.html
index 34334ece7c..4aa465b3f3 100644
--- a/docs/how_to/work_with_relay/sg_execution_times.html
+++ b/docs/how_to/work_with_relay/sg_execution_times.html
@@ -340,7 +340,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-relay-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:46.014</strong> total execution time for <strong>how_to_work_with_relay</strong> files:</p>
+<p><strong>00:45.127</strong> total execution time for <strong>how_to_work_with_relay</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -349,15 +349,15 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="using_pipeline_executor.html#sphx-glr-how-to-work-with-relay-using-pipeline-executor-py"><span class="std std-ref">Using Pipeline Executor in Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_pipeline_executor.py</span></code>)</p></td>
-<td><p>00:33.664</p></td>
+<td><p>00:32.834</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="using_external_lib.html#sphx-glr-how-to-work-with-relay-using-external-lib-py"><span class="std std-ref">Using External Libraries in Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_external_lib.py</span></code>)</p></td>
-<td><p>00:10.785</p></td>
+<td><p>00:10.520</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="build_gcn.html#sphx-glr-how-to-work-with-relay-build-gcn-py"><span class="std std-ref">Building a Graph Convolutional Network</span></a> (<code class="docutils literal notranslate"><span class="pre">build_gcn.py</span></code>)</p></td>
-<td><p>00:01.560</p></td>
+<td><p>00:01.767</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="using_relay_viz.html#sphx-glr-how-to-work-with-relay-using-relay-viz-py"><span class="std std-ref">Use Relay Visualizer to Visualize Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_relay_viz.py</span></code>)</p></td>
diff --git a/docs/how_to/work_with_schedules/intrin_math.html b/docs/how_to/work_with_schedules/intrin_math.html
index 17aeaa2efa..5464574ef0 100644
--- a/docs/how_to/work_with_schedules/intrin_math.html
+++ b/docs/how_to/work_with_schedules/intrin_math.html
@@ -535,7 +535,7 @@ The following example customizes the CUDA lowering rule for <code class="code docuti
 <a href="../../reference/api/python/ir.html#tvm.ir.register_intrin_lowering" title="tvm.ir.register_intrin_lowering" class="sphx-glr-backref-module-tvm-ir sphx-glr-backref-type-py-function"><span class="n">register_intrin_lowering</span></a><span class="p">(</span><span class="s2">&quot;tir.exp&quot;</span><span class="p">,</span> <span class="n">target</span><span class="o">=</span><span class="s2">&quot;cuda&quot;</span><span class="p">,</span> <span class="n">f</span><span class="o">= [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&lt;function my_cuda_math_rule at 0x7fc03de62680&gt;
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&lt;function my_cuda_math_rule at 0x7ff4419b33b0&gt;
 </pre></div>
 </div>
 <p>Register the rule to TVM with override option to override existing rule.
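Note: only the printed function address changed in the intrin_math hunk. A minimal sketch of the registration it reflects, assuming the rule dispatches float32 tir.exp to CUDA's __expf as in the tutorial; level=99 gives the rule priority over the built-in lowering:

    import tvm
    from tvm import tir
    from tvm.ir import register_intrin_lowering

    def my_cuda_math_rule(op):
        """Rewrite float32 tir.exp into a CUDA fast-math extern call."""
        assert isinstance(op, tir.Call)
        if op.dtype == "float32":
            return tir.call_pure_extern("float32", "__expf", op.args[0])
        return op  # leave other dtypes to the default lowering

    register_intrin_lowering("tir.exp", target="cuda", f=my_cuda_math_rule, level=99)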
diff --git a/docs/how_to/work_with_schedules/sg_execution_times.html b/docs/how_to/work_with_schedules/sg_execution_times.html
index 5071eff9ba..ce26168de8 100644
--- a/docs/how_to/work_with_schedules/sg_execution_times.html
+++ b/docs/how_to/work_with_schedules/sg_execution_times.html
@@ -340,7 +340,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-schedules-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:07.829</strong> total execution time for <strong>how_to_work_with_schedules</strong> files:</p>
+<p><strong>00:06.213</strong> total execution time for <strong>how_to_work_with_schedules</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -349,27 +349,27 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="intrin_math.html#sphx-glr-how-to-work-with-schedules-intrin-math-py"><span class="std std-ref">Intrinsics and Math Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">intrin_math.py</span></code>)</p></td>
-<td><p>00:05.217</p></td>
+<td><p>00:03.706</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tensorize.html#sphx-glr-how-to-work-with-schedules-tensorize-py"><span class="std std-ref">Use Tensorize to Leverage Hardware Intrinsics</span></a> (<code class="docutils literal notranslate"><span class="pre">tensorize.py</span></code>)</p></td>
-<td><p>00:01.220</p></td>
+<td><p>00:01.141</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="reduction.html#sphx-glr-how-to-work-with-schedules-reduction-py"><span class="std std-ref">Reduction</span></a> (<code class="docutils literal notranslate"><span class="pre">reduction.py</span></code>)</p></td>
-<td><p>00:00.594</p></td>
+<td><p>00:00.586</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="scan.html#sphx-glr-how-to-work-with-schedules-scan-py"><span class="std std-ref">Scan and Recurrent Kernel</span></a> (<code class="docutils literal notranslate"><span class="pre">scan.py</span></code>)</p></td>
-<td><p>00:00.572</p></td>
+<td><p>00:00.561</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="extern_op.html#sphx-glr-how-to-work-with-schedules-extern-op-py"><span class="std std-ref">External Tensor Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">extern_op.py</span></code>)</p></td>
-<td><p>00:00.119</p></td>
+<td><p>00:00.115</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="schedule_primitives.html#sphx-glr-how-to-work-with-schedules-schedule-primitives-py"><span class="std std-ref">Schedule Primitives in TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">schedule_primitives.py</span></code>)</p></td>
-<td><p>00:00.052</p></td>
+<td><p>00:00.049</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tedd.html#sphx-glr-how-to-work-with-schedules-tedd-py"><span class="std std-ref">Use Tensor Expression Debug Display (TEDD) for Visualization</span></a> (<code class="docutils literal notranslate"><span class="pre">tedd.py</span></code>)</p></td>
diff --git a/docs/how_to/work_with_schedules/tensorize.html b/docs/how_to/work_with_schedules/tensorize.html
index 2d6ee92492..e3052ea411 100644
--- a/docs/how_to/work_with_schedules/tensorize.html
+++ b/docs/how_to/work_with_schedules/tensorize.html
@@ -587,7 +587,7 @@ The import needs to happen before the tensorized GEMV is executed.</p>
              B: Buffer(B_2: Pointer(float32), float32, [512, 64], []),
              C: Buffer(C_2: Pointer(float32), float32, [1024, 512], [])}
   buffer_map = {A_1: A, B_1: B, C_1: C} {
-  attr [IterVar(i: int32, (nullptr), &quot;DataPar&quot;, &quot;&quot;)] &quot;pragma_import_llvm&quot; = &quot;; ModuleID = &#39;/tmp/tmpu8koxk36/input0.cc&#39;\nsource_filename = \&quot;/tmp/tmpu8koxk36/input0.cc\&quot;\ntarget datalayout = \&quot;e-m:e-i64:64-f80:128-n8:16:32:64-S128\&quot;\ntarget triple = \&quot;x86_64-pc-linux-gnu\&quot;\n\n; Function Attrs: noinline nounwind optnone uwtable\ndefine dso_local i32 @gemv_update(float*, float*, float*, i32, i32, i32) #0 {\n  %7 = allo [...]
+  attr [IterVar(i: int32, (nullptr), &quot;DataPar&quot;, &quot;&quot;)] &quot;pragma_import_llvm&quot; = &quot;; ModuleID = &#39;/tmp/tmpl9766ior/input0.cc&#39;\nsource_filename = \&quot;/tmp/tmpl9766ior/input0.cc\&quot;\ntarget datalayout = \&quot;e-m:e-i64:64-f80:128-n8:16:32:64-S128\&quot;\ntarget triple = \&quot;x86_64-pc-linux-gnu\&quot;\n\n; Function Attrs: noinline nounwind optnone uwtable\ndefine dso_local i32 @gemv_update(float*, float*, float*, i32, i32, i32) #0 {\n  %7 = allo [...]
   for (i, 0, 1024) {
     for (j.outer: int32, 0, 32) {
       @tir.call_extern(&quot;gemv_update&quot;, @tir.tvm_access_ptr(@tir.type_annotation(, dtype=float32), C_2, ((i*512) + (j.outer*16)), 16, 2, dtype=handle), @tir.tvm_access_ptr(@tir.type_annotation(, dtype=float32), A_2, (i*64), 64, 1, dtype=handle), @tir.tvm_access_ptr(@tir.type_annotation(, dtype=float32), B_2, (j.outer*1024), 1024, 1, dtype=handle), 16, 64, 64, dtype=int32)
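Note: the tensorize hunk differs only in the temp-file path embedded in the imported LLVM IR. A toy sketch of how such IR is attached, assuming ll_code stands in for IR that clang would emit from the external gemv_update source; attaching it via a pragma is what produces the pragma_import_llvm attribute in the printed TIR:

    import tvm
    from tvm import te

    # Toy schedule; ll_code is a placeholder for IR from `clang -emit-llvm`.
    n = te.var("n")
    A = te.placeholder((n,), name="A")
    B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
    s = te.create_schedule(B.op)

    ll_code = "; ModuleID = 'input0.cc'"  # placeholder string, not real IR
    s[B].pragma(B.op.axis[0], "import_llvm", ll_code)

    # The pragma surfaces as the pragma_import_llvm attr when lowered.
    print(tvm.lower(s, [A, B], simple_mode=True))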
diff --git a/docs/install/nnpack.html b/docs/install/nnpack.html
index 1ef28de467..23d2181e9d 100644
--- a/docs/install/nnpack.html
+++ b/docs/install/nnpack.html
@@ -229,7 +229,17 @@
               <p class="caption" role="heading"><span class="caption-text">Getting Started</span></p>
 <ul class="current">
 <li class="toctree-l1 current"><a class="reference internal" href="index.html">Installing TVM</a><ul class="current">
-<li class="toctree-l2"><a class="reference internal" href="from_source.html">Install from Source</a></li>
+<li class="toctree-l2 current"><a class="reference internal" href="from_source.html">Install from Source</a><ul class="current">
+<li class="toctree-l3"><a class="reference internal" href="from_source.html#developers-get-source-from-github">Developers: Get Source from Github</a></li>
+<li class="toctree-l3"><a class="reference internal" href="from_source.html#build-the-shared-library">Build the Shared Library</a></li>
+<li class="toctree-l3"><a class="reference internal" href="from_source.html#python-package-installation">Python Package Installation</a></li>
+<li class="toctree-l3 current"><a class="reference internal" href="from_source.html#install-contrib-libraries">Install Contrib Libraries</a><ul class="current">
+<li class="toctree-l4 current"><a class="current reference internal" href="#">NNPACK Contrib Installation</a></li>
+</ul>
+</li>
+<li class="toctree-l3"><a class="reference internal" href="from_source.html#enable-c-tests">Enable C++ Tests</a></li>
+</ul>
+</li>
 <li class="toctree-l2"><a class="reference internal" href="docker.html">Docker Images</a></li>
 <li class="toctree-l2 current"><a class="current reference internal" href="#">NNPACK Contrib Installation</a><ul>
 <li class="toctree-l3"><a class="reference internal" href="#conditions">Conditions</a></li>
diff --git a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator-members.html b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator-members.html
index fd4cf7dfb4..caf09bbe9d 100644
--- a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator-members.html
+++ b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator-members.html
@@ -78,33 +78,32 @@ $(function() {
   <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a4ce54511e556a30567e5d5876c81c91d">DefaultHexagon</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
   <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a15a0354263735c53c4b7419153da7c87">DefaultLLVM</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
   <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#af8fca919396df4557beeacfce9be0ef2">DefaultMicro</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a8473324dbcbe078f021a58219a2cb687">DefaultVNNI</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a17d8d5ad92691f9e18e3e0ae8ef69e4f">defined</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#acd04bb22a6861e9952c344ee8547411f">DowncastNoCheck</a>(ObjectRef ref)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span><span class="mlabel">static</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#ade6fc51af24708ee525c45a304ba342e">FApply</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#afc0d122e314d403b9d1abff9664deb1f">FAsString</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a8a88bb65f31f21894e25c443c0756d7b">FClone</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a22e5bb9d64dbc773bb9263b70882239e">FFIClearAfterMove</a>(ObjectRef *ref)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span><span class="mlabel">static</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#aef9bdcd9ecc168cccb807de472d29630">FInitializeWithTuneContext</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aadbc0886ffa80162ff31eefd0431ba09">get</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#ae423057ecf93c18714d17f53cd1d318f">get_mutable</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aed593996e4076632450de8fde776707c">GetDataPtr</a>(const ObjectRef &amp;ref)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span><span class="mlabel">static</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a2f706028c59f1c2d5a87ae58785b79c9">MutateComputeLocation</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#acb242cfc6875055d75f7ea7adcfa9c14">MutateParallel</a>(int64_t max_jobs_per_core)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a008b237e2c944cc25c123ef412dcd397">MutateThreadBinding</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a84ed21cbc627ff6dd49f983a05113696">MutateTileSize</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a5bedfb467944180740728c76ba39312f">MutateUnroll</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aa07c1f6d66a438ea950637d13ed09471">ObjectRef</a>()=default</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a6a7dd7404edf1c26f8dbd9bd92d03a02">ObjectRef</a>(ObjectPtr&lt; Object &gt; data)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">explicit</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aa1bd13a7185cb4b2b6bdde49416e8aa4">operator!=</a>(const ObjectRef &amp;other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a3deeeac5827a88f375b8c6ae1039c219">operator-&gt;</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a4744bf4a1b48f202d41b51dc5e08e6ee">operator&lt;</a>(const ObjectRef &amp;other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#affdf1b8cdb36e140de7b3ad7064e4617">operator==</a>(const ObjectRef &amp;other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#ad47720eb4ce8167fd82c64b5b17d53f6">PyMutator</a>(FInitializeWithTuneContext f_initialize_with_tune_context, FApply f_apply, FClone f_clone, FAsString f_as_string)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#ae31a5b9f40781d60a2901994ead700e8">same_as</a>(const ObjectRef &amp;other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a7c1529cf73f979a4c4fa12f8fcc3588c">TVM_DEFINE_MUTABLE_OBJECT_REF_METHODS</a>(Mutator, ObjectRef, MutatorNode)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a4e7cdb1574b93a59e784d70aa47b8da7">unique</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a0ae0da21d247cd87ea94fe3777c4405e">use_count</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a17d8d5ad92691f9e18e3e0ae8ef69e4f">defined</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#acd04bb22a6861e9952c344ee8547411f">DowncastNoCheck</a>(ObjectRef ref)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span><span class="mlabel">static</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#ade6fc51af24708ee525c45a304ba342e">FApply</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#afc0d122e314d403b9d1abff9664deb1f">FAsString</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a8a88bb65f31f21894e25c443c0756d7b">FClone</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a22e5bb9d64dbc773bb9263b70882239e">FFIClearAfterMove</a>(ObjectRef *ref)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span><span class="mlabel">static</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#aef9bdcd9ecc168cccb807de472d29630">FInitializeWithTuneContext</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aadbc0886ffa80162ff31eefd0431ba09">get</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#ae423057ecf93c18714d17f53cd1d318f">get_mutable</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aed593996e4076632450de8fde776707c">GetDataPtr</a>(const ObjectRef &amp;ref)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span><span class="mlabel">static</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a2f706028c59f1c2d5a87ae58785b79c9">MutateComputeLocation</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#acb242cfc6875055d75f7ea7adcfa9c14">MutateParallel</a>(int64_t max_jobs_per_core)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a008b237e2c944cc25c123ef412dcd397">MutateThreadBinding</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a84ed21cbc627ff6dd49f983a05113696">MutateTileSize</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a5bedfb467944180740728c76ba39312f">MutateUnroll</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aa07c1f6d66a438ea950637d13ed09471">ObjectRef</a>()=default</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a6a7dd7404edf1c26f8dbd9bd92d03a02">ObjectRef</a>(ObjectPtr&lt; Object &gt; data)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">explicit</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aa1bd13a7185cb4b2b6bdde49416e8aa4">operator!=</a>(const ObjectRef &amp;other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a3deeeac5827a88f375b8c6ae1039c219">operator-&gt;</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a4744bf4a1b48f202d41b51dc5e08e6ee">operator&lt;</a>(const ObjectRef &amp;other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#affdf1b8cdb36e140de7b3ad7064e4617">operator==</a>(const ObjectRef &amp;other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#ad47720eb4ce8167fd82c64b5b17d53f6">PyMutator</a>(FInitializeWithTuneContext f_initialize_with_tune_context, FApply f_apply, FClone f_clone, FAsString f_as_string)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#ae31a5b9f40781d60a2901994ead700e8">same_as</a>(const ObjectRef &amp;other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a7c1529cf73f979a4c4fa12f8fcc3588c">TVM_DEFINE_MUTABLE_OBJECT_REF_METHODS</a>(Mutator, ObjectRef, MutatorNode)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a4e7cdb1574b93a59e784d70aa47b8da7">unique</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a0ae0da21d247cd87ea94fe3777c4405e">use_count</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
 </table></div><!-- contents -->
 <!-- start footer part -->
 <hr class="footer"/><address class="footer"><small>
diff --git a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator.html b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator.html
index 70b0ea0185..fc806b6ad9 100644
--- a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator.html
+++ b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator.html
@@ -79,13 +79,13 @@ $(function() {
 <div class="dynheader">
 Inheritance diagram for tvm::meta_schedule::Mutator:</div>
 <div class="dyncontent">
-<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1meta__schedule_1_1Mutator__inherit__graph.svg" width="218" height="654"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
+<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1meta__schedule_1_1Mutator__inherit__graph.svg" width="218" height="639"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
 </div>
 </div>
 <div class="dynheader">
 Collaboration diagram for tvm::meta_schedule::Mutator:</div>
 <div class="dyncontent">
-<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1meta__schedule_1_1Mutator__coll__graph.svg" width="218" height="942"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
+<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1meta__schedule_1_1Mutator__coll__graph.svg" width="218" height="927"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
 </div>
 </div>
 <table class="memberdecls">
@@ -169,9 +169,6 @@ Static Public Member Functions</h2></td></tr>
 <tr class="memitem:a15a0354263735c53c4b7419153da7c87"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1runtime_1_1Map.html">Map</a>&lt; <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">Mutator</a>, <a class="el" href="classtvm_1_1FloatImm.html">FloatImm</a>, void &gt;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a15a0354263735c53c4b7419153da7c87">DefaultLLVM</a> () [...]
 <tr class="memdesc:a15a0354263735c53c4b7419153da7c87"><td class="mdescLeft">&#160;</td><td class="mdescRight">Create default mutators for LLVM.  <a href="#a15a0354263735c53c4b7419153da7c87">More...</a><br /></td></tr>
 <tr class="separator:a15a0354263735c53c4b7419153da7c87"><td class="memSeparator" colspan="2">&#160;</td></tr>
-<tr class="memitem:a8473324dbcbe078f021a58219a2cb687"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1runtime_1_1Map.html">Map</a>&lt; <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">Mutator</a>, <a class="el" href="classtvm_1_1FloatImm.html">FloatImm</a>, void &gt;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a8473324dbcbe078f021a58219a2cb687">DefaultVNNI</a> () [...]
-<tr class="memdesc:a8473324dbcbe078f021a58219a2cb687"><td class="mdescLeft">&#160;</td><td class="mdescRight">Create default mutators for x86 VNNI.  <a href="#a8473324dbcbe078f021a58219a2cb687">More...</a><br /></td></tr>
-<tr class="separator:a8473324dbcbe078f021a58219a2cb687"><td class="memSeparator" colspan="2">&#160;</td></tr>
 <tr class="memitem:a6eb9b1298865cdeb5a8247a4e14454e3"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1runtime_1_1Map.html">Map</a>&lt; <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">Mutator</a>, <a class="el" href="classtvm_1_1FloatImm.html">FloatImm</a>, void &gt;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a6eb9b1298865cdeb5a8247a4e14454e3">DefaultCUDA</a> () [...]
 <tr class="memdesc:a6eb9b1298865cdeb5a8247a4e14454e3"><td class="mdescLeft">&#160;</td><td class="mdescRight">Create default mutators for CUDA.  <a href="#a6eb9b1298865cdeb5a8247a4e14454e3">More...</a><br /></td></tr>
 <tr class="separator:a6eb9b1298865cdeb5a8247a4e14454e3"><td class="memSeparator" colspan="2">&#160;</td></tr>
@@ -427,33 +424,6 @@ Additional Inherited Members</h2></td></tr>
 
 <p>Create default mutators for Micro. </p>
 
-</div>
-</div>
-<a id="a8473324dbcbe078f021a58219a2cb687"></a>
-<h2 class="memtitle"><span class="permalink"><a href="#a8473324dbcbe078f021a58219a2cb687">&#9670;&nbsp;</a></span>DefaultVNNI()</h2>
-
-<div class="memitem">
-<div class="memproto">
-<table class="mlabels">
-  <tr>
-  <td class="mlabels-left">
-      <table class="memname">
-        <tr>
-          <td class="memname">static <a class="el" href="classtvm_1_1runtime_1_1Map.html">Map</a>&lt;<a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">Mutator</a>, <a class="el" href="classtvm_1_1FloatImm.html">FloatImm</a>, void&gt; tvm::meta_schedule::Mutator::DefaultVNNI </td>
-          <td>(</td>
-          <td class="paramname"></td><td>)</td>
-          <td></td>
-        </tr>
-      </table>
-  </td>
-  <td class="mlabels-right">
-<span class="mlabels"><span class="mlabel">static</span></span>  </td>
-  </tr>
-</table>
-</div><div class="memdoc">
-
-<p>Create default mutators for x86 VNNI. </p>
-
 </div>
 </div>
 <a id="a2f706028c59f1c2d5a87ae58785b79c9"></a>
diff --git a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator__coll__graph.svg b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator__coll__graph.svg
index 3e388ea5ce..1fbb25a704 100644
--- a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator__coll__graph.svg
+++ b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator__coll__graph.svg
@@ -4,30 +4,29 @@
 <!-- Generated by graphviz version 2.40.1 (20161225.0304)
  -->
 <!-- Title: tvm::meta_schedule::Mutator Pages: 1 -->
-<svg width="163pt" height="706pt"
- viewBox="0.00 0.00 163.00 706.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
-<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 702)">
+<svg width="163pt" height="695pt"
+ viewBox="0.00 0.00 163.00 695.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 691)">
 <title>tvm::meta_schedule::Mutator</title>
-<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-702 159,-702 159,4 -4,4"/>
+<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-691 159,-691 159,4 -4,4"/>
 <!-- Node2 -->
 <g id="node1" class="node">
 <title>Node2</title>
-<polygon fill="#bfbfbf" stroke="#000000" points="0,-.5 0,-211.5 155,-211.5 155,-.5 0,-.5"/>
-<text text-anchor="start" x="8" y="-199.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::meta_schedule</text>
-<text text-anchor="middle" x="77.5" y="-188.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">::Mutator</text>
-<polyline fill="none" stroke="#000000" points="0,-181.5 155,-181.5 "/>
-<text text-anchor="middle" x="77.5" y="-169.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="0,-162.5 155,-162.5 "/>
-<text text-anchor="start" x="8" y="-150.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_MUTABLE</text>
-<text text-anchor="start" x="8" y="-139.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_OBJECT_REF_METHODS()</text>
-<text text-anchor="start" x="8" y="-128.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateTileSize()</text>
-<text text-anchor="start" x="8" y="-117.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateParallel()</text>
-<text text-anchor="start" x="8" y="-106.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateUnroll()</text>
-<text text-anchor="start" x="8" y="-95.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateComputeLocation()</text>
-<text text-anchor="start" x="8" y="-84.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateThreadBinding()</text>
-<text text-anchor="start" x="8" y="-73.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ PyMutator()</text>
-<text text-anchor="start" x="8" y="-62.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ DefaultLLVM()</text>
-<text text-anchor="start" x="8" y="-51.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ DefaultVNNI()</text>
+<polygon fill="#bfbfbf" stroke="#000000" points="0,-.5 0,-200.5 155,-200.5 155,-.5 0,-.5"/>
+<text text-anchor="start" x="8" y="-188.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::meta_schedule</text>
+<text text-anchor="middle" x="77.5" y="-177.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">::Mutator</text>
+<polyline fill="none" stroke="#000000" points="0,-170.5 155,-170.5 "/>
+<text text-anchor="middle" x="77.5" y="-158.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="0,-151.5 155,-151.5 "/>
+<text text-anchor="start" x="8" y="-139.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_MUTABLE</text>
+<text text-anchor="start" x="8" y="-128.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_OBJECT_REF_METHODS()</text>
+<text text-anchor="start" x="8" y="-117.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateTileSize()</text>
+<text text-anchor="start" x="8" y="-106.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateParallel()</text>
+<text text-anchor="start" x="8" y="-95.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateUnroll()</text>
+<text text-anchor="start" x="8" y="-84.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateComputeLocation()</text>
+<text text-anchor="start" x="8" y="-73.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateThreadBinding()</text>
+<text text-anchor="start" x="8" y="-62.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ PyMutator()</text>
+<text text-anchor="start" x="8" y="-51.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ DefaultLLVM()</text>
 <text text-anchor="start" x="8" y="-40.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ DefaultCUDA()</text>
 <text text-anchor="start" x="8" y="-29.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ DefaultCUDATensorCore()</text>
 <text text-anchor="start" x="8" y="-18.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ DefaultHexagon()</text>
@@ -37,66 +36,66 @@
 <g id="node2" class="node">
 <title>Node3</title>
 <g id="a_node2"><a xlink:href="classtvm_1_1runtime_1_1ObjectRef.html" target="_top" xlink:title="Base class of all object reference. ">
-<polygon fill="#ffffff" stroke="#000000" points="10.5,-249.5 10.5,-471.5 144.5,-471.5 144.5,-249.5 10.5,-249.5"/>
-<text text-anchor="middle" x="77.5" y="-459.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectRef</text>
-<polyline fill="none" stroke="#000000" points="10.5,-452.5 144.5,-452.5 "/>
-<text text-anchor="start" x="18.5" y="-440.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
-<polyline fill="none" stroke="#000000" points="10.5,-433.5 144.5,-433.5 "/>
-<text text-anchor="start" x="18.5" y="-421.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
+<polygon fill="#ffffff" stroke="#000000" points="10.5,-238.5 10.5,-460.5 144.5,-460.5 144.5,-238.5 10.5,-238.5"/>
+<text text-anchor="middle" x="77.5" y="-448.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectRef</text>
+<polyline fill="none" stroke="#000000" points="10.5,-441.5 144.5,-441.5 "/>
+<text text-anchor="start" x="18.5" y="-429.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
+<polyline fill="none" stroke="#000000" points="10.5,-422.5 144.5,-422.5 "/>
 <text text-anchor="start" x="18.5" y="-410.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
-<text text-anchor="start" x="18.5" y="-399.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ same_as()</text>
-<text text-anchor="start" x="18.5" y="-388.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator==()</text>
-<text text-anchor="start" x="18.5" y="-377.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator!=()</text>
-<text text-anchor="start" x="18.5" y="-366.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator&lt;()</text>
-<text text-anchor="start" x="18.5" y="-355.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ defined()</text>
-<text text-anchor="start" x="18.5" y="-344.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
-<text text-anchor="start" x="18.5" y="-333.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator&#45;&gt;()</text>
-<text text-anchor="start" x="18.5" y="-322.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
-<text text-anchor="start" x="18.5" y="-311.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ use_count()</text>
-<text text-anchor="start" x="18.5" y="-300.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ as()</text>
-<text text-anchor="start" x="18.5" y="-289.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># get_mutable()</text>
-<text text-anchor="start" x="18.5" y="-278.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DowncastNoCheck()</text>
-<text text-anchor="start" x="18.5" y="-267.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># FFIClearAfterMove()</text>
-<text text-anchor="start" x="18.5" y="-256.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetDataPtr()</text>
+<text text-anchor="start" x="18.5" y="-399.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
+<text text-anchor="start" x="18.5" y="-388.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ same_as()</text>
+<text text-anchor="start" x="18.5" y="-377.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator==()</text>
+<text text-anchor="start" x="18.5" y="-366.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator!=()</text>
+<text text-anchor="start" x="18.5" y="-355.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator&lt;()</text>
+<text text-anchor="start" x="18.5" y="-344.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ defined()</text>
+<text text-anchor="start" x="18.5" y="-333.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
+<text text-anchor="start" x="18.5" y="-322.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator&#45;&gt;()</text>
+<text text-anchor="start" x="18.5" y="-311.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
+<text text-anchor="start" x="18.5" y="-300.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ use_count()</text>
+<text text-anchor="start" x="18.5" y="-289.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ as()</text>
+<text text-anchor="start" x="18.5" y="-278.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># get_mutable()</text>
+<text text-anchor="start" x="18.5" y="-267.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DowncastNoCheck()</text>
+<text text-anchor="start" x="18.5" y="-256.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># FFIClearAfterMove()</text>
+<text text-anchor="start" x="18.5" y="-245.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetDataPtr()</text>
 </a>
 </g>
 </g>
 <!-- Node3&#45;&gt;Node2 -->
 <g id="edge1" class="edge">
 <title>Node3&#45;&gt;Node2</title>
-<path fill="none" stroke="#191970" d="M77.5,-239.0023C77.5,-229.9086 77.5,-220.7756 77.5,-211.7897"/>
-<polygon fill="none" stroke="#191970" points="74.0001,-239.2428 77.5,-249.2428 81.0001,-239.2429 74.0001,-239.2428"/>
+<path fill="none" stroke="#191970" d="M77.5,-228.4182C77.5,-219.1346 77.5,-209.8256 77.5,-200.698"/>
+<polygon fill="none" stroke="#191970" points="74.0001,-228.4721 77.5,-238.4721 81.0001,-228.4721 74.0001,-228.4721"/>
 </g>
 <!-- Node4 -->
 <g id="node3" class="node">
 <title>Node4</title>
 <g id="a_node3"><a xlink:href="classtvm_1_1runtime_1_1ObjectPtr.html" target="_top" xlink:title="{tvm::runtime::ObjectPtr\l\&lt; tvm::runtime::Object \&gt;\n||+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ~ObjectPtr()\l+ swap()\l+ get()\l+ operator&#45;\&gt;()\land 11 more...\l}">
-<polygon fill="#ffffff" stroke="#000000" points="7.5,-519.5 7.5,-697.5 147.5,-697.5 147.5,-519.5 7.5,-519.5"/>
-<text text-anchor="start" x="15.5" y="-685.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectPtr</text>
-<text text-anchor="middle" x="77.5" y="-674.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">&lt; tvm::runtime::Object &gt;</text>
-<polyline fill="none" stroke="#000000" points="7.5,-667.5 147.5,-667.5 "/>
-<text text-anchor="middle" x="77.5" y="-655.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="7.5,-648.5 147.5,-648.5 "/>
-<text text-anchor="start" x="15.5" y="-636.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
+<polygon fill="#ffffff" stroke="#000000" points="7.5,-508.5 7.5,-686.5 147.5,-686.5 147.5,-508.5 7.5,-508.5"/>
+<text text-anchor="start" x="15.5" y="-674.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectPtr</text>
+<text text-anchor="middle" x="77.5" y="-663.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">&lt; tvm::runtime::Object &gt;</text>
+<polyline fill="none" stroke="#000000" points="7.5,-656.5 147.5,-656.5 "/>
+<text text-anchor="middle" x="77.5" y="-644.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="7.5,-637.5 147.5,-637.5 "/>
 <text text-anchor="start" x="15.5" y="-625.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
 <text text-anchor="start" x="15.5" y="-614.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
 <text text-anchor="start" x="15.5" y="-603.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
 <text text-anchor="start" x="15.5" y="-592.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
 <text text-anchor="start" x="15.5" y="-581.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
-<text text-anchor="start" x="15.5" y="-570.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ~ObjectPtr()</text>
-<text text-anchor="start" x="15.5" y="-559.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ swap()</text>
-<text text-anchor="start" x="15.5" y="-548.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
-<text text-anchor="start" x="15.5" y="-537.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator&#45;&gt;()</text>
-<text text-anchor="start" x="15.5" y="-526.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 11 more...</text>
+<text text-anchor="start" x="15.5" y="-570.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
+<text text-anchor="start" x="15.5" y="-559.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ~ObjectPtr()</text>
+<text text-anchor="start" x="15.5" y="-548.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ swap()</text>
+<text text-anchor="start" x="15.5" y="-537.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
+<text text-anchor="start" x="15.5" y="-526.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator&#45;&gt;()</text>
+<text text-anchor="start" x="15.5" y="-515.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 11 more...</text>
 </a>
 </g>
 </g>
 <!-- Node4&#45;&gt;Node3 -->
 <g id="edge2" class="edge">
 <title>Node4&#45;&gt;Node3</title>
-<path fill="none" stroke="#404040" d="M77.5,-519.3167C77.5,-507.8765 77.5,-496.0062 77.5,-484.1402"/>
-<polygon fill="none" stroke="#404040" points="77.5001,-483.7944 73.5,-477.7944 77.5,-471.7944 81.5,-477.7943 77.5001,-483.7944"/>
-<text text-anchor="middle" x="97" y="-493" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> #data_</text>
+<path fill="none" stroke="#404040" d="M77.5,-508.3167C77.5,-496.8765 77.5,-485.0062 77.5,-473.1402"/>
+<polygon fill="none" stroke="#404040" points="77.5001,-472.7944 73.5,-466.7944 77.5,-460.7944 81.5,-466.7943 77.5001,-472.7944"/>
+<text text-anchor="middle" x="97" y="-482" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> #data_</text>
 </g>
 </g>
 </svg>
diff --git a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator__inherit__graph.svg b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator__inherit__graph.svg
index 2ab76f556b..3e2c85df76 100644
--- a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator__inherit__graph.svg
+++ b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator__inherit__graph.svg
@@ -4,30 +4,29 @@
 <!-- Generated by graphviz version 2.40.1 (20161225.0304)
  -->
 <!-- Title: tvm::meta_schedule::Mutator Pages: 1 -->
-<svg width="163pt" height="490pt"
- viewBox="0.00 0.00 163.00 490.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
-<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 486)">
+<svg width="163pt" height="479pt"
+ viewBox="0.00 0.00 163.00 479.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 475)">
 <title>tvm::meta_schedule::Mutator</title>
-<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-486 159,-486 159,4 -4,4"/>
+<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-475 159,-475 159,4 -4,4"/>
 <!-- Node0 -->
 <g id="node1" class="node">
 <title>Node0</title>
-<polygon fill="#bfbfbf" stroke="#000000" points="0,-.5 0,-211.5 155,-211.5 155,-.5 0,-.5"/>
-<text text-anchor="start" x="8" y="-199.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::meta_schedule</text>
-<text text-anchor="middle" x="77.5" y="-188.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">::Mutator</text>
-<polyline fill="none" stroke="#000000" points="0,-181.5 155,-181.5 "/>
-<text text-anchor="middle" x="77.5" y="-169.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="0,-162.5 155,-162.5 "/>
-<text text-anchor="start" x="8" y="-150.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_MUTABLE</text>
-<text text-anchor="start" x="8" y="-139.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_OBJECT_REF_METHODS()</text>
-<text text-anchor="start" x="8" y="-128.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateTileSize()</text>
-<text text-anchor="start" x="8" y="-117.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateParallel()</text>
-<text text-anchor="start" x="8" y="-106.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateUnroll()</text>
-<text text-anchor="start" x="8" y="-95.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateComputeLocation()</text>
-<text text-anchor="start" x="8" y="-84.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateThreadBinding()</text>
-<text text-anchor="start" x="8" y="-73.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ PyMutator()</text>
-<text text-anchor="start" x="8" y="-62.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ DefaultLLVM()</text>
-<text text-anchor="start" x="8" y="-51.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ DefaultVNNI()</text>
+<polygon fill="#bfbfbf" stroke="#000000" points="0,-.5 0,-200.5 155,-200.5 155,-.5 0,-.5"/>
+<text text-anchor="start" x="8" y="-188.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::meta_schedule</text>
+<text text-anchor="middle" x="77.5" y="-177.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">::Mutator</text>
+<polyline fill="none" stroke="#000000" points="0,-170.5 155,-170.5 "/>
+<text text-anchor="middle" x="77.5" y="-158.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="0,-151.5 155,-151.5 "/>
+<text text-anchor="start" x="8" y="-139.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_MUTABLE</text>
+<text text-anchor="start" x="8" y="-128.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_OBJECT_REF_METHODS()</text>
+<text text-anchor="start" x="8" y="-117.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateTileSize()</text>
+<text text-anchor="start" x="8" y="-106.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateParallel()</text>
+<text text-anchor="start" x="8" y="-95.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateUnroll()</text>
+<text text-anchor="start" x="8" y="-84.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateComputeLocation()</text>
+<text text-anchor="start" x="8" y="-73.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateThreadBinding()</text>
+<text text-anchor="start" x="8" y="-62.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ PyMutator()</text>
+<text text-anchor="start" x="8" y="-51.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ DefaultLLVM()</text>
 <text text-anchor="start" x="8" y="-40.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ DefaultCUDA()</text>
 <text text-anchor="start" x="8" y="-29.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ DefaultCUDATensorCore()</text>
 <text text-anchor="start" x="8" y="-18.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ DefaultHexagon()</text>
@@ -37,36 +36,36 @@
 <g id="node2" class="node">
 <title>Node1</title>
 <g id="a_node2"><a xlink:href="classtvm_1_1runtime_1_1ObjectRef.html" target="_top" xlink:title="Base class of all object reference. ">
-<polygon fill="#ffffff" stroke="#000000" points="10.5,-248.5 10.5,-481.5 144.5,-481.5 144.5,-248.5 10.5,-248.5"/>
-<text text-anchor="middle" x="77.5" y="-469.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectRef</text>
-<polyline fill="none" stroke="#000000" points="10.5,-462.5 144.5,-462.5 "/>
-<text text-anchor="start" x="18.5" y="-450.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
-<text text-anchor="start" x="18.5" y="-439.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># data_</text>
-<polyline fill="none" stroke="#000000" points="10.5,-432.5 144.5,-432.5 "/>
-<text text-anchor="start" x="18.5" y="-420.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
+<polygon fill="#ffffff" stroke="#000000" points="10.5,-237.5 10.5,-470.5 144.5,-470.5 144.5,-237.5 10.5,-237.5"/>
+<text text-anchor="middle" x="77.5" y="-458.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectRef</text>
+<polyline fill="none" stroke="#000000" points="10.5,-451.5 144.5,-451.5 "/>
+<text text-anchor="start" x="18.5" y="-439.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
+<text text-anchor="start" x="18.5" y="-428.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># data_</text>
+<polyline fill="none" stroke="#000000" points="10.5,-421.5 144.5,-421.5 "/>
 <text text-anchor="start" x="18.5" y="-409.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
-<text text-anchor="start" x="18.5" y="-398.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ same_as()</text>
-<text text-anchor="start" x="18.5" y="-387.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator==()</text>
-<text text-anchor="start" x="18.5" y="-376.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator!=()</text>
-<text text-anchor="start" x="18.5" y="-365.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator&lt;()</text>
-<text text-anchor="start" x="18.5" y="-354.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ defined()</text>
-<text text-anchor="start" x="18.5" y="-343.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
-<text text-anchor="start" x="18.5" y="-332.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator&#45;&gt;()</text>
-<text text-anchor="start" x="18.5" y="-321.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
-<text text-anchor="start" x="18.5" y="-310.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ use_count()</text>
-<text text-anchor="start" x="18.5" y="-299.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ as()</text>
-<text text-anchor="start" x="18.5" y="-288.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># get_mutable()</text>
-<text text-anchor="start" x="18.5" y="-277.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DowncastNoCheck()</text>
-<text text-anchor="start" x="18.5" y="-266.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># FFIClearAfterMove()</text>
-<text text-anchor="start" x="18.5" y="-255.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetDataPtr()</text>
+<text text-anchor="start" x="18.5" y="-398.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
+<text text-anchor="start" x="18.5" y="-387.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ same_as()</text>
+<text text-anchor="start" x="18.5" y="-376.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator==()</text>
+<text text-anchor="start" x="18.5" y="-365.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator!=()</text>
+<text text-anchor="start" x="18.5" y="-354.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator&lt;()</text>
+<text text-anchor="start" x="18.5" y="-343.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ defined()</text>
+<text text-anchor="start" x="18.5" y="-332.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
+<text text-anchor="start" x="18.5" y="-321.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator&#45;&gt;()</text>
+<text text-anchor="start" x="18.5" y="-310.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
+<text text-anchor="start" x="18.5" y="-299.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ use_count()</text>
+<text text-anchor="start" x="18.5" y="-288.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ as()</text>
+<text text-anchor="start" x="18.5" y="-277.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># get_mutable()</text>
+<text text-anchor="start" x="18.5" y="-266.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DowncastNoCheck()</text>
+<text text-anchor="start" x="18.5" y="-255.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># FFIClearAfterMove()</text>
+<text text-anchor="start" x="18.5" y="-244.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetDataPtr()</text>
 </a>
 </g>
 </g>
 <!-- Node1&#45;&gt;Node0 -->
 <g id="edge1" class="edge">
 <title>Node1&#45;&gt;Node0</title>
-<path fill="none" stroke="#191970" d="M77.5,-238.3319C77.5,-229.417 77.5,-220.4843 77.5,-211.7027"/>
-<polygon fill="none" stroke="#191970" points="74.0001,-238.3804 77.5,-248.3805 81.0001,-238.3805 74.0001,-238.3804"/>
+<path fill="none" stroke="#191970" d="M77.5,-227.2283C77.5,-218.3287 77.5,-209.4293 77.5,-200.7056"/>
+<polygon fill="none" stroke="#191970" points="74.0001,-227.2668 77.5,-237.2668 81.0001,-227.2669 74.0001,-227.2668"/>
 </g>
 </g>
 </svg>
diff --git a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Postproc-members.html b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Postproc-members.html
index 6c7ad6a157..ff099f30ef 100644
--- a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Postproc-members.html
+++ b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Postproc-members.html
@@ -73,12 +73,12 @@ $(function() {
   <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a2d76fa1fb628ff276a284e61123589c5">as</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
   <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aa5c355fbb7d2f7402ee360dba8a52cdd">ContainerType</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"></td></tr>
   <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#ac261cdb80487fb29ac42b28678f8cbef">data_</a></td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">protected</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a799e989283bbfa92471829ab23179df5">DefaultCUDA</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a48dc2532ac0a7970cfcf1d482473a631">DefaultCUDATensorCore</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#ae4b33fac30e9420d0a0287ab44c37a98">DefaultHexagon</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a540ba92c0e373ff6872c736e3a2ca1b7">DefaultLLVM</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a83c92e6d1f474a65115e7c4a1216e631">DefaultMicro</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#ad8e2da27bbe3f41d69742d87a3232c4d">DefaultVNNI</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a4fe2775d916e99f27815aac6df46fd0c">DefaultCPUTensorization</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a799e989283bbfa92471829ab23179df5">DefaultCUDA</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a48dc2532ac0a7970cfcf1d482473a631">DefaultCUDATensorCore</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#ae4b33fac30e9420d0a0287ab44c37a98">DefaultHexagon</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a540ba92c0e373ff6872c736e3a2ca1b7">DefaultLLVM</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a83c92e6d1f474a65115e7c4a1216e631">DefaultMicro</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
   <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a17d8d5ad92691f9e18e3e0ae8ef69e4f">defined</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
   <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a91f7ba380cf0f400d8a3fced900f8522">DisallowAsyncStridedMemCopy</a>(bool merge_async_commit_queue_scope=true)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
   <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#af3d76d03f0c508b985f7050f0e18732d">DisallowDynamicLoop</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
diff --git a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Postproc.html b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Postproc.html
index 6f840c9507..2d2546ca84 100644
--- a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Postproc.html
+++ b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Postproc.html
@@ -184,9 +184,9 @@ Static Public Member Functions</h2></td></tr>
 <tr class="memitem:a540ba92c0e373ff6872c736e3a2ca1b7"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt; <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a>, void &gt;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a540ba92c0e373ff6872c736e3a2ca1b7">DefaultLLVM</a> ()</td></tr>
 <tr class="memdesc:a540ba92c0e373ff6872c736e3a2ca1b7"><td class="mdescLeft">&#160;</td><td class="mdescRight">Create default postprocessors for LLVM.  <a href="#a540ba92c0e373ff6872c736e3a2ca1b7">More...</a><br /></td></tr>
 <tr class="separator:a540ba92c0e373ff6872c736e3a2ca1b7"><td class="memSeparator" colspan="2">&#160;</td></tr>
-<tr class="memitem:ad8e2da27bbe3f41d69742d87a3232c4d"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt; <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a>, void &gt;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#ad8e2da27bbe3f41d69742d87a3232c4d">DefaultVNNI</a> ()</td></tr>
-<tr class="memdesc:ad8e2da27bbe3f41d69742d87a3232c4d"><td class="mdescLeft">&#160;</td><td class="mdescRight">Create default postprocessors for x86 VNNI.  <a href="#ad8e2da27bbe3f41d69742d87a3232c4d">More...</a><br /></td></tr>
-<tr class="separator:ad8e2da27bbe3f41d69742d87a3232c4d"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:a4fe2775d916e99f27815aac6df46fd0c"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt; <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a>, void &gt;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a4fe2775d916e99f27815aac6df46fd0c">DefaultCPUTensorization</a> ()</td></tr>
+<tr class="memdesc:a4fe2775d916e99f27815aac6df46fd0c"><td class="mdescLeft">&#160;</td><td class="mdescRight">Create default postprocessors for x86 (AVX512 and VNNI)  <a href="#a4fe2775d916e99f27815aac6df46fd0c">More...</a><br /></td></tr>
+<tr class="separator:a4fe2775d916e99f27815aac6df46fd0c"><td class="memSeparator" colspan="2">&#160;</td></tr>
 <tr class="memitem:a799e989283bbfa92471829ab23179df5"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt; <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a>, void &gt;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a799e989283bbfa92471829ab23179df5">DefaultCUDA</a> ()</td></tr>
 <tr class="memdesc:a799e989283bbfa92471829ab23179df5"><td class="mdescLeft">&#160;</td><td class="mdescRight">Create default postprocessors for CUDA.  <a href="#a799e989283bbfa92471829ab23179df5">More...</a><br /></td></tr>
 <tr class="separator:a799e989283bbfa92471829ab23179df5"><td class="memSeparator" colspan="2">&#160;</td></tr>
@@ -309,8 +309,8 @@ Additional Inherited Members</h2></td></tr>
 </div>
 </div>
 <h2 class="groupheader">Member Function Documentation</h2>
-<a id="a799e989283bbfa92471829ab23179df5"></a>
-<h2 class="memtitle"><span class="permalink"><a href="#a799e989283bbfa92471829ab23179df5">&#9670;&nbsp;</a></span>DefaultCUDA()</h2>
+<a id="a4fe2775d916e99f27815aac6df46fd0c"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#a4fe2775d916e99f27815aac6df46fd0c">&#9670;&nbsp;</a></span>DefaultCPUTensorization()</h2>
 
 <div class="memitem">
 <div class="memproto">
@@ -319,7 +319,7 @@ Additional Inherited Members</h2></td></tr>
   <td class="mlabels-left">
       <table class="memname">
         <tr>
-          <td class="memname">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt;<a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a>, void&gt; tvm::meta_schedule::Postproc::DefaultCUDA </td>
+          <td class="memname">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt;<a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a>, void&gt; tvm::meta_schedule::Postproc::DefaultCPUTensorization </td>
           <td>(</td>
           <td class="paramname"></td><td>)</td>
           <td></td>
@@ -332,12 +332,12 @@ Additional Inherited Members</h2></td></tr>
 </table>
 </div><div class="memdoc">
 
-<p>Create default postprocessors for CUDA. </p>
+<p>Create default postprocessors for x86 (AVX512 and VNNI). </p>
 
 </div>
 </div>
-<a id="a48dc2532ac0a7970cfcf1d482473a631"></a>
-<h2 class="memtitle"><span class="permalink"><a href="#a48dc2532ac0a7970cfcf1d482473a631">&#9670;&nbsp;</a></span>DefaultCUDATensorCore()</h2>
+<a id="a799e989283bbfa92471829ab23179df5"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#a799e989283bbfa92471829ab23179df5">&#9670;&nbsp;</a></span>DefaultCUDA()</h2>
 
 <div class="memitem">
 <div class="memproto">
@@ -346,7 +346,7 @@ Additional Inherited Members</h2></td></tr>
   <td class="mlabels-left">
       <table class="memname">
         <tr>
-          <td class="memname">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt;<a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a>, void&gt; tvm::meta_schedule::Postproc::DefaultCUDATensorCore </td>
+          <td class="memname">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt;<a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a>, void&gt; tvm::meta_schedule::Postproc::DefaultCUDA </td>
           <td>(</td>
           <td class="paramname"></td><td>)</td>
           <td></td>
@@ -359,12 +359,12 @@ Additional Inherited Members</h2></td></tr>
 </table>
 </div><div class="memdoc">
 
-<p>Create default postprocessors for CUDA with TensorCore. </p>
+<p>Create default postprocessors for CUDA. </p>
 
 </div>
 </div>
-<a id="ae4b33fac30e9420d0a0287ab44c37a98"></a>
-<h2 class="memtitle"><span class="permalink"><a href="#ae4b33fac30e9420d0a0287ab44c37a98">&#9670;&nbsp;</a></span>DefaultHexagon()</h2>
+<a id="a48dc2532ac0a7970cfcf1d482473a631"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#a48dc2532ac0a7970cfcf1d482473a631">&#9670;&nbsp;</a></span>DefaultCUDATensorCore()</h2>
 
 <div class="memitem">
 <div class="memproto">
@@ -373,7 +373,7 @@ Additional Inherited Members</h2></td></tr>
   <td class="mlabels-left">
       <table class="memname">
         <tr>
-          <td class="memname">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt;<a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a>, void&gt; tvm::meta_schedule::Postproc::DefaultHexagon </td>
+          <td class="memname">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt;<a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a>, void&gt; tvm::meta_schedule::Postproc::DefaultCUDATensorCore </td>
           <td>(</td>
           <td class="paramname"></td><td>)</td>
           <td></td>
@@ -386,12 +386,12 @@ Additional Inherited Members</h2></td></tr>
 </table>
 </div><div class="memdoc">
 
-<p>Create default postprocessors for Hexagon. </p>
+<p>Create default postprocessors for CUDA with TensorCore. </p>
 
 </div>
 </div>
-<a id="a540ba92c0e373ff6872c736e3a2ca1b7"></a>
-<h2 class="memtitle"><span class="permalink"><a href="#a540ba92c0e373ff6872c736e3a2ca1b7">&#9670;&nbsp;</a></span>DefaultLLVM()</h2>
+<a id="ae4b33fac30e9420d0a0287ab44c37a98"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#ae4b33fac30e9420d0a0287ab44c37a98">&#9670;&nbsp;</a></span>DefaultHexagon()</h2>
 
 <div class="memitem">
 <div class="memproto">
@@ -400,7 +400,7 @@ Additional Inherited Members</h2></td></tr>
   <td class="mlabels-left">
       <table class="memname">
         <tr>
-          <td class="memname">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt;<a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a>, void&gt; tvm::meta_schedule::Postproc::DefaultLLVM </td>
+          <td class="memname">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt;<a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a>, void&gt; tvm::meta_schedule::Postproc::DefaultHexagon </td>
           <td>(</td>
           <td class="paramname"></td><td>)</td>
           <td></td>
@@ -413,12 +413,12 @@ Additional Inherited Members</h2></td></tr>
 </table>
 </div><div class="memdoc">
 
-<p>Create default postprocessors for LLVM. </p>
+<p>Create default postprocessors for Hexagon. </p>
 
 </div>
 </div>
-<a id="a83c92e6d1f474a65115e7c4a1216e631"></a>
-<h2 class="memtitle"><span class="permalink"><a href="#a83c92e6d1f474a65115e7c4a1216e631">&#9670;&nbsp;</a></span>DefaultMicro()</h2>
+<a id="a540ba92c0e373ff6872c736e3a2ca1b7"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#a540ba92c0e373ff6872c736e3a2ca1b7">&#9670;&nbsp;</a></span>DefaultLLVM()</h2>
 
 <div class="memitem">
 <div class="memproto">
@@ -427,7 +427,7 @@ Additional Inherited Members</h2></td></tr>
   <td class="mlabels-left">
       <table class="memname">
         <tr>
-          <td class="memname">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt;<a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a>, void&gt; tvm::meta_schedule::Postproc::DefaultMicro </td>
+          <td class="memname">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt;<a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a>, void&gt; tvm::meta_schedule::Postproc::DefaultLLVM </td>
           <td>(</td>
           <td class="paramname"></td><td>)</td>
           <td></td>
@@ -440,12 +440,12 @@ Additional Inherited Members</h2></td></tr>
 </table>
 </div><div class="memdoc">
 
-<p>Create default postprocessors for Micro. </p>
+<p>Create default postprocessors for LLVM. </p>
 
 </div>
 </div>
-<a id="ad8e2da27bbe3f41d69742d87a3232c4d"></a>
-<h2 class="memtitle"><span class="permalink"><a href="#ad8e2da27bbe3f41d69742d87a3232c4d">&#9670;&nbsp;</a></span>DefaultVNNI()</h2>
+<a id="a83c92e6d1f474a65115e7c4a1216e631"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#a83c92e6d1f474a65115e7c4a1216e631">&#9670;&nbsp;</a></span>DefaultMicro()</h2>
 
 <div class="memitem">
 <div class="memproto">
@@ -454,7 +454,7 @@ Additional Inherited Members</h2></td></tr>
   <td class="mlabels-left">
       <table class="memname">
         <tr>
-          <td class="memname">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt;<a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a>, void&gt; tvm::meta_schedule::Postproc::DefaultVNNI </td>
+          <td class="memname">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt;<a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a>, void&gt; tvm::meta_schedule::Postproc::DefaultMicro </td>
           <td>(</td>
           <td class="paramname"></td><td>)</td>
           <td></td>
@@ -467,7 +467,7 @@ Additional Inherited Members</h2></td></tr>
 </table>
 </div><div class="memdoc">
 
-<p>Create default postprocessors for x86 VNNI. </p>
+<p>Create default postprocessors for Micro. </p>
 
 </div>
 </div>
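Taken together, the reshuffled member docs above enumerate one no-argument factory per backend. A hedged dispatch sketch over those factories, under the same header assumption as before; the string keys below are illustrative, only the Postproc::Default*() calls come from this page:

    #include <string>
    #include <tvm/meta_schedule/postproc.h>

    using tvm::meta_schedule::Postproc;
    using tvm::runtime::Array;

    // Select the documented default postprocessor set for a target kind.
    Array<Postproc> DefaultPostprocsFor(const std::string& kind) {
      if (kind == "cuda") return Postproc::DefaultCUDA();
      if (kind == "cuda-tensorcore") return Postproc::DefaultCUDATensorCore();
      if (kind == "hexagon") return Postproc::DefaultHexagon();
      if (kind == "micro") return Postproc::DefaultMicro();
      if (kind == "cpu-tensorization") return Postproc::DefaultCPUTensorization();
      return Postproc::DefaultLLVM();  // generic CPU (LLVM) default
    }
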
diff --git a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule-members.html b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule-members.html
index 90e2758c4b..eb5ad9a54f 100644
--- a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule-members.html
+++ b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule-members.html
@@ -83,7 +83,7 @@ $(function() {
   <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#acd4de1f7ace3a34603f8832ae1b3180b">DefaultHexagon</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
   <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a031b6dcad67f1d985aa30adb13e2b6e8">DefaultLLVM</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
   <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#ad181358bf6ca1951f0038f0691308bee">DefaultMicro</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#ab4b54d01446fee31cbcb1235bf8926cf">DefaultVNNI</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a5342931a76e2269970f132d0921e2f45">DefaultX86</a>(const String &amp;type)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
   <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a17d8d5ad92691f9e18e3e0ae8ef69e4f">defined</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
   <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#acd04bb22a6861e9952c344ee8547411f">DowncastNoCheck</a>(ObjectRef ref)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span><span class="mlabel">static</span></td></tr>
   <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a2c558d23de2ff6bf298bc7167a210859">FApply</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="entry"></td></tr>
diff --git a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule.html b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule.html
index f952956840..ba71b8af1e 100644
--- a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule.html
+++ b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule.html
@@ -193,9 +193,9 @@ Static Public Member Functions</h2></td></tr>
 <tr class="memitem:a031b6dcad67f1d985aa30adb13e2b6e8"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt; <a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">ScheduleRule</a>, void &gt;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a031b6dcad67f1d985aa30adb13e2b6e8">DefaultLLVM</a> ()</td></tr>
 <tr class="memdesc:a031b6dcad67f1d985aa30adb13e2b6e8"><td class="mdescLeft">&#160;</td><td class="mdescRight">Create default schedule rules for LLVM.  <a href="#a031b6dcad67f1d985aa30adb13e2b6e8">More...</a><br /></td></tr>
 <tr class="separator:a031b6dcad67f1d985aa30adb13e2b6e8"><td class="memSeparator" colspan="2">&#160;</td></tr>
-<tr class="memitem:ab4b54d01446fee31cbcb1235bf8926cf"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt; <a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">ScheduleRule</a>, void &gt;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#ab4b54d01446fee31cbcb1235bf8926cf">DefaultVNNI</a> ()</td></tr>
-<tr class="memdesc:ab4b54d01446fee31cbcb1235bf8926cf"><td class="mdescLeft">&#160;</td><td class="mdescRight">Create default schedule rules for x86 VNNI.  <a href="#ab4b54d01446fee31cbcb1235bf8926cf">More...</a><br /></td></tr>
-<tr class="separator:ab4b54d01446fee31cbcb1235bf8926cf"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:a5342931a76e2269970f132d0921e2f45"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt; <a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">ScheduleRule</a>, void &gt;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a5342931a76e2269970f132d0921e2f45">DefaultX86</a> (const <a class="el" href="classtvm_1_1runtim [...]
+<tr class="memdesc:a5342931a76e2269970f132d0921e2f45"><td class="mdescLeft">&#160;</td><td class="mdescRight">Create default schedule rules for x86 (AVX512 and VNNI)  <a href="#a5342931a76e2269970f132d0921e2f45">More...</a><br /></td></tr>
+<tr class="separator:a5342931a76e2269970f132d0921e2f45"><td class="memSeparator" colspan="2">&#160;</td></tr>
 <tr class="memitem:a77ab3dd14cbfcec7ed059559f7afc372"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt; <a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">ScheduleRule</a>, void &gt;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a77ab3dd14cbfcec7ed059559f7afc372">DefaultCUDA</a> ()</td></tr>
 <tr class="memdesc:a77ab3dd14cbfcec7ed059559f7afc372"><td class="mdescLeft">&#160;</td><td class="mdescRight">Create default schedule rules for CUDA.  <a href="#a77ab3dd14cbfcec7ed059559f7afc372">More...</a><br /></td></tr>
 <tr class="separator:a77ab3dd14cbfcec7ed059559f7afc372"><td class="memSeparator" colspan="2">&#160;</td></tr>
@@ -697,8 +697,8 @@ Additional Inherited Members</h2></td></tr>
 
 </div>
 </div>
-<a id="ab4b54d01446fee31cbcb1235bf8926cf"></a>
-<h2 class="memtitle"><span class="permalink"><a href="#ab4b54d01446fee31cbcb1235bf8926cf">&#9670;&nbsp;</a></span>DefaultVNNI()</h2>
+<a id="a5342931a76e2269970f132d0921e2f45"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#a5342931a76e2269970f132d0921e2f45">&#9670;&nbsp;</a></span>DefaultX86()</h2>
 
 <div class="memitem">
 <div class="memproto">
@@ -707,9 +707,10 @@ Additional Inherited Members</h2></td></tr>
   <td class="mlabels-left">
       <table class="memname">
         <tr>
-          <td class="memname">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt;<a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">ScheduleRule</a>, void&gt; tvm::meta_schedule::ScheduleRule::DefaultVNNI </td>
+          <td class="memname">static <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt;<a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">ScheduleRule</a>, void&gt; tvm::meta_schedule::ScheduleRule::DefaultX86 </td>
           <td>(</td>
-          <td class="paramname"></td><td>)</td>
+          <td class="paramtype">const <a class="el" href="classtvm_1_1runtime_1_1String.html">String</a> &amp;&#160;</td>
+          <td class="paramname"><em>type</em></td><td>)</td>
           <td></td>
         </tr>
       </table>
@@ -720,7 +721,7 @@ Additional Inherited Members</h2></td></tr>
 </table>
 </div><div class="memdoc">
 
-<p>Create default schedule rules for x86 VNNI. </p>
+<p>Create default schedule rules for x86 (AVX512 and VNNI). </p>
 
 </div>
 </div>
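Unlike the Postproc change, ScheduleRule replaces DefaultVNNI() with a parameterized DefaultX86(const String &amp;type). A sketch assuming the usual include path tvm/meta_schedule/schedule_rule.h and assuming that `type` names the instruction family from the summary ("avx512" or "vnni"); the exact accepted strings are not spelled out on this page:

    #include <tvm/meta_schedule/schedule_rule.h>

    int main() {
      using tvm::meta_schedule::ScheduleRule;
      using tvm::runtime::Array;
      using tvm::runtime::String;

      // "avx512" is an assumed value for `type`, inferred from the summary
      // "Create default schedule rules for x86 (AVX512 and VNNI)".
      Array<ScheduleRule> rules = ScheduleRule::DefaultX86(String("avx512"));
      return static_cast<int>(rules.size());
    }
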
diff --git a/docs/reference/api/doxygen/functions_d.html b/docs/reference/api/doxygen/functions_d.html
index 600117c447..6e5369a734 100644
--- a/docs/reference/api/doxygen/functions_d.html
+++ b/docs/reference/api/doxygen/functions_d.html
@@ -174,6 +174,9 @@ $(function() {
 <li>default_primitive_virtual_device
 : <a class="el" href="classtvm_1_1CompilationConfigNode.html#abe4569cf32c57b710be99b50e7118876">tvm::CompilationConfigNode</a>
 </li>
+<li>DefaultCPUTensorization()
+: <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a4fe2775d916e99f27815aac6df46fd0c">tvm::meta_schedule::Postproc</a>
+</li>
 <li>DefaultCUDA()
 : <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a6eb9b1298865cdeb5a8247a4e14454e3">tvm::meta_schedule::Mutator</a>
 , <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a799e989283bbfa92471829ab23179df5">tvm::meta_schedule::Postproc</a>
@@ -202,10 +205,8 @@ $(function() {
 , <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a83c92e6d1f474a65115e7c4a1216e631">tvm::meta_schedule::Postproc</a>
 , <a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#ad181358bf6ca1951f0038f0691308bee">tvm::meta_schedule::ScheduleRule</a>
 </li>
-<li>DefaultVNNI()
-: <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a8473324dbcbe078f021a58219a2cb687">tvm::meta_schedule::Mutator</a>
-, <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#ad8e2da27bbe3f41d69742d87a3232c4d">tvm::meta_schedule::Postproc</a>
-, <a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#ab4b54d01446fee31cbcb1235bf8926cf">tvm::meta_schedule::ScheduleRule</a>
+<li>DefaultX86()
+: <a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a5342931a76e2269970f132d0921e2f45">tvm::meta_schedule::ScheduleRule</a>
 </li>
 <li>DefEqual()
 : <a class="el" href="classtvm_1_1SEqualReducer.html#a62ba4c55928d4886853f9c33f4147340">tvm::SEqualReducer</a>
@@ -327,7 +328,7 @@ $(function() {
 : <a class="el" href="classtvm_1_1DiagnosticContext.html#a95a504685fb72779a8b63abb3e2923ea">tvm::DiagnosticContext</a>
 </li>
 <li>DiagnosticRenderer()
-: <a class="el" href="classtvm_1_1DiagnosticRenderer.html#aee223ebb9e5a875795e6536503e155ad">tvm::DiagnosticRenderer</a>
+: <a class="el" href="classtvm_1_1DiagnosticRenderer.html#a118215b25d3747423a3fa6af989b32df">tvm::DiagnosticRenderer</a>
 </li>
 <li>diagnostics
 : <a class="el" href="classtvm_1_1DiagnosticContextNode.html#ada207669f235f6aa8dbf310583a92339">tvm::DiagnosticContextNode</a>
@@ -339,7 +340,7 @@ $(function() {
 : <a class="el" href="classtvm_1_1DictAttrs.html#a3999d7e2b942c8f9993f6d51cb8f3ded">tvm::DictAttrs</a>
 </li>
 <li>DictDoc()
-: <a class="el" href="classtvm_1_1script_1_1printer_1_1DictDoc.html#a8cedc24d34db6c6a185912bb41df562d">tvm::script::printer::DictDoc</a>
+: <a class="el" href="classtvm_1_1script_1_1printer_1_1DictDoc.html#a60961545e317ab265c56f2c905db88b9">tvm::script::printer::DictDoc</a>
 </li>
 <li>difference_type
 : <a class="el" href="classtvm_1_1runtime_1_1IterAdapter.html#aa4c2c9d77272b79dad3c3a4ff392d186">tvm::runtime::IterAdapter&lt; Converter, TIter &gt;</a>
diff --git a/docs/reference/api/doxygen/functions_func_d.html b/docs/reference/api/doxygen/functions_func_d.html
index df751984e5..d188e0a442 100644
--- a/docs/reference/api/doxygen/functions_func_d.html
+++ b/docs/reference/api/doxygen/functions_func_d.html
@@ -99,6 +99,9 @@ $(function() {
 , <a class="el" href="classtvm_1_1meta__schedule_1_1MeasureCallback.html#a88ce90c3501edf83c42196f29920029f">tvm::meta_schedule::MeasureCallback</a>
 , <a class="el" href="classtvm_1_1VirtualDevice.html#a73364da6471b4634fb14abf10ce42f3c">tvm::VirtualDevice</a>
 </li>
+<li>DefaultCPUTensorization()
+: <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a4fe2775d916e99f27815aac6df46fd0c">tvm::meta_schedule::Postproc</a>
+</li>
 <li>DefaultCUDA()
 : <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a6eb9b1298865cdeb5a8247a4e14454e3">tvm::meta_schedule::Mutator</a>
 , <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a799e989283bbfa92471829ab23179df5">tvm::meta_schedule::Postproc</a>
@@ -127,10 +130,8 @@ $(function() {
 , <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a83c92e6d1f474a65115e7c4a1216e631">tvm::meta_schedule::Postproc</a>
 , <a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#ad181358bf6ca1951f0038f0691308bee">tvm::meta_schedule::ScheduleRule</a>
 </li>
-<li>DefaultVNNI()
-: <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a8473324dbcbe078f021a58219a2cb687">tvm::meta_schedule::Mutator</a>
-, <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#ad8e2da27bbe3f41d69742d87a3232c4d">tvm::meta_schedule::Postproc</a>
-, <a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#ab4b54d01446fee31cbcb1235bf8926cf">tvm::meta_schedule::ScheduleRule</a>
+<li>DefaultX86()
+: <a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a5342931a76e2269970f132d0921e2f45">tvm::meta_schedule::ScheduleRule</a>
 </li>
 <li>DefEqual()
 : <a class="el" href="classtvm_1_1SEqualReducer.html#a62ba4c55928d4886853f9c33f4147340">tvm::SEqualReducer</a>
diff --git a/docs/reference/api/doxygen/functions_m.html b/docs/reference/api/doxygen/functions_m.html
index 23a706ef21..e62547588e 100644
--- a/docs/reference/api/doxygen/functions_m.html
+++ b/docs/reference/api/doxygen/functions_m.html
@@ -344,7 +344,7 @@ $(function() {
 : <a class="el" href="classtvm_1_1DiagnosticContextNode.html#adea7e38a6e47cbab7fb5639f208aa536">tvm::DiagnosticContextNode</a>
 </li>
 <li>Module()
-: <a class="el" href="classtvm_1_1runtime_1_1Module.html#abfbc619b3b3166d63ec52e399c24bed9">tvm::runtime::Module</a>
+: <a class="el" href="classtvm_1_1runtime_1_1Module.html#abd1380b3f813c2b6acefca3aaef425f4">tvm::runtime::Module</a>
 , <a class="el" href="classtvm_1_1runtime_1_1ModuleNode.html#a21f639900c480510650969df9c74d17d">tvm::runtime::ModuleNode</a>
 </li>
 <li>module_handle
diff --git a/docs/reference/api/doxygen/functions_s.html b/docs/reference/api/doxygen/functions_s.html
index 37055f0b39..adc33c9652 100644
--- a/docs/reference/api/doxygen/functions_s.html
+++ b/docs/reference/api/doxygen/functions_s.html
@@ -1024,7 +1024,7 @@ $(function() {
 : <a class="el" href="classtvm_1_1script_1_1printer_1_1StmtDoc.html#adec8d59e41d8a4093fb310089bf2c3ba">tvm::script::printer::StmtDoc</a>
 </li>
 <li>StmtNode()
-: <a class="el" href="classtvm_1_1tir_1_1StmtNode.html#a67693c4e97ae49890ea74605fe1b1f74">tvm::tir::StmtNode</a>
+: <a class="el" href="classtvm_1_1tir_1_1StmtNode.html#a79e21b14d3ab57209577bf4a8f694a87">tvm::tir::StmtNode</a>
 </li>
 <li>stmts
 : <a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1TIRFrameNode.html#a13776bb5c2e5403138fbee06d4fdad40">tvm::script::ir_builder::tir::TIRFrameNode</a>
@@ -1135,7 +1135,7 @@ $(function() {
 , <a class="el" href="classtvm_1_1tir_1_1BufferNode.html#ac18ddd10b79a30ae57d3a8283686259d">tvm::tir::BufferNode</a>
 </li>
 <li>String()
-: <a class="el" href="classtvm_1_1runtime_1_1String.html#acf549b3c43142639879e0fc31ea5cd77">tvm::runtime::String</a>
+: <a class="el" href="classtvm_1_1runtime_1_1String.html#a02fca36e3ff55cc1e83635b02a11fca3">tvm::runtime::String</a>
 , <a class="el" href="classtvm_1_1runtime_1_1StringObj_1_1FromStd.html#a7fb804f7dc96dd9f705c84095f37f1ca">tvm::runtime::StringObj::FromStd</a>
 , <a class="el" href="classtvm_1_1runtime_1_1StringObj.html#a7fb804f7dc96dd9f705c84095f37f1ca">tvm::runtime::StringObj</a>
 </li>
diff --git a/docs/reference/api/doxygen/functions_t.html b/docs/reference/api/doxygen/functions_t.html
index 6a737ffccb..d2835c070c 100644
--- a/docs/reference/api/doxygen/functions_t.html
+++ b/docs/reference/api/doxygen/functions_t.html
@@ -1428,7 +1428,7 @@ $(function() {
 , <a class="el" href="classtvm_1_1runtime_1_1ObjectPtr.html#ae0ea8b4adc6dab8c74086bceaef6b3e1">tvm::runtime::ObjectPtr&lt; T &gt;</a>
 , <a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#ae0ea8b4adc6dab8c74086bceaef6b3e1">tvm::runtime::ObjectRef</a>
 , <a class="el" href="classtvm_1_1runtime_1_1TVMPODValue__.html#ae0ea8b4adc6dab8c74086bceaef6b3e1">tvm::runtime::TVMPODValue_</a>
-, <a class="el" href="classtvm_1_1runtime_1_1TVMRetValue.html#ac4a3850c0989e7c2d5cd8e0f096d0997">tvm::runtime::TVMRetValue</a>
+, <a class="el" href="classtvm_1_1runtime_1_1TVMRetValue.html#a77455a8fe7d27b90a01a64f1cd28e9ec">tvm::runtime::TVMRetValue</a>
 , <a class="el" href="classtvm_1_1runtime_1_1TypedPackedFunc_3_01R_07Args_8_8_8_08_4.html#ae0ea8b4adc6dab8c74086bceaef6b3e1">tvm::runtime::TypedPackedFunc&lt; R(Args...)&gt;</a>
 </li>
 <li>type
@@ -1500,7 +1500,7 @@ $(function() {
 : <a class="el" href="classtvm_1_1TypedEnvFunc_3_01R_07Args_8_8_8_08_4.html#a41a6b9014d0feeb628ca7edfd0d26f0b">tvm::TypedEnvFunc&lt; R(Args...)&gt;</a>
 </li>
 <li>TypedPackedFunc()
-: <a class="el" href="classtvm_1_1runtime_1_1TypedPackedFunc_3_01R_07Args_8_8_8_08_4.html#af45a2ceff92e6f6c394ea766a45027a0">tvm::runtime::TypedPackedFunc&lt; R(Args...)&gt;</a>
+: <a class="el" href="classtvm_1_1runtime_1_1TypedPackedFunc_3_01R_07Args_8_8_8_08_4.html#a6b346a6d0b601eff5a100c7a207e9c86">tvm::runtime::TypedPackedFunc&lt; R(Args...)&gt;</a>
 </li>
 <li>TypeIndex2Key()
 : <a class="el" href="classtvm_1_1runtime_1_1Object.html#a817ba6c23b7ee1821c48a75edf255a30">tvm::runtime::Object</a>
diff --git a/docs/reference/api/doxygen/functions_u.html b/docs/reference/api/doxygen/functions_u.html
index 9051d7808e..aee008c4c1 100644
--- a/docs/reference/api/doxygen/functions_u.html
+++ b/docs/reference/api/doxygen/functions_u.html
@@ -122,7 +122,7 @@ $(function() {
 , <a class="el" href="classtvm_1_1auto__scheduler_1_1CostModelNode.html#ae35b2b678760b8da57a43d3ae9c24da5">tvm::auto_scheduler::CostModelNode</a>
 , <a class="el" href="classtvm_1_1auto__scheduler_1_1PythonBasedModelNode.html#a2d7849df6c7dbe93bf363c1d9f860a26">tvm::auto_scheduler::PythonBasedModelNode</a>
 , <a class="el" href="classtvm_1_1auto__scheduler_1_1RandomModelNode.html#a7febac6c05d8e2d407f466467769ee32">tvm::auto_scheduler::RandomModelNode</a>
-, <a class="el" href="classtvm_1_1IRModuleNode.html#a94a93385e64ce844299729af6a573015">tvm::IRModuleNode</a>
+, <a class="el" href="classtvm_1_1IRModuleNode.html#abdd8936c6fca33ef9b7c086f8fd58f84">tvm::IRModuleNode</a>
 , <a class="el" href="classtvm_1_1meta__schedule_1_1CostModelNode.html#a1bba32eba84db583fe90d1a5bce085f1">tvm::meta_schedule::CostModelNode</a>
 , <a class="el" href="classtvm_1_1meta__schedule_1_1PyCostModelNode.html#a970b00b0eb1bf6b88eea2711b58c4d1d">tvm::meta_schedule::PyCostModelNode</a>
 </li>
diff --git a/docs/reference/api/doxygen/mutator_8h_source.html b/docs/reference/api/doxygen/mutator_8h_source.html
index cc62ab8701..4de1b9c425 100644
--- a/docs/reference/api/doxygen/mutator_8h_source.html
+++ b/docs/reference/api/doxygen/mutator_8h_source.html
@@ -66,7 +66,7 @@ $(function() {
 <div class="title">mutator.h</div>  </div>
 </div><!--header-->
 <div class="contents">
-<a href="mutator_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno">    1</span>&#160;<span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno">    2</span>&#160;<span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno">    3</span>&#160;<span class="comment"> * or more con [...]
+<a href="mutator_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno">    1</span>&#160;<span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno">    2</span>&#160;<span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno">    3</span>&#160;<span class="comment"> * or more con [...]
 <div class="ttc" id="trace_8h_html"><div class="ttname"><a href="trace_8h.html">trace.h</a></div></div>
 <div class="ttc" id="optional_8h_html"><div class="ttname"><a href="optional_8h.html">optional.h</a></div><div class="ttdoc">Runtime Optional container types. </div></div>
 <div class="ttc" id="random__engine_8h_html"><div class="ttname"><a href="random__engine_8h.html">random_engine.h</a></div><div class="ttdoc">Random number generator. It provides a generic interface consistent with std::uniform_random_bit_gene...</div></div>
@@ -74,9 +74,9 @@ $(function() {
 <div class="ttc" id="namespacetvm_html"><div class="ttname"><a href="namespacetvm.html">tvm</a></div><div class="ttdoc">runtime implementation for LibTorch/TorchScript. </div><div class="ttdef"><b>Definition:</b> analyzer.h:36</div></div>
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1MutatorNode_html_aa81faa50840d255a832cf6fdf078f8dd"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1MutatorNode.html#aa81faa50840d255a832cf6fdf078f8dd">tvm::meta_schedule::MutatorNode::Apply</a></div><div class="ttdeci">virtual Optional&lt; tir::Trace &gt; Apply(const tir::Trace &amp;trace, support::LinearCongruentialEngine::TRandState *rand_state)=0</div><div class="ttdoc">Apply the mutator function to the given trace. </div></div>
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1MutatorNode_html"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1MutatorNode.html">tvm::meta_schedule::MutatorNode</a></div><div class="ttdoc">Mutator is designed to mutate the trace to explore the design space. </div><div class="ttdef"><b>Definition:</b> mutator.h:38</div></div>
-<div class="ttc" id="classtvm_1_1meta__schedule_1_1PyMutatorNode_html_a70b3d67fe3074d54d13c7b1dc43e186e"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1PyMutatorNode.html#a70b3d67fe3074d54d13c7b1dc43e186e">tvm::meta_schedule::PyMutatorNode::f_apply</a></div><div class="ttdeci">FApply f_apply</div><div class="ttdoc">The packed function to the Apply function. </div><div class="ttdef"><b>Definition:</b> mutator.h:158</div></div>
+<div class="ttc" id="classtvm_1_1meta__schedule_1_1PyMutatorNode_html_a70b3d67fe3074d54d13c7b1dc43e186e"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1PyMutatorNode.html#a70b3d67fe3074d54d13c7b1dc43e186e">tvm::meta_schedule::PyMutatorNode::f_apply</a></div><div class="ttdeci">FApply f_apply</div><div class="ttdoc">The packed function to the Apply function. </div><div class="ttdef"><b>Definition:</b> mutator.h:156</div></div>
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1MutatorNode_html_a2826ab3e526c41fbcb17700a58b9a592"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1MutatorNode.html#a2826ab3e526c41fbcb17700a58b9a592">tvm::meta_schedule::MutatorNode::TVM_DECLARE_BASE_OBJECT_INFO</a></div><div class="ttdeci">TVM_DECLARE_BASE_OBJECT_INFO(MutatorNode, Object)</div></div>
-<div class="ttc" id="classtvm_1_1meta__schedule_1_1PyMutatorNode_html_a2b9b6129b0660c684b07c2f505021f2f"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1PyMutatorNode.html#a2b9b6129b0660c684b07c2f505021f2f">tvm::meta_schedule::PyMutatorNode::f_initialize_with_tune_context</a></div><div class="ttdeci">FInitializeWithTuneContext f_initialize_with_tune_context</div><div class="ttdoc">The packed function to the InitializeWithTuneContext function. </div><div class="ttdef"><b>Defini [...]
+<div class="ttc" id="classtvm_1_1meta__schedule_1_1PyMutatorNode_html_a2b9b6129b0660c684b07c2f505021f2f"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1PyMutatorNode.html#a2b9b6129b0660c684b07c2f505021f2f">tvm::meta_schedule::PyMutatorNode::f_initialize_with_tune_context</a></div><div class="ttdeci">FInitializeWithTuneContext f_initialize_with_tune_context</div><div class="ttdoc">The packed function to the InitializeWithTuneContext function. </div><div class="ttdef"><b>Defini [...]
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1MutatorNode_html_a267b4657b2116142d4635ff53fbedf8c"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1MutatorNode.html#a267b4657b2116142d4635ff53fbedf8c">tvm::meta_schedule::MutatorNode::~MutatorNode</a></div><div class="ttdeci">virtual ~MutatorNode()=default</div><div class="ttdoc">Virtual destructor. </div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1Object_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></div><div class="ttdoc">base class of all object containers. </div><div class="ttdef"><b>Definition:</b> object.h:167</div></div>
 <div class="ttc" id="object_8h_html_aaaa3dc5b6dc33f84b2d28f9a81267212"><div class="ttname"><a href="object_8h.html#aaaa3dc5b6dc33f84b2d28f9a81267212">TVM_DEFINE_MUTABLE_OBJECT_REF_METHODS</a></div><div class="ttdeci">#define TVM_DEFINE_MUTABLE_OBJECT_REF_METHODS(TypeName, ParentType, ObjectName)</div><div class="ttdef"><b>Definition:</b> object.h:744</div></div>
@@ -86,10 +86,10 @@ $(function() {
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1Mutator_html_afc0d122e314d403b9d1abff9664deb1f"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1Mutator.html#afc0d122e314d403b9d1abff9664deb1f">tvm::meta_schedule::Mutator::FAsString</a></div><div class="ttdeci">runtime::TypedPackedFunc&lt; String()&gt; FAsString</div><div class="ttdoc">Get the mutator as string with name. </div><div class="ttdef"><b>Definition:</b> mutator.h:98</div></div>
 <div class="ttc" id="classtvm_1_1AttrVisitor_html"><div class="ttname"><a href="classtvm_1_1AttrVisitor.html">tvm::AttrVisitor</a></div><div class="ttdoc">Visitor class to get the attributes of an AST/IR node. The content is going to be called for each fie...</div><div class="ttdef"><b>Definition:</b> reflection.h:52</div></div>
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1Mutator_html_ade6fc51af24708ee525c45a304ba342e"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1Mutator.html#ade6fc51af24708ee525c45a304ba342e">tvm::meta_schedule::Mutator::FApply</a></div><div class="ttdeci">runtime::TypedPackedFunc&lt; Optional&lt; tir::Trace &gt;(const tir::Trace &amp;, support::LinearCongruentialEngine::TRandState rand_state)&gt; FApply</div><div class="ttdoc">Apply the mutator function to the given trace. [...]
-<div class="ttc" id="classtvm_1_1meta__schedule_1_1PyMutatorNode_html_a72f6abc5491f220b4bc68ca399cd4d22"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1PyMutatorNode.html#a72f6abc5491f220b4bc68ca399cd4d22">tvm::meta_schedule::PyMutatorNode::VisitAttrs</a></div><div class="ttdeci">void VisitAttrs(tvm::AttrVisitor *v)</div><div class="ttdef"><b>Definition:</b> mutator.h:164</div></div>
-<div class="ttc" id="classtvm_1_1meta__schedule_1_1PyMutatorNode_html"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1PyMutatorNode.html">tvm::meta_schedule::PyMutatorNode</a></div><div class="ttdoc">The mutator with customized methods on the python-side. </div><div class="ttdef"><b>Definition:</b> mutator.h:149</div></div>
+<div class="ttc" id="classtvm_1_1meta__schedule_1_1PyMutatorNode_html_a72f6abc5491f220b4bc68ca399cd4d22"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1PyMutatorNode.html#a72f6abc5491f220b4bc68ca399cd4d22">tvm::meta_schedule::PyMutatorNode::VisitAttrs</a></div><div class="ttdeci">void VisitAttrs(tvm::AttrVisitor *v)</div><div class="ttdef"><b>Definition:</b> mutator.h:162</div></div>
+<div class="ttc" id="classtvm_1_1meta__schedule_1_1PyMutatorNode_html"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1PyMutatorNode.html">tvm::meta_schedule::PyMutatorNode</a></div><div class="ttdoc">The mutator with customized methods on the python-side. </div><div class="ttdef"><b>Definition:</b> mutator.h:147</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1TypedPackedFunc_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1TypedPackedFunc.html">tvm::runtime::TypedPackedFunc</a></div><div class="ttdoc">Please refer to TypedPackedFunc&lt;R(Args..)&gt;. </div><div class="ttdef"><b>Definition:</b> packed_func.h:60</div></div>
-<div class="ttc" id="classtvm_1_1meta__schedule_1_1PyMutatorNode_html_ac4684cd645c50ab256c21100f2e175d0"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1PyMutatorNode.html#ac4684cd645c50ab256c21100f2e175d0">tvm::meta_schedule::PyMutatorNode::f_clone</a></div><div class="ttdeci">FClone f_clone</div><div class="ttdoc">The packed function to the Clone function. </div><div class="ttdef"><b>Definition:</b> mutator.h:160</div></div>
+<div class="ttc" id="classtvm_1_1meta__schedule_1_1PyMutatorNode_html_ac4684cd645c50ab256c21100f2e175d0"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1PyMutatorNode.html#ac4684cd645c50ab256c21100f2e175d0">tvm::meta_schedule::PyMutatorNode::f_clone</a></div><div class="ttdeci">FClone f_clone</div><div class="ttdoc">The packed function to the Clone function. </div><div class="ttdef"><b>Definition:</b> mutator.h:158</div></div>
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1MutatorNode_html_a6ff8406b5ebe26f20fce634e567b50b7"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1MutatorNode.html#a6ff8406b5ebe26f20fce634e567b50b7">tvm::meta_schedule::MutatorNode::InitializeWithTuneContext</a></div><div class="ttdeci">virtual void InitializeWithTuneContext(const TuneContext &amp;context)=0</div><div class="ttdoc">Initialize the design space generator with tuning context. </div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1ObjectRef_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></div><div class="ttdoc">Base class of all object reference. </div><div class="ttdef"><b>Definition:</b> object.h:511</div></div>
 <div class="ttc" id="tir_2schedule_2schedule_8h_html"><div class="ttname"><a href="tir_2schedule_2schedule_8h.html">schedule.h</a></div></div>
@@ -103,7 +103,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1Mutator_html_aef9bdcd9ecc168cccb807de472d29630"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1Mutator.html#aef9bdcd9ecc168cccb807de472d29630">tvm::meta_schedule::Mutator::FInitializeWithTuneContext</a></div><div class="ttdeci">runtime::TypedPackedFunc&lt; void(const TuneContext &amp;)&gt; FInitializeWithTuneContext</div><div class="ttdoc">The function type of InitializeWithTuneContext method. </div><div class="ttdef"><b>Defi [...]
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1Mutator_html"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></div><div class="ttdoc">Managed reference to MutatorNode. </div><div class="ttdef"><b>Definition:</b> mutator.h:75</div></div>
 <div class="ttc" id="packed__func_8h_html"><div class="ttname"><a href="packed__func_8h.html">packed_func.h</a></div><div class="ttdoc">Type-erased function used across TVM API. </div></div>
-<div class="ttc" id="classtvm_1_1meta__schedule_1_1PyMutatorNode_html_a602a946a0cc8cbd733be06d7dcd19344"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1PyMutatorNode.html#a602a946a0cc8cbd733be06d7dcd19344">tvm::meta_schedule::PyMutatorNode::f_as_string</a></div><div class="ttdeci">FAsString f_as_string</div><div class="ttdoc">The packed function to the AsString function. </div><div class="ttdef"><b>Definition:</b> mutator.h:162</div></div>
+<div class="ttc" id="classtvm_1_1meta__schedule_1_1PyMutatorNode_html_a602a946a0cc8cbd733be06d7dcd19344"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1PyMutatorNode.html#a602a946a0cc8cbd733be06d7dcd19344">tvm::meta_schedule::PyMutatorNode::f_as_string</a></div><div class="ttdeci">FAsString f_as_string</div><div class="ttdoc">The packed function to the AsString function. </div><div class="ttdef"><b>Definition:</b> mutator.h:160</div></div>
 </div><!-- fragment --></div><!-- contents -->
 <!-- start footer part -->
 <hr class="footer"/><address class="footer"><small>
diff --git a/docs/reference/api/doxygen/postproc_8h_source.html b/docs/reference/api/doxygen/postproc_8h_source.html
index 009a66d6b7..cd1650d3ca 100644
--- a/docs/reference/api/doxygen/postproc_8h_source.html
+++ b/docs/reference/api/doxygen/postproc_8h_source.html
@@ -66,7 +66,7 @@ $(function() {
 <div class="title">postproc.h</div>  </div>
 </div><!--header-->
 <div class="contents">
-<a href="postproc_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno">    1</span>&#160;<span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno">    2</span>&#160;<span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno">    3</span>&#160;<span class="comment"> * or more co [...]
+<a href="postproc_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno">    1</span>&#160;<span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno">    2</span>&#160;<span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno">    3</span>&#160;<span class="comment"> * or more co [...]
 <div class="ttc" id="namespacetvm_html"><div class="ttname"><a href="namespacetvm.html">tvm</a></div><div class="ttdoc">runtime implementation for LibTorch/TorchScript. </div><div class="ttdef"><b>Definition:</b> analyzer.h:36</div></div>
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1Postproc_html_a54da13f2c14d0df15478e61386cf1a3a"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1Postproc.html#a54da13f2c14d0df15478e61386cf1a3a">tvm::meta_schedule::Postproc::FInitializeWithTuneContext</a></div><div class="ttdeci">runtime::TypedPackedFunc&lt; void(const TuneContext &amp;)&gt; FInitializeWithTuneContext</div><div class="ttdoc">The function type of InitializeWithTuneContext method. </div><div class="ttdef"><b>D [...]
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1Postproc_html"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></div><div class="ttdoc">Managed reference to PostprocNode. </div><div class="ttdef"><b>Definition:</b> postproc.h:72</div></div>
diff --git a/docs/reference/api/doxygen/schedule__rule_8h_source.html b/docs/reference/api/doxygen/schedule__rule_8h_source.html
index 0bf61370bc..713f0306b9 100644
--- a/docs/reference/api/doxygen/schedule__rule_8h_source.html
+++ b/docs/reference/api/doxygen/schedule__rule_8h_source.html
@@ -66,7 +66,7 @@ $(function() {
 <div class="title">schedule_rule.h</div>  </div>
 </div><!--header-->
 <div class="contents">
-<a href="schedule__rule_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno">    1</span>&#160;<span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno">    2</span>&#160;<span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno">    3</span>&#160;<span class="comment"> * or m [...]
+<a href="schedule__rule_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno">    1</span>&#160;<span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno">    2</span>&#160;<span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno">    3</span>&#160;<span class="comment"> * or m [...]
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1ScheduleRuleNode_html_a5de55e66ecb7a81ce105d37a41ce45e7"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1ScheduleRuleNode.html#a5de55e66ecb7a81ce105d37a41ce45e7">tvm::meta_schedule::ScheduleRuleNode::InitializeWithTuneContext</a></div><div class="ttdeci">virtual void InitializeWithTuneContext(const TuneContext &amp;context)=0</div><div class="ttdoc">Initialize the design space generator with tuning context. </div></div>
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1ScheduleRuleNode_html_a8505847517d6f194e4b1679a0b46b147"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1ScheduleRuleNode.html#a8505847517d6f194e4b1679a0b46b147">tvm::meta_schedule::ScheduleRuleNode::Clone</a></div><div class="ttdeci">virtual ScheduleRule Clone() const =0</div><div class="ttdoc">Deep clone the schedule rule. </div></div>
 <div class="ttc" id="optional_8h_html"><div class="ttname"><a href="optional_8h.html">optional.h</a></div><div class="ttdoc">Runtime Optional container types. </div></div>
diff --git a/docs/reference/api/doxygen/search/all_10.js b/docs/reference/api/doxygen/search/all_10.js
index f5528be69c..1e9853269b 100644
--- a/docs/reference/api/doxygen/search/all_10.js
+++ b/docs/reference/api/doxygen/search/all_10.js
@@ -33,7 +33,7 @@ var searchData=
   ['onehotattrs',['OneHotAttrs',['../structtvm_1_1relay_1_1OneHotAttrs.html',1,'tvm::relay']]],
   ['onesided',['onesided',['../structtvm_1_1relay_1_1StftAttrs.html#a23bb87eed8fca94613a4e2d8d7f22858',1,'tvm::relay::StftAttrs']]],
   ['oobchecker',['OOBChecker',['../namespacetvm_1_1tir_1_1transform.html#aea27d24b6e7852652d258268d8537b66',1,'tvm::tir::transform']]],
-  ['op',['Op',['../classtvm_1_1Op.html',1,'tvm::Op'],['../classtvm_1_1auto__scheduler_1_1StageNode.html#a97824d055f598a0dc93d601d9881797e',1,'tvm::auto_scheduler::StageNode::op()'],['../classtvm_1_1relay_1_1CallPatternNode.html#af0827599611846bb2952ffbfe3a9a60e',1,'tvm::relay::CallPatternNode::op()'],['../classtvm_1_1relay_1_1CallNode.html#ade66944f5a2f064e4eb07ad9f9438306',1,'tvm::relay::CallNode::op()'],['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#aa16a3e7e4030a69da0def6465d65e7 [...]
+  ['op',['Op',['../classtvm_1_1Op.html',1,'tvm::Op'],['../classtvm_1_1OpAttrMap.html#a2c31e8a3c11caeb061d69db14ebb0e95',1,'tvm::OpAttrMap::Op()'],['../classtvm_1_1auto__scheduler_1_1StageNode.html#a97824d055f598a0dc93d601d9881797e',1,'tvm::auto_scheduler::StageNode::op()'],['../classtvm_1_1relay_1_1CallPatternNode.html#af0827599611846bb2952ffbfe3a9a60e',1,'tvm::relay::CallPatternNode::op()'],['../classtvm_1_1relay_1_1CallNode.html#ade66944f5a2f064e4eb07ad9f9438306',1,'tvm::relay::CallNod [...]
   ['op_2eh',['op.h',['../ir_2op_8h.html',1,'(Global Namespace)'],['../relay_2op_8h.html',1,'(Global Namespace)'],['../tir_2op_8h.html',1,'(Global Namespace)']]],
   ['op2stage_5fcache_5f',['op2stage_cache_',['../classtvm_1_1te_1_1ScheduleNode.html#adbc8bfb6812add2173dcc7a6adb85d5c',1,'tvm::te::ScheduleNode']]],
   ['op_5fattr_5ftypes_2eh',['op_attr_types.h',['../relay_2op__attr__types_8h.html',1,'(Global Namespace)'],['../tir_2op__attr__types_8h.html',1,'(Global Namespace)']]],
diff --git a/docs/reference/api/doxygen/search/all_11.js b/docs/reference/api/doxygen/search/all_11.js
index c1d55270f7..197265282c 100644
--- a/docs/reference/api/doxygen/search/all_11.js
+++ b/docs/reference/api/doxygen/search/all_11.js
@@ -165,7 +165,7 @@ var searchData=
   ['predict_5ffunc',['predict_func',['../classtvm_1_1auto__scheduler_1_1PythonBasedModelNode.html#aa051c804bc592d7f4f1a5b5710f73595',1,'tvm::auto_scheduler::PythonBasedModelNode']]],
   ['predict_5fstage_5ffunc',['predict_stage_func',['../classtvm_1_1auto__scheduler_1_1PythonBasedModelNode.html#a380809fbb5d4d68b9ec744e3a5015fe6',1,'tvm::auto_scheduler::PythonBasedModelNode']]],
   ['predictstages',['PredictStages',['../classtvm_1_1auto__scheduler_1_1CostModelNode.html#a213222251099444874698d2e9ff18adc',1,'tvm::auto_scheduler::CostModelNode::PredictStages()'],['../classtvm_1_1auto__scheduler_1_1PythonBasedModelNode.html#a1f9975c4bdd61793b806663a61a9a703',1,'tvm::auto_scheduler::PythonBasedModelNode::PredictStages()']]],
-  ['prefetch',['Prefetch',['../classtvm_1_1tir_1_1Prefetch.html',1,'tvm::tir::Prefetch'],['../classtvm_1_1tir_1_1Prefetch.html#af462f85dad4268685e3113b6b009d1b2',1,'tvm::tir::Prefetch::Prefetch()'],['../classtvm_1_1te_1_1Stage.html#a611327890918fb641a8e65396ab9c5f6',1,'tvm::te::Stage::prefetch()'],['../namespacetvm_1_1tir_1_1builtin.html#a543f1fc334d2bc830add972895a03f17',1,'tvm::tir::builtin::prefetch()'],['../namespacetvm_1_1script_1_1ir__builder_1_1tir.html#aeb707d56c770edb33ebf73da27 [...]
+  ['prefetch',['Prefetch',['../classtvm_1_1tir_1_1Prefetch.html',1,'tvm::tir::Prefetch'],['../classtvm_1_1te_1_1Stage.html#a611327890918fb641a8e65396ab9c5f6',1,'tvm::te::Stage::prefetch()'],['../classtvm_1_1tir_1_1Prefetch.html#af462f85dad4268685e3113b6b009d1b2',1,'tvm::tir::Prefetch::Prefetch()'],['../namespacetvm_1_1tir_1_1builtin.html#a543f1fc334d2bc830add972895a03f17',1,'tvm::tir::builtin::prefetch()'],['../namespacetvm_1_1script_1_1ir__builder_1_1tir.html#aeb707d56c770edb33ebf73da27 [...]
   ['prefetch_5fdata',['prefetch_data',['../classtvm_1_1te_1_1IterVarAttrNode.html#a0cd129334ac1bc8d6461fb06be67e731',1,'tvm::te::IterVarAttrNode']]],
   ['prefetch_5foffset',['prefetch_offset',['../classtvm_1_1te_1_1IterVarAttrNode.html#a2a4a8e201e6caefeecffd4a7647866fd',1,'tvm::te::IterVarAttrNode']]],
   ['prefetch_5fscope',['prefetch_scope',['../namespacetvm_1_1tir_1_1attr.html#ac95fbd1c09a60b10c7a5d07f6c4b68a6',1,'tvm::tir::attr']]],
diff --git a/docs/reference/api/doxygen/search/all_13.js b/docs/reference/api/doxygen/search/all_13.js
index 0350a3fa4b..20a85e2f10 100644
--- a/docs/reference/api/doxygen/search/all_13.js
+++ b/docs/reference/api/doxygen/search/all_13.js
@@ -12,7 +12,7 @@ var searchData=
   ['randomcomputelocation',['RandomComputeLocation',['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#a1bf485537817533eaf711226f687778c',1,'tvm::meta_schedule::ScheduleRule']]],
   ['randommodel',['RandomModel',['../classtvm_1_1auto__scheduler_1_1RandomModel.html',1,'tvm::auto_scheduler::RandomModel'],['../classtvm_1_1auto__scheduler_1_1RandomModel.html#aa456abf1dc91cbf76935189424d8954f',1,'tvm::auto_scheduler::RandomModel::RandomModel()'],['../classtvm_1_1auto__scheduler_1_1RandomModel.html#ac2b355e61135f2ff57d4f96fe2fba845',1,'tvm::auto_scheduler::RandomModel::RandomModel(::tvm::runtime::ObjectPtr&lt;::tvm::runtime::Object &gt; n)']]],
   ['randommodelnode',['RandomModelNode',['../classtvm_1_1auto__scheduler_1_1RandomModelNode.html',1,'tvm::auto_scheduler']]],
-  ['range',['Range',['../classtvm_1_1Range.html',1,'tvm::Range'],['../classtvm_1_1auto__scheduler_1_1IteratorNode.html#a2751c3164971b3154ffc506e3aebaf91',1,'tvm::auto_scheduler::IteratorNode::range()'],['../classtvm_1_1Range.html#a9d58cccc53897fee0c80ab1437da1f0f',1,'tvm::Range::Range()']]],
+  ['range',['Range',['../classtvm_1_1Range.html',1,'tvm::Range'],['../classtvm_1_1Range.html#a9d58cccc53897fee0c80ab1437da1f0f',1,'tvm::Range::Range()'],['../classtvm_1_1auto__scheduler_1_1IteratorNode.html#a2751c3164971b3154ffc506e3aebaf91',1,'tvm::auto_scheduler::IteratorNode::range()']]],
   ['rangenode',['RangeNode',['../classtvm_1_1RangeNode.html',1,'tvm::RangeNode'],['../classtvm_1_1RangeNode.html#ab845f7ed4ed85e360b730df3450d1aab',1,'tvm::RangeNode::RangeNode()'],['../classtvm_1_1RangeNode.html#a4bbc33969cb484c20306da1d2b9fa1fd',1,'tvm::RangeNode::RangeNode(PrimExpr min, PrimExpr extent, Span span=Span())']]],
   ['ranges',['ranges',['../classtvm_1_1arith_1_1IntConstraintsNode.html#ab23d4d806766c88b0df69dbfb5ebd63c',1,'tvm::arith::IntConstraintsNode']]],
   ['rate',['rate',['../structtvm_1_1relay_1_1DropoutAttrs.html#a0b5a52c24a1be53dbb122a1df9fe22af',1,'tvm::relay::DropoutAttrs']]],
@@ -84,7 +84,7 @@ var searchData=
   ['registerconfigoption',['RegisterConfigOption',['../classtvm_1_1transform_1_1PassContext.html#a6f1d1040cc97320414b4690203f87919',1,'tvm::transform::PassContext']]],
   ['registergenericfunc',['RegisterGenericFunc',['../classtvm_1_1GenericFunc.html#a909acecbf2f34f847a34e587a4570dce',1,'tvm::GenericFunc']]],
   ['registerorget',['RegisterOrGet',['../classtvm_1_1OpRegEntry.html#a39a4d3e7f905eb4e29ca464bcedb05bd',1,'tvm::OpRegEntry::RegisterOrGet()'],['../classtvm_1_1relay_1_1ExecutorRegEntry.html#a03347a2b68269b853a7c0399994951ef',1,'tvm::relay::ExecutorRegEntry::RegisterOrGet()'],['../classtvm_1_1relay_1_1RuntimeRegEntry.html#ae8b479159ccd8b35b75950fcda58dd9d',1,'tvm::relay::RuntimeRegEntry::RegisterOrGet()'],['../classtvm_1_1TargetTagRegEntry.html#a07e0631600484dc0985ca62b1620461c',1,'tvm::T [...]
-  ['registry',['Registry',['../classtvm_1_1ReflectionVTable_1_1Registry.html',1,'tvm::ReflectionVTable::Registry'],['../classtvm_1_1runtime_1_1Registry.html',1,'tvm::runtime::Registry'],['../classtvm_1_1ReflectionVTable_1_1Registry.html#ac8f4637640aa9dffed745303a4cfa827',1,'tvm::ReflectionVTable::Registry::Registry()'],['../structTVMMutableFuncRegistry.html#acc1fcd6554c627c1bf3b3c00e1120e9b',1,'TVMMutableFuncRegistry::registry()'],['../structTVMModule.html#a6db21005b9e983207b341e65af4c4a [...]
+  ['registry',['Registry',['../classtvm_1_1ReflectionVTable_1_1Registry.html',1,'tvm::ReflectionVTable::Registry'],['../classtvm_1_1runtime_1_1Registry.html',1,'tvm::runtime::Registry'],['../structTVMMutableFuncRegistry.html#acc1fcd6554c627c1bf3b3c00e1120e9b',1,'TVMMutableFuncRegistry::registry()'],['../structTVMModule.html#a6db21005b9e983207b341e65af4c4ab7',1,'TVMModule::registry()'],['../classtvm_1_1ReflectionVTable_1_1Registry.html#ac8f4637640aa9dffed745303a4cfa827',1,'tvm::Reflection [...]
   ['registry_2eh',['registry.h',['../registry_8h.html',1,'']]],
   ['regname',['RegName',['../namespacetvm_1_1runtime_1_1vm.html#a3bbbf700719e9dc3dda2bc25210c18ae',1,'tvm::runtime::vm']]],
   ['reindex',['ReIndex',['../classtvm_1_1tir_1_1ScheduleNode.html#a9e36a8a0e37a76e55068dd534e28c8c5',1,'tvm::tir::ScheduleNode']]],
@@ -149,7 +149,7 @@ var searchData=
   ['reserve',['reserve',['../classtvm_1_1runtime_1_1Array.html#a1a7727b86efaf35c58a5198ab1c139c8',1,'tvm::runtime::Array']]],
   ['reserveglobalvar',['ReserveGlobalVar',['../classtvm_1_1GlobalVarSupplyNode.html#a29185b94238fc62c928346a004c43b16',1,'tvm::GlobalVarSupplyNode']]],
   ['reservename',['ReserveName',['../classtvm_1_1NameSupplyNode.html#a9feb960ebeeee03fb9c5105655a8da17',1,'tvm::NameSupplyNode']]],
-  ['reset',['Reset',['../classtvm_1_1auto__scheduler_1_1ProgramMeasurerNode.html#a73b14ea360a9902c291d5bf6e97636cd',1,'tvm::auto_scheduler::ProgramMeasurerNode::Reset()'],['../classtvm_1_1runtime_1_1micro__rpc_1_1Unframer.html#ae6279154fe70e9eb85937b51e70a4bf8',1,'tvm::runtime::micro_rpc::Unframer::Reset()'],['../classtvm_1_1runtime_1_1micro__rpc_1_1Framer.html#a44ff9650ecca8785e33c25c369d2570a',1,'tvm::runtime::micro_rpc::Framer::Reset()'],['../classtvm_1_1tir_1_1StmtSRefNode.html#a0a81 [...]
+  ['reset',['reset',['../classtvm_1_1runtime_1_1NDArray.html#af2a8ccab95d432d1ecad7a389e11bcd3',1,'tvm::runtime::NDArray::reset()'],['../classtvm_1_1runtime_1_1ObjectPtr.html#ac4461465ba0e785794794e0405c96590',1,'tvm::runtime::ObjectPtr::reset()'],['../classtvm_1_1auto__scheduler_1_1ProgramMeasurerNode.html#a73b14ea360a9902c291d5bf6e97636cd',1,'tvm::auto_scheduler::ProgramMeasurerNode::Reset()'],['../classtvm_1_1runtime_1_1micro__rpc_1_1Unframer.html#ae6279154fe70e9eb85937b51e70a4bf8',1, [...]
   ['reset_5fattr',['reset_attr',['../classtvm_1_1OpRegEntry.html#a67628f8d3d6dea5b0a47e462c06b7790',1,'tvm::OpRegEntry']]],
   ['resetthreadpool',['ResetThreadPool',['../namespacetvm_1_1runtime_1_1threading.html#aafdb21c00248ff146b614a7e888b4fd7',1,'tvm::runtime::threading']]],
   ['reshape',['reshape',['../namespacetvm_1_1topi.html#a3aad65f2505802109ba7d05359ce9005',1,'tvm::topi']]],
@@ -198,7 +198,7 @@ var searchData=
   ['rewritetensorize',['RewriteTensorize',['../classtvm_1_1meta__schedule_1_1Postproc.html#a95db036cfced4c2575367a26a41498ff',1,'tvm::meta_schedule::Postproc']]],
   ['rewriteunboundblock',['RewriteUnboundBlock',['../classtvm_1_1meta__schedule_1_1Postproc.html#a1836b2278bc24fdc227c490896d92980',1,'tvm::meta_schedule::Postproc']]],
   ['rewriteunsafeselect',['RewriteUnsafeSelect',['../namespacetvm_1_1tir_1_1transform.html#a4fe43327c4454dd05b6e925577443f49',1,'tvm::tir::transform']]],
-  ['rfactor',['RFactor',['../classtvm_1_1tir_1_1ScheduleNode.html#ab185c8eac1065290d84d58e7f4617232',1,'tvm::tir::ScheduleNode::RFactor()'],['../classtvm_1_1auto__scheduler_1_1State.html#a21c27b06d439267f8b981fa05c5f48a0',1,'tvm::auto_scheduler::State::rfactor()'],['../classtvm_1_1te_1_1Schedule.html#a34ae85add41bbed0140726d024d08862',1,'tvm::te::Schedule::rfactor()']]],
+  ['rfactor',['rfactor',['../classtvm_1_1auto__scheduler_1_1State.html#a21c27b06d439267f8b981fa05c5f48a0',1,'tvm::auto_scheduler::State::rfactor()'],['../classtvm_1_1te_1_1Schedule.html#a34ae85add41bbed0140726d024d08862',1,'tvm::te::Schedule::rfactor()'],['../classtvm_1_1tir_1_1ScheduleNode.html#ab185c8eac1065290d84d58e7f4617232',1,'tvm::tir::ScheduleNode::RFactor()']]],
   ['rfactorstep',['RfactorStep',['../classtvm_1_1auto__scheduler_1_1RfactorStep.html',1,'tvm::auto_scheduler::RfactorStep'],['../classtvm_1_1auto__scheduler_1_1RfactorStep.html#a26e6f85b55307f18fab4469e3bd4be0c',1,'tvm::auto_scheduler::RfactorStep::RfactorStep(int stage_id, int iter_id, int factor_iter_id)'],['../classtvm_1_1auto__scheduler_1_1RfactorStep.html#a95575c21441177634178245ab562cb4f',1,'tvm::auto_scheduler::RfactorStep::RfactorStep(dmlc::JSONReader *reader)']]],
   ['rfactorstepnode',['RfactorStepNode',['../classtvm_1_1auto__scheduler_1_1RfactorStepNode.html',1,'tvm::auto_scheduler']]],
   ['rhs',['rhs',['../classtvm_1_1relay_1_1ClauseNode.html#a93217eeea15c1f7c1a659da3da86d3bd',1,'tvm::relay::ClauseNode::rhs()'],['../classtvm_1_1script_1_1printer_1_1AssignDocNode.html#a436fcace00d445213fc367ece59c4067',1,'tvm::script::printer::AssignDocNode::rhs()'],['../classtvm_1_1script_1_1printer_1_1ForDocNode.html#aa72614136675287310ea08520f596642',1,'tvm::script::printer::ForDocNode::rhs()'],['../classtvm_1_1script_1_1printer_1_1ScopeDocNode.html#abf3636ac2820118a3d48f2fea32b2b0b' [...]
diff --git a/docs/reference/api/doxygen/search/all_14.js b/docs/reference/api/doxygen/search/all_14.js
index 3c85ae6b2c..5265cba45a 100644
--- a/docs/reference/api/doxygen/search/all_14.js
+++ b/docs/reference/api/doxygen/search/all_14.js
@@ -179,7 +179,7 @@ var searchData=
   ['setvalue_3c_20uint64_5ft_20_3e',['SetValue&lt; uint64_t &gt;',['../namespacetvm_1_1detail.html#acb3382242cbf538f64edae13e4ec5a84',1,'tvm::detail']]],
   ['shallowcopy',['ShallowCopy',['../classtvm_1_1IRModuleNode.html#a86bbdc4b857ce5958a2b5f29e1d6fcb6',1,'tvm::IRModuleNode']]],
   ['shallowcopyirmodule',['ShallowCopyIRModule',['../classtvm_1_1IRModule.html#aea8b821cf92cf525bd87bf15f5d31889',1,'tvm::IRModule']]],
-  ['shape',['Shape',['../classtvm_1_1runtime_1_1NDArray.html#ad273c7bc59b73fb026fd64fc764cbebc',1,'tvm::runtime::NDArray::Shape()'],['../classtvm_1_1TensorTypeNode.html#a98fa347833e4504dd6f8056d9863a708',1,'tvm::TensorTypeNode::shape()'],['../classtvm_1_1meta__schedule_1_1TensorInfoNode.html#ac16d3b10f7c68eefb27e55e865bb304c',1,'tvm::meta_schedule::TensorInfoNode::shape()'],['../structtvm_1_1relay_1_1InitOpAttrs.html#aaaec76cc5ea9a543c4ea174a6b38bf5e',1,'tvm::relay::InitOpAttrs::shape()' [...]
+  ['shape',['shape',['../classtvm_1_1TensorTypeNode.html#a98fa347833e4504dd6f8056d9863a708',1,'tvm::TensorTypeNode::shape()'],['../classtvm_1_1meta__schedule_1_1TensorInfoNode.html#ac16d3b10f7c68eefb27e55e865bb304c',1,'tvm::meta_schedule::TensorInfoNode::shape()'],['../structtvm_1_1relay_1_1InitOpAttrs.html#aaaec76cc5ea9a543c4ea174a6b38bf5e',1,'tvm::relay::InitOpAttrs::shape()'],['../classtvm_1_1relay_1_1ShapePatternNode.html#a749813cbbd38f8021a7df897d527d6e0',1,'tvm::relay::ShapePattern [...]
   ['shape_5f',['shape_',['../classtvm_1_1runtime_1_1NDArray_1_1ContainerBase.html#aa5597a1760c9f8c9d1fd51584b1283fb',1,'tvm::runtime::NDArray::ContainerBase']]],
   ['shape_5fbackward_5frule',['shape_backward_rule',['../classtvm_1_1tir_1_1BijectiveLayoutNode.html#a0befdd0a2371c0d12970e8ac6623b59b',1,'tvm::tir::BijectiveLayoutNode']]],
   ['shape_5fcount',['shape_count',['../structTVMGraphExecutorGraphAttr.html#a182b228582f1186f2a15de50a25b3375',1,'TVMGraphExecutorGraphAttr']]],
@@ -257,7 +257,7 @@ var searchData=
   ['solvelinearequations',['SolveLinearEquations',['../namespacetvm_1_1arith.html#ae0290f04432523ab8e5f76edde80071a',1,'tvm::arith']]],
   ['solvelinearinequalities',['SolveLinearInequalities',['../namespacetvm_1_1arith.html#ac59d63560e04431f108e81457b212fdc',1,'tvm::arith']]],
   ['sorted',['sorted',['../structtvm_1_1relay_1_1UniqueAttrs.html#aef434799646533ec9d796393ba01db44',1,'tvm::relay::UniqueAttrs']]],
-  ['source',['Source',['../classtvm_1_1parser_1_1Source.html',1,'tvm::parser::Source'],['../classtvm_1_1arith_1_1IterMarkNode.html#a8b885a675c88e5a5d142fa68bcba048a',1,'tvm::arith::IterMarkNode::source()'],['../classtvm_1_1arith_1_1IterSplitExprNode.html#a7a129dc9b432359a07c1a1e286c3c66f',1,'tvm::arith::IterSplitExprNode::source()'],['../classtvm_1_1parser_1_1SourceNode.html#a51cc3c98e4cdacf0ffdc643c848e09af',1,'tvm::parser::SourceNode::source()'],['../classtvm_1_1tir_1_1ReduceNode.html# [...]
+  ['source',['Source',['../classtvm_1_1parser_1_1Source.html',1,'tvm::parser::Source'],['../classtvm_1_1parser_1_1Source.html#a0ef9f726abcc6c4c9e81b3a257055df8',1,'tvm::parser::Source::Source()'],['../classtvm_1_1arith_1_1IterMarkNode.html#a8b885a675c88e5a5d142fa68bcba048a',1,'tvm::arith::IterMarkNode::source()'],['../classtvm_1_1arith_1_1IterSplitExprNode.html#a7a129dc9b432359a07c1a1e286c3c66f',1,'tvm::arith::IterSplitExprNode::source()'],['../classtvm_1_1parser_1_1SourceNode.html#a51cc [...]
   ['source_5fmap',['source_map',['../classtvm_1_1IRModuleNode.html#a49470c0bfb4b85d9eda7576a837b7031',1,'tvm::IRModuleNode::source_map()'],['../classtvm_1_1parser_1_1SourceMapNode.html#ae22bc1181b066f17f8938868ef22610a',1,'tvm::parser::SourceMapNode::source_map()']]],
   ['source_5fmap_2eh',['source_map.h',['../source__map_8h.html',1,'']]],
   ['source_5fname',['source_name',['../classtvm_1_1DiagnosticBuilder.html#a92d320e1ede24fe5ff47862365002691',1,'tvm::DiagnosticBuilder::source_name()'],['../classtvm_1_1SpanNode.html#ad573167f93facbfbee19983b08bbba3d',1,'tvm::SpanNode::source_name()'],['../classtvm_1_1parser_1_1SourceNode.html#a8d4c50a18eb3e99b14d73d7db2a52af3',1,'tvm::parser::SourceNode::source_name()']]],
@@ -274,7 +274,7 @@ var searchData=
   ['spacegeneratornode',['SpaceGeneratorNode',['../classtvm_1_1meta__schedule_1_1SpaceGeneratorNode.html',1,'tvm::meta_schedule']]],
   ['spacegeneratorunion',['SpaceGeneratorUnion',['../classtvm_1_1meta__schedule_1_1SpaceGenerator.html#a44828204c6ae3b7f390b9a9c3fdb9aa7',1,'tvm::meta_schedule::SpaceGenerator']]],
   ['spacetobatchndattrs',['SpaceToBatchNDAttrs',['../structtvm_1_1relay_1_1SpaceToBatchNDAttrs.html',1,'tvm::relay']]],
-  ['span',['Span',['../classtvm_1_1Span.html',1,'tvm::Span'],['../classtvm_1_1support_1_1Span.html',1,'tvm::support::Span&lt; T, W &gt;'],['../classtvm_1_1Span.html#a5216631b639e8c802263d87d3fe9e5f6',1,'tvm::Span::Span()'],['../classtvm_1_1support_1_1Span.html#a77653730a2542edf93b7c4413a72f3ec',1,'tvm::support::Span::Span(T *begin, int num_elements)'],['../classtvm_1_1support_1_1Span.html#a3c22dd06856e7029e7107adf38eb72f5',1,'tvm::support::Span::Span(T *begin, T *end)'],['../classtvm_1_1 [...]
+  ['span',['Span',['../classtvm_1_1Span.html',1,'tvm::Span'],['../classtvm_1_1support_1_1Span.html',1,'tvm::support::Span&lt; T, W &gt;'],['../classtvm_1_1AffineTypeNode.html#aa45c91e3c8ebcff609d10f6a921f3fa2',1,'tvm::AffineTypeNode::span()'],['../classtvm_1_1DiagnosticNode.html#af5469f228f87711ad8bd3f4f78f3bb54',1,'tvm::DiagnosticNode::span()'],['../classtvm_1_1DiagnosticBuilder.html#a52d9cc3cb33e655c5d82af47daa74c66',1,'tvm::DiagnosticBuilder::span()'],['../classtvm_1_1CompileError.htm [...]
   ['span_2eh',['span.h',['../ir_2span_8h.html',1,'(Global Namespace)'],['../support_2span_8h.html',1,'(Global Namespace)']]],
   ['spannode',['SpanNode',['../classtvm_1_1SpanNode.html',1,'tvm::SpanNode'],['../namespacetvm_1_1relay.html#a7d0fa6578e97d0d64b08865f94f04827',1,'tvm::relay::SpanNode()']]],
   ['sparse_5flhs',['sparse_lhs',['../structtvm_1_1relay_1_1SparseDenseAttrs.html#ae52d5465cb3421f342607abcc1cb1d5c',1,'tvm::relay::SparseDenseAttrs']]],
@@ -338,7 +338,7 @@ var searchData=
   ['startmessage',['StartMessage',['../classtvm_1_1runtime_1_1micro__rpc_1_1Session.html#acd512b977c6dd888f90c4fd6d2b9500f',1,'tvm::runtime::micro_rpc::Session']]],
   ['startpacket',['StartPacket',['../classtvm_1_1runtime_1_1micro__rpc_1_1Framer.html#ade10d3bd3a26e3b7af881ae134e9a998',1,'tvm::runtime::micro_rpc::Framer']]],
   ['startsession',['StartSession',['../classtvm_1_1runtime_1_1micro__rpc_1_1Session.html#a15d3f9ecb8b22bf2d330f6f0a16c5239',1,'tvm::runtime::micro_rpc::Session']]],
-  ['state',['State',['../classtvm_1_1auto__scheduler_1_1State.html',1,'tvm::auto_scheduler::State'],['../classtvm_1_1auto__scheduler_1_1State.html#a9e8198b1f51b42cfbbee4b9f42160749',1,'tvm::auto_scheduler::State::State()'],['../classtvm_1_1auto__scheduler_1_1MeasureInputNode.html#afb23aaf6133189687d2541ec6e1352f4',1,'tvm::auto_scheduler::MeasureInputNode::state()'],['../classtvm_1_1tir_1_1ScheduleNode.html#abb3612c2598fa2d3ee0e6e3fc3de8a26',1,'tvm::tir::ScheduleNode::state()']]],
+  ['state',['State',['../classtvm_1_1auto__scheduler_1_1State.html',1,'tvm::auto_scheduler::State'],['../classtvm_1_1auto__scheduler_1_1MeasureInputNode.html#afb23aaf6133189687d2541ec6e1352f4',1,'tvm::auto_scheduler::MeasureInputNode::state()'],['../classtvm_1_1tir_1_1ScheduleNode.html#abb3612c2598fa2d3ee0e6e3fc3de8a26',1,'tvm::tir::ScheduleNode::state()'],['../classtvm_1_1auto__scheduler_1_1State.html#a9e8198b1f51b42cfbbee4b9f42160749',1,'tvm::auto_scheduler::State::State()']]],
   ['state_2eh',['state.h',['../state_8h.html',1,'']]],
   ['state_5fplaceholder',['state_placeholder',['../classtvm_1_1te_1_1ScanOpNode.html#a69105f6a84dd4fb912a16bfaa68aebf6',1,'tvm::te::ScanOpNode']]],
   ['statenode',['StateNode',['../classtvm_1_1auto__scheduler_1_1StateNode.html',1,'tvm::auto_scheduler']]],
diff --git a/docs/reference/api/doxygen/search/all_15.js b/docs/reference/api/doxygen/search/all_15.js
index b36767b639..7b276e6369 100644
--- a/docs/reference/api/doxygen/search/all_15.js
+++ b/docs/reference/api/doxygen/search/all_15.js
@@ -78,7 +78,7 @@ var searchData=
   ['te',['te',['../namespacetvm_1_1te.html',1,'tvm']]],
   ['tempexpr',['TempExpr',['../classtvm_1_1relay_1_1TempExpr.html',1,'tvm::relay']]],
   ['tempexprnode',['TempExprNode',['../classtvm_1_1relay_1_1TempExprNode.html',1,'tvm::relay']]],
-  ['tensor',['Tensor',['../classtvm_1_1te_1_1Tensor.html',1,'tvm::te::Tensor'],['../classtvm_1_1te_1_1Tensor.html#afc8d8e74d1c840359661b39514d6fecf',1,'tvm::te::Tensor::Tensor()'],['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#a22de469ea5521ba12e14f1e8181bae56',1,'tvm::runtime::vm::Instruction::tensor()']]],
+  ['tensor',['Tensor',['../classtvm_1_1te_1_1Tensor.html',1,'tvm::te::Tensor'],['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#a22de469ea5521ba12e14f1e8181bae56',1,'tvm::runtime::vm::Instruction::tensor()'],['../classtvm_1_1te_1_1Tensor.html#afc8d8e74d1c840359661b39514d6fecf',1,'tvm::te::Tensor::Tensor()']]],
   ['tensor_2eh',['tensor.h',['../tensor_8h.html',1,'']]],
   ['tensor_5fintrin',['tensor_intrin',['../classtvm_1_1te_1_1IterVarAttrNode.html#a6a0d96bbebfd716f851b2ad01738cb3f',1,'tvm::te::IterVarAttrNode']]],
   ['tensor_5fintrin_2eh',['tensor_intrin.h',['../tensor__intrin_8h.html',1,'']]],
@@ -216,7 +216,7 @@ var searchData=
   ['tuningoptionsnode',['TuningOptionsNode',['../classtvm_1_1auto__scheduler_1_1TuningOptionsNode.html',1,'tvm::auto_scheduler']]],
   ['tuningrecord',['TuningRecord',['../classtvm_1_1meta__schedule_1_1TuningRecord.html',1,'tvm::meta_schedule::TuningRecord'],['../classtvm_1_1meta__schedule_1_1TuningRecord.html#aa4699af50f91bda306e6c199766c4757',1,'tvm::meta_schedule::TuningRecord::TuningRecord()']]],
   ['tuningrecordnode',['TuningRecordNode',['../classtvm_1_1meta__schedule_1_1TuningRecordNode.html',1,'tvm::meta_schedule']]],
-  ['tuple',['Tuple',['../classtvm_1_1relay_1_1Tuple.html',1,'tvm::relay::Tuple'],['../classtvm_1_1relay_1_1Tuple.html#a284e236318986fd385a02aa68bd3e938',1,'tvm::relay::Tuple::Tuple()'],['../classtvm_1_1runtime_1_1ADT.html#a871e902541f0a7e550e74ae0c621994c',1,'tvm::runtime::ADT::Tuple()'],['../classtvm_1_1relay_1_1TupleGetItemPatternNode.html#a1fdd79b2fbbf3d7a14cea7e7efc38574',1,'tvm::relay::TupleGetItemPatternNode::tuple()'],['../classtvm_1_1relay_1_1TupleGetItemNode.html#aade4882f84d828 [...]
+  ['tuple',['Tuple',['../classtvm_1_1relay_1_1Tuple.html',1,'tvm::relay::Tuple'],['../classtvm_1_1relay_1_1TupleGetItemPatternNode.html#a1fdd79b2fbbf3d7a14cea7e7efc38574',1,'tvm::relay::TupleGetItemPatternNode::tuple()'],['../classtvm_1_1relay_1_1TupleGetItemNode.html#aade4882f84d828975c689b5c6b1b68e6',1,'tvm::relay::TupleGetItemNode::tuple()'],['../classtvm_1_1relay_1_1Tuple.html#a284e236318986fd385a02aa68bd3e938',1,'tvm::relay::Tuple::Tuple()'],['../classtvm_1_1runtime_1_1ADT.html#a871 [...]
   ['tupleaffinetype',['TupleAffineType',['../classtvm_1_1TupleAffineType.html',1,'tvm::TupleAffineType'],['../classtvm_1_1TupleAffineType.html#afced247570984fed7386c147d02efb79',1,'tvm::TupleAffineType::TupleAffineType()']]],
   ['tupleaffinetypenode',['TupleAffineTypeNode',['../classtvm_1_1TupleAffineTypeNode.html',1,'tvm']]],
   ['tupledoc',['TupleDoc',['../classtvm_1_1script_1_1printer_1_1TupleDoc.html',1,'tvm::script::printer::TupleDoc'],['../classtvm_1_1script_1_1printer_1_1TupleDoc.html#ac3ec09b672b619376fa70cead671de78',1,'tvm::script::printer::TupleDoc::TupleDoc()'],['../classtvm_1_1script_1_1printer_1_1TupleDoc.html#a78ef6fe46a358a34df8cf8c797ce3d6e',1,'tvm::script::printer::TupleDoc::TupleDoc(Array&lt; ExprDoc &gt; elements)']]],
diff --git a/docs/reference/api/doxygen/search/all_16.js b/docs/reference/api/doxygen/search/all_16.js
index 67ee3a7a61..a36241c4ed 100644
--- a/docs/reference/api/doxygen/search/all_16.js
+++ b/docs/reference/api/doxygen/search/all_16.js
@@ -40,7 +40,7 @@ var searchData=
   ['unionregion',['UnionRegion',['../namespacetvm_1_1arith.html#ad27c4f216e41eb8e81296fb7ec4b9453',1,'tvm::arith']]],
   ['unionregionlowerbound',['UnionRegionLowerBound',['../namespacetvm_1_1arith.html#a4c3dedfa4cba4ad39c953eb51eb83e4d',1,'tvm::arith']]],
   ['unipolar',['unipolar',['../structtvm_1_1relay_1_1BinaryConv2DAttrs.html#a7e0ad68dce226079b769a678aa01dc49',1,'tvm::relay::BinaryConv2DAttrs::unipolar()'],['../structtvm_1_1relay_1_1BinaryDenseAttrs.html#af21cdb9dac67ab9ecea5a19642658d8a',1,'tvm::relay::BinaryDenseAttrs::unipolar()']]],
-  ['unique',['unique',['../classtvm_1_1runtime_1_1Object.html#afd548730a6139d19fe24473ad66026d7',1,'tvm::runtime::Object::unique()'],['../classtvm_1_1runtime_1_1ObjectPtr.html#af95c6c6fcd89da0f62b93f1167b72314',1,'tvm::runtime::ObjectPtr::unique()'],['../classtvm_1_1runtime_1_1ObjectRef.html#a4e7cdb1574b93a59e784d70aa47b8da7',1,'tvm::runtime::ObjectRef::unique()'],['../classtvm_1_1VirtualDeviceCache.html#a25ba1351484aa58a2cc7cef8f8e4423c',1,'tvm::VirtualDeviceCache::Unique()']]],
+  ['unique',['Unique',['../classtvm_1_1VirtualDeviceCache.html#a25ba1351484aa58a2cc7cef8f8e4423c',1,'tvm::VirtualDeviceCache::Unique()'],['../classtvm_1_1runtime_1_1Object.html#afd548730a6139d19fe24473ad66026d7',1,'tvm::runtime::Object::unique()'],['../classtvm_1_1runtime_1_1ObjectPtr.html#af95c6c6fcd89da0f62b93f1167b72314',1,'tvm::runtime::ObjectPtr::unique()'],['../classtvm_1_1runtime_1_1ObjectRef.html#a4e7cdb1574b93a59e784d70aa47b8da7',1,'tvm::runtime::ObjectRef::unique()']]],
   ['uniqueattrs',['UniqueAttrs',['../structtvm_1_1relay_1_1UniqueAttrs.html',1,'tvm::relay']]],
   ['uniqueglobalfor',['UniqueGlobalFor',['../classtvm_1_1GlobalVarSupplyNode.html#af67bad5d9d93381c440a7886cbef430a',1,'tvm::GlobalVarSupplyNode']]],
   ['unit_5fbits',['unit_bits',['../classtvm_1_1MemoryInfoNode.html#a505c2f2dd0dd0c28a12b9113e2176a8d',1,'tvm::MemoryInfoNode']]],
@@ -51,7 +51,7 @@ var searchData=
   ['unravel_5findex',['unravel_index',['../namespacetvm_1_1topi.html#a8811a02532bbe3047986bf1a8449ac0e',1,'tvm::topi']]],
   ['unroll',['Unroll',['../classtvm_1_1tir_1_1ScheduleNode.html#a84ec742f6295f59390592a6d0d90a552',1,'tvm::tir::ScheduleNode::Unroll()'],['../classtvm_1_1auto__scheduler_1_1State.html#aa68a9d2e226bae38a36e4be4af1d1ae4',1,'tvm::auto_scheduler::State::unroll()'],['../classtvm_1_1te_1_1Stage.html#af83ad8672660403504f472228b044b33',1,'tvm::te::Stage::unroll()'],['../namespacetvm_1_1script_1_1ir__builder_1_1tir.html#acd41556b0c4088d0f309ef5495aaebe3',1,'tvm::script::ir_builder::tir::Unroll()']]],
   ['unrollloop',['UnrollLoop',['../namespacetvm_1_1tir_1_1transform.html#ab2f279e91071fa96a1edb24fa004ea6a',1,'tvm::tir::transform']]],
-  ['update',['update',['../classtvm_1_1te_1_1ScanOpNode.html#ace2bf7e43cd4197324ec6363626fc60a',1,'tvm::te::ScanOpNode::update()'],['../classtvm_1_1arith_1_1ConstIntBoundAnalyzer.html#a5ae0699196c4bbc754bbdd4c3a6c7ca7',1,'tvm::arith::ConstIntBoundAnalyzer::Update()'],['../classtvm_1_1arith_1_1ModularSetAnalyzer.html#a04156fac580981f3005af3b8e676720d',1,'tvm::arith::ModularSetAnalyzer::Update()'],['../classtvm_1_1arith_1_1RewriteSimplifier.html#a5e6752c0702dc2d3e4235797d9d3ac7b',1,'tvm::a [...]
+  ['update',['Update',['../classtvm_1_1arith_1_1ConstIntBoundAnalyzer.html#a5ae0699196c4bbc754bbdd4c3a6c7ca7',1,'tvm::arith::ConstIntBoundAnalyzer::Update()'],['../classtvm_1_1arith_1_1ModularSetAnalyzer.html#a04156fac580981f3005af3b8e676720d',1,'tvm::arith::ModularSetAnalyzer::Update()'],['../classtvm_1_1arith_1_1RewriteSimplifier.html#a5e6752c0702dc2d3e4235797d9d3ac7b',1,'tvm::arith::RewriteSimplifier::Update()'],['../classtvm_1_1arith_1_1CanonicalSimplifier.html#a790c032e12c7d93e9e940 [...]
   ['update_5ffunc',['update_func',['../classtvm_1_1auto__scheduler_1_1PythonBasedModelNode.html#ade9364c152a36501d4f24fa4f0111519',1,'tvm::auto_scheduler::PythonBasedModelNode']]],
   ['updatecostmodel',['UpdateCostModel',['../classtvm_1_1meta__schedule_1_1MeasureCallback.html#afdf5503c6e6f53767de132d91a7b53f9',1,'tvm::meta_schedule::MeasureCallback']]],
   ['updateiters',['UpdateIters',['../classtvm_1_1auto__scheduler_1_1AttachMap.html#ab45b991ef2bcfb1bc191601aac42e778',1,'tvm::auto_scheduler::AttachMap']]],
diff --git a/docs/reference/api/doxygen/search/all_5.js b/docs/reference/api/doxygen/search/all_5.js
index 18821c63b3..283eee6e47 100644
--- a/docs/reference/api/doxygen/search/all_5.js
+++ b/docs/reference/api/doxygen/search/all_5.js
@@ -52,6 +52,7 @@ var searchData=
   ['default_5fprimitive_5fvirtual_5fdevice',['default_primitive_virtual_device',['../classtvm_1_1CompilationConfigNode.html#abe4569cf32c57b710be99b50e7118876',1,'tvm::CompilationConfigNode']]],
   ['default_5fschedule',['default_schedule',['../namespacetvm_1_1topi_1_1generic.html#ae10c7793be021c3da437aeb2f79d8d2e',1,'tvm::topi::generic::default_schedule()'],['../namespacetvm_1_1topi_1_1x86.html#a8df4b07cd29b24d5c1323df91892fad4',1,'tvm::topi::x86::default_schedule()']]],
   ['default_5fschedule_5fauto_5finline',['default_schedule_auto_inline',['../namespacetvm_1_1topi_1_1generic.html#a1b7888cf36fa1da754ec65303a2dbbfb',1,'tvm::topi::generic::default_schedule_auto_inline()'],['../namespacetvm_1_1topi_1_1x86.html#af70d13cc92e434e9bce17cf76f4ef4f8',1,'tvm::topi::x86::default_schedule_auto_inline()']]],
+  ['defaultcputensorization',['DefaultCPUTensorization',['../classtvm_1_1meta__schedule_1_1Postproc.html#a4fe2775d916e99f27815aac6df46fd0c',1,'tvm::meta_schedule::Postproc']]],
   ['defaultcuda',['DefaultCUDA',['../classtvm_1_1meta__schedule_1_1Mutator.html#a6eb9b1298865cdeb5a8247a4e14454e3',1,'tvm::meta_schedule::Mutator::DefaultCUDA()'],['../classtvm_1_1meta__schedule_1_1Postproc.html#a799e989283bbfa92471829ab23179df5',1,'tvm::meta_schedule::Postproc::DefaultCUDA()'],['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#a77ab3dd14cbfcec7ed059559f7afc372',1,'tvm::meta_schedule::ScheduleRule::DefaultCUDA()']]],
   ['defaultcudatensorcore',['DefaultCUDATensorCore',['../classtvm_1_1meta__schedule_1_1Mutator.html#af612e614b9550f83d7cc30e0a431df2a',1,'tvm::meta_schedule::Mutator::DefaultCUDATensorCore()'],['../classtvm_1_1meta__schedule_1_1Postproc.html#a48dc2532ac0a7970cfcf1d482473a631',1,'tvm::meta_schedule::Postproc::DefaultCUDATensorCore()'],['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#a2abd71c2f3600573784d855d3cd63814',1,'tvm::meta_schedule::ScheduleRule::DefaultCUDATensorCore()']]],
   ['defaulthexagon',['DefaultHexagon',['../classtvm_1_1meta__schedule_1_1Mutator.html#a4ce54511e556a30567e5d5876c81c91d',1,'tvm::meta_schedule::Mutator::DefaultHexagon()'],['../classtvm_1_1meta__schedule_1_1Postproc.html#ae4b33fac30e9420d0a0287ab44c37a98',1,'tvm::meta_schedule::Postproc::DefaultHexagon()'],['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#acd4de1f7ace3a34603f8832ae1b3180b',1,'tvm::meta_schedule::ScheduleRule::DefaultHexagon()']]],
@@ -59,7 +60,7 @@ var searchData=
   ['defaultllvm',['DefaultLLVM',['../classtvm_1_1meta__schedule_1_1Mutator.html#a15a0354263735c53c4b7419153da7c87',1,'tvm::meta_schedule::Mutator::DefaultLLVM()'],['../classtvm_1_1meta__schedule_1_1Postproc.html#a540ba92c0e373ff6872c736e3a2ca1b7',1,'tvm::meta_schedule::Postproc::DefaultLLVM()'],['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#a031b6dcad67f1d985aa30adb13e2b6e8',1,'tvm::meta_schedule::ScheduleRule::DefaultLLVM()']]],
   ['defaultmicro',['DefaultMicro',['../classtvm_1_1meta__schedule_1_1Mutator.html#af8fca919396df4557beeacfce9be0ef2',1,'tvm::meta_schedule::Mutator::DefaultMicro()'],['../classtvm_1_1meta__schedule_1_1Postproc.html#a83c92e6d1f474a65115e7c4a1216e631',1,'tvm::meta_schedule::Postproc::DefaultMicro()'],['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#ad181358bf6ca1951f0038f0691308bee',1,'tvm::meta_schedule::ScheduleRule::DefaultMicro()']]],
   ['defaulttimer',['DefaultTimer',['../namespacetvm_1_1runtime.html#ab69f2cbb94a9c579ee870ca7f186cf10',1,'tvm::runtime']]],
-  ['defaultvnni',['DefaultVNNI',['../classtvm_1_1meta__schedule_1_1Mutator.html#a8473324dbcbe078f021a58219a2cb687',1,'tvm::meta_schedule::Mutator::DefaultVNNI()'],['../classtvm_1_1meta__schedule_1_1Postproc.html#ad8e2da27bbe3f41d69742d87a3232c4d',1,'tvm::meta_schedule::Postproc::DefaultVNNI()'],['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#ab4b54d01446fee31cbcb1235bf8926cf',1,'tvm::meta_schedule::ScheduleRule::DefaultVNNI()']]],
+  ['defaultx86',['DefaultX86',['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#a5342931a76e2269970f132d0921e2f45',1,'tvm::meta_schedule::ScheduleRule']]],
   ['defequal',['DefEqual',['../classtvm_1_1SEqualReducer.html#a62ba4c55928d4886853f9c33f4147340',1,'tvm::SEqualReducer']]],
   ['deferfail',['DeferFail',['../classtvm_1_1SEqualReducer_1_1Handler.html#aa59c1a7a863c81f2a903795b1a96f986',1,'tvm::SEqualReducer::Handler::DeferFail()'],['../classtvm_1_1SEqualHandlerDefault.html#a916706dd76898d8ff4e381233c609d14',1,'tvm::SEqualHandlerDefault::DeferFail()']]],
   ['defhash',['DefHash',['../classtvm_1_1SHashReducer.html#a74260485bd50d1bfa52ded457a6a7777',1,'tvm::SHashReducer']]],
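  This all_5.js hunk records an API shuffle in the meta_schedule defaults: the DefaultVNNI entry (previously on Mutator, Postproc, and ScheduleRule) gives way to ScheduleRule::DefaultX86, and Postproc gains DefaultCPUTensorization. Only the member names are visible in the search index, so the sketch below rests on an assumption: that ScheduleRule::DefaultLLVM() returns an Array<ScheduleRule>, as its role as a built-in CPU rule set suggests.

    #include <tvm/meta_schedule/schedule_rule.h>
    #include <tvm/runtime/container/array.h>

    using tvm::meta_schedule::ScheduleRule;
    using tvm::runtime::Array;

    // Assumed return type; the search-index entries above give names only.
    Array<ScheduleRule> DefaultCpuRules() {
      return ScheduleRule::DefaultLLVM();
    }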
diff --git a/docs/reference/api/doxygen/search/all_e.js b/docs/reference/api/doxygen/search/all_e.js
index 31583b1eb3..b21b8d914d 100644
--- a/docs/reference/api/doxygen/search/all_e.js
+++ b/docs/reference/api/doxygen/search/all_e.js
@@ -174,7 +174,7 @@ var searchData=
   ['microtvmruntimegetoutput',['MicroTVMRuntimeGetOutput',['../microtvm__runtime_8h.html#a76129be7b6de972791a3f9a1b312acfa',1,'microtvm_runtime.h']]],
   ['microtvmruntimerun',['MicroTVMRuntimeRun',['../microtvm__runtime_8h.html#ac43a544f675dd716e8c279c3e41f6e45',1,'microtvm_runtime.h']]],
   ['microtvmruntimesetinput',['MicroTVMRuntimeSetInput',['../microtvm__runtime_8h.html#aa593edc600f4356f2b560702aa01b113',1,'microtvm_runtime.h']]],
-  ['min',['Min',['../classtvm_1_1tir_1_1Min.html',1,'tvm::tir::Min'],['../classtvm_1_1tir_1_1Min.html#a3a4403aec40029a5206e22cd334e356b',1,'tvm::tir::Min::Min()'],['../classtvm_1_1RangeNode.html#a43d2fb12bb61cf05936a1972d0158b49',1,'tvm::RangeNode::min()'],['../classtvm_1_1tir_1_1ForNode.html#a1d1aa2006328bd84e4911f6d43ceca5c',1,'tvm::tir::ForNode::min()'],['../classtvm_1_1arith_1_1IntSet.html#ae5517de2862e93a801224eed98a57001',1,'tvm::arith::IntSet::min()'],['../classtvm_1_1support_1_1L [...]
+  ['min',['Min',['../classtvm_1_1tir_1_1Min.html',1,'tvm::tir::Min'],['../classtvm_1_1RangeNode.html#a43d2fb12bb61cf05936a1972d0158b49',1,'tvm::RangeNode::min()'],['../classtvm_1_1tir_1_1ForNode.html#a1d1aa2006328bd84e4911f6d43ceca5c',1,'tvm::tir::ForNode::min()'],['../classtvm_1_1arith_1_1IntSet.html#ae5517de2862e93a801224eed98a57001',1,'tvm::arith::IntSet::min()'],['../classtvm_1_1support_1_1LinearCongruentialEngine.html#aec5f11b588fa3a12294a46c945c34411',1,'tvm::support::LinearCongrue [...]
   ['min_5frepeat_5fms',['min_repeat_ms',['../classtvm_1_1auto__scheduler_1_1ProgramRunnerNode.html#a39a865216db9ed6f57dfb22160cae1ff',1,'tvm::auto_scheduler::ProgramRunnerNode']]],
   ['min_5fvalue',['min_value',['../classtvm_1_1arith_1_1ConstIntBoundNode.html#a0761897bf16ab73b848bf360e9b195a3',1,'tvm::arith::ConstIntBoundNode::min_value()'],['../namespacetvm.html#a3b37fa55ea93d6868751a2441996b072',1,'tvm::min_value()']]],
   ['minimum',['minimum',['../namespacetvm_1_1topi.html#a7ac1dc0d99ce93090a4cdf90ab19d4b8',1,'tvm::topi::minimum(const tvm::PrimExpr &amp;a, const tvm::PrimExpr &amp;b)'],['../namespacetvm_1_1topi.html#a0e19dc06a2b1ecbb83b0942fdf836169',1,'tvm::topi::minimum(const tvm::te::Tensor &amp;A, const tvm::te::Tensor &amp;B, std::string name=&quot;T_&quot; &quot;minimum&quot;, std::string tag=kBroadcast)'],['../namespacetvm_1_1topi.html#a28d4ef4b3426bff237215ce356dd5681',1,'tvm::topi::minimum(con [...]
@@ -192,7 +192,7 @@ var searchData=
   ['mixedmodulepassmanager',['MixedModulePassManager',['../namespacetvm.html#abc01352eff102d4902632d097adc0e08',1,'tvm']]],
   ['mma_5ffill',['mma_fill',['../namespacetvm_1_1tir_1_1builtin.html#a307667c449c54cef747d781771f79bab',1,'tvm::tir::builtin']]],
   ['mma_5fstore',['mma_store',['../namespacetvm_1_1tir_1_1builtin.html#a772fb68f083e71e635c50bb503903f22',1,'tvm::tir::builtin']]],
-  ['mod',['Mod',['../classtvm_1_1tir_1_1Mod.html',1,'tvm::tir::Mod'],['../classtvm_1_1meta__schedule_1_1BuilderInputNode.html#ab2fb058ca54af03b5bc47bf4fac23cf7',1,'tvm::meta_schedule::BuilderInputNode::mod()'],['../classtvm_1_1meta__schedule_1_1WorkloadNode.html#a3929f2761c168c25de6be2247b913911',1,'tvm::meta_schedule::WorkloadNode::mod()'],['../classtvm_1_1meta__schedule_1_1ExtractedTaskNode.html#a50c40aa8beb57d0f31c36ef360042be6',1,'tvm::meta_schedule::ExtractedTaskNode::mod()'],['../c [...]
+  ['mod',['Mod',['../classtvm_1_1tir_1_1Mod.html',1,'tvm::tir::Mod'],['../classtvm_1_1tir_1_1Mod.html#a8bb56b57ed569d8f357c4439fd8a2f13',1,'tvm::tir::Mod::Mod()'],['../classtvm_1_1meta__schedule_1_1BuilderInputNode.html#ab2fb058ca54af03b5bc47bf4fac23cf7',1,'tvm::meta_schedule::BuilderInputNode::mod()'],['../classtvm_1_1meta__schedule_1_1WorkloadNode.html#a3929f2761c168c25de6be2247b913911',1,'tvm::meta_schedule::WorkloadNode::mod()'],['../classtvm_1_1meta__schedule_1_1ExtractedTaskNode.ht [...]
   ['mod_5fname',['mod_name',['../structTVMMetadata.html#a32e45fcae0f9328e944a35a885d94276',1,'TVMMetadata::mod_name()'],['../classtvm_1_1runtime_1_1metadata_1_1MetadataNode.html#a1c05bb5eb88b5d55b3abeeb2de263191',1,'tvm::runtime::metadata::MetadataNode::mod_name()']]],
   ['mode',['mode',['../structtvm_1_1relay_1_1MirrorPadAttrs.html#af5381d72f1d9c9abcb9d2e522966ad86',1,'tvm::relay::MirrorPadAttrs::mode()'],['../structtvm_1_1relay_1_1SubPixelAttrs.html#a6f0822aa1ad7672a18ab73c64e83fa99',1,'tvm::relay::SubPixelAttrs::mode()'],['../structtvm_1_1relay_1_1ScatterNDAttrs.html#ab13eeaa700fe7e41666ac04179e0fd62',1,'tvm::relay::ScatterNDAttrs::mode()'],['../structtvm_1_1relay_1_1TakeAttrs.html#a0bf9d25ced9bfc91e766494e5f641e70',1,'tvm::relay::TakeAttrs::mode()' [...]
   ['modnode',['ModNode',['../classtvm_1_1tir_1_1ModNode.html',1,'tvm::tir']]],
@@ -200,7 +200,7 @@ var searchData=
   ['modularset',['ModularSet',['../classtvm_1_1arith_1_1ModularSet.html',1,'tvm::arith::ModularSet'],['../classtvm_1_1arith_1_1ModularSet.html#a9f54896d98169246c6a24cc338fde500',1,'tvm::arith::ModularSet::ModularSet()']]],
   ['modularsetanalyzer',['ModularSetAnalyzer',['../classtvm_1_1arith_1_1ModularSetAnalyzer.html',1,'tvm::arith']]],
   ['modularsetnode',['ModularSetNode',['../classtvm_1_1arith_1_1ModularSetNode.html',1,'tvm::arith']]],
-  ['module',['Module',['../classtvm_1_1runtime_1_1Module.html',1,'tvm::runtime::Module'],['../classtvm_1_1runtime_1_1ModuleNode.html#a21f639900c480510650969df9c74d17d',1,'tvm::runtime::ModuleNode::Module()'],['../classtvm_1_1runtime_1_1Module.html#abfbc619b3b3166d63ec52e399c24bed9',1,'tvm::runtime::Module::Module()'],['../classtvm_1_1runtime_1_1Module.html#abd1380b3f813c2b6acefca3aaef425f4',1,'tvm::runtime::Module::Module(ObjectPtr&lt; Object &gt; n)'],['../classtvm_1_1DiagnosticContextN [...]
+  ['module',['Module',['../classtvm_1_1runtime_1_1Module.html',1,'tvm::runtime::Module'],['../classtvm_1_1DiagnosticContextNode.html#adea7e38a6e47cbab7fb5639f208aa536',1,'tvm::DiagnosticContextNode::module()'],['../classtvm_1_1runtime_1_1ModuleNode.html#a21f639900c480510650969df9c74d17d',1,'tvm::runtime::ModuleNode::Module()'],['../classtvm_1_1runtime_1_1Module.html#abfbc619b3b3166d63ec52e399c24bed9',1,'tvm::runtime::Module::Module()'],['../classtvm_1_1runtime_1_1Module.html#abd1380b3f81 [...]
   ['module_2eh',['module.h',['../ir_2module_8h.html',1,'(Global Namespace)'],['../runtime_2crt_2module_8h.html',1,'(Global Namespace)'],['../runtime_2module_8h.html',1,'(Global Namespace)']]],
   ['module_5fhandle',['module_handle',['../structTVMAotExecutor.html#a0d4158663d39f79d88d2bc0355c9f1eb',1,'TVMAotExecutor']]],
   ['moduleinternal',['ModuleInternal',['../classtvm_1_1runtime_1_1ModuleNode.html#a2b490c1acecd166b5824e4e96f17c64e',1,'tvm::runtime::ModuleNode']]],
diff --git a/docs/reference/api/doxygen/search/functions_10.js b/docs/reference/api/doxygen/search/functions_10.js
index 66872da64d..f5f5398777 100644
--- a/docs/reference/api/doxygen/search/functions_10.js
+++ b/docs/reference/api/doxygen/search/functions_10.js
@@ -67,7 +67,7 @@ var searchData=
   ['pragmastep',['PragmaStep',['../classtvm_1_1auto__scheduler_1_1PragmaStep.html#a9f3ec96f3e561a14d8d9235c4d46e2eb',1,'tvm::auto_scheduler::PragmaStep::PragmaStep(int stage_id, int iter_id, String pragma_type)'],['../classtvm_1_1auto__scheduler_1_1PragmaStep.html#a7692c2a9934af1f36b218840034a88d5',1,'tvm::auto_scheduler::PragmaStep::PragmaStep(dmlc::JSONReader *reader)']]],
   ['predict',['Predict',['../classtvm_1_1auto__scheduler_1_1CostModelNode.html#aa337ec72401a957a68b6eb4a96472a2c',1,'tvm::auto_scheduler::CostModelNode::Predict()'],['../classtvm_1_1auto__scheduler_1_1RandomModelNode.html#a09f1d81fd9d9f93fca5f2008ab6054ba',1,'tvm::auto_scheduler::RandomModelNode::Predict()'],['../classtvm_1_1auto__scheduler_1_1PythonBasedModelNode.html#af16befe722e718fea23727469fecea1c',1,'tvm::auto_scheduler::PythonBasedModelNode::Predict()'],['../classtvm_1_1meta__sche [...]
   ['predictstages',['PredictStages',['../classtvm_1_1auto__scheduler_1_1CostModelNode.html#a213222251099444874698d2e9ff18adc',1,'tvm::auto_scheduler::CostModelNode::PredictStages()'],['../classtvm_1_1auto__scheduler_1_1PythonBasedModelNode.html#a1f9975c4bdd61793b806663a61a9a703',1,'tvm::auto_scheduler::PythonBasedModelNode::PredictStages()']]],
-  ['prefetch',['Prefetch',['../classtvm_1_1tir_1_1Prefetch.html#af462f85dad4268685e3113b6b009d1b2',1,'tvm::tir::Prefetch::Prefetch()'],['../classtvm_1_1te_1_1Stage.html#a611327890918fb641a8e65396ab9c5f6',1,'tvm::te::Stage::prefetch()'],['../namespacetvm_1_1tir_1_1builtin.html#a543f1fc334d2bc830add972895a03f17',1,'tvm::tir::builtin::prefetch()'],['../namespacetvm_1_1script_1_1ir__builder_1_1tir.html#aeb707d56c770edb33ebf73da27ebc1b9',1,'tvm::script::ir_builder::tir::Prefetch()']]],
+  ['prefetch',['prefetch',['../classtvm_1_1te_1_1Stage.html#a611327890918fb641a8e65396ab9c5f6',1,'tvm::te::Stage::prefetch()'],['../classtvm_1_1tir_1_1Prefetch.html#af462f85dad4268685e3113b6b009d1b2',1,'tvm::tir::Prefetch::Prefetch()'],['../namespacetvm_1_1tir_1_1builtin.html#a543f1fc334d2bc830add972895a03f17',1,'tvm::tir::builtin::prefetch()'],['../namespacetvm_1_1script_1_1ir__builder_1_1tir.html#aeb707d56c770edb33ebf73da27ebc1b9',1,'tvm::script::ir_builder::tir::Prefetch()']]],
   ['prefetchnode',['PrefetchNode',['../classtvm_1_1tir_1_1PrefetchNode.html#acaaa5e89462c7edf3019df4283ec74db',1,'tvm::tir::PrefetchNode::PrefetchNode()=default'],['../classtvm_1_1tir_1_1PrefetchNode.html#a73ef244c364b9c7efaee36e6bec746e7',1,'tvm::tir::PrefetchNode::PrefetchNode(Buffer buffer, Array&lt; Range &gt; bounds, Span span=Span())']]],
   ['prefix',['Prefix',['../structtvm_1_1script_1_1printer_1_1Default.html#abdc0ab77dcae93384cb88d8d123e6a7f',1,'tvm::script::printer::Default']]],
   ['preloadmeasuredstates',['PreloadMeasuredStates',['../classtvm_1_1auto__scheduler_1_1PreloadMeasuredStates.html#a67daf1ccd25a208fdf8d001f9a31d86b',1,'tvm::auto_scheduler::PreloadMeasuredStates::PreloadMeasuredStates()'],['../classtvm_1_1auto__scheduler_1_1SearchPolicyNode.html#abc2529d0b1cd485876e48037dd19dde1',1,'tvm::auto_scheduler::SearchPolicyNode::PreloadMeasuredStates()']]],
diff --git a/docs/reference/api/doxygen/search/functions_12.js b/docs/reference/api/doxygen/search/functions_12.js
index 3c20e5ee6a..060d0e1181 100644
--- a/docs/reference/api/doxygen/search/functions_12.js
+++ b/docs/reference/api/doxygen/search/functions_12.js
@@ -73,7 +73,7 @@ var searchData=
   ['reserve',['reserve',['../classtvm_1_1runtime_1_1Array.html#a1a7727b86efaf35c58a5198ab1c139c8',1,'tvm::runtime::Array']]],
   ['reserveglobalvar',['ReserveGlobalVar',['../classtvm_1_1GlobalVarSupplyNode.html#a29185b94238fc62c928346a004c43b16',1,'tvm::GlobalVarSupplyNode']]],
   ['reservename',['ReserveName',['../classtvm_1_1NameSupplyNode.html#a9feb960ebeeee03fb9c5105655a8da17',1,'tvm::NameSupplyNode']]],
-  ['reset',['Reset',['../classtvm_1_1auto__scheduler_1_1ProgramMeasurerNode.html#a73b14ea360a9902c291d5bf6e97636cd',1,'tvm::auto_scheduler::ProgramMeasurerNode::Reset()'],['../classtvm_1_1runtime_1_1micro__rpc_1_1Unframer.html#ae6279154fe70e9eb85937b51e70a4bf8',1,'tvm::runtime::micro_rpc::Unframer::Reset()'],['../classtvm_1_1runtime_1_1micro__rpc_1_1Framer.html#a44ff9650ecca8785e33c25c369d2570a',1,'tvm::runtime::micro_rpc::Framer::Reset()'],['../classtvm_1_1tir_1_1StmtSRefNode.html#a0a81 [...]
+  ['reset',['reset',['../classtvm_1_1runtime_1_1NDArray.html#af2a8ccab95d432d1ecad7a389e11bcd3',1,'tvm::runtime::NDArray::reset()'],['../classtvm_1_1runtime_1_1ObjectPtr.html#ac4461465ba0e785794794e0405c96590',1,'tvm::runtime::ObjectPtr::reset()'],['../classtvm_1_1auto__scheduler_1_1ProgramMeasurerNode.html#a73b14ea360a9902c291d5bf6e97636cd',1,'tvm::auto_scheduler::ProgramMeasurerNode::Reset()'],['../classtvm_1_1runtime_1_1micro__rpc_1_1Unframer.html#ae6279154fe70e9eb85937b51e70a4bf8',1, [...]
   ['reset_5fattr',['reset_attr',['../classtvm_1_1OpRegEntry.html#a67628f8d3d6dea5b0a47e462c06b7790',1,'tvm::OpRegEntry']]],
   ['resetthreadpool',['ResetThreadPool',['../namespacetvm_1_1runtime_1_1threading.html#aafdb21c00248ff146b614a7e888b4fd7',1,'tvm::runtime::threading']]],
   ['reshape',['reshape',['../namespacetvm_1_1topi.html#a3aad65f2505802109ba7d05359ce9005',1,'tvm::topi']]],
@@ -98,7 +98,7 @@ var searchData=
   ['rewritetensorize',['RewriteTensorize',['../classtvm_1_1meta__schedule_1_1Postproc.html#a95db036cfced4c2575367a26a41498ff',1,'tvm::meta_schedule::Postproc']]],
   ['rewriteunboundblock',['RewriteUnboundBlock',['../classtvm_1_1meta__schedule_1_1Postproc.html#a1836b2278bc24fdc227c490896d92980',1,'tvm::meta_schedule::Postproc']]],
   ['rewriteunsafeselect',['RewriteUnsafeSelect',['../namespacetvm_1_1tir_1_1transform.html#a4fe43327c4454dd05b6e925577443f49',1,'tvm::tir::transform']]],
-  ['rfactor',['RFactor',['../classtvm_1_1tir_1_1ScheduleNode.html#ab185c8eac1065290d84d58e7f4617232',1,'tvm::tir::ScheduleNode::RFactor()'],['../classtvm_1_1auto__scheduler_1_1State.html#a21c27b06d439267f8b981fa05c5f48a0',1,'tvm::auto_scheduler::State::rfactor()'],['../classtvm_1_1te_1_1Schedule.html#a34ae85add41bbed0140726d024d08862',1,'tvm::te::Schedule::rfactor()']]],
+  ['rfactor',['rfactor',['../classtvm_1_1auto__scheduler_1_1State.html#a21c27b06d439267f8b981fa05c5f48a0',1,'tvm::auto_scheduler::State::rfactor()'],['../classtvm_1_1te_1_1Schedule.html#a34ae85add41bbed0140726d024d08862',1,'tvm::te::Schedule::rfactor()'],['../classtvm_1_1tir_1_1ScheduleNode.html#ab185c8eac1065290d84d58e7f4617232',1,'tvm::tir::ScheduleNode::RFactor()']]],
   ['rfactorstep',['RfactorStep',['../classtvm_1_1auto__scheduler_1_1RfactorStep.html#a26e6f85b55307f18fab4469e3bd4be0c',1,'tvm::auto_scheduler::RfactorStep::RfactorStep(int stage_id, int iter_id, int factor_iter_id)'],['../classtvm_1_1auto__scheduler_1_1RfactorStep.html#a95575c21441177634178245ab562cb4f',1,'tvm::auto_scheduler::RfactorStep::RfactorStep(dmlc::JSONReader *reader)']]],
   ['right_5fshift',['right_shift',['../namespacetvm.html#ae8ecc0382685a855187bede0c97d93e6',1,'tvm::right_shift(PrimExpr a, PrimExpr b, Span span=Span())'],['../namespacetvm.html#af49dde9dfdeea62e8ad3a6d8db53de0b',1,'tvm::right_shift(const PrimExpr &amp;a, int b, Span span=Span())'],['../namespacetvm.html#a98ff4361d0a24570f8dc32d03cde972a',1,'tvm::right_shift(int a, const PrimExpr &amp;b, Span span=Span())'],['../namespacetvm_1_1topi.html#a9673b9caffb46404b566c3f04a492dfe',1,'tvm::topi:: [...]
   ['rocblas_5fbatch_5fmatmul',['rocblas_batch_matmul',['../namespacetvm_1_1topi_1_1contrib.html#abf1113dd429e1285752b48f62fe12848',1,'tvm::topi::contrib']]],
diff --git a/docs/reference/api/doxygen/search/functions_13.js b/docs/reference/api/doxygen/search/functions_13.js
index d0edc4fa1e..1e5c84401e 100644
--- a/docs/reference/api/doxygen/search/functions_13.js
+++ b/docs/reference/api/doxygen/search/functions_13.js
@@ -110,7 +110,7 @@ var searchData=
   ['setvalue_3c_20uint64_5ft_20_3e',['SetValue&lt; uint64_t &gt;',['../namespacetvm_1_1detail.html#acb3382242cbf538f64edae13e4ec5a84',1,'tvm::detail']]],
   ['shallowcopy',['ShallowCopy',['../classtvm_1_1IRModuleNode.html#a86bbdc4b857ce5958a2b5f29e1d6fcb6',1,'tvm::IRModuleNode']]],
   ['shallowcopyirmodule',['ShallowCopyIRModule',['../classtvm_1_1IRModule.html#aea8b821cf92cf525bd87bf15f5d31889',1,'tvm::IRModule']]],
-  ['shape',['Shape',['../classtvm_1_1runtime_1_1NDArray.html#ad273c7bc59b73fb026fd64fc764cbebc',1,'tvm::runtime::NDArray::Shape()'],['../classtvm_1_1runtime_1_1metadata_1_1TensorInfoNode.html#a5ddcd966b82c4df89084dbdf92d3108e',1,'tvm::runtime::metadata::TensorInfoNode::shape()'],['../namespacetvm_1_1topi.html#af30c02f3a3f37c7963b3af60fb9c72a1',1,'tvm::topi::shape()']]],
+  ['shape',['shape',['../classtvm_1_1runtime_1_1metadata_1_1TensorInfoNode.html#a5ddcd966b82c4df89084dbdf92d3108e',1,'tvm::runtime::metadata::TensorInfoNode::shape()'],['../classtvm_1_1runtime_1_1NDArray.html#ad273c7bc59b73fb026fd64fc764cbebc',1,'tvm::runtime::NDArray::Shape()'],['../namespacetvm_1_1topi.html#af30c02f3a3f37c7963b3af60fb9c72a1',1,'tvm::topi::shape()']]],
   ['shapediv',['shapediv',['../namespacetvm.html#a15f25703cfce73c75cb4cd33c74ea8f0',1,'tvm']]],
   ['shapeindex',['ShapeIndex',['../classtvm_1_1runtime_1_1DataType.html#a04f0e069017af3f0da47bc0c1fd80916',1,'tvm::runtime::DataType']]],
   ['shapeof',['ShapeOf',['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#a5f278c637580946bc06b020f5852e44a',1,'tvm::runtime::vm::Instruction']]],
@@ -183,7 +183,7 @@ var searchData=
   ['startmessage',['StartMessage',['../classtvm_1_1runtime_1_1micro__rpc_1_1Session.html#acd512b977c6dd888f90c4fd6d2b9500f',1,'tvm::runtime::micro_rpc::Session']]],
   ['startpacket',['StartPacket',['../classtvm_1_1runtime_1_1micro__rpc_1_1Framer.html#ade10d3bd3a26e3b7af881ae134e9a998',1,'tvm::runtime::micro_rpc::Framer']]],
   ['startsession',['StartSession',['../classtvm_1_1runtime_1_1micro__rpc_1_1Session.html#a15d3f9ecb8b22bf2d330f6f0a16c5239',1,'tvm::runtime::micro_rpc::Session']]],
-  ['state',['State',['../classtvm_1_1auto__scheduler_1_1State.html#a9e8198b1f51b42cfbbee4b9f42160749',1,'tvm::auto_scheduler::State::State()'],['../classtvm_1_1tir_1_1ScheduleNode.html#abb3612c2598fa2d3ee0e6e3fc3de8a26',1,'tvm::tir::ScheduleNode::state()']]],
+  ['state',['state',['../classtvm_1_1tir_1_1ScheduleNode.html#abb3612c2598fa2d3ee0e6e3fc3de8a26',1,'tvm::tir::ScheduleNode::state()'],['../classtvm_1_1auto__scheduler_1_1State.html#a9e8198b1f51b42cfbbee4b9f42160749',1,'tvm::auto_scheduler::State::State()']]],
   ['stats',['Stats',['../classtvm_1_1runtime_1_1vm_1_1Executable.html#a5445bd71aa14ec97552fa099dc3bd787',1,'tvm::runtime::vm::Executable']]],
   ['stepapplytoschedule',['StepApplyToSchedule',['../namespacetvm_1_1auto__scheduler.html#ac58f7548a94b92f801b2b9a6f65bd785',1,'tvm::auto_scheduler']]],
   ['stepapplytostate',['StepApplyToState',['../namespacetvm_1_1auto__scheduler.html#a6909bc5a99d1cc8372201e9392717832',1,'tvm::auto_scheduler']]],
diff --git a/docs/reference/api/doxygen/search/functions_15.js b/docs/reference/api/doxygen/search/functions_15.js
index d7e5c1f2e8..7e95ae7553 100644
--- a/docs/reference/api/doxygen/search/functions_15.js
+++ b/docs/reference/api/doxygen/search/functions_15.js
@@ -37,7 +37,7 @@ var searchData=
   ['unionlowerbound',['UnionLowerBound',['../namespacetvm_1_1arith.html#ab22d7fd95abb5fa372843a40e19d80c5',1,'tvm::arith']]],
   ['unionregion',['UnionRegion',['../namespacetvm_1_1arith.html#ad27c4f216e41eb8e81296fb7ec4b9453',1,'tvm::arith']]],
   ['unionregionlowerbound',['UnionRegionLowerBound',['../namespacetvm_1_1arith.html#a4c3dedfa4cba4ad39c953eb51eb83e4d',1,'tvm::arith']]],
-  ['unique',['unique',['../classtvm_1_1runtime_1_1Object.html#afd548730a6139d19fe24473ad66026d7',1,'tvm::runtime::Object::unique()'],['../classtvm_1_1runtime_1_1ObjectPtr.html#af95c6c6fcd89da0f62b93f1167b72314',1,'tvm::runtime::ObjectPtr::unique()'],['../classtvm_1_1runtime_1_1ObjectRef.html#a4e7cdb1574b93a59e784d70aa47b8da7',1,'tvm::runtime::ObjectRef::unique()'],['../classtvm_1_1VirtualDeviceCache.html#a25ba1351484aa58a2cc7cef8f8e4423c',1,'tvm::VirtualDeviceCache::Unique()']]],
+  ['unique',['Unique',['../classtvm_1_1VirtualDeviceCache.html#a25ba1351484aa58a2cc7cef8f8e4423c',1,'tvm::VirtualDeviceCache::Unique()'],['../classtvm_1_1runtime_1_1Object.html#afd548730a6139d19fe24473ad66026d7',1,'tvm::runtime::Object::unique()'],['../classtvm_1_1runtime_1_1ObjectPtr.html#af95c6c6fcd89da0f62b93f1167b72314',1,'tvm::runtime::ObjectPtr::unique()'],['../classtvm_1_1runtime_1_1ObjectRef.html#a4e7cdb1574b93a59e784d70aa47b8da7',1,'tvm::runtime::ObjectRef::unique()']]],
   ['uniqueglobalfor',['UniqueGlobalFor',['../classtvm_1_1GlobalVarSupplyNode.html#af67bad5d9d93381c440a7886cbef430a',1,'tvm::GlobalVarSupplyNode']]],
   ['unknownattributeaccesspathnode',['UnknownAttributeAccessPathNode',['../classtvm_1_1UnknownAttributeAccessPathNode.html#a1882e9e591466a2785acc761dc63d56e',1,'tvm::UnknownAttributeAccessPathNode']]],
   ['unmatchedcases',['UnmatchedCases',['../namespacetvm_1_1relay.html#aa3a8cace40f8056fd6412f39c3eaa605',1,'tvm::relay']]],
diff --git a/docs/reference/api/doxygen/search/functions_4.js b/docs/reference/api/doxygen/search/functions_4.js
index 760a8b3818..45ced1e810 100644
--- a/docs/reference/api/doxygen/search/functions_4.js
+++ b/docs/reference/api/doxygen/search/functions_4.js
@@ -17,6 +17,7 @@ var searchData=
   ['default',['Default',['../classtvm_1_1DiagnosticContext.html#ab0a08b05d11230b5108086cd5118f488',1,'tvm::DiagnosticContext::Default()'],['../classtvm_1_1meta__schedule_1_1MeasureCallback.html#a88ce90c3501edf83c42196f29920029f',1,'tvm::meta_schedule::MeasureCallback::Default()'],['../classtvm_1_1VirtualDevice.html#a73364da6471b4634fb14abf10ce42f3c',1,'tvm::VirtualDevice::Default()']]],
   ['default_5fschedule',['default_schedule',['../namespacetvm_1_1topi_1_1generic.html#ae10c7793be021c3da437aeb2f79d8d2e',1,'tvm::topi::generic::default_schedule()'],['../namespacetvm_1_1topi_1_1x86.html#a8df4b07cd29b24d5c1323df91892fad4',1,'tvm::topi::x86::default_schedule()']]],
   ['default_5fschedule_5fauto_5finline',['default_schedule_auto_inline',['../namespacetvm_1_1topi_1_1generic.html#a1b7888cf36fa1da754ec65303a2dbbfb',1,'tvm::topi::generic::default_schedule_auto_inline()'],['../namespacetvm_1_1topi_1_1x86.html#af70d13cc92e434e9bce17cf76f4ef4f8',1,'tvm::topi::x86::default_schedule_auto_inline()']]],
+  ['defaultcputensorization',['DefaultCPUTensorization',['../classtvm_1_1meta__schedule_1_1Postproc.html#a4fe2775d916e99f27815aac6df46fd0c',1,'tvm::meta_schedule::Postproc']]],
   ['defaultcuda',['DefaultCUDA',['../classtvm_1_1meta__schedule_1_1Mutator.html#a6eb9b1298865cdeb5a8247a4e14454e3',1,'tvm::meta_schedule::Mutator::DefaultCUDA()'],['../classtvm_1_1meta__schedule_1_1Postproc.html#a799e989283bbfa92471829ab23179df5',1,'tvm::meta_schedule::Postproc::DefaultCUDA()'],['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#a77ab3dd14cbfcec7ed059559f7afc372',1,'tvm::meta_schedule::ScheduleRule::DefaultCUDA()']]],
   ['defaultcudatensorcore',['DefaultCUDATensorCore',['../classtvm_1_1meta__schedule_1_1Mutator.html#af612e614b9550f83d7cc30e0a431df2a',1,'tvm::meta_schedule::Mutator::DefaultCUDATensorCore()'],['../classtvm_1_1meta__schedule_1_1Postproc.html#a48dc2532ac0a7970cfcf1d482473a631',1,'tvm::meta_schedule::Postproc::DefaultCUDATensorCore()'],['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#a2abd71c2f3600573784d855d3cd63814',1,'tvm::meta_schedule::ScheduleRule::DefaultCUDATensorCore()']]],
   ['defaulthexagon',['DefaultHexagon',['../classtvm_1_1meta__schedule_1_1Mutator.html#a4ce54511e556a30567e5d5876c81c91d',1,'tvm::meta_schedule::Mutator::DefaultHexagon()'],['../classtvm_1_1meta__schedule_1_1Postproc.html#ae4b33fac30e9420d0a0287ab44c37a98',1,'tvm::meta_schedule::Postproc::DefaultHexagon()'],['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#acd4de1f7ace3a34603f8832ae1b3180b',1,'tvm::meta_schedule::ScheduleRule::DefaultHexagon()']]],
@@ -24,7 +25,7 @@ var searchData=
   ['defaultllvm',['DefaultLLVM',['../classtvm_1_1meta__schedule_1_1Mutator.html#a15a0354263735c53c4b7419153da7c87',1,'tvm::meta_schedule::Mutator::DefaultLLVM()'],['../classtvm_1_1meta__schedule_1_1Postproc.html#a540ba92c0e373ff6872c736e3a2ca1b7',1,'tvm::meta_schedule::Postproc::DefaultLLVM()'],['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#a031b6dcad67f1d985aa30adb13e2b6e8',1,'tvm::meta_schedule::ScheduleRule::DefaultLLVM()']]],
   ['defaultmicro',['DefaultMicro',['../classtvm_1_1meta__schedule_1_1Mutator.html#af8fca919396df4557beeacfce9be0ef2',1,'tvm::meta_schedule::Mutator::DefaultMicro()'],['../classtvm_1_1meta__schedule_1_1Postproc.html#a83c92e6d1f474a65115e7c4a1216e631',1,'tvm::meta_schedule::Postproc::DefaultMicro()'],['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#ad181358bf6ca1951f0038f0691308bee',1,'tvm::meta_schedule::ScheduleRule::DefaultMicro()']]],
   ['defaulttimer',['DefaultTimer',['../namespacetvm_1_1runtime.html#ab69f2cbb94a9c579ee870ca7f186cf10',1,'tvm::runtime']]],
-  ['defaultvnni',['DefaultVNNI',['../classtvm_1_1meta__schedule_1_1Mutator.html#a8473324dbcbe078f021a58219a2cb687',1,'tvm::meta_schedule::Mutator::DefaultVNNI()'],['../classtvm_1_1meta__schedule_1_1Postproc.html#ad8e2da27bbe3f41d69742d87a3232c4d',1,'tvm::meta_schedule::Postproc::DefaultVNNI()'],['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#ab4b54d01446fee31cbcb1235bf8926cf',1,'tvm::meta_schedule::ScheduleRule::DefaultVNNI()']]],
+  ['defaultx86',['DefaultX86',['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#a5342931a76e2269970f132d0921e2f45',1,'tvm::meta_schedule::ScheduleRule']]],
   ['defequal',['DefEqual',['../classtvm_1_1SEqualReducer.html#a62ba4c55928d4886853f9c33f4147340',1,'tvm::SEqualReducer']]],
   ['deferfail',['DeferFail',['../classtvm_1_1SEqualReducer_1_1Handler.html#aa59c1a7a863c81f2a903795b1a96f986',1,'tvm::SEqualReducer::Handler::DeferFail()'],['../classtvm_1_1SEqualHandlerDefault.html#a916706dd76898d8ff4e381233c609d14',1,'tvm::SEqualHandlerDefault::DeferFail()']]],
   ['defhash',['DefHash',['../classtvm_1_1SHashReducer.html#a74260485bd50d1bfa52ded457a6a7777',1,'tvm::SHashReducer']]],
diff --git a/docs/reference/api/doxygen/search/functions_d.js b/docs/reference/api/doxygen/search/functions_d.js
index c335f309d6..7e69d95257 100644
--- a/docs/reference/api/doxygen/search/functions_d.js
+++ b/docs/reference/api/doxygen/search/functions_d.js
@@ -66,7 +66,7 @@ var searchData=
   ['microtvmruntimegetoutput',['MicroTVMRuntimeGetOutput',['../microtvm__runtime_8h.html#a76129be7b6de972791a3f9a1b312acfa',1,'microtvm_runtime.h']]],
   ['microtvmruntimerun',['MicroTVMRuntimeRun',['../microtvm__runtime_8h.html#ac43a544f675dd716e8c279c3e41f6e45',1,'microtvm_runtime.h']]],
   ['microtvmruntimesetinput',['MicroTVMRuntimeSetInput',['../microtvm__runtime_8h.html#aa593edc600f4356f2b560702aa01b113',1,'microtvm_runtime.h']]],
-  ['min',['Min',['../classtvm_1_1tir_1_1Min.html#a3a4403aec40029a5206e22cd334e356b',1,'tvm::tir::Min::Min()'],['../classtvm_1_1arith_1_1IntSet.html#ae5517de2862e93a801224eed98a57001',1,'tvm::arith::IntSet::min()'],['../classtvm_1_1support_1_1LinearCongruentialEngine.html#aec5f11b588fa3a12294a46c945c34411',1,'tvm::support::LinearCongruentialEngine::min()'],['../namespacetvm.html#aac2abc149c1a47944c37b560181b15c0',1,'tvm::min(PrimExpr a, PrimExpr b, Span span=Span())'],['../namespacetvm.ht [...]
+  ['min',['min',['../classtvm_1_1arith_1_1IntSet.html#ae5517de2862e93a801224eed98a57001',1,'tvm::arith::IntSet::min()'],['../classtvm_1_1support_1_1LinearCongruentialEngine.html#aec5f11b588fa3a12294a46c945c34411',1,'tvm::support::LinearCongruentialEngine::min()'],['../classtvm_1_1tir_1_1Min.html#a3a4403aec40029a5206e22cd334e356b',1,'tvm::tir::Min::Min()'],['../namespacetvm.html#aac2abc149c1a47944c37b560181b15c0',1,'tvm::min(PrimExpr a, PrimExpr b, Span span=Span())'],['../namespacetvm.ht [...]
   ['min_5fvalue',['min_value',['../namespacetvm.html#a3b37fa55ea93d6868751a2441996b072',1,'tvm']]],
   ['minimum',['minimum',['../namespacetvm_1_1topi.html#a7ac1dc0d99ce93090a4cdf90ab19d4b8',1,'tvm::topi::minimum(const tvm::PrimExpr &amp;a, const tvm::PrimExpr &amp;b)'],['../namespacetvm_1_1topi.html#a0e19dc06a2b1ecbb83b0942fdf836169',1,'tvm::topi::minimum(const tvm::te::Tensor &amp;A, const tvm::te::Tensor &amp;B, std::string name=&quot;T_&quot; &quot;minimum&quot;, std::string tag=kBroadcast)'],['../namespacetvm_1_1topi.html#a28d4ef4b3426bff237215ce356dd5681',1,'tvm::topi::minimum(con [...]
   ['minop',['MinOp',['../namespacetvm_1_1topi.html#aea9a989b0aaa2aef03fe8ee237d8257e',1,'tvm::topi']]],
@@ -79,7 +79,7 @@ var searchData=
   ['mixedmodulepassmanager',['MixedModulePassManager',['../namespacetvm.html#abc01352eff102d4902632d097adc0e08',1,'tvm']]],
   ['mma_5ffill',['mma_fill',['../namespacetvm_1_1tir_1_1builtin.html#a307667c449c54cef747d781771f79bab',1,'tvm::tir::builtin']]],
   ['mma_5fstore',['mma_store',['../namespacetvm_1_1tir_1_1builtin.html#a772fb68f083e71e635c50bb503903f22',1,'tvm::tir::builtin']]],
-  ['mod',['mod',['../classtvm_1_1tir_1_1ScheduleNode.html#a6dd7ec20629e09cd0be1aa49e5f57c12',1,'tvm::tir::ScheduleNode::mod()'],['../classtvm_1_1tir_1_1Mod.html#a8bb56b57ed569d8f357c4439fd8a2f13',1,'tvm::tir::Mod::Mod()'],['../namespacetvm_1_1topi.html#aaa95d3ad68932ab206efbe0a326db6a2',1,'tvm::topi::mod(const tvm::PrimExpr &amp;a, const tvm::PrimExpr &amp;b)'],['../namespacetvm_1_1topi.html#a4eb4b5a58cf4c5dbbdd4413cfd166882',1,'tvm::topi::mod(const tvm::te::Tensor &amp;A, const tvm::te: [...]
+  ['mod',['Mod',['../classtvm_1_1tir_1_1Mod.html#a8bb56b57ed569d8f357c4439fd8a2f13',1,'tvm::tir::Mod::Mod()'],['../classtvm_1_1tir_1_1ScheduleNode.html#a6dd7ec20629e09cd0be1aa49e5f57c12',1,'tvm::tir::ScheduleNode::mod()'],['../namespacetvm_1_1topi.html#aaa95d3ad68932ab206efbe0a326db6a2',1,'tvm::topi::mod(const tvm::PrimExpr &amp;a, const tvm::PrimExpr &amp;b)'],['../namespacetvm_1_1topi.html#a4eb4b5a58cf4c5dbbdd4413cfd166882',1,'tvm::topi::mod(const tvm::te::Tensor &amp;A, const tvm::te: [...]
   ['mod_5fname',['mod_name',['../classtvm_1_1runtime_1_1metadata_1_1MetadataNode.html#a1c05bb5eb88b5d55b3abeeb2de263191',1,'tvm::runtime::metadata::MetadataNode']]],
   ['modularset',['ModularSet',['../classtvm_1_1arith_1_1ModularSet.html#a9f54896d98169246c6a24cc338fde500',1,'tvm::arith::ModularSet']]],
   ['module',['Module',['../classtvm_1_1runtime_1_1Module.html#abfbc619b3b3166d63ec52e399c24bed9',1,'tvm::runtime::Module::Module()'],['../classtvm_1_1runtime_1_1Module.html#abd1380b3f813c2b6acefca3aaef425f4',1,'tvm::runtime::Module::Module(ObjectPtr&lt; Object &gt; n)']]],
diff --git a/docs/reference/api/python/auto_scheduler.html b/docs/reference/api/python/auto_scheduler.html
index 5bc38d598d..4245126ab0 100644
--- a/docs/reference/api/python/auto_scheduler.html
+++ b/docs/reference/api/python/auto_scheduler.html
@@ -1615,7 +1615,7 @@ history states as starting point to perform Evolutionary Search).</p></li>
 
 <dl class="py class">
 <dt class="sig sig-object py" id="tvm.auto_scheduler.SketchPolicy">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">SketchPolicy</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">program_cost_model</span></span><span class="o"><span class="pre">=</span></span><span class="defau [...]
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">SketchPolicy</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">program_cost_model</span></span><span class="o"><span class="pre">=</span></span><span class="defau [...]
 <dd><p>The search policy that searches in a hierarchical search space defined by sketches.
 The policy randomly samples programs from the space defined by sketches and uses evolutionary
 search to fine-tune them.</p>
@@ -1899,7 +1899,7 @@ Candidates:
 
 <dl class="py function">
 <dt class="sig sig-object py" id="tvm.auto_scheduler.auto_schedule">
-<span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">auto_schedule</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">search_policy</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em clas [...]
+<span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">auto_schedule</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">search_policy</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em clas [...]
 <dd><p>THIS API IS DEPRECATED.</p>
 <p>Run auto scheduling search for a task.</p>
 <dl class="field-list simple">
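
The SketchPolicy docstring above describes a policy that samples candidate programs from a sketch-defined search space and refines them with evolutionary search, while auto_schedule is marked deprecated. A minimal, hypothetical usage sketch in Python follows; the matmul workload, the XGBModel cost-model choice, and the trial count are illustrative assumptions, and SearchTask.tune is used in place of the deprecated auto_schedule entry point.

    import tvm
    from tvm import te, auto_scheduler

    # Hypothetical example workload; any registered compute definition works.
    @auto_scheduler.register_workload
    def matmul(N, L, M, dtype):
        A = te.placeholder((N, L), name="A", dtype=dtype)
        B = te.placeholder((L, M), name="B", dtype=dtype)
        k = te.reduce_axis((0, L), name="k")
        C = te.compute(
            (N, M),
            lambda i, j: te.sum(A[i, k] * B[k, j], axis=k),
            name="C",
        )
        return [A, B, C]

    target = tvm.target.Target("llvm")
    task = auto_scheduler.SearchTask(
        func=matmul, args=(128, 128, 128, "float32"), target=target
    )

    # SketchPolicy samples candidate programs from the hierarchical sketch
    # search space and fine-tunes them with evolutionary search, guided here
    # by an XGBoost-based cost model (an assumption; RandomModel also works).
    policy = auto_scheduler.SketchPolicy(
        task, program_cost_model=auto_scheduler.XGBModel()
    )

    # auto_schedule() is deprecated per the docs above; SearchTask.tune is
    # the supported entry point. The trial count is illustrative.
    task.tune(
        auto_scheduler.TuningOptions(num_measure_trials=64),
        search_policy=policy,
    )
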
diff --git a/docs/reference/api/typedoc/classes/bytestreamreader.html b/docs/reference/api/typedoc/classes/bytestreamreader.html
index 1c3be82f1b..dbb0ae0b7e 100644
--- a/docs/reference/api/typedoc/classes/bytestreamreader.html
+++ b/docs/reference/api/typedoc/classes/bytestreamreader.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L43">rpc_server.ts:43</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L43">rpc_server.ts:43</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -141,7 +141,7 @@
 					<div class="tsd-signature tsd-kind-icon">bytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Uint8Array</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L43">rpc_server.ts:43</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L43">rpc_server.ts:43</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -151,7 +151,7 @@
 					<div class="tsd-signature tsd-kind-icon">offset<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L42">rpc_server.ts:42</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L42">rpc_server.ts:42</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -168,7 +168,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L63">rpc_server.ts:63</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L63">rpc_server.ts:63</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">Uint8Array</span></h4>
@@ -185,7 +185,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L49">rpc_server.ts:49</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L49">rpc_server.ts:49</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -202,7 +202,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L57">rpc_server.ts:57</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L57">rpc_server.ts:57</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
diff --git a/docs/reference/api/typedoc/classes/cachedcallstack.html b/docs/reference/api/typedoc/classes/cachedcallstack.html
index f2e2873f1e..3eddda553f 100644
--- a/docs/reference/api/typedoc/classes/cachedcallstack.html
+++ b/docs/reference/api/typedoc/classes/cachedcallstack.html
@@ -144,7 +144,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L223">memory.ts:223</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L223">memory.ts:223</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -172,7 +172,7 @@
 					<div class="tsd-signature tsd-kind-icon">temp<wbr>Args<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><a href="../interfaces/disposable.html" class="tsd-signature-type">Disposable</a><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = []</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L208">memory.ts:208</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L208">memory.ts:208</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -194,7 +194,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L312">memory.ts:312</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L312">memory.ts:312</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -226,7 +226,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L284">memory.ts:284</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L284">memory.ts:284</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -262,7 +262,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L388">memory.ts:388</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L388">memory.ts:388</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -300,7 +300,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L376">memory.ts:376</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L376">memory.ts:376</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -340,7 +340,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L267">memory.ts:267</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L267">memory.ts:267</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -373,7 +373,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L243">memory.ts:243</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L243">memory.ts:243</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -390,7 +390,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L321">memory.ts:321</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L321">memory.ts:321</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -422,7 +422,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L252">memory.ts:252</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L252">memory.ts:252</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -444,7 +444,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L359">memory.ts:359</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L359">memory.ts:359</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -470,7 +470,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L342">memory.ts:342</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L342">memory.ts:342</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -496,7 +496,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L350">memory.ts:350</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L350">memory.ts:350</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -522,7 +522,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L326">memory.ts:326</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L326">memory.ts:326</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -548,7 +548,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L363">memory.ts:363</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L363">memory.ts:363</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -574,7 +574,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L346">memory.ts:346</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L346">memory.ts:346</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -600,7 +600,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L334">memory.ts:334</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L334">memory.ts:334</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
diff --git a/docs/reference/api/typedoc/classes/dldatatype.html b/docs/reference/api/typedoc/classes/dldatatype.html
index 1447b1363d..eef3ad155e 100644
--- a/docs/reference/api/typedoc/classes/dldatatype.html
+++ b/docs/reference/api/typedoc/classes/dldatatype.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L262">runtime.ts:262</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L262">runtime.ts:262</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -147,7 +147,7 @@
 					<div class="tsd-signature tsd-kind-icon">bits<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L260">runtime.ts:260</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L260">runtime.ts:260</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -162,7 +162,7 @@
 					<div class="tsd-signature tsd-kind-icon">code<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L258">runtime.ts:258</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L258">runtime.ts:258</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -177,7 +177,7 @@
 					<div class="tsd-signature tsd-kind-icon">lanes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L262">runtime.ts:262</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L262">runtime.ts:262</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -199,7 +199,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L279">runtime.ts:279</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L279">runtime.ts:279</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -216,7 +216,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L270">runtime.ts:270</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L270">runtime.ts:270</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">string</span></h4>
diff --git a/docs/reference/api/typedoc/classes/dldevice.html b/docs/reference/api/typedoc/classes/dldevice.html
index 8c3e5c05ed..1beb6c4cd3 100644
--- a/docs/reference/api/typedoc/classes/dldevice.html
+++ b/docs/reference/api/typedoc/classes/dldevice.html
@@ -118,7 +118,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L202">runtime.ts:202</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L202">runtime.ts:202</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -146,7 +146,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<wbr>Id<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L200">runtime.ts:200</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L200">runtime.ts:200</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -161,7 +161,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L198">runtime.ts:198</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L198">runtime.ts:198</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -183,7 +183,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L223">runtime.ts:223</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L223">runtime.ts:223</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -205,7 +205,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L230">runtime.ts:230</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L230">runtime.ts:230</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">string</span></h4>
diff --git a/docs/reference/api/typedoc/classes/environment.html b/docs/reference/api/typedoc/classes/environment.html
index bad6583f5a..7118c66246 100644
--- a/docs/reference/api/typedoc/classes/environment.html
+++ b/docs/reference/api/typedoc/classes/environment.html
@@ -125,7 +125,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/environment.ts#L86">environment.ts:86</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/environment.ts#L86">environment.ts:86</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -169,7 +169,7 @@
 					<aside class="tsd-sources">
 						<p>Implementation of <a href="../interfaces/libraryprovider.html">LibraryProvider</a>.<a href="../interfaces/libraryprovider.html#imports">imports</a></p>
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/environment.ts#L70">environment.ts:70</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/environment.ts#L70">environment.ts:70</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -179,7 +179,7 @@
 					<div class="tsd-signature tsd-kind-icon">logger<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>msg<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/environment.ts#L69">environment.ts:69</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/environment.ts#L69">environment.ts:69</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -210,7 +210,7 @@
 					<div class="tsd-signature tsd-kind-icon">packedCFunc<wbr>Table<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">ctypes.FTVMWasmPackedCFunc</span><span class="tsd-signature-symbol"> | </span><span class="tsd-signature-type">undefined</span><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = [undefined,]</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/environment.ts#L78">environment.ts:78</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/environment.ts#L78">environment.ts:78</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -228,7 +228,7 @@
 					<div class="tsd-signature tsd-kind-icon">packedCFunc<wbr>Table<wbr>Free<wbr>Id<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = []</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/environment.ts#L84">environment.ts:84</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/environment.ts#L84">environment.ts:84</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -250,7 +250,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/environment.ts#L105">environment.ts:105</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/environment.ts#L105">environment.ts:105</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/ffilibrary.html b/docs/reference/api/typedoc/classes/ffilibrary.html
index e81a2e68ed..7da11f5bb5 100644
--- a/docs/reference/api/typedoc/classes/ffilibrary.html
+++ b/docs/reference/api/typedoc/classes/ffilibrary.html
@@ -131,7 +131,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L49">runtime.ts:49</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L49">runtime.ts:49</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -156,7 +156,7 @@
 					<div class="tsd-signature tsd-kind-icon">exports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">Function</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L46">runtime.ts:46</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L46">runtime.ts:46</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -166,7 +166,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L45">runtime.ts:45</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L45">runtime.ts:45</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">wasm32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">boolean</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L44">runtime.ts:44</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L44">runtime.ts:44</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -186,7 +186,7 @@
 					<div class="tsd-signature tsd-kind-icon">webGPUContext<span class="tsd-signature-symbol">:</span> <a href="webgpucontext.html" class="tsd-signature-type">WebGPUContext</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L47">runtime.ts:47</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L47">runtime.ts:47</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -203,7 +203,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L76">runtime.ts:76</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L76">runtime.ts:76</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -226,7 +226,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L66">runtime.ts:66</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L66">runtime.ts:66</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -243,7 +243,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L84">runtime.ts:84</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L84">runtime.ts:84</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <a href="cachedcallstack.html" class="tsd-signature-type">CachedCallStack</a></h4>
@@ -260,7 +260,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L95">runtime.ts:95</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L95">runtime.ts:95</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -283,7 +283,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L72">runtime.ts:72</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L72">runtime.ts:72</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
diff --git a/docs/reference/api/typedoc/classes/graphexecutor.html b/docs/reference/api/typedoc/classes/graphexecutor.html
index 584f3e3a29..9c9ca15cd2 100644
--- a/docs/reference/api/typedoc/classes/graphexecutor.html
+++ b/docs/reference/api/typedoc/classes/graphexecutor.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L583">runtime.ts:583</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L583">runtime.ts:583</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -162,7 +162,7 @@
 					<div class="tsd-signature tsd-kind-icon">module<span class="tsd-signature-symbol">:</span> <a href="module.html" class="tsd-signature-type">Module</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L579">runtime.ts:579</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L579">runtime.ts:579</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -179,7 +179,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L654">runtime.ts:654</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L654">runtime.ts:654</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -224,7 +224,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L597">runtime.ts:597</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L597">runtime.ts:597</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -241,7 +241,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L631">runtime.ts:631</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L631">runtime.ts:631</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -279,7 +279,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L644">runtime.ts:644</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L644">runtime.ts:644</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -310,7 +310,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L621">runtime.ts:621</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L621">runtime.ts:621</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -332,7 +332,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L609">runtime.ts:609</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L609">runtime.ts:609</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/instance.html b/docs/reference/api/typedoc/classes/instance.html
index 7c3a7c5291..f72a35a76c 100644
--- a/docs/reference/api/typedoc/classes/instance.html
+++ b/docs/reference/api/typedoc/classes/instance.html
@@ -139,7 +139,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L692">runtime.ts:692</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L692">runtime.ts:692</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -202,7 +202,7 @@
 					<div class="tsd-signature tsd-kind-icon">exports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">Function</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L684">runtime.ts:684</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L684">runtime.ts:684</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -212,7 +212,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L683">runtime.ts:683</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L683">runtime.ts:683</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -229,7 +229,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L932">runtime.ts:932</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L932">runtime.ts:932</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -260,7 +260,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L994">runtime.ts:994</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L994">runtime.ts:994</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -303,7 +303,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L924">runtime.ts:924</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L924">runtime.ts:924</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -341,7 +341,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L732">runtime.ts:732</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L732">runtime.ts:732</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -358,7 +358,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L952">runtime.ts:952</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L952">runtime.ts:952</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -402,7 +402,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L816">runtime.ts:816</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L816">runtime.ts:816</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -434,7 +434,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L1033">runtime.ts:1033</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L1033">runtime.ts:1033</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -465,7 +465,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L846">runtime.ts:846</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L846">runtime.ts:846</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -497,7 +497,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L750">runtime.ts:750</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L750">runtime.ts:750</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -520,7 +520,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L1013">runtime.ts:1013</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L1013">runtime.ts:1013</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -568,7 +568,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L789">runtime.ts:789</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L789">runtime.ts:789</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -608,7 +608,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L914">runtime.ts:914</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L914">runtime.ts:914</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -646,7 +646,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L1145">runtime.ts:1145</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L1145">runtime.ts:1145</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -698,7 +698,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L740">runtime.ts:740</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L740">runtime.ts:740</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -722,7 +722,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L868">runtime.ts:868</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L868">runtime.ts:868</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -754,7 +754,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L857">runtime.ts:857</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L857">runtime.ts:857</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -786,7 +786,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L940">runtime.ts:940</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L940">runtime.ts:940</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/memory.html b/docs/reference/api/typedoc/classes/memory.html
index 8da55edb57..db80fccf71 100644
--- a/docs/reference/api/typedoc/classes/memory.html
+++ b/docs/reference/api/typedoc/classes/memory.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L40">memory.ts:40</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L40">memory.ts:40</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -152,7 +152,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Memory</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L32">memory.ts:32</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L32">memory.ts:32</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -162,7 +162,7 @@
 					<div class="tsd-signature tsd-kind-icon">wasm32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">boolean</span><span class="tsd-signature-symbol"> = true</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L33">memory.ts:33</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L33">memory.ts:33</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -179,7 +179,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L154">memory.ts:154</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L154">memory.ts:154</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -210,7 +210,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L90">memory.ts:90</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L90">memory.ts:90</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -233,7 +233,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L97">memory.ts:97</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L97">memory.ts:97</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -256,7 +256,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L74">memory.ts:74</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L74">memory.ts:74</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -279,7 +279,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L81">memory.ts:81</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L81">memory.ts:81</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -302,7 +302,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L104">memory.ts:104</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L104">memory.ts:104</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -325,7 +325,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L132">memory.ts:132</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L132">memory.ts:132</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -362,7 +362,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L145">memory.ts:145</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L145">memory.ts:145</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -393,7 +393,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L60">memory.ts:60</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L60">memory.ts:60</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -416,7 +416,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L67">memory.ts:67</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L67">memory.ts:67</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -439,7 +439,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L53">memory.ts:53</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L53">memory.ts:53</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -462,7 +462,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L114">memory.ts:114</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L114">memory.ts:114</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -485,7 +485,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L124">memory.ts:124</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L124">memory.ts:124</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -502,7 +502,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/memory.ts#L175">memory.ts:175</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/memory.ts#L175">memory.ts:175</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/module.html b/docs/reference/api/typedoc/classes/module.html
index 87aa1289c6..a5267d766c 100644
--- a/docs/reference/api/typedoc/classes/module.html
+++ b/docs/reference/api/typedoc/classes/module.html
@@ -124,7 +124,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L504">runtime.ts:504</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L504">runtime.ts:504</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -170,7 +170,7 @@
 					<div class="tsd-signature tsd-kind-icon">handle<span class="tsd-signature-symbol">:</span> <a href="../index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L502">runtime.ts:502</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L502">runtime.ts:502</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -187,7 +187,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L516">runtime.ts:516</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L516">runtime.ts:516</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -204,7 +204,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L530">runtime.ts:530</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L530">runtime.ts:530</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -236,7 +236,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L561">runtime.ts:561</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L561">runtime.ts:561</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/ndarray.html b/docs/reference/api/typedoc/classes/ndarray.html
index 9e565a8e5c..c9a3927e4a 100644
--- a/docs/reference/api/typedoc/classes/ndarray.html
+++ b/docs/reference/api/typedoc/classes/ndarray.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L304">runtime.ts:304</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L304">runtime.ts:304</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -158,7 +158,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<span class="tsd-signature-symbol">:</span> <a href="dldevice.html" class="tsd-signature-type">DLDevice</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L297">runtime.ts:297</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L297">runtime.ts:297</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -173,7 +173,7 @@
 					<div class="tsd-signature tsd-kind-icon">dtype<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L293">runtime.ts:293</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L293">runtime.ts:293</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -188,7 +188,7 @@
 					<div class="tsd-signature tsd-kind-icon">handle<span class="tsd-signature-symbol">:</span> <a href="../index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L289">runtime.ts:289</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L289">runtime.ts:289</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -203,7 +203,7 @@
 					<div class="tsd-signature tsd-kind-icon">ndim<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L291">runtime.ts:291</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L291">runtime.ts:291</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -218,7 +218,7 @@
 					<div class="tsd-signature tsd-kind-icon">shape<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L295">runtime.ts:295</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L295">runtime.ts:295</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -240,7 +240,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L370">runtime.ts:370</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L370">runtime.ts:370</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -273,7 +273,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L414">runtime.ts:414</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L414">runtime.ts:414</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -305,7 +305,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L355">runtime.ts:355</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L355">runtime.ts:355</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -322,7 +322,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L474">runtime.ts:474</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L474">runtime.ts:474</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -346,7 +346,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L443">runtime.ts:443</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L443">runtime.ts:443</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/packedfunccell.html b/docs/reference/api/typedoc/classes/packedfunccell.html
index 3e6c8d3886..d6d505693b 100644
--- a/docs/reference/api/typedoc/classes/packedfunccell.html
+++ b/docs/reference/api/typedoc/classes/packedfunccell.html
@@ -122,7 +122,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L158">runtime.ts:158</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L158">runtime.ts:158</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -147,7 +147,7 @@
 					<div class="tsd-signature tsd-kind-icon">handle<span class="tsd-signature-symbol">:</span> <a href="../index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L157">runtime.ts:157</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L157">runtime.ts:157</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -164,7 +164,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L165">runtime.ts:165</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L165">runtime.ts:165</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
diff --git a/docs/reference/api/typedoc/classes/rpcserver.html b/docs/reference/api/typedoc/classes/rpcserver.html
index 198d3f3c3d..913fda0be7 100644
--- a/docs/reference/api/typedoc/classes/rpcserver.html
+++ b/docs/reference/api/typedoc/classes/rpcserver.html
@@ -115,7 +115,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L92">rpc_server.ts:92</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L92">rpc_server.ts:92</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">get<wbr>Imports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">unknown</span><span class="tsd-signat [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L82">rpc_server.ts:82</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L82">rpc_server.ts:82</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -201,7 +201,7 @@
 					<div class="tsd-signature tsd-kind-icon">key<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L78">rpc_server.ts:78</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L78">rpc_server.ts:78</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -211,7 +211,7 @@
 					<div class="tsd-signature tsd-kind-icon">logger<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>msg<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L81">rpc_server.ts:81</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L81">rpc_server.ts:81</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -242,7 +242,7 @@
 					<div class="tsd-signature tsd-kind-icon">socket<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">WebSocket</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L79">rpc_server.ts:79</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L79">rpc_server.ts:79</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -252,7 +252,7 @@
 					<div class="tsd-signature tsd-kind-icon">state<span class="tsd-signature-symbol">:</span> <a href="../enums/rpcserverstate.html" class="tsd-signature-type">RPCServerState</a><span class="tsd-signature-symbol"> = RPCServerState.InitHeader</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L80">rpc_server.ts:80</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L80">rpc_server.ts:80</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -262,7 +262,7 @@
 					<div class="tsd-signature tsd-kind-icon">url<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L77">rpc_server.ts:77</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L77">rpc_server.ts:77</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/classes/scalar.html b/docs/reference/api/typedoc/classes/scalar.html
index 43a2e4df5d..b30620c50d 100644
--- a/docs/reference/api/typedoc/classes/scalar.html
+++ b/docs/reference/api/typedoc/classes/scalar.html
@@ -112,7 +112,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L145">runtime.ts:145</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L145">runtime.ts:145</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -137,7 +137,7 @@
 					<div class="tsd-signature tsd-kind-icon">dtype<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L145">runtime.ts:145</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L145">runtime.ts:145</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -152,7 +152,7 @@
 					<div class="tsd-signature tsd-kind-icon">value<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L143">runtime.ts:143</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L143">runtime.ts:143</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/webgpucontext.html b/docs/reference/api/typedoc/classes/webgpucontext.html
index edef5b836e..a88e4912b0 100644
--- a/docs/reference/api/typedoc/classes/webgpucontext.html
+++ b/docs/reference/api/typedoc/classes/webgpucontext.html
@@ -120,7 +120,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/webgpu.ts#L57">webgpu.ts:57</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/webgpu.ts#L57">webgpu.ts:57</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -145,7 +145,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">GPUDevice</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/webgpu.ts#L50">webgpu.ts:50</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/webgpu.ts#L50">webgpu.ts:50</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -155,7 +155,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/webgpu.ts#L51">webgpu.ts:51</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/webgpu.ts#L51">webgpu.ts:51</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -172,7 +172,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/webgpu.ts#L84">webgpu.ts:84</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/webgpu.ts#L84">webgpu.ts:84</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -209,7 +209,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/webgpu.ts#L172">webgpu.ts:172</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/webgpu.ts#L172">webgpu.ts:172</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -238,7 +238,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/webgpu.ts#L67">webgpu.ts:67</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/webgpu.ts#L67">webgpu.ts:67</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/enums/argtypecode.html b/docs/reference/api/typedoc/enums/argtypecode.html
index 958adadd58..dd5c877658 100644
--- a/docs/reference/api/typedoc/enums/argtypecode.html
+++ b/docs/reference/api/typedoc/enums/argtypecode.html
@@ -106,7 +106,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLDevice<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 6</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L220">ctypes.ts:220</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L220">ctypes.ts:220</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -116,7 +116,7 @@
 					<div class="tsd-signature tsd-kind-icon">Float<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L216">ctypes.ts:216</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L216">ctypes.ts:216</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -126,7 +126,7 @@
 					<div class="tsd-signature tsd-kind-icon">Int<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L214">ctypes.ts:214</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L214">ctypes.ts:214</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -136,7 +136,7 @@
 					<div class="tsd-signature tsd-kind-icon">Null<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L218">ctypes.ts:218</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L218">ctypes.ts:218</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -146,7 +146,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMBytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 12</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L226">ctypes.ts:226</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L226">ctypes.ts:226</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -156,7 +156,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMDLTensor<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 7</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L221">ctypes.ts:221</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L221">ctypes.ts:221</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -166,7 +166,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMData<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 5</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L219">ctypes.ts:219</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L219">ctypes.ts:219</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMModule<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 9</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L223">ctypes.ts:223</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L223">ctypes.ts:223</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -186,7 +186,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMNDArray<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 13</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L227">ctypes.ts:227</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L227">ctypes.ts:227</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -196,7 +196,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMObject<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L222">ctypes.ts:222</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L222">ctypes.ts:222</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -206,7 +206,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMObjectRValue<wbr>Ref<wbr>Arg<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 14</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L228">ctypes.ts:228</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L228">ctypes.ts:228</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -216,7 +216,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMOpaque<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 3</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L217">ctypes.ts:217</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L217">ctypes.ts:217</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -226,7 +226,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMPacked<wbr>Func<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 10</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L224">ctypes.ts:224</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L224">ctypes.ts:224</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -236,7 +236,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 11</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L225">ctypes.ts:225</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L225">ctypes.ts:225</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -246,7 +246,7 @@
 					<div class="tsd-signature tsd-kind-icon">UInt<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L215">ctypes.ts:215</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L215">ctypes.ts:215</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/aynccallbackcode.html b/docs/reference/api/typedoc/enums/aynccallbackcode.html
index 27f6cf96c1..c7513a676f 100644
--- a/docs/reference/api/typedoc/enums/aynccallbackcode.html
+++ b/docs/reference/api/typedoc/enums/aynccallbackcode.html
@@ -93,7 +93,7 @@
 					<div class="tsd-signature tsd-kind-icon">k<wbr>Exception<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 5</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L676">runtime.ts:676</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L676">runtime.ts:676</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -103,7 +103,7 @@
 					<div class="tsd-signature tsd-kind-icon">k<wbr>Return<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L675">runtime.ts:675</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L675">runtime.ts:675</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/dldatatypecode.html b/docs/reference/api/typedoc/enums/dldatatypecode.html
index 32b1f4f691..42c7f3e3f5 100644
--- a/docs/reference/api/typedoc/enums/dldatatypecode.html
+++ b/docs/reference/api/typedoc/enums/dldatatypecode.html
@@ -95,7 +95,7 @@
 					<div class="tsd-signature tsd-kind-icon">Float<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L242">runtime.ts:242</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L242">runtime.ts:242</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -105,7 +105,7 @@
 					<div class="tsd-signature tsd-kind-icon">Int<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L240">runtime.ts:240</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L240">runtime.ts:240</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -115,7 +115,7 @@
 					<div class="tsd-signature tsd-kind-icon">Opaque<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 3</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L243">runtime.ts:243</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L243">runtime.ts:243</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -125,7 +125,7 @@
 					<div class="tsd-signature tsd-kind-icon">UInt<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L241">runtime.ts:241</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L241">runtime.ts:241</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/rpcserverstate.html b/docs/reference/api/typedoc/enums/rpcserverstate.html
index e4ded00e25..6de7bf7ec8 100644
--- a/docs/reference/api/typedoc/enums/rpcserverstate.html
+++ b/docs/reference/api/typedoc/enums/rpcserverstate.html
@@ -90,7 +90,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Header<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L27">rpc_server.ts:27</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L27">rpc_server.ts:27</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -100,7 +100,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Header<wbr>Key<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L28">rpc_server.ts:28</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L28">rpc_server.ts:28</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -110,7 +110,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Server<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L29">rpc_server.ts:29</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L29">rpc_server.ts:29</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -120,7 +120,7 @@
 					<div class="tsd-signature tsd-kind-icon">Receive<wbr>Packet<wbr>Body<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L32">rpc_server.ts:32</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L32">rpc_server.ts:32</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -130,7 +130,7 @@
 					<div class="tsd-signature tsd-kind-icon">Receive<wbr>Packet<wbr>Header<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L31">rpc_server.ts:31</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L31">rpc_server.ts:31</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -140,7 +140,7 @@
 					<div class="tsd-signature tsd-kind-icon">Wait<wbr>For<wbr>Callback<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L30">rpc_server.ts:30</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L30">rpc_server.ts:30</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/sizeof.html b/docs/reference/api/typedoc/enums/sizeof.html
index de934711c2..2697022b08 100644
--- a/docs/reference/api/typedoc/enums/sizeof.html
+++ b/docs/reference/api/typedoc/enums/sizeof.html
@@ -100,7 +100,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLData<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = I32</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L206">ctypes.ts:206</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L206">ctypes.ts:206</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -110,7 +110,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLDevice<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = I32 + I32</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L207">ctypes.ts:207</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L207">ctypes.ts:207</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -120,7 +120,7 @@
 					<div class="tsd-signature tsd-kind-icon">F32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L203">ctypes.ts:203</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L203">ctypes.ts:203</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -130,7 +130,7 @@
 					<div class="tsd-signature tsd-kind-icon">F64<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L204">ctypes.ts:204</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L204">ctypes.ts:204</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -140,7 +140,7 @@
 					<div class="tsd-signature tsd-kind-icon">I32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L201">ctypes.ts:201</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L201">ctypes.ts:201</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -150,7 +150,7 @@
 					<div class="tsd-signature tsd-kind-icon">I64<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L202">ctypes.ts:202</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L202">ctypes.ts:202</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -160,7 +160,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMValue<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L205">ctypes.ts:205</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L205">ctypes.ts:205</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -170,7 +170,7 @@
 					<div class="tsd-signature tsd-kind-icon">U16<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L200">ctypes.ts:200</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L200">ctypes.ts:200</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -180,7 +180,7 @@
 					<div class="tsd-signature tsd-kind-icon">U8<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L199">ctypes.ts:199</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L199">ctypes.ts:199</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/index.html b/docs/reference/api/typedoc/index.html
index ca6faee802..dd0dd36899 100644
--- a/docs/reference/api/typedoc/index.html
+++ b/docs/reference/api/typedoc/index.html
@@ -174,7 +174,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Alloc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>shape<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, ndim<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, dtypeCode<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, dtypeBits<span class="tsd [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L112">ctypes.ts:112</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L112">ctypes.ts:112</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -238,7 +238,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>From<wbr>Bytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, data<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nbytes<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">num [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L128">ctypes.ts:128</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L128">ctypes.ts:128</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -282,7 +282,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>From<wbr>To<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>from<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, to<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, stream<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-sig [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L144">ctypes.ts:144</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L144">ctypes.ts:144</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -326,7 +326,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>ToBytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, data<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nbytes<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</sp [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L136">ctypes.ts:136</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L136">ctypes.ts:136</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -370,7 +370,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L121">ctypes.ts:121</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L121">ctypes.ts:121</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -406,7 +406,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMBackend<wbr>PackedCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>argValues<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, argCodes<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nargs<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number< [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L160">ctypes.ts:160</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L160">ctypes.ts:160</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -458,7 +458,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMCFunc<wbr>Set<wbr>Return<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>ret<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, value<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCode<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signa [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L77">ctypes.ts:77</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L77">ctypes.ts:77</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -506,7 +506,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMCb<wbr>Arg<wbr>ToReturn<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>value<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, code<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span c [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L83">ctypes.ts:83</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L83">ctypes.ts:83</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -545,7 +545,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Call<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>func<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, argValues<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCode<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-t [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L67">ctypes.ts:67</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L67">ctypes.ts:67</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -601,7 +601,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>func<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L57">ctypes.ts:57</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L57">ctypes.ts:57</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -637,7 +637,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Get<wbr>Global<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>name<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span cla [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L100">ctypes.ts:100</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L100">ctypes.ts:100</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -676,7 +676,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>List<wbr>Global<wbr>Names<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>outSize<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, outArray<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&g [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L88">ctypes.ts:88</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L88">ctypes.ts:88</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -715,7 +715,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Register<wbr>Global<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>name<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, f<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, override<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</spa [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L94">ctypes.ts:94</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L94">ctypes.ts:94</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -758,7 +758,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMGet<wbr>Last<wbr>Error<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L34">ctypes.ts:34</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L34">ctypes.ts:34</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -788,7 +788,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L52">ctypes.ts:52</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L52">ctypes.ts:52</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -824,7 +824,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Get<wbr>Function<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, funcName<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, queryImports<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">numbe [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L42">ctypes.ts:42</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L42">ctypes.ts:42</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -872,7 +872,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Import<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, dep<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-si [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L48">ctypes.ts:48</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L48">ctypes.ts:48</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -912,7 +912,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMSynchronize<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>deviceType<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, deviceId<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, stream<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signatur [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L150">ctypes.ts:150</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L150">ctypes.ts:150</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -954,7 +954,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Alloc<wbr>Space<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>size<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L167">ctypes.ts:167</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L167">ctypes.ts:167</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -990,7 +990,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Free<wbr>Space<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>ptr<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L170">ctypes.ts:170</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L170">ctypes.ts:170</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1026,7 +1026,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Func<wbr>Create<wbr>FromCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>resource<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&g [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L187">ctypes.ts:187</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L187">ctypes.ts:187</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1066,7 +1066,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>PackedCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>args<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCodes<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nargs<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L179">ctypes.ts:179</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L179">ctypes.ts:179</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1118,7 +1118,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>PackedCFunc<wbr>Finalizer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>resourceHandle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L193">ctypes.ts:193</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L193">ctypes.ts:193</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1154,7 +1154,7 @@
 					<div class="tsd-signature tsd-kind-icon">GPUPointer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/webgpu.ts#L25">webgpu.ts:25</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/webgpu.ts#L25">webgpu.ts:25</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1169,7 +1169,7 @@
 					<div class="tsd-signature tsd-kind-icon">Packed<wbr>Func<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">...</span>args<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol"> &amp; </span><a href="interfaces/disp [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L36">runtime.ts:36</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L36">runtime.ts:36</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1184,7 +1184,7 @@
 					<div class="tsd-signature tsd-kind-icon">Pointer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L25">ctypes.ts:25</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L25">ctypes.ts:25</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1199,7 +1199,7 @@
 					<div class="tsd-signature tsd-kind-icon">Ptr<wbr>Offset<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/ctypes.ts#L28">ctypes.ts:28</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/ctypes.ts#L28">ctypes.ts:28</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1217,7 +1217,7 @@
 					<div class="tsd-signature tsd-kind-icon">RPC_<wbr>MAGIC<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">1045105</span><span class="tsd-signature-symbol"> = 1045105</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/rpc_server.ts#L36">rpc_server.ts:36</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/rpc_server.ts#L36">rpc_server.ts:36</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1239,7 +1239,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/support.ts#L25">support.ts:25</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/support.ts#L25">support.ts:25</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1271,7 +1271,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/support.ts#L39">support.ts:39</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/support.ts#L39">support.ts:39</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1300,7 +1300,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/support.ts#L52">support.ts:52</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/support.ts#L52">support.ts:52</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1337,7 +1337,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/compact.ts#L38">compact.ts:38</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/compact.ts#L38">compact.ts:38</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1368,7 +1368,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/webgpu.ts#L30">webgpu.ts:30</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/webgpu.ts#L30">webgpu.ts:30</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1390,7 +1390,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/environment.ts#L32">environment.ts:32</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/environment.ts#L32">environment.ts:32</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1421,7 +1421,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/compact.ts#L24">compact.ts:24</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/compact.ts#L24">compact.ts:24</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1443,7 +1443,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L1367">runtime.ts:1367</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L1367">runtime.ts:1367</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1508,7 +1508,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/support.ts#L62">support.ts:62</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/support.ts#L62">support.ts:62</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1530,7 +1530,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLData<wbr>Type<wbr>Code<wbr>ToStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L246">runtime.ts:246</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L246">runtime.ts:246</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1539,7 +1539,7 @@
 						<div class="tsd-signature tsd-kind-icon">0<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;int&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L247">runtime.ts:247</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L247">runtime.ts:247</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1549,7 +1549,7 @@
 						<div class="tsd-signature tsd-kind-icon">1<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;uint&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L248">runtime.ts:248</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L248">runtime.ts:248</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1559,7 +1559,7 @@
 						<div class="tsd-signature tsd-kind-icon">2<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;float&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L249">runtime.ts:249</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L249">runtime.ts:249</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1569,7 +1569,7 @@
 						<div class="tsd-signature tsd-kind-icon">3<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;handle&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L250">runtime.ts:250</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L250">runtime.ts:250</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1580,7 +1580,7 @@
 					<div class="tsd-signature tsd-kind-icon">Device<wbr>Enum<wbr>ToStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L175">runtime.ts:175</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L175">runtime.ts:175</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1589,7 +1589,7 @@
 						<div class="tsd-signature tsd-kind-icon">1<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;cpu&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L176">runtime.ts:176</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L176">runtime.ts:176</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1599,7 +1599,7 @@
 						<div class="tsd-signature tsd-kind-icon">15<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;webgpu&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L180">runtime.ts:180</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L180">runtime.ts:180</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1609,7 +1609,7 @@
 						<div class="tsd-signature tsd-kind-icon">2<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;cuda&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L177">runtime.ts:177</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L177">runtime.ts:177</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1619,7 +1619,7 @@
 						<div class="tsd-signature tsd-kind-icon">4<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;opencl&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L178">runtime.ts:178</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L178">runtime.ts:178</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1629,7 +1629,7 @@
 						<div class="tsd-signature tsd-kind-icon">8<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;metal&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L179">runtime.ts:179</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L179">runtime.ts:179</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1640,7 +1640,7 @@
 					<div class="tsd-signature tsd-kind-icon">Device<wbr>Str<wbr>ToEnum<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L183">runtime.ts:183</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L183">runtime.ts:183</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1649,7 +1649,7 @@
 						<div class="tsd-signature tsd-kind-icon">cl<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 4</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L186">runtime.ts:186</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L186">runtime.ts:186</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1659,7 +1659,7 @@
 						<div class="tsd-signature tsd-kind-icon">cpu<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 1</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L184">runtime.ts:184</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L184">runtime.ts:184</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1669,7 +1669,7 @@
 						<div class="tsd-signature tsd-kind-icon">cuda<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 2</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L185">runtime.ts:185</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L185">runtime.ts:185</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1679,7 +1679,7 @@
 						<div class="tsd-signature tsd-kind-icon">metal<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 8</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L189">runtime.ts:189</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L189">runtime.ts:189</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1689,7 +1689,7 @@
 						<div class="tsd-signature tsd-kind-icon">opencl<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 4</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L187">runtime.ts:187</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L187">runtime.ts:187</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1699,7 +1699,7 @@
 						<div class="tsd-signature tsd-kind-icon">vulkan<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 7</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L188">runtime.ts:188</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L188">runtime.ts:188</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1709,7 +1709,7 @@
 						<div class="tsd-signature tsd-kind-icon">webgpu<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 15</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/runtime.ts#L190">runtime.ts:190</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/runtime.ts#L190">runtime.ts:190</a></li>
 							</ul>
 						</aside>
 					</section>
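
The object literals documented above give the full contents of DLDataTypeCodeToStr, DeviceEnumToStr, and DeviceStrToEnum in web/src/runtime.ts. The sketch below mirrors those tables (it is not the library's export) and adds a hypothetical round-trip helper showing how a device alias such as "cl" normalizes to "opencl":

    // Mirrors of the tables shown above; values are exactly those documented.
    const DLDataTypeCodeToStr: Record<number, string> = {
      0: "int", 1: "uint", 2: "float", 3: "handle",
    };
    const DeviceEnumToStr: Record<number, string> = {
      1: "cpu", 2: "cuda", 4: "opencl", 8: "metal", 15: "webgpu",
    };
    const DeviceStrToEnum: Record<string, number> = {
      cpu: 1, cuda: 2, cl: 4, opencl: 4, vulkan: 7, metal: 8, webgpu: 15,
    };

    // Hypothetical helper: normalize any accepted device string to its
    // canonical name via the enum value.
    function canonicalDeviceName(name: string): string {
      const code = DeviceStrToEnum[name];
      if (code === undefined) throw new Error(`unknown device: ${name}`);
      // The documented enum-to-string table has no entry for 7 (vulkan),
      // so fall back to the input name in that case.
      return DeviceEnumToStr[code] ?? name;
    }
    // canonicalDeviceName("cl") === "opencl"
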
diff --git a/docs/reference/api/typedoc/interfaces/disposable.html b/docs/reference/api/typedoc/interfaces/disposable.html
index 9683824180..351a32159a 100644
--- a/docs/reference/api/typedoc/interfaces/disposable.html
+++ b/docs/reference/api/typedoc/interfaces/disposable.html
@@ -113,7 +113,7 @@
 					<div class="tsd-signature tsd-kind-icon">dispose<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/types.ts#L52">types.ts:52</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/types.ts#L52">types.ts:52</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/interfaces/functioninfo.html b/docs/reference/api/typedoc/interfaces/functioninfo.html
index 3ee2373eb1..4d5c7cb4d6 100644
--- a/docs/reference/api/typedoc/interfaces/functioninfo.html
+++ b/docs/reference/api/typedoc/interfaces/functioninfo.html
@@ -95,7 +95,7 @@
 					<div class="tsd-signature tsd-kind-icon">arg_<wbr>types<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/webgpu.ts#L41">webgpu.ts:41</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/webgpu.ts#L41">webgpu.ts:41</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -105,7 +105,7 @@
 					<div class="tsd-signature tsd-kind-icon">launch_<wbr>param_<wbr>tags<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/webgpu.ts#L42">webgpu.ts:42</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/webgpu.ts#L42">webgpu.ts:42</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -115,7 +115,7 @@
 					<div class="tsd-signature tsd-kind-icon">name<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/webgpu.ts#L40">webgpu.ts:40</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/webgpu.ts#L40">webgpu.ts:40</a></li>
 						</ul>
 					</aside>
 				</section>
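
Per the three members above, FunctionInfo pairs a kernel name with its argument types and launch-parameter tags (web/src/webgpu.ts). A short sketch with hypothetical values:

    // Shape as documented above; the literal below is hypothetical.
    interface FunctionInfo {
      name: string;
      arg_types: Array<string>;
      launch_param_tags: Array<string>;
    }

    const exampleInfo: FunctionInfo = {
      name: "example_kernel",           // hypothetical kernel name
      arg_types: ["handle", "handle"],
      launch_param_tags: ["blockIdx.x", "threadIdx.x"],
    };
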
diff --git a/docs/reference/api/typedoc/interfaces/libraryprovider.html b/docs/reference/api/typedoc/interfaces/libraryprovider.html
index f866f2d94f..042f876531 100644
--- a/docs/reference/api/typedoc/interfaces/libraryprovider.html
+++ b/docs/reference/api/typedoc/interfaces/libraryprovider.html
@@ -112,7 +112,7 @@
 					<div class="tsd-signature tsd-kind-icon">imports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/types.ts#L34">types.ts:34</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/types.ts#L34">types.ts:34</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -127,7 +127,7 @@
 					<div class="tsd-signature tsd-kind-icon">start<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>inst<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">Instance</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/c9b401600/web/src/types.ts#L39">types.ts:39</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/328122675/web/src/types.ts#L39">types.ts:39</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/searchindex.js b/docs/searchindex.js
index c0d4e120f1..d1dd524927 100644
--- a/docs/searchindex.js
+++ b/docs/searchindex.js
@@ -1 +1 @@
-Search.setIndex({docnames:["arch/benchmark","arch/convert_layout","arch/debugger","arch/device_target_interactions","arch/frontend/tensorflow","arch/hybrid_script","arch/index","arch/inferbound","arch/introduction_to_module_serialization","arch/microtvm_design","arch/microtvm_project_api","arch/model_library_format","arch/pass_infra","arch/relay_intro","arch/relay_op_strategy","arch/runtime","arch/runtimes/vulkan","arch/security","arch/virtual_machine","contribute/ci","contribute/code_gu [...]
\ No newline at end of file
+Search.setIndex({docnames:["arch/benchmark","arch/convert_layout","arch/debugger","arch/device_target_interactions","arch/frontend/tensorflow","arch/hybrid_script","arch/index","arch/inferbound","arch/introduction_to_module_serialization","arch/microtvm_design","arch/microtvm_project_api","arch/model_library_format","arch/pass_infra","arch/relay_intro","arch/relay_op_strategy","arch/runtime","arch/runtimes/vulkan","arch/security","arch/virtual_machine","contribute/ci","contribute/code_gu [...]
\ No newline at end of file
diff --git a/docs/topic/vta/tutorials/autotvm/sg_execution_times.html b/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
index c5c3d2dbe6..83a206e08a 100644
--- a/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
@@ -340,7 +340,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-autotvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:30.622</strong> total execution time for <strong>topic_vta_tutorials_autotvm</strong> files:</p>
+<p><strong>00:29.740</strong> total execution time for <strong>topic_vta_tutorials_autotvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 82%" />
@@ -349,7 +349,7 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_relay_vta.html#sphx-glr-topic-vta-tutorials-autotvm-tune-relay-vta-py"><span class="std std-ref">Auto-tuning a convolutional network on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_vta.py</span></code>)</p></td>
-<td><p>00:30.615</p></td>
+<td><p>00:29.734</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_alu_vta.html#sphx-glr-topic-vta-tutorials-autotvm-tune-alu-vta-py"><span class="std std-ref">Auto-tuning a ALU fused op on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_alu_vta.py</span></code>)</p></td>
diff --git a/docs/topic/vta/tutorials/frontend/deploy_classification.html b/docs/topic/vta/tutorials/frontend/deploy_classification.html
index 4cfbf2ea1f..2b26a2031f 100644
--- a/docs/topic/vta/tutorials/frontend/deploy_classification.html
+++ b/docs/topic/vta/tutorials/frontend/deploy_classification.html
@@ -583,7 +583,7 @@ and dense layer which will both be executed in fp32 on the CPU.</p></li>
   DeprecationWarning,
 /workspace/vta/tutorials/frontend/deploy_classification.py:213: DeprecationWarning: legacy graph executor behavior of producing json / lib / params will be removed in the next release. Please see documents of tvm.contrib.graph_executor.GraphModule for the  new recommended usage.
   relay_prog, target=tvm.target.Target(target, host=env.target_host), params=params
-resnet18_v1 inference graph built in 32.94s!
+resnet18_v1 inference graph built in 32.07s!
 </pre></div>
 </div>
 </div>
diff --git a/docs/topic/vta/tutorials/frontend/deploy_detection.html b/docs/topic/vta/tutorials/frontend/deploy_detection.html
index da91aaeed8..52456094d5 100644
--- a/docs/topic/vta/tutorials/frontend/deploy_detection.html
+++ b/docs/topic/vta/tutorials/frontend/deploy_detection.html
@@ -601,7 +601,7 @@ and dense layer which will both be executed in fp32 on the CPU.</p></li>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/workspace/python/tvm/relay/build_module.py:348: DeprecationWarning: Please use input parameter mod (tvm.IRModule) instead of deprecated parameter mod (tvm.relay.function.Function)
   DeprecationWarning,
-yolov3-tiny inference graph built in 22.43s!
+yolov3-tiny inference graph built in 21.69s!
 </pre></div>
 </div>
 </div>
diff --git a/docs/topic/vta/tutorials/frontend/sg_execution_times.html b/docs/topic/vta/tutorials/frontend/sg_execution_times.html
index c2f952f441..72b9a75d0f 100644
--- a/docs/topic/vta/tutorials/frontend/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/frontend/sg_execution_times.html
@@ -340,7 +340,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-frontend-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>01:39.276</strong> total execution time for <strong>topic_vta_tutorials_frontend</strong> files:</p>
+<p><strong>01:37.533</strong> total execution time for <strong>topic_vta_tutorials_frontend</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -349,11 +349,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_classification.html#sphx-glr-topic-vta-tutorials-frontend-deploy-classification-py"><span class="std std-ref">Deploy Pretrained Vision Model from MxNet on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_classification.py</span></code>)</p></td>
-<td><p>00:49.885</p></td>
+<td><p>00:48.897</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_detection.html#sphx-glr-topic-vta-tutorials-frontend-deploy-detection-py"><span class="std std-ref">Deploy Pretrained Vision Detection Model from Darknet on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_detection.py</span></code>)</p></td>
-<td><p>00:49.390</p></td>
+<td><p>00:48.636</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/topic/vta/tutorials/optimize/sg_execution_times.html b/docs/topic/vta/tutorials/optimize/sg_execution_times.html
index c610ed35f5..aa2021845e 100644
--- a/docs/topic/vta/tutorials/optimize/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/optimize/sg_execution_times.html
@@ -340,7 +340,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-optimize-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:03.159</strong> total execution time for <strong>topic_vta_tutorials_optimize</strong> files:</p>
+<p><strong>00:03.128</strong> total execution time for <strong>topic_vta_tutorials_optimize</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -349,11 +349,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="convolution_opt.html#sphx-glr-topic-vta-tutorials-optimize-convolution-opt-py"><span class="std std-ref">2D Convolution Optimization</span></a> (<code class="docutils literal notranslate"><span class="pre">convolution_opt.py</span></code>)</p></td>
-<td><p>00:02.676</p></td>
+<td><p>00:02.673</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="matrix_multiply_opt.html#sphx-glr-topic-vta-tutorials-optimize-matrix-multiply-opt-py"><span class="std std-ref">Matrix Multiply Blocking</span></a> (<code class="docutils literal notranslate"><span class="pre">matrix_multiply_opt.py</span></code>)</p></td>
-<td><p>00:00.483</p></td>
+<td><p>00:00.455</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/topic/vta/tutorials/sg_execution_times.html b/docs/topic/vta/tutorials/sg_execution_times.html
index 77de57762e..1228c913ee 100644
--- a/docs/topic/vta/tutorials/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/sg_execution_times.html
@@ -340,7 +340,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:00.868</strong> total execution time for <strong>topic_vta_tutorials</strong> files:</p>
+<p><strong>00:00.830</strong> total execution time for <strong>topic_vta_tutorials</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 81%" />
@@ -349,11 +349,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="matrix_multiply.html#sphx-glr-topic-vta-tutorials-matrix-multiply-py"><span class="std std-ref">Simple Matrix Multiply</span></a> (<code class="docutils literal notranslate"><span class="pre">matrix_multiply.py</span></code>)</p></td>
-<td><p>00:00.466</p></td>
+<td><p>00:00.447</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="vta_get_started.html#sphx-glr-topic-vta-tutorials-vta-get-started-py"><span class="std std-ref">Get Started with VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">vta_get_started.py</span></code>)</p></td>
-<td><p>00:00.402</p></td>
+<td><p>00:00.383</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/tutorial/auto_scheduler_matmul_x86.html b/docs/tutorial/auto_scheduler_matmul_x86.html
index 842f7f47d6..8b813b70d2 100644
--- a/docs/tutorial/auto_scheduler_matmul_x86.html
+++ b/docs/tutorial/auto_scheduler_matmul_x86.html
@@ -578,7 +578,7 @@ operator fusion.</p>
 <span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 94.114 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 93.924 ms
 </pre></div>
 </div>
 </div>
@@ -652,7 +652,7 @@ automatically optimize a matrix multiplication, without the need to specify a
 search template. It concludes a series of examples that starts from the Tensor
 Expression (TE) language and demonstrates how TVM can optimize computational
 operations.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  10.690 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  29.634 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorial-auto-scheduler-matmul-x86-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../_downloads/eac4389b114db015e95cb3cdf8b86b83/auto_scheduler_matmul_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">auto_scheduler_matmul_x86.py</span></code></a></p>
diff --git a/docs/tutorial/autotvm_matmul_x86.html b/docs/tutorial/autotvm_matmul_x86.html
index 7ae62e2432..222b77ee67 100644
--- a/docs/tutorial/autotvm_matmul_x86.html
+++ b/docs/tutorial/autotvm_matmul_x86.html
@@ -680,16 +680,16 @@ reduce variance, we take 5 measurements and average them.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>waiting for device...
 device available
 Get devices for measurement successfully!
-No: 1   GFLOPS: 9.47/9.47       result: MeasureResult(costs=(0.0283574308,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7383923530578613, timestamp=1673984925.7289474)       [(&#39;tile_y&#39;, [-1, 8]), (&#39;tile_x&#39;, [-1, 32])],None,53
-No: 2   GFLOPS: 8.80/9.47       result: MeasureResult(costs=(0.0305009988,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.9284043312072754, timestamp=1673984926.473074)        [(&#39;tile_y&#39;, [-1, 16]), (&#39;tile_x&#39;, [-1, 64])],None,64
-No: 3   GFLOPS: 11.57/11.57     result: MeasureResult(costs=(0.023199505199999998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6386911869049072, timestamp=1673984927.8991182)       [(&#39;tile_y&#39;, [-1, 32]), (&#39;tile_x&#39;, [-1, 32])],None,55
-No: 4   GFLOPS: 2.10/11.57      result: MeasureResult(costs=(0.1275317764,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.3018863201141357, timestamp=1673984930.9908652)       [(&#39;tile_y&#39;, [-1, 128]), (&#39;tile_x&#39;, [-1, 4])],None,27
-No: 5   GFLOPS: 10.32/11.57     result: MeasureResult(costs=(0.026002647,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7220709323883057, timestamp=1673984931.8265462)        [(&#39;tile_y&#39;, [-1, 8]), (&#39;tile_x&#39;, [-1, 64])],None,63
-No: 6   GFLOPS: 9.12/11.57      result: MeasureResult(costs=(0.0294320272,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7722988128662109, timestamp=1673984932.554135)        [(&#39;tile_y&#39;, [-1, 16]), (&#39;tile_x&#39;, [-1, 32])],None,54
-No: 7   GFLOPS: 10.18/11.57     result: MeasureResult(costs=(0.0263612358,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6615927219390869, timestamp=1673984934.0353885)       [(&#39;tile_y&#39;, [-1, 512]), (&#39;tile_x&#39;, [-1, 512])],None,99
-No: 8   GFLOPS: 9.87/11.57      result: MeasureResult(costs=(0.027185856400000003,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7084567546844482, timestamp=1673984934.724381)        [(&#39;tile_y&#39;, [-1, 4]), (&#39;tile_x&#39;, [-1, 64])],None,62
-No: 9   GFLOPS: 3.87/11.57      result: MeasureResult(costs=(0.0692972198,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.3387506008148193, timestamp=1673984936.176239)        [(&#39;tile_y&#39;, [-1, 32]), (&#39;tile_x&#39;, [-1, 16])],None,45
-No: 10  GFLOPS: 3.04/11.57      result: MeasureResult(costs=(0.088355355,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.6279051303863525, timestamp=1673984937.8458364)        [(&#39;tile_y&#39;, [-1, 256]), (&#39;tile_x&#39;, [-1, 8])],None,38
+No: 1   GFLOPS: 3.27/3.27       result: MeasureResult(costs=(0.0821269278,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.5673408508300781, timestamp=1673994419.8678765)       [(&#39;tile_y&#39;, [-1, 32]), (&#39;tile_x&#39;, [-1, 8])],None,35
+No: 2   GFLOPS: 3.86/3.86       result: MeasureResult(costs=(0.06947809320000001,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.3771319389343262, timestamp=1673994422.0002797)        [(&#39;tile_y&#39;, [-1, 32]), (&#39;tile_x&#39;, [-1, 16])],None,45
+No: 3   GFLOPS: 9.85/9.85       result: MeasureResult(costs=(0.027254431599999997,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6701366901397705, timestamp=1673994422.6914155)       [(&#39;tile_y&#39;, [-1, 1]), (&#39;tile_x&#39;, [-1, 256])],None,80
+No: 4   GFLOPS: 1.18/9.85       result: MeasureResult(costs=(0.2271749152,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.8723807334899902, timestamp=1673994427.3629255)       [(&#39;tile_y&#39;, [-1, 16]), (&#39;tile_x&#39;, [-1, 1])],None,4
+No: 5   GFLOPS: 1.77/9.85       result: MeasureResult(costs=(0.1514958012,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.6639575958251953, timestamp=1673994430.1423552)       [(&#39;tile_y&#39;, [-1, 16]), (&#39;tile_x&#39;, [-1, 2])],None,14
+No: 6   GFLOPS: 2.02/9.85       result: MeasureResult(costs=(0.13314590980000002,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.3672876358032227, timestamp=1673994432.5257761)        [(&#39;tile_y&#39;, [-1, 8]), (&#39;tile_x&#39;, [-1, 2])],None,13
+No: 7   GFLOPS: 1.25/9.85       result: MeasureResult(costs=(0.21483905599999997,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.659679412841797, timestamp=1673994436.9883332) [(&#39;tile_y&#39;, [-1, 1]), (&#39;tile_x&#39;, [-1, 2])],None,10
+No: 8   GFLOPS: 7.96/9.85       result: MeasureResult(costs=(0.033740898199999994,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7652373313903809, timestamp=1673994437.7808642)       [(&#39;tile_y&#39;, [-1, 1]), (&#39;tile_x&#39;, [-1, 32])],None,50
+No: 9   GFLOPS: 1.35/9.85       result: MeasureResult(costs=(0.1989227634,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.379838228225708, timestamp=1673994441.512279) [(&#39;tile_y&#39;, [-1, 1]), (&#39;tile_x&#39;, [-1, 1])],None,0
+No: 10  GFLOPS: 9.96/9.96       result: MeasureResult(costs=(0.026950177600000003,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.8734245300292969, timestamp=1673994442.2069173)       [(&#39;tile_y&#39;, [-1, 2]), (&#39;tile_x&#39;, [-1, 128])],None,71
 </pre></div>
 </div>
 <p>With tuning completed, we can choose the configuration from the log file that
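Each record above pairs a candidate (tile_y, tile_x) knob setting with its measured cost, and the best entry in the log can be replayed at compile time. A minimal sketch of that replay, assuming the tutorial's matmul template and a log written to matmul.log (names here are illustrative, not part of the diff above):

    import tvm
    from tvm import autotvm

    # pick the best config recorded in the log and compile with it applied
    with autotvm.apply_history_best("matmul.log"):
        with tvm.target.Target("llvm"):
            s, arg_bufs = matmul(N, L, M, "float32")  # the tuned template from earlier
            func = tvm.build(s, arg_bufs)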
diff --git a/docs/tutorial/autotvm_relay_x86.html b/docs/tutorial/autotvm_relay_x86.html
index 475071dfc6..86b6d882b8 100644
--- a/docs/tutorial/autotvm_relay_x86.html
+++ b/docs/tutorial/autotvm_relay_x86.html
@@ -558,7 +558,7 @@ standard deviation.</p>
 <span class="nb">print</span><span class="p">(</span><a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">unoptimized</span></a><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>{&#39;mean&#39;: 516.0710022699914, &#39;median&#39;: 516.3282544500362, &#39;std&#39;: 1.3528910766007418}
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>{&#39;mean&#39;: 515.8998823199988, &#39;median&#39;: 515.666321499998, &#39;std&#39;: 1.6769409329523433}
 </pre></div>
 </div>
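For context, the mean/median/std figures here summarize repeated timed runs of the compiled module. A rough sketch of how such a summary can be produced, assuming module is the GraphModule built earlier in the tutorial:

    import timeit
    import numpy as np

    timing_number = 10   # runs per measurement
    timing_repeat = 10   # number of measurements
    ms_per_run = (
        np.array(
            timeit.Timer(lambda: module.run()).repeat(repeat=timing_repeat, number=timing_number)
        )
        * 1000 / timing_number
    )
    unoptimized = {
        "mean": np.mean(ms_per_run),
        "median": np.median(ms_per_run),
        "std": np.std(ms_per_run),
    }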
 </div>
@@ -710,178 +710,178 @@ depending on the specifics of the model and the target platform.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  1/25]  Current/Best:   15.61/  15.61 GFLOPS | Progress: (4/20) | 9.47 s
-[Task  1/25]  Current/Best:   12.41/  22.05 GFLOPS | Progress: (8/20) | 12.53 s
-[Task  1/25]  Current/Best:   15.58/  22.05 GFLOPS | Progress: (12/20) | 14.93 s
-[Task  1/25]  Current/Best:   23.41/  23.41 GFLOPS | Progress: (16/20) | 17.95 s
-[Task  1/25]  Current/Best:   11.31/  23.41 GFLOPS | Progress: (20/20) | 20.61 s Done.
+[Task  1/25]  Current/Best:   18.98/  18.98 GFLOPS | Progress: (4/20) | 9.43 s
+[Task  1/25]  Current/Best:   13.57/  18.98 GFLOPS | Progress: (8/20) | 12.46 s
+[Task  1/25]  Current/Best:   23.81/  23.81 GFLOPS | Progress: (12/20) | 14.81 s
+[Task  1/25]  Current/Best:    9.50/  23.81 GFLOPS | Progress: (16/20) | 18.82 s
+[Task  1/25]  Current/Best:   12.93/  23.81 GFLOPS | Progress: (20/20) | 21.27 s Done.
 
 [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  2/25]  Current/Best:   11.38/  19.14 GFLOPS | Progress: (4/20) | 3.42 s
-[Task  2/25]  Current/Best:   13.91/  19.54 GFLOPS | Progress: (8/20) | 5.54 s
-[Task  2/25]  Current/Best:    8.50/  19.54 GFLOPS | Progress: (12/20) | 7.96 s
-[Task  2/25]  Current/Best:   12.74/  19.54 GFLOPS | Progress: (16/20) | 9.47 s
-[Task  2/25]  Current/Best:   15.68/  19.54 GFLOPS | Progress: (20/20) | 11.35 s Done.
+[Task  2/25]  Current/Best:   17.92/  17.92 GFLOPS | Progress: (4/20) | 3.63 s
+[Task  2/25]  Current/Best:   12.95/  17.99 GFLOPS | Progress: (8/20) | 5.22 s
+[Task  2/25]  Current/Best:   13.06/  17.99 GFLOPS | Progress: (12/20) | 7.39 s
+[Task  2/25]  Current/Best:    5.52/  17.99 GFLOPS | Progress: (16/20) | 8.93 s
+[Task  2/25]  Current/Best:   18.49/  18.49 GFLOPS | Progress: (20/20) | 10.39 s Done.
 
 [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  3/25]  Current/Best:   10.68/  16.15 GFLOPS | Progress: (4/20) | 4.37 s
-[Task  3/25]  Current/Best:   17.13/  22.38 GFLOPS | Progress: (8/20) | 6.48 s
-[Task  3/25]  Current/Best:   11.10/  22.38 GFLOPS | Progress: (12/20) | 9.15 s
-[Task  3/25]  Current/Best:   10.98/  22.38 GFLOPS | Progress: (16/20) | 12.32 s
-[Task  3/25]  Current/Best:    8.43/  22.38 GFLOPS | Progress: (20/20) | 14.63 s Done.
+[Task  3/25]  Current/Best:   22.72/  22.72 GFLOPS | Progress: (4/20) | 4.15 s
+[Task  3/25]  Current/Best:   12.62/  22.72 GFLOPS | Progress: (8/20) | 7.08 s
+[Task  3/25]  Current/Best:   11.64/  22.72 GFLOPS | Progress: (12/20) | 9.75 s
+[Task  3/25]  Current/Best:   21.78/  22.72 GFLOPS | Progress: (16/20) | 11.70 s
+[Task  3/25]  Current/Best:   16.39/  24.10 GFLOPS | Progress: (20/20) | 13.79 s Done.
 
 [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  4/25]  Current/Best:   18.30/  18.30 GFLOPS | Progress: (4/20) | 8.13 s
-[Task  4/25]  Current/Best:    5.05/  18.30 GFLOPS | Progress: (8/20) | 13.41 s
-[Task  4/25]  Current/Best:    7.49/  18.30 GFLOPS | Progress: (12/20) | 19.01 s
-[Task  4/25]  Current/Best:    5.99/  18.30 GFLOPS | Progress: (16/20) | 22.02 s
-[Task  4/25]  Current/Best:   14.47/  18.30 GFLOPS | Progress: (20/20) | 23.88 s Done.
+[Task  4/25]  Current/Best:    7.13/  15.81 GFLOPS | Progress: (4/20) | 4.33 s
+[Task  4/25]  Current/Best:    8.65/  15.81 GFLOPS | Progress: (8/20) | 6.30 s
+[Task  4/25]  Current/Best:   20.81/  20.81 GFLOPS | Progress: (12/20) | 8.88 s
+[Task  4/25]  Current/Best:   21.42/  21.42 GFLOPS | Progress: (16/20) | 11.42 s
+[Task  4/25]  Current/Best:   10.38/  21.42 GFLOPS | Progress: (20/20) | 13.90 s Done.
 
 [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  5/25]  Current/Best:    8.56/  15.51 GFLOPS | Progress: (4/20) | 3.86 s
-[Task  5/25]  Current/Best:    1.69/  15.72 GFLOPS | Progress: (8/20) | 6.37 s
-[Task  5/25]  Current/Best:    5.33/  20.66 GFLOPS | Progress: (12/20) | 8.94 s
-[Task  5/25]  Current/Best:    6.64/  20.66 GFLOPS | Progress: (16/20) | 11.40 s
-[Task  5/25]  Current/Best:   12.47/  20.66 GFLOPS | Progress: (20/20) | 13.43 s Done.
+[Task  5/25]  Current/Best:   13.84/  13.88 GFLOPS | Progress: (4/20) | 3.97 s
+[Task  5/25]  Current/Best:   13.14/  22.62 GFLOPS | Progress: (8/20) | 6.45 s
+[Task  5/25]  Current/Best:   15.79/  22.62 GFLOPS | Progress: (12/20) | 8.54 s
+[Task  5/25]  Current/Best:   13.15/  22.62 GFLOPS | Progress: (16/20) | 11.05 s
+[Task  5/25]  Current/Best:   12.38/  22.62 GFLOPS | Progress: (20/20) | 13.45 s Done.
 
 [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  6/25]  Current/Best:   14.42/  15.00 GFLOPS | Progress: (4/20) | 4.47 s
-[Task  6/25]  Current/Best:   10.92/  15.00 GFLOPS | Progress: (8/20) | 9.27 s
-[Task  6/25]  Current/Best:   14.10/  23.15 GFLOPS | Progress: (12/20) | 12.87 s
-[Task  6/25]  Current/Best:    6.08/  23.15 GFLOPS | Progress: (16/20) | 15.35 s
-[Task  6/25]  Current/Best:    6.48/  23.15 GFLOPS | Progress: (20/20) | 18.44 s Done.
+[Task  6/25]  Current/Best:   13.63/  16.94 GFLOPS | Progress: (4/20) | 4.74 s
+[Task  6/25]  Current/Best:   21.18/  21.18 GFLOPS | Progress: (8/20) | 6.69 s
+[Task  6/25]  Current/Best:    2.96/  21.18 GFLOPS | Progress: (12/20) | 10.61 s
+[Task  6/25]  Current/Best:   20.55/  21.18 GFLOPS | Progress: (16/20) | 13.46 s
+[Task  6/25]  Current/Best:   10.76/  22.40 GFLOPS | Progress: (20/20) | 17.15 s Done.
 
 [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  7/25]  Current/Best:   13.04/  13.15 GFLOPS | Progress: (4/20) | 4.95 s
-[Task  7/25]  Current/Best:    3.04/  15.55 GFLOPS | Progress: (8/20) | 8.05 s
-[Task  7/25]  Current/Best:    6.11/  15.55 GFLOPS | Progress: (12/20) | 11.00 s
-[Task  7/25]  Current/Best:   16.78/  16.78 GFLOPS | Progress: (16/20) | 15.06 s
-[Task  7/25]  Current/Best:    6.91/  16.78 GFLOPS | Progress: (20/20) | 18.03 s Done.
+[Task  7/25]  Current/Best:   16.89/  16.89 GFLOPS | Progress: (4/20) | 4.25 s
+[Task  7/25]  Current/Best:    5.55/  16.89 GFLOPS | Progress: (8/20) | 6.87 s
+[Task  7/25]  Current/Best:   18.85/  18.85 GFLOPS | Progress: (12/20) | 9.68 s
+[Task  7/25]  Current/Best:   19.15/  19.15 GFLOPS | Progress: (16/20) | 12.53 s
+[Task  7/25]  Current/Best:    6.05/  19.15 GFLOPS | Progress: (20/20) | 14.96 s Done.
 
 [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  8/25]  Current/Best:    8.14/  21.95 GFLOPS | Progress: (4/20) | 8.68 s
-[Task  8/25]  Current/Best:    5.71/  21.95 GFLOPS | Progress: (8/20) | 20.40 s
-[Task  8/25]  Current/Best:   13.99/  21.95 GFLOPS | Progress: (12/20) | 28.60 s
-[Task  8/25]  Current/Best:   16.14/  21.95 GFLOPS | Progress: (16/20) | 34.51 s
-[Task  8/25]  Current/Best:    6.28/  21.95 GFLOPS | Progress: (20/20) | 37.01 s
+[Task  8/25]  Current/Best:   19.96/  19.96 GFLOPS | Progress: (4/20) | 13.49 s
+[Task  8/25]  Current/Best:   13.79/  19.96 GFLOPS | Progress: (8/20) | 16.61 s
+[Task  8/25]  Current/Best:   12.87/  19.96 GFLOPS | Progress: (12/20) | 19.45 s
+[Task  8/25]  Current/Best:   12.15/  19.96 GFLOPS | Progress: (16/20) | 22.70 s
+[Task  8/25]  Current/Best:    8.28/  19.96 GFLOPS | Progress: (20/20) | 34.20 s
 [Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  9/25]  Current/Best:   16.33/  17.59 GFLOPS | Progress: (4/20) | 4.63 s
-[Task  9/25]  Current/Best:   21.95/  21.95 GFLOPS | Progress: (8/20) | 12.22 s
-[Task  9/25]  Current/Best:   10.06/  21.95 GFLOPS | Progress: (12/20) | 17.61 s
-[Task  9/25]  Current/Best:    6.61/  21.95 GFLOPS | Progress: (16/20) | 19.92 s
-[Task  9/25]  Current/Best:   11.78/  21.95 GFLOPS | Progress: (20/20) | 28.84 s Done.
+[Task  9/25]  Current/Best:   15.51/  15.51 GFLOPS | Progress: (4/20) | 4.41 s
+[Task  9/25]  Current/Best:   16.76/  16.76 GFLOPS | Progress: (8/20) | 11.63 s
+[Task  9/25]  Current/Best:   16.46/  16.76 GFLOPS | Progress: (12/20) | 14.19 s
+[Task  9/25]  Current/Best:   17.41/  17.97 GFLOPS | Progress: (16/20) | 18.26 s
+[Task  9/25]  Current/Best:   16.64/  17.97 GFLOPS | Progress: (20/20) | 21.38 s Done.
 
 [Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 10/25]  Current/Best:   14.42/  17.81 GFLOPS | Progress: (4/20) | 3.64 s
-[Task 10/25]  Current/Best:    5.61/  17.81 GFLOPS | Progress: (8/20) | 5.76 s
-[Task 10/25]  Current/Best:   12.81/  17.81 GFLOPS | Progress: (12/20) | 7.43 s
-[Task 10/25]  Current/Best:   16.50/  17.81 GFLOPS | Progress: (16/20) | 9.69 s
-[Task 10/25]  Current/Best:   13.01/  18.30 GFLOPS | Progress: (20/20) | 11.86 s Done.
+[Task 10/25]  Current/Best:   10.90/  18.90 GFLOPS | Progress: (4/20) | 3.98 s
+[Task 10/25]  Current/Best:   14.85/  18.90 GFLOPS | Progress: (8/20) | 6.22 s
+[Task 10/25]  Current/Best:   20.42/  21.15 GFLOPS | Progress: (12/20) | 8.41 s
+[Task 10/25]  Current/Best:   10.99/  21.15 GFLOPS | Progress: (16/20) | 10.17 s
+[Task 10/25]  Current/Best:   14.07/  21.15 GFLOPS | Progress: (20/20) | 13.46 s Done.
 
 [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 11/25]  Current/Best:   18.52/  18.52 GFLOPS | Progress: (4/20) | 4.22 s
-[Task 11/25]  Current/Best:   21.47/  23.97 GFLOPS | Progress: (8/20) | 6.59 s
-[Task 11/25]  Current/Best:   10.97/  23.97 GFLOPS | Progress: (12/20) | 9.39 s
-[Task 11/25]  Current/Best:   11.97/  23.97 GFLOPS | Progress: (16/20) | 11.89 s
-[Task 11/25]  Current/Best:    6.93/  23.97 GFLOPS | Progress: (20/20) | 14.60 s Done.
+[Task 11/25]  Current/Best:   11.66/  17.08 GFLOPS | Progress: (4/20) | 4.50 s
+[Task 11/25]  Current/Best:   11.18/  18.89 GFLOPS | Progress: (8/20) | 7.72 s
+[Task 11/25]  Current/Best:    8.76/  18.89 GFLOPS | Progress: (12/20) | 10.69 s
+[Task 11/25]  Current/Best:   21.83/  21.83 GFLOPS | Progress: (16/20) | 13.34 s
+[Task 11/25]  Current/Best:   19.55/  21.83 GFLOPS | Progress: (20/20) | 15.99 s Done.
 
 [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 12/25]  Current/Best:   10.70/  17.06 GFLOPS | Progress: (4/20) | 6.08 s
-[Task 12/25]  Current/Best:    9.28/  17.06 GFLOPS | Progress: (8/20) | 10.34 s
-[Task 12/25]  Current/Best:    6.45/  17.06 GFLOPS | Progress: (12/20) | 14.73 s
-[Task 12/25]  Current/Best:   12.89/  21.49 GFLOPS | Progress: (16/20) | 17.00 s
-[Task 12/25]  Current/Best:   13.86/  21.49 GFLOPS | Progress: (20/20) | 21.05 s Done.
+[Task 12/25]  Current/Best:    5.23/  14.62 GFLOPS | Progress: (4/20) | 4.56 s
+[Task 12/25]  Current/Best:   12.42/  21.87 GFLOPS | Progress: (8/20) | 9.28 s
+[Task 12/25]  Current/Best:   15.91/  21.87 GFLOPS | Progress: (12/20) | 12.80 s
+[Task 12/25]  Current/Best:   10.48/  21.87 GFLOPS | Progress: (16/20) | 15.61 s
+[Task 12/25]  Current/Best:   21.38/  21.87 GFLOPS | Progress: (20/20) | 17.81 s Done.
 
 [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 13/25]  Current/Best:    9.81/  12.87 GFLOPS | Progress: (4/20) | 5.74 s
-[Task 13/25]  Current/Best:   16.19/  19.41 GFLOPS | Progress: (8/20) | 8.11 s
-[Task 13/25]  Current/Best:   21.76/  21.76 GFLOPS | Progress: (12/20) | 11.19 s
-[Task 13/25]  Current/Best:   17.65/  21.76 GFLOPS | Progress: (16/20) | 14.83 s
-[Task 13/25]  Current/Best:   12.23/  21.76 GFLOPS | Progress: (20/20) | 18.40 s Done.
+[Task 13/25]  Current/Best:    5.98/  15.95 GFLOPS | Progress: (4/20) | 5.58 s
+[Task 13/25]  Current/Best:   18.73/  18.73 GFLOPS | Progress: (8/20) | 8.91 s
+[Task 13/25]  Current/Best:    8.42/  18.73 GFLOPS | Progress: (12/20) | 11.49 s
+[Task 13/25]  Current/Best:   17.28/  18.73 GFLOPS | Progress: (16/20) | 14.86 s
+[Task 13/25]  Current/Best:   18.48/  22.00 GFLOPS | Progress: (20/20) | 17.66 s Done.
 
 [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 14/25]  Current/Best:   13.73/  14.59 GFLOPS | Progress: (4/20) | 7.24 s
-[Task 14/25]  Current/Best:   10.69/  14.59 GFLOPS | Progress: (8/20) | 13.97 s
-[Task 14/25]  Current/Best:   14.83/  14.83 GFLOPS | Progress: (12/20) | 16.28 s
-[Task 14/25]  Current/Best:    8.22/  20.41 GFLOPS | Progress: (16/20) | 23.51 s
-[Task 14/25]  Current/Best:    8.00/  20.41 GFLOPS | Progress: (20/20) | 30.51 s
-[Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 15/25]  Current/Best:   18.43/  18.43 GFLOPS | Progress: (4/20) | 7.21 s
-[Task 15/25]  Current/Best:   22.08/  22.08 GFLOPS | Progress: (8/20) | 8.92 s Done.
+[Task 14/25]  Current/Best:   10.42/  13.56 GFLOPS | Progress: (4/20) | 4.17 s
+[Task 14/25]  Current/Best:    6.07/  16.88 GFLOPS | Progress: (8/20) | 7.04 s
+[Task 14/25]  Current/Best:    5.31/  16.88 GFLOPS | Progress: (12/20) | 10.11 s
+[Task 14/25]  Current/Best:   15.66/  16.88 GFLOPS | Progress: (16/20) | 15.53 s
+[Task 14/25]  Current/Best:    9.13/  16.88 GFLOPS | Progress: (20/20) | 17.93 s
+[Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
  Done.
 
-[Task 15/25]  Current/Best:   12.41/  22.08 GFLOPS | Progress: (12/20) | 12.20 s
-[Task 15/25]  Current/Best:   13.56/  22.08 GFLOPS | Progress: (16/20) | 16.07 s
-[Task 15/25]  Current/Best:   20.89/  22.08 GFLOPS | Progress: (20/20) | 24.55 s Done.
-
+[Task 15/25]  Current/Best:   13.48/  14.40 GFLOPS | Progress: (4/20) | 3.96 s
+[Task 15/25]  Current/Best:   10.87/  22.01 GFLOPS | Progress: (8/20) | 8.74 s
+[Task 15/25]  Current/Best:   10.84/  22.01 GFLOPS | Progress: (12/20) | 11.03 s
+[Task 15/25]  Current/Best:   19.88/  22.01 GFLOPS | Progress: (16/20) | 12.90 s
+[Task 15/25]  Current/Best:    6.33/  22.01 GFLOPS | Progress: (20/20) | 15.25 s
 [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 16/25]  Current/Best:    5.11/  14.77 GFLOPS | Progress: (4/20) | 4.19 s
-[Task 16/25]  Current/Best:   17.68/  22.02 GFLOPS | Progress: (8/20) | 5.80 s
-[Task 16/25]  Current/Best:   20.44/  22.02 GFLOPS | Progress: (12/20) | 7.26 s
-[Task 16/25]  Current/Best:    9.79/  22.02 GFLOPS | Progress: (16/20) | 9.47 s
-[Task 16/25]  Current/Best:   16.68/  22.02 GFLOPS | Progress: (20/20) | 11.29 s Done.
+[Task 16/25]  Current/Best:   13.76/  18.07 GFLOPS | Progress: (4/20) | 4.30 s
+[Task 16/25]  Current/Best:    6.58/  18.07 GFLOPS | Progress: (8/20) | 6.02 s
+[Task 16/25]  Current/Best:    6.18/  18.07 GFLOPS | Progress: (12/20) | 9.00 s
+[Task 16/25]  Current/Best:    5.77/  18.07 GFLOPS | Progress: (16/20) | 10.75 s
+[Task 16/25]  Current/Best:   11.01/  18.97 GFLOPS | Progress: (20/20) | 13.60 s Done.
 
 [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 17/25]  Current/Best:    6.13/  19.59 GFLOPS | Progress: (4/20) | 5.64 s
-[Task 17/25]  Current/Best:   22.02/  22.02 GFLOPS | Progress: (8/20) | 7.94 s
-[Task 17/25]  Current/Best:   16.57/  22.02 GFLOPS | Progress: (12/20) | 10.70 s
-[Task 17/25]  Current/Best:   15.68/  22.02 GFLOPS | Progress: (16/20) | 13.45 s
-[Task 17/25]  Current/Best:   17.95/  22.02 GFLOPS | Progress: (20/20) | 15.69 s Done.
+[Task 17/25]  Current/Best:   11.64/  21.25 GFLOPS | Progress: (4/20) | 4.98 s
+[Task 17/25]  Current/Best:   11.12/  21.25 GFLOPS | Progress: (8/20) | 7.98 s
+[Task 17/25]  Current/Best:   15.83/  21.25 GFLOPS | Progress: (12/20) | 11.60 s
+[Task 17/25]  Current/Best:   12.20/  21.25 GFLOPS | Progress: (16/20) | 14.25 s
+[Task 17/25]  Current/Best:   22.01/  22.01 GFLOPS | Progress: (20/20) | 16.26 s Done.
 
 [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 18/25]  Current/Best:   11.68/  19.25 GFLOPS | Progress: (4/20) | 5.47 s
-[Task 18/25]  Current/Best:   12.82/  19.25 GFLOPS | Progress: (8/20) | 9.31 s
-[Task 18/25]  Current/Best:   11.73/  21.99 GFLOPS | Progress: (12/20) | 11.97 s
-[Task 18/25]  Current/Best:   12.20/  21.99 GFLOPS | Progress: (16/20) | 20.30 s
-[Task 18/25]  Current/Best:   14.97/  22.17 GFLOPS | Progress: (20/20) | 22.66 s Done.
+[Task 18/25]  Current/Best:   16.77/  18.95 GFLOPS | Progress: (4/20) | 3.86 s
+[Task 18/25]  Current/Best:   16.07/  18.95 GFLOPS | Progress: (8/20) | 6.15 s
+[Task 18/25]  Current/Best:   11.02/  18.95 GFLOPS | Progress: (12/20) | 10.45 s
+[Task 18/25]  Current/Best:   11.61/  18.95 GFLOPS | Progress: (16/20) | 13.11 s
+[Task 18/25]  Current/Best:   13.83/  18.95 GFLOPS | Progress: (20/20) | 16.86 s Done.
 
 [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 19/25]  Current/Best:   11.40/  19.30 GFLOPS | Progress: (4/20) | 4.77 s
-[Task 19/25]  Current/Best:    7.32/  19.30 GFLOPS | Progress: (8/20) | 10.23 s
-[Task 19/25]  Current/Best:   17.22/  20.42 GFLOPS | Progress: (12/20) | 13.39 s
-[Task 19/25]  Current/Best:   19.93/  20.42 GFLOPS | Progress: (16/20) | 20.59 s
-[Task 19/25]  Current/Best:    1.55/  20.42 GFLOPS | Progress: (20/20) | 26.65 s Done.
+[Task 19/25]  Current/Best:    2.65/   9.71 GFLOPS | Progress: (4/20) | 6.64 s
+[Task 19/25]  Current/Best:   11.82/  18.09 GFLOPS | Progress: (8/20) | 10.61 s
+[Task 19/25]  Current/Best:   12.92/  18.09 GFLOPS | Progress: (12/20) | 14.17 s
+[Task 19/25]  Current/Best:   13.61/  18.09 GFLOPS | Progress: (16/20) | 17.26 s
+[Task 19/25]  Current/Best:   11.82/  18.76 GFLOPS | Progress: (20/20) | 20.35 s Done.
 
 [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 20/25]  Current/Best:    8.92/  16.07 GFLOPS | Progress: (4/20) | 4.30 s
-[Task 20/25]  Current/Best:   10.41/  16.07 GFLOPS | Progress: (8/20) | 6.30 s
-[Task 20/25]  Current/Best:   10.13/  16.07 GFLOPS | Progress: (12/20) | 9.48 s
-[Task 20/25]  Current/Best:   14.34/  16.07 GFLOPS | Progress: (16/20) | 11.68 s
-[Task 20/25]  Current/Best:   14.39/  16.07 GFLOPS | Progress: (20/20) | 14.57 s
-[Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 21/25]  Current/Best:   14.29/  19.56 GFLOPS | Progress: (4/20) | 4.16 s
-[Task 21/25]  Current/Best:    7.24/  19.56 GFLOPS | Progress: (8/20) | 6.07 s
-[Task 21/25]  Current/Best:   12.13/  19.56 GFLOPS | Progress: (12/20) | 8.41 s Done.
-
-[Task 21/25]  Current/Best:   12.88/  19.56 GFLOPS | Progress: (16/20) | 11.09 s
-[Task 21/25]  Current/Best:   11.76/  19.56 GFLOPS | Progress: (20/20) | 14.00 s
+[Task 20/25]  Current/Best:   16.68/  16.68 GFLOPS | Progress: (4/20) | 3.88 s
+[Task 20/25]  Current/Best:   17.80/  17.80 GFLOPS | Progress: (8/20) | 6.96 s
+[Task 20/25]  Current/Best:   18.13/  19.21 GFLOPS | Progress: (12/20) | 8.78 s
+[Task 20/25]  Current/Best:   18.93/  19.21 GFLOPS | Progress: (16/20) | 12.06 s
+[Task 20/25]  Current/Best:    9.81/  19.21 GFLOPS | Progress: (20/20) | 14.32 s
+[Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
+ Done.
+
+[Task 21/25]  Current/Best:    5.25/  10.03 GFLOPS | Progress: (4/20) | 4.30 s
+[Task 21/25]  Current/Best:    8.15/  10.03 GFLOPS | Progress: (8/20) | 6.22 s
+[Task 21/25]  Current/Best:   22.52/  22.52 GFLOPS | Progress: (12/20) | 8.51 s
+[Task 21/25]  Current/Best:   12.28/  22.52 GFLOPS | Progress: (16/20) | 12.05 s
+[Task 21/25]  Current/Best:    9.83/  22.52 GFLOPS | Progress: (20/20) | 13.97 s
 [Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 22/25]  Current/Best:    2.68/  10.80 GFLOPS | Progress: (4/20) | 4.76 s
-[Task 22/25]  Current/Best:    2.69/  21.23 GFLOPS | Progress: (8/20) | 6.87 s
-[Task 22/25]  Current/Best:   14.46/  21.23 GFLOPS | Progress: (12/20) | 9.05 s
-[Task 22/25]  Current/Best:   10.20/  21.23 GFLOPS | Progress: (16/20) | 11.21 s
-[Task 22/25]  Current/Best:    4.44/  21.23 GFLOPS | Progress: (20/20) | 13.71 s Done.
+[Task 22/25]  Current/Best:    9.29/  19.91 GFLOPS | Progress: (4/20) | 5.40 s
+[Task 22/25]  Current/Best:   15.70/  19.91 GFLOPS | Progress: (8/20) | 7.18 s
+[Task 22/25]  Current/Best:    6.89/  19.91 GFLOPS | Progress: (12/20) | 9.94 s
+[Task 22/25]  Current/Best:   16.01/  19.91 GFLOPS | Progress: (16/20) | 12.09 s
+[Task 22/25]  Current/Best:   10.84/  19.91 GFLOPS | Progress: (20/20) | 13.91 s Done.
 
 [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 23/25]  Current/Best:    9.55/  18.29 GFLOPS | Progress: (4/20) | 4.54 s
-[Task 23/25]  Current/Best:   12.29/  18.29 GFLOPS | Progress: (8/20) | 7.62 s
-[Task 23/25]  Current/Best:   21.95/  21.95 GFLOPS | Progress: (12/20) | 10.14 s
-[Task 23/25]  Current/Best:   12.91/  22.63 GFLOPS | Progress: (16/20) | 13.19 s
-[Task 23/25]  Current/Best:   16.29/  22.63 GFLOPS | Progress: (20/20) | 16.32 s Done.
+[Task 23/25]  Current/Best:   18.37/  18.37 GFLOPS | Progress: (4/20) | 5.12 s
+[Task 23/25]  Current/Best:   11.62/  18.37 GFLOPS | Progress: (8/20) | 12.77 s
+[Task 23/25]  Current/Best:   21.25/  21.25 GFLOPS | Progress: (12/20) | 15.28 s
+[Task 23/25]  Current/Best:   23.43/  23.43 GFLOPS | Progress: (16/20) | 17.50 s
+[Task 23/25]  Current/Best:   20.16/  23.43 GFLOPS | Progress: (20/20) | 19.88 s Done.
 
 [Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 24/25]  Current/Best:    3.14/  10.03 GFLOPS | Progress: (4/20) | 7.79 s
-[Task 24/25]  Current/Best:    7.49/  10.03 GFLOPS | Progress: (8/20) | 18.51 s
-[Task 24/25]  Current/Best:    6.92/  10.07 GFLOPS | Progress: (12/20) | 20.96 s
-[Task 24/25]  Current/Best:    8.26/  10.07 GFLOPS | Progress: (16/20) | 29.10 s
-[Task 24/25]  Current/Best:    3.30/  10.07 GFLOPS | Progress: (20/20) | 31.44 s
-[Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 25/25]  Current/Best:    3.48/   4.18 GFLOPS | Progress: (4/20) | 12.86 s
-[Task 25/25]  Current/Best:    3.58/   8.27 GFLOPS | Progress: (8/20) | 23.80 s Done.
+[Task 24/25]  Current/Best:    3.05/   4.82 GFLOPS | Progress: (4/20) | 12.78 s
+[Task 24/25]  Current/Best:    2.17/   4.82 GFLOPS | Progress: (8/20) | 23.74 s
+[Task 24/25]  Current/Best:    3.70/   4.82 GFLOPS | Progress: (12/20) | 34.41 s
+[Task 24/25]  Current/Best:    6.80/  10.35 GFLOPS | Progress: (16/20) | 46.36 s Done.
 
-[Task 25/25]  Current/Best:    8.43/   8.73 GFLOPS | Progress: (12/20) | 34.74 s
-[Task 25/25]  Current/Best:    4.42/   9.20 GFLOPS | Progress: (16/20) | 37.62 s
-[Task 25/25]  Current/Best:    6.81/   9.20 GFLOPS | Progress: (20/20) | 48.55 s
+[Task 24/25]  Current/Best:    2.87/  10.35 GFLOPS | Progress: (20/20) | 58.02 s
+[Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
+[Task 25/25]  Current/Best:    5.64/   8.08 GFLOPS | Progress: (4/20) | 4.16 s
+[Task 25/25]  Current/Best:    4.53/   8.08 GFLOPS | Progress: (8/20) | 5.86 s
+[Task 25/25]  Current/Best:    5.90/   8.08 GFLOPS | Progress: (12/20) | 16.82 s
+[Task 25/25]  Current/Best:    3.02/   8.08 GFLOPS | Progress: (16/20) | 18.94 s
+[Task 25/25]  Current/Best:    9.24/   9.24 GFLOPS | Progress: (20/20) | 28.99 s
 </pre></div>
 </div>
 <p>The output from this tuning process will look something like this:</p>
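For reference, the per-task progress lines above are emitted by autotvm's progress-bar callback while each extracted task is tuned in turn. A condensed sketch of that loop, assuming mod, params, target, and a measure_option were set up earlier in the tutorial (trial count and log name are illustrative):

    from tvm import autotvm
    from tvm.autotvm.tuner import XGBTuner

    tasks = autotvm.task.extract_from_program(mod["main"], target=target, params=params)
    for i, task in enumerate(tasks):
        prefix = "[Task %2d/%2d] " % (i + 1, len(tasks))
        tuner = XGBTuner(task, loss_type="rank")
        tuner.tune(
            n_trial=min(20, len(task.config_space)),
            measure_option=measure_option,
            callbacks=[
                autotvm.callback.progress_bar(20, prefix=prefix),
                autotvm.callback.log_to_file("resnet-50-v2-autotuning.json"),
            ],
        )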
@@ -942,7 +942,7 @@ model using optimized operators to speed up our computations.</p>
     <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;class=&#39;</span><span class="si">%s</span><span class="s2">&#39; with probability=</span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="p">(</span><a href="https://docs.python.org/3/library/stdtypes.html#list" title="builtins.list" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">labels</span></a [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>class=&#39;n02123045 tabby, tabby cat&#39; with probability=0.621102
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>class=&#39;n02123045 tabby, tabby cat&#39; with probability=0.621103
 class=&#39;n02123159 tiger cat&#39; with probability=0.356379
 class=&#39;n02124075 Egyptian cat&#39; with probability=0.019712
 class=&#39;n02129604 tiger, Panthera tigris&#39; with probability=0.001215
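These lines come from softmax-normalizing the raw model output and printing the top five classes. A sketch of that post-processing, assuming tvm_output holds the logits and labels the synset names loaded earlier:

    import numpy as np
    from scipy.special import softmax

    scores = np.squeeze(softmax(tvm_output))
    ranks = np.argsort(scores)[::-1]
    for rank in ranks[0:5]:
        print("class='%s' with probability=%f" % (labels[rank], scores[rank]))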
@@ -980,8 +980,8 @@ improvement in comparing the optimized model to the unoptimized model.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;unoptimized: </span><span class="si">%s</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="p">(</span><a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">unoptimized</span></a><span class="p">))</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>optimized: {&#39;mean&#39;: 409.3763455399767, &#39;median&#39;: 409.3127982999249, &#39;std&#39;: 1.043979226518125}
-unoptimized: {&#39;mean&#39;: 516.0710022699914, &#39;median&#39;: 516.3282544500362, &#39;std&#39;: 1.3528910766007418}
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>optimized: {&#39;mean&#39;: 409.63730430999703, &#39;median&#39;: 408.2600837999962, &#39;std&#39;: 2.9548387924640633}
+unoptimized: {&#39;mean&#39;: 515.8998823199988, &#39;median&#39;: 515.666321499998, &#39;std&#39;: 1.6769409329523433}
 </pre></div>
 </div>
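Taken at face value, these runs put the tuned model at roughly a 1.26x speedup over the unoptimized baseline (515.90 ms down to 409.64 ms mean, about a 21% reduction in latency).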
 </div>
@@ -995,7 +995,7 @@ models.</p>
 <p>Here we presented a simple example using ResNet-50 v2 locally. However, TVM
 supports many more features including cross-compilation, remote execution and
 profiling/benchmarking.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 12 minutes  26.333 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 11 minutes  36.214 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorial-autotvm-relay-x86-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../_downloads/57a45d9bef1af358191e7d50043e652c/autotvm_relay_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">autotvm_relay_x86.py</span></code></a></p>
diff --git a/docs/tutorial/cross_compilation_and_rpc.html b/docs/tutorial/cross_compilation_and_rpc.html
index 5d1783f935..d3beb4f955 100644
--- a/docs/tutorial/cross_compilation_and_rpc.html
+++ b/docs/tutorial/cross_compilation_and_rpc.html
@@ -538,7 +538,7 @@ device and returns the measured cost. Network overhead is excluded.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;</span><span class="si">%g</span><span class="s2"> secs/op&quot;</span> <span class="o">%</span> <span class="n">cost</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>1.252e-07 secs/op
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>1.248e-07 secs/op
 </pre></div>
 </div>
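The secs/op figure is the mean of several remote invocations measured through a time evaluator. A minimal sketch, assuming func is the compiled module loaded on the remote device and dev its device handle:

    # average the cost of 10 runs on the remote device; network overhead is excluded
    time_f = func.time_evaluator(func.entry_name, dev, number=10)
    cost = time_f(a, b).mean
    print("%g secs/op" % cost)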
 </div>
diff --git a/docs/tutorial/intro_topi.html b/docs/tutorial/intro_topi.html
index 6000423f0f..08c42615b9 100644
--- a/docs/tutorial/intro_topi.html
+++ b/docs/tutorial/intro_topi.html
@@ -495,7 +495,7 @@ we can schedule the following series of operations ending with <code class="code
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/ir.html#tvm.ir.Array" title="tvm.ir.Array" class="sphx-glr-backref-module-tvm-ir sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">sg</span><span class="o">.</span><span class="n">stages</span></a><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[stage(a, placeholder(a, 0xe56b460)), stage(b, placeholder(b, 0x223a2f30)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax1, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax2, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;)], reduce_axis=[], tag=broadcast, attrs [...]
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[stage(a, placeholder(a, 0x20168730)), stage(b, placeholder(b, 0x216a1830)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax1, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax2, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;)], reduce_axis=[], tag=broadcast, attr [...]
 </pre></div>
 </div>
 <p>We can test the correctness by comparing with <code class="code docutils literal notranslate"><span class="pre">numpy</span></code> result as follows</p>
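A sketch of such a check for the broadcast-add stage shown above, assuming a is a (100, 10, 10) placeholder, b is (10, 10), and c = a + b under the schedule sg (shapes and names are assumptions here):

    import numpy as np
    import tvm

    func = tvm.build(sg, [a, b, c], "llvm")
    dev = tvm.cpu(0)
    a_np = np.random.uniform(size=(100, 10, 10)).astype(np.float32)
    b_np = np.random.uniform(size=(10, 10)).astype(np.float32)
    a_nd, b_nd = tvm.nd.array(a_np, dev), tvm.nd.array(b_np, dev)
    c_nd = tvm.nd.array(np.zeros((100, 10, 10), dtype=np.float32), dev)
    func(a_nd, b_nd, c_nd)
    # broadcast add: b is added to every slice of a
    np.testing.assert_allclose(c_nd.numpy(), a_np + b_np, rtol=1e-5)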
diff --git a/docs/tutorial/sg_execution_times.html b/docs/tutorial/sg_execution_times.html
index 8b2b670ab3..30bda59838 100644
--- a/docs/tutorial/sg_execution_times.html
+++ b/docs/tutorial/sg_execution_times.html
@@ -340,7 +340,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-tutorial-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>15:33.061</strong> total execution time for <strong>tutorial</strong> files:</p>
+<p><strong>15:12.816</strong> total execution time for <strong>tutorial</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -349,35 +349,35 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="autotvm_relay_x86.html#sphx-glr-tutorial-autotvm-relay-x86-py"><span class="std std-ref">Compiling and Optimizing a Model with the Python Interface (AutoTVM)</span></a> (<code class="docutils literal notranslate"><span class="pre">autotvm_relay_x86.py</span></code>)</p></td>
-<td><p>12:26.333</p></td>
+<td><p>11:36.214</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="auto_scheduler_matmul_x86.html#sphx-glr-tutorial-auto-scheduler-matmul-x86-py"><span class="std std-ref">Optimizing Operators with Auto-scheduling</span></a> (<code class="docutils literal notranslate"><span class="pre">auto_scheduler_matmul_x86.py</span></code>)</p></td>
-<td><p>01:10.690</p></td>
+<td><p>01:29.634</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tensor_expr_get_started.html#sphx-glr-tutorial-tensor-expr-get-started-py"><span class="std std-ref">Working with Operators Using Tensor Expression</span></a> (<code class="docutils literal notranslate"><span class="pre">tensor_expr_get_started.py</span></code>)</p></td>
-<td><p>01:01.321</p></td>
+<td><p>01:00.638</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="relay_quick_start.html#sphx-glr-tutorial-relay-quick-start-py"><span class="std std-ref">Quick Start Tutorial for Compiling Deep Learning Models</span></a> (<code class="docutils literal notranslate"><span class="pre">relay_quick_start.py</span></code>)</p></td>
-<td><p>00:35.710</p></td>
+<td><p>00:35.638</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="autotvm_matmul_x86.html#sphx-glr-tutorial-autotvm-matmul-x86-py"><span class="std std-ref">Optimizing Operators with Schedule Templates and AutoTVM</span></a> (<code class="docutils literal notranslate"><span class="pre">autotvm_matmul_x86.py</span></code>)</p></td>
-<td><p>00:17.376</p></td>
+<td><p>00:28.372</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="intro_topi.html#sphx-glr-tutorial-intro-topi-py"><span class="std std-ref">Introduction to TOPI</span></a> (<code class="docutils literal notranslate"><span class="pre">intro_topi.py</span></code>)</p></td>
-<td><p>00:00.828</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="tensor_ir_blitz_course.html#sphx-glr-tutorial-tensor-ir-blitz-course-py"><span class="std std-ref">Blitz Course to TensorIR</span></a> (<code class="docutils literal notranslate"><span class="pre">tensor_ir_blitz_course.py</span></code>)</p></td>
+<td><p>00:01.324</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="tensor_ir_blitz_course.html#sphx-glr-tutorial-tensor-ir-blitz-course-py"><span class="std std-ref">Blitz Course to TensorIR</span></a> (<code class="docutils literal notranslate"><span class="pre">tensor_ir_blitz_course.py</span></code>)</p></td>
-<td><p>00:00.620</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="intro_topi.html#sphx-glr-tutorial-intro-topi-py"><span class="std std-ref">Introduction to TOPI</span></a> (<code class="docutils literal notranslate"><span class="pre">intro_topi.py</span></code>)</p></td>
+<td><p>00:00.827</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="cross_compilation_and_rpc.html#sphx-glr-tutorial-cross-compilation-and-rpc-py"><span class="std std-ref">Cross Compilation and RPC</span></a> (<code class="docutils literal notranslate"><span class="pre">cross_compilation_and_rpc.py</span></code>)</p></td>
-<td><p>00:00.183</p></td>
+<td><p>00:00.169</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="uma.html#sphx-glr-tutorial-uma-py"><span class="std std-ref">Making your Hardware Accelerator TVM-ready with UMA</span></a> (<code class="docutils literal notranslate"><span class="pre">uma.py</span></code>)</p></td>
diff --git a/docs/tutorial/tensor_expr_get_started.html b/docs/tutorial/tensor_expr_get_started.html
index fd94098728..114860f26e 100644
--- a/docs/tutorial/tensor_expr_get_started.html
+++ b/docs/tutorial/tensor_expr_get_started.html
@@ -550,7 +550,7 @@ helper function to run a profile of the TVM generated code.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.000007
-naive: 0.000008
+naive: 0.000007
 </pre></div>
 </div>
 </div>
@@ -669,10 +669,10 @@ factor to be the number of threads on your CPU.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Operator                  Timing             Performance
-   numpy    6.819860000177869e-06                    1.0
-   naive              7.7823e-06      1.1411231315301238
-parallel              6.9705e-06      1.0220884299411135
-  vector    2.4512300000000002e-05     3.594252667849589
+   numpy    7.4744799985637655e-06                   1.0
+   naive              6.7377e-06      0.9014272566512535
+parallel              6.9822e-06      0.9341385623269635
+  vector             2.46861e-05      3.3027180492480386
 </pre></div>
 </div>
 <div class="admonition-code-specialization admonition">
@@ -988,7 +988,7 @@ matrix multiplication.</p>
 <span class="n">answer</span> <span class="o">=</span> <span class="n">numpy</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">a</span><span class="o">.</span><span class="n">numpy</span><span class="p">(),</span> <span class="n">b</span><span class="o">.</span><span class="n">numpy</span><span class="p">())</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.019000
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.018132
 </pre></div>
 </div>
 <p>Now we write a basic matrix multiplication using TVM TE and verify that it
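For orientation, the "none" baseline timed below is the naive TE matmul under a default schedule. A condensed sketch (M = K = N = 1024, as in this tutorial):

    import tvm
    from tvm import te

    M = K = N = 1024
    k = te.reduce_axis((0, K), "k")
    A = te.placeholder((M, K), name="A")
    B = te.placeholder((K, N), name="B")
    C = te.compute((M, N), lambda m, n: te.sum(A[m, k] * B[k, n], axis=k), name="C")
    s = te.create_schedule(C.op)  # default schedule: three plain nested loops
    func = tvm.build(s, [A, B, C], target="llvm", name="mmult")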
@@ -1029,7 +1029,7 @@ optimizations.</p>
 <span class="n">evaluate_operation</span><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">s</span></a><span class="p">,</span> <span class="p">[</span><a href="../reference/api/python/te.html#tvm.te.Tensor" title="tvm.te.Tensor" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>none: 3.394787
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>none: 3.369933
 </pre></div>
 </div>
 <p>Let’s take a look at the intermediate representation of the operator and
@@ -1093,7 +1093,7 @@ schedule.</p>
 <span class="n">evaluate_operation</span><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">s</span></a><span class="p">,</span> <span class="p">[</span><a href="../reference/api/python/te.html#tvm.te.Tensor" title="tvm.te.Tensor" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>blocking: 0.320045
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>blocking: 0.300298
 </pre></div>
 </div>
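The blocking number comes from tiling the output into cache-sized blocks and hoisting the reduction. A sketch of the schedule, reusing the C and s defined for the baseline:

    bn = 32
    # split the output into 32x32 blocks and the reduction axis by a factor of 4
    xo, yo, xi, yi = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
    (k,) = s[C].op.reduce_axis
    ko, ki = s[C].split(k, factor=4)
    s[C].reorder(xo, yo, ko, ki, xi, yi)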
 <p>By reordering the computation to take advantage of caching, you should see a
@@ -1151,7 +1151,7 @@ already cache friendly from our previous optimizations.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vectorization: 0.350142
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vectorization: 0.343557
 @main = primfn(A_1: handle, B_1: handle, C_1: handle) -&gt; ()
   attr = {&quot;from_legacy_te_schedule&quot;: True, &quot;global_symbol&quot;: &quot;main&quot;, &quot;tir.noalias&quot;: True}
   buffers = {A: Buffer(A_2: Pointer(float32), float32, [1024, 1024], []),
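The vectorization variant adds one directive on top of the blocked schedule, mapping the unit-stride inner loop to SIMD. A sketch, with yi taken from the tiling above:

    # the innermost yi loop has stride-1 access, so it can be vectorized
    s[C].vectorize(yi)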
@@ -1205,7 +1205,7 @@ more cache friendly.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>loop permutation: 0.122443
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>loop permutation: 0.116184
 @main = primfn(A_1: handle, B_1: handle, C_1: handle) -&gt; ()
   attr = {&quot;from_legacy_te_schedule&quot;: True, &quot;global_symbol&quot;: &quot;main&quot;, &quot;tir.noalias&quot;: True}
   buffers = {A: Buffer(A_2: Pointer(float32), float32, [1024, 1024], []),
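Loop permutation reorders the blocked loops so A is walked row-wise in the hot loop. A sketch, reusing the split axes from the blocked schedule:

    # move xi outside ki so the inner body reads A along rows
    s[C].reorder(xo, yo, ko, xi, ki, yi)
    s[C].vectorize(yi)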
@@ -1280,7 +1280,7 @@ optimized schedule.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>array packing: 0.108446
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>array packing: 0.107706
 @main = primfn(A_1: handle, B_1: handle, C_1: handle) -&gt; ()
   attr = {&quot;from_legacy_te_schedule&quot;: True, &quot;global_symbol&quot;: &quot;main&quot;, &quot;tir.noalias&quot;: True}
   buffers = {A: Buffer(A_2: Pointer(float32), float32, [1024, 1024], []),
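Array packing rearranges B into a [N//bn][K][bn] layout so the innermost dimension is contiguous. A sketch, with names as in the earlier schedule:

    # repack B so that consecutive n-values within a block are adjacent in memory
    packedB = te.compute(
        (N // bn, K, bn), lambda bigN, kk, littleN: B[kk, bigN * bn + littleN], name="packedB"
    )
    C = te.compute(
        (M, N),
        lambda m, n: te.sum(A[m, k] * packedB[n // bn, k, tvm.tir.indexmod(n, bn)], axis=k),
        name="C",
    )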
@@ -1353,7 +1353,7 @@ to `C</cite> when all the block results are ready.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>block caching: 0.110250
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>block caching: 0.110915
 @main = primfn(A_1: handle, B_1: handle, C_1: handle) -&gt; ()
   attr = {&quot;from_legacy_te_schedule&quot;: True, &quot;global_symbol&quot;: &quot;main&quot;, &quot;tir.noalias&quot;: True}
   buffers = {A: Buffer(A_2: Pointer(float32), float32, [1024, 1024], []),
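Block caching accumulates each 32x32 block into a small write cache and flushes it to C once per block. A sketch on a fresh schedule for the packed compute:

    s = te.create_schedule(C.op)
    CC = s.cache_write(C, "global")  # local buffer holding one block's result
    xo, yo, xi, yi = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
    # compute the cached block at the per-block loop level
    s[CC].compute_at(s[C], yo)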
@@ -1419,7 +1419,7 @@ of thread-level parallelization.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallelization: 0.146611
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallelization: 0.145955
 @main = primfn(A_1: handle, B_1: handle, C_1: handle) -&gt; ()
   attr = {&quot;from_legacy_te_schedule&quot;: True, &quot;global_symbol&quot;: &quot;main&quot;, &quot;tir.noalias&quot;: True}
   buffers = {A: Buffer(A_2: Pointer(float32), float32, [1024, 1024], []),
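The final variant distributes the outer block loop across threads. A one-line addition to the cached schedule:

    # parallelize over the outer block rows
    s[C].parallel(xo)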
@@ -1480,13 +1480,13 @@ working, we can compare the results.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>        Operator                  Timing             Performance
-            none      3.3947865387999996                     1.0
-        blocking     0.32004532770000005     0.09427553810588948
-   vectorization            0.3501424029      0.1031412134159603
-loop permutation     0.12244309140000001    0.036067979532899176
-   array packing            0.1084455223     0.03194472496592781
-   block caching            0.1102499436    0.032476252141311814
- parallelization            0.1466109915     0.04318710169972117
+            none      3.3699329131999995                     1.0
+        blocking            0.3002977844     0.08911090877320911
+   vectorization            0.3435569755     0.10194771953895287
+loop permutation            0.1161838557    0.034476607900682174
+   array packing     0.10770565089999999     0.03196077004326046
+   block caching     0.11091495809999999     0.03291310567802315
+ parallelization     0.14595452820000002     0.04331081121178921
 </pre></div>
 </div>
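Against the 3.37 s baseline, the later schedules land between roughly 23x (parallelization, 0.146 s) and 31x (array packing, 0.108 s) faster on this run; at these margins the relative ordering of the last few variants can shift from run to run.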
 <p>Note that the outputs on the web page reflect the running times on a
@@ -1518,7 +1518,7 @@ is</p>
 you can build generic templates of the matrix multiplication and other
 operations with tunable parameters that allow you to automatically optimize
 the computation for specific platforms.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  1.321 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  0.638 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorial-tensor-expr-get-started-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../_downloads/40a01cffb015a67aaec0fad7e27cf80d/tensor_expr_get_started.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tensor_expr_get_started.py</span></code></a></p>