Posted to commits@tvm.apache.org by tq...@apache.org on 2023/05/24 18:22:14 UTC

[tvm-site] branch asf-site updated: deploying docs (apache/tvm@35ac2262ebf84f150ceada047325b768ad772f61)

This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/tvm-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 268a913254 deploying docs (apache/tvm@35ac2262ebf84f150ceada047325b768ad772f61)
268a913254 is described below

commit 268a9132542ccf871ef1c8d46606728b386fab8f
Author: tvm-bot <95...@users.noreply.github.com>
AuthorDate: Wed May 24 18:22:07 2023 +0000

    deploying docs (apache/tvm@35ac2262ebf84f150ceada047325b768ad772f61)
---
 .../how_to/compile_models/from_darknet.rst.txt     |   2 +-
 .../how_to/compile_models/from_mxnet.rst.txt       |   2 +-
 .../how_to/compile_models/from_oneflow.rst.txt     |   2 +-
 .../how_to/compile_models/from_paddle.rst.txt      |   2 +-
 .../how_to/compile_models/from_pytorch.rst.txt     |   2 +-
 .../how_to/compile_models/from_tensorflow.rst.txt  |   2 +-
 .../compile_models/sg_execution_times.rst.txt      |  22 +-
 .../deploy_models/deploy_model_on_adreno.rst.txt   |   4 +-
 .../deploy_model_on_adreno_tvmc.rst.txt            |   2 +-
 .../deploy_models/deploy_model_on_android.rst.txt  |   2 +-
 .../deploy_object_detection_pytorch.rst.txt        |   4 +-
 .../deploy_models/deploy_prequantized.rst.txt      |   6 +-
 .../deploy_prequantized_tflite.rst.txt             |   2 +-
 .../how_to/deploy_models/deploy_quantized.rst.txt  |   2 +-
 .../deploy_models/sg_execution_times.rst.txt       |  20 +-
 .../extend_tvm/bring_your_own_datatypes.rst.txt    |   2 +-
 .../how_to/extend_tvm/sg_execution_times.rst.txt   |   8 +-
 .../how_to/extend_tvm/use_pass_instrument.rst.txt  |  16 +-
 .../optimize_operators/opt_conv_cuda.rst.txt       |   2 +-
 .../optimize_operators/opt_conv_tensorcore.rst.txt |   2 +-
 .../how_to/optimize_operators/opt_gemm.rst.txt     |  16 +-
 .../optimize_operators/sg_execution_times.rst.txt  |   8 +-
 .../sg_execution_times.rst.txt                     |  14 +-
 .../tune_conv2d_layer_cuda.rst.txt                 |   2 +-
 .../tune_network_cuda.rst.txt                      |   4 +-
 .../tune_network_x86.rst.txt                       |   4 +-
 .../tune_with_autotvm/sg_execution_times.rst.txt   |  10 +-
 .../tune_with_autotvm/tune_conv2d_cuda.rst.txt     |   2 +-
 .../work_with_microtvm/micro_autotune.rst.txt      |  18 +-
 .../work_with_microtvm/micro_pytorch.rst.txt       |   4 +-
 .../how_to/work_with_microtvm/micro_train.rst.txt  |  16 +-
 .../work_with_microtvm/sg_execution_times.rst.txt  |  14 +-
 .../work_with_relay/sg_execution_times.rst.txt     |   8 +-
 .../how_to/work_with_schedules/intrin_math.rst.txt |   2 +-
 .../work_with_schedules/sg_execution_times.rst.txt |  16 +-
 .../tutorials/autotvm/sg_execution_times.rst.txt   |   4 +-
 .../frontend/deploy_classification.rst.txt         |   4 +-
 .../tutorials/frontend/deploy_detection.rst.txt    |   4 +-
 .../tutorials/frontend/sg_execution_times.rst.txt  |   6 +-
 .../tutorials/optimize/sg_execution_times.rst.txt  |   6 +-
 .../topic/vta/tutorials/sg_execution_times.rst.txt |   6 +-
 .../tutorial/auto_scheduler_matmul_x86.rst.txt     |   6 +-
 docs/_sources/tutorial/autotvm_matmul_x86.rst.txt  |  20 +-
 docs/_sources/tutorial/autotvm_relay_x86.rst.txt   |  69 +++---
 .../tutorial/cross_compilation_and_rpc.rst.txt     |   2 +-
 docs/_sources/tutorial/intro_topi.rst.txt          |   2 +-
 docs/_sources/tutorial/sg_execution_times.rst.txt  |  20 +-
 .../tutorial/tensor_expr_get_started.rst.txt       |  47 ++--
 docs/commit_hash                                   |   2 +-
 docs/how_to/compile_models/from_darknet.html       |   2 +-
 docs/how_to/compile_models/from_mxnet.html         |   2 +-
 docs/how_to/compile_models/from_oneflow.html       |  15 +-
 docs/how_to/compile_models/from_paddle.html        |   2 +-
 docs/how_to/compile_models/from_pytorch.html       |  15 +-
 docs/how_to/compile_models/from_tensorflow.html    |   2 +-
 docs/how_to/compile_models/sg_execution_times.html |  30 +--
 .../deploy_models/deploy_model_on_adreno.html      |   4 +-
 .../deploy_models/deploy_model_on_adreno_tvmc.html |  35 ++-
 .../deploy_models/deploy_model_on_android.html     |   2 +-
 .../deploy_object_detection_pytorch.html           |  62 ++---
 docs/how_to/deploy_models/deploy_prequantized.html |  10 +-
 .../deploy_models/deploy_prequantized_tflite.html  |   2 +-
 docs/how_to/deploy_models/deploy_quantized.html    |   2 +-
 docs/how_to/deploy_models/sg_execution_times.html  |  24 +-
 .../extend_tvm/bring_your_own_datatypes.html       |   2 +-
 docs/how_to/extend_tvm/sg_execution_times.html     |   8 +-
 docs/how_to/extend_tvm/use_pass_instrument.html    |  16 +-
 docs/how_to/optimize_operators/opt_conv_cuda.html  |   2 +-
 .../optimize_operators/opt_conv_tensorcore.html    |   2 +-
 docs/how_to/optimize_operators/opt_gemm.html       |  16 +-
 .../optimize_operators/sg_execution_times.html     |   8 +-
 .../sg_execution_times.html                        |  20 +-
 .../tune_conv2d_layer_cuda.html                    |   2 +-
 .../tune_with_autoscheduler/tune_network_cuda.html |   4 +-
 .../tune_with_autoscheduler/tune_network_x86.html  |   4 +-
 .../tune_with_autotvm/sg_execution_times.html      |  10 +-
 .../how_to/tune_with_autotvm/tune_conv2d_cuda.html |   2 +-
 docs/how_to/work_with_microtvm/micro_autotune.html |  18 +-
 docs/how_to/work_with_microtvm/micro_pytorch.html  |   6 +-
 docs/how_to/work_with_microtvm/micro_train.html    |  16 +-
 .../work_with_microtvm/sg_execution_times.html     |  18 +-
 .../how_to/work_with_relay/sg_execution_times.html |   8 +-
 docs/how_to/work_with_schedules/intrin_math.html   |   2 +-
 .../work_with_schedules/sg_execution_times.html    |  16 +-
 docs/reference/api/python/auto_scheduler.html      |   4 +-
 .../api/typedoc/classes/bytestreamreader.html      |  12 +-
 .../api/typedoc/classes/cachedcallstack.html       |  34 +--
 docs/reference/api/typedoc/classes/dldatatype.html |  12 +-
 docs/reference/api/typedoc/classes/dldevice.html   |  10 +-
 .../reference/api/typedoc/classes/environment.html |  12 +-
 docs/reference/api/typedoc/classes/ffilibrary.html |  20 +-
 docs/reference/api/typedoc/classes/instance.html   |  58 ++---
 docs/reference/api/typedoc/classes/memory.html     |  34 +--
 docs/reference/api/typedoc/classes/module.html     |  10 +-
 docs/reference/api/typedoc/classes/ndarray.html    |  22 +-
 .../api/typedoc/classes/packedfunccell.html        |   6 +-
 docs/reference/api/typedoc/classes/rpcserver.html  |  14 +-
 .../api/typedoc/classes/runtimecontext.html        |  22 +-
 docs/reference/api/typedoc/classes/scalar.html     |   6 +-
 docs/reference/api/typedoc/classes/tvmarray.html   |  16 +-
 docs/reference/api/typedoc/classes/tvmobject.html  |  12 +-
 .../api/typedoc/classes/webgpucontext.html         |  12 +-
 docs/reference/api/typedoc/enums/argtypecode.html  |  30 +--
 .../api/typedoc/enums/aynccallbackcode.html        |   4 +-
 .../api/typedoc/enums/dldatatypecode.html          |   8 +-
 .../api/typedoc/enums/rpcserverstate.html          |  12 +-
 docs/reference/api/typedoc/enums/sizeof.html       |  18 +-
 docs/reference/api/typedoc/index.html              | 124 +++++-----
 .../api/typedoc/interfaces/disposable.html         |   2 +-
 .../api/typedoc/interfaces/functioninfo.html       |   6 +-
 .../api/typedoc/interfaces/libraryprovider.html    |   4 +-
 docs/searchindex.js                                |   2 +-
 .../vta/tutorials/autotvm/sg_execution_times.html  |   4 +-
 .../tutorials/frontend/deploy_classification.html  |   4 +-
 .../vta/tutorials/frontend/deploy_detection.html   |   4 +-
 .../vta/tutorials/frontend/sg_execution_times.html |   6 +-
 .../vta/tutorials/optimize/sg_execution_times.html |   6 +-
 docs/topic/vta/tutorials/sg_execution_times.html   |   6 +-
 docs/tutorial/auto_scheduler_matmul_x86.html       |   5 +-
 docs/tutorial/autotvm_matmul_x86.html              |  20 +-
 docs/tutorial/autotvm_relay_x86.html               | 273 ++++++++++-----------
 docs/tutorial/cross_compilation_and_rpc.html       |   2 +-
 docs/tutorial/intro_topi.html                      |   2 +-
 docs/tutorial/sg_execution_times.html              |  20 +-
 docs/tutorial/tensor_expr_get_started.html         |  43 ++--
 125 files changed, 878 insertions(+), 874 deletions(-)

diff --git a/docs/_sources/how_to/compile_models/from_darknet.rst.txt b/docs/_sources/how_to/compile_models/from_darknet.rst.txt
index 2cd6ddd1d1..bf4d365176 100644
--- a/docs/_sources/how_to/compile_models/from_darknet.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_darknet.rst.txt
@@ -318,7 +318,7 @@ The process is no different from other examples.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  33.328 seconds)
+   **Total running time of the script:** ( 1 minutes  34.838 seconds)
 
 
 .. _sphx_glr_download_how_to_compile_models_from_darknet.py:
diff --git a/docs/_sources/how_to/compile_models/from_mxnet.rst.txt b/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
index 86cc97c628..158375d3b2 100644
--- a/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
@@ -116,7 +116,7 @@ In this section, we download a pretrained imagenet model and classify an image.
 
  .. code-block:: none
 
-    Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip72932c81-1807-4862-8c11-9468138eecda from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
+    Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zipbf978a80-d444-4877-972b-6637229f1c9f from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
     x (1, 3, 224, 224)
 
 
diff --git a/docs/_sources/how_to/compile_models/from_oneflow.rst.txt b/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
index 235156153a..17049a00f2 100644
--- a/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
@@ -121,7 +121,7 @@ Load a pretrained OneFlow model and save model
  .. code-block:: none
 
     Downloading: "https://oneflow-public.oss-cn-beijing.aliyuncs.com/model_zoo/flowvision/classification/ResNet/resnet18.zip" to /workspace/.oneflow/flowvision_cache/resnet18.zip
-
      0%|          | 0.00/41.5M [00:00<?, ?B/s]
     15%|#5        | 6.33M/41.5M [00:00<00:00, 57.5MB/s]
     28%|##8       | 11.8M/41.5M [00:00<00:00, 46.4MB/s]
     39%|###9      | 16.3M/41.5M [00:00<00:00, 33.8MB/s]
     54%|#####3    | 22.3M/41.5M [00:00<00:00, 37.7MB/s]
     63%|######2   | 26.1M/41.5M [00:00<00:00, 35.9MB/s]
     77%|#######7  | 32.0M/41.5M [00:00<00:00, 33.9MB/s]
     92%|#########2| 38.3M/41.5M [00:01<00:00, 38.1MB/s]
    100%|##########| 41.5M/41.5M [00:01<00:00, 37.1MB/s]
+
      0%|          | 0.00/41.5M [00:00<?, ?B/s]
     19%|#9        | 7.99M/41.5M [00:00<00:01, 32.5MB/s]
     39%|###8      | 16.0M/41.5M [00:00<00:00, 44.1MB/s]
     54%|#####3    | 22.3M/41.5M [00:00<00:00, 43.4MB/s]
     64%|######4   | 26.7M/41.5M [00:00<00:00, 36.7MB/s]
     77%|#######7  | 32.0M/41.5M [00:00<00:00, 39.7MB/s]
     96%|#########6| 40.0M/41.5M [00:00<00:00, 48.4MB/s]
    100%|##########| 41.5M/41.5M [00:00<00:00, 44.7MB/s]
 
 
 
diff --git a/docs/_sources/how_to/compile_models/from_paddle.rst.txt b/docs/_sources/how_to/compile_models/from_paddle.rst.txt
index e4d6f74460..0b0ebe7265 100644
--- a/docs/_sources/how_to/compile_models/from_paddle.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_paddle.rst.txt
@@ -209,7 +209,7 @@ Look up prediction top 1 index in 1000 class synset.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  2.537 seconds)
+   **Total running time of the script:** ( 1 minutes  7.851 seconds)
 
 
 .. _sphx_glr_download_how_to_compile_models_from_paddle.py:
diff --git a/docs/_sources/how_to/compile_models/from_pytorch.rst.txt b/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
index 93628df390..8dc239de4b 100644
--- a/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
@@ -101,7 +101,7 @@ Load a pretrained PyTorch model
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet18_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet18_Weights.DEFAULT` to get the most up-to-date weights.
       warnings.warn(msg)
     Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
-
      0%|          | 0.00/44.7M [00:00<?, ?B/s]
     14%|#4        | 6.30M/44.7M [00:00<00:00, 64.0MB/s]
     28%|##7       | 12.4M/44.7M [00:00<00:00, 47.0MB/s]
     38%|###8      | 17.1M/44.7M [00:00<00:00, 37.1MB/s]
     54%|#####3    | 24.0M/44.7M [00:00<00:00, 40.0MB/s]
     63%|######2   | 28.0M/44.7M [00:00<00:00, 38.3MB/s]
     72%|#######1  | 32.0M/44.7M [00:00<00:00, 33.8MB/s]
     86%|########5 | 38.3M/44.7M [00:01<00:00, 36.8MB/s]
     94%|#########3| 41.9M/44.7M [00:01<00:00, 34.0MB/s]
    100%|##########| 44.7M/44.7M [00:01<00:00, 39.3MB/s]
+
      0%|          | 0.00/44.7M [00:00<?, ?B/s]
     18%|#7        | 7.99M/44.7M [00:00<00:00, 54.7MB/s]
     36%|###5      | 16.0M/44.7M [00:00<00:00, 57.5MB/s]
     54%|#####3    | 24.0M/44.7M [00:00<00:00, 61.0MB/s]
     72%|#######1  | 32.0M/44.7M [00:00<00:00, 56.1MB/s]
     90%|########9 | 40.0M/44.7M [00:00<00:00, 55.8MB/s]
    100%|##########| 44.7M/44.7M [00:00<00:00, 61.8MB/s]
 
 
 
diff --git a/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt b/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
index 5878a26481..d3c9da7644 100644
--- a/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
@@ -430,7 +430,7 @@ Run the corresponding model on tensorflow
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  31.252 seconds)
+   **Total running time of the script:** ( 1 minutes  35.118 seconds)
 
 
 .. _sphx_glr_download_how_to_compile_models_from_tensorflow.py:
diff --git a/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt b/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
index 25828097ea..b941c8b535 100644
--- a/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
@@ -5,26 +5,26 @@
 
 Computation times
 =================
-**07:06.181** total execution time for **how_to_compile_models** files:
+**07:33.246** total execution time for **how_to_compile_models** files:
 
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_darknet.py` (``from_darknet.py``)       | 01:33.328 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_tensorflow.py` (``from_tensorflow.py``) | 01:35.118 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_tensorflow.py` (``from_tensorflow.py``) | 01:31.252 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_darknet.py` (``from_darknet.py``)       | 01:34.838 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_paddle.py` (``from_paddle.py``)         | 01:02.537 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_paddle.py` (``from_paddle.py``)         | 01:07.851 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_oneflow.py` (``from_oneflow.py``)       | 00:41.395 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_oneflow.py` (``from_oneflow.py``)       | 00:45.640 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_coreml.py` (``from_coreml.py``)         | 00:36.935 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_coreml.py` (``from_coreml.py``)         | 00:41.115 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_mxnet.py` (``from_mxnet.py``)           | 00:32.847 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_mxnet.py` (``from_mxnet.py``)           | 00:35.919 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_pytorch.py` (``from_pytorch.py``)       | 00:28.257 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_keras.py` (``from_keras.py``)           | 00:29.268 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_keras.py` (``from_keras.py``)           | 00:24.956 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_pytorch.py` (``from_pytorch.py``)       | 00:27.657 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_tflite.py` (``from_tflite.py``)         | 00:11.819 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_tflite.py` (``from_tflite.py``)         | 00:12.757 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_onnx.py` (``from_onnx.py``)             | 00:02.854 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_onnx.py` (``from_onnx.py``)             | 00:03.084 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
index fe9377a2a2..a35fe80a95 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
@@ -673,7 +673,7 @@ well as provides information about the model's performance
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-     4226.2542    4224.7779    4235.2199    4222.6341      3.6576   
+     4227.5565    4227.5888    4234.6745    4223.7105      3.1666   
                
 
 
@@ -682,7 +682,7 @@ well as provides information about the model's performance
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  20.132 seconds)
+   **Total running time of the script:** ( 1 minutes  22.678 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_model_on_adreno.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt
index 926b152069..2fecf5cba3 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt
@@ -127,7 +127,7 @@ Make a Keras Resnet50 Model
  .. code-block:: none
 
     Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels.h5
-
         8192/102967424 [..............................] - ETA: 0s
      2187264/102967424 [..............................] - ETA: 3s
      5439488/102967424 [>.............................] - ETA: 2s
      8380416/102967424 [=>............................] - ETA: 2s
     16769024/102967424 [===>..........................] - ETA: 2s
     25157632/102967424 [======>.......................] - ETA: 1s
     33546240/102967424 [========>.....................] - ETA: 1s
     41934848/102967424 [===========>..................] - ETA: 0s
 
     44130304/102967424 [===========>..................] - ETA: 0s
     48578560/102967424 [=============>................] - ETA: 0s
     50700288/102967424 [=============>................] - ETA: 0s
     56967168/102967424 [===============>..............] - ETA: 0s
     67100672/102967424 [==================>...........] - ETA: 0s
     72540160/102967424 [====================>.........] - ETA: 0s
     75489280/102967424 [====================>.........] - ETA: 0s
     84017152/102967424 [=======================>......] - ETA: 0s
     92266496/102967424 [=========================>....] - ETA: 0s
     98910208/102967424 [===========================>..] - ETA: 0s
    100368384/102967424 [============================>.] - ETA: 0s
    102967424/102967424 [==============================] - 2s 0us/step
+
         8192/102967424 [..............................] - ETA: 0s
      6635520/102967424 [>.............................] - ETA: 2s
      8380416/102967424 [=>............................] - ETA: 2s
     15360000/102967424 [===>..........................] - ETA: 1s
     16769024/102967424 [===>..........................] - ETA: 1s
     23412736/102967424 [=====>........................] - ETA: 1s
     25157632/102967424 [======>.......................] - ETA: 1s
     25296896/102967424 [======>.......................] - ETA: 2s
 
     30605312/102967424 [=======>......................] - ETA: 1s
     33546240/102967424 [========>.....................] - ETA: 1s
     38461440/102967424 [==========>...................] - ETA: 1s
     41934848/102967424 [===========>..................] - ETA: 1s
     48578560/102967424 [=============>................] - ETA: 1s
     50323456/102967424 [=============>................] - ETA: 1s
     56967168/102967424 [===============>..............] - ETA: 1s
     58712064/102967424 [================>.............] - ETA: 1s
     64864256/102967424 [=================>............] - ETA: 1s
     65355776/102967424 [==================>...........] - ETA: 1s
     67100672/102967424 [==================>...........] - ETA: 1s
     69296128/102967424 [===================>..........] - ETA: 1s
     75489280/102967424 [====================>.........] - ETA: 0s
     82124800/102967424 [======================>.......] - ETA: 0s
     83877888/102967424 [=======================>......] - ETA: 0s
     90521600/102967424 [=========================>....] - ETA: 0s
     92266496/102967424 [=========================>....] - ETA: 0s
     98910208/102967424 [===========================>..] - ETA: 0s
    100646912/102967424 [============================>.] - ETA: 0s
    102850560/102967424 [============================>.] - ETA: 0s
    102967424/102967424 [==============================] - 3s 0us/step
 
 
 
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
index af2996058d..010f1ff3e2 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
@@ -437,7 +437,7 @@ Execute on TVM
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      15.7718      15.8008      16.3740      15.0997       0.3673   
+      15.8922      15.8378      16.6459      15.5476       0.3041   
                
 
 
diff --git a/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt b/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
index ba21cbebba..a5d42a256c 100644
--- a/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
@@ -130,7 +130,7 @@ Load pre-trained maskrcnn from torchvision and do tracing
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=MaskRCNN_ResNet50_FPN_Weights.COCO_V1`. You can also use `weights=MaskRCNN_ResNet50_FPN_Weights.DEFAULT` to get the most up-to-date weights.
       warnings.warn(msg)
     Downloading: "https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth" to /workspace/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
-
      0%|          | 0.00/170M [00:00<?, ?B/s]
      5%|4         | 7.99M/170M [00:00<00:03, 50.3MB/s]
      9%|9         | 16.0M/170M [00:00<00:02, 54.8MB/s]
     14%|#4        | 24.0M/170M [00:00<00:02, 51.0MB/s]
     18%|#7        | 30.3M/170M [00:00<00:03, 39.3MB/s]
     20%|##        | 34.3M/170M [00:00<00:04, 35.1MB/s]
     24%|##3       | 40.0M/170M [00:01<00:03, 38.2MB/s]
     28%|##8       | 48.0M/170M [00:01<00:03, 40.0MB/s]
     33%|###2      | 56.0M/170M [00:01<00:02, 44.3MB/s]
     38%|###7      | 64.0M/170M [00:01<00:02, 42.9MB/s]
     42%|####2     | 72.0M/170M [00:01<00:02, 44.3MB/s]
     46%|####6     | 78.3M/170M [00:01<00:02, 42.3MB/s]
     49%|####8     | 82.4M/170M [00:02<00:02, 42.1MB/s]
     52%|#####1    | 88.0M/170M [00:02<00:02, 41.0MB/s]
     56%|#####5    | 94.3M/170M [00:02<00:01, 44.7MB/s]
     58%|#####8    | 98.6M/170M [00:02<00:01, 44.8MB/s]
     61%|######    | 103M/170M [00:02<00:01, 40.8MB/s] 
     63%|######2   | 107M/170M [00:02<00:01, 36.2MB/s]
     66%|######5   | 112M/170M [00:02<00:01, 36.6MB/s]
     68%|######8   | 116M/170M [00:02<00:01, 35.2MB/s]
     71%|#######   | 120M/170M [00:03<00:01, 33.1MB/s]
     74%|#######4  | 126M/170M [00:03<00:01, 27.5MB/s]
     76%|#######6  | 129M/170M [00:03<00:01, 27.0MB/s]
     80%|########  | 136M/170M [00:03<00:01, 30.8MB/s]
     85%|########4 | 144M/170M [00:03<00:00, 34.6MB/s]
     89%|########9 | 151M/170M [00:04<00:00, 42.7MB/s]
     92%|#########1| 156M/170M [00:04<00:00, 39.9MB/s]
     94%|#########4| 160M/170M [00:04<00:00, 30.6MB/s]
     98%|#########7| 166M/170M [00:04<00:00, 33.4MB/s]
    100%|##########| 170M/170M [00:04<00:00, 38.2MB/s]
+
      0%|          | 0.00/170M [00:00<?, ?B/s]
      4%|3         | 6.30M/170M [00:00<00:04, 35.6MB/s]
      6%|5         | 9.70M/170M [00:00<00:06, 27.1MB/s]
      8%|8         | 14.3M/170M [00:00<00:04, 33.3MB/s]
     10%|#         | 17.7M/170M [00:00<00:05, 29.5MB/s]
     14%|#4        | 24.0M/170M [00:00<00:05, 26.9MB/s]
     19%|#8        | 32.0M/170M [00:01<00:04, 33.8MB/s]
     24%|##3       | 40.0M/170M [00:01<00:03, 40.3MB/s]
     28%|##8       | 48.0M/170M [00:01<00:02, 43.9MB/s]
     33%|###2      | 56.0M/170M [00:01<00:02, 47.6MB/s]
     37%|###6      | 62.3M/170M [00:01<00:02, 51.7MB/s]
     40%|###9      | 67.5M/170M [00:01<00:02, 49.0MB/s]
     43%|####2     | 72.4M/170M [00:01<00:02, 42.0MB/s]
     46%|####6     | 78.3M/170M [00:02<00:02, 45.9MB/s]
     49%|####8     | 82.9M/170M [00:02<00:02, 44.8MB/s]
     51%|#####1    | 87.4M/170M [00:02<00:02, 40.8MB/s]
     54%|#####3    | 91.4M/170M [00:02<00:02, 39.5MB/s]
     57%|#####6    | 96.0M/170M [00:02<00:02, 35.6MB/s]
     60%|######    | 102M/170M [00:02<00:01, 41.5MB/s] 
     63%|######2   | 106M/170M [00:02<00:01, 36.8MB/s]
     66%|######5   | 112M/170M [00:02<00:01, 37.2MB/s]
     69%|######8   | 117M/170M [00:03<00:01, 38.0MB/s]
     71%|#######1  | 121M/170M [00:03<00:01, 37.0MB/s]
     75%|#######5  | 128M/170M [00:03<00:01, 39.2MB/s]
     80%|########  | 136M/170M [00:03<00:00, 39.6MB/s]
     85%|########4 | 144M/170M [00:03<00:00, 43.3MB/s]
     88%|########8 | 150M/170M [00:03<00:00, 46.4MB/s]
     91%|#########1| 155M/170M [00:04<00:00, 45.8MB/s]
     94%|#########3| 159M/170M [00:04<00:00, 37.9MB/s]
     96%|#########6| 163M/170M [00:04<00:00, 39.2MB/s]
     99%|#########8| 167M/170M [00:04<00:00, 38.8MB/s]
    100%|##########| 170M/170M [00:04<00:00, 39.8MB/s]
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/nn/functional.py:3912: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
       (torch.floor((input.size(i + 2).float() * torch.tensor(scale_factors[i], dtype=torch.float32)).float()))
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/ops/boxes.py:157: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
@@ -295,7 +295,7 @@ Get boxes with score larger than 0.9
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 3 minutes  44.380 seconds)
+   **Total running time of the script:** ( 3 minutes  54.779 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_object_detection_pytorch.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt b/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
index ab530ca0d0..cc1c4a8b17 100644
--- a/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
@@ -227,7 +227,7 @@ training. Other models require a full post training calibration.
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=MobileNet_V2_Weights.IMAGENET1K_V1`. You can also use `weights=MobileNet_V2_Weights.DEFAULT` to get the most up-to-date weights.
       warnings.warn(msg)
     Downloading: "https://download.pytorch.org/models/mobilenet_v2-b0353104.pth" to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth
-
      0%|          | 0.00/13.6M [00:00<?, ?B/s]
     59%|#####8    | 7.99M/13.6M [00:00<00:00, 50.0MB/s]
     94%|#########4| 12.8M/13.6M [00:00<00:00, 44.1MB/s]
    100%|##########| 13.6M/13.6M [00:00<00:00, 47.3MB/s]
+
      0%|          | 0.00/13.6M [00:00<?, ?B/s]
     47%|####6     | 6.30M/13.6M [00:00<00:00, 48.7MB/s]
     81%|########  | 10.9M/13.6M [00:00<00:00, 30.6MB/s]
    100%|##########| 13.6M/13.6M [00:00<00:00, 32.9MB/s]
 
 
 
@@ -409,7 +409,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      89.2425      89.1767      90.6769      88.7424       0.2954   
+      89.1701      89.1335      93.3542      88.6979       0.4907   
                
 
 
@@ -458,7 +458,7 @@ TODO
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  28.431 seconds)
+   **Total running time of the script:** ( 1 minutes  35.572 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_prequantized.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt b/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
index 9932d7126d..73e7c47f95 100644
--- a/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
@@ -423,7 +423,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      109.4787     109.4029     110.4955     108.8343      0.3101   
+      110.1091     109.9590     115.1783     109.5069      0.6258   
                
 
 
diff --git a/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt b/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
index 70606f79ee..dc1f17df27 100644
--- a/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
@@ -257,7 +257,7 @@ We create a Relay VM to build and execute the model.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 2 minutes  0.059 seconds)
+   **Total running time of the script:** ( 2 minutes  19.368 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_quantized.py:
diff --git a/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt b/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
index f795f8d1f3..7a54caea82 100644
--- a/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
@@ -5,26 +5,26 @@
 
 Computation times
 =================
-**12:02.027** total execution time for **how_to_deploy_models** files:
+**12:59.608** total execution time for **how_to_deploy_models** files:
 
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``) | 03:44.380 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``) | 03:54.779 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_quantized.py` (``deploy_quantized.py``)                               | 02:00.059 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_quantized.py` (``deploy_quantized.py``)                               | 02:19.368 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized.py` (``deploy_prequantized.py``)                         | 01:28.431 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized.py` (``deploy_prequantized.py``)                         | 01:35.572 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno.py` (``deploy_model_on_adreno.py``)                   | 01:20.132 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno.py` (``deploy_model_on_adreno.py``)                   | 01:22.678 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)           | 00:52.139 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)           | 00:56.278 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_android.py` (``deploy_model_on_android.py``)                 | 00:51.155 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_android.py` (``deploy_model_on_android.py``)                 | 00:55.071 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno_tvmc.py` (``deploy_model_on_adreno_tvmc.py``)         | 00:45.288 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno_tvmc.py` (``deploy_model_on_adreno_tvmc.py``)         | 00:50.872 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_nano.py` (``deploy_model_on_nano.py``)                       | 00:30.331 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)                       | 00:32.525 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)                       | 00:30.106 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_nano.py` (``deploy_model_on_nano.py``)                       | 00:32.458 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_deploy_models_deploy_sparse.py` (``deploy_sparse.py``)                                     | 00:00.007 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt b/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
index f73095b44f..3427277fd8 100644
--- a/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
@@ -463,7 +463,7 @@ First let us define two helper functions to get the mobilenet model and a cat im
 
  .. code-block:: none
 
-    Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip486dd616-2c9d-4de8-8edd-5728f7951673 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
+    Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip7d141282-fd43-4445-b2d9-aef2526ed0be from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
 
 
 
diff --git a/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt b/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
index bd6b897952..e4b06dab83 100644
--- a/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
@@ -5,14 +5,14 @@
 
 Computation times
 =================
-**00:58.305** total execution time for **how_to_extend_tvm** files:
+**00:58.576** total execution time for **how_to_extend_tvm** files:
 
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``) | 00:54.272 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``) | 00:54.588 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_use_pass_instrument.py` (``use_pass_instrument.py``)           | 00:02.830 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_use_pass_instrument.py` (``use_pass_instrument.py``)           | 00:02.804 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_use_pass_infra.py` (``use_pass_infra.py``)                     | 00:01.196 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_use_pass_infra.py` (``use_pass_infra.py``)                     | 00:01.177 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_extend_tvm_low_level_custom_pass.py` (``low_level_custom_pass.py``)       | 00:00.007 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt b/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
index 772efdba80..23ea8290a3 100644
--- a/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
@@ -220,10 +220,10 @@ profile the execution time of each passes.
  .. code-block:: none
 
     Printing results of timing profile...
-    InferType: 24091us [24091us] (48.23%; 48.23%)
-    FoldScaleAxis: 25860us [9us] (51.77%; 51.77%)
-            FoldConstant: 25851us [1882us] (51.75%; 99.97%)
-                    InferType: 23969us [23969us] (47.98%; 92.72%)
+    InferType: 23974us [23974us] (44.05%; 44.05%)
+    FoldScaleAxis: 30448us [10us] (55.95%; 55.95%)
+            FoldConstant: 30438us [2762us] (55.93%; 99.97%)
+                    InferType: 27677us [27677us] (50.86%; 90.93%)
 
 
 
@@ -262,10 +262,10 @@ Refer to following sections and :py:func:`tvm.instrument.pass_instrument` for th
  .. code-block:: none
 
     Printing results of timing profile...
-    InferType: 23960us [23960us] (48.04%; 48.04%)
-    FoldScaleAxis: 25920us [9us] (51.96%; 51.96%)
-            FoldConstant: 25911us [1925us] (51.95%; 99.96%)
-                    InferType: 23986us [23986us] (48.09%; 92.57%)
+    InferType: 23430us [23430us] (48.26%; 48.26%)
+    FoldScaleAxis: 25123us [8us] (51.74%; 51.74%)
+            FoldConstant: 25116us [1871us] (51.73%; 99.97%)
+                    InferType: 23245us [23245us] (47.88%; 92.55%)
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt b/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
index f7ea5168c5..cc89b2fede 100644
--- a/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
@@ -331,7 +331,7 @@ latency of convolution.
 
  .. code-block:: none
 
-    Convolution: 33.582592 ms
+    Convolution: 53.581855 ms
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt b/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
index 2f194b793c..1d68dc674c 100644
--- a/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
@@ -598,7 +598,7 @@ be able to run on our build server
 
  .. code-block:: none
 
-    conv2d with tensor core: 12.269376 ms
+    conv2d with tensor core: 12.276531 ms
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt b/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
index 391d2f1336..5468f3e0d4 100644
--- a/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
@@ -134,8 +134,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
 
  .. code-block:: none
 
-    Numpy running time: 0.019010
-    Baseline: 3.180731
+    Numpy running time: 0.018542
+    Baseline: 3.545863
 
 
 
@@ -227,7 +227,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
 
  .. code-block:: none
 
-    Opt1: 0.307755
+    Opt1: 0.297697
 
 
 
@@ -318,7 +318,7 @@ In this tutorial, we chose to vectorize the inner loop row data since it is cach
 
  .. code-block:: none
 
-    Opt2: 0.283842
+    Opt2: 0.289679
 
 
 
@@ -406,7 +406,7 @@ the access pattern for A matrix is more cache friendly.
 
  .. code-block:: none
 
-    Opt3: 0.120883
+    Opt3: 0.118086
 
 
 
@@ -523,7 +523,7 @@ flattening.
 
  .. code-block:: none
 
-    Opt4: 0.107620
+    Opt4: 0.108616
 
 
 
@@ -635,7 +635,7 @@ write to C when all the block results are ready.
 
  .. code-block:: none
 
-    Opt5: 0.111692
+    Opt5: 0.112535
 
 
 
@@ -748,7 +748,7 @@ Furthermore, we can also utilize multi-core processors to do the thread-level pa
 
  .. code-block:: none
 
-    Opt6: 0.133280
+    Opt6: 0.134580
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt b/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
index 119e44cd30..1058f3d7bb 100644
--- a/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
@@ -5,12 +5,12 @@
 
 Computation times
 =================
-**00:33.751** total execution time for **how_to_optimize_operators** files:
+**00:34.893** total execution time for **how_to_optimize_operators** files:
 
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_gemm.py` (``opt_gemm.py``)                       | 00:30.490 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_gemm.py` (``opt_gemm.py``)                       | 00:31.614 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``) | 00:01.963 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``) | 00:02.039 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_cuda.py` (``opt_conv_cuda.py``)             | 00:01.299 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_cuda.py` (``opt_conv_cuda.py``)             | 00:01.240 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
index f34a49ad78..d13a3759d2 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
@@ -5,18 +5,18 @@
 
 Computation times
 =================
-**03:35.538** total execution time for **how_to_tune_with_autoscheduler** files:
+**03:46.816** total execution time for **how_to_tune_with_autoscheduler** files:
 
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_x86.py` (``tune_network_x86.py``)             | 01:29.749 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_x86.py` (``tune_network_x86.py``)             | 01:33.828 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_cuda.py` (``tune_network_cuda.py``)           | 01:16.274 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_cuda.py` (``tune_network_cuda.py``)           | 01:19.762 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``) | 00:17.408 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_arm.py` (``tune_network_arm.py``)             | 00:18.140 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_arm.py` (``tune_network_arm.py``)             | 00:16.348 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_mali.py` (``tune_network_mali.py``)           | 00:17.591 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_mali.py` (``tune_network_mali.py``)           | 00:15.654 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``) | 00:17.392 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_sparse_x86.py` (``tune_sparse_x86.py``)               | 00:00.104 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_sparse_x86.py` (``tune_sparse_x86.py``)               | 00:00.103 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
index 9db90262ed..e9147cdf83 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
@@ -767,7 +767,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 0.356 ms
+    Execution time of this operator: 0.340 ms
 
 
 
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
index f74bec923f..619e8401c0 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
@@ -647,7 +647,7 @@ so we can read the log file and load the best schedules.
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-       8.1405       8.1461       8.1480       8.1274       0.0093   
+       8.1221       8.1217       8.1298       8.1149       0.0061   
                
 
 
@@ -675,7 +675,7 @@ Other Tips
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  16.274 seconds)
+   **Total running time of the script:** ( 1 minutes  19.762 seconds)
 
 
 .. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_network_cuda.py:
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
index bfe9530fe1..547d29e5be 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
@@ -666,7 +666,7 @@ so we can read the log file and load the best schedules.
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      750.1840     749.0816     752.9914     748.4790      2.0003   
+      756.9340     757.8507     757.9034     755.0479      1.3339   
                
 
 
@@ -694,7 +694,7 @@ Other Tips
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  29.749 seconds)
+   **Total running time of the script:** ( 1 minutes  33.828 seconds)
 
 
 .. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_network_x86.py:
diff --git a/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt b/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
index 9e50db5012..515789da5d 100644
--- a/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
@@ -5,16 +5,16 @@
 
 Computation times
 =================
-**00:23.526** total execution time for **how_to_tune_with_autotvm** files:
+**00:24.587** total execution time for **how_to_tune_with_autotvm** files:
 
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)           | 00:23.489 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)           | 00:24.547 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``)               | 00:00.022 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``)               | 00:00.024 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_cuda.py` (``tune_relay_cuda.py``)             | 00:00.006 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_arm.py` (``tune_relay_arm.py``)               | 00:00.005 | 0.0 MB |
-+--------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_mobile_gpu.py` (``tune_relay_mobile_gpu.py``) | 00:00.005 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
+| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_arm.py` (``tune_relay_arm.py``)               | 00:00.005 | 0.0 MB |
++--------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt b/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
index d169f45bc4..0e2fbe3a34 100644
--- a/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
@@ -326,7 +326,7 @@ and measure running time.
 
     Best config:
     ,None
-    Time cost of this operator: 0.037251
+    Time cost of this operator: 0.037097
 
 
 
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
index 9e3d76d1bf..58339dbea3 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
@@ -360,10 +360,10 @@ Timing the untuned program
     ########## Build without Autotuning ##########
     Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)  
     ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------  
-    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  309.1     98.737   (1, 2, 10, 10, 3)  2       1        [309.1]           
-    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.956     0.944    (1, 6, 10, 10)     1       1        [2.956]           
-    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.999     0.319    (1, 1, 10, 10, 3)  1       1        [0.999]           
-    Total_time                                    -                                             313.055   -        -                  -       -        -                 
+    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  308.9     98.757   (1, 2, 10, 10, 3)  2       1        [308.9]           
+    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.918     0.933    (1, 6, 10, 10)     1       1        [2.918]           
+    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.971     0.31     (1, 1, 10, 10, 3)  1       1        [0.971]           
+    Total_time                                    -                                             312.789   -        -                  -       -        -                 
 
 
 
@@ -428,10 +428,10 @@ Timing the tuned program
     ########## Build with Autotuning ##########
     Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)  
     ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------  
-    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  136.7     98.081   (1, 6, 10, 10, 1)  2       1        [136.7]           
-    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.81      1.299    (1, 6, 10, 10)     1       1        [1.81]            
-    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.865     0.621    (1, 3, 10, 10, 1)  1       1        [0.865]           
-    Total_time                                    -                                             139.375   -        -                  -       -        -                 
+    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  101.7     97.466   (1, 6, 10, 10, 1)  2       1        [101.7]           
+    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.799     1.724    (1, 6, 10, 10)     1       1        [1.799]           
+    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.846     0.811    (1, 3, 10, 10, 1)  1       1        [0.846]           
+    Total_time                                    -                                             104.344   -        -                  -       -        -                 
 
 
 
@@ -439,7 +439,7 @@ Timing the tuned program
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  26.459 seconds)
+   **Total running time of the script:** ( 1 minutes  28.926 seconds)
 
 
 .. _sphx_glr_download_how_to_work_with_microtvm_micro_autotune.py:
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
index 47f06ec366..ea02d648d0 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
@@ -118,7 +118,7 @@ download a cat image and preprocess it to use as the model input.
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/ao/quantization/utils.py:310: UserWarning: must run observer before calling calculate_qparams. Returning default values.
       warnings.warn(
     Downloading: "https://download.pytorch.org/models/quantized/mobilenet_v2_qnnpack_37f702c5.pth" to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2_qnnpack_37f702c5.pth
-
      0%|          | 0.00/3.42M [00:00<?, ?B/s]
     67%|######6   | 2.29M/3.42M [00:00<00:00, 24.0MB/s]
    100%|##########| 3.42M/3.42M [00:00<00:00, 27.4MB/s]
+
      0%|          | 0.00/3.42M [00:00<?, ?B/s]
     61%|######    | 2.09M/3.42M [00:00<00:00, 14.8MB/s]
    100%|##########| 3.42M/3.42M [00:00<00:00, 23.4MB/s]
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/_utils.py:314: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
       device=storage.device,
     /workspace/python/tvm/relay/frontend/pytorch_utils.py:47: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
@@ -326,7 +326,7 @@ Look up prediction top 1 index in 1000 class synset.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  28.613 seconds)
+   **Total running time of the script:** ( 1 minutes  28.633 seconds)
 
 
 .. _sphx_glr_download_how_to_work_with_microtvm_micro_pytorch.py:
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
index 2caf7d7bad..e5ad75624b 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
@@ -217,7 +217,7 @@ take about **2 minutes** to download the Stanford Cars, while COCO 2017 validati
  .. code-block:: none
 
 
-    '/tmp/tmpt_udtumq/images/random'
+    '/tmp/tmp2dfhyzae/images/random'
 
 
 
@@ -317,8 +317,8 @@ objects to other stuff? We can display some examples from our datasets using ``m
 
  .. code-block:: none
 
-    /tmp/tmpt_udtumq/images/target contains 8144 images
-    /tmp/tmpt_udtumq/images/random contains 5000 images
+    /tmp/tmp2dfhyzae/images/target contains 8144 images
+    /tmp/tmp2dfhyzae/images/random contains 5000 images
 
 
 
@@ -493,13 +493,13 @@ the time on our validation set).
  .. code-block:: none
 
     Epoch 1/3
-    328/328 - 41s - loss: 0.2102 - accuracy: 0.9292 - val_loss: 0.1365 - val_accuracy: 0.9520 - 41s/epoch - 124ms/step
+    328/328 - 41s - loss: 0.2184 - accuracy: 0.9255 - val_loss: 0.1313 - val_accuracy: 0.9528 - 41s/epoch - 124ms/step
     Epoch 2/3
-    328/328 - 35s - loss: 0.1001 - accuracy: 0.9637 - val_loss: 0.0911 - val_accuracy: 0.9728 - 35s/epoch - 108ms/step
+    328/328 - 36s - loss: 0.1026 - accuracy: 0.9633 - val_loss: 0.1049 - val_accuracy: 0.9660 - 36s/epoch - 109ms/step
     Epoch 3/3
-    328/328 - 35s - loss: 0.0684 - accuracy: 0.9740 - val_loss: 0.1135 - val_accuracy: 0.9687 - 35s/epoch - 108ms/step
+    328/328 - 36s - loss: 0.0669 - accuracy: 0.9751 - val_loss: 0.1600 - val_accuracy: 0.9528 - 36s/epoch - 108ms/step
 
-    <keras.callbacks.History object at 0x7f4afa9f3850>
+    <keras.callbacks.History object at 0x7f4cf695beb0>
 
 
 
@@ -860,7 +860,7 @@ Arduino tutorial for how to do that `on GitHub <https://github.com/guberti/tvm-a
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 4 minutes  39.162 seconds)
+   **Total running time of the script:** ( 4 minutes  55.393 seconds)
 
 
 .. _sphx_glr_download_how_to_work_with_microtvm_micro_train.py:
diff --git a/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
index 9f93f8520f..3dfed8ff71 100644
--- a/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
@@ -5,20 +5,20 @@
 
 Computation times
 =================
-**08:04.172** total execution time for **how_to_work_with_microtvm** files:
+**08:23.233** total execution time for **how_to_work_with_microtvm** files:
 
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_train.py` (``micro_train.py``)           | 04:39.162 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_train.py` (``micro_train.py``)           | 04:55.393 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_pytorch.py` (``micro_pytorch.py``)       | 01:28.613 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_autotune.py` (``micro_autotune.py``)     | 01:28.926 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_autotune.py` (``micro_autotune.py``)     | 01:26.459 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_pytorch.py` (``micro_pytorch.py``)       | 01:28.633 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_aot.py` (``micro_aot.py``)               | 00:12.023 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_aot.py` (``micro_aot.py``)               | 00:12.170 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_tflite.py` (``micro_tflite.py``)         | 00:09.325 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_tflite.py` (``micro_tflite.py``)         | 00:09.516 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_custom_ide.py` (``micro_custom_ide.py``) | 00:08.591 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_custom_ide.py` (``micro_custom_ide.py``) | 00:08.595 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_work_with_microtvm_micro_ethosu.py` (``micro_ethosu.py``)         | 00:00.000 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
index 7208d6db26..50428d735c 100644
--- a/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
@@ -5,14 +5,14 @@
 
 Computation times
 =================
-**00:40.856** total execution time for **how_to_work_with_relay** files:
+**00:41.706** total execution time for **how_to_work_with_relay** files:
 
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_using_pipeline_executor.py` (``using_pipeline_executor.py``) | 00:35.436 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_using_pipeline_executor.py` (``using_pipeline_executor.py``) | 00:36.210 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_using_external_lib.py` (``using_external_lib.py``)           | 00:03.254 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_using_external_lib.py` (``using_external_lib.py``)           | 00:03.364 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_build_gcn.py` (``build_gcn.py``)                             | 00:02.161 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_build_gcn.py` (``build_gcn.py``)                             | 00:02.125 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_work_with_relay_using_relay_viz.py` (``using_relay_viz.py``)                 | 00:00.006 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt b/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
index 27d8b64cb1..718836b17a 100644
--- a/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
@@ -278,7 +278,7 @@ The following example customizes CUDA lowering rule for :code:`exp`.
  .. code-block:: none
 
 
-    <function my_cuda_math_rule at 0x7f498c799e50>
+    <function my_cuda_math_rule at 0x7f4b80713af0>
 
 
 
diff --git a/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
index 8043c9cae2..b7a55b4ec7 100644
--- a/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
@@ -5,22 +5,22 @@
 
 Computation times
 =================
-**00:06.398** total execution time for **how_to_work_with_schedules** files:
+**00:09.018** total execution time for **how_to_work_with_schedules** files:
 
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_intrin_math.py` (``intrin_math.py``)                 | 00:03.409 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_intrin_math.py` (``intrin_math.py``)                 | 00:05.720 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tensorize.py` (``tensorize.py``)                     | 00:01.236 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tensorize.py` (``tensorize.py``)                     | 00:01.558 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_reduction.py` (``reduction.py``)                     | 00:00.747 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_reduction.py` (``reduction.py``)                     | 00:00.732 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_scan.py` (``scan.py``)                               | 00:00.744 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_scan.py` (``scan.py``)                               | 00:00.731 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_extern_op.py` (``extern_op.py``)                     | 00:00.121 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_extern_op.py` (``extern_op.py``)                     | 00:00.123 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tedd.py` (``tedd.py``)                               | 00:00.060 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tedd.py` (``tedd.py``)                               | 00:00.070 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_work_with_schedules_schedule_primitives.py` (``schedule_primitives.py``) | 00:00.052 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tuple_inputs.py` (``tuple_inputs.py``)               | 00:00.029 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tuple_inputs.py` (``tuple_inputs.py``)               | 00:00.031 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
index 1a0f5202e6..81cde77403 100644
--- a/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:35.288** total execution time for **topic_vta_tutorials_autotvm** files:
+**00:37.828** total execution time for **topic_vta_tutorials_autotvm** files:
 
 +---------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``) | 00:35.280 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``) | 00:37.820 | 0.0 MB |
 +---------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_alu_vta.py` (``tune_alu_vta.py``)     | 00:00.008 | 0.0 MB |
 +---------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
index 166ea665e1..2cf3cf3e51 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
@@ -293,7 +293,7 @@ The compilation steps are:
       warnings.warn(
     /workspace/vta/tutorials/frontend/deploy_classification.py:212: DeprecationWarning: legacy graph executor behavior of producing json / lib / params will be removed in the next release. Please see documents of tvm.contrib.graph_executor.GraphModule for the  new recommended usage.
       graph, lib, params = relay.build(
-    resnet18_v1 inference graph built in 37.88s!
+    resnet18_v1 inference graph built in 42.06s!
 
 
 
@@ -416,7 +416,7 @@ and an input test image.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  5.199 seconds)
+   **Total running time of the script:** ( 1 minutes  9.415 seconds)
 
 
 .. _sphx_glr_download_topic_vta_tutorials_frontend_deploy_classification.py:
diff --git a/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
index f01fcb7560..2f3266b4f2 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
@@ -337,7 +337,7 @@ The compilation steps are:
 
     /workspace/python/tvm/relay/build_module.py:345: DeprecationWarning: Please use input parameter mod (tvm.IRModule) instead of deprecated parameter mod (tvm.relay.function.Function)
       warnings.warn(
-    yolov3-tiny inference graph built in 25.82s!
+    yolov3-tiny inference graph built in 28.81s!
 
 
 
@@ -447,7 +447,7 @@ Download test image
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  9.293 seconds)
+   **Total running time of the script:** ( 1 minutes  12.453 seconds)
 
 
 .. _sphx_glr_download_topic_vta_tutorials_frontend_deploy_detection.py:
diff --git a/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
index d61a3ee262..b64b9b2576 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**02:14.491** total execution time for **topic_vta_tutorials_frontend** files:
+**02:21.869** total execution time for **topic_vta_tutorials_frontend** files:
 
 +------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_detection.py` (``deploy_detection.py``)           | 01:09.293 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_detection.py` (``deploy_detection.py``)           | 01:12.453 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``) | 01:05.199 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``) | 01:09.415 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
index 73c884da1a..15f39eb392 100644
--- a/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:03.388** total execution time for **topic_vta_tutorials_optimize** files:
+**00:03.344** total execution time for **topic_vta_tutorials_optimize** files:
 
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)         | 00:02.863 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)         | 00:02.829 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``) | 00:00.525 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``) | 00:00.515 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
index b1b32c79bc..9edc6aaad9 100644
--- a/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:00.887** total execution time for **topic_vta_tutorials** files:
+**00:00.900** total execution time for **topic_vta_tutorials** files:
 
 +---------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``) | 00:00.456 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``) | 00:00.467 | 0.0 MB |
 +---------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_vta_get_started.py` (``vta_get_started.py``) | 00:00.432 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_vta_get_started.py` (``vta_get_started.py``) | 00:00.433 | 0.0 MB |
 +---------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt b/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
index 3aebe1b5aa..a02f5e161c 100644
--- a/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
+++ b/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
@@ -318,7 +318,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 93.784 ms
+    Execution time of this operator: 97.395 ms
 
 
 
@@ -416,7 +416,7 @@ resume the status and do more 5 trials.
  .. code-block:: none
 
     Resume search:
-    *E
+
 
 
 
@@ -434,7 +434,7 @@ operations.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  47.147 seconds)
+   **Total running time of the script:** ( 1 minutes  33.068 seconds)
 
 
 .. _sphx_glr_download_tutorial_auto_scheduler_matmul_x86.py:
diff --git a/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt b/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
index 2a95239634..01abad8de7 100644
--- a/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
+++ b/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
@@ -454,16 +454,16 @@ reduce variance, we take 5 measurements and average them.
     waiting for device...
     device available
     Get devices for measurement successfully!
-    No: 1   GFLOPS: 4.71/4.71       result: MeasureResult(costs=(0.05694622020000001,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.1869642734527588, timestamp=1684922502.4330118)        [('tile_y', [-1, 8]), ('tile_x', [-1, 2])],None,13
-    No: 2   GFLOPS: 10.76/10.76     result: MeasureResult(costs=(0.024956706000000002,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.813683032989502, timestamp=1684922503.1108992)        [('tile_y', [-1, 2]), ('tile_x', [-1, 128])],None,71
-    No: 3   GFLOPS: 10.83/10.83     result: MeasureResult(costs=(0.024791978,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6751875877380371, timestamp=1684922503.7719996)        [('tile_y', [-1, 4]), ('tile_x', [-1, 16])],None,42
-    No: 4   GFLOPS: 4.13/10.83      result: MeasureResult(costs=(0.065054261,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.301656723022461, timestamp=1684922505.0782185) [('tile_y', [-1, 4]), ('tile_x', [-1, 2])],None,12
-    No: 5   GFLOPS: 2.89/10.83      result: MeasureResult(costs=(0.0927725522,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.751298427581787, timestamp=1684922506.9588819)        [('tile_y', [-1, 16]), ('tile_x', [-1, 2])],None,14
-    No: 6   GFLOPS: 3.30/10.83      result: MeasureResult(costs=(0.0814160686,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.5724496841430664, timestamp=1684922508.5319996)       [('tile_y', [-1, 512]), ('tile_x', [-1, 8])],None,39
-    No: 7   GFLOPS: 8.62/10.83      result: MeasureResult(costs=(0.031157640400000002,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7739567756652832, timestamp=1684922509.2955496)       [('tile_y', [-1, 256]), ('tile_x', [-1, 16])],None,48
-    No: 8   GFLOPS: 2.05/10.83      result: MeasureResult(costs=(0.1309710022,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.358995199203491, timestamp=1684922511.65407)  [('tile_y', [-1, 32]), ('tile_x', [-1, 4])],None,25
-    No: 9   GFLOPS: 11.15/11.15     result: MeasureResult(costs=(0.0240744876,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6144266128540039, timestamp=1684922512.3943744)       [('tile_y', [-1, 128]), ('tile_x', [-1, 512])],None,97
-    No: 10  GFLOPS: 16.23/16.23     result: MeasureResult(costs=(0.016537032,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.5081222057342529, timestamp=1684922512.9238546)        [('tile_y', [-1, 16]), ('tile_x', [-1, 64])],None,64
+    No: 1   GFLOPS: 7.67/7.67       result: MeasureResult(costs=(0.0349940904,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.8255264759063721, timestamp=1684949662.1887333)       [('tile_y', [-1, 512]), ('tile_x', [-1, 16])],None,49
+    No: 2   GFLOPS: 1.02/7.67       result: MeasureResult(costs=(0.2625456442,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.4924726486206055, timestamp=1684949666.646765)        [('tile_y', [-1, 128]), ('tile_x', [-1, 2])],None,17
+    No: 3   GFLOPS: 10.03/10.03     result: MeasureResult(costs=(0.026756031999999996,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6823074817657471, timestamp=1684949667.3306112)       [('tile_y', [-1, 2]), ('tile_x', [-1, 8])],None,31
+    No: 4   GFLOPS: 3.51/10.03      result: MeasureResult(costs=(0.07655771539999999,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.4601919651031494, timestamp=1684949668.8121912)        [('tile_y', [-1, 512]), ('tile_x', [-1, 8])],None,39
+    No: 5   GFLOPS: 0.50/10.03      result: MeasureResult(costs=(0.5372087728,), error_no=MeasureErrorNo.NO_ERROR, all_cost=8.894731283187866, timestamp=1684949677.8450584)        [('tile_y', [-1, 32]), ('tile_x', [-1, 1])],None,5
+    No: 6   GFLOPS: 11.13/11.13     result: MeasureResult(costs=(0.0241146898,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6550126075744629, timestamp=1684949678.4965243)       [('tile_y', [-1, 16]), ('tile_x', [-1, 256])],None,84
+    No: 7   GFLOPS: 9.96/11.13      result: MeasureResult(costs=(0.0269434458,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7120945453643799, timestamp=1684949679.1821337)       [('tile_y', [-1, 8]), ('tile_x', [-1, 16])],None,43
+    No: 8   GFLOPS: 11.88/11.88     result: MeasureResult(costs=(0.0225995996,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6406514644622803, timestamp=1684949679.8011682)       [('tile_y', [-1, 16]), ('tile_x', [-1, 16])],None,44
+    No: 9   GFLOPS: 7.43/11.88      result: MeasureResult(costs=(0.036129904000000004,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.79469895362854, timestamp=1684949680.7161095) [('tile_y', [-1, 2]), ('tile_x', [-1, 4])],None,21
+    No: 10  GFLOPS: 2.10/11.88      result: MeasureResult(costs=(0.12794936620000003,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.274810791015625, timestamp=1684949683.0235138) [('tile_y', [-1, 32]), ('tile_x', [-1, 4])],None,25
 
 
 
diff --git a/docs/_sources/tutorial/autotvm_relay_x86.rst.txt b/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
index 365cb0640a..c6fe4dc0e3 100644
--- a/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
+++ b/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
@@ -311,7 +311,7 @@ standard deviation.
 
  .. code-block:: none
 
-    {'mean': 496.5515389698703, 'median': 496.1619072502799, 'std': 2.0135947670101677}
+    {'mean': 492.5452644500001, 'median': 492.4691212000198, 'std': 2.1286296796252318}
 
 
 
@@ -582,30 +582,32 @@ the tuning data to.
 
  .. code-block:: none
 
-
    [Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  1/25]  Current/Best:   14.46/  14.46 GFLOPS | Progress: (4/20) | 10.23 s
    [Task  1/25]  Current/Best:   16.78/  21.30 GFLOPS | Progress: (8/20) | 12.43 s
    [Task  1/25]  Current/Best:   20.59/  21.30 GFLOPS | Progress: (12/20) | 14.61 s
    [Task  1/25]  Current/Best:   10.08/  21.74 GFLOPS | Progress: (16/20) | 17.09 s
    [Task  1/25]  Current/Best:    9.32/  23.65 GFLOPS | Progress: (20/20) | 24.94 s Done.
-
    [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  2/25]  Current/Best:    5.62/  18.68 GFLOPS | Progress: (4/20) | 4.78 s
    [Task  2/25]  Current/Best:    5.67/  18.68 GFLOPS | Progress: (8/20) | 6.57 s
    [Task  2/25]  Current/Best:    6.61/  18.68 GFLOPS | Progress: (12/20) | 8.23 s
    [Task  2/25]  Current/Best:    6.87/  18.68 GFLOPS | Progress: (16/20) | 10.23 s
    [Task  2/25]  Current/Best:    9.36/  18.68 GFLOPS | Progress: (20/20) | 11.80 s Done.
-
    [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  3/25]  Current/Best:   20.36/  23.36 GFLOPS | Progress: (4/20) | 4.88 s
    [Task  3/25]  Current/Best:   19.56/  23.36 GFLOPS | Progress: (8/20) | 6.75 s
    [Task  3/25]  Current/Best:    5.41/  23.36 GFLOPS | Progress: (12/20) | 8.96 s
    [Task  3/25]  Current/Best:   15.69/  23.36 GFLOPS | Progress: (16/20) | 12.05 s
    [Task  3/25]  Current/Best:    7.75/  23.90 GFLOPS | Progress: (20/20) | 14.99 s Done.
-
    [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  4/25]  Current/Best:   15.51/  22.02 GFLOPS | Progress: (4/20) | 4.45 s
    [Task  4/25]  Current/Best:    5.99/  22.02 GFLOPS | Progress: (8/20) | 6.82 s
    [Task  4/25]  Current/Best:    8.69/  22.02 GFLOPS | Progress: (12/20) | 12.03 s
    [Task  4/25]  Current/Best:   17.95/  22.02 GFLOPS | Progress: (16/20) | 13.59 s
    [Task  4/25]  Current/Best:   21.86/  22.02 GFLOPS | Progress: (20/20) | 15.77 s Done.
-
    [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  5/25]  Current/Best:    8.32/  14.36 GFLOPS | Progress: (4/20) | 5.21 s
    [Task  5/25]  Current/Best:    8.21/  17.08 GFLOPS | Progress: (8/20) | 7.40 s
    [Task  5/25]  Current/Best:    2.94/  21.66 GFLOPS | Progress: (12/20) | 9.39 s
    [Task  5/25]  Current/Best:   10.85/  21.66 GFLOPS | Progress: (16/20) | 11.86 s
    [Task  5/25]  Current/Best:   14.99/  21.66 GFLOPS | Progress: (20/20) | 14.07 s Done.
-
    [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  6/25]  Current/Best:   10.88/  17.82 GFLOPS | Progress: (4/20) | 5.37 s
    [Task  6/25]  Current/Best:   15.59/  17.82 GFLOPS | Progress: (8/20) | 8.05 s
    [Task  6/25]  Current/Best:    7.12/  17.82 GFLOPS | Progress: (12/20) | 10.59 s
    [Task  6/25]  Current/Best:   10.42/  17.82 GFLOPS | Progress: (16/20) | 15.78 s
    [Task  6/25]  Current/Best:   16.01/  17.82 GFLOPS | Progress: (20/20) | 18.39 s Done.
-
    [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  7/25]  Current/Best:    6.36/  21.98 GFLOPS | Progress: (4/20) | 5.36 s
    [Task  7/25]  Current/Best:   12.28/  21.98 GFLOPS | Progress: (8/20) | 8.07 s
    [Task  7/25]  Current/Best:   20.61/  21.98 GFLOPS | Progress: (12/20) | 10.68 s
    [Task  7/25]  Current/Best:   19.94/  21.98 GFLOPS | Progress: (16/20) | 12.80 s
    [Task  7/25]  Current/Best:   17.60/  21.98 GFLOPS | Progress: (20/20) | 15.01 s Done.
-
    [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  8/25]  Current/Best:   13.06/  14.08 GFLOPS | Progress: (4/20) | 5.47 s
    [Task  8/25]  Current/Best:    6.34/  14.08 GFLOPS | Progress: (8/20) | 8.23 s
    [Task  8/25]  Current/Best:   12.58/  15.43 GFLOPS | Progress: (12/20) | 19.81 s
    [Task  8/25]  Current/Best:    5.11/  15.43 GFLOPS | Progress: (16/20) | 23.54 s
    [Task  8/25]  Current/Best:    6.65/  15.85 GFLOPS | Progress: (20/20) | 29.78 s Done.
-
    [Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  9/25]  Current/Best:    6.56/  16.05 GFLOPS | Progress: (4/20) | 5.61 s
    [Task  9/25]  Current/Best:    6.84/  18.34 GFLOPS | Progress: (8/20) | 10.90 s
    [Task  9/25]  Current/Best:    6.75/  19.13 GFLOPS | Progress: (12/20) | 12.98 s
    [Task  9/25]  Current/Best:    9.51/  19.13 GFLOPS | Progress: (16/20) | 17.20 s
    [Task  9/25]  Current/Best:    6.31/  19.13 GFLOPS | Progress: (20/20) | 19.72 s Done.
-
    [Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 10/25]  Current/Best:   20.19/  20.19 GFLOPS | Progress: (4/20) | 4.86 s
    [Task 10/25]  Current/Best:   20.50/  20.50 GFLOPS | Progress: (8/20) | 6.94 s
    [Task 10/25]  Current/Best:    4.48/  21.37 GFLOPS | Progress: (12/20) | 8.88 s
    [Task 10/25]  Current/Best:    8.52/  21.37 GFLOPS | Progress: (16/20) | 11.31 s
    [Task 10/25]  Current/Best:    9.55/  21.37 GFLOPS | Progress: (20/20) | 13.22 s Done.
-
    [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 11/25]  Current/Best:   16.07/  16.07 GFLOPS | Progress: (4/20) | 5.65 s
    [Task 11/25]  Current/Best:   21.14/  21.14 GFLOPS | Progress: (8/20) | 7.83 s
    [Task 11/25]  Current/Best:   20.71/  21.14 GFLOPS | Progress: (12/20) | 10.19 s
    [Task 11/25]  Current/Best:    7.89/  21.14 GFLOPS | Progress: (16/20) | 14.37 s
    [Task 11/25]  Current/Best:   15.67/  22.38 GFLOPS | Progress: (20/20) | 16.70 s Done.
-
    [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 12/25]  Current/Best:   13.95/  18.86 GFLOPS | Progress: (4/20) | 6.26 s
    [Task 12/25]  Current/Best:    5.66/  18.86 GFLOPS | Progress: (8/20) | 11.56 s
    [Task 12/25]  Current/Best:    9.46/  18.86 GFLOPS | Progress: (12/20) | 16.60 s
    [Task 12/25]  Current/Best:   15.71/  21.18 GFLOPS | Progress: (16/20) | 19.18 s
    [Task 12/25]  Current/Best:   11.95/  21.18 GFLOPS | Progress: (20/20) | 22.45 s Done.
-
    [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 13/25]  Current/Best:    5.19/  19.72 GFLOPS | Progress: (4/20) | 5.31 s
    [Task 13/25]  Current/Best:   21.53/  21.53 GFLOPS | Progress: (8/20) | 9.84 s
    [Task 13/25]  Current/Best:   18.64/  21.53 GFLOPS | Progress: (12/20) | 12.12 s
    [Task 13/25]  Current/Best:    8.49/  21.53 GFLOPS | Progress: (16/20) | 15.98 s
    [Task 13/25]  Current/Best:   18.49/  21.53 GFLOPS | Progress: (20/20) | 18.69 s Done.
-
    [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 14/25]  Current/Best:   21.21/  21.21 GFLOPS | Progress: (4/20) | 4.92 s
    [Task 14/25]  Current/Best:   14.63/  21.21 GFLOPS | Progress: (8/20) | 16.43 s
    [Task 14/25]  Current/Best:   11.09/  21.21 GFLOPS | Progress: (12/20) | 29.28 s
    [Task 14/25]  Current/Best:   17.13/  21.21 GFLOPS | Progress: (16/20) | 32.47 s
    [Task 14/25]  Current/Best:   14.55/  21.21 GFLOPS | Progress: (20/20) | 35.22 s
    [Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 15/25]  Current/Best:   10.85/  16.46 GFLOPS | Progress: (4/20) | 5.82 s
    [Task 15/25]  Current/Best:   20.40/  20.40 GFLOPS | Progress: (8/20) | 9.98 s
    [Task 15/25]  Current/Best:    7.83/  20.40 GFLOPS | Progress: (12/20) | 17.19 s
    [Task 15/25]  Current/Best:   18.46/  20.40 GFLOPS | Progress: (16/20) | 19.59 s
   [Task 15/25]  Current/Best:    7.88/  20.40 GFLOPS | Progress: (20/20) | 24.06 s Done.
-
    [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 16/25]  Current/Best:   14.35/  18.51 GFLOPS | Progress: (4/20) | 4.86 s
    [Task 16/25]  Current/Best:    5.74/  18.51 GFLOPS | Progress: (8/20) | 6.73 s
    [Task 16/25]  Current/Best:   16.94/  18.51 GFLOPS | Progress: (12/20) | 10.68 s
    [Task 16/25]  Current/Best:   15.94/  18.71 GFLOPS | Progress: (16/20) | 12.97 s
    [Task 16/25]  Current/Best:    1.57/  19.09 GFLOPS | Progress: (20/20) | 15.20 s Done.
-
    [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 17/25]  Current/Best:   16.17/  16.17 GFLOPS | Progress: (4/20) | 6.02 s
    [Task 17/25]  Current/Best:    9.72/  16.17 GFLOPS | Progress: (8/20) | 8.53 s
    [Task 17/25]  Current/Best:    9.60/  22.58 GFLOPS | Progress: (12/20) | 12.00 s
    [Task 17/25]  Current/Best:   12.79/  22.58 GFLOPS | Progress: (16/20) | 14.86 s
    [Task 17/25]  Current/Best:    7.64/  22.58 GFLOPS | Progress: (20/20) | 17.77 s Done.
-
    [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 18/25]  Current/Best:   14.86/  14.86 GFLOPS | Progress: (4/20) | 6.81 s
    [Task 18/25]  Current/Best:    5.71/  17.50 GFLOPS | Progress: (8/20) | 10.74 s
    [Task 18/25]  Current/Best:    4.89/  20.41 GFLOPS | Progress: (12/20) | 12.95 s
    [Task 18/25]  Current/Best:   18.21/  20.65 GFLOPS | Progress: (16/20) | 18.81 s
    [Task 18/25]  Current/Best:    9.95/  20.65 GFLOPS | Progress: (20/20) | 24.30 s Done.
-
    [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 19/25]  Current/Best:   13.92/  18.55 GFLOPS | Progress: (4/20) | 7.55 s
    [Task 19/25]  Current/Best:   11.42/  20.97 GFLOPS | Progress: (8/20) | 12.19 s
    [Task 19/25]  Current/Best:    9.98/  20.97 GFLOPS | Progress: (12/20) | 17.32 s
    [Task 19/25]  Current/Best:   21.01/  21.01 GFLOPS | Progress: (16/20) | 19.89 s
    [Task 19/25]  Current/Best:   21.46/  21.61 GFLOPS | Progress: (20/20) | 23.63 s Done.
-
    [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 20/25]  Current/Best:   11.10/  16.33 GFLOPS | Progress: (4/20) | 5.37 s
    [Task 20/25]  Current/Best:   16.48/  22.10 GFLOPS | Progress: (8/20) | 13.41 s
    [Task 20/25]  Current/Best:   14.78/  22.10 GFLOPS | Progress: (12/20) | 19.89 s
    [Task 20/25]  Current/Best:    8.61/  22.10 GFLOPS | Progress: (16/20) | 23.75 s
    [Task 20/25]  Current/Best:    6.51/  22.10 GFLOPS | Progress: (20/20) | 26.00 s Done.
-
    [Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 21/25]  Current/Best:    5.04/  20.62 GFLOPS | Progress: (4/20) | 4.57 s
    [Task 21/25]  Current/Best:   19.70/  20.62 GFLOPS | Progress: (8/20) | 16.20 s
    [Task 21/25]  Current/Best:    7.00/  20.62 GFLOPS | Progress: (12/20) | 20.77 s
    [Task 21/25]  Current/Best:    8.47/  20.62 GFLOPS | Progress: (16/20) | 26.80 s
    [Task 21/25]  Current/Best:   21.04/  21.04 GFLOPS | Progress: (20/20) | 30.10 s Done.
-
    [Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 22/25]  Current/Best:   10.39/  21.83 GFLOPS | Progress: (4/20) | 4.99 s
    [Task 22/25]  Current/Best:   14.05/  21.83 GFLOPS | Progress: (8/20) | 6.75 s
    [Task 22/25]  Current/Best:   10.89/  21.83 GFLOPS | Progress: (12/20) | 10.41 s
    [Task 22/25]  Current/Best:   13.56/  21.83 GFLOPS | Progress: (16/20) | 12.35 s
    [Task 22/25]  Current/Best:    5.24/  21.83 GFLOPS | Progress: (20/20) | 15.47 s Done.
-
    [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 23/25]  Current/Best:   21.86/  22.39 GFLOPS | Progress: (4/20) | 5.91 s
    [Task 23/25]  Current/Best:    9.38/  22.39 GFLOPS | Progress: (8/20) | 12.21 s
    [Task 23/25]  Current/Best:    3.09/  22.39 GFLOPS | Progress: (12/20) | 15.45 s
    [Task 23/25]  Current/Best:   13.88/  22.39 GFLOPS | Progress: (16/20) | 19.75 s
    [Task 23/25]  Current/Best:   19.23/  22.39 GFLOPS | Progress: (20/20) | 22.00 s Done.
-
    [Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 24/25]  Current/Best:    9.62/   9.62 GFLOPS | Progress: (4/20) | 13.84 s
    [Task 24/25]  Current/Best:    6.46/   9.62 GFLOPS | Progress: (8/20) | 26.08 s
    [Task 24/25]  Current/Best:    7.90/   9.62 GFLOPS | Progress: (12/20) | 30.96 s
    [Task 24/25]  Current/Best:    5.84/  10.12 GFLOPS | Progress: (16/20) | 41.66 s
    [Task 24/25]  Current/Best:    8.43/  10.12 GFLOPS | Progress: (20/20) | 54.39 s
    [Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
-
    [Task 25/25]  Current/Best:    2.93/   9.25 GFLOPS | Progress: (4/20) | 8.12 s
    [Task 25/25]  Current/Best:    5.49/   9.31 GFLOPS | Progress: (8/20) | 16.67 s
    [Task 25/25]  Current/Best:    5.28/   9.31 GFLOPS | Progress: (12/20) | 27.68 s
    [Task 25/25]  Current/Best:    7.99/   9.31 GFLOPS | Progress: (16/20) | 38.68 s
    [Task 25/25]  Current/Best:    9.03/   9.31 GFLOPS | Progress: (20/20) | 41.47 s
+
    [Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  1/25]  Current/Best:   20.79/  20.79 GFLOPS | Progress: (4/20) | 9.59 s
    [Task  1/25]  Current/Best:   21.97/  23.53 GFLOPS | Progress: (8/20) | 13.02 s
    [Task  1/25]  Current/Best:   21.77/  23.53 GFLOPS | Progress: (12/20) | 16.09 s
    [Task  1/25]  Current/Best:   22.09/  23.53 GFLOPS | Progress: (16/20) | 19.68 s
    [Task  1/25]  Current/Best:    3.45/  23.53 GFLOPS | Progress: (20/20) | 22.50 s Done.
+
    [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  2/25]  Current/Best:   13.58/  19.49 GFLOPS | Progress: (4/20) | 4.32 s
    [Task  2/25]  Current/Best:   14.88/  20.13 GFLOPS | Progress: (8/20) | 5.87 s
    [Task  2/25]  Current/Best:   10.68/  20.13 GFLOPS | Progress: (12/20) | 7.73 s
    [Task  2/25]  Current/Best:   14.22/  20.13 GFLOPS | Progress: (16/20) | 9.24 s
    [Task  2/25]  Current/Best:   13.08/  20.13 GFLOPS | Progress: (20/20) | 10.84 s Done.
+
    [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  3/25]  Current/Best:    6.13/  20.39 GFLOPS | Progress: (4/20) | 6.56 s
    [Task  3/25]  Current/Best:    8.90/  20.39 GFLOPS | Progress: (8/20) | 8.75 s
    [Task  3/25]  Current/Best:   12.58/  23.09 GFLOPS | Progress: (12/20) | 12.52 s
    [Task  3/25]  Current/Best:   14.19/  23.09 GFLOPS | Progress: (16/20) | 15.00 s
    [Task  3/25]  Current/Best:   13.70/  23.09 GFLOPS | Progress: (20/20) | 17.49 s Done.
+
    [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  4/25]  Current/Best:   10.32/  16.15 GFLOPS | Progress: (4/20) | 6.30 s
    [Task  4/25]  Current/Best:   14.38/  16.15 GFLOPS | Progress: (8/20) | 8.49 s
    [Task  4/25]  Current/Best:    6.41/  16.15 GFLOPS | Progress: (12/20) | 10.38 s
    [Task  4/25]  Current/Best:   11.79/  16.52 GFLOPS | Progress: (16/20) | 12.98 s
    [Task  4/25]  Current/Best:   14.47/  20.39 GFLOPS | Progress: (20/20) | 15.68 s Done.
+
    [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  5/25]  Current/Best:    5.84/  23.28 GFLOPS | Progress: (4/20) | 5.51 s
    [Task  5/25]  Current/Best:   13.45/  23.28 GFLOPS | Progress: (8/20) | 8.10 s
    [Task  5/25]  Current/Best:   12.77/  23.28 GFLOPS | Progress: (12/20) | 9.91 s
    [Task  5/25]  Current/Best:    3.52/  23.28 GFLOPS | Progress: (16/20) | 12.06 s
    [Task  5/25]  Current/Best:    5.48/  23.28 GFLOPS | Progress: (20/20) | 14.64 s Done.
+
    [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  6/25]  Current/Best:   21.76/  21.76 GFLOPS | Progress: (4/20) | 6.69 s
    [Task  6/25]  Current/Best:   17.01/  21.76 GFLOPS | Progress: (8/20) | 9.78 s
    [Task  6/25]  Current/Best:   15.78/  21.76 GFLOPS | Progress: (12/20) | 12.46 s
    [Task  6/25]  Current/Best:    6.58/  21.76 GFLOPS | Progress: (16/20) | 15.66 s
    [Task  6/25]  Current/Best:   16.28/  21.76 GFLOPS | Progress: (20/20) | 19.17 s Done.
+
    [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  7/25]  Current/Best:    1.58/  14.92 GFLOPS | Progress: (4/20) | 6.88 s
    [Task  7/25]  Current/Best:   16.37/  18.75 GFLOPS | Progress: (8/20) | 9.16 s
    [Task  7/25]  Current/Best:   13.98/  18.75 GFLOPS | Progress: (12/20) | 12.16 s
    [Task  7/25]  Current/Best:   12.25/  18.75 GFLOPS | Progress: (16/20) | 15.57 s
    [Task  7/25]  Current/Best:    6.99/  18.75 GFLOPS | Progress: (20/20) | 18.77 s Done.
+
    [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  8/25]  Current/Best:   16.03/  16.03 GFLOPS | Progress: (4/20) | 6.60 s
    [Task  8/25]  Current/Best:   10.15/  16.27 GFLOPS | Progress: (8/20) | 11.87 s
    [Task  8/25]  Current/Best:   16.24/  22.06 GFLOPS | Progress: (12/20) | 14.46 s
    [Task  8/25]  Current/Best:   20.39/  22.06 GFLOPS | Progress: (16/20) | 16.85 s
    [Task  8/25]  Current/Best:   10.87/  22.06 GFLOPS | Progress: (20/20) | 22.58 s Done.
+
    [Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  9/25]  Current/Best:   18.09/  18.09 GFLOPS | Progress: (4/20) | 7.59 s
    [Task  9/25]  Current/Best:   11.39/  18.09 GFLOPS | Progress: (8/20) | 9.78 s
    [Task  9/25]  Current/Best:   12.60/  22.60 GFLOPS | Progress: (12/20) | 11.85 s
    [Task  9/25]  Current/Best:   10.65/  22.60 GFLOPS | Progress: (16/20) | 15.09 s
    [Task  9/25]  Current/Best:   12.91/  22.60 GFLOPS | Progress: (20/20) | 18.69 s Done.
+
    [Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 10/25]  Current/Best:   18.24/  18.24 GFLOPS | Progress: (4/20) | 4.71 s
    [Task 10/25]  Current/Best:   16.31/  18.24 GFLOPS | Progress: (8/20) | 7.31 s
    [Task 10/25]  Current/Best:   10.53/  18.24 GFLOPS | Progress: (12/20) | 9.61 s
    [Task 10/25]  Current/Best:    6.37/  18.86 GFLOPS | Progress: (16/20) | 11.44 s
    [Task 10/25]  Current/Best:    9.64/  18.86 GFLOPS | Progress: (20/20) | 15.73 s Done.
+
    [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 11/25]  Current/Best:    6.18/  21.92 GFLOPS | Progress: (4/20) | 5.35 s
    [Task 11/25]  Current/Best:   12.93/  21.92 GFLOPS | Progress: (8/20) | 7.47 s
    [Task 11/25]  Current/Best:   19.37/  21.92 GFLOPS | Progress: (12/20) | 9.55 s
    [Task 11/25]  Current/Best:    1.58/  21.92 GFLOPS | Progress: (16/20) | 13.93 s
    [Task 11/25]  Current/Best:    7.23/  21.92 GFLOPS | Progress: (20/20) | 16.53 s Done.
+
    [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 12/25]  Current/Best:   17.20/  17.20 GFLOPS | Progress: (4/20) | 6.26 s
    [Task 12/25]  Current/Best:    1.59/  17.41 GFLOPS | Progress: (8/20) | 9.75 s
    [Task 12/25]  Current/Best:   14.71/  17.41 GFLOPS | Progress: (12/20) | 12.65 s
    [Task 12/25]  Current/Best:    5.90/  20.72 GFLOPS | Progress: (16/20) | 14.78 s
    [Task 12/25]  Current/Best:   21.80/  21.80 GFLOPS | Progress: (20/20) | 17.35 s Done.
+
    [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 13/25]  Current/Best:   19.66/  19.66 GFLOPS | Progress: (4/20) | 6.29 s
    [Task 13/25]  Current/Best:    9.46/  19.66 GFLOPS | Progress: (8/20) | 9.57 s
    [Task 13/25]  Current/Best:   10.87/  20.73 GFLOPS | Progress: (12/20) | 12.13 s
    [Task 13/25]  Current/Best:   12.05/  20.73 GFLOPS | Progress: (16/20) | 15.38 s
    [Task 13/25]  Current/Best:   10.31/  20.73 GFLOPS | Progress: (20/20) | 18.01 s Done.
+
    [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 14/25]  Current/Best:   14.75/  14.75 GFLOPS | Progress: (4/20) | 13.81 s
    [Task 14/25]  Current/Best:    7.20/  14.75 GFLOPS | Progress: (8/20) | 17.98 s
    [Task 14/25]  Current/Best:   10.90/  19.35 GFLOPS | Progress: (12/20) | 20.58 s
    [Task 14/25]  Current/Best:   18.87/  19.35 GFLOPS | Progress: (16/20) | 24.53 s
    [Task 14/25]  Current/Best:    7.03/  19.35 GFLOPS | Progress: (20/20) | 28.57 s Done.
+
    [Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 15/25]  Current/Best:    8.89/   8.89 GFLOPS | Progress: (4/20) | 5.30 s
    [Task 15/25]  Current/Best:   19.40/  20.06 GFLOPS | Progress: (8/20) | 6.84 s
    [Task 15/25]  Current/Best:   18.18/  20.06 GFLOPS | Progress: (12/20) | 13.26 s
    [Task 15/25]  Current/Best:    7.21/  20.06 GFLOPS | Progress: (16/20) | 16.38 s
    [Task 15/25]  Current/Best:   12.79/  20.06 GFLOPS | Progress: (20/20) | 24.28 s Done.
+
    [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 16/25]  Current/Best:   11.22/  17.89 GFLOPS | Progress: (4/20) | 4.96 s
    [Task 16/25]  Current/Best:   17.89/  18.25 GFLOPS | Progress: (8/20) | 6.58 s
    [Task 16/25]  Current/Best:    6.77/  18.25 GFLOPS | Progress: (12/20) | 8.95 s
    [Task 16/25]  Current/Best:   18.98/  18.98 GFLOPS | Progress: (16/20) | 12.16 s
    [Task 16/25]  Current/Best:    9.42/  18.98 GFLOPS | Progress: (20/20) | 13.86 s Done.
+
    [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 17/25]  Current/Best:    4.79/  10.81 GFLOPS | Progress: (4/20) | 6.42 s
    [Task 17/25]  Current/Best:   18.77/  19.34 GFLOPS | Progress: (8/20) | 9.34 s
    [Task 17/25]  Current/Best:   20.64/  20.64 GFLOPS | Progress: (12/20) | 12.56 s
    [Task 17/25]  Current/Best:   20.18/  20.64 GFLOPS | Progress: (16/20) | 14.95 s
    [Task 17/25]  Current/Best:    9.58/  22.41 GFLOPS | Progress: (20/20) | 16.88 s Done.
+
    [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 18/25]  Current/Best:    5.17/  13.13 GFLOPS | Progress: (4/20) | 6.72 s
    [Task 18/25]  Current/Best:   17.23/  17.23 GFLOPS | Progress: (8/20) | 10.45 s
    [Task 18/25]  Current/Best:   16.31/  17.23 GFLOPS | Progress: (12/20) | 13.96 s
    [Task 18/25]  Current/Best:    6.15/  17.28 GFLOPS | Progress: (16/20) | 16.37 s
    [Task 18/25]  Current/Best:    8.42/  17.28 GFLOPS | Progress: (20/20) | 21.31 s Done.
+
    [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 19/25]  Current/Best:   11.55/  20.51 GFLOPS | Progress: (4/20) | 6.91 s
    [Task 19/25]  Current/Best:   15.21/  20.51 GFLOPS | Progress: (8/20) | 10.67 s
    [Task 19/25]  Current/Best:   12.65/  20.51 GFLOPS | Progress: (12/20) | 14.60 s
    [Task 19/25]  Current/Best:   21.09/  21.09 GFLOPS | Progress: (16/20) | 18.06 s
    [Task 19/25]  Current/Best:    1.55/  21.09 GFLOPS | Progress: (20/20) | 23.16 s Done.
+
    [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 20/25]  Current/Best:    9.33/  15.07 GFLOPS | Progress: (4/20) | 10.06 s
    [Task 20/25]  Current/Best:    6.18/  18.09 GFLOPS | Progress: (8/20) | 21.75 s
    [Task 20/25]  Current/Best:   16.95/  18.09 GFLOPS | Progress: (12/20) | 26.67 s
    [Task 20/25]  Current/Best:    9.44/  18.09 GFLOPS | Progress: (16/20) | 29.46 s
    [Task 20/25]  Current/Best:    6.38/  18.09 GFLOPS | Progress: (20/20) | 37.42 s
    [Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 21/25]  Current/Best:    2.70/  12.87 GFLOPS | Progress: (4/20) | 14.42 s
    [Task 21/25]  Current/Best:    5.37/  13.38 GFLOPS | Progress: (8/20) | 22.16 s
    [Task 21/25]  Current/Best:   16.50/  20.55 GFLOPS | Progress: (12/20) | 24.47 s
    [Task 21/25]  Current/Best:   20.18/  20.55 GFLOPS | Progress: (16/20) | 27.49 s
   [Task 21/25]  Current/Best:   22.51/  22.51 GFLOPS | Progress: (20/20) | 29.66 s
    [Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 22/25]  Current/Best:   10.69/  19.76 GFLOPS | Progress: (4/20) | 10.14 s
    [Task 22/25]  Current/Best:   13.98/  19.76 GFLOPS | Progress: (8/20) | 17.19 s
    [Task 22/25]  Current/Best:   20.69/  20.69 GFLOPS | Progress: (12/20) | 18.90 s
    [Task 22/25]  Current/Best:   14.51/  20.69 GFLOPS | Progress: (16/20) | 21.64 s
    [Task 22/25]  Current/Best:   10.56/  21.85 GFLOPS | Progress: (20/20) | 23.93 s Done.
+
    [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 23/25]  Current/Best:    8.98/  19.94 GFLOPS | Progress: (4/20) | 6.08 s
    [Task 23/25]  Current/Best:   17.71/  20.43 GFLOPS | Progress: (8/20) | 8.52 s
    [Task 23/25]  Current/Best:    7.87/  20.43 GFLOPS | Progress: (12/20) | 11.53 s
    [Task 23/25]  Current/Best:    1.55/  22.04 GFLOPS | Progress: (16/20) | 15.97 s
    [Task 23/25]  Current/Best:   17.57/  22.04 GFLOPS | Progress: (20/20) | 18.84 s Done.
+
    [Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 24/25]  Current/Best:    3.74/   7.43 GFLOPS | Progress: (4/20) | 7.01 s
    [Task 24/25]  Current/Best:    2.38/   7.43 GFLOPS | Progress: (8/20) | 11.36 s
    [Task 24/25]  Current/Best:    9.45/   9.45 GFLOPS | Progress: (12/20) | 13.40 s
    [Task 24/25]  Current/Best:   10.08/  10.08 GFLOPS | Progress: (16/20) | 24.13 s
    [Task 24/25]  Current/Best:    7.81/  10.08 GFLOPS | Progress: (20/20) | 36.17 s
    [Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
+     Done.
+     Done.
+
    [Task 25/25]  Current/Best:    3.03/   5.61 GFLOPS | Progress: (4/20) | 5.60 s
    [Task 25/25]  Current/Best:    5.79/   6.28 GFLOPS | Progress: (8/20) | 10.90 s
    [Task 25/25]  Current/Best:    9.61/   9.61 GFLOPS | Progress: (12/20) | 13.13 s
    [Task 25/25]  Current/Best:    5.90/   9.61 GFLOPS | Progress: (16/20) | 22.46 s
    [Task 25/25]  Current/Best:    5.57/   9.61 GFLOPS | Progress: (20/20) | 24.02 s Done.
+
 
 
 
@@ -669,13 +671,6 @@ model using optimized operators to speed up our computations.
 
 
 
-.. rst-class:: sphx-glr-script-out
-
- .. code-block:: none
-
-     Done.
-     Done.
-
 
 
 
@@ -708,8 +703,8 @@ Verify that the optimized model runs and produces the same results:
 
  .. code-block:: none
 
-    class='n02123045 tabby, tabby cat' with probability=0.621104
-    class='n02123159 tiger cat' with probability=0.356378
+    class='n02123045 tabby, tabby cat' with probability=0.621103
+    class='n02123159 tiger cat' with probability=0.356379
     class='n02124075 Egyptian cat' with probability=0.019712
     class='n02129604 tiger, Panthera tigris' with probability=0.001215
     class='n04040759 radiator' with probability=0.000262
@@ -766,8 +761,8 @@ improvement in comparing the optimized model to the unoptimized model.
 
  .. code-block:: none
 
-    optimized: {'mean': 417.05924828005664, 'median': 417.1992093499284, 'std': 0.865296941702935}
-    unoptimized: {'mean': 496.5515389698703, 'median': 496.1619072502799, 'std': 2.0135947670101677}
+    optimized: {'mean': 418.33957764999013, 'median': 417.50189224999303, 'std': 3.236084066776782}
+    unoptimized: {'mean': 492.5452644500001, 'median': 492.4691212000198, 'std': 2.1286296796252318}
 
 
 
@@ -790,7 +785,7 @@ profiling/benchmarking.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 13 minutes  36.950 seconds)
+   **Total running time of the script:** ( 12 minutes  55.534 seconds)
 
 
 .. _sphx_glr_download_tutorial_autotvm_relay_x86.py:
diff --git a/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt b/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
index 1b93802e8c..9810cf3f61 100644
--- a/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
+++ b/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
@@ -274,7 +274,7 @@ device and returns the measured cost. Network overhead is excluded.
 
  .. code-block:: none
 
-    1.183e-07 secs/op
+    1.153e-07 secs/op
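
This figure is the mean per-call cost of the kernel measured on the remote device over RPC; the change is ordinary run-to-run noise. A minimal sketch of how such a number is typically obtained, assuming the kernel was already cross-compiled and uploaded as ``lib.tar`` and with a placeholder RPC address:

.. code-block:: python

    import numpy as np
    import tvm
    from tvm import rpc

    remote = rpc.connect("127.0.0.1", 9090)   # placeholder host/port
    func = remote.load_module("lib.tar")      # module previously sent with remote.upload(...)
    dev = remote.cpu()

    a = tvm.nd.array(np.random.uniform(size=1024).astype("float32"), dev)
    b = tvm.nd.array(np.zeros(1024, dtype="float32"), dev)

    # time_evaluator runs the kernel repeatedly on the remote device and reports
    # the mean cost per call; network overhead is excluded from the measurement.
    time_f = func.time_evaluator(func.entry_name, dev, number=10)
    print("%g secs/op" % time_f(a, b).mean)
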
 
 
 
diff --git a/docs/_sources/tutorial/intro_topi.rst.txt b/docs/_sources/tutorial/intro_topi.rst.txt
index e8a4aa09f5..a21a47636c 100644
--- a/docs/_sources/tutorial/intro_topi.rst.txt
+++ b/docs/_sources/tutorial/intro_topi.rst.txt
@@ -270,7 +270,7 @@ As you can see, scheduled stages of computation have been accumulated and we can
 
  .. code-block:: none
 
-    [stage(a, placeholder(a, 0x109e8ba0)), stage(b, placeholder(b, 0x26601e90)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T.Range(0, 10), "DataPar", ""), T.iter_var(ax2, T.Range(0, 10), "DataPar", "")], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[a[ax0, ax1, ax2] * b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T [...]
+    [stage(a, placeholder(a, 0x2357aef0)), stage(b, placeholder(b, 0x1cb2b700)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T.Range(0, 10), "DataPar", ""), T.iter_var(ax2, T.Range(0, 10), "DataPar", "")], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[a[ax0, ax1, ax2] * b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T [...]
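
Only the placeholder addresses differ in the ``stage(...)`` listing above; it is the repr of the stages accumulated by a schedule built over TOPI broadcast operators. A small sketch with shapes matching the listing (the exact operator set in the tutorial may differ):

.. code-block:: python

    import tvm
    from tvm import te, topi

    a = te.placeholder((100, 10, 10), name="a")
    b = te.placeholder((10, 10), name="b")

    # TOPI provides numpy-style broadcast operators.
    c = topi.add(a, b)
    d = topi.multiply(a, b)

    # Scheduling the resulting ops accumulates one stage per tensor; printing the
    # schedule's stages gives a listing like the one in this hunk.
    s = te.create_schedule([c.op, d.op])
    print(s.stages)
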
 
 
 
diff --git a/docs/_sources/tutorial/sg_execution_times.rst.txt b/docs/_sources/tutorial/sg_execution_times.rst.txt
index 4d90d41b86..fbb7f76af4 100644
--- a/docs/_sources/tutorial/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorial/sg_execution_times.rst.txt
@@ -5,31 +5,31 @@
 
 Computation times
 =================
-**17:23.432** total execution time for **tutorial** files:
+**16:42.819** total execution time for **tutorial** files:
 
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_autotvm_relay_x86.py` (``autotvm_relay_x86.py``)                 | 13:36.950 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_autotvm_relay_x86.py` (``autotvm_relay_x86.py``)                 | 12:55.534 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_auto_scheduler_matmul_x86.py` (``auto_scheduler_matmul_x86.py``) | 01:47.147 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_auto_scheduler_matmul_x86.py` (``auto_scheduler_matmul_x86.py``) | 01:33.068 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)     | 00:58.020 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)     | 01:01.924 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_relay_quick_start.py` (``relay_quick_start.py``)                 | 00:40.685 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_relay_quick_start.py` (``relay_quick_start.py``)                 | 00:42.006 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_autotvm_matmul_x86.py` (``autotvm_matmul_x86.py``)               | 00:18.589 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_autotvm_matmul_x86.py` (``autotvm_matmul_x86.py``)               | 00:28.269 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_intro_topi.py` (``intro_topi.py``)                               | 00:00.974 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_intro_topi.py` (``intro_topi.py``)                               | 00:00.980 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_tutorial_tensor_ir_blitz_course.py` (``tensor_ir_blitz_course.py``)       | 00:00.866 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``) | 00:00.204 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``) | 00:00.173 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_tutorial_uma.py` (``uma.py``)                                             | 00:00.000 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_tvmc_python.py` (``tvmc_python.py``)                             | 00:00.000 | 0.0 MB |
-+------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_tutorial_tvmc_command_line_driver.py` (``tvmc_command_line_driver.py``)   | 00:00.000 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
+| :ref:`sphx_glr_tutorial_tvmc_python.py` (``tvmc_python.py``)                             | 00:00.000 | 0.0 MB |
++------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_tutorial_introduction.py` (``introduction.py``)                           | 00:00.000 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_tutorial_install.py` (``install.py``)                                     | 00:00.000 | 0.0 MB |
diff --git a/docs/_sources/tutorial/tensor_expr_get_started.rst.txt b/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
index 85e61e2aaa..847b673b8c 100644
--- a/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
+++ b/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
@@ -286,7 +286,7 @@ helper function to run a profile of the TVM generated code.
  .. code-block:: none
 
     Numpy running time: 0.000007
-    naive: 0.000008
+    naive: 0.000007
 
 
 
@@ -389,7 +389,7 @@ compile and run this new schedule with the parallel operation applied:
 
  .. code-block:: none
 
-    parallel: 0.000007
+    parallel: 0.000008
 
 
 
@@ -498,10 +498,10 @@ We can now compare the different schedules
  .. code-block:: none
 
                 Operator                  Timing             Performance
-                   numpy    6.792819986003451e-06                    1.0
-                   naive              8.2517e-06        1.21476794865793
-                parallel              7.3922e-06      1.0882372880823528
-                  vector    3.9302900000000006e-05     5.785947527092328
+                   numpy    7.468819999303377e-06                    1.0
+                   naive              6.8698e-06      0.9197972371326062
+                parallel    8.180999999999998e-06     1.0953537507615723
+                  vector             3.92637e-05      5.2570151648670285
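
The ``numpy``/``naive``/``parallel``/``vector`` rows compare schedules of the same element-wise add; the ratios move slightly between CI runs, which is all this hunk records. A condensed sketch of the schedules being compared (the split factor is an illustrative assumption):

.. code-block:: python

    import tvm
    from tvm import te

    n = te.var("n")
    A = te.placeholder((n,), name="A")
    B = te.placeholder((n,), name="B")
    C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")

    # "naive": the default schedule, one sequential loop.
    s_naive = te.create_schedule(C.op)

    # "parallel": spread the single loop across threads.
    s_parallel = te.create_schedule(C.op)
    s_parallel[C].parallel(C.op.axis[0])

    # "vector": split the loop and map the inner part onto SIMD lanes.
    s_vector = te.create_schedule(C.op)
    outer, inner = s_vector[C].split(C.op.axis[0], factor=4)
    s_vector[C].parallel(outer)
    s_vector[C].vectorize(inner)
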
 
 
 
@@ -922,7 +922,7 @@ matrix multiplication.
 
  .. code-block:: none
 
-    Numpy running time: 0.019301
+    Numpy running time: 0.018051
 
 
 
@@ -980,7 +980,7 @@ optimizations.
 
  .. code-block:: none
 
-    none: 3.233708
+    none: 3.534415
 
 
 
@@ -1080,7 +1080,7 @@ schedule.
 
  .. code-block:: none
 
-    blocking: 0.297653
+    blocking: 0.306064
 
 
 
@@ -1164,7 +1164,7 @@ already cache friendly from our previous optimizations.
 
  .. code-block:: none
 
-    vectorization: 0.295293
+    vectorization: 0.283986
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1230,7 +1230,7 @@ more cache friendly.
 
  .. code-block:: none
 
-    loop permutation: 0.114099
+    loop permutation: 0.116977
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1321,7 +1321,7 @@ optimized schedule.
 
  .. code-block:: none
 
-    array packing: 0.105082
+    array packing: 0.108061
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1404,7 +1404,7 @@ to `C` when all the block results are ready.
 
  .. code-block:: none
 
-    block caching: 0.111125
+    block caching: 0.111314
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1478,7 +1478,7 @@ of thread-level parallelization.
 
  .. code-block:: none
 
-    parallelization: 0.131606
+    parallelization: 0.133272
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1548,13 +1548,13 @@ working, we can compare the results.
  .. code-block:: none
 
                 Operator                  Timing             Performance
-                    none            3.2337080406                     1.0
-                blocking            0.2976526778      0.0920468620119372
-           vectorization             0.295293496      0.0913173027040529
-        loop permutation     0.11409895959999998    0.035284248969746024
-           array packing     0.10508163239999999     0.03249570804805946
-           block caching     0.11112465340000002     0.03436446704674716
-         parallelization            0.1316057406     0.04069808991648521
+                    none      3.5344147400000003                     1.0
+                blocking            0.3060639061     0.08659535697273603
+           vectorization             0.283986188     0.08034885798376903
+        loop permutation            0.1169772386     0.03309663613501114
+           array packing            0.1080614719    0.030574077987237002
+           block caching            0.1113136881    0.031494234912567164
+         parallelization            0.1332718711      0.0377069135638564
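
The rows above are successive schedules of the same 1024x1024 matmul; this hunk only updates the measured times. As a reminder of what each row refers to, here is a condensed sketch of the schedule primitives involved (block size, split factor, and loop order are illustrative assumptions rather than the tutorial's exact final schedule):

.. code-block:: python

    import tvm
    from tvm import te

    M = N = K = 1024
    bn = 32  # assumed block size

    A = te.placeholder((M, K), name="A")
    B = te.placeholder((K, N), name="B")
    k = te.reduce_axis((0, K), name="k")
    C = te.compute((M, N), lambda x, y: te.sum(A[x, k] * B[k, y], axis=k), name="C")

    s = te.create_schedule(C.op)

    # blocking: tile the output into bn x bn blocks so each block stays in cache.
    xo, yo, xi, yi = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
    (kaxis,) = s[C].op.reduce_axis
    ko, ki = s[C].split(kaxis, factor=4)

    # loop permutation: move the reduction loops outside the inner spatial loops.
    s[C].reorder(xo, yo, ko, xi, ki, yi)

    # vectorization: map the innermost loop onto SIMD lanes.
    s[C].vectorize(yi)

    # parallelization: spread the outermost blocks across threads.
    s[C].parallel(xo)

The "array packing" and "block caching" rows build on this by laying ``B`` out in a packed block-major buffer and staging each block's result through a cache stage (``s.cache_write``) before it is written back to ``C``.
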
 
 
 
@@ -1594,6 +1594,11 @@ operations with tunable parameters that allows you to automatically optimize
 the computation for specific platforms.
 
 
+.. rst-class:: sphx-glr-timing
+
+   **Total running time of the script:** ( 1 minutes  1.924 seconds)
+
+
 .. _sphx_glr_download_tutorial_tensor_expr_get_started.py:
 
 .. only:: html
diff --git a/docs/commit_hash b/docs/commit_hash
index 79ce801ae9..d2bf494ca6 100644
--- a/docs/commit_hash
+++ b/docs/commit_hash
@@ -1 +1 @@
-d776bccc1edb90b06d06b4ea4a94157115b1d972
+35ac2262ebf84f150ceada047325b768ad772f61
diff --git a/docs/how_to/compile_models/from_darknet.html b/docs/how_to/compile_models/from_darknet.html
index b7dc2ba99e..43de0dceb0 100644
--- a/docs/how_to/compile_models/from_darknet.html
+++ b/docs/how_to/compile_models/from_darknet.html
@@ -595,7 +595,7 @@ class:[&#39;truck 0.9266&#39;] left:471 top:83 right:689 bottom:169
 class:[&#39;bicycle 0.9984&#39;] left:111 top:113 right:577 bottom:447
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  33.328 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  34.838 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-darknet-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7716f96385bd5abb6e822041e285be54/from_darknet.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_darknet.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/from_mxnet.html b/docs/how_to/compile_models/from_mxnet.html
index c6f8e97d55..8db2be037a 100644
--- a/docs/how_to/compile_models/from_mxnet.html
+++ b/docs/how_to/compile_models/from_mxnet.html
@@ -449,7 +449,7 @@
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;x&quot;</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.html#tuple" title="builtins.tuple" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">x</span><span class="o">.</span><span class="n">shape</span></a><span class="p">)</span>
 </pre></div>
 </div>
-<img src="../../_images/sphx_glr_from_mxnet_001.png" srcset="../../_images/sphx_glr_from_mxnet_001.png" alt="from mxnet" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip72932c81-1807-4862-8c11-9468138eecda from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
+<img src="../../_images/sphx_glr_from_mxnet_001.png" srcset="../../_images/sphx_glr_from_mxnet_001.png" alt="from mxnet" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zipbf978a80-d444-4877-972b-6637229f1c9f from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
 x (1, 3, 224, 224)
 </pre></div>
 </div>
diff --git a/docs/how_to/compile_models/from_oneflow.html b/docs/how_to/compile_models/from_oneflow.html
index c4b27017fb..ccd55f9635 100644
--- a/docs/how_to/compile_models/from_oneflow.html
+++ b/docs/how_to/compile_models/from_oneflow.html
@@ -459,14 +459,13 @@ Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdo
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading: &quot;https://oneflow-public.oss-cn-beijing.aliyuncs.com/model_zoo/flowvision/classification/ResNet/resnet18.zip&quot; to /workspace/.oneflow/flowvision_cache/resnet18.zip
 
   0%|          | 0.00/41.5M [00:00&lt;?, ?B/s]
- 15%|#5        | 6.33M/41.5M [00:00&lt;00:00, 57.5MB/s]
- 28%|##8       | 11.8M/41.5M [00:00&lt;00:00, 46.4MB/s]
- 39%|###9      | 16.3M/41.5M [00:00&lt;00:00, 33.8MB/s]
- 54%|#####3    | 22.3M/41.5M [00:00&lt;00:00, 37.7MB/s]
- 63%|######2   | 26.1M/41.5M [00:00&lt;00:00, 35.9MB/s]
- 77%|#######7  | 32.0M/41.5M [00:00&lt;00:00, 33.9MB/s]
- 92%|#########2| 38.3M/41.5M [00:01&lt;00:00, 38.1MB/s]
-100%|##########| 41.5M/41.5M [00:01&lt;00:00, 37.1MB/s]
+ 19%|#9        | 7.99M/41.5M [00:00&lt;00:01, 32.5MB/s]
+ 39%|###8      | 16.0M/41.5M [00:00&lt;00:00, 44.1MB/s]
+ 54%|#####3    | 22.3M/41.5M [00:00&lt;00:00, 43.4MB/s]
+ 64%|######4   | 26.7M/41.5M [00:00&lt;00:00, 36.7MB/s]
+ 77%|#######7  | 32.0M/41.5M [00:00&lt;00:00, 39.7MB/s]
+ 96%|#########6| 40.0M/41.5M [00:00&lt;00:00, 48.4MB/s]
+100%|##########| 41.5M/41.5M [00:00&lt;00:00, 44.7MB/s]
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/compile_models/from_paddle.html b/docs/how_to/compile_models/from_paddle.html
index 0b6c51a68d..767b851086 100644
--- a/docs/how_to/compile_models/from_paddle.html
+++ b/docs/how_to/compile_models/from_paddle.html
@@ -494,7 +494,7 @@ To begin, we’ll install PaddlePaddle&gt;=2.1.3:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>TVM prediction top-1 id: 282, class name:  282: &#39;tiger cat&#39;,
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  2.537 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  7.851 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-paddle-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/16269b77359771348d507395692524cf/from_paddle.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_paddle.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/from_pytorch.html b/docs/how_to/compile_models/from_pytorch.html
index 971094de1b..a202d9a629 100644
--- a/docs/how_to/compile_models/from_pytorch.html
+++ b/docs/how_to/compile_models/from_pytorch.html
@@ -442,15 +442,12 @@ be unstable.</p>
 Downloading: &quot;https://download.pytorch.org/models/resnet18-f37072fd.pth&quot; to /workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
 
   0%|          | 0.00/44.7M [00:00&lt;?, ?B/s]
- 14%|#4        | 6.30M/44.7M [00:00&lt;00:00, 64.0MB/s]
- 28%|##7       | 12.4M/44.7M [00:00&lt;00:00, 47.0MB/s]
- 38%|###8      | 17.1M/44.7M [00:00&lt;00:00, 37.1MB/s]
- 54%|#####3    | 24.0M/44.7M [00:00&lt;00:00, 40.0MB/s]
- 63%|######2   | 28.0M/44.7M [00:00&lt;00:00, 38.3MB/s]
- 72%|#######1  | 32.0M/44.7M [00:00&lt;00:00, 33.8MB/s]
- 86%|########5 | 38.3M/44.7M [00:01&lt;00:00, 36.8MB/s]
- 94%|#########3| 41.9M/44.7M [00:01&lt;00:00, 34.0MB/s]
-100%|##########| 44.7M/44.7M [00:01&lt;00:00, 39.3MB/s]
+ 18%|#7        | 7.99M/44.7M [00:00&lt;00:00, 54.7MB/s]
+ 36%|###5      | 16.0M/44.7M [00:00&lt;00:00, 57.5MB/s]
+ 54%|#####3    | 24.0M/44.7M [00:00&lt;00:00, 61.0MB/s]
+ 72%|#######1  | 32.0M/44.7M [00:00&lt;00:00, 56.1MB/s]
+ 90%|########9 | 40.0M/44.7M [00:00&lt;00:00, 55.8MB/s]
+100%|##########| 44.7M/44.7M [00:00&lt;00:00, 61.8MB/s]
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/compile_models/from_tensorflow.html b/docs/how_to/compile_models/from_tensorflow.html
index fe2b5dd34b..866fcc9dd7 100644
--- a/docs/how_to/compile_models/from_tensorflow.html
+++ b/docs/how_to/compile_models/from_tensorflow.html
@@ -662,7 +662,7 @@ banana (score = 0.00022)
 desk (score = 0.00019)
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  31.252 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  35.118 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-tensorflow-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7f1d3d1b878694c201c614c807cdebc8/from_tensorflow.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_tensorflow.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/sg_execution_times.html b/docs/how_to/compile_models/sg_execution_times.html
index 1ebb1cab2d..a919b3bc13 100644
--- a/docs/how_to/compile_models/sg_execution_times.html
+++ b/docs/how_to/compile_models/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-compile-models-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>07:06.181</strong> total execution time for <strong>how_to_compile_models</strong> files:</p>
+<p><strong>07:33.246</strong> total execution time for <strong>how_to_compile_models</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 81%" />
@@ -358,44 +358,44 @@
 <col style="width: 8%" />
 </colgroup>
 <tbody>
-<tr class="row-odd"><td><p><a class="reference internal" href="from_darknet.html#sphx-glr-how-to-compile-models-from-darknet-py"><span class="std std-ref">Compile YOLO-V2 and YOLO-V3 in DarkNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_darknet.py</span></code>)</p></td>
-<td><p>01:33.328</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="from_tensorflow.html#sphx-glr-how-to-compile-models-from-tensorflow-py"><span class="std std-ref">Compile Tensorflow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tensorflow.py</span></code>)</p></td>
+<td><p>01:35.118</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="from_tensorflow.html#sphx-glr-how-to-compile-models-from-tensorflow-py"><span class="std std-ref">Compile Tensorflow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tensorflow.py</span></code>)</p></td>
-<td><p>01:31.252</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="from_darknet.html#sphx-glr-how-to-compile-models-from-darknet-py"><span class="std std-ref">Compile YOLO-V2 and YOLO-V3 in DarkNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_darknet.py</span></code>)</p></td>
+<td><p>01:34.838</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_paddle.html#sphx-glr-how-to-compile-models-from-paddle-py"><span class="std std-ref">Compile PaddlePaddle Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_paddle.py</span></code>)</p></td>
-<td><p>01:02.537</p></td>
+<td><p>01:07.851</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_oneflow.html#sphx-glr-how-to-compile-models-from-oneflow-py"><span class="std std-ref">Compile OneFlow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_oneflow.py</span></code>)</p></td>
-<td><p>00:41.395</p></td>
+<td><p>00:45.640</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_coreml.html#sphx-glr-how-to-compile-models-from-coreml-py"><span class="std std-ref">Compile CoreML Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_coreml.py</span></code>)</p></td>
-<td><p>00:36.935</p></td>
+<td><p>00:41.115</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_mxnet.html#sphx-glr-how-to-compile-models-from-mxnet-py"><span class="std std-ref">Compile MXNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_mxnet.py</span></code>)</p></td>
-<td><p>00:32.847</p></td>
+<td><p>00:35.919</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="from_pytorch.html#sphx-glr-how-to-compile-models-from-pytorch-py"><span class="std std-ref">Compile PyTorch Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_pytorch.py</span></code>)</p></td>
-<td><p>00:28.257</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="from_keras.html#sphx-glr-how-to-compile-models-from-keras-py"><span class="std std-ref">Compile Keras Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_keras.py</span></code>)</p></td>
+<td><p>00:29.268</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="from_keras.html#sphx-glr-how-to-compile-models-from-keras-py"><span class="std std-ref">Compile Keras Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_keras.py</span></code>)</p></td>
-<td><p>00:24.956</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="from_pytorch.html#sphx-glr-how-to-compile-models-from-pytorch-py"><span class="std std-ref">Compile PyTorch Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_pytorch.py</span></code>)</p></td>
+<td><p>00:27.657</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_tflite.html#sphx-glr-how-to-compile-models-from-tflite-py"><span class="std std-ref">Compile TFLite Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tflite.py</span></code>)</p></td>
-<td><p>00:11.819</p></td>
+<td><p>00:12.757</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_onnx.html#sphx-glr-how-to-compile-models-from-onnx-py"><span class="std std-ref">Compile ONNX Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_onnx.py</span></code>)</p></td>
-<td><p>00:02.854</p></td>
+<td><p>00:03.084</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/deploy_models/deploy_model_on_adreno.html b/docs/how_to/deploy_models/deploy_model_on_adreno.html
index 39f7b4d6bf..fee8ff627f 100644
--- a/docs/how_to/deploy_models/deploy_model_on_adreno.html
+++ b/docs/how_to/deploy_models/deploy_model_on_adreno.html
@@ -840,10 +840,10 @@ Top5 predictions:
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
- 4226.2542    4224.7779    4235.2199    4222.6341      3.6576
+ 4227.5565    4227.5888    4234.6745    4223.7105      3.1666
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  20.132 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  22.678 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-model-on-adreno-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/2387d8448da213eb625e6b3d916327d4/deploy_model_on_adreno.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_model_on_adreno.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html b/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html
index 7388d935f6..66fa163eed 100644
--- a/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html
+++ b/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html
@@ -448,25 +448,34 @@ to run this tutorial with a real device over rpc.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels.h5
 
      8192/102967424 [..............................] - ETA: 0s
-  2187264/102967424 [..............................] - ETA: 3s
-  5439488/102967424 [&gt;.............................] - ETA: 2s
+  6635520/102967424 [&gt;.............................] - ETA: 2s
   8380416/102967424 [=&gt;............................] - ETA: 2s
- 16769024/102967424 [===&gt;..........................] - ETA: 2s
+ 15360000/102967424 [===&gt;..........................] - ETA: 1s
+ 16769024/102967424 [===&gt;..........................] - ETA: 1s
+ 23412736/102967424 [=====&gt;........................] - ETA: 1s
  25157632/102967424 [======&gt;.......................] - ETA: 1s
+ 25296896/102967424 [======&gt;.......................] - ETA: 2s
+ 30605312/102967424 [=======&gt;......................] - ETA: 1s
  33546240/102967424 [========&gt;.....................] - ETA: 1s
- 41934848/102967424 [===========&gt;..................] - ETA: 0s
- 44130304/102967424 [===========&gt;..................] - ETA: 0s
- 48578560/102967424 [=============&gt;................] - ETA: 0s
- 50700288/102967424 [=============&gt;................] - ETA: 0s
- 56967168/102967424 [===============&gt;..............] - ETA: 0s
- 67100672/102967424 [==================&gt;...........] - ETA: 0s
- 72540160/102967424 [====================&gt;.........] - ETA: 0s
+ 38461440/102967424 [==========&gt;...................] - ETA: 1s
+ 41934848/102967424 [===========&gt;..................] - ETA: 1s
+ 48578560/102967424 [=============&gt;................] - ETA: 1s
+ 50323456/102967424 [=============&gt;................] - ETA: 1s
+ 56967168/102967424 [===============&gt;..............] - ETA: 1s
+ 58712064/102967424 [================&gt;.............] - ETA: 1s
+ 64864256/102967424 [=================&gt;............] - ETA: 1s
+ 65355776/102967424 [==================&gt;...........] - ETA: 1s
+ 67100672/102967424 [==================&gt;...........] - ETA: 1s
+ 69296128/102967424 [===================&gt;..........] - ETA: 1s
  75489280/102967424 [====================&gt;.........] - ETA: 0s
- 84017152/102967424 [=======================&gt;......] - ETA: 0s
+ 82124800/102967424 [======================&gt;.......] - ETA: 0s
+ 83877888/102967424 [=======================&gt;......] - ETA: 0s
+ 90521600/102967424 [=========================&gt;....] - ETA: 0s
  92266496/102967424 [=========================&gt;....] - ETA: 0s
  98910208/102967424 [===========================&gt;..] - ETA: 0s
-100368384/102967424 [============================&gt;.] - ETA: 0s
-102967424/102967424 [==============================] - 2s 0us/step
+100646912/102967424 [============================&gt;.] - ETA: 0s
+102850560/102967424 [============================&gt;.] - ETA: 0s
+102967424/102967424 [==============================] - 3s 0us/step
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/deploy_models/deploy_model_on_android.html b/docs/how_to/deploy_models/deploy_model_on_android.html
index ddf3cc11b4..57efdabe4d 100644
--- a/docs/how_to/deploy_models/deploy_model_on_android.html
+++ b/docs/how_to/deploy_models/deploy_model_on_android.html
@@ -672,7 +672,7 @@ to the remote android device.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  15.7718      15.8008      16.3740      15.0997       0.3673
+  15.8922      15.8378      16.6459      15.5476       0.3041
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/deploy_models/deploy_object_detection_pytorch.html b/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
index 4e0dd98c0f..24221e5a1c 100644
--- a/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
+++ b/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
@@ -464,35 +464,37 @@ be unstable.</p>
 Downloading: &quot;https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth&quot; to /workspace/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
 
   0%|          | 0.00/170M [00:00&lt;?, ?B/s]
-  5%|4         | 7.99M/170M [00:00&lt;00:03, 50.3MB/s]
-  9%|9         | 16.0M/170M [00:00&lt;00:02, 54.8MB/s]
- 14%|#4        | 24.0M/170M [00:00&lt;00:02, 51.0MB/s]
- 18%|#7        | 30.3M/170M [00:00&lt;00:03, 39.3MB/s]
- 20%|##        | 34.3M/170M [00:00&lt;00:04, 35.1MB/s]
- 24%|##3       | 40.0M/170M [00:01&lt;00:03, 38.2MB/s]
- 28%|##8       | 48.0M/170M [00:01&lt;00:03, 40.0MB/s]
- 33%|###2      | 56.0M/170M [00:01&lt;00:02, 44.3MB/s]
- 38%|###7      | 64.0M/170M [00:01&lt;00:02, 42.9MB/s]
- 42%|####2     | 72.0M/170M [00:01&lt;00:02, 44.3MB/s]
- 46%|####6     | 78.3M/170M [00:01&lt;00:02, 42.3MB/s]
- 49%|####8     | 82.4M/170M [00:02&lt;00:02, 42.1MB/s]
- 52%|#####1    | 88.0M/170M [00:02&lt;00:02, 41.0MB/s]
- 56%|#####5    | 94.3M/170M [00:02&lt;00:01, 44.7MB/s]
- 58%|#####8    | 98.6M/170M [00:02&lt;00:01, 44.8MB/s]
- 61%|######    | 103M/170M [00:02&lt;00:01, 40.8MB/s]
- 63%|######2   | 107M/170M [00:02&lt;00:01, 36.2MB/s]
- 66%|######5   | 112M/170M [00:02&lt;00:01, 36.6MB/s]
- 68%|######8   | 116M/170M [00:02&lt;00:01, 35.2MB/s]
- 71%|#######   | 120M/170M [00:03&lt;00:01, 33.1MB/s]
- 74%|#######4  | 126M/170M [00:03&lt;00:01, 27.5MB/s]
- 76%|#######6  | 129M/170M [00:03&lt;00:01, 27.0MB/s]
- 80%|########  | 136M/170M [00:03&lt;00:01, 30.8MB/s]
- 85%|########4 | 144M/170M [00:03&lt;00:00, 34.6MB/s]
- 89%|########9 | 151M/170M [00:04&lt;00:00, 42.7MB/s]
- 92%|#########1| 156M/170M [00:04&lt;00:00, 39.9MB/s]
- 94%|#########4| 160M/170M [00:04&lt;00:00, 30.6MB/s]
- 98%|#########7| 166M/170M [00:04&lt;00:00, 33.4MB/s]
-100%|##########| 170M/170M [00:04&lt;00:00, 38.2MB/s]
+  4%|3         | 6.30M/170M [00:00&lt;00:04, 35.6MB/s]
+  6%|5         | 9.70M/170M [00:00&lt;00:06, 27.1MB/s]
+  8%|8         | 14.3M/170M [00:00&lt;00:04, 33.3MB/s]
+ 10%|#         | 17.7M/170M [00:00&lt;00:05, 29.5MB/s]
+ 14%|#4        | 24.0M/170M [00:00&lt;00:05, 26.9MB/s]
+ 19%|#8        | 32.0M/170M [00:01&lt;00:04, 33.8MB/s]
+ 24%|##3       | 40.0M/170M [00:01&lt;00:03, 40.3MB/s]
+ 28%|##8       | 48.0M/170M [00:01&lt;00:02, 43.9MB/s]
+ 33%|###2      | 56.0M/170M [00:01&lt;00:02, 47.6MB/s]
+ 37%|###6      | 62.3M/170M [00:01&lt;00:02, 51.7MB/s]
+ 40%|###9      | 67.5M/170M [00:01&lt;00:02, 49.0MB/s]
+ 43%|####2     | 72.4M/170M [00:01&lt;00:02, 42.0MB/s]
+ 46%|####6     | 78.3M/170M [00:02&lt;00:02, 45.9MB/s]
+ 49%|####8     | 82.9M/170M [00:02&lt;00:02, 44.8MB/s]
+ 51%|#####1    | 87.4M/170M [00:02&lt;00:02, 40.8MB/s]
+ 54%|#####3    | 91.4M/170M [00:02&lt;00:02, 39.5MB/s]
+ 57%|#####6    | 96.0M/170M [00:02&lt;00:02, 35.6MB/s]
+ 60%|######    | 102M/170M [00:02&lt;00:01, 41.5MB/s]
+ 63%|######2   | 106M/170M [00:02&lt;00:01, 36.8MB/s]
+ 66%|######5   | 112M/170M [00:02&lt;00:01, 37.2MB/s]
+ 69%|######8   | 117M/170M [00:03&lt;00:01, 38.0MB/s]
+ 71%|#######1  | 121M/170M [00:03&lt;00:01, 37.0MB/s]
+ 75%|#######5  | 128M/170M [00:03&lt;00:01, 39.2MB/s]
+ 80%|########  | 136M/170M [00:03&lt;00:00, 39.6MB/s]
+ 85%|########4 | 144M/170M [00:03&lt;00:00, 43.3MB/s]
+ 88%|########8 | 150M/170M [00:03&lt;00:00, 46.4MB/s]
+ 91%|#########1| 155M/170M [00:04&lt;00:00, 45.8MB/s]
+ 94%|#########3| 159M/170M [00:04&lt;00:00, 37.9MB/s]
+ 96%|#########6| 163M/170M [00:04&lt;00:00, 39.2MB/s]
+ 99%|#########8| 167M/170M [00:04&lt;00:00, 38.8MB/s]
+100%|##########| 170M/170M [00:04&lt;00:00, 39.8MB/s]
 /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/nn/functional.py:3912: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
   (torch.floor((input.size(i + 2).float() * torch.tensor(scale_factors[i], dtype=torch.float32)).float()))
 /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/ops/boxes.py:157: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
@@ -586,7 +588,7 @@ torchvision rcnn models.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Get 9 valid boxes
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  44.380 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  54.779 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-object-detection-pytorch-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7795da4b258c8feff986668b95ef57ad/deploy_object_detection_pytorch.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_object_detection_pytorch.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_prequantized.html b/docs/how_to/deploy_models/deploy_prequantized.html
index 418a47a3b7..a4f0c99e00 100644
--- a/docs/how_to/deploy_models/deploy_prequantized.html
+++ b/docs/how_to/deploy_models/deploy_prequantized.html
@@ -505,9 +505,9 @@ training. Other models require a full post training calibration.</p>
 Downloading: &quot;https://download.pytorch.org/models/mobilenet_v2-b0353104.pth&quot; to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth
 
   0%|          | 0.00/13.6M [00:00&lt;?, ?B/s]
- 59%|#####8    | 7.99M/13.6M [00:00&lt;00:00, 50.0MB/s]
- 94%|#########4| 12.8M/13.6M [00:00&lt;00:00, 44.1MB/s]
-100%|##########| 13.6M/13.6M [00:00&lt;00:00, 47.3MB/s]
+ 47%|####6     | 6.30M/13.6M [00:00&lt;00:00, 48.7MB/s]
+ 81%|########  | 10.9M/13.6M [00:00&lt;00:00, 30.6MB/s]
+100%|##########| 13.6M/13.6M [00:00&lt;00:00, 32.9MB/s]
 </pre></div>
 </div>
 </div>
@@ -598,7 +598,7 @@ output values are identical out of 1000 outputs from mobilenet v2.</p>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  89.2425      89.1767      90.6769      88.7424       0.2954
+  89.1701      89.1335      93.3542      88.6979       0.4907
 </pre></div>
 </div>
 <div class="admonition note">
@@ -637,7 +637,7 @@ This includes support for the VNNI 8 bit dot product instruction (CascadeLake or
 <div class="section" id="deploy-a-quantized-tflite-model">
 <h2>Deploy a quantized TFLite Model<a class="headerlink" href="#deploy-a-quantized-tflite-model" title="Permalink to this headline">¶</a></h2>
 <p>TODO</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  28.431 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  35.572 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-prequantized-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/fb8217c13f4351224c6cf3aacf1a87fc/deploy_prequantized.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_prequantized.py</span></code></a></p>
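
The "Execution time summary" tables in these deploy tutorials match the report printed by the graph executor's ``benchmark`` helper, which repeats the run and summarizes mean/median/max/min/std in milliseconds. A self-contained sketch of that pattern with a small stand-in model (the tutorial itself benchmarks a prequantized MobileNetV2):

.. code-block:: python

    import numpy as np
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor

    # Tiny stand-in network instead of the prequantized MobileNetV2.
    data = relay.var("data", shape=(1, 3, 224, 224), dtype="float32")
    weight = relay.var("weight", shape=(16, 3, 3, 3), dtype="float32")
    net = relay.nn.conv2d(data, weight, kernel_size=(3, 3), padding=(1, 1))
    mod = tvm.IRModule.from_expr(relay.Function([data, weight], net))

    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm")

    dev = tvm.cpu(0)
    rt_mod = graph_executor.GraphModule(lib["default"](dev))
    rt_mod.set_input("data", np.random.rand(1, 3, 224, 224).astype("float32"))
    rt_mod.set_input("weight", np.random.rand(16, 3, 3, 3).astype("float32"))

    # benchmark() repeats run() and prints the mean/median/max/min/std table.
    print(rt_mod.benchmark(dev, number=1, repeat=30))
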
diff --git a/docs/how_to/deploy_models/deploy_prequantized_tflite.html b/docs/how_to/deploy_models/deploy_prequantized_tflite.html
index 081d829fc7..3e5cdfb103 100644
--- a/docs/how_to/deploy_models/deploy_prequantized_tflite.html
+++ b/docs/how_to/deploy_models/deploy_prequantized_tflite.html
@@ -590,7 +590,7 @@ TFLite Top-5 labels: [387 102 386 341 349]
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  109.4787     109.4029     110.4955     108.8343      0.3101
+  110.1091     109.9590     115.1783     109.5069      0.6258
 </pre></div>
 </div>
 <div class="admonition note">
diff --git a/docs/how_to/deploy_models/deploy_quantized.html b/docs/how_to/deploy_models/deploy_quantized.html
index 173f11b62c..531361b3cb 100644
--- a/docs/how_to/deploy_models/deploy_quantized.html
+++ b/docs/how_to/deploy_models/deploy_quantized.html
@@ -531,7 +531,7 @@ for calibration. But the accuracy might be impacted.</p>
   warnings.warn(
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  0.059 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  19.368 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-quantized-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7810ecf51bfc05f7d5e8a400ac3e815d/deploy_quantized.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_quantized.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/sg_execution_times.html b/docs/how_to/deploy_models/sg_execution_times.html
index eeb8567886..9fbba40094 100644
--- a/docs/how_to/deploy_models/sg_execution_times.html
+++ b/docs/how_to/deploy_models/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-deploy-models-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>12:02.027</strong> total execution time for <strong>how_to_deploy_models</strong> files:</p>
+<p><strong>12:59.608</strong> total execution time for <strong>how_to_deploy_models</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 86%" />
@@ -359,39 +359,39 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_object_detection_pytorch.html#sphx-glr-how-to-deploy-models-deploy-object-detection-pytorch-py"><span class="std std-ref">Compile PyTorch Object Detection Models</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_object_detection_pytorch.py</span></code>)</p></td>
-<td><p>03:44.380</p></td>
+<td><p>03:54.779</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_quantized.html#sphx-glr-how-to-deploy-models-deploy-quantized-py"><span class="std std-ref">Deploy a Quantized Model on Cuda</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_quantized.py</span></code>)</p></td>
-<td><p>02:00.059</p></td>
+<td><p>02:19.368</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_prequantized.html#sphx-glr-how-to-deploy-models-deploy-prequantized-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized.py</span></code>)</p></td>
-<td><p>01:28.431</p></td>
+<td><p>01:35.572</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_adreno.html#sphx-glr-how-to-deploy-models-deploy-model-on-adreno-py"><span class="std std-ref">Deploy the Pretrained Model on Adreno™</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_adreno.py</span></code>)</p></td>
-<td><p>01:20.132</p></td>
+<td><p>01:22.678</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_prequantized_tflite.html#sphx-glr-how-to-deploy-models-deploy-prequantized-tflite-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM - Part 3 (TFLite)</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized_tflite.py</span></code>)</p></td>
-<td><p>00:52.139</p></td>
+<td><p>00:56.278</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_android.html#sphx-glr-how-to-deploy-models-deploy-model-on-android-py"><span class="std std-ref">Deploy the Pretrained Model on Android</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_android.py</span></code>)</p></td>
-<td><p>00:51.155</p></td>
+<td><p>00:55.071</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_model_on_adreno_tvmc.html#sphx-glr-how-to-deploy-models-deploy-model-on-adreno-tvmc-py"><span class="std std-ref">Deploy the Pretrained Model on Adreno™ with tvmc Interface</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_adreno_tvmc.py</span></code>)</p></td>
-<td><p>00:45.288</p></td>
+<td><p>00:50.872</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_nano.html#sphx-glr-how-to-deploy-models-deploy-model-on-nano-py"><span class="std std-ref">Deploy the Pretrained Model on Jetson Nano</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_nano.py</span></code>)</p></td>
-<td><p>00:30.331</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_rasp.html#sphx-glr-how-to-deploy-models-deploy-model-on-rasp-py"><span class="std std-ref">Deploy the Pretrained Model on Raspberry Pi</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_rasp.py</span></code>)</p></td>
+<td><p>00:32.525</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="deploy_model_on_rasp.html#sphx-glr-how-to-deploy-models-deploy-model-on-rasp-py"><span class="std std-ref">Deploy the Pretrained Model on Raspberry Pi</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_rasp.py</span></code>)</p></td>
-<td><p>00:30.106</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="deploy_model_on_nano.html#sphx-glr-how-to-deploy-models-deploy-model-on-nano-py"><span class="std std-ref">Deploy the Pretrained Model on Jetson Nano</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_nano.py</span></code>)</p></td>
+<td><p>00:32.458</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_sparse.html#sphx-glr-how-to-deploy-models-deploy-sparse-py"><span class="std std-ref">Deploy a Hugging Face Pruned Model on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_sparse.py</span></code>)</p></td>
diff --git a/docs/how_to/extend_tvm/bring_your_own_datatypes.html b/docs/how_to/extend_tvm/bring_your_own_datatypes.html
index 41b23cb041..0d7db660a7 100644
--- a/docs/how_to/extend_tvm/bring_your_own_datatypes.html
+++ b/docs/how_to/extend_tvm/bring_your_own_datatypes.html
@@ -629,7 +629,7 @@ In this alpha state of the Bring Your Own Datatypes framework, we have not imple
 <span class="n">module</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">params</span></a> <span class="o">=</span> <span class="n">get_mobilenet</span><span class="p">()</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip486dd616-2c9d-4de8-8edd-5728f7951673 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip7d141282-fd43-4445-b2d9-aef2526ed0be from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
 </pre></div>
 </div>
 <p>It’s easy to execute MobileNet with native TVM:</p>
diff --git a/docs/how_to/extend_tvm/sg_execution_times.html b/docs/how_to/extend_tvm/sg_execution_times.html
index 360800e175..143ad504e8 100644
--- a/docs/how_to/extend_tvm/sg_execution_times.html
+++ b/docs/how_to/extend_tvm/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-extend-tvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:58.305</strong> total execution time for <strong>how_to_extend_tvm</strong> files:</p>
+<p><strong>00:58.576</strong> total execution time for <strong>how_to_extend_tvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,15 +359,15 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="bring_your_own_datatypes.html#sphx-glr-how-to-extend-tvm-bring-your-own-datatypes-py"><span class="std std-ref">Bring Your Own Datatypes to TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">bring_your_own_datatypes.py</span></code>)</p></td>
-<td><p>00:54.272</p></td>
+<td><p>00:54.588</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="use_pass_instrument.html#sphx-glr-how-to-extend-tvm-use-pass-instrument-py"><span class="std std-ref">How to Use TVM Pass Instrument</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_instrument.py</span></code>)</p></td>
-<td><p>00:02.830</p></td>
+<td><p>00:02.804</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="use_pass_infra.html#sphx-glr-how-to-extend-tvm-use-pass-infra-py"><span class="std std-ref">How to Use TVM Pass Infra</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_infra.py</span></code>)</p></td>
-<td><p>00:01.196</p></td>
+<td><p>00:01.177</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="low_level_custom_pass.html#sphx-glr-how-to-extend-tvm-low-level-custom-pass-py"><span class="std std-ref">Writing a Customized Pass</span></a> (<code class="docutils literal notranslate"><span class="pre">low_level_custom_pass.py</span></code>)</p></td>
diff --git a/docs/how_to/extend_tvm/use_pass_instrument.html b/docs/how_to/extend_tvm/use_pass_instrument.html
index 38158fbd79..79080ba182 100644
--- a/docs/how_to/extend_tvm/use_pass_instrument.html
+++ b/docs/how_to/extend_tvm/use_pass_instrument.html
@@ -536,10 +536,10 @@ profile the execution time of each passes.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Printing results of timing profile...
-InferType: 24091us [24091us] (48.23%; 48.23%)
-FoldScaleAxis: 25860us [9us] (51.77%; 51.77%)
-        FoldConstant: 25851us [1882us] (51.75%; 99.97%)
-                InferType: 23969us [23969us] (47.98%; 92.72%)
+InferType: 23974us [23974us] (44.05%; 44.05%)
+FoldScaleAxis: 30448us [10us] (55.95%; 55.95%)
+        FoldConstant: 30438us [2762us] (55.93%; 99.97%)
+                InferType: 27677us [27677us] (50.86%; 90.93%)
 </pre></div>
 </div>
 </div>
@@ -561,10 +561,10 @@ Refer to following sections and <a class="reference internal" href="../../refere
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Printing results of timing profile...
-InferType: 23960us [23960us] (48.04%; 48.04%)
-FoldScaleAxis: 25920us [9us] (51.96%; 51.96%)
-        FoldConstant: 25911us [1925us] (51.95%; 99.96%)
-                InferType: 23986us [23986us] (48.09%; 92.57%)
+InferType: 23430us [23430us] (48.26%; 48.26%)
+FoldScaleAxis: 25123us [8us] (51.74%; 51.74%)
+        FoldConstant: 25116us [1871us] (51.73%; 99.97%)
+                InferType: 23245us [23245us] (47.88%; 92.55%)
 </pre></div>
 </div>
 <p>Register empty list to clear existing instruments.</p>
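
The two timing profiles above are collected with TVM's pass instrumentation; the differences between them are run-to-run noise on the build server. A minimal sketch of gathering such a profile (the workload is a stand-in assumption):

.. code-block:: python

    import tvm
    from tvm import relay
    from tvm.ir.instrument import PassTimingInstrument
    from tvm.relay import testing

    # Stand-in workload; the tutorial instruments passes over a ResNet-style module.
    mod, _ = testing.resnet.get_workload(num_layers=18, batch_size=1)

    timing_inst = PassTimingInstrument()
    with tvm.transform.PassContext(instruments=[timing_inst]):
        mod = relay.transform.InferType()(mod)
        mod = relay.transform.FoldScaleAxis()(mod)
        # render() must be called while the PassContext is still active; it returns
        # the per-pass timing breakdown shown in the profiles above.
        profile = timing_inst.render()
    print(profile)
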
diff --git a/docs/how_to/optimize_operators/opt_conv_cuda.html b/docs/how_to/optimize_operators/opt_conv_cuda.html
index 0ed422f842..de34995222 100644
--- a/docs/how_to/optimize_operators/opt_conv_cuda.html
+++ b/docs/how_to/optimize_operators/opt_conv_cuda.html
@@ -585,7 +585,7 @@ latency of convolution.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Convolution: </span><span class="si">%f</span><span class="s2"> ms&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">b</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span> <span class="o">*</span> <span cl [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Convolution: 33.582592 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Convolution: 53.581855 ms
 </pre></div>
 </div>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-optimize-operators-opt-conv-cuda-py">
diff --git a/docs/how_to/optimize_operators/opt_conv_tensorcore.html b/docs/how_to/optimize_operators/opt_conv_tensorcore.html
index cc944a10fa..feea91ef36 100644
--- a/docs/how_to/optimize_operators/opt_conv_tensorcore.html
+++ b/docs/how_to/optimize_operators/opt_conv_tensorcore.html
@@ -867,7 +867,7 @@ be able to run on our build server</p>
     <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;conv2d with tensor core: </span><span class="si">%f</span><span class="s2"> ms&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span> <span class="o">* [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>conv2d with tensor core: 12.269376 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>conv2d with tensor core: 12.276531 ms
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/optimize_operators/opt_gemm.html b/docs/how_to/optimize_operators/opt_gemm.html
index 1a044329be..e9ba3e4fad 100644
--- a/docs/how_to/optimize_operators/opt_gemm.html
+++ b/docs/how_to/optimize_operators/opt_gemm.html
@@ -482,8 +482,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Baseline: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.019010
-Baseline: 3.180731
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.018542
+Baseline: 3.545863
 </pre></div>
 </div>
 <p>In TVM, we can always inspect lower level IR to debug or optimize our schedule.
@@ -542,7 +542,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt1: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt1: 0.307755
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt1: 0.297697
 </pre></div>
 </div>
 <p>Here is the generated IR after blocking.</p>
@@ -599,7 +599,7 @@ vastly.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt2: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt2: 0.283842
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt2: 0.289679
 </pre></div>
 </div>
 <p>Here is the generated IR after vectorization.</p>
@@ -654,7 +654,7 @@ the access pattern for A matrix is more cache friendly.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt3: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt3: 0.120883
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt3: 0.118086
 </pre></div>
 </div>
 <p>Here is the generated IR after loop permutation.</p>
@@ -731,7 +731,7 @@ flattening.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt4: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt4: 0.107620
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt4: 0.108616
 </pre></div>
 </div>
 <p>Here is the generated IR after array packing.</p>
@@ -809,7 +809,7 @@ write to C when all the block results are ready.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt5: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt5: 0.111692
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt5: 0.112535
 </pre></div>
 </div>
 <p>Here is the generated IR after blocking.</p>
@@ -889,7 +889,7 @@ class Module:
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt6: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">opt6_time</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt6: 0.133280
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt6: 0.134580
 </pre></div>
 </div>
 <p>Here is the generated IR after parallelization.</p>
diff --git a/docs/how_to/optimize_operators/sg_execution_times.html b/docs/how_to/optimize_operators/sg_execution_times.html
index 38f9f7f51d..dc80c6bc89 100644
--- a/docs/how_to/optimize_operators/sg_execution_times.html
+++ b/docs/how_to/optimize_operators/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-optimize-operators-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:33.751</strong> total execution time for <strong>how_to_optimize_operators</strong> files:</p>
+<p><strong>00:34.893</strong> total execution time for <strong>how_to_optimize_operators</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -359,15 +359,15 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="opt_gemm.html#sphx-glr-how-to-optimize-operators-opt-gemm-py"><span class="std std-ref">How to optimize GEMM on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_gemm.py</span></code>)</p></td>
-<td><p>00:30.490</p></td>
+<td><p>00:31.614</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="opt_conv_tensorcore.html#sphx-glr-how-to-optimize-operators-opt-conv-tensorcore-py"><span class="std std-ref">How to optimize convolution using TensorCores</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_tensorcore.py</span></code>)</p></td>
-<td><p>00:01.963</p></td>
+<td><p>00:02.039</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="opt_conv_cuda.html#sphx-glr-how-to-optimize-operators-opt-conv-cuda-py"><span class="std std-ref">How to optimize convolution on GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_cuda.py</span></code>)</p></td>
-<td><p>00:01.299</p></td>
+<td><p>00:01.240</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/tune_with_autoscheduler/sg_execution_times.html b/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
index 0cc9ebf90f..4240b5b839 100644
--- a/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
+++ b/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-tune-with-autoscheduler-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>03:35.538</strong> total execution time for <strong>how_to_tune_with_autoscheduler</strong> files:</p>
+<p><strong>03:46.816</strong> total execution time for <strong>how_to_tune_with_autoscheduler</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 85%" />
@@ -359,27 +359,27 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_network_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-x86-py"><span class="std std-ref">Auto-scheduling a Neural Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_x86.py</span></code>)</p></td>
-<td><p>01:29.749</p></td>
+<td><p>01:33.828</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_network_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-cuda-py"><span class="std std-ref">Auto-scheduling a Neural Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_cuda.py</span></code>)</p></td>
-<td><p>01:16.274</p></td>
+<td><p>01:19.762</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="tune_conv2d_layer_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-conv2d-layer-cuda-py"><span class="std std-ref">Auto-scheduling a Convolution Layer for GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_layer_cuda.py</span></code>)</p></td>
-<td><p>00:17.408</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="tune_network_arm.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-arm-py"><span class="std std-ref">Auto-scheduling a Neural Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_arm.py</span></code>)</p></td>
+<td><p>00:18.140</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="tune_network_arm.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-arm-py"><span class="std std-ref">Auto-scheduling a Neural Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_arm.py</span></code>)</p></td>
-<td><p>00:16.348</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="tune_network_mali.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-mali-py"><span class="std std-ref">Auto-scheduling a Neural Network for mali GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_mali.py</span></code>)</p></td>
+<td><p>00:17.591</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="tune_network_mali.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-mali-py"><span class="std std-ref">Auto-scheduling a Neural Network for mali GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_mali.py</span></code>)</p></td>
-<td><p>00:15.654</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="tune_conv2d_layer_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-conv2d-layer-cuda-py"><span class="std std-ref">Auto-scheduling a Convolution Layer for GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_layer_cuda.py</span></code>)</p></td>
+<td><p>00:17.392</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_sparse_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-sparse-x86-py"><span class="std std-ref">Auto-scheduling Sparse Matrix Multiplication on CPU with Custom Sketch Rule</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_sparse_x86.py</span></code>)</p></td>
-<td><p>00:00.104</p></td>
+<td><p>00:00.103</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html b/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
index babacfa779..ca55c677d9 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
@@ -1023,7 +1023,7 @@ class Module:
 <span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 0.356 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 0.340 ms
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html b/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
index 4e3b559963..cca8cbf1c7 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
@@ -926,7 +926,7 @@ so we can read the log file and load the best schedules.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-   8.1405       8.1461       8.1480       8.1274       0.0093
+   8.1221       8.1217       8.1298       8.1149       0.0061
 </pre></div>
 </div>
 </div>
@@ -948,7 +948,7 @@ to learn how to use the RPC Tracker and RPC Server.
 To use the RPC Tracker in auto-scheduler, replace the runner in <code class="code docutils literal notranslate"><span class="pre">TuningOptions</span></code>
 with <a class="reference internal" href="../../reference/api/python/auto_scheduler.html#tvm.auto_scheduler.RPCRunner" title="tvm.auto_scheduler.RPCRunner"><code class="xref any py py-class docutils literal notranslate"><span class="pre">auto_scheduler.RPCRunner</span></code></a>.</p></li>
 </ol>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  16.274 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  19.762 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-network-cuda-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/eafe360d52540634c9eea0fa89e804bd/tune_network_cuda.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_network_cuda.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_network_x86.html b/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
index 16a542c6b2..8f6fbab23a 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
@@ -945,7 +945,7 @@ so we can read the log file and load the best schedules.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  750.1840     749.0816     752.9914     748.4790      2.0003
+  756.9340     757.8507     757.9034     755.0479      1.3339
 </pre></div>
 </div>
 </div>
@@ -967,7 +967,7 @@ to learn how to use the RPC Tracker and RPC Server.
 To use the RPC Tracker in auto-scheduler, replace the runner in <code class="code docutils literal notranslate"><span class="pre">TuningOptions</span></code>
 with <a class="reference internal" href="../../reference/api/python/auto_scheduler.html#tvm.auto_scheduler.RPCRunner" title="tvm.auto_scheduler.RPCRunner"><code class="xref any py py-class docutils literal notranslate"><span class="pre">auto_scheduler.RPCRunner</span></code></a>.</p></li>
 </ol>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  29.749 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  33.828 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-network-x86-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/e416b94ca1090b0897c0f6e0df95b911/tune_network_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_network_x86.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autotvm/sg_execution_times.html b/docs/how_to/tune_with_autotvm/sg_execution_times.html
index 57ef78e0e5..099341f2e0 100644
--- a/docs/how_to/tune_with_autotvm/sg_execution_times.html
+++ b/docs/how_to/tune_with_autotvm/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-tune-with-autotvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:23.526</strong> total execution time for <strong>how_to_tune_with_autotvm</strong> files:</p>
+<p><strong>00:24.587</strong> total execution time for <strong>how_to_tune_with_autotvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,22 +359,22 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_conv2d_cuda.html#sphx-glr-how-to-tune-with-autotvm-tune-conv2d-cuda-py"><span class="std std-ref">Tuning High Performance Convolution on NVIDIA GPUs</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_cuda.py</span></code>)</p></td>
-<td><p>00:23.489</p></td>
+<td><p>00:24.547</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_relay_x86.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-x86-py"><span class="std std-ref">Auto-tuning a Convolutional Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_x86.py</span></code>)</p></td>
-<td><p>00:00.022</p></td>
+<td><p>00:00.024</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_relay_cuda.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-cuda-py"><span class="std std-ref">Auto-tuning a Convolutional Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_cuda.py</span></code>)</p></td>
 <td><p>00:00.006</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="tune_relay_arm.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-arm-py"><span class="std std-ref">Auto-tuning a Convolutional Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_arm.py</span></code>)</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="tune_relay_mobile_gpu.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-mobile-gpu-py"><span class="std std-ref">Auto-tuning a Convolutional Network for Mobile GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_mobile_gpu.py</span></code>)</p></td>
 <td><p>00:00.005</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="tune_relay_mobile_gpu.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-mobile-gpu-py"><span class="std std-ref">Auto-tuning a Convolutional Network for Mobile GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_mobile_gpu.py</span></code>)</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="tune_relay_arm.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-arm-py"><span class="std std-ref">Auto-tuning a Convolutional Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_arm.py</span></code>)</p></td>
 <td><p>00:00.005</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
diff --git a/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html b/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
index c29c94791c..fa5e8f8f86 100644
--- a/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
+++ b/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
@@ -620,7 +620,7 @@ and measure running time.</p>
 
 Best config:
 ,None
-Time cost of this operator: 0.037251
+Time cost of this operator: 0.037097
 </pre></div>
 </div>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autotvm-tune-conv2d-cuda-py">
diff --git a/docs/how_to/work_with_microtvm/micro_autotune.html b/docs/how_to/work_with_microtvm/micro_autotune.html
index 39d4ecad81..ef92363b26 100644
--- a/docs/how_to/work_with_microtvm/micro_autotune.html
+++ b/docs/how_to/work_with_microtvm/micro_autotune.html
@@ -654,10 +654,10 @@ the tuned operator.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>########## Build without Autotuning ##########
 Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)
 ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------
-tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  309.1     98.737   (1, 2, 10, 10, 3)  2       1        [309.1]
-tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.956     0.944    (1, 6, 10, 10)     1       1        [2.956]
-tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.999     0.319    (1, 1, 10, 10, 3)  1       1        [0.999]
-Total_time                                    -                                             313.055   -        -                  -       -        -
+tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  308.9     98.757   (1, 2, 10, 10, 3)  2       1        [308.9]
+tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.918     0.933    (1, 6, 10, 10)     1       1        [2.918]
+tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.971     0.31     (1, 1, 10, 10, 3)  1       1        [0.971]
+Total_time                                    -                                             312.789   -        -                  -       -        -
 </pre></div>
 </div>
 </div>
@@ -709,13 +709,13 @@ Total_time                                    -
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>########## Build with Autotuning ##########
 Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)
 ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------
-tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  136.7     98.081   (1, 6, 10, 10, 1)  2       1        [136.7]
-tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.81      1.299    (1, 6, 10, 10)     1       1        [1.81]
-tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.865     0.621    (1, 3, 10, 10, 1)  1       1        [0.865]
-Total_time                                    -                                             139.375   -        -                  -       -        -
+tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  101.7     97.466   (1, 6, 10, 10, 1)  2       1        [101.7]
+tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.799     1.724    (1, 6, 10, 10)     1       1        [1.799]
+tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.846     0.811    (1, 3, 10, 10, 1)  1       1        [0.846]
+Total_time                                    -                                             104.344   -        -                  -       -        -
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  26.459 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  28.926 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-autotune-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/9ccca8fd489a1486ac71b55a55c320c5/micro_autotune.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_autotune.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/micro_pytorch.html b/docs/how_to/work_with_microtvm/micro_pytorch.html
index 6cf13ba2d7..ba8ae07206 100644
--- a/docs/how_to/work_with_microtvm/micro_pytorch.html
+++ b/docs/how_to/work_with_microtvm/micro_pytorch.html
@@ -465,8 +465,8 @@ download a cat image and preprocess it to use as the model input.</p>
 Downloading: &quot;https://download.pytorch.org/models/quantized/mobilenet_v2_qnnpack_37f702c5.pth&quot; to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2_qnnpack_37f702c5.pth
 
   0%|          | 0.00/3.42M [00:00&lt;?, ?B/s]
- 67%|######6   | 2.29M/3.42M [00:00&lt;00:00, 24.0MB/s]
-100%|##########| 3.42M/3.42M [00:00&lt;00:00, 27.4MB/s]
+ 61%|######    | 2.09M/3.42M [00:00&lt;00:00, 14.8MB/s]
+100%|##########| 3.42M/3.42M [00:00&lt;00:00, 23.4MB/s]
 /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/_utils.py:314: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
   device=storage.device,
 /workspace/python/tvm/relay/frontend/pytorch_utils.py:47: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
@@ -594,7 +594,7 @@ via the host <cite>main.cc`</cite> or if a Zephyr emulated board is selected as
 Torch top-1 id: 282, class name: tiger cat
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  28.613 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  28.633 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-pytorch-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/12b9ecc04c41abaa12022061771821d1/micro_pytorch.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_pytorch.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/micro_train.html b/docs/how_to/work_with_microtvm/micro_train.html
index f1b5457dfb..3db8072380 100644
--- a/docs/how_to/work_with_microtvm/micro_train.html
+++ b/docs/how_to/work_with_microtvm/micro_train.html
@@ -533,7 +533,7 @@ take about <strong>2 minutes</strong> to download the Stanford Cars, while COCO
 <a href="https://docs.python.org/3/library/shutil.html#shutil.move" title="shutil.move" class="sphx-glr-backref-module-shutil sphx-glr-backref-type-py-function"><span class="n">shutil</span><span class="o">.</span><span class="n">move</span></a><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;</span><span class="si">{</span><a href="https://docs.python.org/3/library/stdtypes.html#str" title="builtins.str" class="sphx-glr-backref-module-builtins sphx-glr-backref-typ [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&#39;/tmp/tmpt_udtumq/images/random&#39;
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&#39;/tmp/tmp2dfhyzae/images/random&#39;
 </pre></div>
 </div>
 </div>
@@ -593,8 +593,8 @@ objects to other stuff? We can display some examples from our datasets using <co
     <span class="n">plt</span><span class="o">.</span><span class="n">axis</span><span class="p">(</span><span class="s2">&quot;off&quot;</span><span class="p">)</span>
 </pre></div>
 </div>
-<img src="../../_images/sphx_glr_micro_train_001.png" srcset="../../_images/sphx_glr_micro_train_001.png" alt="[1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/tmp/tmpt_udtumq/images/target contains 8144 images
-/tmp/tmpt_udtumq/images/random contains 5000 images
+<img src="../../_images/sphx_glr_micro_train_001.png" srcset="../../_images/sphx_glr_micro_train_001.png" alt="[1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/tmp/tmp2dfhyzae/images/target contains 8144 images
+/tmp/tmp2dfhyzae/images/random contains 5000 images
 </pre></div>
 </div>
 </div>
@@ -706,13 +706,13 @@ the time on our validation set).</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Epoch 1/3
-328/328 - 41s - loss: 0.2102 - accuracy: 0.9292 - val_loss: 0.1365 - val_accuracy: 0.9520 - 41s/epoch - 124ms/step
+328/328 - 41s - loss: 0.2184 - accuracy: 0.9255 - val_loss: 0.1313 - val_accuracy: 0.9528 - 41s/epoch - 124ms/step
 Epoch 2/3
-328/328 - 35s - loss: 0.1001 - accuracy: 0.9637 - val_loss: 0.0911 - val_accuracy: 0.9728 - 35s/epoch - 108ms/step
+328/328 - 36s - loss: 0.1026 - accuracy: 0.9633 - val_loss: 0.1049 - val_accuracy: 0.9660 - 36s/epoch - 109ms/step
 Epoch 3/3
-328/328 - 35s - loss: 0.0684 - accuracy: 0.9740 - val_loss: 0.1135 - val_accuracy: 0.9687 - 35s/epoch - 108ms/step
+328/328 - 36s - loss: 0.0669 - accuracy: 0.9751 - val_loss: 0.1600 - val_accuracy: 0.9528 - 36s/epoch - 108ms/step
 
-&lt;keras.callbacks.History object at 0x7f4afa9f3850&gt;
+&lt;keras.callbacks.History object at 0x7f4cf695beb0&gt;
 </pre></div>
 </div>
 </div>
@@ -976,7 +976,7 @@ as intended.</p>
 <p>From here, we could modify the model to read live images from the camera - we have another
 Arduino tutorial for how to do that <a class="reference external" href="https://github.com/guberti/tvm-arduino-demos/tree/master/examples/person_detection">on GitHub</a>. Alternatively, we could also
 <a class="reference external" href="https://tvm.apache.org/docs/how_to/work_with_microtvm/micro_autotune.html">use TVM’s autotuning capabilities</a> to dramatically improve the model’s performance.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 4 minutes  39.162 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 4 minutes  55.393 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-train-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/b52cec46baf4f78d6bcd94cbe269c8a6/micro_train.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_train.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/sg_execution_times.html b/docs/how_to/work_with_microtvm/sg_execution_times.html
index 562d59de78..e69f531eca 100644
--- a/docs/how_to/work_with_microtvm/sg_execution_times.html
+++ b/docs/how_to/work_with_microtvm/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-microtvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>08:04.172</strong> total execution time for <strong>how_to_work_with_microtvm</strong> files:</p>
+<p><strong>08:23.233</strong> total execution time for <strong>how_to_work_with_microtvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 82%" />
@@ -359,27 +359,27 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_train.html#sphx-glr-how-to-work-with-microtvm-micro-train-py"><span class="std std-ref">5. Training Vision Models for microTVM on Arduino</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_train.py</span></code>)</p></td>
-<td><p>04:39.162</p></td>
+<td><p>04:55.393</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="micro_pytorch.html#sphx-glr-how-to-work-with-microtvm-micro-pytorch-py"><span class="std std-ref">4. microTVM PyTorch Tutorial</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_pytorch.py</span></code>)</p></td>
-<td><p>01:28.613</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="micro_autotune.html#sphx-glr-how-to-work-with-microtvm-micro-autotune-py"><span class="std std-ref">6. Model Tuning with microTVM</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_autotune.py</span></code>)</p></td>
+<td><p>01:28.926</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="micro_autotune.html#sphx-glr-how-to-work-with-microtvm-micro-autotune-py"><span class="std std-ref">6. Model Tuning with microTVM</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_autotune.py</span></code>)</p></td>
-<td><p>01:26.459</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="micro_pytorch.html#sphx-glr-how-to-work-with-microtvm-micro-pytorch-py"><span class="std std-ref">4. microTVM PyTorch Tutorial</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_pytorch.py</span></code>)</p></td>
+<td><p>01:28.633</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="micro_aot.html#sphx-glr-how-to-work-with-microtvm-micro-aot-py"><span class="std std-ref">3. microTVM Ahead-of-Time (AOT) Compilation</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_aot.py</span></code>)</p></td>
-<td><p>00:12.023</p></td>
+<td><p>00:12.170</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_tflite.html#sphx-glr-how-to-work-with-microtvm-micro-tflite-py"><span class="std std-ref">2. microTVM TFLite Tutorial</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_tflite.py</span></code>)</p></td>
-<td><p>00:09.325</p></td>
+<td><p>00:09.516</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="micro_custom_ide.html#sphx-glr-how-to-work-with-microtvm-micro-custom-ide-py"><span class="std std-ref">9. Bring microTVM to your own development environment</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_custom_ide.py</span></code>)</p></td>
-<td><p>00:08.591</p></td>
+<td><p>00:08.595</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_ethosu.html#sphx-glr-how-to-work-with-microtvm-micro-ethosu-py"><span class="std std-ref">7. Running TVM on bare metal Arm(R) Cortex(R)-M55 CPU and Ethos(TM)-U55 NPU with CMSIS-NN</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_ethosu.py</span></code>)</p></td>
diff --git a/docs/how_to/work_with_relay/sg_execution_times.html b/docs/how_to/work_with_relay/sg_execution_times.html
index eaeecd77ca..77cd2ca5da 100644
--- a/docs/how_to/work_with_relay/sg_execution_times.html
+++ b/docs/how_to/work_with_relay/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-relay-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:40.856</strong> total execution time for <strong>how_to_work_with_relay</strong> files:</p>
+<p><strong>00:41.706</strong> total execution time for <strong>how_to_work_with_relay</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,15 +359,15 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="using_pipeline_executor.html#sphx-glr-how-to-work-with-relay-using-pipeline-executor-py"><span class="std std-ref">Using Pipeline Executor in Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_pipeline_executor.py</span></code>)</p></td>
-<td><p>00:35.436</p></td>
+<td><p>00:36.210</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="using_external_lib.html#sphx-glr-how-to-work-with-relay-using-external-lib-py"><span class="std std-ref">Using External Libraries in Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_external_lib.py</span></code>)</p></td>
-<td><p>00:03.254</p></td>
+<td><p>00:03.364</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="build_gcn.html#sphx-glr-how-to-work-with-relay-build-gcn-py"><span class="std std-ref">Building a Graph Convolutional Network</span></a> (<code class="docutils literal notranslate"><span class="pre">build_gcn.py</span></code>)</p></td>
-<td><p>00:02.161</p></td>
+<td><p>00:02.125</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="using_relay_viz.html#sphx-glr-how-to-work-with-relay-using-relay-viz-py"><span class="std std-ref">Use Relay Visualizer to Visualize Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_relay_viz.py</span></code>)</p></td>
diff --git a/docs/how_to/work_with_schedules/intrin_math.html b/docs/how_to/work_with_schedules/intrin_math.html
index 4c17e23500..411b8cce9d 100644
--- a/docs/how_to/work_with_schedules/intrin_math.html
+++ b/docs/how_to/work_with_schedules/intrin_math.html
@@ -559,7 +559,7 @@ The following example customizes CUDA lowering rule for <code class="code docuti
 <a href="../../reference/api/python/ir.html#tvm.ir.register_intrin_lowering" title="tvm.ir.register_intrin_lowering" class="sphx-glr-backref-module-tvm-ir sphx-glr-backref-type-py-function"><span class="n">register_intrin_lowering</span></a><span class="p">(</span><span class="s2">&quot;tir.exp&quot;</span><span class="p">,</span> <span class="n">target</span><span class="o">=</span><span class="s2">&quot;cuda&quot;</span><span class="p">,</span> <span class="n">f</span><span class="o">= [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&lt;function my_cuda_math_rule at 0x7f498c799e50&gt;
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&lt;function my_cuda_math_rule at 0x7f4b80713af0&gt;
 </pre></div>
 </div>
 <p>Register the rule to TVM with override option to override existing rule.
diff --git a/docs/how_to/work_with_schedules/sg_execution_times.html b/docs/how_to/work_with_schedules/sg_execution_times.html
index 018c456ad9..4d462276fc 100644
--- a/docs/how_to/work_with_schedules/sg_execution_times.html
+++ b/docs/how_to/work_with_schedules/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-schedules-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:06.398</strong> total execution time for <strong>how_to_work_with_schedules</strong> files:</p>
+<p><strong>00:09.018</strong> total execution time for <strong>how_to_work_with_schedules</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -359,27 +359,27 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="intrin_math.html#sphx-glr-how-to-work-with-schedules-intrin-math-py"><span class="std std-ref">Intrinsics and Math Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">intrin_math.py</span></code>)</p></td>
-<td><p>00:03.409</p></td>
+<td><p>00:05.720</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tensorize.html#sphx-glr-how-to-work-with-schedules-tensorize-py"><span class="std std-ref">Use Tensorize to Leverage Hardware Intrinsics</span></a> (<code class="docutils literal notranslate"><span class="pre">tensorize.py</span></code>)</p></td>
-<td><p>00:01.236</p></td>
+<td><p>00:01.558</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="reduction.html#sphx-glr-how-to-work-with-schedules-reduction-py"><span class="std std-ref">Reduction</span></a> (<code class="docutils literal notranslate"><span class="pre">reduction.py</span></code>)</p></td>
-<td><p>00:00.747</p></td>
+<td><p>00:00.732</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="scan.html#sphx-glr-how-to-work-with-schedules-scan-py"><span class="std std-ref">Scan and Recurrent Kernel</span></a> (<code class="docutils literal notranslate"><span class="pre">scan.py</span></code>)</p></td>
-<td><p>00:00.744</p></td>
+<td><p>00:00.731</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="extern_op.html#sphx-glr-how-to-work-with-schedules-extern-op-py"><span class="std std-ref">External Tensor Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">extern_op.py</span></code>)</p></td>
-<td><p>00:00.121</p></td>
+<td><p>00:00.123</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tedd.html#sphx-glr-how-to-work-with-schedules-tedd-py"><span class="std std-ref">Use Tensor Expression Debug Display (TEDD) for Visualization</span></a> (<code class="docutils literal notranslate"><span class="pre">tedd.py</span></code>)</p></td>
-<td><p>00:00.060</p></td>
+<td><p>00:00.070</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="schedule_primitives.html#sphx-glr-how-to-work-with-schedules-schedule-primitives-py"><span class="std std-ref">Schedule Primitives in TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">schedule_primitives.py</span></code>)</p></td>
@@ -387,7 +387,7 @@
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tuple_inputs.html#sphx-glr-how-to-work-with-schedules-tuple-inputs-py"><span class="std std-ref">Compute and Reduce with Tuple Inputs</span></a> (<code class="docutils literal notranslate"><span class="pre">tuple_inputs.py</span></code>)</p></td>
-<td><p>00:00.029</p></td>
+<td><p>00:00.031</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/reference/api/python/auto_scheduler.html b/docs/reference/api/python/auto_scheduler.html
index acd0151005..d27bb23c0d 100644
--- a/docs/reference/api/python/auto_scheduler.html
+++ b/docs/reference/api/python/auto_scheduler.html
@@ -1627,7 +1627,7 @@ history states as starting point to perform Evolutionary Search).</p></li>
 
 <dl class="py class">
 <dt class="sig sig-object py" id="tvm.auto_scheduler.SketchPolicy">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">SketchPolicy</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">program_cost_model</span></span><span class="o"><span class="pre">=</span></span><span class="defau [...]
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">SketchPolicy</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">program_cost_model</span></span><span class="o"><span class="pre">=</span></span><span class="defau [...]
 <dd><p>The search policy that searches in a hierarchical search space defined by sketches.
 The policy randomly samples programs from the space defined by sketches and use evolutionary
 search to fine-tune them.</p>
@@ -1911,7 +1911,7 @@ Candidates:
 
 <dl class="py function">
 <dt class="sig sig-object py" id="tvm.auto_scheduler.auto_schedule">
-<span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">auto_schedule</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">search_policy</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em clas [...]
+<span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">auto_schedule</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">search_policy</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em clas [...]
 <dd><p>THIS API IS DEPRECATED.</p>
 <p>Run auto scheduling search for a task.</p>
 <dl class="field-list simple">
diff --git a/docs/reference/api/typedoc/classes/bytestreamreader.html b/docs/reference/api/typedoc/classes/bytestreamreader.html
index b33a5f5fc8..4060820bee 100644
--- a/docs/reference/api/typedoc/classes/bytestreamreader.html
+++ b/docs/reference/api/typedoc/classes/bytestreamreader.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -141,7 +141,7 @@
 					<div class="tsd-signature tsd-kind-icon">bytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Uint8Array</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -151,7 +151,7 @@
 					<div class="tsd-signature tsd-kind-icon">offset<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L44">rpc_server.ts:44</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L44">rpc_server.ts:44</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -168,7 +168,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L65">rpc_server.ts:65</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L65">rpc_server.ts:65</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">Uint8Array</span></h4>
@@ -185,7 +185,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L51">rpc_server.ts:51</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L51">rpc_server.ts:51</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -202,7 +202,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L59">rpc_server.ts:59</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L59">rpc_server.ts:59</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
diff --git a/docs/reference/api/typedoc/classes/cachedcallstack.html b/docs/reference/api/typedoc/classes/cachedcallstack.html
index dc8e6da5d6..1dab1304b1 100644
--- a/docs/reference/api/typedoc/classes/cachedcallstack.html
+++ b/docs/reference/api/typedoc/classes/cachedcallstack.html
@@ -144,7 +144,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L223">memory.ts:223</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L223">memory.ts:223</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -172,7 +172,7 @@
 					<div class="tsd-signature tsd-kind-icon">temp<wbr>Args<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><a href="../interfaces/disposable.html" class="tsd-signature-type">Disposable</a><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = []</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L208">memory.ts:208</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L208">memory.ts:208</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -194,7 +194,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L312">memory.ts:312</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L312">memory.ts:312</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -226,7 +226,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L284">memory.ts:284</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L284">memory.ts:284</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -262,7 +262,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L388">memory.ts:388</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L388">memory.ts:388</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -300,7 +300,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L376">memory.ts:376</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L376">memory.ts:376</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -340,7 +340,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L267">memory.ts:267</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L267">memory.ts:267</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -373,7 +373,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L243">memory.ts:243</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L243">memory.ts:243</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -390,7 +390,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L321">memory.ts:321</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L321">memory.ts:321</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -422,7 +422,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L252">memory.ts:252</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L252">memory.ts:252</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -444,7 +444,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L359">memory.ts:359</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L359">memory.ts:359</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -470,7 +470,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L342">memory.ts:342</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L342">memory.ts:342</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -496,7 +496,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L350">memory.ts:350</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L350">memory.ts:350</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -522,7 +522,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L326">memory.ts:326</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L326">memory.ts:326</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -548,7 +548,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L363">memory.ts:363</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L363">memory.ts:363</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -574,7 +574,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L346">memory.ts:346</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L346">memory.ts:346</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -600,7 +600,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L334">memory.ts:334</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L334">memory.ts:334</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
diff --git a/docs/reference/api/typedoc/classes/dldatatype.html b/docs/reference/api/typedoc/classes/dldatatype.html
index 11be12095e..250d12b60f 100644
--- a/docs/reference/api/typedoc/classes/dldatatype.html
+++ b/docs/reference/api/typedoc/classes/dldatatype.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L359">runtime.ts:359</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L359">runtime.ts:359</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -147,7 +147,7 @@
 					<div class="tsd-signature tsd-kind-icon">bits<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L357">runtime.ts:357</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L357">runtime.ts:357</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -162,7 +162,7 @@
 					<div class="tsd-signature tsd-kind-icon">code<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L355">runtime.ts:355</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L355">runtime.ts:355</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -177,7 +177,7 @@
 					<div class="tsd-signature tsd-kind-icon">lanes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L359">runtime.ts:359</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L359">runtime.ts:359</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -199,7 +199,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L376">runtime.ts:376</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L376">runtime.ts:376</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -216,7 +216,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L367">runtime.ts:367</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L367">runtime.ts:367</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">string</span></h4>
diff --git a/docs/reference/api/typedoc/classes/dldevice.html b/docs/reference/api/typedoc/classes/dldevice.html
index ab4013a5ee..15439b0213 100644
--- a/docs/reference/api/typedoc/classes/dldevice.html
+++ b/docs/reference/api/typedoc/classes/dldevice.html
@@ -118,7 +118,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L299">runtime.ts:299</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L299">runtime.ts:299</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -146,7 +146,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<wbr>Id<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L297">runtime.ts:297</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L297">runtime.ts:297</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -161,7 +161,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L295">runtime.ts:295</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L295">runtime.ts:295</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -183,7 +183,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L320">runtime.ts:320</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L320">runtime.ts:320</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -205,7 +205,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L327">runtime.ts:327</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L327">runtime.ts:327</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">string</span></h4>
diff --git a/docs/reference/api/typedoc/classes/environment.html b/docs/reference/api/typedoc/classes/environment.html
index 30b50b076a..daf3018459 100644
--- a/docs/reference/api/typedoc/classes/environment.html
+++ b/docs/reference/api/typedoc/classes/environment.html
@@ -125,7 +125,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/environment.ts#L86">environment.ts:86</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/environment.ts#L86">environment.ts:86</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -169,7 +169,7 @@
 					<aside class="tsd-sources">
 						<p>Implementation of <a href="../interfaces/libraryprovider.html">LibraryProvider</a>.<a href="../interfaces/libraryprovider.html#imports">imports</a></p>
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/environment.ts#L70">environment.ts:70</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/environment.ts#L70">environment.ts:70</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -179,7 +179,7 @@
 					<div class="tsd-signature tsd-kind-icon">logger<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>msg<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/environment.ts#L69">environment.ts:69</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/environment.ts#L69">environment.ts:69</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -210,7 +210,7 @@
 					<div class="tsd-signature tsd-kind-icon">packedCFunc<wbr>Table<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">ctypes.FTVMWasmPackedCFunc</span><span class="tsd-signature-symbol"> | </span><span class="tsd-signature-type">undefined</span><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = [undefined,]</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/environment.ts#L78">environment.ts:78</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/environment.ts#L78">environment.ts:78</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -228,7 +228,7 @@
 					<div class="tsd-signature tsd-kind-icon">packedCFunc<wbr>Table<wbr>Free<wbr>Id<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = []</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/environment.ts#L84">environment.ts:84</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/environment.ts#L84">environment.ts:84</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -250,7 +250,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/environment.ts#L105">environment.ts:105</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/environment.ts#L105">environment.ts:105</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/ffilibrary.html b/docs/reference/api/typedoc/classes/ffilibrary.html
index a16224a07b..8e51a6abd1 100644
--- a/docs/reference/api/typedoc/classes/ffilibrary.html
+++ b/docs/reference/api/typedoc/classes/ffilibrary.html
@@ -131,7 +131,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L50">runtime.ts:50</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L50">runtime.ts:50</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -156,7 +156,7 @@
 					<div class="tsd-signature tsd-kind-icon">exports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">Function</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L47">runtime.ts:47</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L47">runtime.ts:47</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -166,7 +166,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L46">runtime.ts:46</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L46">runtime.ts:46</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">wasm32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">boolean</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L45">runtime.ts:45</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L45">runtime.ts:45</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -186,7 +186,7 @@
 					<div class="tsd-signature tsd-kind-icon">webGPUContext<span class="tsd-signature-symbol">:</span> <a href="webgpucontext.html" class="tsd-signature-type">WebGPUContext</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L48">runtime.ts:48</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L48">runtime.ts:48</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -203,7 +203,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L77">runtime.ts:77</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L77">runtime.ts:77</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -226,7 +226,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L67">runtime.ts:67</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L67">runtime.ts:67</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -243,7 +243,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L85">runtime.ts:85</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L85">runtime.ts:85</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <a href="cachedcallstack.html" class="tsd-signature-type">CachedCallStack</a></h4>
@@ -260,7 +260,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L96">runtime.ts:96</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L96">runtime.ts:96</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -283,7 +283,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L73">runtime.ts:73</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L73">runtime.ts:73</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
diff --git a/docs/reference/api/typedoc/classes/instance.html b/docs/reference/api/typedoc/classes/instance.html
index a2e1a2f84e..c368e8e89b 100644
--- a/docs/reference/api/typedoc/classes/instance.html
+++ b/docs/reference/api/typedoc/classes/instance.html
@@ -161,7 +161,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L844">runtime.ts:844</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L844">runtime.ts:844</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -224,7 +224,7 @@
 					<div class="tsd-signature tsd-kind-icon">exports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">Function</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L834">runtime.ts:834</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L834">runtime.ts:834</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -234,7 +234,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L833">runtime.ts:833</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L833">runtime.ts:833</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -251,7 +251,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L973">runtime.ts:973</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L973">runtime.ts:973</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -296,7 +296,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L932">runtime.ts:932</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L932">runtime.ts:932</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -318,7 +318,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L901">runtime.ts:901</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L901">runtime.ts:901</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -381,7 +381,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1215">runtime.ts:1215</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1215">runtime.ts:1215</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -412,7 +412,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1000">runtime.ts:1000</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1000">runtime.ts:1000</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -453,7 +453,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1207">runtime.ts:1207</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1207">runtime.ts:1207</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -491,7 +491,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L922">runtime.ts:922</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L922">runtime.ts:922</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -508,7 +508,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1235">runtime.ts:1235</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1235">runtime.ts:1235</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -552,7 +552,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L943">runtime.ts:943</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L943">runtime.ts:943</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -577,7 +577,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1088">runtime.ts:1088</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1088">runtime.ts:1088</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -609,7 +609,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1363">runtime.ts:1363</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1363">runtime.ts:1363</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -640,7 +640,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1123">runtime.ts:1123</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1123">runtime.ts:1123</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -672,7 +672,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1016">runtime.ts:1016</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1016">runtime.ts:1016</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -695,7 +695,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1281">runtime.ts:1281</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1281">runtime.ts:1281</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -729,7 +729,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L986">runtime.ts:986</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L986">runtime.ts:986</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -769,7 +769,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1341">runtime.ts:1341</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1341">runtime.ts:1341</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -817,7 +817,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1055">runtime.ts:1055</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1055">runtime.ts:1055</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -857,7 +857,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1320">runtime.ts:1320</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1320">runtime.ts:1320</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -900,7 +900,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1197">runtime.ts:1197</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1197">runtime.ts:1197</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -938,7 +938,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1491">runtime.ts:1491</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1491">runtime.ts:1491</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -990,7 +990,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1009">runtime.ts:1009</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1009">runtime.ts:1009</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1014,7 +1014,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1151">runtime.ts:1151</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1151">runtime.ts:1151</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1046,7 +1046,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1134">runtime.ts:1134</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1134">runtime.ts:1134</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1078,7 +1078,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1292">runtime.ts:1292</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1292">runtime.ts:1292</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1110,7 +1110,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1223">runtime.ts:1223</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1223">runtime.ts:1223</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1141,7 +1141,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L957">runtime.ts:957</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L957">runtime.ts:957</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/memory.html b/docs/reference/api/typedoc/classes/memory.html
index 2a6106fc95..bed918450b 100644
--- a/docs/reference/api/typedoc/classes/memory.html
+++ b/docs/reference/api/typedoc/classes/memory.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L40">memory.ts:40</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L40">memory.ts:40</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -152,7 +152,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Memory</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L32">memory.ts:32</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L32">memory.ts:32</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -162,7 +162,7 @@
 					<div class="tsd-signature tsd-kind-icon">wasm32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">boolean</span><span class="tsd-signature-symbol"> = true</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L33">memory.ts:33</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L33">memory.ts:33</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -179,7 +179,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L154">memory.ts:154</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L154">memory.ts:154</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -210,7 +210,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L90">memory.ts:90</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L90">memory.ts:90</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -233,7 +233,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L97">memory.ts:97</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L97">memory.ts:97</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -256,7 +256,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L74">memory.ts:74</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L74">memory.ts:74</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -279,7 +279,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L81">memory.ts:81</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L81">memory.ts:81</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -302,7 +302,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L104">memory.ts:104</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L104">memory.ts:104</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -325,7 +325,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L132">memory.ts:132</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L132">memory.ts:132</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -362,7 +362,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L145">memory.ts:145</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L145">memory.ts:145</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -393,7 +393,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L60">memory.ts:60</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L60">memory.ts:60</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -416,7 +416,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L67">memory.ts:67</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L67">memory.ts:67</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -439,7 +439,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L53">memory.ts:53</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L53">memory.ts:53</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -462,7 +462,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L114">memory.ts:114</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L114">memory.ts:114</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -485,7 +485,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L124">memory.ts:124</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L124">memory.ts:124</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -502,7 +502,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/memory.ts#L175">memory.ts:175</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/memory.ts#L175">memory.ts:175</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/module.html b/docs/reference/api/typedoc/classes/module.html
index 7cdc33a6fe..cbb927d868 100644
--- a/docs/reference/api/typedoc/classes/module.html
+++ b/docs/reference/api/typedoc/classes/module.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L614">runtime.ts:614</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L614">runtime.ts:614</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -169,7 +169,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L626">runtime.ts:626</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L626">runtime.ts:626</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -186,7 +186,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L653">runtime.ts:653</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L653">runtime.ts:653</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -218,7 +218,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L641">runtime.ts:641</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L641">runtime.ts:641</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -250,7 +250,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L687">runtime.ts:687</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L687">runtime.ts:687</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/ndarray.html b/docs/reference/api/typedoc/classes/ndarray.html
index 00bd44da6f..66a7db7dec 100644
--- a/docs/reference/api/typedoc/classes/ndarray.html
+++ b/docs/reference/api/typedoc/classes/ndarray.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L401">runtime.ts:401</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L401">runtime.ts:401</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -158,7 +158,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<span class="tsd-signature-symbol">:</span> <a href="dldevice.html" class="tsd-signature-type">DLDevice</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L394">runtime.ts:394</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L394">runtime.ts:394</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -173,7 +173,7 @@
 					<div class="tsd-signature tsd-kind-icon">dtype<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L390">runtime.ts:390</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L390">runtime.ts:390</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -188,7 +188,7 @@
 					<div class="tsd-signature tsd-kind-icon">ndim<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L388">runtime.ts:388</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L388">runtime.ts:388</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -203,7 +203,7 @@
 					<div class="tsd-signature tsd-kind-icon">shape<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L392">runtime.ts:392</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L392">runtime.ts:392</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -225,7 +225,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L480">runtime.ts:480</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L480">runtime.ts:480</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -258,7 +258,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L524">runtime.ts:524</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L524">runtime.ts:524</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -290,7 +290,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L465">runtime.ts:465</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L465">runtime.ts:465</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -307,7 +307,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L458">runtime.ts:458</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L458">runtime.ts:458</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -339,7 +339,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L584">runtime.ts:584</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L584">runtime.ts:584</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -363,7 +363,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L553">runtime.ts:553</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L553">runtime.ts:553</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/packedfunccell.html b/docs/reference/api/typedoc/classes/packedfunccell.html
index 3b8c74822e..862009ae21 100644
--- a/docs/reference/api/typedoc/classes/packedfunccell.html
+++ b/docs/reference/api/typedoc/classes/packedfunccell.html
@@ -117,7 +117,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L248">runtime.ts:248</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L248">runtime.ts:248</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -146,7 +146,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L255">runtime.ts:255</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L255">runtime.ts:255</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -163,7 +163,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L264">runtime.ts:264</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L264">runtime.ts:264</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
diff --git a/docs/reference/api/typedoc/classes/rpcserver.html b/docs/reference/api/typedoc/classes/rpcserver.html
index 845008be00..6687ac3a91 100644
--- a/docs/reference/api/typedoc/classes/rpcserver.html
+++ b/docs/reference/api/typedoc/classes/rpcserver.html
@@ -115,7 +115,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L95">rpc_server.ts:95</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L95">rpc_server.ts:95</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">get<wbr>Imports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">unknown</span><span class="tsd-signat [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L84">rpc_server.ts:84</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L84">rpc_server.ts:84</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -201,7 +201,7 @@
 					<div class="tsd-signature tsd-kind-icon">key<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L80">rpc_server.ts:80</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L80">rpc_server.ts:80</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -211,7 +211,7 @@
 					<div class="tsd-signature tsd-kind-icon">logger<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>msg<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L83">rpc_server.ts:83</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L83">rpc_server.ts:83</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -242,7 +242,7 @@
 					<div class="tsd-signature tsd-kind-icon">socket<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">WebSocket</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L81">rpc_server.ts:81</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L81">rpc_server.ts:81</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -252,7 +252,7 @@
 					<div class="tsd-signature tsd-kind-icon">state<span class="tsd-signature-symbol">:</span> <a href="../enums/rpcserverstate.html" class="tsd-signature-type">RPCServerState</a><span class="tsd-signature-symbol"> = RPCServerState.InitHeader</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L82">rpc_server.ts:82</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L82">rpc_server.ts:82</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -262,7 +262,7 @@
 					<div class="tsd-signature tsd-kind-icon">url<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L79">rpc_server.ts:79</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L79">rpc_server.ts:79</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/classes/runtimecontext.html b/docs/reference/api/typedoc/classes/runtimecontext.html
index d4d1aa080e..b067b15126 100644
--- a/docs/reference/api/typedoc/classes/runtimecontext.html
+++ b/docs/reference/api/typedoc/classes/runtimecontext.html
@@ -132,7 +132,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L148">runtime.ts:148</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L148">runtime.ts:148</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -172,7 +172,7 @@
 					<div class="tsd-signature tsd-kind-icon">array<wbr>Get<wbr>Item<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L143">runtime.ts:143</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L143">runtime.ts:143</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -182,7 +182,7 @@
 					<div class="tsd-signature tsd-kind-icon">array<wbr>Get<wbr>Size<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L144">runtime.ts:144</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L144">runtime.ts:144</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -192,7 +192,7 @@
 					<div class="tsd-signature tsd-kind-icon">array<wbr>Make<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L145">runtime.ts:145</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L145">runtime.ts:145</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -202,7 +202,7 @@
 					<div class="tsd-signature tsd-kind-icon">get<wbr>Sys<wbr>Lib<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L146">runtime.ts:146</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L146">runtime.ts:146</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -219,7 +219,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L189">runtime.ts:189</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L189">runtime.ts:189</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -263,7 +263,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L163">runtime.ts:163</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L163">runtime.ts:163</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -280,7 +280,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L208">runtime.ts:208</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L208">runtime.ts:208</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-type-parameters-title">Type parameters</h4>
@@ -309,7 +309,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L157">runtime.ts:157</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L157">runtime.ts:157</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -326,7 +326,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L167">runtime.ts:167</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L167">runtime.ts:167</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -343,7 +343,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L198">runtime.ts:198</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L198">runtime.ts:198</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-type-parameters-title">Type parameters</h4>
diff --git a/docs/reference/api/typedoc/classes/scalar.html b/docs/reference/api/typedoc/classes/scalar.html
index 4d80270006..f0e80d0e18 100644
--- a/docs/reference/api/typedoc/classes/scalar.html
+++ b/docs/reference/api/typedoc/classes/scalar.html
@@ -112,7 +112,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L235">runtime.ts:235</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L235">runtime.ts:235</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -137,7 +137,7 @@
 					<div class="tsd-signature tsd-kind-icon">dtype<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L235">runtime.ts:235</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L235">runtime.ts:235</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -152,7 +152,7 @@
 					<div class="tsd-signature tsd-kind-icon">value<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L233">runtime.ts:233</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L233">runtime.ts:233</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/tvmarray.html b/docs/reference/api/typedoc/classes/tvmarray.html
index 04c2d2c282..2a34696262 100644
--- a/docs/reference/api/typedoc/classes/tvmarray.html
+++ b/docs/reference/api/typedoc/classes/tvmarray.html
@@ -133,7 +133,7 @@
 							<aside class="tsd-sources">
 								<p>Overrides <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#constructor">constructor</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L784">runtime.ts:784</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L784">runtime.ts:784</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -162,7 +162,7 @@
 					<aside class="tsd-sources">
 						<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#ctx">ctx</a></p>
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L703">runtime.ts:703</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L703">runtime.ts:703</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -180,7 +180,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#dispose">dispose</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L715">runtime.ts:715</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L715">runtime.ts:715</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -197,7 +197,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L804">runtime.ts:804</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L804">runtime.ts:804</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -230,7 +230,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#gethandle">getHandle</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L730">runtime.ts:730</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L730">runtime.ts:730</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -262,7 +262,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L796">runtime.ts:796</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L796">runtime.ts:796</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -283,7 +283,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#typeindex">typeIndex</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L738">runtime.ts:738</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L738">runtime.ts:738</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -306,7 +306,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#typekey">typeKey</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L758">runtime.ts:758</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L758">runtime.ts:758</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/tvmobject.html b/docs/reference/api/typedoc/classes/tvmobject.html
index 145fba8b8b..2e9690bd21 100644
--- a/docs/reference/api/typedoc/classes/tvmobject.html
+++ b/docs/reference/api/typedoc/classes/tvmobject.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L703">runtime.ts:703</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L703">runtime.ts:703</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -158,7 +158,7 @@
 					<div class="tsd-signature tsd-kind-icon">ctx<span class="tsd-signature-symbol">:</span> <a href="runtimecontext.html" class="tsd-signature-type">RuntimeContext</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L703">runtime.ts:703</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L703">runtime.ts:703</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -175,7 +175,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L715">runtime.ts:715</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L715">runtime.ts:715</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -192,7 +192,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L730">runtime.ts:730</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L730">runtime.ts:730</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -224,7 +224,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L738">runtime.ts:738</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L738">runtime.ts:738</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -246,7 +246,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L758">runtime.ts:758</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L758">runtime.ts:758</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/webgpucontext.html b/docs/reference/api/typedoc/classes/webgpucontext.html
index a52f41d0c0..b68d53648e 100644
--- a/docs/reference/api/typedoc/classes/webgpucontext.html
+++ b/docs/reference/api/typedoc/classes/webgpucontext.html
@@ -120,7 +120,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/webgpu.ts#L57">webgpu.ts:57</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/webgpu.ts#L57">webgpu.ts:57</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -145,7 +145,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">GPUDevice</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/webgpu.ts#L50">webgpu.ts:50</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/webgpu.ts#L50">webgpu.ts:50</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -155,7 +155,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/webgpu.ts#L51">webgpu.ts:51</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/webgpu.ts#L51">webgpu.ts:51</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -172,7 +172,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/webgpu.ts#L84">webgpu.ts:84</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/webgpu.ts#L84">webgpu.ts:84</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -209,7 +209,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/webgpu.ts#L172">webgpu.ts:172</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/webgpu.ts#L172">webgpu.ts:172</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -238,7 +238,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/webgpu.ts#L67">webgpu.ts:67</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/webgpu.ts#L67">webgpu.ts:67</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/enums/argtypecode.html b/docs/reference/api/typedoc/enums/argtypecode.html
index aea5127cae..81cbf57fd7 100644
--- a/docs/reference/api/typedoc/enums/argtypecode.html
+++ b/docs/reference/api/typedoc/enums/argtypecode.html
@@ -106,7 +106,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLDevice<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 6</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L242">ctypes.ts:242</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L242">ctypes.ts:242</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -116,7 +116,7 @@
 					<div class="tsd-signature tsd-kind-icon">Float<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L238">ctypes.ts:238</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L238">ctypes.ts:238</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -126,7 +126,7 @@
 					<div class="tsd-signature tsd-kind-icon">Int<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L236">ctypes.ts:236</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L236">ctypes.ts:236</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -136,7 +136,7 @@
 					<div class="tsd-signature tsd-kind-icon">Null<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L240">ctypes.ts:240</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L240">ctypes.ts:240</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -146,7 +146,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMBytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 12</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L248">ctypes.ts:248</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L248">ctypes.ts:248</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -156,7 +156,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMDLTensor<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 7</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L243">ctypes.ts:243</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L243">ctypes.ts:243</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -166,7 +166,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMData<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 5</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L241">ctypes.ts:241</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L241">ctypes.ts:241</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMModule<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 9</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L245">ctypes.ts:245</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L245">ctypes.ts:245</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -186,7 +186,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMNDArray<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 13</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L249">ctypes.ts:249</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L249">ctypes.ts:249</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -196,7 +196,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMObject<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L244">ctypes.ts:244</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L244">ctypes.ts:244</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -206,7 +206,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMObjectRValue<wbr>Ref<wbr>Arg<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 14</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L250">ctypes.ts:250</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L250">ctypes.ts:250</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -216,7 +216,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMOpaque<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 3</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L239">ctypes.ts:239</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L239">ctypes.ts:239</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -226,7 +226,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMPacked<wbr>Func<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 10</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L246">ctypes.ts:246</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L246">ctypes.ts:246</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -236,7 +236,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 11</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L247">ctypes.ts:247</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L247">ctypes.ts:247</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -246,7 +246,7 @@
 					<div class="tsd-signature tsd-kind-icon">UInt<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L237">ctypes.ts:237</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L237">ctypes.ts:237</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/aynccallbackcode.html b/docs/reference/api/typedoc/enums/aynccallbackcode.html
index 0f91b6dee9..820a973e3a 100644
--- a/docs/reference/api/typedoc/enums/aynccallbackcode.html
+++ b/docs/reference/api/typedoc/enums/aynccallbackcode.html
@@ -93,7 +93,7 @@
 					<div class="tsd-signature tsd-kind-icon">k<wbr>Exception<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 5</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L812">runtime.ts:812</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L812">runtime.ts:812</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -103,7 +103,7 @@
 					<div class="tsd-signature tsd-kind-icon">k<wbr>Return<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L811">runtime.ts:811</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L811">runtime.ts:811</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/dldatatypecode.html b/docs/reference/api/typedoc/enums/dldatatypecode.html
index c34e705470..bb82eac524 100644
--- a/docs/reference/api/typedoc/enums/dldatatypecode.html
+++ b/docs/reference/api/typedoc/enums/dldatatypecode.html
@@ -95,7 +95,7 @@
 					<div class="tsd-signature tsd-kind-icon">Float<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L339">runtime.ts:339</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L339">runtime.ts:339</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -105,7 +105,7 @@
 					<div class="tsd-signature tsd-kind-icon">Int<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L337">runtime.ts:337</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L337">runtime.ts:337</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -115,7 +115,7 @@
 					<div class="tsd-signature tsd-kind-icon">Opaque<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 3</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L340">runtime.ts:340</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L340">runtime.ts:340</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -125,7 +125,7 @@
 					<div class="tsd-signature tsd-kind-icon">UInt<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L338">runtime.ts:338</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L338">runtime.ts:338</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/rpcserverstate.html b/docs/reference/api/typedoc/enums/rpcserverstate.html
index 5c22a97d1a..21063cc875 100644
--- a/docs/reference/api/typedoc/enums/rpcserverstate.html
+++ b/docs/reference/api/typedoc/enums/rpcserverstate.html
@@ -90,7 +90,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Header<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L29">rpc_server.ts:29</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L29">rpc_server.ts:29</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -100,7 +100,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Header<wbr>Key<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L30">rpc_server.ts:30</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L30">rpc_server.ts:30</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -110,7 +110,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Server<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L31">rpc_server.ts:31</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L31">rpc_server.ts:31</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -120,7 +120,7 @@
 					<div class="tsd-signature tsd-kind-icon">Receive<wbr>Packet<wbr>Body<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L34">rpc_server.ts:34</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L34">rpc_server.ts:34</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -130,7 +130,7 @@
 					<div class="tsd-signature tsd-kind-icon">Receive<wbr>Packet<wbr>Header<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L33">rpc_server.ts:33</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L33">rpc_server.ts:33</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -140,7 +140,7 @@
 					<div class="tsd-signature tsd-kind-icon">Wait<wbr>For<wbr>Callback<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L32">rpc_server.ts:32</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L32">rpc_server.ts:32</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/sizeof.html b/docs/reference/api/typedoc/enums/sizeof.html
index 1be50b1e7c..b584a06be8 100644
--- a/docs/reference/api/typedoc/enums/sizeof.html
+++ b/docs/reference/api/typedoc/enums/sizeof.html
@@ -100,7 +100,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLData<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = I32</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L228">ctypes.ts:228</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L228">ctypes.ts:228</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -110,7 +110,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLDevice<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = I32 + I32</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L229">ctypes.ts:229</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L229">ctypes.ts:229</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -120,7 +120,7 @@
 					<div class="tsd-signature tsd-kind-icon">F32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L225">ctypes.ts:225</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L225">ctypes.ts:225</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -130,7 +130,7 @@
 					<div class="tsd-signature tsd-kind-icon">F64<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L226">ctypes.ts:226</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L226">ctypes.ts:226</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -140,7 +140,7 @@
 					<div class="tsd-signature tsd-kind-icon">I32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L223">ctypes.ts:223</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L223">ctypes.ts:223</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -150,7 +150,7 @@
 					<div class="tsd-signature tsd-kind-icon">I64<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L224">ctypes.ts:224</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L224">ctypes.ts:224</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -160,7 +160,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMValue<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L227">ctypes.ts:227</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L227">ctypes.ts:227</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -170,7 +170,7 @@
 					<div class="tsd-signature tsd-kind-icon">U16<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L222">ctypes.ts:222</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L222">ctypes.ts:222</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -180,7 +180,7 @@
 					<div class="tsd-signature tsd-kind-icon">U8<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L221">ctypes.ts:221</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L221">ctypes.ts:221</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/index.html b/docs/reference/api/typedoc/index.html
index 1b11a3be6e..aa3f7cfcad 100644
--- a/docs/reference/api/typedoc/index.html
+++ b/docs/reference/api/typedoc/index.html
@@ -182,7 +182,7 @@
 					<div class="tsd-signature tsd-kind-icon">FObject<wbr>Constructor<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, lib<span class="tsd-signature-symbol">: </span><a href="classes/ffilibrary.html" class="tsd-signature-type">FFILibrary</a>, ctx<span class="tsd-signature-symbol">: </span><a href="classes/runtimecontext.html" class="t [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L778">runtime.ts:778</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L778">runtime.ts:778</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -224,7 +224,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Alloc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>shape<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, ndim<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, dtypeCode<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, dtypeBits<span class="tsd [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L113">ctypes.ts:113</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L113">ctypes.ts:113</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -288,7 +288,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>From<wbr>Bytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, data<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nbytes<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">num [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L129">ctypes.ts:129</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L129">ctypes.ts:129</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -332,7 +332,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>From<wbr>To<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>from<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, to<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, stream<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-sig [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L145">ctypes.ts:145</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L145">ctypes.ts:145</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -376,7 +376,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>ToBytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, data<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nbytes<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</sp [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L137">ctypes.ts:137</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L137">ctypes.ts:137</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -420,7 +420,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L122">ctypes.ts:122</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L122">ctypes.ts:122</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -456,7 +456,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMBackend<wbr>PackedCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>argValues<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, argCodes<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nargs<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number< [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L161">ctypes.ts:161</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L161">ctypes.ts:161</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -508,7 +508,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMCFunc<wbr>Set<wbr>Return<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>ret<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, value<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCode<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signa [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L78">ctypes.ts:78</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L78">ctypes.ts:78</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -556,7 +556,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMCb<wbr>Arg<wbr>ToReturn<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>value<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, code<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span c [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L84">ctypes.ts:84</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L84">ctypes.ts:84</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -595,7 +595,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Call<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>func<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, argValues<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCode<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-t [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L68">ctypes.ts:68</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L68">ctypes.ts:68</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -651,7 +651,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>func<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L58">ctypes.ts:58</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L58">ctypes.ts:58</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -687,7 +687,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Get<wbr>Global<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>name<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span cla [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L101">ctypes.ts:101</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L101">ctypes.ts:101</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -726,7 +726,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>List<wbr>Global<wbr>Names<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>outSize<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, outArray<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&g [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L89">ctypes.ts:89</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L89">ctypes.ts:89</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -765,7 +765,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Register<wbr>Global<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>name<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, f<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, override<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</spa [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L95">ctypes.ts:95</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L95">ctypes.ts:95</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -808,7 +808,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMGet<wbr>Last<wbr>Error<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L34">ctypes.ts:34</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L34">ctypes.ts:34</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -838,7 +838,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L53">ctypes.ts:53</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L53">ctypes.ts:53</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -874,7 +874,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Get<wbr>Function<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, funcName<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, queryImports<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">numbe [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L42">ctypes.ts:42</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L42">ctypes.ts:42</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -922,7 +922,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Import<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, dep<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-si [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L48">ctypes.ts:48</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L48">ctypes.ts:48</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -962,7 +962,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>obj<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L169">ctypes.ts:169</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L169">ctypes.ts:169</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -998,7 +998,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Get<wbr>Type<wbr>Index<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>obj<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out_tindex<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt;  [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L174">ctypes.ts:174</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L174">ctypes.ts:174</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1037,7 +1037,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Type<wbr>Index2<wbr>Key<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>type_index<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, out_type_key<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><spa [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L179">ctypes.ts:179</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L179">ctypes.ts:179</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1076,7 +1076,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Type<wbr>Key2<wbr>Index<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>type_key<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out_tindex<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol">  [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L184">ctypes.ts:184</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L184">ctypes.ts:184</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1115,7 +1115,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMSynchronize<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>deviceType<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, deviceId<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, stream<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signatur [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L151">ctypes.ts:151</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L151">ctypes.ts:151</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1157,7 +1157,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Alloc<wbr>Space<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>size<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L189">ctypes.ts:189</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L189">ctypes.ts:189</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1193,7 +1193,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Free<wbr>Space<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>ptr<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L192">ctypes.ts:192</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L192">ctypes.ts:192</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1229,7 +1229,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Func<wbr>Create<wbr>FromCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>resource<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&g [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L209">ctypes.ts:209</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L209">ctypes.ts:209</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1269,7 +1269,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>PackedCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>args<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCodes<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nargs<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L201">ctypes.ts:201</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L201">ctypes.ts:201</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1321,7 +1321,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>PackedCFunc<wbr>Finalizer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>resourceHandle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L215">ctypes.ts:215</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L215">ctypes.ts:215</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1357,7 +1357,7 @@
 					<div class="tsd-signature tsd-kind-icon">GPUPointer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/webgpu.ts#L25">webgpu.ts:25</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/webgpu.ts#L25">webgpu.ts:25</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1372,7 +1372,7 @@
 					<div class="tsd-signature tsd-kind-icon">Packed<wbr>Func<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">...</span>args<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol"> &amp; </span><a href="interfaces/disp [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L37">runtime.ts:37</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L37">runtime.ts:37</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1387,7 +1387,7 @@
 					<div class="tsd-signature tsd-kind-icon">Pointer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L25">ctypes.ts:25</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L25">ctypes.ts:25</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1402,7 +1402,7 @@
 					<div class="tsd-signature tsd-kind-icon">Ptr<wbr>Offset<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/ctypes.ts#L28">ctypes.ts:28</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/ctypes.ts#L28">ctypes.ts:28</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1417,7 +1417,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMObject<wbr>Base<span class="tsd-signature-symbol">:</span> <a href="classes/tvmobject.html" class="tsd-signature-type">TVMObject</a><span class="tsd-signature-symbol"> | </span><a href="classes/ndarray.html" class="tsd-signature-type">NDArray</a><span class="tsd-signature-symbol"> | </span><a href="classes/module.html" class="tsd-signature-type">Module</a><span class="tsd-signature-symbol"> | </span><a href="index.html#packedfunc" class="t [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L781">runtime.ts:781</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L781">runtime.ts:781</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1435,7 +1435,7 @@
 					<div class="tsd-signature tsd-kind-icon">RPC_<wbr>MAGIC<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">1045105</span><span class="tsd-signature-symbol"> = 1045105</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/rpc_server.ts#L38">rpc_server.ts:38</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/rpc_server.ts#L38">rpc_server.ts:38</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1457,7 +1457,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/support.ts#L25">support.ts:25</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/support.ts#L25">support.ts:25</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1489,7 +1489,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/support.ts#L39">support.ts:39</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/support.ts#L39">support.ts:39</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1518,7 +1518,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/support.ts#L52">support.ts:52</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/support.ts#L52">support.ts:52</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1555,7 +1555,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/compact.ts#L38">compact.ts:38</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/compact.ts#L38">compact.ts:38</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1586,7 +1586,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/webgpu.ts#L30">webgpu.ts:30</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/webgpu.ts#L30">webgpu.ts:30</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1608,7 +1608,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/environment.ts#L32">environment.ts:32</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/environment.ts#L32">environment.ts:32</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1639,7 +1639,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/compact.ts#L24">compact.ts:24</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/compact.ts#L24">compact.ts:24</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1661,7 +1661,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L1749">runtime.ts:1749</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L1749">runtime.ts:1749</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1726,7 +1726,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/support.ts#L62">support.ts:62</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/support.ts#L62">support.ts:62</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1748,7 +1748,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLData<wbr>Type<wbr>Code<wbr>ToStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L343">runtime.ts:343</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L343">runtime.ts:343</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1757,7 +1757,7 @@
 						<div class="tsd-signature tsd-kind-icon">0<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;int&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L344">runtime.ts:344</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L344">runtime.ts:344</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1767,7 +1767,7 @@
 						<div class="tsd-signature tsd-kind-icon">1<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;uint&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L345">runtime.ts:345</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L345">runtime.ts:345</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1777,7 +1777,7 @@
 						<div class="tsd-signature tsd-kind-icon">2<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;float&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L346">runtime.ts:346</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L346">runtime.ts:346</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1787,7 +1787,7 @@
 						<div class="tsd-signature tsd-kind-icon">3<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;handle&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L347">runtime.ts:347</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L347">runtime.ts:347</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1798,7 +1798,7 @@
 					<div class="tsd-signature tsd-kind-icon">Device<wbr>Enum<wbr>ToStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L272">runtime.ts:272</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L272">runtime.ts:272</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1807,7 +1807,7 @@
 						<div class="tsd-signature tsd-kind-icon">1<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;cpu&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L273">runtime.ts:273</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L273">runtime.ts:273</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1817,7 +1817,7 @@
 						<div class="tsd-signature tsd-kind-icon">15<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;webgpu&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L277">runtime.ts:277</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L277">runtime.ts:277</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1827,7 +1827,7 @@
 						<div class="tsd-signature tsd-kind-icon">2<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;cuda&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L274">runtime.ts:274</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L274">runtime.ts:274</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1837,7 +1837,7 @@
 						<div class="tsd-signature tsd-kind-icon">4<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;opencl&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L275">runtime.ts:275</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L275">runtime.ts:275</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1847,7 +1847,7 @@
 						<div class="tsd-signature tsd-kind-icon">8<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;metal&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L276">runtime.ts:276</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L276">runtime.ts:276</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1858,7 +1858,7 @@
 					<div class="tsd-signature tsd-kind-icon">Device<wbr>Str<wbr>ToEnum<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L280">runtime.ts:280</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L280">runtime.ts:280</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1867,7 +1867,7 @@
 						<div class="tsd-signature tsd-kind-icon">cl<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 4</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L283">runtime.ts:283</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L283">runtime.ts:283</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1877,7 +1877,7 @@
 						<div class="tsd-signature tsd-kind-icon">cpu<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 1</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L281">runtime.ts:281</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L281">runtime.ts:281</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1887,7 +1887,7 @@
 						<div class="tsd-signature tsd-kind-icon">cuda<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 2</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L282">runtime.ts:282</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L282">runtime.ts:282</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1897,7 +1897,7 @@
 						<div class="tsd-signature tsd-kind-icon">metal<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 8</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L286">runtime.ts:286</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L286">runtime.ts:286</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1907,7 +1907,7 @@
 						<div class="tsd-signature tsd-kind-icon">opencl<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 4</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L284">runtime.ts:284</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L284">runtime.ts:284</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1917,7 +1917,7 @@
 						<div class="tsd-signature tsd-kind-icon">vulkan<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 7</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L285">runtime.ts:285</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L285">runtime.ts:285</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1927,7 +1927,7 @@
 						<div class="tsd-signature tsd-kind-icon">webgpu<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 15</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/runtime.ts#L287">runtime.ts:287</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/runtime.ts#L287">runtime.ts:287</a></li>
 							</ul>
 						</aside>
 					</section>
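The two tables above give the web runtime's device name/enum pairs in both directions: DeviceStrToEnum maps cpu=1, cuda=2, cl/opencl=4, vulkan=7, metal=8, webgpu=15, and DeviceEnumToStr maps the numeric codes 1, 2, 4, 8, 15 back to their names. As an illustration only, a minimal TypeScript sketch of such a lookup follows; the resolveDeviceType helper and the local tables are invented for this sketch and are not part of the documented API.

// Illustrative only: local copies of the device name <-> enum tables shown above.
const DEVICE_STR_TO_ENUM: Record<string, number> = {
  cpu: 1, cuda: 2, cl: 4, opencl: 4, vulkan: 7, metal: 8, webgpu: 15,
};
const DEVICE_ENUM_TO_STR: Record<number, string> = {
  1: "cpu", 2: "cuda", 4: "opencl", 8: "metal", 15: "webgpu",
};

// Hypothetical helper: accept a device name or a numeric code and return the
// numeric device type, throwing on unknown input.
function resolveDeviceType(device: string | number): number {
  if (typeof device === "number") {
    if (!(device in DEVICE_ENUM_TO_STR)) {
      throw new Error(`Unknown device enum: ${device}`);
    }
    return device;
  }
  const enumValue = DEVICE_STR_TO_ENUM[device];
  if (enumValue === undefined) {
    throw new Error(`Unknown device string: ${device}`);
  }
  return enumValue;
}

// Both forms resolve to the same numeric code.
console.log(resolveDeviceType("webgpu")); // 15
console.log(resolveDeviceType(2));        // 2 ("cuda")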
diff --git a/docs/reference/api/typedoc/interfaces/disposable.html b/docs/reference/api/typedoc/interfaces/disposable.html
index e8ffa3920f..e0b12181a4 100644
--- a/docs/reference/api/typedoc/interfaces/disposable.html
+++ b/docs/reference/api/typedoc/interfaces/disposable.html
@@ -115,7 +115,7 @@
 					<div class="tsd-signature tsd-kind-icon">dispose<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/types.ts#L52">types.ts:52</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/types.ts#L52">types.ts:52</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/interfaces/functioninfo.html b/docs/reference/api/typedoc/interfaces/functioninfo.html
index 68bdb6efbd..4414d33cf0 100644
--- a/docs/reference/api/typedoc/interfaces/functioninfo.html
+++ b/docs/reference/api/typedoc/interfaces/functioninfo.html
@@ -95,7 +95,7 @@
 					<div class="tsd-signature tsd-kind-icon">arg_<wbr>types<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/webgpu.ts#L41">webgpu.ts:41</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/webgpu.ts#L41">webgpu.ts:41</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -105,7 +105,7 @@
 					<div class="tsd-signature tsd-kind-icon">launch_<wbr>param_<wbr>tags<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/webgpu.ts#L42">webgpu.ts:42</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/webgpu.ts#L42">webgpu.ts:42</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -115,7 +115,7 @@
 					<div class="tsd-signature tsd-kind-icon">name<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/webgpu.ts#L40">webgpu.ts:40</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/webgpu.ts#L40">webgpu.ts:40</a></li>
 						</ul>
 					</aside>
 				</section>
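The FunctionInfo interface documented above carries a kernel's name together with its arg_types and launch_param_tags string arrays. Below is a minimal TypeScript sketch of an object with that shape; the kernel name, argument types, and launch-parameter tags are hypothetical values chosen only to illustrate the fields.

// Shape of the FunctionInfo interface documented above; field names match the
// typedoc entries (name, arg_types, launch_param_tags).
interface FunctionInfo {
  name: string;
  arg_types: Array<string>;
  launch_param_tags: Array<string>;
}

// Hypothetical record for a GPU kernel; the concrete values are made up.
const addOneInfo: FunctionInfo = {
  name: "add_one_kernel",
  arg_types: ["handle", "handle", "int32"],
  launch_param_tags: ["blockIdx.x", "threadIdx.x"],
};

console.log(`${addOneInfo.name} takes ${addOneInfo.arg_types.length} arguments`);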
diff --git a/docs/reference/api/typedoc/interfaces/libraryprovider.html b/docs/reference/api/typedoc/interfaces/libraryprovider.html
index 309b35975a..2c77b1db0a 100644
--- a/docs/reference/api/typedoc/interfaces/libraryprovider.html
+++ b/docs/reference/api/typedoc/interfaces/libraryprovider.html
@@ -112,7 +112,7 @@
 					<div class="tsd-signature tsd-kind-icon">imports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/types.ts#L34">types.ts:34</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/types.ts#L34">types.ts:34</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -127,7 +127,7 @@
 					<div class="tsd-signature tsd-kind-icon">start<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>inst<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">Instance</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/d776bccc1/web/src/types.ts#L39">types.ts:39</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/35ac2262e/web/src/types.ts#L39">types.ts:39</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/searchindex.js b/docs/searchindex.js
index 96fb803095..10fcb59d75 100644
--- a/docs/searchindex.js
+++ b/docs/searchindex.js
@@ -1 +1 @@
-Search.setIndex({docnames:["arch/benchmark","arch/convert_layout","arch/debugger","arch/device_target_interactions","arch/frontend/tensorflow","arch/hybrid_script","arch/index","arch/inferbound","arch/introduction_to_module_serialization","arch/microtvm_design","arch/microtvm_project_api","arch/model_library_format","arch/pass_infra","arch/relay_intro","arch/relay_op_strategy","arch/runtime","arch/runtimes/vulkan","arch/security","arch/virtual_machine","contribute/ci","contribute/code_gu [...]
\ No newline at end of file
+Search.setIndex({docnames:["arch/benchmark","arch/convert_layout","arch/debugger","arch/device_target_interactions","arch/frontend/tensorflow","arch/hybrid_script","arch/index","arch/inferbound","arch/introduction_to_module_serialization","arch/microtvm_design","arch/microtvm_project_api","arch/model_library_format","arch/pass_infra","arch/relay_intro","arch/relay_op_strategy","arch/runtime","arch/runtimes/vulkan","arch/security","arch/virtual_machine","contribute/ci","contribute/code_gu [...]
\ No newline at end of file
diff --git a/docs/topic/vta/tutorials/autotvm/sg_execution_times.html b/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
index f75aeb908d..7058c4f2fe 100644
--- a/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-autotvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:35.288</strong> total execution time for <strong>topic_vta_tutorials_autotvm</strong> files:</p>
+<p><strong>00:37.828</strong> total execution time for <strong>topic_vta_tutorials_autotvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 82%" />
@@ -359,7 +359,7 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_relay_vta.html#sphx-glr-topic-vta-tutorials-autotvm-tune-relay-vta-py"><span class="std std-ref">Auto-tuning a convolutional network on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_vta.py</span></code>)</p></td>
-<td><p>00:35.280</p></td>
+<td><p>00:37.820</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_alu_vta.html#sphx-glr-topic-vta-tutorials-autotvm-tune-alu-vta-py"><span class="std std-ref">Auto-tuning a ALU fused op on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_alu_vta.py</span></code>)</p></td>
diff --git a/docs/topic/vta/tutorials/frontend/deploy_classification.html b/docs/topic/vta/tutorials/frontend/deploy_classification.html
index b2f537ed15..4928281275 100644
--- a/docs/topic/vta/tutorials/frontend/deploy_classification.html
+++ b/docs/topic/vta/tutorials/frontend/deploy_classification.html
@@ -593,7 +593,7 @@ and dense layer which will both be executed in fp32 on the CPU.</p></li>
   warnings.warn(
 /workspace/vta/tutorials/frontend/deploy_classification.py:212: DeprecationWarning: legacy graph executor behavior of producing json / lib / params will be removed in the next release. Please see documents of tvm.contrib.graph_executor.GraphModule for the  new recommended usage.
   graph, lib, params = relay.build(
-resnet18_v1 inference graph built in 37.88s!
+resnet18_v1 inference graph built in 42.06s!
 </pre></div>
 </div>
 </div>
@@ -690,7 +690,7 @@ resnet18_v1 prediction for sample 0
         #5: weasel
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  5.199 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  9.415 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-topic-vta-tutorials-frontend-deploy-classification-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../../../_downloads/9e8de33a5822b31748bfd76861009f92/deploy_classification.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_classification.py</span></code></a></p>
diff --git a/docs/topic/vta/tutorials/frontend/deploy_detection.html b/docs/topic/vta/tutorials/frontend/deploy_detection.html
index e2b1824287..ba8cd3ee08 100644
--- a/docs/topic/vta/tutorials/frontend/deploy_detection.html
+++ b/docs/topic/vta/tutorials/frontend/deploy_detection.html
@@ -611,7 +611,7 @@ and dense layer which will both be executed in fp32 on the CPU.</p></li>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/workspace/python/tvm/relay/build_module.py:345: DeprecationWarning: Please use input parameter mod (tvm.IRModule) instead of deprecated parameter mod (tvm.relay.function.Function)
   warnings.warn(
-yolov3-tiny inference graph built in 25.82s!
+yolov3-tiny inference graph built in 28.81s!
 </pre></div>
 </div>
 </div>
@@ -696,7 +696,7 @@ Download test image</p>
         alu_counter     :           849056
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  9.293 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  12.453 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-topic-vta-tutorials-frontend-deploy-detection-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../../../_downloads/65b9451c8de050d7cd9da2fe5a49acc6/deploy_detection.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_detection.py</span></code></a></p>
diff --git a/docs/topic/vta/tutorials/frontend/sg_execution_times.html b/docs/topic/vta/tutorials/frontend/sg_execution_times.html
index 6c47d5f7b8..4ca7419ae5 100644
--- a/docs/topic/vta/tutorials/frontend/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/frontend/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-frontend-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>02:14.491</strong> total execution time for <strong>topic_vta_tutorials_frontend</strong> files:</p>
+<p><strong>02:21.869</strong> total execution time for <strong>topic_vta_tutorials_frontend</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,11 +359,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_detection.html#sphx-glr-topic-vta-tutorials-frontend-deploy-detection-py"><span class="std std-ref">Deploy Pretrained Vision Detection Model from Darknet on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_detection.py</span></code>)</p></td>
-<td><p>01:09.293</p></td>
+<td><p>01:12.453</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_classification.html#sphx-glr-topic-vta-tutorials-frontend-deploy-classification-py"><span class="std std-ref">Deploy Pretrained Vision Model from MxNet on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_classification.py</span></code>)</p></td>
-<td><p>01:05.199</p></td>
+<td><p>01:09.415</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/topic/vta/tutorials/optimize/sg_execution_times.html b/docs/topic/vta/tutorials/optimize/sg_execution_times.html
index eaa8ad4c9f..5295316525 100644
--- a/docs/topic/vta/tutorials/optimize/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/optimize/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-optimize-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:03.388</strong> total execution time for <strong>topic_vta_tutorials_optimize</strong> files:</p>
+<p><strong>00:03.344</strong> total execution time for <strong>topic_vta_tutorials_optimize</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,11 +359,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="convolution_opt.html#sphx-glr-topic-vta-tutorials-optimize-convolution-opt-py"><span class="std std-ref">2D Convolution Optimization</span></a> (<code class="docutils literal notranslate"><span class="pre">convolution_opt.py</span></code>)</p></td>
-<td><p>00:02.863</p></td>
+<td><p>00:02.829</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="matrix_multiply_opt.html#sphx-glr-topic-vta-tutorials-optimize-matrix-multiply-opt-py"><span class="std std-ref">Matrix Multiply Blocking</span></a> (<code class="docutils literal notranslate"><span class="pre">matrix_multiply_opt.py</span></code>)</p></td>
-<td><p>00:00.525</p></td>
+<td><p>00:00.515</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/topic/vta/tutorials/sg_execution_times.html b/docs/topic/vta/tutorials/sg_execution_times.html
index 87daf4bdcb..08afc8a62e 100644
--- a/docs/topic/vta/tutorials/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:00.887</strong> total execution time for <strong>topic_vta_tutorials</strong> files:</p>
+<p><strong>00:00.900</strong> total execution time for <strong>topic_vta_tutorials</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 81%" />
@@ -359,11 +359,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="matrix_multiply.html#sphx-glr-topic-vta-tutorials-matrix-multiply-py"><span class="std std-ref">Simple Matrix Multiply</span></a> (<code class="docutils literal notranslate"><span class="pre">matrix_multiply.py</span></code>)</p></td>
-<td><p>00:00.456</p></td>
+<td><p>00:00.467</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="vta_get_started.html#sphx-glr-topic-vta-tutorials-vta-get-started-py"><span class="std std-ref">Get Started with VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">vta_get_started.py</span></code>)</p></td>
-<td><p>00:00.432</p></td>
+<td><p>00:00.433</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/tutorial/auto_scheduler_matmul_x86.html b/docs/tutorial/auto_scheduler_matmul_x86.html
index b4bf58a3e9..0ef52d9feb 100644
--- a/docs/tutorial/auto_scheduler_matmul_x86.html
+++ b/docs/tutorial/auto_scheduler_matmul_x86.html
@@ -579,7 +579,7 @@ class Module:
 <span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 93.784 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 97.395 ms
 </pre></div>
 </div>
 </div>
@@ -641,7 +641,6 @@ resume the status and do more 5 trials.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Resume search:
-*E
 </pre></div>
 </div>
 </div>
@@ -652,7 +651,7 @@ automatically optimize a matrix multiplication, without the need to specify a
 search template.  It ends a series of examples that starts from the Tensor
 Expression (TE) language that demonstrates how TVM can optimize computational
 operations.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  47.147 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  33.068 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorial-auto-scheduler-matmul-x86-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../_downloads/eac4389b114db015e95cb3cdf8b86b83/auto_scheduler_matmul_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">auto_scheduler_matmul_x86.py</span></code></a></p>
diff --git a/docs/tutorial/autotvm_matmul_x86.html b/docs/tutorial/autotvm_matmul_x86.html
index 4b130c1b4e..0a4afcb19a 100644
--- a/docs/tutorial/autotvm_matmul_x86.html
+++ b/docs/tutorial/autotvm_matmul_x86.html
@@ -690,16 +690,16 @@ reduce variance, we take 5 measurements and average them.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>waiting for device...
 device available
 Get devices for measurement successfully!
-No: 1   GFLOPS: 4.71/4.71       result: MeasureResult(costs=(0.05694622020000001,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.1869642734527588, timestamp=1684922502.4330118)        [(&#39;tile_y&#39;, [-1, 8]), (&#39;tile_x&#39;, [-1, 2])],None,13
-No: 2   GFLOPS: 10.76/10.76     result: MeasureResult(costs=(0.024956706000000002,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.813683032989502, timestamp=1684922503.1108992)        [(&#39;tile_y&#39;, [-1, 2]), (&#39;tile_x&#39;, [-1, 128])],None,71
-No: 3   GFLOPS: 10.83/10.83     result: MeasureResult(costs=(0.024791978,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6751875877380371, timestamp=1684922503.7719996)        [(&#39;tile_y&#39;, [-1, 4]), (&#39;tile_x&#39;, [-1, 16])],None,42
-No: 4   GFLOPS: 4.13/10.83      result: MeasureResult(costs=(0.065054261,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.301656723022461, timestamp=1684922505.0782185) [(&#39;tile_y&#39;, [-1, 4]), (&#39;tile_x&#39;, [-1, 2])],None,12
-No: 5   GFLOPS: 2.89/10.83      result: MeasureResult(costs=(0.0927725522,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.751298427581787, timestamp=1684922506.9588819)        [(&#39;tile_y&#39;, [-1, 16]), (&#39;tile_x&#39;, [-1, 2])],None,14
-No: 6   GFLOPS: 3.30/10.83      result: MeasureResult(costs=(0.0814160686,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.5724496841430664, timestamp=1684922508.5319996)       [(&#39;tile_y&#39;, [-1, 512]), (&#39;tile_x&#39;, [-1, 8])],None,39
-No: 7   GFLOPS: 8.62/10.83      result: MeasureResult(costs=(0.031157640400000002,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7739567756652832, timestamp=1684922509.2955496)       [(&#39;tile_y&#39;, [-1, 256]), (&#39;tile_x&#39;, [-1, 16])],None,48
-No: 8   GFLOPS: 2.05/10.83      result: MeasureResult(costs=(0.1309710022,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.358995199203491, timestamp=1684922511.65407)  [(&#39;tile_y&#39;, [-1, 32]), (&#39;tile_x&#39;, [-1, 4])],None,25
-No: 9   GFLOPS: 11.15/11.15     result: MeasureResult(costs=(0.0240744876,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6144266128540039, timestamp=1684922512.3943744)       [(&#39;tile_y&#39;, [-1, 128]), (&#39;tile_x&#39;, [-1, 512])],None,97
-No: 10  GFLOPS: 16.23/16.23     result: MeasureResult(costs=(0.016537032,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.5081222057342529, timestamp=1684922512.9238546)        [(&#39;tile_y&#39;, [-1, 16]), (&#39;tile_x&#39;, [-1, 64])],None,64
+No: 1   GFLOPS: 7.67/7.67       result: MeasureResult(costs=(0.0349940904,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.8255264759063721, timestamp=1684949662.1887333)       [(&#39;tile_y&#39;, [-1, 512]), (&#39;tile_x&#39;, [-1, 16])],None,49
+No: 2   GFLOPS: 1.02/7.67       result: MeasureResult(costs=(0.2625456442,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.4924726486206055, timestamp=1684949666.646765)        [(&#39;tile_y&#39;, [-1, 128]), (&#39;tile_x&#39;, [-1, 2])],None,17
+No: 3   GFLOPS: 10.03/10.03     result: MeasureResult(costs=(0.026756031999999996,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6823074817657471, timestamp=1684949667.3306112)       [(&#39;tile_y&#39;, [-1, 2]), (&#39;tile_x&#39;, [-1, 8])],None,31
+No: 4   GFLOPS: 3.51/10.03      result: MeasureResult(costs=(0.07655771539999999,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.4601919651031494, timestamp=1684949668.8121912)        [(&#39;tile_y&#39;, [-1, 512]), (&#39;tile_x&#39;, [-1, 8])],None,39
+No: 5   GFLOPS: 0.50/10.03      result: MeasureResult(costs=(0.5372087728,), error_no=MeasureErrorNo.NO_ERROR, all_cost=8.894731283187866, timestamp=1684949677.8450584)        [(&#39;tile_y&#39;, [-1, 32]), (&#39;tile_x&#39;, [-1, 1])],None,5
+No: 6   GFLOPS: 11.13/11.13     result: MeasureResult(costs=(0.0241146898,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6550126075744629, timestamp=1684949678.4965243)       [(&#39;tile_y&#39;, [-1, 16]), (&#39;tile_x&#39;, [-1, 256])],None,84
+No: 7   GFLOPS: 9.96/11.13      result: MeasureResult(costs=(0.0269434458,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7120945453643799, timestamp=1684949679.1821337)       [(&#39;tile_y&#39;, [-1, 8]), (&#39;tile_x&#39;, [-1, 16])],None,43
+No: 8   GFLOPS: 11.88/11.88     result: MeasureResult(costs=(0.0225995996,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6406514644622803, timestamp=1684949679.8011682)       [(&#39;tile_y&#39;, [-1, 16]), (&#39;tile_x&#39;, [-1, 16])],None,44
+No: 9   GFLOPS: 7.43/11.88      result: MeasureResult(costs=(0.036129904000000004,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.79469895362854, timestamp=1684949680.7161095) [(&#39;tile_y&#39;, [-1, 2]), (&#39;tile_x&#39;, [-1, 4])],None,21
+No: 10  GFLOPS: 2.10/11.88      result: MeasureResult(costs=(0.12794936620000003,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.274810791015625, timestamp=1684949683.0235138) [(&#39;tile_y&#39;, [-1, 32]), (&#39;tile_x&#39;, [-1, 4])],None,25
 </pre></div>
 </div>
 <p>With tuning completed, we can choose the configuration from the log file that
diff --git a/docs/tutorial/autotvm_relay_x86.html b/docs/tutorial/autotvm_relay_x86.html
index a41b43e198..914cfe95aa 100644
--- a/docs/tutorial/autotvm_relay_x86.html
+++ b/docs/tutorial/autotvm_relay_x86.html
@@ -568,7 +568,7 @@ standard deviation.</p>
 <span class="nb">print</span><span class="p">(</span><a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">unoptimized</span></a><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>{&#39;mean&#39;: 496.5515389698703, &#39;median&#39;: 496.1619072502799, &#39;std&#39;: 2.0135947670101677}
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>{&#39;mean&#39;: 492.5452644500001, &#39;median&#39;: 492.4691212000198, &#39;std&#39;: 2.1286296796252318}
 </pre></div>
 </div>
 </div>
@@ -757,178 +757,179 @@ depending on the specifics of the model and the target platform.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  1/25]  Current/Best:   14.46/  14.46 GFLOPS | Progress: (4/20) | 10.23 s
-[Task  1/25]  Current/Best:   16.78/  21.30 GFLOPS | Progress: (8/20) | 12.43 s
-[Task  1/25]  Current/Best:   20.59/  21.30 GFLOPS | Progress: (12/20) | 14.61 s
-[Task  1/25]  Current/Best:   10.08/  21.74 GFLOPS | Progress: (16/20) | 17.09 s
-[Task  1/25]  Current/Best:    9.32/  23.65 GFLOPS | Progress: (20/20) | 24.94 s Done.
+[Task  1/25]  Current/Best:   20.79/  20.79 GFLOPS | Progress: (4/20) | 9.59 s
+[Task  1/25]  Current/Best:   21.97/  23.53 GFLOPS | Progress: (8/20) | 13.02 s
+[Task  1/25]  Current/Best:   21.77/  23.53 GFLOPS | Progress: (12/20) | 16.09 s
+[Task  1/25]  Current/Best:   22.09/  23.53 GFLOPS | Progress: (16/20) | 19.68 s
+[Task  1/25]  Current/Best:    3.45/  23.53 GFLOPS | Progress: (20/20) | 22.50 s Done.
 
 [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  2/25]  Current/Best:    5.62/  18.68 GFLOPS | Progress: (4/20) | 4.78 s
-[Task  2/25]  Current/Best:    5.67/  18.68 GFLOPS | Progress: (8/20) | 6.57 s
-[Task  2/25]  Current/Best:    6.61/  18.68 GFLOPS | Progress: (12/20) | 8.23 s
-[Task  2/25]  Current/Best:    6.87/  18.68 GFLOPS | Progress: (16/20) | 10.23 s
-[Task  2/25]  Current/Best:    9.36/  18.68 GFLOPS | Progress: (20/20) | 11.80 s Done.
+[Task  2/25]  Current/Best:   13.58/  19.49 GFLOPS | Progress: (4/20) | 4.32 s
+[Task  2/25]  Current/Best:   14.88/  20.13 GFLOPS | Progress: (8/20) | 5.87 s
+[Task  2/25]  Current/Best:   10.68/  20.13 GFLOPS | Progress: (12/20) | 7.73 s
+[Task  2/25]  Current/Best:   14.22/  20.13 GFLOPS | Progress: (16/20) | 9.24 s
+[Task  2/25]  Current/Best:   13.08/  20.13 GFLOPS | Progress: (20/20) | 10.84 s Done.
 
 [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  3/25]  Current/Best:   20.36/  23.36 GFLOPS | Progress: (4/20) | 4.88 s
-[Task  3/25]  Current/Best:   19.56/  23.36 GFLOPS | Progress: (8/20) | 6.75 s
-[Task  3/25]  Current/Best:    5.41/  23.36 GFLOPS | Progress: (12/20) | 8.96 s
-[Task  3/25]  Current/Best:   15.69/  23.36 GFLOPS | Progress: (16/20) | 12.05 s
-[Task  3/25]  Current/Best:    7.75/  23.90 GFLOPS | Progress: (20/20) | 14.99 s Done.
+[Task  3/25]  Current/Best:    6.13/  20.39 GFLOPS | Progress: (4/20) | 6.56 s
+[Task  3/25]  Current/Best:    8.90/  20.39 GFLOPS | Progress: (8/20) | 8.75 s
+[Task  3/25]  Current/Best:   12.58/  23.09 GFLOPS | Progress: (12/20) | 12.52 s
+[Task  3/25]  Current/Best:   14.19/  23.09 GFLOPS | Progress: (16/20) | 15.00 s
+[Task  3/25]  Current/Best:   13.70/  23.09 GFLOPS | Progress: (20/20) | 17.49 s Done.
 
 [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  4/25]  Current/Best:   15.51/  22.02 GFLOPS | Progress: (4/20) | 4.45 s
-[Task  4/25]  Current/Best:    5.99/  22.02 GFLOPS | Progress: (8/20) | 6.82 s
-[Task  4/25]  Current/Best:    8.69/  22.02 GFLOPS | Progress: (12/20) | 12.03 s
-[Task  4/25]  Current/Best:   17.95/  22.02 GFLOPS | Progress: (16/20) | 13.59 s
-[Task  4/25]  Current/Best:   21.86/  22.02 GFLOPS | Progress: (20/20) | 15.77 s Done.
+[Task  4/25]  Current/Best:   10.32/  16.15 GFLOPS | Progress: (4/20) | 6.30 s
+[Task  4/25]  Current/Best:   14.38/  16.15 GFLOPS | Progress: (8/20) | 8.49 s
+[Task  4/25]  Current/Best:    6.41/  16.15 GFLOPS | Progress: (12/20) | 10.38 s
+[Task  4/25]  Current/Best:   11.79/  16.52 GFLOPS | Progress: (16/20) | 12.98 s
+[Task  4/25]  Current/Best:   14.47/  20.39 GFLOPS | Progress: (20/20) | 15.68 s Done.
 
 [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  5/25]  Current/Best:    8.32/  14.36 GFLOPS | Progress: (4/20) | 5.21 s
-[Task  5/25]  Current/Best:    8.21/  17.08 GFLOPS | Progress: (8/20) | 7.40 s
-[Task  5/25]  Current/Best:    2.94/  21.66 GFLOPS | Progress: (12/20) | 9.39 s
-[Task  5/25]  Current/Best:   10.85/  21.66 GFLOPS | Progress: (16/20) | 11.86 s
-[Task  5/25]  Current/Best:   14.99/  21.66 GFLOPS | Progress: (20/20) | 14.07 s Done.
+[Task  5/25]  Current/Best:    5.84/  23.28 GFLOPS | Progress: (4/20) | 5.51 s
+[Task  5/25]  Current/Best:   13.45/  23.28 GFLOPS | Progress: (8/20) | 8.10 s
+[Task  5/25]  Current/Best:   12.77/  23.28 GFLOPS | Progress: (12/20) | 9.91 s
+[Task  5/25]  Current/Best:    3.52/  23.28 GFLOPS | Progress: (16/20) | 12.06 s
+[Task  5/25]  Current/Best:    5.48/  23.28 GFLOPS | Progress: (20/20) | 14.64 s Done.
 
 [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  6/25]  Current/Best:   10.88/  17.82 GFLOPS | Progress: (4/20) | 5.37 s
-[Task  6/25]  Current/Best:   15.59/  17.82 GFLOPS | Progress: (8/20) | 8.05 s
-[Task  6/25]  Current/Best:    7.12/  17.82 GFLOPS | Progress: (12/20) | 10.59 s
-[Task  6/25]  Current/Best:   10.42/  17.82 GFLOPS | Progress: (16/20) | 15.78 s
-[Task  6/25]  Current/Best:   16.01/  17.82 GFLOPS | Progress: (20/20) | 18.39 s Done.
+[Task  6/25]  Current/Best:   21.76/  21.76 GFLOPS | Progress: (4/20) | 6.69 s
+[Task  6/25]  Current/Best:   17.01/  21.76 GFLOPS | Progress: (8/20) | 9.78 s
+[Task  6/25]  Current/Best:   15.78/  21.76 GFLOPS | Progress: (12/20) | 12.46 s
+[Task  6/25]  Current/Best:    6.58/  21.76 GFLOPS | Progress: (16/20) | 15.66 s
+[Task  6/25]  Current/Best:   16.28/  21.76 GFLOPS | Progress: (20/20) | 19.17 s Done.
 
 [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  7/25]  Current/Best:    6.36/  21.98 GFLOPS | Progress: (4/20) | 5.36 s
-[Task  7/25]  Current/Best:   12.28/  21.98 GFLOPS | Progress: (8/20) | 8.07 s
-[Task  7/25]  Current/Best:   20.61/  21.98 GFLOPS | Progress: (12/20) | 10.68 s
-[Task  7/25]  Current/Best:   19.94/  21.98 GFLOPS | Progress: (16/20) | 12.80 s
-[Task  7/25]  Current/Best:   17.60/  21.98 GFLOPS | Progress: (20/20) | 15.01 s Done.
+[Task  7/25]  Current/Best:    1.58/  14.92 GFLOPS | Progress: (4/20) | 6.88 s
+[Task  7/25]  Current/Best:   16.37/  18.75 GFLOPS | Progress: (8/20) | 9.16 s
+[Task  7/25]  Current/Best:   13.98/  18.75 GFLOPS | Progress: (12/20) | 12.16 s
+[Task  7/25]  Current/Best:   12.25/  18.75 GFLOPS | Progress: (16/20) | 15.57 s
+[Task  7/25]  Current/Best:    6.99/  18.75 GFLOPS | Progress: (20/20) | 18.77 s Done.
 
 [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  8/25]  Current/Best:   13.06/  14.08 GFLOPS | Progress: (4/20) | 5.47 s
-[Task  8/25]  Current/Best:    6.34/  14.08 GFLOPS | Progress: (8/20) | 8.23 s
-[Task  8/25]  Current/Best:   12.58/  15.43 GFLOPS | Progress: (12/20) | 19.81 s
-[Task  8/25]  Current/Best:    5.11/  15.43 GFLOPS | Progress: (16/20) | 23.54 s
-[Task  8/25]  Current/Best:    6.65/  15.85 GFLOPS | Progress: (20/20) | 29.78 s Done.
+[Task  8/25]  Current/Best:   16.03/  16.03 GFLOPS | Progress: (4/20) | 6.60 s
+[Task  8/25]  Current/Best:   10.15/  16.27 GFLOPS | Progress: (8/20) | 11.87 s
+[Task  8/25]  Current/Best:   16.24/  22.06 GFLOPS | Progress: (12/20) | 14.46 s
+[Task  8/25]  Current/Best:   20.39/  22.06 GFLOPS | Progress: (16/20) | 16.85 s
+[Task  8/25]  Current/Best:   10.87/  22.06 GFLOPS | Progress: (20/20) | 22.58 s Done.
 
 [Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  9/25]  Current/Best:    6.56/  16.05 GFLOPS | Progress: (4/20) | 5.61 s
-[Task  9/25]  Current/Best:    6.84/  18.34 GFLOPS | Progress: (8/20) | 10.90 s
-[Task  9/25]  Current/Best:    6.75/  19.13 GFLOPS | Progress: (12/20) | 12.98 s
-[Task  9/25]  Current/Best:    9.51/  19.13 GFLOPS | Progress: (16/20) | 17.20 s
-[Task  9/25]  Current/Best:    6.31/  19.13 GFLOPS | Progress: (20/20) | 19.72 s Done.
+[Task  9/25]  Current/Best:   18.09/  18.09 GFLOPS | Progress: (4/20) | 7.59 s
+[Task  9/25]  Current/Best:   11.39/  18.09 GFLOPS | Progress: (8/20) | 9.78 s
+[Task  9/25]  Current/Best:   12.60/  22.60 GFLOPS | Progress: (12/20) | 11.85 s
+[Task  9/25]  Current/Best:   10.65/  22.60 GFLOPS | Progress: (16/20) | 15.09 s
+[Task  9/25]  Current/Best:   12.91/  22.60 GFLOPS | Progress: (20/20) | 18.69 s Done.
 
 [Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 10/25]  Current/Best:   20.19/  20.19 GFLOPS | Progress: (4/20) | 4.86 s
-[Task 10/25]  Current/Best:   20.50/  20.50 GFLOPS | Progress: (8/20) | 6.94 s
-[Task 10/25]  Current/Best:    4.48/  21.37 GFLOPS | Progress: (12/20) | 8.88 s
-[Task 10/25]  Current/Best:    8.52/  21.37 GFLOPS | Progress: (16/20) | 11.31 s
-[Task 10/25]  Current/Best:    9.55/  21.37 GFLOPS | Progress: (20/20) | 13.22 s Done.
+[Task 10/25]  Current/Best:   18.24/  18.24 GFLOPS | Progress: (4/20) | 4.71 s
+[Task 10/25]  Current/Best:   16.31/  18.24 GFLOPS | Progress: (8/20) | 7.31 s
+[Task 10/25]  Current/Best:   10.53/  18.24 GFLOPS | Progress: (12/20) | 9.61 s
+[Task 10/25]  Current/Best:    6.37/  18.86 GFLOPS | Progress: (16/20) | 11.44 s
+[Task 10/25]  Current/Best:    9.64/  18.86 GFLOPS | Progress: (20/20) | 15.73 s Done.
 
 [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 11/25]  Current/Best:   16.07/  16.07 GFLOPS | Progress: (4/20) | 5.65 s
-[Task 11/25]  Current/Best:   21.14/  21.14 GFLOPS | Progress: (8/20) | 7.83 s
-[Task 11/25]  Current/Best:   20.71/  21.14 GFLOPS | Progress: (12/20) | 10.19 s
-[Task 11/25]  Current/Best:    7.89/  21.14 GFLOPS | Progress: (16/20) | 14.37 s
-[Task 11/25]  Current/Best:   15.67/  22.38 GFLOPS | Progress: (20/20) | 16.70 s Done.
+[Task 11/25]  Current/Best:    6.18/  21.92 GFLOPS | Progress: (4/20) | 5.35 s
+[Task 11/25]  Current/Best:   12.93/  21.92 GFLOPS | Progress: (8/20) | 7.47 s
+[Task 11/25]  Current/Best:   19.37/  21.92 GFLOPS | Progress: (12/20) | 9.55 s
+[Task 11/25]  Current/Best:    1.58/  21.92 GFLOPS | Progress: (16/20) | 13.93 s
+[Task 11/25]  Current/Best:    7.23/  21.92 GFLOPS | Progress: (20/20) | 16.53 s Done.
 
 [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 12/25]  Current/Best:   13.95/  18.86 GFLOPS | Progress: (4/20) | 6.26 s
-[Task 12/25]  Current/Best:    5.66/  18.86 GFLOPS | Progress: (8/20) | 11.56 s
-[Task 12/25]  Current/Best:    9.46/  18.86 GFLOPS | Progress: (12/20) | 16.60 s
-[Task 12/25]  Current/Best:   15.71/  21.18 GFLOPS | Progress: (16/20) | 19.18 s
-[Task 12/25]  Current/Best:   11.95/  21.18 GFLOPS | Progress: (20/20) | 22.45 s Done.
+[Task 12/25]  Current/Best:   17.20/  17.20 GFLOPS | Progress: (4/20) | 6.26 s
+[Task 12/25]  Current/Best:    1.59/  17.41 GFLOPS | Progress: (8/20) | 9.75 s
+[Task 12/25]  Current/Best:   14.71/  17.41 GFLOPS | Progress: (12/20) | 12.65 s
+[Task 12/25]  Current/Best:    5.90/  20.72 GFLOPS | Progress: (16/20) | 14.78 s
+[Task 12/25]  Current/Best:   21.80/  21.80 GFLOPS | Progress: (20/20) | 17.35 s Done.
 
 [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 13/25]  Current/Best:    5.19/  19.72 GFLOPS | Progress: (4/20) | 5.31 s
-[Task 13/25]  Current/Best:   21.53/  21.53 GFLOPS | Progress: (8/20) | 9.84 s
-[Task 13/25]  Current/Best:   18.64/  21.53 GFLOPS | Progress: (12/20) | 12.12 s
-[Task 13/25]  Current/Best:    8.49/  21.53 GFLOPS | Progress: (16/20) | 15.98 s
-[Task 13/25]  Current/Best:   18.49/  21.53 GFLOPS | Progress: (20/20) | 18.69 s Done.
+[Task 13/25]  Current/Best:   19.66/  19.66 GFLOPS | Progress: (4/20) | 6.29 s
+[Task 13/25]  Current/Best:    9.46/  19.66 GFLOPS | Progress: (8/20) | 9.57 s
+[Task 13/25]  Current/Best:   10.87/  20.73 GFLOPS | Progress: (12/20) | 12.13 s
+[Task 13/25]  Current/Best:   12.05/  20.73 GFLOPS | Progress: (16/20) | 15.38 s
+[Task 13/25]  Current/Best:   10.31/  20.73 GFLOPS | Progress: (20/20) | 18.01 s Done.
 
 [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 14/25]  Current/Best:   21.21/  21.21 GFLOPS | Progress: (4/20) | 4.92 s
-[Task 14/25]  Current/Best:   14.63/  21.21 GFLOPS | Progress: (8/20) | 16.43 s
-[Task 14/25]  Current/Best:   11.09/  21.21 GFLOPS | Progress: (12/20) | 29.28 s
-[Task 14/25]  Current/Best:   17.13/  21.21 GFLOPS | Progress: (16/20) | 32.47 s
-[Task 14/25]  Current/Best:   14.55/  21.21 GFLOPS | Progress: (20/20) | 35.22 s
+[Task 14/25]  Current/Best:   14.75/  14.75 GFLOPS | Progress: (4/20) | 13.81 s
+[Task 14/25]  Current/Best:    7.20/  14.75 GFLOPS | Progress: (8/20) | 17.98 s
+[Task 14/25]  Current/Best:   10.90/  19.35 GFLOPS | Progress: (12/20) | 20.58 s
+[Task 14/25]  Current/Best:   18.87/  19.35 GFLOPS | Progress: (16/20) | 24.53 s
+[Task 14/25]  Current/Best:    7.03/  19.35 GFLOPS | Progress: (20/20) | 28.57 s Done.
+
 [Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 15/25]  Current/Best:   10.85/  16.46 GFLOPS | Progress: (4/20) | 5.82 s
-[Task 15/25]  Current/Best:   20.40/  20.40 GFLOPS | Progress: (8/20) | 9.98 s
-[Task 15/25]  Current/Best:    7.83/  20.40 GFLOPS | Progress: (12/20) | 17.19 s
-[Task 15/25]  Current/Best:   18.46/  20.40 GFLOPS | Progress: (16/20) | 19.59 s
-[Task 15/25]  Current/Best:    7.88/  20.40 GFLOPS | Progress: (20/20) | 24.06 s Done.
+[Task 15/25]  Current/Best:    8.89/   8.89 GFLOPS | Progress: (4/20) | 5.30 s
+[Task 15/25]  Current/Best:   19.40/  20.06 GFLOPS | Progress: (8/20) | 6.84 s
+[Task 15/25]  Current/Best:   18.18/  20.06 GFLOPS | Progress: (12/20) | 13.26 s
+[Task 15/25]  Current/Best:    7.21/  20.06 GFLOPS | Progress: (16/20) | 16.38 s
+[Task 15/25]  Current/Best:   12.79/  20.06 GFLOPS | Progress: (20/20) | 24.28 s Done.
 
 [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 16/25]  Current/Best:   14.35/  18.51 GFLOPS | Progress: (4/20) | 4.86 s
-[Task 16/25]  Current/Best:    5.74/  18.51 GFLOPS | Progress: (8/20) | 6.73 s
-[Task 16/25]  Current/Best:   16.94/  18.51 GFLOPS | Progress: (12/20) | 10.68 s
-[Task 16/25]  Current/Best:   15.94/  18.71 GFLOPS | Progress: (16/20) | 12.97 s
-[Task 16/25]  Current/Best:    1.57/  19.09 GFLOPS | Progress: (20/20) | 15.20 s Done.
+[Task 16/25]  Current/Best:   11.22/  17.89 GFLOPS | Progress: (4/20) | 4.96 s
+[Task 16/25]  Current/Best:   17.89/  18.25 GFLOPS | Progress: (8/20) | 6.58 s
+[Task 16/25]  Current/Best:    6.77/  18.25 GFLOPS | Progress: (12/20) | 8.95 s
+[Task 16/25]  Current/Best:   18.98/  18.98 GFLOPS | Progress: (16/20) | 12.16 s
+[Task 16/25]  Current/Best:    9.42/  18.98 GFLOPS | Progress: (20/20) | 13.86 s Done.
 
 [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 17/25]  Current/Best:   16.17/  16.17 GFLOPS | Progress: (4/20) | 6.02 s
-[Task 17/25]  Current/Best:    9.72/  16.17 GFLOPS | Progress: (8/20) | 8.53 s
-[Task 17/25]  Current/Best:    9.60/  22.58 GFLOPS | Progress: (12/20) | 12.00 s
-[Task 17/25]  Current/Best:   12.79/  22.58 GFLOPS | Progress: (16/20) | 14.86 s
-[Task 17/25]  Current/Best:    7.64/  22.58 GFLOPS | Progress: (20/20) | 17.77 s Done.
+[Task 17/25]  Current/Best:    4.79/  10.81 GFLOPS | Progress: (4/20) | 6.42 s
+[Task 17/25]  Current/Best:   18.77/  19.34 GFLOPS | Progress: (8/20) | 9.34 s
+[Task 17/25]  Current/Best:   20.64/  20.64 GFLOPS | Progress: (12/20) | 12.56 s
+[Task 17/25]  Current/Best:   20.18/  20.64 GFLOPS | Progress: (16/20) | 14.95 s
+[Task 17/25]  Current/Best:    9.58/  22.41 GFLOPS | Progress: (20/20) | 16.88 s Done.
 
 [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 18/25]  Current/Best:   14.86/  14.86 GFLOPS | Progress: (4/20) | 6.81 s
-[Task 18/25]  Current/Best:    5.71/  17.50 GFLOPS | Progress: (8/20) | 10.74 s
-[Task 18/25]  Current/Best:    4.89/  20.41 GFLOPS | Progress: (12/20) | 12.95 s
-[Task 18/25]  Current/Best:   18.21/  20.65 GFLOPS | Progress: (16/20) | 18.81 s
-[Task 18/25]  Current/Best:    9.95/  20.65 GFLOPS | Progress: (20/20) | 24.30 s Done.
+[Task 18/25]  Current/Best:    5.17/  13.13 GFLOPS | Progress: (4/20) | 6.72 s
+[Task 18/25]  Current/Best:   17.23/  17.23 GFLOPS | Progress: (8/20) | 10.45 s
+[Task 18/25]  Current/Best:   16.31/  17.23 GFLOPS | Progress: (12/20) | 13.96 s
+[Task 18/25]  Current/Best:    6.15/  17.28 GFLOPS | Progress: (16/20) | 16.37 s
+[Task 18/25]  Current/Best:    8.42/  17.28 GFLOPS | Progress: (20/20) | 21.31 s Done.
 
 [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 19/25]  Current/Best:   13.92/  18.55 GFLOPS | Progress: (4/20) | 7.55 s
-[Task 19/25]  Current/Best:   11.42/  20.97 GFLOPS | Progress: (8/20) | 12.19 s
-[Task 19/25]  Current/Best:    9.98/  20.97 GFLOPS | Progress: (12/20) | 17.32 s
-[Task 19/25]  Current/Best:   21.01/  21.01 GFLOPS | Progress: (16/20) | 19.89 s
-[Task 19/25]  Current/Best:   21.46/  21.61 GFLOPS | Progress: (20/20) | 23.63 s Done.
+[Task 19/25]  Current/Best:   11.55/  20.51 GFLOPS | Progress: (4/20) | 6.91 s
+[Task 19/25]  Current/Best:   15.21/  20.51 GFLOPS | Progress: (8/20) | 10.67 s
+[Task 19/25]  Current/Best:   12.65/  20.51 GFLOPS | Progress: (12/20) | 14.60 s
+[Task 19/25]  Current/Best:   21.09/  21.09 GFLOPS | Progress: (16/20) | 18.06 s
+[Task 19/25]  Current/Best:    1.55/  21.09 GFLOPS | Progress: (20/20) | 23.16 s Done.
 
 [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 20/25]  Current/Best:   11.10/  16.33 GFLOPS | Progress: (4/20) | 5.37 s
-[Task 20/25]  Current/Best:   16.48/  22.10 GFLOPS | Progress: (8/20) | 13.41 s
-[Task 20/25]  Current/Best:   14.78/  22.10 GFLOPS | Progress: (12/20) | 19.89 s
-[Task 20/25]  Current/Best:    8.61/  22.10 GFLOPS | Progress: (16/20) | 23.75 s
-[Task 20/25]  Current/Best:    6.51/  22.10 GFLOPS | Progress: (20/20) | 26.00 s Done.
-
+[Task 20/25]  Current/Best:    9.33/  15.07 GFLOPS | Progress: (4/20) | 10.06 s
+[Task 20/25]  Current/Best:    6.18/  18.09 GFLOPS | Progress: (8/20) | 21.75 s
+[Task 20/25]  Current/Best:   16.95/  18.09 GFLOPS | Progress: (12/20) | 26.67 s
+[Task 20/25]  Current/Best:    9.44/  18.09 GFLOPS | Progress: (16/20) | 29.46 s
+[Task 20/25]  Current/Best:    6.38/  18.09 GFLOPS | Progress: (20/20) | 37.42 s
 [Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 21/25]  Current/Best:    5.04/  20.62 GFLOPS | Progress: (4/20) | 4.57 s
-[Task 21/25]  Current/Best:   19.70/  20.62 GFLOPS | Progress: (8/20) | 16.20 s
-[Task 21/25]  Current/Best:    7.00/  20.62 GFLOPS | Progress: (12/20) | 20.77 s
-[Task 21/25]  Current/Best:    8.47/  20.62 GFLOPS | Progress: (16/20) | 26.80 s
-[Task 21/25]  Current/Best:   21.04/  21.04 GFLOPS | Progress: (20/20) | 30.10 s Done.
-
+[Task 21/25]  Current/Best:    2.70/  12.87 GFLOPS | Progress: (4/20) | 14.42 s
+[Task 21/25]  Current/Best:    5.37/  13.38 GFLOPS | Progress: (8/20) | 22.16 s
+[Task 21/25]  Current/Best:   16.50/  20.55 GFLOPS | Progress: (12/20) | 24.47 s
+[Task 21/25]  Current/Best:   20.18/  20.55 GFLOPS | Progress: (16/20) | 27.49 s
+[Task 21/25]  Current/Best:   22.51/  22.51 GFLOPS | Progress: (20/20) | 29.66 s
 [Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 22/25]  Current/Best:   10.39/  21.83 GFLOPS | Progress: (4/20) | 4.99 s
-[Task 22/25]  Current/Best:   14.05/  21.83 GFLOPS | Progress: (8/20) | 6.75 s
-[Task 22/25]  Current/Best:   10.89/  21.83 GFLOPS | Progress: (12/20) | 10.41 s
-[Task 22/25]  Current/Best:   13.56/  21.83 GFLOPS | Progress: (16/20) | 12.35 s
-[Task 22/25]  Current/Best:    5.24/  21.83 GFLOPS | Progress: (20/20) | 15.47 s Done.
+[Task 22/25]  Current/Best:   10.69/  19.76 GFLOPS | Progress: (4/20) | 10.14 s
+[Task 22/25]  Current/Best:   13.98/  19.76 GFLOPS | Progress: (8/20) | 17.19 s
+[Task 22/25]  Current/Best:   20.69/  20.69 GFLOPS | Progress: (12/20) | 18.90 s
+[Task 22/25]  Current/Best:   14.51/  20.69 GFLOPS | Progress: (16/20) | 21.64 s
+[Task 22/25]  Current/Best:   10.56/  21.85 GFLOPS | Progress: (20/20) | 23.93 s Done.
 
 [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 23/25]  Current/Best:   21.86/  22.39 GFLOPS | Progress: (4/20) | 5.91 s
-[Task 23/25]  Current/Best:    9.38/  22.39 GFLOPS | Progress: (8/20) | 12.21 s
-[Task 23/25]  Current/Best:    3.09/  22.39 GFLOPS | Progress: (12/20) | 15.45 s
-[Task 23/25]  Current/Best:   13.88/  22.39 GFLOPS | Progress: (16/20) | 19.75 s
-[Task 23/25]  Current/Best:   19.23/  22.39 GFLOPS | Progress: (20/20) | 22.00 s Done.
+[Task 23/25]  Current/Best:    8.98/  19.94 GFLOPS | Progress: (4/20) | 6.08 s
+[Task 23/25]  Current/Best:   17.71/  20.43 GFLOPS | Progress: (8/20) | 8.52 s
+[Task 23/25]  Current/Best:    7.87/  20.43 GFLOPS | Progress: (12/20) | 11.53 s
+[Task 23/25]  Current/Best:    1.55/  22.04 GFLOPS | Progress: (16/20) | 15.97 s
+[Task 23/25]  Current/Best:   17.57/  22.04 GFLOPS | Progress: (20/20) | 18.84 s Done.
 
 [Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 24/25]  Current/Best:    9.62/   9.62 GFLOPS | Progress: (4/20) | 13.84 s
-[Task 24/25]  Current/Best:    6.46/   9.62 GFLOPS | Progress: (8/20) | 26.08 s
-[Task 24/25]  Current/Best:    7.90/   9.62 GFLOPS | Progress: (12/20) | 30.96 s
-[Task 24/25]  Current/Best:    5.84/  10.12 GFLOPS | Progress: (16/20) | 41.66 s
-[Task 24/25]  Current/Best:    8.43/  10.12 GFLOPS | Progress: (20/20) | 54.39 s
+[Task 24/25]  Current/Best:    3.74/   7.43 GFLOPS | Progress: (4/20) | 7.01 s
+[Task 24/25]  Current/Best:    2.38/   7.43 GFLOPS | Progress: (8/20) | 11.36 s
+[Task 24/25]  Current/Best:    9.45/   9.45 GFLOPS | Progress: (12/20) | 13.40 s
+[Task 24/25]  Current/Best:   10.08/  10.08 GFLOPS | Progress: (16/20) | 24.13 s
+[Task 24/25]  Current/Best:    7.81/  10.08 GFLOPS | Progress: (20/20) | 36.17 s
 [Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
-
-[Task 25/25]  Current/Best:    2.93/   9.25 GFLOPS | Progress: (4/20) | 8.12 s
-[Task 25/25]  Current/Best:    5.49/   9.31 GFLOPS | Progress: (8/20) | 16.67 s
-[Task 25/25]  Current/Best:    5.28/   9.31 GFLOPS | Progress: (12/20) | 27.68 s
-[Task 25/25]  Current/Best:    7.99/   9.31 GFLOPS | Progress: (16/20) | 38.68 s
-[Task 25/25]  Current/Best:    9.03/   9.31 GFLOPS | Progress: (20/20) | 41.47 s
+ Done.
+ Done.
+
+[Task 25/25]  Current/Best:    3.03/   5.61 GFLOPS | Progress: (4/20) | 5.60 s
+[Task 25/25]  Current/Best:    5.79/   6.28 GFLOPS | Progress: (8/20) | 10.90 s
+[Task 25/25]  Current/Best:    9.61/   9.61 GFLOPS | Progress: (12/20) | 13.13 s
+[Task 25/25]  Current/Best:    5.90/   9.61 GFLOPS | Progress: (16/20) | 22.46 s
+[Task 25/25]  Current/Best:    5.57/   9.61 GFLOPS | Progress: (20/20) | 24.02 s Done.
 </pre></div>
 </div>
 <p>The output from this tuning process will look something like this:</p>
@@ -975,10 +976,6 @@ model using optimized operators to speed up our computations.</p>
 <a href="../reference/api/python/graph_executor.html#tvm.contrib.graph_executor.GraphModule" title="tvm.contrib.graph_executor.GraphModule" class="sphx-glr-backref-module-tvm-contrib-graph_executor sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">module</span></a> <span class="o">=</span> <a href="../reference/api/python/graph_executor.html#tvm.contrib.graph_executor.GraphModule" title="tvm.contrib.graph_executor.GraphModule" class="sphx-glr-backref-module-tvm-co [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Done.
-Done.
-</pre></div>
-</div>
 <p>Verify that the optimized model runs and produces the same results:</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><a href="https://docs.python.org/3/library/stdtypes.html#str" title="builtins.str" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">dtype</span></a> <span class="o">=</span> <span class="s2">&quot;float32&quot;</span>
 <a href="../reference/api/python/graph_executor.html#tvm.contrib.graph_executor.GraphModule.set_input" title="tvm.contrib.graph_executor.GraphModule.set_input" class="sphx-glr-backref-module-tvm-contrib-graph_executor sphx-glr-backref-type-py-method"><span class="n">module</span><span class="o">.</span><span class="n">set_input</span></a><span class="p">(</span><a href="https://docs.python.org/3/library/stdtypes.html#str" title="builtins.str" class="sphx-glr-backref-module-builtins sphx- [...]
@@ -993,8 +990,8 @@ Done.
     <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;class=&#39;</span><span class="si">%s</span><span class="s2">&#39; with probability=</span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="p">(</span><a href="https://docs.python.org/3/library/stdtypes.html#list" title="builtins.list" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">labels</span></a [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>class=&#39;n02123045 tabby, tabby cat&#39; with probability=0.621104
-class=&#39;n02123159 tiger cat&#39; with probability=0.356378
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>class=&#39;n02123045 tabby, tabby cat&#39; with probability=0.621103
+class=&#39;n02123159 tiger cat&#39; with probability=0.356379
 class=&#39;n02124075 Egyptian cat&#39; with probability=0.019712
 class=&#39;n02129604 tiger, Panthera tigris&#39; with probability=0.001215
 class=&#39;n04040759 radiator&#39; with probability=0.000262
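The top-5 listing above is produced by post-processing the raw output tensor of the compiled module. A minimal sketch of that step, assuming tvm_output is the (1, 1000) logits array returned by module.get_output(0).numpy() and labels is a list of ImageNet synset strings (both names are illustrative):

    import numpy as np
    from scipy.special import softmax

    # Convert raw logits to probabilities and rank classes from most to least likely.
    scores = np.squeeze(softmax(tvm_output))
    ranks = np.argsort(scores)[::-1]
    for rank in ranks[0:5]:
        print("class='%s' with probability=%f" % (labels[rank], scores[rank]))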
@@ -1031,8 +1028,8 @@ improvement in comparing the optimized model to the unoptimized model.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;unoptimized: </span><span class="si">%s</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="p">(</span><a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">unoptimized</span></a><span class="p">))</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>optimized: {&#39;mean&#39;: 417.05924828005664, &#39;median&#39;: 417.1992093499284, &#39;std&#39;: 0.865296941702935}
-unoptimized: {&#39;mean&#39;: 496.5515389698703, &#39;median&#39;: 496.1619072502799, &#39;std&#39;: 2.0135947670101677}
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>optimized: {&#39;mean&#39;: 418.33957764999013, &#39;median&#39;: 417.50189224999303, &#39;std&#39;: 3.236084066776782}
+unoptimized: {&#39;mean&#39;: 492.5452644500001, &#39;median&#39;: 492.4691212000198, &#39;std&#39;: 2.1286296796252318}
 </pre></div>
 </div>
 </div>
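The mean/median/std figures above are millisecond statistics gathered over repeated runs of the module. A minimal sketch of how such a dictionary can be collected, assuming module is the compiled GraphModule (the repeat and number values are illustrative):

    import timeit
    import numpy as np

    timing_number = 10   # inference runs per timed sample
    timing_repeat = 10   # number of timed samples
    samples_ms = (
        np.array(timeit.Timer(lambda: module.run()).repeat(repeat=timing_repeat, number=timing_number))
        * 1000
        / timing_number
    )
    optimized = {
        "mean": np.mean(samples_ms),
        "median": np.median(samples_ms),
        "std": np.std(samples_ms),
    }
    print("optimized: %s" % (optimized))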
@@ -1046,7 +1043,7 @@ models.</p>
 <p>Here we presented a simple example using ResNet-50 v2 locally. However, TVM
 supports many more features including cross-compilation, remote execution and
 profiling/benchmarking.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 13 minutes  36.950 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 12 minutes  55.534 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorial-autotvm-relay-x86-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../_downloads/57a45d9bef1af358191e7d50043e652c/autotvm_relay_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">autotvm_relay_x86.py</span></code></a></p>
diff --git a/docs/tutorial/cross_compilation_and_rpc.html b/docs/tutorial/cross_compilation_and_rpc.html
index afc5bb5a35..fc4ff546ee 100644
--- a/docs/tutorial/cross_compilation_and_rpc.html
+++ b/docs/tutorial/cross_compilation_and_rpc.html
@@ -548,7 +548,7 @@ device and returns the measured cost. Network overhead is excluded.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;</span><span class="si">%g</span><span class="s2"> secs/op&quot;</span> <span class="o">%</span> <span class="n">cost</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>1.183e-07 secs/op
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>1.153e-07 secs/op
 </pre></div>
 </div>
 </div>
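The secs/op figure above is typically measured with a time evaluator running on the remote device. A minimal sketch, assuming func is the kernel module loaded through the RPC session, dev is the remote device handle, and a and b are NDArrays already allocated on that device:

    # Average the kernel over 10 invocations on the remote device;
    # data transfer and network overhead are excluded from the measurement.
    time_f = func.time_evaluator(func.entry_name, dev, number=10)
    cost = time_f(a, b).mean
    print("%g secs/op" % cost)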
diff --git a/docs/tutorial/intro_topi.html b/docs/tutorial/intro_topi.html
index 0d1763079a..6e4c22a170 100644
--- a/docs/tutorial/intro_topi.html
+++ b/docs/tutorial/intro_topi.html
@@ -518,7 +518,7 @@ class Module:
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/ir.html#tvm.ir.Array" title="tvm.ir.Array" class="sphx-glr-backref-module-tvm-ir sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">sg</span><span class="o">.</span><span class="n">stages</span></a><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[stage(a, placeholder(a, 0x109e8ba0)), stage(b, placeholder(b, 0x26601e90)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax1, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax2, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;)], reduce_axis=[], tag=broadcast, attr [...]
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[stage(a, placeholder(a, 0x2357aef0)), stage(b, placeholder(b, 0x1cb2b700)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax1, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax2, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;)], reduce_axis=[], tag=broadcast, attr [...]
 </pre></div>
 </div>
 <p>We can test the correctness by comparing with <code class="code docutils literal notranslate"><span class="pre">numpy</span></code> result as follows</p>
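A minimal, self-contained sketch of that kind of correctness check for the broadcast add shown in the stage list above (shapes and names are illustrative, not necessarily the tutorial's own):

    import numpy as np
    import tvm
    from tvm import te, topi

    a = te.placeholder((100, 10, 10), name="a")
    b = te.placeholder((10, 10), name="b")
    c = topi.add(a, b)                      # broadcast add, equivalent to a + b

    s = te.create_schedule(c.op)
    func = tvm.build(s, [a, b, c], "llvm")

    dev = tvm.cpu(0)
    a_np = np.random.uniform(size=(100, 10, 10)).astype("float32")
    b_np = np.random.uniform(size=(10, 10)).astype("float32")
    a_nd = tvm.nd.array(a_np, dev)
    b_nd = tvm.nd.array(b_np, dev)
    c_nd = tvm.nd.array(np.zeros((100, 10, 10), dtype="float32"), dev)
    func(a_nd, b_nd, c_nd)

    # The TVM result should match numpy's broadcast add.
    np.testing.assert_allclose(c_nd.numpy(), a_np + b_np, rtol=1e-5)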
diff --git a/docs/tutorial/sg_execution_times.html b/docs/tutorial/sg_execution_times.html
index ffb260db3d..df02b0845b 100644
--- a/docs/tutorial/sg_execution_times.html
+++ b/docs/tutorial/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-tutorial-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>17:23.432</strong> total execution time for <strong>tutorial</strong> files:</p>
+<p><strong>16:42.819</strong> total execution time for <strong>tutorial</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -359,27 +359,27 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="autotvm_relay_x86.html#sphx-glr-tutorial-autotvm-relay-x86-py"><span class="std std-ref">Compiling and Optimizing a Model with the Python Interface (AutoTVM)</span></a> (<code class="docutils literal notranslate"><span class="pre">autotvm_relay_x86.py</span></code>)</p></td>
-<td><p>13:36.950</p></td>
+<td><p>12:55.534</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="auto_scheduler_matmul_x86.html#sphx-glr-tutorial-auto-scheduler-matmul-x86-py"><span class="std std-ref">Optimizing Operators with Auto-scheduling</span></a> (<code class="docutils literal notranslate"><span class="pre">auto_scheduler_matmul_x86.py</span></code>)</p></td>
-<td><p>01:47.147</p></td>
+<td><p>01:33.068</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tensor_expr_get_started.html#sphx-glr-tutorial-tensor-expr-get-started-py"><span class="std std-ref">Working with Operators Using Tensor Expression</span></a> (<code class="docutils literal notranslate"><span class="pre">tensor_expr_get_started.py</span></code>)</p></td>
-<td><p>00:58.020</p></td>
+<td><p>01:01.924</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="relay_quick_start.html#sphx-glr-tutorial-relay-quick-start-py"><span class="std std-ref">Quick Start Tutorial for Compiling Deep Learning Models</span></a> (<code class="docutils literal notranslate"><span class="pre">relay_quick_start.py</span></code>)</p></td>
-<td><p>00:40.685</p></td>
+<td><p>00:42.006</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="autotvm_matmul_x86.html#sphx-glr-tutorial-autotvm-matmul-x86-py"><span class="std std-ref">Optimizing Operators with Schedule Templates and AutoTVM</span></a> (<code class="docutils literal notranslate"><span class="pre">autotvm_matmul_x86.py</span></code>)</p></td>
-<td><p>00:18.589</p></td>
+<td><p>00:28.269</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="intro_topi.html#sphx-glr-tutorial-intro-topi-py"><span class="std std-ref">Introduction to TOPI</span></a> (<code class="docutils literal notranslate"><span class="pre">intro_topi.py</span></code>)</p></td>
-<td><p>00:00.974</p></td>
+<td><p>00:00.980</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tensor_ir_blitz_course.html#sphx-glr-tutorial-tensor-ir-blitz-course-py"><span class="std std-ref">Blitz Course to TensorIR</span></a> (<code class="docutils literal notranslate"><span class="pre">tensor_ir_blitz_course.py</span></code>)</p></td>
@@ -387,18 +387,18 @@
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="cross_compilation_and_rpc.html#sphx-glr-tutorial-cross-compilation-and-rpc-py"><span class="std std-ref">Cross Compilation and RPC</span></a> (<code class="docutils literal notranslate"><span class="pre">cross_compilation_and_rpc.py</span></code>)</p></td>
-<td><p>00:00.204</p></td>
+<td><p>00:00.173</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="uma.html#sphx-glr-tutorial-uma-py"><span class="std std-ref">Making your Hardware Accelerator TVM-ready with UMA</span></a> (<code class="docutils literal notranslate"><span class="pre">uma.py</span></code>)</p></td>
 <td><p>00:00.000</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="tvmc_python.html#sphx-glr-tutorial-tvmc-python-py"><span class="std std-ref">Getting Starting using TVMC Python: a high-level API for TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">tvmc_python.py</span></code>)</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="tvmc_command_line_driver.html#sphx-glr-tutorial-tvmc-command-line-driver-py"><span class="std std-ref">Compiling and Optimizing a Model with TVMC</span></a> (<code class="docutils literal notranslate"><span class="pre">tvmc_command_line_driver.py</span></code>)</p></td>
 <td><p>00:00.000</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="tvmc_command_line_driver.html#sphx-glr-tutorial-tvmc-command-line-driver-py"><span class="std std-ref">Compiling and Optimizing a Model with TVMC</span></a> (<code class="docutils literal notranslate"><span class="pre">tvmc_command_line_driver.py</span></code>)</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="tvmc_python.html#sphx-glr-tutorial-tvmc-python-py"><span class="std std-ref">Getting Starting using TVMC Python: a high-level API for TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">tvmc_python.py</span></code>)</p></td>
 <td><p>00:00.000</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
diff --git a/docs/tutorial/tensor_expr_get_started.html b/docs/tutorial/tensor_expr_get_started.html
index c791789a64..bb26bf2f98 100644
--- a/docs/tutorial/tensor_expr_get_started.html
+++ b/docs/tutorial/tensor_expr_get_started.html
@@ -560,7 +560,7 @@ helper function to run a profile of the TVM generated code.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.000007
-naive: 0.000008
+naive: 0.000007
 </pre></div>
 </div>
 </div>
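The naive/parallel/vector timings in this section come from a small profiling helper built around time_evaluator. A minimal sketch of such a helper, assuming the compiled function takes three n-element float32 buffers (the names, array size, and run count are illustrative):

    import numpy as np
    import tvm

    def evaluate_addition(func, target, optimization, log, n=32768):
        dev = tvm.device(target.kind.name, 0)
        a = tvm.nd.array(np.random.uniform(size=n).astype("float32"), dev)
        b = tvm.nd.array(np.random.uniform(size=n).astype("float32"), dev)
        c = tvm.nd.array(np.zeros(n, dtype="float32"), dev)

        # Average the compiled function over 10 runs on the chosen device.
        evaluator = func.time_evaluator(func.entry_name, dev, number=10)
        mean_time = evaluator(a, b, c).mean
        print("%s: %f" % (optimization, mean_time))
        log.append((optimization, mean_time))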
@@ -615,7 +615,7 @@ compile and run this new schedule with the parallel operation applied:</p>
 <span class="n">evaluate_addition</span><span class="p">(</span><span class="n">fadd_parallel</span><span class="p">,</span> <a href="../reference/api/python/target.html#tvm.target.Target" title="tvm.target.Target" class="sphx-glr-backref-module-tvm-target sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">tgt</span></a><span class="p">,</span> <span class="s2">&quot;parallel&quot;</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.h [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallel: 0.000007
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallel: 0.000008
 </pre></div>
 </div>
 </div>
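For reference, the parallel variant measured above changes only the schedule, not the computation. A minimal sketch, assuming A, B, C are the tensor-expression operands of the element-wise add and tgt is the build target used earlier in the tutorial:

    # Re-create the schedule and parallelize the single outer loop of C.
    s = te.create_schedule(C.op)
    s[C].parallel(C.op.axis[0])
    fadd_parallel = tvm.build(s, [A, B, C], tgt, name="myadd_parallel")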
@@ -691,10 +691,10 @@ class Module:
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Operator                  Timing             Performance
-   numpy    6.792819986003451e-06                    1.0
-   naive              8.2517e-06        1.21476794865793
-parallel              7.3922e-06      1.0882372880823528
-  vector    3.9302900000000006e-05     5.785947527092328
+   numpy    7.468819999303377e-06                    1.0
+   naive              6.8698e-06      0.9197972371326062
+parallel    8.180999999999998e-06     1.0953537507615723
+  vector             3.92637e-05      5.2570151648670285
 </pre></div>
 </div>
 <div class="admonition-code-specialization admonition">
@@ -1010,7 +1010,7 @@ matrix multiplication.</p>
 <span class="n">answer</span> <span class="o">=</span> <span class="n">numpy</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">a</span><span class="o">.</span><span class="n">numpy</span><span class="p">(),</span> <span class="n">b</span><span class="o">.</span><span class="n">numpy</span><span class="p">())</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.019301
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.018051
 </pre></div>
 </div>
 <p>Now we write a basic matrix multiplication using TVM TE and verify that it
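For reference, a minimal sketch of such a TE matrix multiplication definition (the 1024-cubed sizes are illustrative):

    import tvm
    from tvm import te

    M = K = N = 1024
    dtype = "float32"

    A = te.placeholder((M, K), name="A", dtype=dtype)
    B = te.placeholder((K, N), name="B", dtype=dtype)
    k = te.reduce_axis((0, K), name="k")
    C = te.compute((M, N), lambda m, n: te.sum(A[m, k] * B[k, n], axis=k), name="C")

    # Default schedule: three naive nested loops, the "none" baseline above.
    s = te.create_schedule(C.op)
    func = tvm.build(s, [A, B, C], target="llvm", name="mmult")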
@@ -1051,7 +1051,7 @@ optimizations.</p>
 <span class="n">evaluate_operation</span><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">s</span></a><span class="p">,</span> <span class="p">[</span><a href="../reference/api/python/te.html#tvm.te.Tensor" title="tvm.te.Tensor" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>none: 3.233708
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>none: 3.534415
 </pre></div>
 </div>
 <p>Let’s take a look at the intermediate representation of the operator and
@@ -1115,7 +1115,7 @@ schedule.</p>
 <span class="n">evaluate_operation</span><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">s</span></a><span class="p">,</span> <span class="p">[</span><a href="../reference/api/python/te.html#tvm.te.Tensor" title="tvm.te.Tensor" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>blocking: 0.297653
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>blocking: 0.306064
 </pre></div>
 </div>
 <p>By reordering the computation to take advantage of caching, you should see a
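A minimal sketch of the blocking/reordering being measured here, reusing the A, B, C matmul tensors from the sketch above (the 32x32 tile and factor-4 split are illustrative):

    bn = 32  # block size: keep a bn x bn tile of C hot in cache

    s = te.create_schedule(C.op)
    # Tile the two output axes and split the reduction axis.
    mo, no, mi, ni = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
    (kaxis,) = s[C].op.reduce_axis
    ko, ki = s[C].split(kaxis, factor=4)

    # Move the reduction loops above the inner tile loops for better locality.
    s[C].reorder(mo, no, ko, ki, mi, ni)
    func = tvm.build(s, [A, B, C], target="llvm", name="mmult_blocked")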
@@ -1164,7 +1164,7 @@ already cache friendly from our previous optimizations.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vectorization: 0.295293
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vectorization: 0.283986
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1213,7 +1213,7 @@ more cache friendly.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>loop permutation: 0.114099
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>loop permutation: 0.116977
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1283,7 +1283,7 @@ optimized schedule.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>array packing: 0.105082
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>array packing: 0.108061
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1349,7 +1349,7 @@ to `C</cite> when all the block results are ready.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>block caching: 0.111125
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>block caching: 0.111314
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1406,7 +1406,7 @@ of thread-level parallelization.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallelization: 0.131606
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallelization: 0.133272
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1459,13 +1459,13 @@ working, we can compare the results.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>        Operator                  Timing             Performance
-            none            3.2337080406                     1.0
-        blocking            0.2976526778      0.0920468620119372
-   vectorization             0.295293496      0.0913173027040529
-loop permutation     0.11409895959999998    0.035284248969746024
-   array packing     0.10508163239999999     0.03249570804805946
-   block caching     0.11112465340000002     0.03436446704674716
- parallelization            0.1316057406     0.04069808991648521
+            none      3.5344147400000003                     1.0
+        blocking            0.3060639061     0.08659535697273603
+   vectorization             0.283986188     0.08034885798376903
+loop permutation            0.1169772386     0.03309663613501114
+   array packing            0.1080614719    0.030574077987237002
+   block caching            0.1113136881    0.031494234912567164
+ parallelization            0.1332718711      0.0377069135638564
 </pre></div>
 </div>
 <p>Note that the outputs on the web page reflect the running times on a
@@ -1497,6 +1497,7 @@ is</p>
 you can build generic templates of the matrix multiplication and other
 operations with tunable parameters that allow you to automatically optimize
 the computation for specific platforms.</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  1.924 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorial-tensor-expr-get-started-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../_downloads/40a01cffb015a67aaec0fad7e27cf80d/tensor_expr_get_started.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tensor_expr_get_started.py</span></code></a></p>