Posted to commits@tvm.apache.org by tq...@apache.org on 2023/07/18 01:20:41 UTC

[tvm-site] branch asf-site updated: deploying docs (apache/tvm@6c63e0db53a9fee57b22b53941a8e634c1c90227)

This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/tvm-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new e7832bf7de deploying docs (apache/tvm@6c63e0db53a9fee57b22b53941a8e634c1c90227)
e7832bf7de is described below

commit e7832bf7de784dbb2af3aa8d3304576526f05b62
Author: tvm-bot <95...@users.noreply.github.com>
AuthorDate: Tue Jul 18 01:20:35 2023 +0000

    deploying docs (apache/tvm@6c63e0db53a9fee57b22b53941a8e634c1c90227)
---
 .../how_to/compile_models/from_darknet.rst.txt     |   2 +-
 .../how_to/compile_models/from_mxnet.rst.txt       |   2 +-
 .../how_to/compile_models/from_oneflow.rst.txt     |   2 +-
 .../how_to/compile_models/from_paddle.rst.txt      |   2 +-
 .../how_to/compile_models/from_pytorch.rst.txt     |   2 +-
 .../how_to/compile_models/from_tensorflow.rst.txt  |   2 +-
 .../compile_models/sg_execution_times.rst.txt      |  22 +-
 .../deploy_models/deploy_model_on_adreno.rst.txt   |   4 +-
 .../deploy_model_on_adreno_tvmc.rst.txt            |   2 +-
 .../deploy_models/deploy_model_on_android.rst.txt  |   2 +-
 .../deploy_object_detection_pytorch.rst.txt        |   4 +-
 .../deploy_models/deploy_prequantized.rst.txt      |   6 +-
 .../deploy_prequantized_tflite.rst.txt             |   2 +-
 .../how_to/deploy_models/deploy_quantized.rst.txt  |   2 +-
 .../deploy_models/sg_execution_times.rst.txt       |  22 +-
 .../extend_tvm/bring_your_own_datatypes.rst.txt    |   2 +-
 .../how_to/extend_tvm/sg_execution_times.rst.txt   |   8 +-
 .../how_to/extend_tvm/use_pass_instrument.rst.txt  |  16 +-
 .../optimize_operators/opt_conv_cuda.rst.txt       |   2 +-
 .../optimize_operators/opt_conv_tensorcore.rst.txt |   2 +-
 .../how_to/optimize_operators/opt_gemm.rst.txt     |  16 +-
 .../optimize_operators/sg_execution_times.rst.txt  |   8 +-
 .../sg_execution_times.rst.txt                     |  14 +-
 .../tune_conv2d_layer_cuda.rst.txt                 |   2 +-
 .../tune_network_cuda.rst.txt                      |   4 +-
 .../tune_network_x86.rst.txt                       |   4 +-
 .../tune_with_autotvm/sg_execution_times.rst.txt   |  10 +-
 .../tune_with_autotvm/tune_conv2d_cuda.rst.txt     |   2 +-
 .../work_with_microtvm/micro_autotune.rst.txt      |  18 +-
 .../work_with_microtvm/micro_pytorch.rst.txt       |   4 +-
 .../how_to/work_with_microtvm/micro_train.rst.txt  |  16 +-
 .../work_with_microtvm/sg_execution_times.rst.txt  |  14 +-
 .../work_with_relay/sg_execution_times.rst.txt     |   8 +-
 .../how_to/work_with_schedules/intrin_math.rst.txt |   2 +-
 .../work_with_schedules/sg_execution_times.rst.txt |  18 +-
 .../tutorials/autotvm/sg_execution_times.rst.txt   |   6 +-
 .../frontend/deploy_classification.rst.txt         |   4 +-
 .../tutorials/frontend/deploy_detection.rst.txt    |   4 +-
 .../tutorials/frontend/sg_execution_times.rst.txt  |   6 +-
 .../tutorials/optimize/sg_execution_times.rst.txt  |   6 +-
 .../topic/vta/tutorials/sg_execution_times.rst.txt |   6 +-
 .../tutorial/auto_scheduler_matmul_x86.rst.txt     |   4 +-
 docs/_sources/tutorial/autotvm_matmul_x86.rst.txt  |  20 +-
 docs/_sources/tutorial/autotvm_relay_x86.rst.txt   |  58 ++---
 .../tutorial/cross_compilation_and_rpc.rst.txt     |   2 +-
 docs/_sources/tutorial/intro_topi.rst.txt          |   2 +-
 docs/_sources/tutorial/sg_execution_times.rst.txt  |  18 +-
 .../tutorial/tensor_expr_get_started.rst.txt       |  51 ++--
 docs/api/rust/help.html                            |   2 +-
 docs/api/rust/settings.html                        |   2 +-
 docs/commit_hash                                   |   2 +-
 docs/how_to/compile_models/from_darknet.html       |   2 +-
 docs/how_to/compile_models/from_mxnet.html         |   2 +-
 docs/how_to/compile_models/from_oneflow.html       |  16 +-
 docs/how_to/compile_models/from_paddle.html        |   2 +-
 docs/how_to/compile_models/from_pytorch.html       |  18 +-
 docs/how_to/compile_models/from_tensorflow.html    |   2 +-
 docs/how_to/compile_models/sg_execution_times.html |  26 +-
 .../deploy_models/deploy_model_on_adreno.html      |   4 +-
 .../deploy_models/deploy_model_on_adreno_tvmc.html |  33 ++-
 .../deploy_models/deploy_model_on_android.html     |   2 +-
 .../deploy_object_detection_pytorch.html           |  70 +++---
 docs/how_to/deploy_models/deploy_prequantized.html |   9 +-
 .../deploy_models/deploy_prequantized_tflite.html  |   2 +-
 docs/how_to/deploy_models/deploy_quantized.html    |   2 +-
 docs/how_to/deploy_models/sg_execution_times.html  |  22 +-
 .../extend_tvm/bring_your_own_datatypes.html       |   2 +-
 docs/how_to/extend_tvm/sg_execution_times.html     |   8 +-
 docs/how_to/extend_tvm/use_pass_instrument.html    |  16 +-
 docs/how_to/optimize_operators/opt_conv_cuda.html  |   2 +-
 .../optimize_operators/opt_conv_tensorcore.html    |   2 +-
 docs/how_to/optimize_operators/opt_gemm.html       |  16 +-
 .../optimize_operators/sg_execution_times.html     |   8 +-
 .../sg_execution_times.html                        |  14 +-
 .../tune_conv2d_layer_cuda.html                    |   2 +-
 .../tune_with_autoscheduler/tune_network_cuda.html |   4 +-
 .../tune_with_autoscheduler/tune_network_x86.html  |   4 +-
 .../tune_with_autotvm/sg_execution_times.html      |  10 +-
 .../how_to/tune_with_autotvm/tune_conv2d_cuda.html |   2 +-
 docs/how_to/work_with_microtvm/micro_autotune.html |  18 +-
 docs/how_to/work_with_microtvm/micro_pytorch.html  |   6 +-
 docs/how_to/work_with_microtvm/micro_train.html    |  16 +-
 .../work_with_microtvm/sg_execution_times.html     |  14 +-
 .../how_to/work_with_relay/sg_execution_times.html |   8 +-
 docs/how_to/work_with_schedules/intrin_math.html   |   2 +-
 .../work_with_schedules/sg_execution_times.html    |  18 +-
 docs/reference/api/python/auto_scheduler.html      |   4 +-
 .../api/typedoc/classes/bytestreamreader.html      |  12 +-
 .../api/typedoc/classes/cachedcallstack.html       |  34 +--
 docs/reference/api/typedoc/classes/dldatatype.html |  12 +-
 docs/reference/api/typedoc/classes/dldevice.html   |  10 +-
 .../reference/api/typedoc/classes/environment.html |  12 +-
 docs/reference/api/typedoc/classes/ffilibrary.html |  20 +-
 docs/reference/api/typedoc/classes/instance.html   |  58 ++---
 docs/reference/api/typedoc/classes/memory.html     |  34 +--
 docs/reference/api/typedoc/classes/module.html     |  10 +-
 docs/reference/api/typedoc/classes/ndarray.html    |  22 +-
 .../api/typedoc/classes/packedfunccell.html        |   6 +-
 docs/reference/api/typedoc/classes/rpcserver.html  |  14 +-
 .../api/typedoc/classes/runtimecontext.html        |  22 +-
 docs/reference/api/typedoc/classes/scalar.html     |   6 +-
 docs/reference/api/typedoc/classes/tvmarray.html   |  16 +-
 docs/reference/api/typedoc/classes/tvmobject.html  |  12 +-
 .../api/typedoc/classes/webgpucontext.html         |  12 +-
 docs/reference/api/typedoc/enums/argtypecode.html  |  30 +--
 .../api/typedoc/enums/aynccallbackcode.html        |   4 +-
 .../api/typedoc/enums/dldatatypecode.html          |   8 +-
 .../api/typedoc/enums/rpcserverstate.html          |  12 +-
 docs/reference/api/typedoc/enums/sizeof.html       |  18 +-
 docs/reference/api/typedoc/index.html              | 124 +++++-----
 .../api/typedoc/interfaces/disposable.html         |   2 +-
 .../api/typedoc/interfaces/functioninfo.html       |   6 +-
 .../api/typedoc/interfaces/libraryprovider.html    |   4 +-
 docs/searchindex.js                                |   2 +-
 .../vta/tutorials/autotvm/sg_execution_times.html  |   6 +-
 .../tutorials/frontend/deploy_classification.html  |   4 +-
 .../vta/tutorials/frontend/deploy_detection.html   |   4 +-
 .../vta/tutorials/frontend/sg_execution_times.html |   6 +-
 .../vta/tutorials/optimize/sg_execution_times.html |   6 +-
 docs/topic/vta/tutorials/sg_execution_times.html   |   6 +-
 docs/tutorial/auto_scheduler_matmul_x86.html       |   4 +-
 docs/tutorial/autotvm_matmul_x86.html              |  20 +-
 docs/tutorial/autotvm_relay_x86.html               | 268 ++++++++++-----------
 docs/tutorial/cross_compilation_and_rpc.html       |   2 +-
 docs/tutorial/intro_topi.html                      |   2 +-
 docs/tutorial/sg_execution_times.html              |  18 +-
 docs/tutorial/tensor_expr_get_started.html         |  47 ++--
 127 files changed, 873 insertions(+), 871 deletions(-)

diff --git a/docs/_sources/how_to/compile_models/from_darknet.rst.txt b/docs/_sources/how_to/compile_models/from_darknet.rst.txt
index 003d23271f..a5e5d8457c 100644
--- a/docs/_sources/how_to/compile_models/from_darknet.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_darknet.rst.txt
@@ -318,7 +318,7 @@ The process is no different from other examples.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  34.000 seconds)
+   **Total running time of the script:** ( 1 minutes  31.935 seconds)
 
 
 .. _sphx_glr_download_how_to_compile_models_from_darknet.py:
diff --git a/docs/_sources/how_to/compile_models/from_mxnet.rst.txt b/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
index 9718a82460..1793aa3011 100644
--- a/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
@@ -116,7 +116,7 @@ In this section, we download a pretrained imagenet model and classify an image.
 
  .. code-block:: none
 
-    Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip2b33c744-77bf-440e-8f69-394f9aac9175 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
+    Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip3c17ccdc-7b05-4710-bc61-77fca5fb5fb0 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
     x (1, 3, 224, 224)
 
 
diff --git a/docs/_sources/how_to/compile_models/from_oneflow.rst.txt b/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
index 8a6695f709..3a69b6463b 100644
--- a/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
@@ -121,7 +121,7 @@ Load a pretrained OneFlow model and save model
  .. code-block:: none
 
     Downloading: "https://oneflow-public.oss-cn-beijing.aliyuncs.com/model_zoo/flowvision/classification/ResNet/resnet18.zip" to /workspace/.oneflow/flowvision_cache/resnet18.zip
-
      0%|          | 0.00/41.5M [00:00<?, ?B/s]
     19%|#9        | 7.99M/41.5M [00:00<00:00, 52.2MB/s]
     35%|###4      | 14.3M/41.5M [00:00<00:00, 44.8MB/s]
     45%|####4     | 18.6M/41.5M [00:00<00:00, 42.7MB/s]
     58%|#####7    | 24.0M/41.5M [00:00<00:00, 32.8MB/s]
     77%|#######7  | 32.0M/41.5M [00:00<00:00, 36.9MB/s]
     92%|#########2| 38.3M/41.5M [00:01<00:00, 32.1MB/s]
    100%|##########| 41.5M/41.5M [00:01<00:00, 36.0MB/s]
+
      0%|          | 0.00/41.5M [00:00<?, ?B/s]
     16%|#5        | 6.58M/41.5M [00:00<00:00, 69.0MB/s]
     32%|###1      | 13.2M/41.5M [00:00<00:00, 57.4MB/s]
     45%|####5     | 18.8M/41.5M [00:00<00:00, 30.9MB/s]
     54%|#####4    | 22.6M/41.5M [00:00<00:00, 28.6MB/s]
     62%|######2   | 25.9M/41.5M [00:00<00:00, 26.0MB/s]
     77%|#######7  | 32.0M/41.5M [00:01<00:00, 29.4MB/s]
     92%|#########2| 38.3M/41.5M [00:01<00:00, 31.5MB/s]
    100%|#########9| 41.5M/41.5M [00:01<00:00, 25.9MB/s]
    100%|##########| 41.5M/41.5M [00:01<00:00, 29.9MB/s]
 
 
 
diff --git a/docs/_sources/how_to/compile_models/from_paddle.rst.txt b/docs/_sources/how_to/compile_models/from_paddle.rst.txt
index e2fdb124cd..3fa0b7dcd3 100644
--- a/docs/_sources/how_to/compile_models/from_paddle.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_paddle.rst.txt
@@ -209,7 +209,7 @@ Look up prediction top 1 index in 1000 class synset.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  3.873 seconds)
+   **Total running time of the script:** ( 1 minutes  0.979 seconds)
 
 
 .. _sphx_glr_download_how_to_compile_models_from_paddle.py:
diff --git a/docs/_sources/how_to/compile_models/from_pytorch.rst.txt b/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
index 5b7190a62f..a8cb9dee78 100644
--- a/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
@@ -101,7 +101,7 @@ Load a pretrained PyTorch model
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet18_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet18_Weights.DEFAULT` to get the most up-to-date weights.
       warnings.warn(msg)
     Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
-
      0%|          | 0.00/44.7M [00:00<?, ?B/s]
     15%|#5        | 6.72M/44.7M [00:00<00:00, 70.5MB/s]
     30%|###       | 13.4M/44.7M [00:00<00:01, 28.8MB/s]
     39%|###8      | 17.4M/44.7M [00:00<00:01, 24.4MB/s]
     54%|#####3    | 24.0M/44.7M [00:00<00:00, 29.3MB/s]
     68%|######7   | 30.3M/44.7M [00:01<00:00, 29.1MB/s]
     75%|#######4  | 33.4M/44.7M [00:01<00:00, 27.2MB/s]
     86%|########5 | 38.3M/44.7M [00:01<00:00, 28.7MB/s]
     92%|#########2| 41.2M/44.7M [00:01<00:00, 27.4MB/s]
    100%|##########| 44.7M/44.7M [00:01<00:00, 30.8MB/s]
+
      0%|          | 0.00/44.7M [00:00<?, ?B/s]
     14%|#4        | 6.30M/44.7M [00:00<00:01, 33.2MB/s]
     21%|##1       | 9.48M/44.7M [00:00<00:01, 25.4MB/s]
     36%|###5      | 16.0M/44.7M [00:00<00:00, 30.4MB/s]
     54%|#####3    | 24.0M/44.7M [00:00<00:00, 38.6MB/s]
     62%|######2   | 27.8M/44.7M [00:00<00:00, 38.7MB/s]
     72%|#######1  | 32.0M/44.7M [00:01<00:00, 32.6MB/s]
     86%|########5 | 38.3M/44.7M [00:01<00:00, 31.8MB/s]
     93%|#########2| 41.4M/44.7M [00:01<00:00, 26.9MB/s]
    100%|##########| 44.7M/44.7M [00:01<00:00, 32.7MB/s]
 
 
 
diff --git a/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt b/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
index 09caa87db6..e3febd5e6a 100644
--- a/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
@@ -430,7 +430,7 @@ Run the corresponding model on tensorflow
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  37.992 seconds)
+   **Total running time of the script:** ( 1 minutes  34.035 seconds)
 
 
 .. _sphx_glr_download_how_to_compile_models_from_tensorflow.py:
diff --git a/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt b/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
index c603a3ccd7..8265b0567b 100644
--- a/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
@@ -5,26 +5,26 @@
 
 Computation times
 =================
-**07:24.384** total execution time for **how_to_compile_models** files:
+**07:09.419** total execution time for **how_to_compile_models** files:
 
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_tensorflow.py` (``from_tensorflow.py``) | 01:37.992 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_tensorflow.py` (``from_tensorflow.py``) | 01:34.035 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_darknet.py` (``from_darknet.py``)       | 01:34.000 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_darknet.py` (``from_darknet.py``)       | 01:31.935 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_paddle.py` (``from_paddle.py``)         | 01:03.873 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_paddle.py` (``from_paddle.py``)         | 01:00.979 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_oneflow.py` (``from_oneflow.py``)       | 00:42.917 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_oneflow.py` (``from_oneflow.py``)       | 00:41.687 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_coreml.py` (``from_coreml.py``)         | 00:37.401 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_coreml.py` (``from_coreml.py``)         | 00:35.674 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_mxnet.py` (``from_mxnet.py``)           | 00:35.378 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_mxnet.py` (``from_mxnet.py``)           | 00:33.553 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_pytorch.py` (``from_pytorch.py``)       | 00:28.784 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_keras.py` (``from_keras.py``)           | 00:28.226 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_keras.py` (``from_keras.py``)           | 00:27.747 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_pytorch.py` (``from_pytorch.py``)       | 00:27.485 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_tflite.py` (``from_tflite.py``)         | 00:13.392 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_tflite.py` (``from_tflite.py``)         | 00:13.059 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_onnx.py` (``from_onnx.py``)             | 00:02.899 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_onnx.py` (``from_onnx.py``)             | 00:02.786 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
index a82baf7adc..cbe23977a0 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
@@ -673,7 +673,7 @@ well as provides information about the model's performance
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-     4229.4225    4223.6224    4271.7660    4221.5677     14.4009                  
+     4161.0131    4164.2507    4173.1748    4116.0565     15.5221                  
 
 
 
@@ -681,7 +681,7 @@ well as provides information about the model's performance
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  20.716 seconds)
+   **Total running time of the script:** ( 1 minutes  18.890 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_model_on_adreno.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt
index 23d9dfe34d..03d1715a26 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt
@@ -127,7 +127,7 @@ Make a Keras Resnet50 Model
  .. code-block:: none
 
     Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels.h5
-
         8192/102967424 [..............................] - ETA: 0s
      8380416/102967424 [=>............................] - ETA: 0s
     10575872/102967424 [==>...........................] - ETA: 1s
     16769024/102967424 [===>..........................] - ETA: 1s
     25157632/102967424 [======>.......................] - ETA: 0s
     33546240/102967424 [========>.....................] - ETA: 0s
     40189952/102967424 [==========>...................] - ETA: 0s
     41934848/102967424 [===========>..................] - ETA: 0s
 
     50323456/102967424 [=============>................] - ETA: 0s
     66035712/102967424 [==================>...........] - ETA: 0s
     69296128/102967424 [===================>..........] - ETA: 0s
     72540160/102967424 [====================>.........] - ETA: 0s
     75489280/102967424 [====================>.........] - ETA: 0s
     82124800/102967424 [======================>.......] - ETA: 0s
     83877888/102967424 [=======================>......] - ETA: 0s
     90521600/102967424 [=========================>....] - ETA: 0s
     92266496/102967424 [=========================>....] - ETA: 0s
    100646912/102967424 [============================>.] - ETA: 0s
    102850560/102967424 [============================>.] - ETA: 0s
    102967424/102967424 [==============================] - 1s 0us/step
+
         8192/102967424 [..............................] - ETA: 0s
      6684672/102967424 [>.............................] - ETA: 0s
      8380416/102967424 [=>............................] - ETA: 1s
     15024128/102967424 [===>..........................] - ETA: 1s
     16769024/102967424 [===>..........................] - ETA: 2s
     23412736/102967424 [=====>........................] - ETA: 1s
     25157632/102967424 [======>.......................] - ETA: 1s
     31580160/102967424 [========>.....................] - ETA: 1s
 
     33546240/102967424 [========>.....................] - ETA: 1s
     33816576/102967424 [========>.....................] - ETA: 1s
     40189952/102967424 [==========>...................] - ETA: 1s
     41934848/102967424 [===========>..................] - ETA: 1s
     45842432/102967424 [============>.................] - ETA: 1s
     50323456/102967424 [=============>................] - ETA: 1s
     56967168/102967424 [===============>..............] - ETA: 1s
     58712064/102967424 [================>.............] - ETA: 1s
     65355776/102967424 [==================>...........] - ETA: 0s
     67100672/102967424 [==================>...........] - ETA: 0s
     71540736/102967424 [===================>..........] - ETA: 0s
     75489280/102967424 [====================>.........] - ETA: 0s
     82124800/102967424 [======================>.......] - ETA: 0s
     83877888/102967424 [=======================>......] - ETA: 0s
     86999040/102967424 [========================>.....] - ETA: 0s
     92266496/102967424 [=========================>....] - ETA: 0s
    100646912/102967424 [============================>.] - ETA: 0s
    102850560/102967424 [============================>.] - ETA: 0s
    102967424/102967424 [==============================] - 2s 0us/step
 
 
 
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
index 9476c1d122..f62af4a4db 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
@@ -437,7 +437,7 @@ Execute on TVM
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      16.3814      16.2922      17.8350      14.9727       0.7011                  
+      15.6872      15.6687      15.9568      15.4194       0.1526                  
 
 
 
diff --git a/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt b/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
index aca0b383c9..57d4e3065f 100644
--- a/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
@@ -130,7 +130,7 @@ Load pre-trained maskrcnn from torchvision and do tracing
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=MaskRCNN_ResNet50_FPN_Weights.COCO_V1`. You can also use `weights=MaskRCNN_ResNet50_FPN_Weights.DEFAULT` to get the most up-to-date weights.
       warnings.warn(msg)
     Downloading: "https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth" to /workspace/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
-
      0%|          | 0.00/170M [00:00<?, ?B/s]
      4%|3         | 6.30M/170M [00:00<00:03, 43.8MB/s]
      6%|6         | 10.5M/170M [00:00<00:05, 31.3MB/s]
      8%|8         | 14.3M/170M [00:00<00:05, 27.7MB/s]
     10%|#         | 17.0M/170M [00:00<00:06, 24.3MB/s]
     13%|#3        | 22.3M/170M [00:00<00:04, 31.2MB/s]
     15%|#5        | 25.5M/170M [00:00<00:05, 25.9MB/s]
     18%|#7        | 30.3M/170M [00:01<00:04, 31.4MB/s]
     20%|#9        | 33.6M/170M [00:01<00:04, 29.5MB/s]
     24%|##3       | 40.0M/170M [00:01<00:03, 35.6MB/s]
     27%|##7       | 46.3M/170M [00:01<00:03, 40.9MB/s]
     30%|##9       | 50.4M/170M [00:01<00:03, 36.7MB/s]
     33%|###2      | 56.0M/170M [00:01<00:03, 33.6MB/s]
     37%|###6      | 62.3M/170M [00:01<00:03, 34.9MB/s]
     39%|###8      | 65.7M/170M [00:02<00:03, 32.9MB/s]
     42%|####2     | 72.0M/170M [00:02<00:02, 37.3MB/s]
     46%|####6     | 78.3M/170M [00:02<00:02, 42.7MB/s]
     49%|####8     | 82.6M/170M [00:02<00:02, 30.7MB/s]
     52%|#####1    | 88.0M/170M [00:02<00:02, 30.5MB/s]
     57%|#####6    | 96.0M/170M [00:03<00:02, 33.0MB/s]
     61%|######1   | 104M/170M [00:03<00:01, 35.0MB/s] 
     66%|######5   | 112M/170M [00:03<00:01, 35.3MB/s]
     71%|#######   | 120M/170M [00:03<00:01, 37.6MB/s]
     74%|#######4  | 126M/170M [00:03<00:01, 40.9MB/s]
     77%|#######6  | 130M/170M [00:03<00:01, 39.5MB/s]
     80%|########  | 136M/170M [00:04<00:00, 38.3MB/s]
     85%|########4 | 144M/170M [00:04<00:00, 41.8MB/s]
     88%|########8 | 150M/170M [00:04<00:00, 43.1MB/s]
     91%|######### | 154M/170M [00:04<00:00, 37.3MB/s]
     93%|#########3| 158M/170M [00:04<00:00, 35.3MB/s]
     95%|#########5| 162M/170M [00:04<00:00, 28.4MB/s]
     97%|#########6| 165M/170M [00:05<00:00, 25.1MB/s]
     98%|#########8| 167M/170M [00:05<00:00, 22.1MB/s]
    100%|#########9| 169M/170M [00:05<00:00, 21.8MB/s]
    100%|##########| 170M/170M [00:05<00:00, 33.0MB/s]
+
      0%|          | 0.00/170M [00:00<?, ?B/s]
      4%|3         | 6.30M/170M [00:00<00:08, 19.4MB/s]
      5%|4         | 8.16M/170M [00:00<00:09, 18.5MB/s]
      8%|8         | 14.3M/170M [00:00<00:05, 29.7MB/s]
     13%|#3        | 22.3M/170M [00:00<00:05, 29.5MB/s]
     15%|#4        | 25.3M/170M [00:01<00:06, 24.8MB/s]
     18%|#8        | 30.6M/170M [00:01<00:04, 30.8MB/s]
     20%|##        | 34.1M/170M [00:01<00:04, 28.8MB/s]
     24%|##3       | 40.0M/170M [00:01<00:04, 31.4MB/s]
     27%|##7       | 46.3M/170M [00:01<00:05, 25.9MB/s]
     29%|##8       | 49.1M/170M [00:01<00:05, 24.8MB/s]
     33%|###2      | 56.0M/170M [00:02<00:03, 31.7MB/s]
     37%|###6      | 62.3M/170M [00:02<00:03, 36.0MB/s]
     39%|###8      | 66.1M/170M [00:02<00:03, 35.1MB/s]
     42%|####2     | 72.0M/170M [00:02<00:02, 37.4MB/s]
     46%|####6     | 78.7M/170M [00:02<00:02, 44.7MB/s]
     49%|####9     | 83.3M/170M [00:02<00:02, 42.9MB/s]
     52%|#####1    | 87.6M/170M [00:02<00:02, 33.7MB/s]
     54%|#####3    | 91.3M/170M [00:03<00:02, 27.8MB/s]
     57%|#####6    | 96.0M/170M [00:03<00:03, 24.9MB/s]
     60%|######    | 102M/170M [00:03<00:02, 28.2MB/s] 
     62%|######1   | 105M/170M [00:03<00:02, 24.9MB/s]
     66%|######5   | 112M/170M [00:03<00:02, 29.6MB/s]
     71%|#######   | 120M/170M [00:04<00:01, 33.5MB/s]
     74%|#######4  | 126M/170M [00:04<00:01, 35.9MB/s]
     76%|#######6  | 130M/170M [00:04<00:01, 30.9MB/s]
     80%|########  | 136M/170M [00:04<00:01, 29.2MB/s]
     83%|########3 | 141M/170M [00:04<00:00, 33.2MB/s]
     85%|########5 | 145M/170M [00:05<00:01, 26.1MB/s]
     88%|########8 | 150M/170M [00:05<00:00, 30.0MB/s]
     90%|######### | 154M/170M [00:05<00:00, 25.7MB/s]
     93%|#########3| 158M/170M [00:05<00:00, 28.4MB/s]
     95%|#########4| 161M/170M [00:05<00:00, 25.4MB/s]
     98%|#########7| 166M/170M [00:06<00:00, 22.0MB/s]
    100%|##########| 170M/170M [00:06<00:00, 29.3MB/s]
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/nn/functional.py:3912: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
       (torch.floor((input.size(i + 2).float() * torch.tensor(scale_factors[i], dtype=torch.float32)).float()))
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/ops/boxes.py:157: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
@@ -295,7 +295,7 @@ Get boxes with score larger than 0.9
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 3 minutes  37.380 seconds)
+   **Total running time of the script:** ( 3 minutes  27.104 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_object_detection_pytorch.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt b/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
index 7347eb7f97..36149db291 100644
--- a/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
@@ -227,7 +227,7 @@ training. Other models require a full post training calibration.
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=MobileNet_V2_Weights.IMAGENET1K_V1`. You can also use `weights=MobileNet_V2_Weights.DEFAULT` to get the most up-to-date weights.
       warnings.warn(msg)
     Downloading: "https://download.pytorch.org/models/mobilenet_v2-b0353104.pth" to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth
-
      0%|          | 0.00/13.6M [00:00<?, ?B/s]
     47%|####6     | 6.30M/13.6M [00:00<00:00, 34.1MB/s]
     70%|#######   | 9.55M/13.6M [00:00<00:00, 32.9MB/s]
    100%|##########| 13.6M/13.6M [00:00<00:00, 38.5MB/s]
+
      0%|          | 0.00/13.6M [00:00<?, ?B/s]
     59%|#####8    | 7.99M/13.6M [00:00<00:00, 46.2MB/s]
    100%|##########| 13.6M/13.6M [00:00<00:00, 54.4MB/s]
 
 
 
@@ -409,7 +409,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      89.2496      89.1013      93.0435      88.6885       0.5769                  
+      86.2764      86.2405      90.3213      85.7954       0.4626                  
 
 
 
@@ -457,7 +457,7 @@ TODO
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  28.865 seconds)
+   **Total running time of the script:** ( 1 minutes  23.630 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_prequantized.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt b/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
index abfb534db3..bc6453472d 100644
--- a/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
@@ -423,7 +423,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      110.3457     110.1860     114.1777     109.6324      0.6613                  
+      106.6157     106.5555     111.8867     105.0673      0.6880                  
 
 
 
diff --git a/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt b/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
index c53199a2a2..6f7ee27ddd 100644
--- a/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
@@ -257,7 +257,7 @@ We create a Relay VM to build and execute the model.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 2 minutes  7.848 seconds)
+   **Total running time of the script:** ( 2 minutes  3.579 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_quantized.py:
diff --git a/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt b/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
index f180920dcf..fc706f0218 100644
--- a/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
@@ -5,26 +5,26 @@
 
 Computation times
 =================
-**12:14.481** total execution time for **how_to_deploy_models** files:
+**11:45.590** total execution time for **how_to_deploy_models** files:
 
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``) | 03:37.380 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``) | 03:27.104 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_quantized.py` (``deploy_quantized.py``)                               | 02:07.848 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_quantized.py` (``deploy_quantized.py``)                               | 02:03.579 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized.py` (``deploy_prequantized.py``)                         | 01:28.865 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized.py` (``deploy_prequantized.py``)                         | 01:23.630 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno.py` (``deploy_model_on_adreno.py``)                   | 01:20.716 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno.py` (``deploy_model_on_adreno.py``)                   | 01:18.890 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)           | 00:58.138 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)           | 00:55.450 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_android.py` (``deploy_model_on_android.py``)                 | 00:50.618 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_android.py` (``deploy_model_on_android.py``)                 | 00:48.602 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno_tvmc.py` (``deploy_model_on_adreno_tvmc.py``)         | 00:49.382 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno_tvmc.py` (``deploy_model_on_adreno_tvmc.py``)         | 00:48.339 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_nano.py` (``deploy_model_on_nano.py``)                       | 00:31.179 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_nano.py` (``deploy_model_on_nano.py``)                       | 00:30.183 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)                       | 00:30.348 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)                       | 00:29.808 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_sparse.py` (``deploy_sparse.py``)                                     | 00:00.007 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_sparse.py` (``deploy_sparse.py``)                                     | 00:00.006 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt b/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
index 8ae90b08f6..63752fdc32 100644
--- a/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
@@ -463,7 +463,7 @@ First let us define two helper functions to get the mobilenet model and a cat im
 
  .. code-block:: none
 
-    Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip87cdc82d-6e08-4ffe-be2a-0d1ba7100ffa from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
+    Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip10646a4e-dfa9-4259-8360-dc6359136126 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
 
 
 
diff --git a/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt b/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
index 750a28af83..8bce758381 100644
--- a/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
@@ -5,14 +5,14 @@
 
 Computation times
 =================
-**00:58.845** total execution time for **how_to_extend_tvm** files:
+**00:53.728** total execution time for **how_to_extend_tvm** files:
 
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``) | 00:54.942 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``) | 00:50.089 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_use_pass_instrument.py` (``use_pass_instrument.py``)           | 00:02.712 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_use_pass_instrument.py` (``use_pass_instrument.py``)           | 00:02.520 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_use_pass_infra.py` (``use_pass_infra.py``)                     | 00:01.184 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_use_pass_infra.py` (``use_pass_infra.py``)                     | 00:01.112 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_extend_tvm_low_level_custom_pass.py` (``low_level_custom_pass.py``)       | 00:00.007 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt b/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
index 9a7fb13e91..d629cba751 100644
--- a/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
@@ -220,10 +220,10 @@ profile the execution time of each passes.
  .. code-block:: none
 
     Printing results of timing profile...
-    InferType: 23329us [23329us] (48.48%; 48.48%)
-    FoldScaleAxis: 24790us [8us] (51.52%; 51.52%)
-            FoldConstant: 24782us [1810us] (51.50%; 99.97%)
-                    InferType: 22973us [22973us] (47.74%; 92.70%)
+    InferType: 22292us [22292us] (48.38%; 48.38%)
+    FoldScaleAxis: 23780us [7us] (51.62%; 51.62%)
+            FoldConstant: 23772us [1763us] (51.60%; 99.97%)
+                    InferType: 22009us [22009us] (47.77%; 92.58%)
 
 
 
@@ -262,10 +262,10 @@ Refer to following sections and :py:func:`tvm.instrument.pass_instrument` for th
  .. code-block:: none
 
     Printing results of timing profile...
-    InferType: 23478us [23478us] (48.81%; 48.81%)
-    FoldScaleAxis: 24625us [7us] (51.19%; 51.19%)
-            FoldConstant: 24618us [1917us] (51.18%; 99.97%)
-                    InferType: 22701us [22701us] (47.19%; 92.21%)
+    InferType: 21835us [21835us] (48.28%; 48.28%)
+    FoldScaleAxis: 23395us [5us] (51.72%; 51.72%)
+            FoldConstant: 23390us [1667us] (51.71%; 99.98%)
+                    InferType: 21722us [21722us] (48.03%; 92.87%)
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt b/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
index 2869581777..f8d085fe0d 100644
--- a/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
@@ -331,7 +331,7 @@ latency of convolution.
 
  .. code-block:: none
 
-    Convolution: 53.471614 ms
+    Convolution: 53.542945 ms
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt b/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
index 2311a45e2a..5008a733bb 100644
--- a/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
@@ -598,7 +598,7 @@ be able to run on our build server
 
  .. code-block:: none
 
-    conv2d with tensor core: 12.270800 ms
+    conv2d with tensor core: 12.273936 ms
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt b/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
index eeecab6ecf..e63d492efe 100644
--- a/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
@@ -134,8 +134,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
 
  .. code-block:: none
 
-    Numpy running time: 0.019034
-    Baseline: 3.413039
+    Numpy running time: 0.015656
+    Baseline: 3.396954
 
 
 
@@ -227,7 +227,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
 
  .. code-block:: none
 
-    Opt1: 0.304455
+    Opt1: 0.280310
 
 
 
@@ -318,7 +318,7 @@ In this tutorial, we chose to vectorize the inner loop row data since it is cach
 
  .. code-block:: none
 
-    Opt2: 0.285980
+    Opt2: 0.270487
 
 
 
@@ -406,7 +406,7 @@ the access pattern for A matrix is more cache friendly.
 
  .. code-block:: none
 
-    Opt3: 0.119482
+    Opt3: 0.110006
 
 
 
@@ -523,7 +523,7 @@ flattening.
 
  .. code-block:: none
 
-    Opt4: 0.107821
+    Opt4: 0.101239
 
 
 
@@ -635,7 +635,7 @@ write to C when all the block results are ready.
 
  .. code-block:: none
 
-    Opt5: 0.111831
+    Opt5: 0.101375
 
 
 
@@ -748,7 +748,7 @@ Furthermore, we can also utilize multi-core processors to do the thread-level pa
 
  .. code-block:: none
 
-    Opt6: 0.134663
+    Opt6: 0.125000
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt b/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
index 0f3c30b788..c8657582dc 100644
--- a/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
@@ -5,12 +5,12 @@
 
 Computation times
 =================
-**00:34.873** total execution time for **how_to_optimize_operators** files:
+**00:32.888** total execution time for **how_to_optimize_operators** files:
 
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_gemm.py` (``opt_gemm.py``)                       | 00:31.273 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_gemm.py` (``opt_gemm.py``)                       | 00:29.525 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``) | 00:02.149 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``) | 00:02.012 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_cuda.py` (``opt_conv_cuda.py``)             | 00:01.450 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_cuda.py` (``opt_conv_cuda.py``)             | 00:01.351 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
index 6d006737b4..03578b17bb 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
@@ -5,18 +5,18 @@
 
 Computation times
 =================
-**03:52.223** total execution time for **how_to_tune_with_autoscheduler** files:
+**03:39.341** total execution time for **how_to_tune_with_autoscheduler** files:
 
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_x86.py` (``tune_network_x86.py``)             | 01:41.250 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_x86.py` (``tune_network_x86.py``)             | 01:35.935 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_cuda.py` (``tune_network_cuda.py``)           | 01:19.678 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_cuda.py` (``tune_network_cuda.py``)           | 01:15.472 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``) | 00:18.242 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``) | 00:16.809 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_mali.py` (``tune_network_mali.py``)           | 00:16.624 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_mali.py` (``tune_network_mali.py``)           | 00:15.721 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_arm.py` (``tune_network_arm.py``)             | 00:16.326 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_arm.py` (``tune_network_arm.py``)             | 00:15.305 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_sparse_x86.py` (``tune_sparse_x86.py``)               | 00:00.103 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_sparse_x86.py` (``tune_sparse_x86.py``)               | 00:00.099 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
index 6356898a60..d765945834 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
@@ -767,7 +767,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 0.347 ms
+    Execution time of this operator: 0.349 ms
 
 
 
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
index 7b46ce2013..2ad14dbe7f 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
@@ -647,7 +647,7 @@ so we can read the log file and load the best schedules.
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-       8.1179       8.1191       8.1217       8.1129       0.0037                  
+       8.1041       8.1042       8.1155       8.0928       0.0093                  
 
 
 
@@ -674,7 +674,7 @@ Other Tips
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  19.678 seconds)
+   **Total running time of the script:** ( 1 minutes  15.472 seconds)
 
 
 .. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_network_cuda.py:
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
index 061c8678d8..c9ff9ed190 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
@@ -666,7 +666,7 @@ so we can read the log file and load the best schedules.
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      768.5636     768.9423     768.9955     767.7532      0.5735                  
+      717.0081     718.1229     719.1647     713.7368      2.3520                  
 
 
 
@@ -693,7 +693,7 @@ Other Tips
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  41.250 seconds)
+   **Total running time of the script:** ( 1 minutes  35.935 seconds)
 
 
 .. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_network_x86.py:
diff --git a/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt b/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
index 470837c862..38007682f2 100644
--- a/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
@@ -5,16 +5,16 @@
 
 Computation times
 =================
-**00:23.643** total execution time for **how_to_tune_with_autotvm** files:
+**00:22.872** total execution time for **how_to_tune_with_autotvm** files:
 
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)           | 00:23.603 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)           | 00:22.835 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``)               | 00:00.024 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``)               | 00:00.021 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_cuda.py` (``tune_relay_cuda.py``)             | 00:00.006 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_mobile_gpu.py` (``tune_relay_mobile_gpu.py``) | 00:00.005 | 0.0 MB |
-+--------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_arm.py` (``tune_relay_arm.py``)               | 00:00.005 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
+| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_mobile_gpu.py` (``tune_relay_mobile_gpu.py``) | 00:00.005 | 0.0 MB |
++--------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt b/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
index cf0f953221..41aa684890 100644
--- a/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
@@ -326,7 +326,7 @@ and measure running time.
 
     Best config:
     ,None
-    Time cost of this operator: 0.037073
+    Time cost of this operator: 0.037159
 
 
 
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
index b2ce1cc322..6f30be87f8 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
@@ -360,10 +360,10 @@ Timing the untuned program
     ########## Build without Autotuning ##########
     Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)  
     ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------  
-    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  304.4     98.653   (1, 2, 10, 10, 3)  2       1        [304.4]           
-    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.936     0.952    (1, 6, 10, 10)     1       1        [2.936]           
-    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         1.22      0.395    (1, 1, 10, 10, 3)  1       1        [1.22]            
-    Total_time                                    -                                             308.557   -        -                  -       -        -                 
+    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  294.3     98.747   (1, 2, 10, 10, 3)  2       1        [294.3]           
+    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.81      0.943    (1, 6, 10, 10)     1       1        [2.81]            
+    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.924     0.31     (1, 1, 10, 10, 3)  1       1        [0.924]           
+    Total_time                                    -                                             298.034   -        -                  -       -        -                 
 
 
 
@@ -428,10 +428,10 @@ Timing the tuned program
     ########## Build with Autotuning ##########
     Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)  
     ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------  
-    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  100.7     96.949   (1, 6, 10, 10, 1)  2       1        [100.7]           
-    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.016     1.941    (1, 6, 10, 10)     1       1        [2.016]           
-    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         1.153     1.11     (1, 1, 10, 10, 3)  1       1        [1.153]           
-    Total_time                                    -                                             103.869   -        -                  -       -        -                 
+    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  99.062    97.396   (1, 6, 10, 10, 1)  2       1        [99.062]          
+    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.724     1.695    (1, 6, 10, 10)     1       1        [1.724]           
+    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.924     0.909    (1, 1, 10, 10, 3)  1       1        [0.924]           
+    Total_time                                    -                                             101.711   -        -                  -       -        -                 
 
 
 
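The per-node tables above come from running the compiled model under a debug executor, which times each fused operator individually. A host-side sketch of the same idea, assuming ``mod`` and ``params`` stand in for the tutorial's small conv2d model and ``"data"`` is the input name:

.. code-block:: python

    import numpy as np
    import tvm
    from tvm import relay
    from tvm.contrib.debugger import debug_executor

    # mod / params are placeholders for the Relay module and weights built earlier.
    lib = relay.build(mod, target="llvm", params=params)
    dev = tvm.cpu(0)
    m = debug_executor.create(lib.get_graph_json(), lib.get_lib(), dev)
    m.set_input(**lib.get_params())
    m.set_input("data", np.random.uniform(size=(1, 3, 10, 10)).astype("float32"))
    m.run()  # prints a per-node time breakdown similar to the tables above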
@@ -439,7 +439,7 @@ Timing the tuned program
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  28.171 seconds)
+   **Total running time of the script:** ( 1 minutes  21.236 seconds)
 
 
 .. _sphx_glr_download_how_to_work_with_microtvm_micro_autotune.py:
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
index 1592e1598b..ddeece98ac 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
@@ -118,7 +118,7 @@ download a cat image and preprocess it to use as the model input.
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/ao/quantization/utils.py:310: UserWarning: must run observer before calling calculate_qparams. Returning default values.
       warnings.warn(
     Downloading: "https://download.pytorch.org/models/quantized/mobilenet_v2_qnnpack_37f702c5.pth" to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2_qnnpack_37f702c5.pth
-
      0%|          | 0.00/3.42M [00:00<?, ?B/s]
     61%|######    | 2.09M/3.42M [00:00<00:00, 13.5MB/s]
    100%|##########| 3.42M/3.42M [00:00<00:00, 21.4MB/s]
+
      0%|          | 0.00/3.42M [00:00<?, ?B/s]
     95%|#########5| 3.27M/3.42M [00:00<00:00, 34.2MB/s]
    100%|##########| 3.42M/3.42M [00:00<00:00, 35.5MB/s]
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/_utils.py:314: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
       device=storage.device,
     /workspace/python/tvm/relay/frontend/pytorch_utils.py:47: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
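The download above is the quantized MobileNetV2 checkpoint that the tutorial traces and imports into Relay. A condensed sketch of that import step; the input name and shape here are assumptions:

.. code-block:: python

    import torch
    import torchvision
    from tvm import relay

    input_shape = (1, 3, 224, 224)
    model = torchvision.models.quantization.mobilenet_v2(pretrained=True, quantize=True).eval()
    scripted = torch.jit.trace(model, torch.randn(input_shape)).eval()

    # "input0" is an assumed input name; the (name, shape) list tells the
    # PyTorch frontend how to bind the traced graph's inputs.
    mod, params = relay.frontend.from_pytorch(scripted, [("input0", input_shape)])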
@@ -326,7 +326,7 @@ Look up prediction top 1 index in 1000 class synset.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  29.489 seconds)
+   **Total running time of the script:** ( 1 minutes  23.096 seconds)
 
 
 .. _sphx_glr_download_how_to_work_with_microtvm_micro_pytorch.py:
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
index 30cb15be87..0612b498eb 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
@@ -217,7 +217,7 @@ take about **2 minutes** to download the Stanford Cars, while COCO 2017 validati
  .. code-block:: none
 
 
-    '/tmp/tmph22ojcyw/images/random'
+    '/tmp/tmp0i0p6kok/images/random'
 
 
 
@@ -317,8 +317,8 @@ objects to other stuff? We can display some examples from our datasets using ``m
 
  .. code-block:: none
 
-    /tmp/tmph22ojcyw/images/target contains 8144 images
-    /tmp/tmph22ojcyw/images/random contains 5000 images
+    /tmp/tmp0i0p6kok/images/target contains 8144 images
+    /tmp/tmp0i0p6kok/images/random contains 5000 images
 
 
 
@@ -493,13 +493,13 @@ the time on our validation set).
  .. code-block:: none
 
     Epoch 1/3
-    328/328 - 41s - loss: 0.2171 - accuracy: 0.9238 - val_loss: 0.1282 - val_accuracy: 0.9547 - 41s/epoch - 124ms/step
+    328/328 - 40s - loss: 0.2137 - accuracy: 0.9261 - val_loss: 0.1172 - val_accuracy: 0.9585 - 40s/epoch - 123ms/step
     Epoch 2/3
-    328/328 - 36s - loss: 0.0995 - accuracy: 0.9635 - val_loss: 0.0903 - val_accuracy: 0.9668 - 36s/epoch - 109ms/step
+    328/328 - 34s - loss: 0.1009 - accuracy: 0.9633 - val_loss: 0.1074 - val_accuracy: 0.9634 - 34s/epoch - 104ms/step
     Epoch 3/3
-    328/328 - 36s - loss: 0.0629 - accuracy: 0.9765 - val_loss: 0.1174 - val_accuracy: 0.9630 - 36s/epoch - 108ms/step
+    328/328 - 34s - loss: 0.0658 - accuracy: 0.9761 - val_loss: 0.1009 - val_accuracy: 0.9671 - 34s/epoch - 104ms/step
 
-    <keras.callbacks.History object at 0x7f1da3af9820>
+    <keras.callbacks.History object at 0x7f74e4ec8640>
 
 
 
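The epoch log and the ``History`` object above are standard Keras ``fit`` output; roughly, with ``model`` and the dataset objects as placeholders for those built earlier in the tutorial:

.. code-block:: python

    # train_dataset / validation_dataset are assumed to be the data pipelines
    # built earlier; model is the MobileNet-based classifier being trained.
    history = model.fit(
        train_dataset,
        validation_data=validation_dataset,
        epochs=3,
        verbose=2,  # one summary line per epoch, as in the log above
    )
    print(history)                           # <keras.callbacks.History object at 0x...>
    print(history.history["val_accuracy"])   # per-epoch validation accuracy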
@@ -860,7 +860,7 @@ Arduino tutorial for how to do that `on GitHub <https://github.com/guberti/tvm-a
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 4 minutes  53.923 seconds)
+   **Total running time of the script:** ( 4 minutes  39.033 seconds)
 
 
 .. _sphx_glr_download_how_to_work_with_microtvm_micro_train.py:
diff --git a/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
index 0589037ebd..da654b9858 100644
--- a/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
@@ -5,20 +5,20 @@
 
 Computation times
 =================
-**08:22.216** total execution time for **how_to_work_with_microtvm** files:
+**07:51.719** total execution time for **how_to_work_with_microtvm** files:
 
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_train.py` (``micro_train.py``)           | 04:53.923 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_train.py` (``micro_train.py``)           | 04:39.033 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_pytorch.py` (``micro_pytorch.py``)       | 01:29.489 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_pytorch.py` (``micro_pytorch.py``)       | 01:23.096 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_autotune.py` (``micro_autotune.py``)     | 01:28.171 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_autotune.py` (``micro_autotune.py``)     | 01:21.236 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_aot.py` (``micro_aot.py``)               | 00:12.756 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_aot.py` (``micro_aot.py``)               | 00:11.808 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_tflite.py` (``micro_tflite.py``)         | 00:09.025 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_tflite.py` (``micro_tflite.py``)         | 00:08.586 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_custom_ide.py` (``micro_custom_ide.py``) | 00:08.852 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_custom_ide.py` (``micro_custom_ide.py``) | 00:07.960 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_work_with_microtvm_micro_ethosu.py` (``micro_ethosu.py``)         | 00:00.000 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
index 938875fda5..a68bd69150 100644
--- a/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
@@ -5,14 +5,14 @@
 
 Computation times
 =================
-**00:41.073** total execution time for **how_to_work_with_relay** files:
+**00:38.213** total execution time for **how_to_work_with_relay** files:
 
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_using_pipeline_executor.py` (``using_pipeline_executor.py``) | 00:35.708 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_using_pipeline_executor.py` (``using_pipeline_executor.py``) | 00:33.220 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_using_external_lib.py` (``using_external_lib.py``)           | 00:03.369 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_using_external_lib.py` (``using_external_lib.py``)           | 00:03.133 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_build_gcn.py` (``build_gcn.py``)                             | 00:01.989 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_build_gcn.py` (``build_gcn.py``)                             | 00:01.855 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_work_with_relay_using_relay_viz.py` (``using_relay_viz.py``)                 | 00:00.006 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt b/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
index fb77778004..df647858d2 100644
--- a/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
@@ -278,7 +278,7 @@ The following example customizes CUDA lowering rule for :code:`exp`.
  .. code-block:: none
 
 
-    <function my_cuda_math_rule at 0x7f207c723dc0>
+    <function my_cuda_math_rule at 0x7f74fda06b80>
 
 
 
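The ``my_cuda_math_rule`` object printed above is a lowering callback that maps TIR intrinsic calls such as ``tir.exp`` to CUDA extern math functions. A sketch of such a rule, assuming the ``register_intrin_lowering`` helper exported from ``tvm.target``:

.. code-block:: python

    import tvm
    from tvm.target import register_intrin_lowering

    def my_cuda_math_rule(op):
        """Dispatch tir.* intrinsics to CUDA float/double math functions."""
        assert isinstance(op, tvm.tir.Call)
        name = op.op.name          # e.g. "tir.exp"
        dispatch_name = name[4:]   # -> "exp"
        if op.dtype == "float32":
            return tvm.tir.call_pure_extern("float32", "%sf" % dispatch_name, op.args[0])
        elif op.dtype == "float64":
            return tvm.tir.call_pure_extern("float64", dispatch_name, op.args[0])
        return op                  # no translation available, keep the call as-is

    register_intrin_lowering("tir.exp", target="cuda", f=my_cuda_math_rule, level=99)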
diff --git a/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
index f7dc90512e..7b969ff064 100644
--- a/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
@@ -5,22 +5,22 @@
 
 Computation times
 =================
-**00:06.854** total execution time for **how_to_work_with_schedules** files:
+**00:07.963** total execution time for **how_to_work_with_schedules** files:
 
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_intrin_math.py` (``intrin_math.py``)                 | 00:03.467 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_intrin_math.py` (``intrin_math.py``)                 | 00:04.900 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tensorize.py` (``tensorize.py``)                     | 00:01.519 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tensorize.py` (``tensorize.py``)                     | 00:01.374 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_reduction.py` (``reduction.py``)                     | 00:00.793 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_reduction.py` (``reduction.py``)                     | 00:00.713 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_scan.py` (``scan.py``)                               | 00:00.784 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_scan.py` (``scan.py``)                               | 00:00.694 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_extern_op.py` (``extern_op.py``)                     | 00:00.117 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_extern_op.py` (``extern_op.py``)                     | 00:00.114 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tedd.py` (``tedd.py``)                               | 00:00.084 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tedd.py` (``tedd.py``)                               | 00:00.080 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_schedule_primitives.py` (``schedule_primitives.py``) | 00:00.059 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_schedule_primitives.py` (``schedule_primitives.py``) | 00:00.058 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tuple_inputs.py` (``tuple_inputs.py``)               | 00:00.032 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tuple_inputs.py` (``tuple_inputs.py``)               | 00:00.030 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
index b15e974434..53a1a5cb26 100644
--- a/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:35.867** total execution time for **topic_vta_tutorials_autotvm** files:
+**00:33.102** total execution time for **topic_vta_tutorials_autotvm** files:
 
 +---------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``) | 00:35.860 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``) | 00:33.095 | 0.0 MB |
 +---------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_alu_vta.py` (``tune_alu_vta.py``)     | 00:00.008 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_alu_vta.py` (``tune_alu_vta.py``)     | 00:00.007 | 0.0 MB |
 +---------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
index d4876f0508..565a1293e5 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
@@ -293,7 +293,7 @@ The compilation steps are:
       warnings.warn(
     /workspace/vta/tutorials/frontend/deploy_classification.py:212: DeprecationWarning: legacy graph executor behavior of producing json / lib / params will be removed in the next release. Please see documents of tvm.contrib.graph_executor.GraphModule for the  new recommended usage.
       graph, lib, params = relay.build(
-    resnet18_v1 inference graph built in 38.41s!
+    resnet18_v1 inference graph built in 35.27s!
 
 
 
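The deprecation warning above refers to the older triple-return form of ``relay.build``; the recommended pattern keeps the built library whole and wraps it in a ``GraphModule``. A minimal sketch, with ``mod``, ``params``, ``target``, ``dev`` and the input assumed from the surrounding tutorial:

.. code-block:: python

    from tvm import relay
    from tvm.contrib import graph_executor

    # Instead of: graph, lib, params = relay.build(...)
    lib = relay.build(mod, target=target, params=params)
    m = graph_executor.GraphModule(lib["default"](dev))
    m.set_input("data", image)   # "data" / image are placeholders
    m.run()
    out = m.get_output(0)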
@@ -416,7 +416,7 @@ and an input test image.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  5.801 seconds)
+   **Total running time of the script:** ( 1 minutes  1.619 seconds)
 
 
 .. _sphx_glr_download_topic_vta_tutorials_frontend_deploy_classification.py:
diff --git a/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
index ff0919da94..856bbb83f2 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
@@ -337,7 +337,7 @@ The compilation steps are:
 
     /workspace/python/tvm/relay/build_module.py:345: DeprecationWarning: Please use input parameter mod (tvm.IRModule) instead of deprecated parameter mod (tvm.relay.function.Function)
       warnings.warn(
-    yolov3-tiny inference graph built in 26.40s!
+    yolov3-tiny inference graph built in 24.40s!
 
 
 
@@ -447,7 +447,7 @@ Download test image
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  9.964 seconds)
+   **Total running time of the script:** ( 1 minutes  6.755 seconds)
 
 
 .. _sphx_glr_download_topic_vta_tutorials_frontend_deploy_detection.py:
diff --git a/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
index 7dc88766d6..c7eeedd4eb 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**02:15.766** total execution time for **topic_vta_tutorials_frontend** files:
+**02:08.373** total execution time for **topic_vta_tutorials_frontend** files:
 
 +------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_detection.py` (``deploy_detection.py``)           | 01:09.964 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_detection.py` (``deploy_detection.py``)           | 01:06.755 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``) | 01:05.801 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``) | 01:01.619 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
index 0ada71f4b8..88bef13234 100644
--- a/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:03.483** total execution time for **topic_vta_tutorials_optimize** files:
+**00:03.300** total execution time for **topic_vta_tutorials_optimize** files:
 
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)         | 00:02.908 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)         | 00:02.798 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``) | 00:00.576 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``) | 00:00.502 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
index 67a15b9dab..561d9b99a5 100644
--- a/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:00.992** total execution time for **topic_vta_tutorials** files:
+**00:00.825** total execution time for **topic_vta_tutorials** files:
 
 +---------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``) | 00:00.508 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``) | 00:00.422 | 0.0 MB |
 +---------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_vta_get_started.py` (``vta_get_started.py``) | 00:00.484 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_vta_get_started.py` (``vta_get_started.py``) | 00:00.403 | 0.0 MB |
 +---------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt b/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
index d188c17a3b..a88728e624 100644
--- a/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
+++ b/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
@@ -318,7 +318,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 93.868 ms
+    Execution time of this operator: 93.718 ms
 
 
 
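The execution time above comes from building the tuned schedule and timing it against a NumPy reference. A condensed sketch of that check, where ``sch`` and ``args`` are assumed to be the schedule and tensor list recovered from the tuning log, and the 1024x1024 problem size is an assumption:

.. code-block:: python

    import numpy as np
    import tvm

    func = tvm.build(sch, args, target="llvm")
    a = np.random.uniform(size=(1024, 1024)).astype(np.float32)
    b = np.random.uniform(size=(1024, 1024)).astype(np.float32)
    dev = tvm.cpu()
    a_tvm, b_tvm = tvm.nd.array(a, dev), tvm.nd.array(b, dev)
    c_tvm = tvm.nd.empty((1024, 1024), device=dev)

    func(a_tvm, b_tvm, c_tvm)
    np.testing.assert_allclose(c_tvm.numpy(), a @ b, rtol=1e-3)

    evaluator = func.time_evaluator(func.entry_name, dev, min_repeat_ms=500)
    print("Execution time of this operator: %.3f ms" % (evaluator(a_tvm, b_tvm, c_tvm).mean * 1000))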
@@ -434,7 +434,7 @@ operations.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  14.973 seconds)
+   **Total running time of the script:** ( 1 minutes  30.142 seconds)
 
 
 .. _sphx_glr_download_tutorial_auto_scheduler_matmul_x86.py:
diff --git a/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt b/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
index b4fbdeb8d6..f7286491f0 100644
--- a/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
+++ b/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
@@ -454,16 +454,16 @@ reduce variance, we take 5 measurements and average them.
     waiting for device...
     device available
     Get devices for measurement successfully!
-    No: 1   GFLOPS: 1.35/1.35       result: MeasureResult(costs=(0.19876891800000002,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.439051389694214, timestamp=1689605227.7057743) [('tile_y', [-1, 2]), ('tile_x', [-1, 1])],None,1
-    No: 2   GFLOPS: 1.02/1.35       result: MeasureResult(costs=(0.2622365216,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.434575080871582, timestamp=1689605232.1642838)        [('tile_y', [-1, 256]), ('tile_x', [-1, 2])],None,18
-    No: 3   GFLOPS: 3.51/3.51       result: MeasureResult(costs=(0.0764074588,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.487936019897461, timestamp=1689605233.6547983)        [('tile_y', [-1, 512]), ('tile_x', [-1, 8])],None,39
-    No: 4   GFLOPS: 10.07/10.07     result: MeasureResult(costs=(0.026649061799999995,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6629624366760254, timestamp=1689605234.3535738)       [('tile_y', [-1, 512]), ('tile_x', [-1, 512])],None,99
-    No: 5   GFLOPS: 2.03/10.07      result: MeasureResult(costs=(0.13247077440000002,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.3836374282836914, timestamp=1689605236.8544886)        [('tile_y', [-1, 256]), ('tile_x', [-1, 4])],None,28
-    No: 6   GFLOPS: 11.08/11.08     result: MeasureResult(costs=(0.024221291,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6559348106384277, timestamp=1689605237.5138962)        [('tile_y', [-1, 4]), ('tile_x', [-1, 512])],None,92
-    No: 7   GFLOPS: 11.15/11.15     result: MeasureResult(costs=(0.0240818594,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6616318225860596, timestamp=1689605238.17214) [('tile_y', [-1, 16]), ('tile_x', [-1, 256])],None,84
-    No: 8   GFLOPS: 11.26/11.26     result: MeasureResult(costs=(0.02384155,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6424837112426758, timestamp=1689605238.8267894) [('tile_y', [-1, 32]), ('tile_x', [-1, 512])],None,95
-    No: 9   GFLOPS: 8.18/11.26      result: MeasureResult(costs=(0.0328242486,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7511351108551025, timestamp=1689605239.705403)        [('tile_y', [-1, 1]), ('tile_x', [-1, 8])],None,30
-    No: 10  GFLOPS: 11.64/11.64     result: MeasureResult(costs=(0.023060854,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6159284114837646, timestamp=1689605240.3469317)        [('tile_y', [-1, 64]), ('tile_x', [-1, 512])],None,96
+    No: 1   GFLOPS: 10.49/10.49     result: MeasureResult(costs=(0.0255916892,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7722578048706055, timestamp=1689639231.3839726)       [('tile_y', [-1, 8]), ('tile_x', [-1, 128])],None,73
+    No: 2   GFLOPS: 12.78/12.78     result: MeasureResult(costs=(0.0210088066,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.5968716144561768, timestamp=1689639231.9832437)       [('tile_y', [-1, 4]), ('tile_x', [-1, 512])],None,92
+    No: 3   GFLOPS: 9.28/12.78      result: MeasureResult(costs=(0.028933529399999998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.735072135925293, timestamp=1689639232.698494) [('tile_y', [-1, 256]), ('tile_x', [-1, 16])],None,48
+    No: 4   GFLOPS: 3.68/12.78      result: MeasureResult(costs=(0.0728832788,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.423401117324829, timestamp=1689639234.127702) [('tile_y', [-1, 4]), ('tile_x', [-1, 2])],None,12
+    No: 5   GFLOPS: 12.70/12.78     result: MeasureResult(costs=(0.0211333772,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.608086109161377, timestamp=1689639234.908141) [('tile_y', [-1, 8]), ('tile_x', [-1, 512])],None,93
+    No: 6   GFLOPS: 8.92/12.78      result: MeasureResult(costs=(0.0300933974,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7456381320953369, timestamp=1689639235.6465182)       [('tile_y', [-1, 8]), ('tile_x', [-1, 8])],None,33
+    No: 7   GFLOPS: 14.86/14.86     result: MeasureResult(costs=(0.018067073,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.60986328125, timestamp=1689639236.1922264)     [('tile_y', [-1, 32]), ('tile_x', [-1, 64])],None,65
+    No: 8   GFLOPS: 10.82/14.86     result: MeasureResult(costs=(0.024803384,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6664876937866211, timestamp=1689639236.8437579)        [('tile_y', [-1, 8]), ('tile_x', [-1, 16])],None,43
+    No: 9   GFLOPS: 10.64/14.86     result: MeasureResult(costs=(0.025232743600000003,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.664832592010498, timestamp=1689639237.6277812)        [('tile_y', [-1, 4]), ('tile_x', [-1, 64])],None,62
+    No: 10  GFLOPS: 11.70/14.86     result: MeasureResult(costs=(0.0229420726,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6484942436218262, timestamp=1689639238.2583177)       [('tile_y', [-1, 4]), ('tile_x', [-1, 128])],None,72
 
 
 
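Each ``No: k  GFLOPS: ...`` record above is one measured point in the ``tile_y``/``tile_x`` search space. A compact sketch of how such a space is declared and explored; the task name, matrix sizes, and trial count here are illustrative:

.. code-block:: python

    from tvm import te, autotvm

    @autotvm.template("tutorial/matmul_sketch")   # arbitrary task name
    def matmul(N, L, M, dtype):
        A = te.placeholder((N, L), name="A", dtype=dtype)
        B = te.placeholder((L, M), name="B", dtype=dtype)
        k = te.reduce_axis((0, L), name="k")
        C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
        s = te.create_schedule(C.op)
        y, x = s[C].op.axis
        cfg = autotvm.get_config()
        cfg.define_split("tile_y", y, num_outputs=2)   # the knobs seen in the log above
        cfg.define_split("tile_x", x, num_outputs=2)
        yo, yi = cfg["tile_y"].apply(s, C, y)
        xo, xi = cfg["tile_x"].apply(s, C, x)
        s[C].reorder(yo, xo, k, yi, xi)
        return s, [A, B, C]

    task = autotvm.task.create("tutorial/matmul_sketch", args=(512, 512, 512, "float32"), target="llvm")
    measure_option = autotvm.measure_option(
        builder=autotvm.LocalBuilder(), runner=autotvm.LocalRunner(number=5)
    )
    tuner = autotvm.tuner.RandomTuner(task)
    tuner.tune(
        n_trial=10,
        measure_option=measure_option,
        callbacks=[autotvm.callback.log_to_file("matmul.log")],
    )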
diff --git a/docs/_sources/tutorial/autotvm_relay_x86.rst.txt b/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
index df4b9f3eb5..149fc45ee3 100644
--- a/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
+++ b/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
@@ -311,7 +311,7 @@ standard deviation.
 
  .. code-block:: none
 
-    {'mean': 494.39252055002726, 'median': 494.9026871498063, 'std': 3.1817584438598923}
+    {'mean': 495.16211137008213, 'median': 494.3268438999439, 'std': 3.6706157178015775}
 
 
 
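The statistics dictionary above is produced by repeatedly timing ``module.run()`` and summarising the samples in milliseconds; roughly, with ``module`` as the graph executor built earlier and repeat counts that are illustrative:

.. code-block:: python

    import timeit
    import numpy as np

    timing_number = 10   # runs per measurement
    timing_repeat = 10   # measurements
    samples_ms = (
        np.array(timeit.Timer(lambda: module.run()).repeat(repeat=timing_repeat, number=timing_number))
        * 1000 / timing_number
    )
    unoptimized = {
        "mean": np.mean(samples_ms),
        "median": np.median(samples_ms),
        "std": np.std(samples_ms),
    }
    print(unoptimized)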
@@ -582,31 +582,31 @@ the tuning data to.
 
  .. code-block:: none
 
-
    [Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  1/25]  Current/Best:    9.33/   9.33 GFLOPS | Progress: (4/20) | 9.75 s
    [Task  1/25]  Current/Best:   22.64/  22.64 GFLOPS | Progress: (8/20) | 11.85 s
    [Task  1/25]  Current/Best:   21.04/  22.64 GFLOPS | Progress: (12/20) | 15.05 s
    [Task  1/25]  Current/Best:   13.22/  22.86 GFLOPS | Progress: (16/20) | 19.87 s
    [Task  1/25]  Current/Best:   13.53/  22.86 GFLOPS | Progress: (20/20) | 24.25 s Done.
-
    [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  2/25]  Current/Best:   11.97/  22.81 GFLOPS | Progress: (4/20) | 4.60 s
    [Task  2/25]  Current/Best:   20.74/  22.81 GFLOPS | Progress: (8/20) | 6.23 s
    [Task  2/25]  Current/Best:    6.50/  22.81 GFLOPS | Progress: (12/20) | 7.76 s
    [Task  2/25]  Current/Best:   19.69/  22.81 GFLOPS | Progress: (16/20) | 9.34 s
    [Task  2/25]  Current/Best:   15.12/  22.81 GFLOPS | Progress: (20/20) | 10.94 s Done.
-
    [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  3/25]  Current/Best:   11.72/  16.61 GFLOPS | Progress: (4/20) | 5.91 s
    [Task  3/25]  Current/Best:   13.26/  16.61 GFLOPS | Progress: (8/20) | 9.01 s
    [Task  3/25]  Current/Best:   12.77/  23.16 GFLOPS | Progress: (12/20) | 11.21 s
    [Task  3/25]  Current/Best:   10.97/  23.16 GFLOPS | Progress: (16/20) | 13.46 s
    [Task  3/25]  Current/Best:   10.50/  23.16 GFLOPS | Progress: (20/20) | 16.17 s Done.
-
    [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  4/25]  Current/Best:    4.56/  21.47 GFLOPS | Progress: (4/20) | 4.97 s
    [Task  4/25]  Current/Best:   14.29/  21.47 GFLOPS | Progress: (8/20) | 7.17 s
    [Task  4/25]  Current/Best:   10.49/  21.47 GFLOPS | Progress: (12/20) | 14.87 s
    [Task  4/25]  Current/Best:    7.89/  21.47 GFLOPS | Progress: (16/20) | 22.75 s
    [Task  4/25]  Current/Best:   13.84/  21.47 GFLOPS | Progress: (20/20) | 25.48 s Done.
-
    [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  5/25]  Current/Best:   17.67/  17.67 GFLOPS | Progress: (4/20) | 5.15 s
    [Task  5/25]  Current/Best:   17.67/  19.15 GFLOPS | Progress: (8/20) | 7.22 s
    [Task  5/25]  Current/Best:   19.82/  19.82 GFLOPS | Progress: (12/20) | 9.25 s
    [Task  5/25]  Current/Best:   16.12/  19.82 GFLOPS | Progress: (16/20) | 11.31 s
    [Task  5/25]  Current/Best:   22.42/  22.42 GFLOPS | Progress: (20/20) | 13.76 s Done.
-
    [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  6/25]  Current/Best:   18.39/  18.39 GFLOPS | Progress: (4/20) | 4.83 s
    [Task  6/25]  Current/Best:   10.81/  18.39 GFLOPS | Progress: (8/20) | 7.11 s
    [Task  6/25]  Current/Best:    4.83/  18.39 GFLOPS | Progress: (12/20) | 10.09 s
    [Task  6/25]  Current/Best:    3.23/  18.39 GFLOPS | Progress: (16/20) | 13.09 s
    [Task  6/25]  Current/Best:   12.04/  18.39 GFLOPS | Progress: (20/20) | 15.23 s Done.
-
    [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  7/25]  Current/Best:   15.33/  18.33 GFLOPS | Progress: (4/20) | 5.07 s
    [Task  7/25]  Current/Best:    6.46/  18.33 GFLOPS | Progress: (8/20) | 7.79 s
    [Task  7/25]  Current/Best:   10.47/  19.43 GFLOPS | Progress: (12/20) | 10.04 s
    [Task  7/25]  Current/Best:   16.45/  19.43 GFLOPS | Progress: (16/20) | 12.21 s
    [Task  7/25]  Current/Best:    6.27/  19.43 GFLOPS | Progress: (20/20) | 14.93 s Done.
-
    [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  8/25]  Current/Best:   15.52/  15.52 GFLOPS | Progress: (4/20) | 8.82 s
    [Task  8/25]  Current/Best:   12.56/  15.52 GFLOPS | Progress: (8/20) | 17.80 s
    [Task  8/25]  Current/Best:    4.82/  17.50 GFLOPS | Progress: (12/20) | 20.30 s
    [Task  8/25]  Current/Best:    6.91/  19.16 GFLOPS | Progress: (16/20) | 27.74 s
    [Task  8/25]  Current/Best:    2.88/  19.16 GFLOPS | Progress: (20/20) | 32.82 s Done.
-
    [Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  9/25]  Current/Best:   14.47/  14.47 GFLOPS | Progress: (4/20) | 7.00 s
    [Task  9/25]  Current/Best:   15.08/  16.10 GFLOPS | Progress: (8/20) | 8.79 s
    [Task  9/25]  Current/Best:   17.67/  19.05 GFLOPS | Progress: (12/20) | 10.83 s
    [Task  9/25]  Current/Best:    6.10/  19.05 GFLOPS | Progress: (16/20) | 12.96 s
    [Task  9/25]  Current/Best:    8.57/  19.05 GFLOPS | Progress: (20/20) | 21.04 s Done.
-
    [Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 10/25]  Current/Best:   15.17/  15.17 GFLOPS | Progress: (4/20) | 5.05 s
    [Task 10/25]  Current/Best:   12.42/  15.17 GFLOPS | Progress: (8/20) | 6.98 s
    [Task 10/25]  Current/Best:   14.69/  15.17 GFLOPS | Progress: (12/20) | 8.96 s
    [Task 10/25]  Current/Best:   15.60/  17.13 GFLOPS | Progress: (16/20) | 11.07 s
    [Task 10/25]  Current/Best:   18.09/  18.09 GFLOPS | Progress: (20/20) | 15.24 s Done.
-
    [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 11/25]  Current/Best:   22.61/  22.61 GFLOPS | Progress: (4/20) | 4.93 s
    [Task 11/25]  Current/Best:   23.38/  23.38 GFLOPS | Progress: (8/20) | 7.15 s
    [Task 11/25]  Current/Best:   10.12/  23.38 GFLOPS | Progress: (12/20) | 11.18 s
    [Task 11/25]  Current/Best:   12.07/  23.38 GFLOPS | Progress: (16/20) | 13.93 s
    [Task 11/25]  Current/Best:   11.34/  23.38 GFLOPS | Progress: (20/20) | 16.82 s Done.
-
    [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 12/25]  Current/Best:   16.24/  16.24 GFLOPS | Progress: (4/20) | 6.44 s
    [Task 12/25]  Current/Best:    9.74/  16.24 GFLOPS | Progress: (8/20) | 10.83 s
    [Task 12/25]  Current/Best:   14.39/  16.24 GFLOPS | Progress: (12/20) | 17.03 s
    [Task 12/25]  Current/Best:   17.01/  17.01 GFLOPS | Progress: (16/20) | 20.04 s
    [Task 12/25]  Current/Best:   16.35/  17.21 GFLOPS | Progress: (20/20) | 22.24 s Done.
-
    [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 13/25]  Current/Best:   14.94/  18.24 GFLOPS | Progress: (4/20) | 5.48 s
    [Task 13/25]  Current/Best:    9.41/  18.24 GFLOPS | Progress: (8/20) | 8.78 s
    [Task 13/25]  Current/Best:   20.74/  20.74 GFLOPS | Progress: (12/20) | 12.99 s
    [Task 13/25]  Current/Best:    3.09/  20.74 GFLOPS | Progress: (16/20) | 16.21 s
    [Task 13/25]  Current/Best:   18.82/  20.74 GFLOPS | Progress: (20/20) | 21.02 s Done.
-
    [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 14/25]  Current/Best:   10.27/  20.55 GFLOPS | Progress: (4/20) | 10.80 s
    [Task 14/25]  Current/Best:    7.69/  20.55 GFLOPS | Progress: (8/20) | 17.64 s
    [Task 14/25]  Current/Best:    9.37/  20.55 GFLOPS | Progress: (12/20) | 21.36 s
    [Task 14/25]  Current/Best:   14.93/  20.70 GFLOPS | Progress: (16/20) | 23.44 s
    [Task 14/25]  Current/Best:    7.81/  23.24 GFLOPS | Progress: (20/20) | 34.93 s
    [Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
-
    [Task 15/25]  Current/Best:    7.64/  22.67 GFLOPS | Progress: (4/20) | 5.75 s
    [Task 15/25]  Current/Best:    6.73/  22.67 GFLOPS | Progress: (8/20) | 16.90 s
    [Task 15/25]  Current/Best:    7.94/  22.67 GFLOPS | Progress: (12/20) | 26.86 s
    [Task 15/25]  Current/Best:    6.56/  22.67 GFLOPS | Progress: (16/20) | 29.17 s
    [Task 15/25]  Current/Best:   18.87/  22.67 GFLOPS | Progress: (20/20) | 32.40 s
    [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 16/25]  Current/Best:   10.91/  18.52 GFLOPS | Progress: (4/20) | 5.47 s
    [Task 16/25]  Current/Best:    5.81/  18.52 GFLOPS | Progress: (8/20) | 7.72 s
    [Task 16/25]  Current/Best:    9.26/  19.24 GFLOPS | Progress: (12/20) | 11.32 s
    [Task 16/25]  Current/Best:   15.05/  19.24 GFLOPS | Progress: (16/20) | 13.29 s
    [Task 16/25]  Current/Best:   19.80/  19.80 GFLOPS | Progress: (20/20) | 16.48 s Done.
-
    [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 17/25]  Current/Best:   23.04/  23.04 GFLOPS | Progress: (4/20) | 5.30 s
    [Task 17/25]  Current/Best:   12.25/  23.04 GFLOPS | Progress: (8/20) | 8.07 s
    [Task 17/25]  Current/Best:   11.49/  23.04 GFLOPS | Progress: (12/20) | 11.05 s
    [Task 17/25]  Current/Best:   20.25/  23.04 GFLOPS | Progress: (16/20) | 13.83 s
    [Task 17/25]  Current/Best:   14.45/  23.04 GFLOPS | Progress: (20/20) | 16.67 s Done.
-
    [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 18/25]  Current/Best:    9.12/  14.76 GFLOPS | Progress: (4/20) | 6.71 s
    [Task 18/25]  Current/Best:   12.21/  14.76 GFLOPS | Progress: (8/20) | 11.50 s
    [Task 18/25]  Current/Best:   18.14/  18.14 GFLOPS | Progress: (12/20) | 13.93 s
    [Task 18/25]  Current/Best:   19.33/  19.33 GFLOPS | Progress: (16/20) | 18.08 s
    [Task 18/25]  Current/Best:   18.26/  19.33 GFLOPS | Progress: (20/20) | 21.25 s Done.
-
    [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 19/25]  Current/Best:   16.04/  20.00 GFLOPS | Progress: (4/20) | 5.58 s
    [Task 19/25]  Current/Best:   13.24/  20.00 GFLOPS | Progress: (8/20) | 8.56 s
    [Task 19/25]  Current/Best:    8.88/  20.00 GFLOPS | Progress: (12/20) | 11.70 s
    [Task 19/25]  Current/Best:    3.08/  20.10 GFLOPS | Progress: (16/20) | 16.74 s
    [Task 19/25]  Current/Best:    7.42/  20.10 GFLOPS | Progress: (20/20) | 22.47 s Done.
-
    [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 20/25]  Current/Best:   17.21/  17.21 GFLOPS | Progress: (4/20) | 7.27 s
    [Task 20/25]  Current/Best:    9.30/  17.21 GFLOPS | Progress: (8/20) | 10.26 s
    [Task 20/25]  Current/Best:    4.75/  19.82 GFLOPS | Progress: (12/20) | 22.60 s
    [Task 20/25]  Current/Best:   18.35/  19.82 GFLOPS | Progress: (16/20) | 27.21 s
    [Task 20/25]  Current/Best:   12.26/  19.82 GFLOPS | Progress: (20/20) | 30.28 s Done.
-
    [Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 21/25]  Current/Best:   21.05/  21.05 GFLOPS | Progress: (4/20) | 4.85 s
    [Task 21/25]  Current/Best:   17.59/  21.40 GFLOPS | Progress: (8/20) | 6.40 s
    [Task 21/25]  Current/Best:   13.88/  21.40 GFLOPS | Progress: (12/20) | 14.28 s
    [Task 21/25]  Current/Best:   17.14/  21.40 GFLOPS | Progress: (16/20) | 20.21 s
    [Task 21/25]  Current/Best:    5.56/  21.40 GFLOPS | Progress: (20/20) | 26.21 s Done.
-
    [Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 22/25]  Current/Best:   18.44/  21.60 GFLOPS | Progress: (4/20) | 5.08 s
    [Task 22/25]  Current/Best:   12.29/  21.60 GFLOPS | Progress: (8/20) | 7.11 s
    [Task 22/25]  Current/Best:   10.60/  21.60 GFLOPS | Progress: (12/20) | 9.12 s
    [Task 22/25]  Current/Best:   12.20/  21.60 GFLOPS | Progress: (16/20) | 11.44 s
    [Task 22/25]  Current/Best:   18.01/  21.60 GFLOPS | Progress: (20/20) | 13.07 s Done.
-
    [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 23/25]  Current/Best:   19.79/  19.79 GFLOPS | Progress: (4/20) | 5.61 s
    [Task 23/25]  Current/Best:   11.36/  21.79 GFLOPS | Progress: (8/20) | 9.25 s
    [Task 23/25]  Current/Best:   12.07/  21.79 GFLOPS | Progress: (12/20) | 11.69 s
    [Task 23/25]  Current/Best:    3.09/  21.79 GFLOPS | Progress: (16/20) | 17.68 s
    [Task 23/25]  Current/Best:   12.07/  21.79 GFLOPS | Progress: (20/20) | 21.71 s Done.
-
    [Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 24/25]  Current/Best:    1.81/   7.38 GFLOPS | Progress: (4/20) | 13.83 s
    [Task 24/25]  Current/Best:    3.19/   7.38 GFLOPS | Progress: (8/20) | 22.71 s
    [Task 24/25]  Current/Best:    3.39/   8.83 GFLOPS | Progress: (12/20) | 25.30 s
    [Task 24/25]  Current/Best:    0.59/   8.83 GFLOPS | Progress: (16/20) | 31.91 s
    [Task 24/25]  Current/Best:    3.57/   8.83 GFLOPS | Progress: (20/20) | 42.94 s
    [Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 25/25]  Current/Best:    7.16/   7.16 GFLOPS | Progress: (4/20) | 13.89 s
    [Task 25/25]  Current/Best:    3.27/   9.16 GFLOPS | Progress: (8/20) | 17.17 s
    [Task 25/25]  Current/Best:    1.55/   9.16 GFLOPS | Progress: (12/20) | 20.12 s
    [Task 25/25]  Current/Best:    7.91/   9.16 GFLOPS | Progress: (16/20) | 28.68 s Done.
+
    [Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  1/25]  Current/Best:   18.26/  18.26 GFLOPS | Progress: (4/20) | 9.58 s
    [Task  1/25]  Current/Best:   12.89/  20.03 GFLOPS | Progress: (8/20) | 13.84 s
    [Task  1/25]  Current/Best:    8.29/  20.03 GFLOPS | Progress: (12/20) | 20.23 s
    [Task  1/25]  Current/Best:   19.75/  20.81 GFLOPS | Progress: (16/20) | 22.89 s
    [Task  1/25]  Current/Best:    3.34/  20.81 GFLOPS | Progress: (20/20) | 26.05 s Done.
+
    [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  2/25]  Current/Best:    6.26/  13.66 GFLOPS | Progress: (4/20) | 4.37 s
    [Task  2/25]  Current/Best:   15.07/  15.07 GFLOPS | Progress: (8/20) | 5.95 s
    [Task  2/25]  Current/Best:   10.40/  18.29 GFLOPS | Progress: (12/20) | 7.56 s
    [Task  2/25]  Current/Best:   20.14/  20.30 GFLOPS | Progress: (16/20) | 9.33 s
    [Task  2/25]  Current/Best:    8.01/  20.30 GFLOPS | Progress: (20/20) | 11.09 s Done.
+
    [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  3/25]  Current/Best:    6.21/  19.28 GFLOPS | Progress: (4/20) | 5.37 s
    [Task  3/25]  Current/Best:    6.94/  19.51 GFLOPS | Progress: (8/20) | 7.90 s
    [Task  3/25]  Current/Best:    6.07/  19.51 GFLOPS | Progress: (12/20) | 10.13 s
    [Task  3/25]  Current/Best:    6.40/  19.51 GFLOPS | Progress: (16/20) | 13.05 s
    [Task  3/25]  Current/Best:    1.62/  19.51 GFLOPS | Progress: (20/20) | 16.84 s Done.
+
    [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  4/25]  Current/Best:   12.15/  14.46 GFLOPS | Progress: (4/20) | 5.08 s
    [Task  4/25]  Current/Best:   14.99/  15.29 GFLOPS | Progress: (8/20) | 7.76 s
    [Task  4/25]  Current/Best:   11.57/  15.75 GFLOPS | Progress: (12/20) | 10.22 s
    [Task  4/25]  Current/Best:    2.21/  15.75 GFLOPS | Progress: (16/20) | 13.33 s
    [Task  4/25]  Current/Best:   20.38/  20.38 GFLOPS | Progress: (20/20) | 16.49 s Done.
+
    [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  5/25]  Current/Best:   16.66/  22.27 GFLOPS | Progress: (4/20) | 4.64 s
    [Task  5/25]  Current/Best:   12.07/  22.27 GFLOPS | Progress: (8/20) | 7.33 s
    [Task  5/25]  Current/Best:   12.35/  22.27 GFLOPS | Progress: (12/20) | 9.80 s
    [Task  5/25]  Current/Best:   13.87/  22.27 GFLOPS | Progress: (16/20) | 11.52 s
    [Task  5/25]  Current/Best:    5.90/  22.27 GFLOPS | Progress: (20/20) | 13.43 s Done.
+
    [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  6/25]  Current/Best:    5.48/  16.09 GFLOPS | Progress: (4/20) | 5.20 s
    [Task  6/25]  Current/Best:    3.08/  16.09 GFLOPS | Progress: (8/20) | 8.20 s
    [Task  6/25]  Current/Best:   13.57/  16.09 GFLOPS | Progress: (12/20) | 10.90 s
    [Task  6/25]  Current/Best:   12.37/  22.47 GFLOPS | Progress: (16/20) | 13.16 s
    [Task  6/25]  Current/Best:    7.62/  22.47 GFLOPS | Progress: (20/20) | 16.26 s Done.
+
    [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  7/25]  Current/Best:   21.68/  21.68 GFLOPS | Progress: (4/20) | 4.99 s
    [Task  7/25]  Current/Best:   12.09/  21.68 GFLOPS | Progress: (8/20) | 8.56 s
    [Task  7/25]  Current/Best:   21.28/  21.68 GFLOPS | Progress: (12/20) | 10.69 s
    [Task  7/25]  Current/Best:    6.25/  21.68 GFLOPS | Progress: (16/20) | 13.48 s
    [Task  7/25]  Current/Best:   11.21/  21.68 GFLOPS | Progress: (20/20) | 16.47 s Done.
+
    [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  8/25]  Current/Best:   14.80/  19.32 GFLOPS | Progress: (4/20) | 5.06 s
    [Task  8/25]  Current/Best:   11.09/  19.32 GFLOPS | Progress: (8/20) | 16.59 s
    [Task  8/25]  Current/Best:   14.83/  19.32 GFLOPS | Progress: (12/20) | 20.71 s
    [Task  8/25]  Current/Best:    9.54/  19.32 GFLOPS | Progress: (16/20) | 32.18 s
    [Task  8/25]  Current/Best:   12.13/  19.32 GFLOPS | Progress: (20/20) | 36.70 s Done.
+
    [Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  9/25]  Current/Best:   12.39/  12.39 GFLOPS | Progress: (4/20) | 5.22 s
    [Task  9/25]  Current/Best:   13.99/  20.10 GFLOPS | Progress: (8/20) | 7.84 s
    [Task  9/25]  Current/Best:    7.55/  20.10 GFLOPS | Progress: (12/20) | 16.25 s
    [Task  9/25]  Current/Best:   13.44/  20.10 GFLOPS | Progress: (16/20) | 23.48 s
    [Task  9/25]  Current/Best:    6.51/  20.10 GFLOPS | Progress: (20/20) | 29.93 s Done.
+
    [Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 10/25]  Current/Best:   12.59/  14.82 GFLOPS | Progress: (4/20) | 5.15 s
    [Task 10/25]  Current/Best:   21.69/  21.69 GFLOPS | Progress: (8/20) | 7.09 s
    [Task 10/25]  Current/Best:    9.62/  21.69 GFLOPS | Progress: (12/20) | 8.78 s
    [Task 10/25]  Current/Best:   10.59/  21.69 GFLOPS | Progress: (16/20) | 11.37 s
    [Task 10/25]  Current/Best:   13.54/  21.69 GFLOPS | Progress: (20/20) | 14.12 s Done.
+
    [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 11/25]  Current/Best:   12.43/  21.94 GFLOPS | Progress: (4/20) | 5.11 s
    [Task 11/25]  Current/Best:   16.99/  21.94 GFLOPS | Progress: (8/20) | 7.43 s
    [Task 11/25]  Current/Best:    6.20/  21.94 GFLOPS | Progress: (12/20) | 9.89 s
    [Task 11/25]  Current/Best:   18.33/  21.94 GFLOPS | Progress: (16/20) | 13.16 s
    [Task 11/25]  Current/Best:   12.09/  21.94 GFLOPS | Progress: (20/20) | 16.26 s Done.
+
    [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 12/25]  Current/Best:    7.94/  15.88 GFLOPS | Progress: (4/20) | 7.39 s
    [Task 12/25]  Current/Best:   11.97/  18.16 GFLOPS | Progress: (8/20) | 10.17 s
    [Task 12/25]  Current/Best:   10.86/  20.00 GFLOPS | Progress: (12/20) | 13.67 s
    [Task 12/25]  Current/Best:    5.77/  20.00 GFLOPS | Progress: (16/20) | 16.38 s
    [Task 12/25]  Current/Best:   15.28/  20.00 GFLOPS | Progress: (20/20) | 18.51 s Done.
+
    [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 13/25]  Current/Best:    9.90/   9.90 GFLOPS | Progress: (4/20) | 6.13 s
    [Task 13/25]  Current/Best:   19.52/  19.52 GFLOPS | Progress: (8/20) | 8.65 s
    [Task 13/25]  Current/Best:    6.43/  19.52 GFLOPS | Progress: (12/20) | 11.38 s
    [Task 13/25]  Current/Best:    8.74/  19.52 GFLOPS | Progress: (16/20) | 13.87 s
    [Task 13/25]  Current/Best:   10.83/  19.52 GFLOPS | Progress: (20/20) | 17.56 s Done.
+
    [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 14/25]  Current/Best:    9.60/  10.07 GFLOPS | Progress: (4/20) | 7.23 s
    [Task 14/25]  Current/Best:   14.96/  18.92 GFLOPS | Progress: (8/20) | 9.51 s
    [Task 14/25]  Current/Best:   16.04/  18.92 GFLOPS | Progress: (12/20) | 15.40 s
    [Task 14/25]  Current/Best:   17.19/  18.92 GFLOPS | Progress: (16/20) | 18.72 s
    [Task 14/25]  Current/Best:   10.15/  18.92 GFLOPS | Progress: (20/20) | 24.54 s Done.
+
    [Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 15/25]  Current/Best:   22.53/  22.53 GFLOPS | Progress: (4/20) | 13.77 s
    [Task 15/25]  Current/Best:   19.62/  22.53 GFLOPS | Progress: (8/20) | 18.92 s
    [Task 15/25]  Current/Best:   17.45/  22.53 GFLOPS | Progress: (12/20) | 20.52 s
    [Task 15/25]  Current/Best:   17.10/  22.53 GFLOPS | Progress: (16/20) | 22.21 s
    [Task 15/25]  Current/Best:    6.74/  22.53 GFLOPS | Progress: (20/20) | 32.13 s Done.
+
    [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 16/25]  Current/Best:    6.33/  19.16 GFLOPS | Progress: (4/20) | 4.34 s
    [Task 16/25]  Current/Best:   11.85/  19.16 GFLOPS | Progress: (8/20) | 6.41 s
    [Task 16/25]  Current/Best:   18.87/  20.57 GFLOPS | Progress: (12/20) | 8.28 s
    [Task 16/25]  Current/Best:   15.38/  20.57 GFLOPS | Progress: (16/20) | 10.16 s
    [Task 16/25]  Current/Best:   15.90/  22.25 GFLOPS | Progress: (20/20) | 11.65 s Done.
+
    [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 17/25]  Current/Best:    6.36/  24.28 GFLOPS | Progress: (4/20) | 5.32 s
    [Task 17/25]  Current/Best:    3.20/  24.28 GFLOPS | Progress: (8/20) | 10.38 s
    [Task 17/25]  Current/Best:   10.33/  24.28 GFLOPS | Progress: (12/20) | 13.34 s
    [Task 17/25]  Current/Best:   12.86/  24.28 GFLOPS | Progress: (16/20) | 16.00 s
    [Task 17/25]  Current/Best:   17.75/  24.28 GFLOPS | Progress: (20/20) | 18.94 s Done.
+
    [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 18/25]  Current/Best:    9.46/  20.65 GFLOPS | Progress: (4/20) | 5.74 s
    [Task 18/25]  Current/Best:   16.09/  20.65 GFLOPS | Progress: (8/20) | 10.05 s
    [Task 18/25]  Current/Best:   18.44/  20.65 GFLOPS | Progress: (12/20) | 12.51 s
    [Task 18/25]  Current/Best:   16.63/  20.65 GFLOPS | Progress: (16/20) | 17.39 s
    [Task 18/25]  Current/Best:    5.47/  20.65 GFLOPS | Progress: (20/20) | 21.46 s Done.
+
    [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 19/25]  Current/Best:   10.04/  20.05 GFLOPS | Progress: (4/20) | 7.36 s
    [Task 19/25]  Current/Best:   19.10/  22.25 GFLOPS | Progress: (8/20) | 10.50 s
    [Task 19/25]  Current/Best:   22.94/  23.06 GFLOPS | Progress: (12/20) | 13.62 s
    [Task 19/25]  Current/Best:    3.19/  23.06 GFLOPS | Progress: (16/20) | 19.90 s
    [Task 19/25]  Current/Best:    6.62/  23.06 GFLOPS | Progress: (20/20) | 24.44 s Done.
+
    [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 20/25]  Current/Best:    5.18/  22.42 GFLOPS | Progress: (4/20) | 5.19 s
    [Task 20/25]  Current/Best:    6.87/  22.42 GFLOPS | Progress: (8/20) | 12.52 s
    [Task 20/25]  Current/Best:    8.11/  22.42 GFLOPS | Progress: (12/20) | 20.59 s
    [Task 20/25]  Current/Best:   18.30/  22.42 GFLOPS | Progress: (16/20) | 31.70 s
    [Task 20/25]  Current/Best:    7.51/  22.42 GFLOPS | Progress: (20/20) | 36.74 s
    [Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 21/25]  Current/Best:   18.39/  20.89 GFLOPS | Progress: (4/20) | 6.08 s
    [Task 21/25]  Current/Best:   21.32/  21.32 GFLOPS | Progress: (8/20) | 8.02 s
    [Task 21/25]  Current/Best:   14.96/  21.32 GFLOPS | Progress: (12/20) | 11.35 s
    [Task 21/25]  Current/Best:    5.47/  21.32 GFLOPS | Progress: (16/20) | 17.32 s
   [Task 21/25]  Current/Best:    2.81/  21.32 GFLOPS | Progress: (20/20) | 22.24 s Done.
+
    [Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 22/25]  Current/Best:   10.94/  22.33 GFLOPS | Progress: (4/20) | 4.89 s
    [Task 22/25]  Current/Best:    7.54/  22.33 GFLOPS | Progress: (8/20) | 7.99 s
    [Task 22/25]  Current/Best:   13.25/  22.33 GFLOPS | Progress: (12/20) | 10.82 s
    [Task 22/25]  Current/Best:   22.17/  22.33 GFLOPS | Progress: (16/20) | 13.69 s
    [Task 22/25]  Current/Best:   10.77/  22.33 GFLOPS | Progress: (20/20) | 15.84 s Done.
+
    [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 23/25]  Current/Best:    9.79/  17.30 GFLOPS | Progress: (4/20) | 5.72 s
    [Task 23/25]  Current/Best:   24.20/  24.20 GFLOPS | Progress: (8/20) | 8.18 s
    [Task 23/25]  Current/Best:    1.60/  24.20 GFLOPS | Progress: (12/20) | 11.88 s
    [Task 23/25]  Current/Best:   18.06/  24.20 GFLOPS | Progress: (16/20) | 14.77 s
    [Task 23/25]  Current/Best:    9.08/  24.20 GFLOPS | Progress: (20/20) | 18.21 s Done.
+
    [Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 24/25]  Current/Best:    4.20/   5.92 GFLOPS | Progress: (4/20) | 13.37 s
    [Task 24/25]  Current/Best:    8.26/   8.26 GFLOPS | Progress: (8/20) | 18.03 s
    [Task 24/25]  Current/Best:    2.53/   8.26 GFLOPS | Progress: (12/20) | 29.01 s
    [Task 24/25]  Current/Best:    7.42/   8.26 GFLOPS | Progress: (16/20) | 31.58 s
    [Task 24/25]  Current/Best:    7.84/   8.26 GFLOPS | Progress: (20/20) | 37.56 s
    [Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
      Done.
-
    [Task 25/25]  Current/Best:    6.95/   9.65 GFLOPS | Progress: (20/20) | 39.37 s
+
    [Task 25/25]  Current/Best:    8.36/   9.03 GFLOPS | Progress: (4/20) | 13.66 s
    [Task 25/25]  Current/Best:    6.47/   9.87 GFLOPS | Progress: (8/20) | 19.03 s
    [Task 25/25]  Current/Best:    7.65/   9.87 GFLOPS | Progress: (12/20) | 25.25 s
    [Task 25/25]  Current/Best:    5.07/  10.06 GFLOPS | Progress: (16/20) | 36.18 s
    [Task 25/25]  Current/Best:    5.75/  10.06 GFLOPS | Progress: (20/20) | 39.05 s
 
 
 
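The per-task progress bars above are emitted while tuning the 25 convolution tasks extracted from the model. A condensed sketch of that loop; the tuner choice, trial counts, and log file name are illustrative, while ``mod``, ``params`` and ``target`` are assumed from the surrounding tutorial:

.. code-block:: python

    from tvm import autotvm
    from tvm.autotvm.tuner import XGBTuner

    tasks = autotvm.task.extract_from_program(mod["main"], target=target, params=params)

    measure_option = autotvm.measure_option(
        builder=autotvm.LocalBuilder(build_func="default"),
        runner=autotvm.LocalRunner(number=10, repeat=1, min_repeat_ms=0, timeout=10),
    )

    for i, task in enumerate(tasks):
        prefix = "[Task %2d/%2d] " % (i + 1, len(tasks))
        tuner = XGBTuner(task, loss_type="rank")
        tuner.tune(
            n_trial=20,
            early_stopping=100,
            measure_option=measure_option,
            callbacks=[
                autotvm.callback.progress_bar(20, prefix=prefix),   # the bars shown above
                autotvm.callback.log_to_file("tuning-records.json"),  # illustrative file name
            ],
        )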
@@ -708,7 +708,7 @@ Verify that the optimized model runs and produces the same results:
 
  .. code-block:: none
 
-    class='n02123045 tabby, tabby cat' with probability=0.621104
+    class='n02123045 tabby, tabby cat' with probability=0.621105
     class='n02123159 tiger cat' with probability=0.356378
     class='n02124075 Egyptian cat' with probability=0.019712
     class='n02129604 tiger, Panthera tigris' with probability=0.001215
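The class/probability lines above come from post-processing the network output with a softmax and looking the top indices up in the ImageNet synset. A brief sketch, where ``module`` is the graph executor built above and ``labels`` is a placeholder for the synset list loaded earlier:

.. code-block:: python

    import numpy as np
    from scipy.special import softmax

    tvm_output = module.get_output(0).numpy()
    scores = softmax(tvm_output)[0]
    ranks = np.argsort(scores)[::-1]
    for rank in ranks[0:5]:
        print("class='%s' with probability=%f" % (labels[rank], scores[rank]))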
@@ -766,8 +766,8 @@ improvement in comparing the optimized model to the unoptimized model.
 
  .. code-block:: none
 
-    optimized: {'mean': 410.5118951500481, 'median': 410.435267000139, 'std': 1.4656197114556417}
-    unoptimized: {'mean': 494.39252055002726, 'median': 494.9026871498063, 'std': 3.1817584438598923}
+    optimized: {'mean': 382.5639265501013, 'median': 382.085147250109, 'std': 2.9130401392340906}
+    unoptimized: {'mean': 495.16211137008213, 'median': 494.3268438999439, 'std': 3.6706157178015775}
 
 
 
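The ``optimized`` numbers above are obtained by rebuilding the model with the tuning records applied and re-running the same timing loop; roughly, with the log file name standing in for whatever was written during tuning:

.. code-block:: python

    import tvm
    from tvm import autotvm, relay
    from tvm.contrib import graph_executor

    with autotvm.apply_history_best("tuning-records.json"):   # placeholder log name
        with tvm.transform.PassContext(opt_level=3, config={}):
            lib = relay.build(mod, target=target, params=params)

    dev = tvm.device(str(target), 0)
    module = graph_executor.GraphModule(lib["default"](dev))
    # ...then repeat the timeit-based measurement from earlier to produce the
    # "optimized" statistics dictionary.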
@@ -790,7 +790,7 @@ profiling/benchmarking.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 13 minutes  40.901 seconds)
+   **Total running time of the script:** ( 13 minutes  22.489 seconds)
 
 
 .. _sphx_glr_download_tutorial_autotvm_relay_x86.py:
diff --git a/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt b/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
index 84f0ccebe4..93348c1909 100644
--- a/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
+++ b/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
@@ -274,7 +274,7 @@ device and returns the measured cost. Network overhead is excluded.
 
  .. code-block:: none
 
-    1.289e-07 secs/op
+    1.279e-07 secs/op
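
The ``secs/op`` figure comes from timing a module that was cross-compiled locally and loaded on a remote device over RPC. A hedged sketch of that pattern; the host, port, library path, and kernel are placeholders, not values from this run:

.. code-block:: python

    import numpy as np
    import tvm
    from tvm import rpc, te

    # Build a trivial add-one kernel; the target would normally match the remote CPU.
    n = 1024
    A = te.placeholder((n,), name="A")
    B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
    s = te.create_schedule(B.op)
    func = tvm.build(s, [A, B], target="llvm")
    func.export_library("/tmp/lib.tar")

    remote = rpc.connect("127.0.0.1", 9090)  # placeholder host/port
    remote.upload("/tmp/lib.tar")
    rlib = remote.load_module("lib.tar")

    dev = remote.cpu()
    a = tvm.nd.array(np.random.uniform(size=n).astype("float32"), dev)
    b = tvm.nd.array(np.zeros(n, dtype="float32"), dev)
    # time_evaluator runs the kernel on the device several times; the reported
    # cost excludes RPC network overhead.
    time_f = rlib.time_evaluator(rlib.entry_name, dev, number=10)
    print("%g secs/op" % time_f(a, b).mean)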
 
 
 
diff --git a/docs/_sources/tutorial/intro_topi.rst.txt b/docs/_sources/tutorial/intro_topi.rst.txt
index 0a04bbb052..30e07f1bd6 100644
--- a/docs/_sources/tutorial/intro_topi.rst.txt
+++ b/docs/_sources/tutorial/intro_topi.rst.txt
@@ -270,7 +270,7 @@ As you can see, scheduled stages of computation have been accumulated and we can
 
  .. code-block:: none
 
-    [stage(a, placeholder(a, 0xc7ecda0)), stage(b, placeholder(b, 0x12710350)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T.Range(0, 10), "DataPar", ""), T.iter_var(ax2, T.Range(0, 10), "DataPar", "")], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[a[ax0, ax1, ax2] * b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T. [...]
+    [stage(a, placeholder(a, 0xd4e8040)), stage(b, placeholder(b, 0x9c3cc70)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T.Range(0, 10), "DataPar", ""), T.iter_var(ax2, T.Range(0, 10), "DataPar", "")], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[a[ax0, ax1, ax2] * b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T.R [...]
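
A stage list like the one printed above can be produced by scheduling a couple of TOPI broadcast operators together. This sketch only mirrors the iteration ranges shown in the output and is not an excerpt from the tutorial source:

.. code-block:: python

    import tvm
    from tvm import te, topi

    # Broadcast add/multiply of a (100, 10, 10) tensor with a (10, 10) tensor,
    # matching the ranges that appear in the printed stages.
    a = te.placeholder((100, 10, 10), name="a")
    b = te.placeholder((10, 10), name="b")
    t_add = topi.add(a, b)
    t_mul = topi.multiply(a, b)

    # Scheduling both outputs in one schedule accumulates their stages.
    sg = te.create_schedule([t_add.op, t_mul.op])
    print(sg.stages)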
 
 
 
diff --git a/docs/_sources/tutorial/sg_execution_times.rst.txt b/docs/_sources/tutorial/sg_execution_times.rst.txt
index d0d23b19a1..dc4d66cd91 100644
--- a/docs/_sources/tutorial/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorial/sg_execution_times.rst.txt
@@ -5,24 +5,24 @@
 
 Computation times
 =================
-**17:05.228** total execution time for **tutorial** files:
+**16:48.162** total execution time for **tutorial** files:
 
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_autotvm_relay_x86.py` (``autotvm_relay_x86.py``)                 | 13:40.901 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_autotvm_relay_x86.py` (``autotvm_relay_x86.py``)                 | 13:22.489 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_auto_scheduler_matmul_x86.py` (``auto_scheduler_matmul_x86.py``) | 01:14.973 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_auto_scheduler_matmul_x86.py` (``auto_scheduler_matmul_x86.py``) | 01:30.142 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)     | 01:00.605 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)     | 00:58.312 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_relay_quick_start.py` (``relay_quick_start.py``)                 | 00:43.767 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_relay_quick_start.py` (``relay_quick_start.py``)                 | 00:41.329 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_autotvm_matmul_x86.py` (``autotvm_matmul_x86.py``)               | 00:22.816 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_autotvm_matmul_x86.py` (``autotvm_matmul_x86.py``)               | 00:13.947 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_intro_topi.py` (``intro_topi.py``)                               | 00:01.028 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_intro_topi.py` (``intro_topi.py``)                               | 00:00.965 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_tensor_ir_blitz_course.py` (``tensor_ir_blitz_course.py``)       | 00:00.928 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_tensor_ir_blitz_course.py` (``tensor_ir_blitz_course.py``)       | 00:00.818 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``) | 00:00.212 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``) | 00:00.160 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_tutorial_uma.py` (``uma.py``)                                             | 00:00.000 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/tutorial/tensor_expr_get_started.rst.txt b/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
index 2e41e132e1..4dee7bcb2d 100644
--- a/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
+++ b/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
@@ -285,8 +285,8 @@ helper function to run a profile of the TVM generated code.
 
  .. code-block:: none
 
-    Numpy running time: 0.000009
-    naive: 0.000007
+    Numpy running time: 0.000006
+    naive: 0.000006
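
For context on the ``naive`` number, a self-contained sketch of the unscheduled vector add it measures; the size and dtype here are illustrative:

.. code-block:: python

    import numpy as np
    import tvm
    from tvm import te

    n = 32768
    A = te.placeholder((n,), name="A")
    B = te.placeholder((n,), name="B")
    C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")

    s = te.create_schedule(C.op)  # default schedule: a single serial loop
    fadd = tvm.build(s, [A, B, C], target="llvm", name="myadd")

    dev = tvm.cpu(0)
    a = tvm.nd.array(np.random.uniform(size=n).astype("float32"), dev)
    b = tvm.nd.array(np.random.uniform(size=n).astype("float32"), dev)
    c = tvm.nd.array(np.zeros(n, dtype="float32"), dev)
    evaluator = fadd.time_evaluator(fadd.entry_name, dev, number=10)
    print("naive: %f" % evaluator(a, b, c).mean)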
 
 
 
@@ -389,7 +389,7 @@ compile and run this new schedule with the parallel operation applied:
 
  .. code-block:: none
 
-    parallel: 0.000007
+    parallel: 0.000006
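
The ``parallel`` variant changes only the schedule, not the computation. A sketch of that single change, with the definitions repeated so the block stands alone:

.. code-block:: python

    import tvm
    from tvm import te

    n = 32768
    A = te.placeholder((n,), name="A")
    B = te.placeholder((n,), name="B")
    C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")

    s = te.create_schedule(C.op)
    s[C].parallel(C.op.axis[0])  # distribute the single loop across CPU threads
    fadd_parallel = tvm.build(s, [A, B, C], target="llvm", name="myadd_parallel")
    print(tvm.lower(s, [A, B, C], simple_mode=True))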
 
 
 
@@ -444,7 +444,7 @@ factor to be the number of threads on your CPU.
 
  .. code-block:: none
 
-    vector: 0.000039
+    vector: 0.000038
     # from tvm.script import ir as I
     # from tvm.script import tir as T
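
The ``vector`` variant splits the loop and vectorizes the inner part. A stand-alone sketch; the split factor is a placeholder for the CPU's SIMD width:

.. code-block:: python

    import tvm
    from tvm import te

    n = 32768
    A = te.placeholder((n,), name="A")
    B = te.placeholder((n,), name="B")
    C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")

    s = te.create_schedule(C.op)
    outer, inner = s[C].split(C.op.axis[0], factor=4)  # factor ~ SIMD lane count
    s[C].parallel(outer)     # threads over the outer loop
    s[C].vectorize(inner)    # SIMD over the inner loop
    fadd_vector = tvm.build(s, [A, B, C], target="llvm", name="myadd_vector")
    print(tvm.lower(s, [A, B, C], simple_mode=True))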
 
@@ -498,10 +498,10 @@ We can now compare the different schedules
  .. code-block:: none
 
                 Operator                  Timing             Performance
-                   numpy    8.603850001236423e-06                    1.0
-                   naive              6.6914e-06      0.7777216012643651
-                parallel    7.0907000000000006e-06     0.824131057489499
-                  vector    3.9132000000000004e-05      4.54819644628585
+                   numpy    5.648829974234104e-06                    1.0
+                   naive              5.7964e-06       1.026123998498628
+                parallel              6.0328e-06      1.0679733728076948
+                  vector    3.790319999999999e-05      6.709920492011108
 
 
 
@@ -922,7 +922,7 @@ matrix multiplication.
 
  .. code-block:: none
 
-    Numpy running time: 0.018497
+    Numpy running time: 0.015701
 
 
 
@@ -980,7 +980,7 @@ optimizations.
 
  .. code-block:: none
 
-    none: 3.437607
+    none: 3.355358
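
The ``none`` row is the unoptimized matrix multiply. A self-contained sketch of that baseline; the 1024-square shapes are an assumption for illustration:

.. code-block:: python

    import numpy as np
    import tvm
    from tvm import te

    M = K = N = 1024
    k = te.reduce_axis((0, K), "k")
    A = te.placeholder((M, K), name="A")
    B = te.placeholder((K, N), name="B")
    C = te.compute((M, N), lambda x, y: te.sum(A[x, k] * B[k, y], axis=k), name="C")

    s = te.create_schedule(C.op)  # default schedule: three nested serial loops
    func = tvm.build(s, [A, B, C], target="llvm")

    dev = tvm.cpu(0)
    a = tvm.nd.array(np.random.rand(M, K).astype("float32"), dev)
    b = tvm.nd.array(np.random.rand(K, N).astype("float32"), dev)
    c = tvm.nd.array(np.zeros((M, N), dtype="float32"), dev)
    evaluator = func.time_evaluator(func.entry_name, dev, number=10)
    print("none: %f" % evaluator(a, b, c).mean)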
 
 
 
@@ -1080,7 +1080,7 @@ schedule.
 
  .. code-block:: none
 
-    blocking: 0.297377
+    blocking: 0.285455
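
``blocking`` tiles the loop nest so that each block of C stays in cache while the reduction runs. A sketch of that schedule on the same matmul; the block size of 32 and reduction split of 4 are assumptions:

.. code-block:: python

    import tvm
    from tvm import te

    M = K = N = 1024
    bn = 32
    k = te.reduce_axis((0, K), "k")
    A = te.placeholder((M, K), name="A")
    B = te.placeholder((K, N), name="B")
    C = te.compute((M, N), lambda x, y: te.sum(A[x, k] * B[k, y], axis=k), name="C")

    s = te.create_schedule(C.op)
    # Tile the output into bn x bn blocks and split the reduction axis.
    xo, yo, xi, yi = s[C].tile(C.op.axis[0], C.op.axis[1], x_factor=bn, y_factor=bn)
    ko, ki = s[C].split(s[C].op.reduce_axis[0], factor=4)
    s[C].reorder(xo, yo, ko, ki, xi, yi)  # keep the hot bn x bn block in cache
    func = tvm.build(s, [A, B, C], target="llvm")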
 
 
 
@@ -1164,7 +1164,7 @@ already cache friendly from our previous optimizations.
 
  .. code-block:: none
 
-    vectorization: 0.283909
+    vectorization: 0.267830
     # from tvm.script import ir as I
     # from tvm.script import tir as T
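
``vectorization`` adds one primitive on top of the blocked schedule so the innermost block dimension runs as SIMD. A sketch under the same assumptions as the blocking example:

.. code-block:: python

    import tvm
    from tvm import te

    M = K = N = 1024
    bn = 32
    k = te.reduce_axis((0, K), "k")
    A = te.placeholder((M, K), name="A")
    B = te.placeholder((K, N), name="B")
    C = te.compute((M, N), lambda x, y: te.sum(A[x, k] * B[k, y], axis=k), name="C")

    s = te.create_schedule(C.op)
    xo, yo, xi, yi = s[C].tile(C.op.axis[0], C.op.axis[1], x_factor=bn, y_factor=bn)
    ko, ki = s[C].split(s[C].op.reduce_axis[0], factor=4)
    s[C].reorder(xo, yo, ko, ki, xi, yi)
    s[C].vectorize(yi)  # SIMD over the innermost block dimension
    func = tvm.build(s, [A, B, C], target="llvm")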
 
@@ -1230,7 +1230,7 @@ more cache friendly.
 
  .. code-block:: none
 
-    loop permutation: 0.119010
+    loop permutation: 0.111910
     # from tvm.script import ir as I
     # from tvm.script import tir as T
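
``loop permutation`` changes only the loop order, hoisting the block row index above the inner reduction so A is read with a friendlier access pattern. A sketch, again under the same assumed shapes and factors:

.. code-block:: python

    import tvm
    from tvm import te

    M = K = N = 1024
    bn = 32
    k = te.reduce_axis((0, K), "k")
    A = te.placeholder((M, K), name="A")
    B = te.placeholder((K, N), name="B")
    C = te.compute((M, N), lambda x, y: te.sum(A[x, k] * B[k, y], axis=k), name="C")

    s = te.create_schedule(C.op)
    xo, yo, xi, yi = s[C].tile(C.op.axis[0], C.op.axis[1], x_factor=bn, y_factor=bn)
    ko, ki = s[C].split(s[C].op.reduce_axis[0], factor=4)
    s[C].reorder(xo, yo, ko, xi, ki, yi)  # xi moved above ki compared to plain blocking
    s[C].vectorize(yi)
    func = tvm.build(s, [A, B, C], target="llvm")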
 
@@ -1321,7 +1321,7 @@ optimized schedule.
 
  .. code-block:: none
 
-    array packing: 0.107144
+    array packing: 0.100887
     # from tvm.script import ir as I
     # from tvm.script import tir as T
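
``array packing`` pre-transforms B into a block-major layout so the inner loop reads it contiguously. A sketch with an assumed packing width of bn = 32:

.. code-block:: python

    import tvm
    from tvm import te

    M = K = N = 1024
    bn = 32
    A = te.placeholder((M, K), name="A")
    B = te.placeholder((K, N), name="B")

    # Repack B into (N // bn, K, bn) so consecutive y values hit consecutive memory.
    packedB = te.compute(
        (N // bn, K, bn), lambda bigN, kk, littleN: B[kk, bigN * bn + littleN], name="packedB"
    )
    k = te.reduce_axis((0, K), "k")
    C = te.compute(
        (M, N),
        lambda x, y: te.sum(
            A[x, k] * packedB[tvm.tir.indexdiv(y, bn), k, tvm.tir.indexmod(y, bn)], axis=k
        ),
        name="C",
    )

    s = te.create_schedule(C.op)
    xo, yo, xi, yi = s[C].tile(C.op.axis[0], C.op.axis[1], x_factor=bn, y_factor=bn)
    ko, ki = s[C].split(s[C].op.reduce_axis[0], factor=4)
    s[C].reorder(xo, yo, ko, xi, ki, yi)
    s[C].vectorize(yi)

    bigN, _, littleN = s[packedB].op.axis
    s[packedB].vectorize(littleN)
    s[packedB].parallel(bigN)
    func = tvm.build(s, [A, B, C], target="llvm")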
 
@@ -1404,7 +1404,7 @@ to `C` when all the block results are ready.
 
  .. code-block:: none
 
-    block caching: 0.111754
+    block caching: 0.096841
     # from tvm.script import ir as I
     # from tvm.script import tir as T
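
``block caching`` accumulates each bn x bn result in a local write cache and flushes it to C once per block. A simplified sketch that shows only the ``cache_write`` part, without the array packing combined with it in the tutorial:

.. code-block:: python

    import tvm
    from tvm import te

    M = K = N = 1024
    bn = 32
    k = te.reduce_axis((0, K), "k")
    A = te.placeholder((M, K), name="A")
    B = te.placeholder((K, N), name="B")
    C = te.compute((M, N), lambda x, y: te.sum(A[x, k] * B[k, y], axis=k), name="C")

    s = te.create_schedule(C.op)
    CC = s.cache_write(C, "global")  # local accumulation buffer for each block

    xo, yo, xi, yi = s[C].tile(C.op.axis[0], C.op.axis[1], x_factor=bn, y_factor=bn)
    s[CC].compute_at(s[C], yo)       # compute the cached block inside the (xo, yo) tile

    xc, yc = s[CC].op.axis
    ko, ki = s[CC].split(CC.op.reduce_axis[0], factor=4)
    s[CC].reorder(ko, xc, ki, yc)
    s[CC].vectorize(yc)
    func = tvm.build(s, [A, B, C], target="llvm")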
 
@@ -1478,7 +1478,7 @@ of thread-level parallelization.
 
  .. code-block:: none
 
-    parallelization: 0.132169
+    parallelization: 0.114638
     # from tvm.script import ir as I
     # from tvm.script import tir as T
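
``parallelization`` is the final one-line addition: mark the outer blocked loop as parallel. Repeating the previous sketch so the block stands alone:

.. code-block:: python

    import tvm
    from tvm import te

    M = K = N = 1024
    bn = 32
    k = te.reduce_axis((0, K), "k")
    A = te.placeholder((M, K), name="A")
    B = te.placeholder((K, N), name="B")
    C = te.compute((M, N), lambda x, y: te.sum(A[x, k] * B[k, y], axis=k), name="C")

    s = te.create_schedule(C.op)
    CC = s.cache_write(C, "global")
    xo, yo, xi, yi = s[C].tile(C.op.axis[0], C.op.axis[1], x_factor=bn, y_factor=bn)
    s[CC].compute_at(s[C], yo)
    xc, yc = s[CC].op.axis
    ko, ki = s[CC].split(CC.op.reduce_axis[0], factor=4)
    s[CC].reorder(ko, xc, ki, yc)
    s[CC].vectorize(yc)

    s[C].parallel(xo)  # thread-level parallelism over the outer blocks
    func = tvm.build(s, [A, B, C], target="llvm")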
 
@@ -1548,13 +1548,13 @@ working, we can compare the results.
  .. code-block:: none
 
                 Operator                  Timing             Performance
-                    none            3.4376067333                     1.0
-                blocking     0.29737730780000005     0.08650707625142644
-           vectorization     0.28390941830000005     0.08258926640728781
-        loop permutation            0.1190097843     0.03461995322127909
-           array packing     0.10714365569999999    0.031168095716738745
-           block caching            0.1117543183    0.032509337737048005
-         parallelization            0.1321686653    0.038447872474674266
+                    none            3.3553575343                     1.0
+                blocking            0.2854547764     0.08507432471262769
+           vectorization            0.2678296384      0.0798214901577918
+        loop permutation     0.11190976379999999     0.03335256009412028
+           array packing     0.10088720389999999    0.030067497388485378
+           block caching            0.0968410304    0.028861612930975812
+         parallelization            0.1146384096     0.03416578067407534
 
 
 
@@ -1594,11 +1594,6 @@ operations with tunable parameters that allows you to automatically optimize
 the computation for specific platforms.
 
 
-.. rst-class:: sphx-glr-timing
-
-   **Total running time of the script:** ( 1 minutes  0.605 seconds)
-
-
 .. _sphx_glr_download_tutorial_tensor_expr_get_started.py:
 
 .. only:: html
diff --git a/docs/api/rust/help.html b/docs/api/rust/help.html
index 8d27f9b944..5e374fb45f 100644
--- a/docs/api/rust/help.html
+++ b/docs/api/rust/help.html
@@ -1 +1 @@
-<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><meta name="viewport" content="width=device-width, initial-scale=1.0"><meta name="generator" content="rustdoc"><meta name="description" content="Documentation for Rustdoc"><title>Rustdoc help</title><link rel="preload" as="font" type="font/woff2" crossorigin href="./static.files/SourceSerif4-Regular-46f98efaafac5295.ttf.woff2"><link rel="preload" as="font" type="font/woff2" crossorigin href="./static.files/FiraSans-Regular-018c14 [...]
\ No newline at end of file
+<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><meta name="viewport" content="width=device-width, initial-scale=1.0"><meta name="generator" content="rustdoc"><meta name="description" content="Documentation for Rustdoc"><title>Rustdoc help</title><link rel="preload" as="font" type="font/woff2" crossorigin href="./static.files/SourceSerif4-Regular-46f98efaafac5295.ttf.woff2"><link rel="preload" as="font" type="font/woff2" crossorigin href="./static.files/FiraSans-Regular-018c14 [...]
\ No newline at end of file
diff --git a/docs/api/rust/settings.html b/docs/api/rust/settings.html
index ab27dc3ccb..e3e401110e 100644
--- a/docs/api/rust/settings.html
+++ b/docs/api/rust/settings.html
@@ -1 +1 @@
-<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><meta name="viewport" content="width=device-width, initial-scale=1.0"><meta name="generator" content="rustdoc"><meta name="description" content="Settings of Rustdoc"><title>Rustdoc settings</title><link rel="preload" as="font" type="font/woff2" crossorigin href="./static.files/SourceSerif4-Regular-46f98efaafac5295.ttf.woff2"><link rel="preload" as="font" type="font/woff2" crossorigin href="./static.files/FiraSans-Regular-018c141b [...]
\ No newline at end of file
+<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><meta name="viewport" content="width=device-width, initial-scale=1.0"><meta name="generator" content="rustdoc"><meta name="description" content="Settings of Rustdoc"><title>Rustdoc settings</title><link rel="preload" as="font" type="font/woff2" crossorigin href="./static.files/SourceSerif4-Regular-46f98efaafac5295.ttf.woff2"><link rel="preload" as="font" type="font/woff2" crossorigin href="./static.files/FiraSans-Regular-018c141b [...]
\ No newline at end of file
diff --git a/docs/commit_hash b/docs/commit_hash
index 88ce546ed9..4e22adf8d9 100644
--- a/docs/commit_hash
+++ b/docs/commit_hash
@@ -1 +1 @@
-7ad71e622a7965be5a31c8ea09bf46d7c1e86c5c
+6c63e0db53a9fee57b22b53941a8e634c1c90227
diff --git a/docs/how_to/compile_models/from_darknet.html b/docs/how_to/compile_models/from_darknet.html
index 278123aaa1..2abff42e5b 100644
--- a/docs/how_to/compile_models/from_darknet.html
+++ b/docs/how_to/compile_models/from_darknet.html
@@ -595,7 +595,7 @@ class:[&#39;truck 0.9266&#39;] left:471 top:83 right:689 bottom:169
 class:[&#39;bicycle 0.9984&#39;] left:111 top:113 right:577 bottom:447
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  34.000 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  31.935 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-darknet-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7716f96385bd5abb6e822041e285be54/from_darknet.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_darknet.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/from_mxnet.html b/docs/how_to/compile_models/from_mxnet.html
index 9fefa497f0..b3a41bc67f 100644
--- a/docs/how_to/compile_models/from_mxnet.html
+++ b/docs/how_to/compile_models/from_mxnet.html
@@ -449,7 +449,7 @@
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;x&quot;</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.html#tuple" title="builtins.tuple" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">x</span><span class="o">.</span><span class="n">shape</span></a><span class="p">)</span>
 </pre></div>
 </div>
-<img src="../../_images/sphx_glr_from_mxnet_001.png" srcset="../../_images/sphx_glr_from_mxnet_001.png" alt="from mxnet" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip2b33c744-77bf-440e-8f69-394f9aac9175 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
+<img src="../../_images/sphx_glr_from_mxnet_001.png" srcset="../../_images/sphx_glr_from_mxnet_001.png" alt="from mxnet" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip3c17ccdc-7b05-4710-bc61-77fca5fb5fb0 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
 x (1, 3, 224, 224)
 </pre></div>
 </div>
diff --git a/docs/how_to/compile_models/from_oneflow.html b/docs/how_to/compile_models/from_oneflow.html
index 0ae904e85d..d9b8c73fcd 100644
--- a/docs/how_to/compile_models/from_oneflow.html
+++ b/docs/how_to/compile_models/from_oneflow.html
@@ -459,13 +459,15 @@ Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdo
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading: &quot;https://oneflow-public.oss-cn-beijing.aliyuncs.com/model_zoo/flowvision/classification/ResNet/resnet18.zip&quot; to /workspace/.oneflow/flowvision_cache/resnet18.zip
 
   0%|          | 0.00/41.5M [00:00&lt;?, ?B/s]
- 19%|#9        | 7.99M/41.5M [00:00&lt;00:00, 52.2MB/s]
- 35%|###4      | 14.3M/41.5M [00:00&lt;00:00, 44.8MB/s]
- 45%|####4     | 18.6M/41.5M [00:00&lt;00:00, 42.7MB/s]
- 58%|#####7    | 24.0M/41.5M [00:00&lt;00:00, 32.8MB/s]
- 77%|#######7  | 32.0M/41.5M [00:00&lt;00:00, 36.9MB/s]
- 92%|#########2| 38.3M/41.5M [00:01&lt;00:00, 32.1MB/s]
-100%|##########| 41.5M/41.5M [00:01&lt;00:00, 36.0MB/s]
+ 16%|#5        | 6.58M/41.5M [00:00&lt;00:00, 69.0MB/s]
+ 32%|###1      | 13.2M/41.5M [00:00&lt;00:00, 57.4MB/s]
+ 45%|####5     | 18.8M/41.5M [00:00&lt;00:00, 30.9MB/s]
+ 54%|#####4    | 22.6M/41.5M [00:00&lt;00:00, 28.6MB/s]
+ 62%|######2   | 25.9M/41.5M [00:00&lt;00:00, 26.0MB/s]
+ 77%|#######7  | 32.0M/41.5M [00:01&lt;00:00, 29.4MB/s]
+ 92%|#########2| 38.3M/41.5M [00:01&lt;00:00, 31.5MB/s]
+100%|#########9| 41.5M/41.5M [00:01&lt;00:00, 25.9MB/s]
+100%|##########| 41.5M/41.5M [00:01&lt;00:00, 29.9MB/s]
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/compile_models/from_paddle.html b/docs/how_to/compile_models/from_paddle.html
index 3d6fdb9704..da7ca11c9e 100644
--- a/docs/how_to/compile_models/from_paddle.html
+++ b/docs/how_to/compile_models/from_paddle.html
@@ -494,7 +494,7 @@ To begin, we’ll install PaddlePaddle&gt;=2.1.3:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>TVM prediction top-1 id: 282, class name:  282: &#39;tiger cat&#39;,
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  3.873 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  0.979 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-paddle-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/16269b77359771348d507395692524cf/from_paddle.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_paddle.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/from_pytorch.html b/docs/how_to/compile_models/from_pytorch.html
index e49a27048c..26cfdbf850 100644
--- a/docs/how_to/compile_models/from_pytorch.html
+++ b/docs/how_to/compile_models/from_pytorch.html
@@ -442,15 +442,15 @@ be unstable.</p>
 Downloading: &quot;https://download.pytorch.org/models/resnet18-f37072fd.pth&quot; to /workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
 
   0%|          | 0.00/44.7M [00:00&lt;?, ?B/s]
- 15%|#5        | 6.72M/44.7M [00:00&lt;00:00, 70.5MB/s]
- 30%|###       | 13.4M/44.7M [00:00&lt;00:01, 28.8MB/s]
- 39%|###8      | 17.4M/44.7M [00:00&lt;00:01, 24.4MB/s]
- 54%|#####3    | 24.0M/44.7M [00:00&lt;00:00, 29.3MB/s]
- 68%|######7   | 30.3M/44.7M [00:01&lt;00:00, 29.1MB/s]
- 75%|#######4  | 33.4M/44.7M [00:01&lt;00:00, 27.2MB/s]
- 86%|########5 | 38.3M/44.7M [00:01&lt;00:00, 28.7MB/s]
- 92%|#########2| 41.2M/44.7M [00:01&lt;00:00, 27.4MB/s]
-100%|##########| 44.7M/44.7M [00:01&lt;00:00, 30.8MB/s]
+ 14%|#4        | 6.30M/44.7M [00:00&lt;00:01, 33.2MB/s]
+ 21%|##1       | 9.48M/44.7M [00:00&lt;00:01, 25.4MB/s]
+ 36%|###5      | 16.0M/44.7M [00:00&lt;00:00, 30.4MB/s]
+ 54%|#####3    | 24.0M/44.7M [00:00&lt;00:00, 38.6MB/s]
+ 62%|######2   | 27.8M/44.7M [00:00&lt;00:00, 38.7MB/s]
+ 72%|#######1  | 32.0M/44.7M [00:01&lt;00:00, 32.6MB/s]
+ 86%|########5 | 38.3M/44.7M [00:01&lt;00:00, 31.8MB/s]
+ 93%|#########2| 41.4M/44.7M [00:01&lt;00:00, 26.9MB/s]
+100%|##########| 44.7M/44.7M [00:01&lt;00:00, 32.7MB/s]
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/compile_models/from_tensorflow.html b/docs/how_to/compile_models/from_tensorflow.html
index 0275622ec0..b0ceb0d560 100644
--- a/docs/how_to/compile_models/from_tensorflow.html
+++ b/docs/how_to/compile_models/from_tensorflow.html
@@ -662,7 +662,7 @@ banana (score = 0.00022)
 desk (score = 0.00019)
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  37.992 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  34.035 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-tensorflow-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7f1d3d1b878694c201c614c807cdebc8/from_tensorflow.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_tensorflow.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/sg_execution_times.html b/docs/how_to/compile_models/sg_execution_times.html
index 54e38425e6..3c1a326c1e 100644
--- a/docs/how_to/compile_models/sg_execution_times.html
+++ b/docs/how_to/compile_models/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-compile-models-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>07:24.384</strong> total execution time for <strong>how_to_compile_models</strong> files:</p>
+<p><strong>07:09.419</strong> total execution time for <strong>how_to_compile_models</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 81%" />
@@ -359,43 +359,43 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_tensorflow.html#sphx-glr-how-to-compile-models-from-tensorflow-py"><span class="std std-ref">Compile Tensorflow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tensorflow.py</span></code>)</p></td>
-<td><p>01:37.992</p></td>
+<td><p>01:34.035</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_darknet.html#sphx-glr-how-to-compile-models-from-darknet-py"><span class="std std-ref">Compile YOLO-V2 and YOLO-V3 in DarkNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_darknet.py</span></code>)</p></td>
-<td><p>01:34.000</p></td>
+<td><p>01:31.935</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_paddle.html#sphx-glr-how-to-compile-models-from-paddle-py"><span class="std std-ref">Compile PaddlePaddle Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_paddle.py</span></code>)</p></td>
-<td><p>01:03.873</p></td>
+<td><p>01:00.979</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_oneflow.html#sphx-glr-how-to-compile-models-from-oneflow-py"><span class="std std-ref">Compile OneFlow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_oneflow.py</span></code>)</p></td>
-<td><p>00:42.917</p></td>
+<td><p>00:41.687</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_coreml.html#sphx-glr-how-to-compile-models-from-coreml-py"><span class="std std-ref">Compile CoreML Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_coreml.py</span></code>)</p></td>
-<td><p>00:37.401</p></td>
+<td><p>00:35.674</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_mxnet.html#sphx-glr-how-to-compile-models-from-mxnet-py"><span class="std std-ref">Compile MXNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_mxnet.py</span></code>)</p></td>
-<td><p>00:35.378</p></td>
+<td><p>00:33.553</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="from_pytorch.html#sphx-glr-how-to-compile-models-from-pytorch-py"><span class="std std-ref">Compile PyTorch Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_pytorch.py</span></code>)</p></td>
-<td><p>00:28.784</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="from_keras.html#sphx-glr-how-to-compile-models-from-keras-py"><span class="std std-ref">Compile Keras Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_keras.py</span></code>)</p></td>
+<td><p>00:28.226</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="from_keras.html#sphx-glr-how-to-compile-models-from-keras-py"><span class="std std-ref">Compile Keras Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_keras.py</span></code>)</p></td>
-<td><p>00:27.747</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="from_pytorch.html#sphx-glr-how-to-compile-models-from-pytorch-py"><span class="std std-ref">Compile PyTorch Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_pytorch.py</span></code>)</p></td>
+<td><p>00:27.485</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_tflite.html#sphx-glr-how-to-compile-models-from-tflite-py"><span class="std std-ref">Compile TFLite Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tflite.py</span></code>)</p></td>
-<td><p>00:13.392</p></td>
+<td><p>00:13.059</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_onnx.html#sphx-glr-how-to-compile-models-from-onnx-py"><span class="std std-ref">Compile ONNX Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_onnx.py</span></code>)</p></td>
-<td><p>00:02.899</p></td>
+<td><p>00:02.786</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/deploy_models/deploy_model_on_adreno.html b/docs/how_to/deploy_models/deploy_model_on_adreno.html
index 91ee3b4531..0afcb3f8f9 100644
--- a/docs/how_to/deploy_models/deploy_model_on_adreno.html
+++ b/docs/how_to/deploy_models/deploy_model_on_adreno.html
@@ -840,10 +840,10 @@ Top5 predictions:
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
- 4229.4225    4223.6224    4271.7660    4221.5677     14.4009
+ 4161.0131    4164.2507    4173.1748    4116.0565     15.5221
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  20.716 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  18.890 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-model-on-adreno-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/2387d8448da213eb625e6b3d916327d4/deploy_model_on_adreno.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_model_on_adreno.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html b/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html
index 76e653e6a0..adf418c74b 100644
--- a/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html
+++ b/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html
@@ -448,25 +448,32 @@ to run this tutorial with a real device over rpc.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels.h5
 
      8192/102967424 [..............................] - ETA: 0s
-  8380416/102967424 [=&gt;............................] - ETA: 0s
- 10575872/102967424 [==&gt;...........................] - ETA: 1s
- 16769024/102967424 [===&gt;..........................] - ETA: 1s
- 25157632/102967424 [======&gt;.......................] - ETA: 0s
- 33546240/102967424 [========&gt;.....................] - ETA: 0s
- 40189952/102967424 [==========&gt;...................] - ETA: 0s
- 41934848/102967424 [===========&gt;..................] - ETA: 0s
- 50323456/102967424 [=============&gt;................] - ETA: 0s
- 66035712/102967424 [==================&gt;...........] - ETA: 0s
- 69296128/102967424 [===================&gt;..........] - ETA: 0s
- 72540160/102967424 [====================&gt;.........] - ETA: 0s
+  6684672/102967424 [&gt;.............................] - ETA: 0s
+  8380416/102967424 [=&gt;............................] - ETA: 1s
+ 15024128/102967424 [===&gt;..........................] - ETA: 1s
+ 16769024/102967424 [===&gt;..........................] - ETA: 2s
+ 23412736/102967424 [=====&gt;........................] - ETA: 1s
+ 25157632/102967424 [======&gt;.......................] - ETA: 1s
+ 31580160/102967424 [========&gt;.....................] - ETA: 1s
+ 33546240/102967424 [========&gt;.....................] - ETA: 1s
+ 33816576/102967424 [========&gt;.....................] - ETA: 1s
+ 40189952/102967424 [==========&gt;...................] - ETA: 1s
+ 41934848/102967424 [===========&gt;..................] - ETA: 1s
+ 45842432/102967424 [============&gt;.................] - ETA: 1s
+ 50323456/102967424 [=============&gt;................] - ETA: 1s
+ 56967168/102967424 [===============&gt;..............] - ETA: 1s
+ 58712064/102967424 [================&gt;.............] - ETA: 1s
+ 65355776/102967424 [==================&gt;...........] - ETA: 0s
+ 67100672/102967424 [==================&gt;...........] - ETA: 0s
+ 71540736/102967424 [===================&gt;..........] - ETA: 0s
  75489280/102967424 [====================&gt;.........] - ETA: 0s
  82124800/102967424 [======================&gt;.......] - ETA: 0s
  83877888/102967424 [=======================&gt;......] - ETA: 0s
- 90521600/102967424 [=========================&gt;....] - ETA: 0s
+ 86999040/102967424 [========================&gt;.....] - ETA: 0s
  92266496/102967424 [=========================&gt;....] - ETA: 0s
 100646912/102967424 [============================&gt;.] - ETA: 0s
 102850560/102967424 [============================&gt;.] - ETA: 0s
-102967424/102967424 [==============================] - 1s 0us/step
+102967424/102967424 [==============================] - 2s 0us/step
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/deploy_models/deploy_model_on_android.html b/docs/how_to/deploy_models/deploy_model_on_android.html
index 045cc067a0..e68b6cfe34 100644
--- a/docs/how_to/deploy_models/deploy_model_on_android.html
+++ b/docs/how_to/deploy_models/deploy_model_on_android.html
@@ -672,7 +672,7 @@ to the remote android device.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  16.3814      16.2922      17.8350      14.9727       0.7011
+  15.6872      15.6687      15.9568      15.4194       0.1526
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/deploy_models/deploy_object_detection_pytorch.html b/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
index c0a0c0d647..f9658dab9e 100644
--- a/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
+++ b/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
@@ -464,40 +464,40 @@ be unstable.</p>
 Downloading: &quot;https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth&quot; to /workspace/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
 
   0%|          | 0.00/170M [00:00&lt;?, ?B/s]
-  4%|3         | 6.30M/170M [00:00&lt;00:03, 43.8MB/s]
-  6%|6         | 10.5M/170M [00:00&lt;00:05, 31.3MB/s]
-  8%|8         | 14.3M/170M [00:00&lt;00:05, 27.7MB/s]
- 10%|#         | 17.0M/170M [00:00&lt;00:06, 24.3MB/s]
- 13%|#3        | 22.3M/170M [00:00&lt;00:04, 31.2MB/s]
- 15%|#5        | 25.5M/170M [00:00&lt;00:05, 25.9MB/s]
- 18%|#7        | 30.3M/170M [00:01&lt;00:04, 31.4MB/s]
- 20%|#9        | 33.6M/170M [00:01&lt;00:04, 29.5MB/s]
- 24%|##3       | 40.0M/170M [00:01&lt;00:03, 35.6MB/s]
- 27%|##7       | 46.3M/170M [00:01&lt;00:03, 40.9MB/s]
- 30%|##9       | 50.4M/170M [00:01&lt;00:03, 36.7MB/s]
- 33%|###2      | 56.0M/170M [00:01&lt;00:03, 33.6MB/s]
- 37%|###6      | 62.3M/170M [00:01&lt;00:03, 34.9MB/s]
- 39%|###8      | 65.7M/170M [00:02&lt;00:03, 32.9MB/s]
- 42%|####2     | 72.0M/170M [00:02&lt;00:02, 37.3MB/s]
- 46%|####6     | 78.3M/170M [00:02&lt;00:02, 42.7MB/s]
- 49%|####8     | 82.6M/170M [00:02&lt;00:02, 30.7MB/s]
- 52%|#####1    | 88.0M/170M [00:02&lt;00:02, 30.5MB/s]
- 57%|#####6    | 96.0M/170M [00:03&lt;00:02, 33.0MB/s]
- 61%|######1   | 104M/170M [00:03&lt;00:01, 35.0MB/s]
- 66%|######5   | 112M/170M [00:03&lt;00:01, 35.3MB/s]
- 71%|#######   | 120M/170M [00:03&lt;00:01, 37.6MB/s]
- 74%|#######4  | 126M/170M [00:03&lt;00:01, 40.9MB/s]
- 77%|#######6  | 130M/170M [00:03&lt;00:01, 39.5MB/s]
- 80%|########  | 136M/170M [00:04&lt;00:00, 38.3MB/s]
- 85%|########4 | 144M/170M [00:04&lt;00:00, 41.8MB/s]
- 88%|########8 | 150M/170M [00:04&lt;00:00, 43.1MB/s]
- 91%|######### | 154M/170M [00:04&lt;00:00, 37.3MB/s]
- 93%|#########3| 158M/170M [00:04&lt;00:00, 35.3MB/s]
- 95%|#########5| 162M/170M [00:04&lt;00:00, 28.4MB/s]
- 97%|#########6| 165M/170M [00:05&lt;00:00, 25.1MB/s]
- 98%|#########8| 167M/170M [00:05&lt;00:00, 22.1MB/s]
-100%|#########9| 169M/170M [00:05&lt;00:00, 21.8MB/s]
-100%|##########| 170M/170M [00:05&lt;00:00, 33.0MB/s]
+  4%|3         | 6.30M/170M [00:00&lt;00:08, 19.4MB/s]
+  5%|4         | 8.16M/170M [00:00&lt;00:09, 18.5MB/s]
+  8%|8         | 14.3M/170M [00:00&lt;00:05, 29.7MB/s]
+ 13%|#3        | 22.3M/170M [00:00&lt;00:05, 29.5MB/s]
+ 15%|#4        | 25.3M/170M [00:01&lt;00:06, 24.8MB/s]
+ 18%|#8        | 30.6M/170M [00:01&lt;00:04, 30.8MB/s]
+ 20%|##        | 34.1M/170M [00:01&lt;00:04, 28.8MB/s]
+ 24%|##3       | 40.0M/170M [00:01&lt;00:04, 31.4MB/s]
+ 27%|##7       | 46.3M/170M [00:01&lt;00:05, 25.9MB/s]
+ 29%|##8       | 49.1M/170M [00:01&lt;00:05, 24.8MB/s]
+ 33%|###2      | 56.0M/170M [00:02&lt;00:03, 31.7MB/s]
+ 37%|###6      | 62.3M/170M [00:02&lt;00:03, 36.0MB/s]
+ 39%|###8      | 66.1M/170M [00:02&lt;00:03, 35.1MB/s]
+ 42%|####2     | 72.0M/170M [00:02&lt;00:02, 37.4MB/s]
+ 46%|####6     | 78.7M/170M [00:02&lt;00:02, 44.7MB/s]
+ 49%|####9     | 83.3M/170M [00:02&lt;00:02, 42.9MB/s]
+ 52%|#####1    | 87.6M/170M [00:02&lt;00:02, 33.7MB/s]
+ 54%|#####3    | 91.3M/170M [00:03&lt;00:02, 27.8MB/s]
+ 57%|#####6    | 96.0M/170M [00:03&lt;00:03, 24.9MB/s]
+ 60%|######    | 102M/170M [00:03&lt;00:02, 28.2MB/s]
+ 62%|######1   | 105M/170M [00:03&lt;00:02, 24.9MB/s]
+ 66%|######5   | 112M/170M [00:03&lt;00:02, 29.6MB/s]
+ 71%|#######   | 120M/170M [00:04&lt;00:01, 33.5MB/s]
+ 74%|#######4  | 126M/170M [00:04&lt;00:01, 35.9MB/s]
+ 76%|#######6  | 130M/170M [00:04&lt;00:01, 30.9MB/s]
+ 80%|########  | 136M/170M [00:04&lt;00:01, 29.2MB/s]
+ 83%|########3 | 141M/170M [00:04&lt;00:00, 33.2MB/s]
+ 85%|########5 | 145M/170M [00:05&lt;00:01, 26.1MB/s]
+ 88%|########8 | 150M/170M [00:05&lt;00:00, 30.0MB/s]
+ 90%|######### | 154M/170M [00:05&lt;00:00, 25.7MB/s]
+ 93%|#########3| 158M/170M [00:05&lt;00:00, 28.4MB/s]
+ 95%|#########4| 161M/170M [00:05&lt;00:00, 25.4MB/s]
+ 98%|#########7| 166M/170M [00:06&lt;00:00, 22.0MB/s]
+100%|##########| 170M/170M [00:06&lt;00:00, 29.3MB/s]
 /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/nn/functional.py:3912: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
   (torch.floor((input.size(i + 2).float() * torch.tensor(scale_factors[i], dtype=torch.float32)).float()))
 /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/ops/boxes.py:157: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
@@ -591,7 +591,7 @@ torchvision rcnn models.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Get 9 valid boxes
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  37.380 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  27.104 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-object-detection-pytorch-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7795da4b258c8feff986668b95ef57ad/deploy_object_detection_pytorch.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_object_detection_pytorch.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_prequantized.html b/docs/how_to/deploy_models/deploy_prequantized.html
index daa89ed47f..8b180d3710 100644
--- a/docs/how_to/deploy_models/deploy_prequantized.html
+++ b/docs/how_to/deploy_models/deploy_prequantized.html
@@ -505,9 +505,8 @@ training. Other models require a full post training calibration.</p>
 Downloading: &quot;https://download.pytorch.org/models/mobilenet_v2-b0353104.pth&quot; to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth
 
   0%|          | 0.00/13.6M [00:00&lt;?, ?B/s]
- 47%|####6     | 6.30M/13.6M [00:00&lt;00:00, 34.1MB/s]
- 70%|#######   | 9.55M/13.6M [00:00&lt;00:00, 32.9MB/s]
-100%|##########| 13.6M/13.6M [00:00&lt;00:00, 38.5MB/s]
+ 59%|#####8    | 7.99M/13.6M [00:00&lt;00:00, 46.2MB/s]
+100%|##########| 13.6M/13.6M [00:00&lt;00:00, 54.4MB/s]
 </pre></div>
 </div>
 </div>
@@ -598,7 +597,7 @@ output values are identical out of 1000 outputs from mobilenet v2.</p>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  89.2496      89.1013      93.0435      88.6885       0.5769
+  86.2764      86.2405      90.3213      85.7954       0.4626
 </pre></div>
 </div>
 <div class="admonition note">
@@ -637,7 +636,7 @@ This includes support for the VNNI 8 bit dot product instruction (CascadeLake or
 <div class="section" id="deploy-a-quantized-tflite-model">
 <h2>Deploy a quantized TFLite Model<a class="headerlink" href="#deploy-a-quantized-tflite-model" title="Permalink to this headline">¶</a></h2>
 <p>TODO</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  28.865 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  23.630 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-prequantized-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/fb8217c13f4351224c6cf3aacf1a87fc/deploy_prequantized.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_prequantized.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_prequantized_tflite.html b/docs/how_to/deploy_models/deploy_prequantized_tflite.html
index a809c5ea4e..b3c21d7769 100644
--- a/docs/how_to/deploy_models/deploy_prequantized_tflite.html
+++ b/docs/how_to/deploy_models/deploy_prequantized_tflite.html
@@ -590,7 +590,7 @@ TFLite Top-5 labels: [387 102 386 341 349]
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  110.3457     110.1860     114.1777     109.6324      0.6613
+  106.6157     106.5555     111.8867     105.0673      0.6880
 </pre></div>
 </div>
 <div class="admonition note">
diff --git a/docs/how_to/deploy_models/deploy_quantized.html b/docs/how_to/deploy_models/deploy_quantized.html
index b0a83b274f..f5e4c19db3 100644
--- a/docs/how_to/deploy_models/deploy_quantized.html
+++ b/docs/how_to/deploy_models/deploy_quantized.html
@@ -531,7 +531,7 @@ for calibration. But the accuracy might be impacted.</p>
   warnings.warn(
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  7.848 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  3.579 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-quantized-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7810ecf51bfc05f7d5e8a400ac3e815d/deploy_quantized.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_quantized.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/sg_execution_times.html b/docs/how_to/deploy_models/sg_execution_times.html
index ca2c3a32d8..c34940e966 100644
--- a/docs/how_to/deploy_models/sg_execution_times.html
+++ b/docs/how_to/deploy_models/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-deploy-models-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>12:14.481</strong> total execution time for <strong>how_to_deploy_models</strong> files:</p>
+<p><strong>11:45.590</strong> total execution time for <strong>how_to_deploy_models</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 86%" />
@@ -359,43 +359,43 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_object_detection_pytorch.html#sphx-glr-how-to-deploy-models-deploy-object-detection-pytorch-py"><span class="std std-ref">Compile PyTorch Object Detection Models</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_object_detection_pytorch.py</span></code>)</p></td>
-<td><p>03:37.380</p></td>
+<td><p>03:27.104</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_quantized.html#sphx-glr-how-to-deploy-models-deploy-quantized-py"><span class="std std-ref">Deploy a Quantized Model on Cuda</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_quantized.py</span></code>)</p></td>
-<td><p>02:07.848</p></td>
+<td><p>02:03.579</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_prequantized.html#sphx-glr-how-to-deploy-models-deploy-prequantized-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized.py</span></code>)</p></td>
-<td><p>01:28.865</p></td>
+<td><p>01:23.630</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_adreno.html#sphx-glr-how-to-deploy-models-deploy-model-on-adreno-py"><span class="std std-ref">Deploy the Pretrained Model on Adreno™</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_adreno.py</span></code>)</p></td>
-<td><p>01:20.716</p></td>
+<td><p>01:18.890</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_prequantized_tflite.html#sphx-glr-how-to-deploy-models-deploy-prequantized-tflite-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM - Part 3 (TFLite)</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized_tflite.py</span></code>)</p></td>
-<td><p>00:58.138</p></td>
+<td><p>00:55.450</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_android.html#sphx-glr-how-to-deploy-models-deploy-model-on-android-py"><span class="std std-ref">Deploy the Pretrained Model on Android</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_android.py</span></code>)</p></td>
-<td><p>00:50.618</p></td>
+<td><p>00:48.602</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_model_on_adreno_tvmc.html#sphx-glr-how-to-deploy-models-deploy-model-on-adreno-tvmc-py"><span class="std std-ref">Deploy the Pretrained Model on Adreno™ with tvmc Interface</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_adreno_tvmc.py</span></code>)</p></td>
-<td><p>00:49.382</p></td>
+<td><p>00:48.339</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_nano.html#sphx-glr-how-to-deploy-models-deploy-model-on-nano-py"><span class="std std-ref">Deploy the Pretrained Model on Jetson Nano</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_nano.py</span></code>)</p></td>
-<td><p>00:31.179</p></td>
+<td><p>00:30.183</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_model_on_rasp.html#sphx-glr-how-to-deploy-models-deploy-model-on-rasp-py"><span class="std std-ref">Deploy the Pretrained Model on Raspberry Pi</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_rasp.py</span></code>)</p></td>
-<td><p>00:30.348</p></td>
+<td><p>00:29.808</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_sparse.html#sphx-glr-how-to-deploy-models-deploy-sparse-py"><span class="std std-ref">Deploy a Hugging Face Pruned Model on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_sparse.py</span></code>)</p></td>
-<td><p>00:00.007</p></td>
+<td><p>00:00.006</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/extend_tvm/bring_your_own_datatypes.html b/docs/how_to/extend_tvm/bring_your_own_datatypes.html
index 1dee7d7b9b..c55158fc57 100644
--- a/docs/how_to/extend_tvm/bring_your_own_datatypes.html
+++ b/docs/how_to/extend_tvm/bring_your_own_datatypes.html
@@ -629,7 +629,7 @@ In this alpha state of the Bring Your Own Datatypes framework, we have not imple
 <span class="n">module</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">params</span></a> <span class="o">=</span> <span class="n">get_mobilenet</span><span class="p">()</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip87cdc82d-6e08-4ffe-be2a-0d1ba7100ffa from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip10646a4e-dfa9-4259-8360-dc6359136126 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
 </pre></div>
 </div>
 <p>It’s easy to execute MobileNet with native TVM:</p>
diff --git a/docs/how_to/extend_tvm/sg_execution_times.html b/docs/how_to/extend_tvm/sg_execution_times.html
index acf223c89f..bc2b3cf018 100644
--- a/docs/how_to/extend_tvm/sg_execution_times.html
+++ b/docs/how_to/extend_tvm/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-extend-tvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:58.845</strong> total execution time for <strong>how_to_extend_tvm</strong> files:</p>
+<p><strong>00:53.728</strong> total execution time for <strong>how_to_extend_tvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,15 +359,15 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="bring_your_own_datatypes.html#sphx-glr-how-to-extend-tvm-bring-your-own-datatypes-py"><span class="std std-ref">Bring Your Own Datatypes to TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">bring_your_own_datatypes.py</span></code>)</p></td>
-<td><p>00:54.942</p></td>
+<td><p>00:50.089</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="use_pass_instrument.html#sphx-glr-how-to-extend-tvm-use-pass-instrument-py"><span class="std std-ref">How to Use TVM Pass Instrument</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_instrument.py</span></code>)</p></td>
-<td><p>00:02.712</p></td>
+<td><p>00:02.520</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="use_pass_infra.html#sphx-glr-how-to-extend-tvm-use-pass-infra-py"><span class="std std-ref">How to Use TVM Pass Infra</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_infra.py</span></code>)</p></td>
-<td><p>00:01.184</p></td>
+<td><p>00:01.112</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="low_level_custom_pass.html#sphx-glr-how-to-extend-tvm-low-level-custom-pass-py"><span class="std std-ref">Writing a Customized Pass</span></a> (<code class="docutils literal notranslate"><span class="pre">low_level_custom_pass.py</span></code>)</p></td>
diff --git a/docs/how_to/extend_tvm/use_pass_instrument.html b/docs/how_to/extend_tvm/use_pass_instrument.html
index 84bebfbe8d..696a4afa2f 100644
--- a/docs/how_to/extend_tvm/use_pass_instrument.html
+++ b/docs/how_to/extend_tvm/use_pass_instrument.html
@@ -536,10 +536,10 @@ profile the execution time of each passes.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Printing results of timing profile...
-InferType: 23329us [23329us] (48.48%; 48.48%)
-FoldScaleAxis: 24790us [8us] (51.52%; 51.52%)
-        FoldConstant: 24782us [1810us] (51.50%; 99.97%)
-                InferType: 22973us [22973us] (47.74%; 92.70%)
+InferType: 22292us [22292us] (48.38%; 48.38%)
+FoldScaleAxis: 23780us [7us] (51.62%; 51.62%)
+        FoldConstant: 23772us [1763us] (51.60%; 99.97%)
+                InferType: 22009us [22009us] (47.77%; 92.58%)
 </pre></div>
 </div>
 </div>
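
The per-pass timings above come from TVM's pass instrumentation. A hedged sketch of collecting such a profile for InferType and FoldScaleAxis on a toy Relay module; the module itself is made up purely for illustration:

.. code-block:: python

    import tvm
    from tvm import relay

    # A tiny Relay module; any module works here.
    x = relay.var("x", shape=(1, 3, 224, 224), dtype="float32")
    w = relay.var("w", shape=(8, 3, 3, 3), dtype="float32")
    y = relay.nn.conv2d(x, w, padding=(1, 1))
    mod = tvm.IRModule.from_expr(relay.Function([x, w], y))

    timing_inst = tvm.ir.instrument.PassTimingInstrument()
    with tvm.transform.PassContext(instruments=[timing_inst]):
        mod = relay.transform.InferType()(mod)
        mod = relay.transform.FoldScaleAxis()(mod)
        # render() must be called while the PassContext is still active.
        profile = timing_inst.render()
    print("Printing results of timing profile...")
    print(profile)
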
@@ -561,10 +561,10 @@ Refer to following sections and <a class="reference internal" href="../../refere
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Printing results of timing profile...
-InferType: 23478us [23478us] (48.81%; 48.81%)
-FoldScaleAxis: 24625us [7us] (51.19%; 51.19%)
-        FoldConstant: 24618us [1917us] (51.18%; 99.97%)
-                InferType: 22701us [22701us] (47.19%; 92.21%)
+InferType: 21835us [21835us] (48.28%; 48.28%)
+FoldScaleAxis: 23395us [5us] (51.72%; 51.72%)
+        FoldConstant: 23390us [1667us] (51.71%; 99.98%)
+                InferType: 21722us [21722us] (48.03%; 92.87%)
 </pre></div>
 </div>
 <p>Register empty list to clear existing instruments.</p>
diff --git a/docs/how_to/optimize_operators/opt_conv_cuda.html b/docs/how_to/optimize_operators/opt_conv_cuda.html
index 01f6d7fdba..aeb09028c1 100644
--- a/docs/how_to/optimize_operators/opt_conv_cuda.html
+++ b/docs/how_to/optimize_operators/opt_conv_cuda.html
@@ -585,7 +585,7 @@ latency of convolution.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Convolution: </span><span class="si">%f</span><span class="s2"> ms&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">b</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span> <span class="o">*</span> <span cl [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Convolution: 53.471614 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Convolution: 53.542945 ms
 </pre></div>
 </div>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-optimize-operators-opt-conv-cuda-py">
diff --git a/docs/how_to/optimize_operators/opt_conv_tensorcore.html b/docs/how_to/optimize_operators/opt_conv_tensorcore.html
index 3422f542bb..a5c5bbbec2 100644
--- a/docs/how_to/optimize_operators/opt_conv_tensorcore.html
+++ b/docs/how_to/optimize_operators/opt_conv_tensorcore.html
@@ -867,7 +867,7 @@ be able to run on our build server</p>
     <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;conv2d with tensor core: </span><span class="si">%f</span><span class="s2"> ms&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span> <span class="o">* [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>conv2d with tensor core: 12.270800 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>conv2d with tensor core: 12.273936 ms
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/optimize_operators/opt_gemm.html b/docs/how_to/optimize_operators/opt_gemm.html
index a3bbde5289..17e9978b74 100644
--- a/docs/how_to/optimize_operators/opt_gemm.html
+++ b/docs/how_to/optimize_operators/opt_gemm.html
@@ -482,8 +482,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Baseline: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.019034
-Baseline: 3.413039
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.015656
+Baseline: 3.396954
 </pre></div>
 </div>
 <p>In TVM, we can always inspect lower level IR to debug or optimize our schedule.
@@ -542,7 +542,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt1: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt1: 0.304455
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt1: 0.280310
 </pre></div>
 </div>
 <p>Here is the generated IR after blocking.</p>
@@ -599,7 +599,7 @@ vastly.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt2: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt2: 0.285980
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt2: 0.270487
 </pre></div>
 </div>
 <p>Here is the generated IR after vectorization.</p>
@@ -654,7 +654,7 @@ the access pattern for A matrix is more cache friendly.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt3: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt3: 0.119482
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt3: 0.110006
 </pre></div>
 </div>
 <p>Here is the generated IR after loop permutation.</p>
@@ -731,7 +731,7 @@ flattening.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt4: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt4: 0.107821
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt4: 0.101239
 </pre></div>
 </div>
 <p>Here is the generated IR after array packing.</p>
@@ -809,7 +809,7 @@ write to C when all the block results are ready.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt5: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt5: 0.111831
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt5: 0.101375
 </pre></div>
 </div>
 <p>Here is the generated IR after blocking.</p>
@@ -889,7 +889,7 @@ class Module:
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt6: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">opt6_time</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt6: 0.134663
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt6: 0.125000
 </pre></div>
 </div>
 <p>Here is the generated IR after parallelization.</p>
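
The Opt1 through Opt6 numbers above are all variations of one blocked GEMM schedule. A rough sketch of the first blocking step, with sizes and factors taken from the tutorial's defaults (they may not match the build server exactly):

    import numpy as np
    import tvm
    from tvm import te

    M = K = N = 1024
    target = "llvm"
    dev = tvm.device(target, 0)

    A = te.placeholder((M, K), name="A")
    B = te.placeholder((K, N), name="B")
    k = te.reduce_axis((0, K), name="k")
    C = te.compute((M, N), lambda m, n: te.sum(A[m, k] * B[k, n], axis=k), name="C")

    # Opt1: tile the output into 32x32 blocks and split the reduction axis.
    s = te.create_schedule(C.op)
    bn = 32
    mo, no, mi, ni = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
    (kaxis,) = s[C].op.reduce_axis
    ko, ki = s[C].split(kaxis, factor=4)
    s[C].reorder(mo, no, ko, ki, mi, ni)

    func = tvm.build(s, [A, B, C], target=target)
    a = tvm.nd.array(np.random.rand(M, K).astype("float32"), dev)
    b = tvm.nd.array(np.random.rand(K, N).astype("float32"), dev)
    c = tvm.nd.array(np.zeros((M, N), dtype="float32"), dev)

    evaluator = func.time_evaluator(func.entry_name, dev, number=10)
    print("Opt1 (blocked): %f s" % evaluator(a, b, c).mean)
    np.testing.assert_allclose(c.numpy(), a.numpy() @ b.numpy(), rtol=1e-5)

    # The "generated IR after blocking" listings rendered in the page
    # correspond to printing the lowered schedule:
    print(tvm.lower(s, [A, B, C], simple_mode=True))
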
diff --git a/docs/how_to/optimize_operators/sg_execution_times.html b/docs/how_to/optimize_operators/sg_execution_times.html
index 767668f6e4..4c9e48ed23 100644
--- a/docs/how_to/optimize_operators/sg_execution_times.html
+++ b/docs/how_to/optimize_operators/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-optimize-operators-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:34.873</strong> total execution time for <strong>how_to_optimize_operators</strong> files:</p>
+<p><strong>00:32.888</strong> total execution time for <strong>how_to_optimize_operators</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -359,15 +359,15 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="opt_gemm.html#sphx-glr-how-to-optimize-operators-opt-gemm-py"><span class="std std-ref">How to optimize GEMM on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_gemm.py</span></code>)</p></td>
-<td><p>00:31.273</p></td>
+<td><p>00:29.525</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="opt_conv_tensorcore.html#sphx-glr-how-to-optimize-operators-opt-conv-tensorcore-py"><span class="std std-ref">How to optimize convolution using TensorCores</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_tensorcore.py</span></code>)</p></td>
-<td><p>00:02.149</p></td>
+<td><p>00:02.012</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="opt_conv_cuda.html#sphx-glr-how-to-optimize-operators-opt-conv-cuda-py"><span class="std std-ref">How to optimize convolution on GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_cuda.py</span></code>)</p></td>
-<td><p>00:01.450</p></td>
+<td><p>00:01.351</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/tune_with_autoscheduler/sg_execution_times.html b/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
index 8aba7143ae..421baa0df8 100644
--- a/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
+++ b/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-tune-with-autoscheduler-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>03:52.223</strong> total execution time for <strong>how_to_tune_with_autoscheduler</strong> files:</p>
+<p><strong>03:39.341</strong> total execution time for <strong>how_to_tune_with_autoscheduler</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 85%" />
@@ -359,27 +359,27 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_network_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-x86-py"><span class="std std-ref">Auto-scheduling a Neural Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_x86.py</span></code>)</p></td>
-<td><p>01:41.250</p></td>
+<td><p>01:35.935</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_network_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-cuda-py"><span class="std std-ref">Auto-scheduling a Neural Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_cuda.py</span></code>)</p></td>
-<td><p>01:19.678</p></td>
+<td><p>01:15.472</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_conv2d_layer_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-conv2d-layer-cuda-py"><span class="std std-ref">Auto-scheduling a Convolution Layer for GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_layer_cuda.py</span></code>)</p></td>
-<td><p>00:18.242</p></td>
+<td><p>00:16.809</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_network_mali.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-mali-py"><span class="std std-ref">Auto-scheduling a Neural Network for mali GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_mali.py</span></code>)</p></td>
-<td><p>00:16.624</p></td>
+<td><p>00:15.721</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_network_arm.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-arm-py"><span class="std std-ref">Auto-scheduling a Neural Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_arm.py</span></code>)</p></td>
-<td><p>00:16.326</p></td>
+<td><p>00:15.305</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_sparse_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-sparse-x86-py"><span class="std std-ref">Auto-scheduling Sparse Matrix Multiplication on CPU with Custom Sketch Rule</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_sparse_x86.py</span></code>)</p></td>
-<td><p>00:00.103</p></td>
+<td><p>00:00.099</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html b/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
index 1ddd121b61..889ce8badb 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
@@ -1023,7 +1023,7 @@ class Module:
 <span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 0.347 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 0.349 ms
 </pre></div>
 </div>
 </div>
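
For reference, the "Execution time of this operator: 0.349 ms" figure above is the best schedule found by auto_scheduler for this conv2d. A compressed sketch of that flow (trial count cut down to stay illustrative; the real tutorial also folds bias and ReLU into the workload):

    import tvm
    from tvm import auto_scheduler, te, topi

    @auto_scheduler.register_workload
    def conv2d_layer(N, H, W, CO, CI, KH, KW, stride, padding):
        data = te.placeholder((N, CI, H, W), name="data")
        kernel = te.placeholder((CO, CI, KH, KW), name="kernel")
        conv = topi.nn.conv2d_nchw(data, kernel, stride, padding, dilation=1, out_dtype="float32")
        return [data, kernel, conv]

    target = tvm.target.Target("cuda")
    task = auto_scheduler.SearchTask(
        func=conv2d_layer, args=(1, 7, 7, 512, 512, 3, 3, (1, 1), (1, 1)), target=target
    )

    # GPU measurement goes through a local RPC session; the trial count here is tiny.
    measure_ctx = auto_scheduler.LocalRPCMeasureContext(min_repeat_ms=300)
    log_file = "conv2d.json"
    tune_option = auto_scheduler.TuningOptions(
        num_measure_trials=10,
        runner=measure_ctx.runner,
        measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
    )
    task.tune(tune_option)

    # Apply the best record found so far and compile it.
    sch, args = task.apply_best(log_file)
    func = tvm.build(sch, args, target)
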
diff --git a/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html b/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
index 61b47e462a..a164d31bf2 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
@@ -926,7 +926,7 @@ so we can read the log file and load the best schedules.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-   8.1179       8.1191       8.1217       8.1129       0.0037
+   8.1041       8.1042       8.1155       8.0928       0.0093
 </pre></div>
 </div>
 </div>
@@ -948,7 +948,7 @@ to learn how to use the RPC Tracker and RPC Server.
 To use the RPC Tracker in auto-scheduler, replace the runner in <code class="code docutils literal notranslate"><span class="pre">TuningOptions</span></code>
 with <a class="reference internal" href="../../reference/api/python/auto_scheduler.html#tvm.auto_scheduler.RPCRunner" title="tvm.auto_scheduler.RPCRunner"><code class="xref any py py-class docutils literal notranslate"><span class="pre">auto_scheduler.RPCRunner</span></code></a>.</p></li>
 </ol>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  19.678 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  15.472 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-network-cuda-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/eafe360d52540634c9eea0fa89e804bd/tune_network_cuda.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_network_cuda.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_network_x86.html b/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
index b0a889a29d..da76cb5de6 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
@@ -945,7 +945,7 @@ so we can read the log file and load the best schedules.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  768.5636     768.9423     768.9955     767.7532      0.5735
+  717.0081     718.1229     719.1647     713.7368      2.3520
 </pre></div>
 </div>
 </div>
@@ -967,7 +967,7 @@ to learn how to use the RPC Tracker and RPC Server.
 To use the RPC Tracker in auto-scheduler, replace the runner in <code class="code docutils literal notranslate"><span class="pre">TuningOptions</span></code>
 with <a class="reference internal" href="../../reference/api/python/auto_scheduler.html#tvm.auto_scheduler.RPCRunner" title="tvm.auto_scheduler.RPCRunner"><code class="xref any py py-class docutils literal notranslate"><span class="pre">auto_scheduler.RPCRunner</span></code></a>.</p></li>
 </ol>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  41.250 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  35.935 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-network-x86-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/e416b94ca1090b0897c0f6e0df95b911/tune_network_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_network_x86.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autotvm/sg_execution_times.html b/docs/how_to/tune_with_autotvm/sg_execution_times.html
index 442aebe091..cea09d4c29 100644
--- a/docs/how_to/tune_with_autotvm/sg_execution_times.html
+++ b/docs/how_to/tune_with_autotvm/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-tune-with-autotvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:23.643</strong> total execution time for <strong>how_to_tune_with_autotvm</strong> files:</p>
+<p><strong>00:22.872</strong> total execution time for <strong>how_to_tune_with_autotvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,22 +359,22 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_conv2d_cuda.html#sphx-glr-how-to-tune-with-autotvm-tune-conv2d-cuda-py"><span class="std std-ref">Tuning High Performance Convolution on NVIDIA GPUs</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_cuda.py</span></code>)</p></td>
-<td><p>00:23.603</p></td>
+<td><p>00:22.835</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_relay_x86.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-x86-py"><span class="std std-ref">Auto-tuning a Convolutional Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_x86.py</span></code>)</p></td>
-<td><p>00:00.024</p></td>
+<td><p>00:00.021</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_relay_cuda.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-cuda-py"><span class="std std-ref">Auto-tuning a Convolutional Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_cuda.py</span></code>)</p></td>
 <td><p>00:00.006</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="tune_relay_mobile_gpu.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-mobile-gpu-py"><span class="std std-ref">Auto-tuning a Convolutional Network for Mobile GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_mobile_gpu.py</span></code>)</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="tune_relay_arm.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-arm-py"><span class="std std-ref">Auto-tuning a Convolutional Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_arm.py</span></code>)</p></td>
 <td><p>00:00.005</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="tune_relay_arm.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-arm-py"><span class="std std-ref">Auto-tuning a Convolutional Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_arm.py</span></code>)</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="tune_relay_mobile_gpu.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-mobile-gpu-py"><span class="std std-ref">Auto-tuning a Convolutional Network for Mobile GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_mobile_gpu.py</span></code>)</p></td>
 <td><p>00:00.005</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
diff --git a/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html b/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
index f78f0cb186..17b117f7de 100644
--- a/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
+++ b/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
@@ -620,7 +620,7 @@ and measure running time.</p>
 
 Best config:
 ,None
-Time cost of this operator: 0.037073
+Time cost of this operator: 0.037159
 </pre></div>
 </div>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autotvm-tune-conv2d-cuda-py">
diff --git a/docs/how_to/work_with_microtvm/micro_autotune.html b/docs/how_to/work_with_microtvm/micro_autotune.html
index 61e8ee901f..14d2b3a2d0 100644
--- a/docs/how_to/work_with_microtvm/micro_autotune.html
+++ b/docs/how_to/work_with_microtvm/micro_autotune.html
@@ -654,10 +654,10 @@ the tuned operator.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>########## Build without Autotuning ##########
 Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)
 ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------
-tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  304.4     98.653   (1, 2, 10, 10, 3)  2       1        [304.4]
-tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.936     0.952    (1, 6, 10, 10)     1       1        [2.936]
-tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         1.22      0.395    (1, 1, 10, 10, 3)  1       1        [1.22]
-Total_time                                    -                                             308.557   -        -                  -       -        -
+tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  294.3     98.747   (1, 2, 10, 10, 3)  2       1        [294.3]
+tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.81      0.943    (1, 6, 10, 10)     1       1        [2.81]
+tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.924     0.31     (1, 1, 10, 10, 3)  1       1        [0.924]
+Total_time                                    -                                             298.034   -        -                  -       -        -
 </pre></div>
 </div>
 </div>
@@ -709,13 +709,13 @@ Total_time                                    -
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>########## Build with Autotuning ##########
 Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)
 ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------
-tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  100.7     96.949   (1, 6, 10, 10, 1)  2       1        [100.7]
-tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.016     1.941    (1, 6, 10, 10)     1       1        [2.016]
-tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         1.153     1.11     (1, 1, 10, 10, 3)  1       1        [1.153]
-Total_time                                    -                                             103.869   -        -                  -       -        -
+tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  99.062    97.396   (1, 6, 10, 10, 1)  2       1        [99.062]
+tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.724     1.695    (1, 6, 10, 10)     1       1        [1.724]
+tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.924     0.909    (1, 1, 10, 10, 3)  1       1        [0.924]
+Total_time                                    -                                             101.711   -        -                  -       -        -
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  28.171 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  21.236 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-autotune-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/9ccca8fd489a1486ac71b55a55c320c5/micro_autotune.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_autotune.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/micro_pytorch.html b/docs/how_to/work_with_microtvm/micro_pytorch.html
index 1efb4aa249..1bf85e63c3 100644
--- a/docs/how_to/work_with_microtvm/micro_pytorch.html
+++ b/docs/how_to/work_with_microtvm/micro_pytorch.html
@@ -465,8 +465,8 @@ download a cat image and preprocess it to use as the model input.</p>
 Downloading: &quot;https://download.pytorch.org/models/quantized/mobilenet_v2_qnnpack_37f702c5.pth&quot; to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2_qnnpack_37f702c5.pth
 
   0%|          | 0.00/3.42M [00:00&lt;?, ?B/s]
- 61%|######    | 2.09M/3.42M [00:00&lt;00:00, 13.5MB/s]
-100%|##########| 3.42M/3.42M [00:00&lt;00:00, 21.4MB/s]
+ 95%|#########5| 3.27M/3.42M [00:00&lt;00:00, 34.2MB/s]
+100%|##########| 3.42M/3.42M [00:00&lt;00:00, 35.5MB/s]
 /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/_utils.py:314: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
   device=storage.device,
 /workspace/python/tvm/relay/frontend/pytorch_utils.py:47: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
@@ -594,7 +594,7 @@ via the host <cite>main.cc`</cite> or if a Zephyr emulated board is selected as
 Torch top-1 id: 282, class name: tiger cat
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  29.489 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  23.096 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-pytorch-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/12b9ecc04c41abaa12022061771821d1/micro_pytorch.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_pytorch.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/micro_train.html b/docs/how_to/work_with_microtvm/micro_train.html
index 20ca344465..cc143fdf10 100644
--- a/docs/how_to/work_with_microtvm/micro_train.html
+++ b/docs/how_to/work_with_microtvm/micro_train.html
@@ -533,7 +533,7 @@ take about <strong>2 minutes</strong> to download the Stanford Cars, while COCO
 <a href="https://docs.python.org/3/library/shutil.html#shutil.move" title="shutil.move" class="sphx-glr-backref-module-shutil sphx-glr-backref-type-py-function"><span class="n">shutil</span><span class="o">.</span><span class="n">move</span></a><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;</span><span class="si">{</span><a href="https://docs.python.org/3/library/stdtypes.html#str" title="builtins.str" class="sphx-glr-backref-module-builtins sphx-glr-backref-typ [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&#39;/tmp/tmph22ojcyw/images/random&#39;
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&#39;/tmp/tmp0i0p6kok/images/random&#39;
 </pre></div>
 </div>
 </div>
@@ -593,8 +593,8 @@ objects to other stuff? We can display some examples from our datasets using <co
     <span class="n">plt</span><span class="o">.</span><span class="n">axis</span><span class="p">(</span><span class="s2">&quot;off&quot;</span><span class="p">)</span>
 </pre></div>
 </div>
-<img src="../../_images/sphx_glr_micro_train_001.png" srcset="../../_images/sphx_glr_micro_train_001.png" alt="[1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/tmp/tmph22ojcyw/images/target contains 8144 images
-/tmp/tmph22ojcyw/images/random contains 5000 images
+<img src="../../_images/sphx_glr_micro_train_001.png" srcset="../../_images/sphx_glr_micro_train_001.png" alt="[1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/tmp/tmp0i0p6kok/images/target contains 8144 images
+/tmp/tmp0i0p6kok/images/random contains 5000 images
 </pre></div>
 </div>
 </div>
@@ -706,13 +706,13 @@ the time on our validation set).</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Epoch 1/3
-328/328 - 41s - loss: 0.2171 - accuracy: 0.9238 - val_loss: 0.1282 - val_accuracy: 0.9547 - 41s/epoch - 124ms/step
+328/328 - 40s - loss: 0.2137 - accuracy: 0.9261 - val_loss: 0.1172 - val_accuracy: 0.9585 - 40s/epoch - 123ms/step
 Epoch 2/3
-328/328 - 36s - loss: 0.0995 - accuracy: 0.9635 - val_loss: 0.0903 - val_accuracy: 0.9668 - 36s/epoch - 109ms/step
+328/328 - 34s - loss: 0.1009 - accuracy: 0.9633 - val_loss: 0.1074 - val_accuracy: 0.9634 - 34s/epoch - 104ms/step
 Epoch 3/3
-328/328 - 36s - loss: 0.0629 - accuracy: 0.9765 - val_loss: 0.1174 - val_accuracy: 0.9630 - 36s/epoch - 108ms/step
+328/328 - 34s - loss: 0.0658 - accuracy: 0.9761 - val_loss: 0.1009 - val_accuracy: 0.9671 - 34s/epoch - 104ms/step
 
-&lt;keras.callbacks.History object at 0x7f1da3af9820&gt;
+&lt;keras.callbacks.History object at 0x7f74e4ec8640&gt;
 </pre></div>
 </div>
 </div>
@@ -976,7 +976,7 @@ as intended.</p>
 <p>From here, we could modify the model to read live images from the camera - we have another
 Arduino tutorial for how to do that <a class="reference external" href="https://github.com/guberti/tvm-arduino-demos/tree/master/examples/person_detection">on GitHub</a>. Alternatively, we could also
 <a class="reference external" href="https://tvm.apache.org/docs/how_to/work_with_microtvm/micro_autotune.html">use TVM’s autotuning capabilities</a> to dramatically improve the model’s performance.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 4 minutes  53.923 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 4 minutes  39.033 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-train-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/b52cec46baf4f78d6bcd94cbe269c8a6/micro_train.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_train.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/sg_execution_times.html b/docs/how_to/work_with_microtvm/sg_execution_times.html
index 127d229a67..92e86ba71d 100644
--- a/docs/how_to/work_with_microtvm/sg_execution_times.html
+++ b/docs/how_to/work_with_microtvm/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-microtvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>08:22.216</strong> total execution time for <strong>how_to_work_with_microtvm</strong> files:</p>
+<p><strong>07:51.719</strong> total execution time for <strong>how_to_work_with_microtvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 82%" />
@@ -359,27 +359,27 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_train.html#sphx-glr-how-to-work-with-microtvm-micro-train-py"><span class="std std-ref">5. Training Vision Models for microTVM on Arduino</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_train.py</span></code>)</p></td>
-<td><p>04:53.923</p></td>
+<td><p>04:39.033</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="micro_pytorch.html#sphx-glr-how-to-work-with-microtvm-micro-pytorch-py"><span class="std std-ref">4. microTVM PyTorch Tutorial</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_pytorch.py</span></code>)</p></td>
-<td><p>01:29.489</p></td>
+<td><p>01:23.096</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_autotune.html#sphx-glr-how-to-work-with-microtvm-micro-autotune-py"><span class="std std-ref">6. Model Tuning with microTVM</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_autotune.py</span></code>)</p></td>
-<td><p>01:28.171</p></td>
+<td><p>01:21.236</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="micro_aot.html#sphx-glr-how-to-work-with-microtvm-micro-aot-py"><span class="std std-ref">3. microTVM Ahead-of-Time (AOT) Compilation</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_aot.py</span></code>)</p></td>
-<td><p>00:12.756</p></td>
+<td><p>00:11.808</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_tflite.html#sphx-glr-how-to-work-with-microtvm-micro-tflite-py"><span class="std std-ref">2. microTVM TFLite Tutorial</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_tflite.py</span></code>)</p></td>
-<td><p>00:09.025</p></td>
+<td><p>00:08.586</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="micro_custom_ide.html#sphx-glr-how-to-work-with-microtvm-micro-custom-ide-py"><span class="std std-ref">9. Bring microTVM to your own development environment</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_custom_ide.py</span></code>)</p></td>
-<td><p>00:08.852</p></td>
+<td><p>00:07.960</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_ethosu.html#sphx-glr-how-to-work-with-microtvm-micro-ethosu-py"><span class="std std-ref">7. Running TVM on bare metal Arm(R) Cortex(R)-M55 CPU and Ethos(TM)-U55 NPU with CMSIS-NN</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_ethosu.py</span></code>)</p></td>
diff --git a/docs/how_to/work_with_relay/sg_execution_times.html b/docs/how_to/work_with_relay/sg_execution_times.html
index c28f4704f1..25c25c59d2 100644
--- a/docs/how_to/work_with_relay/sg_execution_times.html
+++ b/docs/how_to/work_with_relay/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-relay-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:41.073</strong> total execution time for <strong>how_to_work_with_relay</strong> files:</p>
+<p><strong>00:38.213</strong> total execution time for <strong>how_to_work_with_relay</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,15 +359,15 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="using_pipeline_executor.html#sphx-glr-how-to-work-with-relay-using-pipeline-executor-py"><span class="std std-ref">Using Pipeline Executor in Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_pipeline_executor.py</span></code>)</p></td>
-<td><p>00:35.708</p></td>
+<td><p>00:33.220</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="using_external_lib.html#sphx-glr-how-to-work-with-relay-using-external-lib-py"><span class="std std-ref">Using External Libraries in Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_external_lib.py</span></code>)</p></td>
-<td><p>00:03.369</p></td>
+<td><p>00:03.133</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="build_gcn.html#sphx-glr-how-to-work-with-relay-build-gcn-py"><span class="std std-ref">Building a Graph Convolutional Network</span></a> (<code class="docutils literal notranslate"><span class="pre">build_gcn.py</span></code>)</p></td>
-<td><p>00:01.989</p></td>
+<td><p>00:01.855</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="using_relay_viz.html#sphx-glr-how-to-work-with-relay-using-relay-viz-py"><span class="std std-ref">Use Relay Visualizer to Visualize Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_relay_viz.py</span></code>)</p></td>
diff --git a/docs/how_to/work_with_schedules/intrin_math.html b/docs/how_to/work_with_schedules/intrin_math.html
index a84f83c2c6..c2bf232905 100644
--- a/docs/how_to/work_with_schedules/intrin_math.html
+++ b/docs/how_to/work_with_schedules/intrin_math.html
@@ -559,7 +559,7 @@ The following example customizes CUDA lowering rule for <code class="code docuti
 <a href="../../reference/api/python/ir.html#tvm.ir.register_intrin_lowering" title="tvm.ir.register_intrin_lowering" class="sphx-glr-backref-module-tvm-ir sphx-glr-backref-type-py-function"><span class="n">register_intrin_lowering</span></a><span class="p">(</span><span class="s2">&quot;tir.exp&quot;</span><span class="p">,</span> <span class="n">target</span><span class="o">=</span><span class="s2">&quot;cuda&quot;</span><span class="p">,</span> <span class="n">f</span><span class="o">= [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&lt;function my_cuda_math_rule at 0x7f207c723dc0&gt;
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&lt;function my_cuda_math_rule at 0x7f74fda06b80&gt;
 </pre></div>
 </div>
 <p>Register the rule to TVM with override option to override existing rule.
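
The rule registration referenced above amounts to the sketch below; the real tutorial dispatches on the intrinsic name, while this version hardcodes exp, and a high level is passed so the custom rule wins over the built-in CUDA one.

    import tvm
    from tvm.ir import register_intrin_lowering

    def my_cuda_math_rule(op):
        """Lower tir.exp to the CUDA fast-math __expf for float32."""
        assert isinstance(op, tvm.tir.Call)
        if op.dtype == "float32":
            return tvm.tir.call_pure_extern("float32", "__expf", op.args[0])
        elif op.dtype == "float64":
            return tvm.tir.call_pure_extern("float64", "exp", op.args[0])
        return op  # leave other dtypes to the default lowering

    # Registering with a high level lets this rule take precedence for CUDA.
    register_intrin_lowering("tir.exp", target="cuda", f=my_cuda_math_rule, level=99)
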
diff --git a/docs/how_to/work_with_schedules/sg_execution_times.html b/docs/how_to/work_with_schedules/sg_execution_times.html
index e554215c68..3edbd5c00c 100644
--- a/docs/how_to/work_with_schedules/sg_execution_times.html
+++ b/docs/how_to/work_with_schedules/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-schedules-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:06.854</strong> total execution time for <strong>how_to_work_with_schedules</strong> files:</p>
+<p><strong>00:07.963</strong> total execution time for <strong>how_to_work_with_schedules</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -359,35 +359,35 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="intrin_math.html#sphx-glr-how-to-work-with-schedules-intrin-math-py"><span class="std std-ref">Intrinsics and Math Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">intrin_math.py</span></code>)</p></td>
-<td><p>00:03.467</p></td>
+<td><p>00:04.900</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tensorize.html#sphx-glr-how-to-work-with-schedules-tensorize-py"><span class="std std-ref">Use Tensorize to Leverage Hardware Intrinsics</span></a> (<code class="docutils literal notranslate"><span class="pre">tensorize.py</span></code>)</p></td>
-<td><p>00:01.519</p></td>
+<td><p>00:01.374</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="reduction.html#sphx-glr-how-to-work-with-schedules-reduction-py"><span class="std std-ref">Reduction</span></a> (<code class="docutils literal notranslate"><span class="pre">reduction.py</span></code>)</p></td>
-<td><p>00:00.793</p></td>
+<td><p>00:00.713</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="scan.html#sphx-glr-how-to-work-with-schedules-scan-py"><span class="std std-ref">Scan and Recurrent Kernel</span></a> (<code class="docutils literal notranslate"><span class="pre">scan.py</span></code>)</p></td>
-<td><p>00:00.784</p></td>
+<td><p>00:00.694</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="extern_op.html#sphx-glr-how-to-work-with-schedules-extern-op-py"><span class="std std-ref">External Tensor Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">extern_op.py</span></code>)</p></td>
-<td><p>00:00.117</p></td>
+<td><p>00:00.114</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tedd.html#sphx-glr-how-to-work-with-schedules-tedd-py"><span class="std std-ref">Use Tensor Expression Debug Display (TEDD) for Visualization</span></a> (<code class="docutils literal notranslate"><span class="pre">tedd.py</span></code>)</p></td>
-<td><p>00:00.084</p></td>
+<td><p>00:00.080</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="schedule_primitives.html#sphx-glr-how-to-work-with-schedules-schedule-primitives-py"><span class="std std-ref">Schedule Primitives in TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">schedule_primitives.py</span></code>)</p></td>
-<td><p>00:00.059</p></td>
+<td><p>00:00.058</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tuple_inputs.html#sphx-glr-how-to-work-with-schedules-tuple-inputs-py"><span class="std std-ref">Compute and Reduce with Tuple Inputs</span></a> (<code class="docutils literal notranslate"><span class="pre">tuple_inputs.py</span></code>)</p></td>
-<td><p>00:00.032</p></td>
+<td><p>00:00.030</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/reference/api/python/auto_scheduler.html b/docs/reference/api/python/auto_scheduler.html
index ff03aef843..8efcad5194 100644
--- a/docs/reference/api/python/auto_scheduler.html
+++ b/docs/reference/api/python/auto_scheduler.html
@@ -1627,7 +1627,7 @@ history states as starting point to perform Evolutionary Search).</p></li>
 
 <dl class="py class">
 <dt class="sig sig-object py" id="tvm.auto_scheduler.SketchPolicy">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">SketchPolicy</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">program_cost_model</span></span><span class="o"><span class="pre">=</span></span><span class="defau [...]
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">SketchPolicy</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">program_cost_model</span></span><span class="o"><span class="pre">=</span></span><span class="defau [...]
 <dd><p>The search policy that searches in a hierarchical search space defined by sketches.
 The policy randomly samples programs from the space defined by sketches and use evolutionary
 search to fine-tune them.</p>
@@ -1911,7 +1911,7 @@ Candidates:
 
 <dl class="py function">
 <dt class="sig sig-object py" id="tvm.auto_scheduler.auto_schedule">
-<span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">auto_schedule</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">search_policy</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em clas [...]
+<span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">auto_schedule</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">search_policy</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em clas [...]
 <dd><p>THIS API IS DEPRECATED.</p>
 <p>Run auto scheduling search for a task.</p>
 <dl class="field-list simple">
diff --git a/docs/reference/api/typedoc/classes/bytestreamreader.html b/docs/reference/api/typedoc/classes/bytestreamreader.html
index bbc0ea8881..3799628127 100644
--- a/docs/reference/api/typedoc/classes/bytestreamreader.html
+++ b/docs/reference/api/typedoc/classes/bytestreamreader.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -141,7 +141,7 @@
 					<div class="tsd-signature tsd-kind-icon">bytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Uint8Array</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -151,7 +151,7 @@
 					<div class="tsd-signature tsd-kind-icon">offset<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L44">rpc_server.ts:44</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L44">rpc_server.ts:44</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -168,7 +168,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L65">rpc_server.ts:65</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L65">rpc_server.ts:65</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">Uint8Array</span></h4>
@@ -185,7 +185,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L51">rpc_server.ts:51</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L51">rpc_server.ts:51</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -202,7 +202,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L59">rpc_server.ts:59</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L59">rpc_server.ts:59</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
diff --git a/docs/reference/api/typedoc/classes/cachedcallstack.html b/docs/reference/api/typedoc/classes/cachedcallstack.html
index a1af9af770..dd0ee40666 100644
--- a/docs/reference/api/typedoc/classes/cachedcallstack.html
+++ b/docs/reference/api/typedoc/classes/cachedcallstack.html
@@ -144,7 +144,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L223">memory.ts:223</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L223">memory.ts:223</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -172,7 +172,7 @@
 					<div class="tsd-signature tsd-kind-icon">temp<wbr>Args<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><a href="../interfaces/disposable.html" class="tsd-signature-type">Disposable</a><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = []</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L208">memory.ts:208</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L208">memory.ts:208</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -194,7 +194,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L312">memory.ts:312</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L312">memory.ts:312</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -226,7 +226,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L284">memory.ts:284</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L284">memory.ts:284</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -262,7 +262,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L388">memory.ts:388</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L388">memory.ts:388</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -300,7 +300,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L376">memory.ts:376</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L376">memory.ts:376</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -340,7 +340,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L267">memory.ts:267</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L267">memory.ts:267</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -373,7 +373,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L243">memory.ts:243</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L243">memory.ts:243</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -390,7 +390,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L321">memory.ts:321</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L321">memory.ts:321</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -422,7 +422,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L252">memory.ts:252</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L252">memory.ts:252</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -444,7 +444,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L359">memory.ts:359</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L359">memory.ts:359</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -470,7 +470,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L342">memory.ts:342</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L342">memory.ts:342</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -496,7 +496,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L350">memory.ts:350</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L350">memory.ts:350</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -522,7 +522,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L326">memory.ts:326</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L326">memory.ts:326</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -548,7 +548,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L363">memory.ts:363</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L363">memory.ts:363</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -574,7 +574,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L346">memory.ts:346</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L346">memory.ts:346</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -600,7 +600,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L334">memory.ts:334</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L334">memory.ts:334</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
diff --git a/docs/reference/api/typedoc/classes/dldatatype.html b/docs/reference/api/typedoc/classes/dldatatype.html
index fc30b87c80..c5dd39995b 100644
--- a/docs/reference/api/typedoc/classes/dldatatype.html
+++ b/docs/reference/api/typedoc/classes/dldatatype.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L359">runtime.ts:359</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L359">runtime.ts:359</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -147,7 +147,7 @@
 					<div class="tsd-signature tsd-kind-icon">bits<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L357">runtime.ts:357</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L357">runtime.ts:357</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -162,7 +162,7 @@
 					<div class="tsd-signature tsd-kind-icon">code<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L355">runtime.ts:355</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L355">runtime.ts:355</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -177,7 +177,7 @@
 					<div class="tsd-signature tsd-kind-icon">lanes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L359">runtime.ts:359</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L359">runtime.ts:359</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -199,7 +199,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L376">runtime.ts:376</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L376">runtime.ts:376</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -216,7 +216,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L367">runtime.ts:367</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L367">runtime.ts:367</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">string</span></h4>
diff --git a/docs/reference/api/typedoc/classes/dldevice.html b/docs/reference/api/typedoc/classes/dldevice.html
index 8bb5ae38da..30e2b39e1d 100644
--- a/docs/reference/api/typedoc/classes/dldevice.html
+++ b/docs/reference/api/typedoc/classes/dldevice.html
@@ -118,7 +118,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L299">runtime.ts:299</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L299">runtime.ts:299</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -146,7 +146,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<wbr>Id<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L297">runtime.ts:297</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L297">runtime.ts:297</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -161,7 +161,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L295">runtime.ts:295</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L295">runtime.ts:295</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -183,7 +183,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L320">runtime.ts:320</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L320">runtime.ts:320</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -205,7 +205,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L327">runtime.ts:327</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L327">runtime.ts:327</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">string</span></h4>
diff --git a/docs/reference/api/typedoc/classes/environment.html b/docs/reference/api/typedoc/classes/environment.html
index 0d991022bb..4d2e387830 100644
--- a/docs/reference/api/typedoc/classes/environment.html
+++ b/docs/reference/api/typedoc/classes/environment.html
@@ -125,7 +125,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/environment.ts#L86">environment.ts:86</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/environment.ts#L86">environment.ts:86</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -169,7 +169,7 @@
 					<aside class="tsd-sources">
 						<p>Implementation of <a href="../interfaces/libraryprovider.html">LibraryProvider</a>.<a href="../interfaces/libraryprovider.html#imports">imports</a></p>
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/environment.ts#L70">environment.ts:70</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/environment.ts#L70">environment.ts:70</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -179,7 +179,7 @@
 					<div class="tsd-signature tsd-kind-icon">logger<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>msg<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/environment.ts#L69">environment.ts:69</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/environment.ts#L69">environment.ts:69</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -210,7 +210,7 @@
 					<div class="tsd-signature tsd-kind-icon">packedCFunc<wbr>Table<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">ctypes.FTVMWasmPackedCFunc</span><span class="tsd-signature-symbol"> | </span><span class="tsd-signature-type">undefined</span><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = [undefined,]</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/environment.ts#L78">environment.ts:78</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/environment.ts#L78">environment.ts:78</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -228,7 +228,7 @@
 					<div class="tsd-signature tsd-kind-icon">packedCFunc<wbr>Table<wbr>Free<wbr>Id<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = []</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/environment.ts#L84">environment.ts:84</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/environment.ts#L84">environment.ts:84</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -250,7 +250,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/environment.ts#L105">environment.ts:105</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/environment.ts#L105">environment.ts:105</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/ffilibrary.html b/docs/reference/api/typedoc/classes/ffilibrary.html
index 70b78c43ee..e872ce4f92 100644
--- a/docs/reference/api/typedoc/classes/ffilibrary.html
+++ b/docs/reference/api/typedoc/classes/ffilibrary.html
@@ -131,7 +131,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L50">runtime.ts:50</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L50">runtime.ts:50</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -156,7 +156,7 @@
 					<div class="tsd-signature tsd-kind-icon">exports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">Function</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L47">runtime.ts:47</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L47">runtime.ts:47</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -166,7 +166,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L46">runtime.ts:46</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L46">runtime.ts:46</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">wasm32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">boolean</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L45">runtime.ts:45</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L45">runtime.ts:45</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -186,7 +186,7 @@
 					<div class="tsd-signature tsd-kind-icon">webGPUContext<span class="tsd-signature-symbol">:</span> <a href="webgpucontext.html" class="tsd-signature-type">WebGPUContext</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L48">runtime.ts:48</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L48">runtime.ts:48</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -203,7 +203,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L77">runtime.ts:77</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L77">runtime.ts:77</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -226,7 +226,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L67">runtime.ts:67</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L67">runtime.ts:67</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -243,7 +243,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L85">runtime.ts:85</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L85">runtime.ts:85</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <a href="cachedcallstack.html" class="tsd-signature-type">CachedCallStack</a></h4>
@@ -260,7 +260,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L96">runtime.ts:96</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L96">runtime.ts:96</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -283,7 +283,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L73">runtime.ts:73</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L73">runtime.ts:73</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
diff --git a/docs/reference/api/typedoc/classes/instance.html b/docs/reference/api/typedoc/classes/instance.html
index 4aae651d41..582b764f49 100644
--- a/docs/reference/api/typedoc/classes/instance.html
+++ b/docs/reference/api/typedoc/classes/instance.html
@@ -161,7 +161,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L844">runtime.ts:844</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L844">runtime.ts:844</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -224,7 +224,7 @@
 					<div class="tsd-signature tsd-kind-icon">exports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">Function</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L834">runtime.ts:834</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L834">runtime.ts:834</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -234,7 +234,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L833">runtime.ts:833</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L833">runtime.ts:833</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -251,7 +251,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L973">runtime.ts:973</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L973">runtime.ts:973</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -296,7 +296,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L932">runtime.ts:932</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L932">runtime.ts:932</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -318,7 +318,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L901">runtime.ts:901</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L901">runtime.ts:901</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -381,7 +381,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1215">runtime.ts:1215</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1215">runtime.ts:1215</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -412,7 +412,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1000">runtime.ts:1000</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1000">runtime.ts:1000</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -453,7 +453,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1207">runtime.ts:1207</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1207">runtime.ts:1207</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -491,7 +491,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L922">runtime.ts:922</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L922">runtime.ts:922</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -508,7 +508,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1235">runtime.ts:1235</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1235">runtime.ts:1235</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -552,7 +552,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L943">runtime.ts:943</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L943">runtime.ts:943</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -577,7 +577,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1088">runtime.ts:1088</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1088">runtime.ts:1088</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -609,7 +609,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1363">runtime.ts:1363</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1363">runtime.ts:1363</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -640,7 +640,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1123">runtime.ts:1123</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1123">runtime.ts:1123</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -672,7 +672,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1016">runtime.ts:1016</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1016">runtime.ts:1016</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -695,7 +695,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1281">runtime.ts:1281</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1281">runtime.ts:1281</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -729,7 +729,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L986">runtime.ts:986</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L986">runtime.ts:986</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -769,7 +769,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1341">runtime.ts:1341</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1341">runtime.ts:1341</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -817,7 +817,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1055">runtime.ts:1055</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1055">runtime.ts:1055</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -857,7 +857,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1320">runtime.ts:1320</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1320">runtime.ts:1320</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -900,7 +900,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1197">runtime.ts:1197</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1197">runtime.ts:1197</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -938,7 +938,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1491">runtime.ts:1491</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1491">runtime.ts:1491</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -990,7 +990,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1009">runtime.ts:1009</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1009">runtime.ts:1009</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1014,7 +1014,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1151">runtime.ts:1151</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1151">runtime.ts:1151</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1046,7 +1046,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1134">runtime.ts:1134</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1134">runtime.ts:1134</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1078,7 +1078,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1292">runtime.ts:1292</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1292">runtime.ts:1292</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1110,7 +1110,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1223">runtime.ts:1223</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1223">runtime.ts:1223</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1141,7 +1141,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L957">runtime.ts:957</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L957">runtime.ts:957</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/memory.html b/docs/reference/api/typedoc/classes/memory.html
index 237006658c..6afa95dfbe 100644
--- a/docs/reference/api/typedoc/classes/memory.html
+++ b/docs/reference/api/typedoc/classes/memory.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L40">memory.ts:40</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L40">memory.ts:40</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -152,7 +152,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Memory</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L32">memory.ts:32</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L32">memory.ts:32</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -162,7 +162,7 @@
 					<div class="tsd-signature tsd-kind-icon">wasm32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">boolean</span><span class="tsd-signature-symbol"> = true</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L33">memory.ts:33</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L33">memory.ts:33</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -179,7 +179,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L154">memory.ts:154</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L154">memory.ts:154</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -210,7 +210,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L90">memory.ts:90</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L90">memory.ts:90</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -233,7 +233,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L97">memory.ts:97</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L97">memory.ts:97</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -256,7 +256,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L74">memory.ts:74</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L74">memory.ts:74</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -279,7 +279,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L81">memory.ts:81</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L81">memory.ts:81</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -302,7 +302,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L104">memory.ts:104</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L104">memory.ts:104</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -325,7 +325,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L132">memory.ts:132</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L132">memory.ts:132</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -362,7 +362,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L145">memory.ts:145</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L145">memory.ts:145</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -393,7 +393,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L60">memory.ts:60</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L60">memory.ts:60</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -416,7 +416,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L67">memory.ts:67</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L67">memory.ts:67</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -439,7 +439,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L53">memory.ts:53</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L53">memory.ts:53</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -462,7 +462,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L114">memory.ts:114</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L114">memory.ts:114</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -485,7 +485,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L124">memory.ts:124</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L124">memory.ts:124</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -502,7 +502,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/memory.ts#L175">memory.ts:175</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/memory.ts#L175">memory.ts:175</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/module.html b/docs/reference/api/typedoc/classes/module.html
index 52022af902..ba85e7ab49 100644
--- a/docs/reference/api/typedoc/classes/module.html
+++ b/docs/reference/api/typedoc/classes/module.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L614">runtime.ts:614</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L614">runtime.ts:614</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -169,7 +169,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L626">runtime.ts:626</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L626">runtime.ts:626</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -186,7 +186,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L653">runtime.ts:653</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L653">runtime.ts:653</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -218,7 +218,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L641">runtime.ts:641</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L641">runtime.ts:641</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -250,7 +250,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L687">runtime.ts:687</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L687">runtime.ts:687</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/ndarray.html b/docs/reference/api/typedoc/classes/ndarray.html
index fcc9c3dfdf..e830babb52 100644
--- a/docs/reference/api/typedoc/classes/ndarray.html
+++ b/docs/reference/api/typedoc/classes/ndarray.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L401">runtime.ts:401</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L401">runtime.ts:401</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -158,7 +158,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<span class="tsd-signature-symbol">:</span> <a href="dldevice.html" class="tsd-signature-type">DLDevice</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L394">runtime.ts:394</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L394">runtime.ts:394</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -173,7 +173,7 @@
 					<div class="tsd-signature tsd-kind-icon">dtype<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L390">runtime.ts:390</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L390">runtime.ts:390</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -188,7 +188,7 @@
 					<div class="tsd-signature tsd-kind-icon">ndim<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L388">runtime.ts:388</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L388">runtime.ts:388</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -203,7 +203,7 @@
 					<div class="tsd-signature tsd-kind-icon">shape<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L392">runtime.ts:392</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L392">runtime.ts:392</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -225,7 +225,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L480">runtime.ts:480</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L480">runtime.ts:480</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -258,7 +258,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L524">runtime.ts:524</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L524">runtime.ts:524</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -290,7 +290,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L465">runtime.ts:465</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L465">runtime.ts:465</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -307,7 +307,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L458">runtime.ts:458</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L458">runtime.ts:458</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -339,7 +339,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L584">runtime.ts:584</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L584">runtime.ts:584</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -363,7 +363,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L553">runtime.ts:553</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L553">runtime.ts:553</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/packedfunccell.html b/docs/reference/api/typedoc/classes/packedfunccell.html
index 2dac63a8fb..42ed64021c 100644
--- a/docs/reference/api/typedoc/classes/packedfunccell.html
+++ b/docs/reference/api/typedoc/classes/packedfunccell.html
@@ -117,7 +117,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L248">runtime.ts:248</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L248">runtime.ts:248</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -146,7 +146,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L255">runtime.ts:255</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L255">runtime.ts:255</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -163,7 +163,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L264">runtime.ts:264</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L264">runtime.ts:264</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
diff --git a/docs/reference/api/typedoc/classes/rpcserver.html b/docs/reference/api/typedoc/classes/rpcserver.html
index df04ab3cae..77dcecb3f3 100644
--- a/docs/reference/api/typedoc/classes/rpcserver.html
+++ b/docs/reference/api/typedoc/classes/rpcserver.html
@@ -115,7 +115,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L95">rpc_server.ts:95</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L95">rpc_server.ts:95</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">get<wbr>Imports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">unknown</span><span class="tsd-signat [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L84">rpc_server.ts:84</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L84">rpc_server.ts:84</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -201,7 +201,7 @@
 					<div class="tsd-signature tsd-kind-icon">key<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L80">rpc_server.ts:80</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L80">rpc_server.ts:80</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -211,7 +211,7 @@
 					<div class="tsd-signature tsd-kind-icon">logger<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>msg<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L83">rpc_server.ts:83</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L83">rpc_server.ts:83</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -242,7 +242,7 @@
 					<div class="tsd-signature tsd-kind-icon">socket<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">WebSocket</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L81">rpc_server.ts:81</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L81">rpc_server.ts:81</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -252,7 +252,7 @@
 					<div class="tsd-signature tsd-kind-icon">state<span class="tsd-signature-symbol">:</span> <a href="../enums/rpcserverstate.html" class="tsd-signature-type">RPCServerState</a><span class="tsd-signature-symbol"> = RPCServerState.InitHeader</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L82">rpc_server.ts:82</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L82">rpc_server.ts:82</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -262,7 +262,7 @@
 					<div class="tsd-signature tsd-kind-icon">url<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L79">rpc_server.ts:79</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L79">rpc_server.ts:79</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/classes/runtimecontext.html b/docs/reference/api/typedoc/classes/runtimecontext.html
index 4712a10df7..20f1ea097b 100644
--- a/docs/reference/api/typedoc/classes/runtimecontext.html
+++ b/docs/reference/api/typedoc/classes/runtimecontext.html
@@ -132,7 +132,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L148">runtime.ts:148</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L148">runtime.ts:148</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -172,7 +172,7 @@
 					<div class="tsd-signature tsd-kind-icon">array<wbr>Get<wbr>Item<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L143">runtime.ts:143</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L143">runtime.ts:143</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -182,7 +182,7 @@
 					<div class="tsd-signature tsd-kind-icon">array<wbr>Get<wbr>Size<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L144">runtime.ts:144</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L144">runtime.ts:144</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -192,7 +192,7 @@
 					<div class="tsd-signature tsd-kind-icon">array<wbr>Make<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L145">runtime.ts:145</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L145">runtime.ts:145</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -202,7 +202,7 @@
 					<div class="tsd-signature tsd-kind-icon">get<wbr>Sys<wbr>Lib<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L146">runtime.ts:146</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L146">runtime.ts:146</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -219,7 +219,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L189">runtime.ts:189</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L189">runtime.ts:189</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -263,7 +263,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L163">runtime.ts:163</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L163">runtime.ts:163</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -280,7 +280,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L208">runtime.ts:208</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L208">runtime.ts:208</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-type-parameters-title">Type parameters</h4>
@@ -309,7 +309,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L157">runtime.ts:157</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L157">runtime.ts:157</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -326,7 +326,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L167">runtime.ts:167</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L167">runtime.ts:167</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -343,7 +343,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L198">runtime.ts:198</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L198">runtime.ts:198</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-type-parameters-title">Type parameters</h4>
diff --git a/docs/reference/api/typedoc/classes/scalar.html b/docs/reference/api/typedoc/classes/scalar.html
index a446def358..f56f9d0cbd 100644
--- a/docs/reference/api/typedoc/classes/scalar.html
+++ b/docs/reference/api/typedoc/classes/scalar.html
@@ -112,7 +112,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L235">runtime.ts:235</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L235">runtime.ts:235</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -137,7 +137,7 @@
 					<div class="tsd-signature tsd-kind-icon">dtype<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L235">runtime.ts:235</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L235">runtime.ts:235</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -152,7 +152,7 @@
 					<div class="tsd-signature tsd-kind-icon">value<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L233">runtime.ts:233</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L233">runtime.ts:233</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/tvmarray.html b/docs/reference/api/typedoc/classes/tvmarray.html
index 2885353a2a..2dac169a9a 100644
--- a/docs/reference/api/typedoc/classes/tvmarray.html
+++ b/docs/reference/api/typedoc/classes/tvmarray.html
@@ -133,7 +133,7 @@
 							<aside class="tsd-sources">
 								<p>Overrides <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#constructor">constructor</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L784">runtime.ts:784</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L784">runtime.ts:784</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -162,7 +162,7 @@
 					<aside class="tsd-sources">
 						<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#ctx">ctx</a></p>
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L703">runtime.ts:703</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L703">runtime.ts:703</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -180,7 +180,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#dispose">dispose</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L715">runtime.ts:715</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L715">runtime.ts:715</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -197,7 +197,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L804">runtime.ts:804</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L804">runtime.ts:804</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -230,7 +230,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#gethandle">getHandle</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L730">runtime.ts:730</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L730">runtime.ts:730</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -262,7 +262,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L796">runtime.ts:796</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L796">runtime.ts:796</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -283,7 +283,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#typeindex">typeIndex</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L738">runtime.ts:738</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L738">runtime.ts:738</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -306,7 +306,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#typekey">typeKey</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L758">runtime.ts:758</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L758">runtime.ts:758</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/tvmobject.html b/docs/reference/api/typedoc/classes/tvmobject.html
index 5dc5fa2a8e..33a7c4c7ac 100644
--- a/docs/reference/api/typedoc/classes/tvmobject.html
+++ b/docs/reference/api/typedoc/classes/tvmobject.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L703">runtime.ts:703</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L703">runtime.ts:703</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -158,7 +158,7 @@
 					<div class="tsd-signature tsd-kind-icon">ctx<span class="tsd-signature-symbol">:</span> <a href="runtimecontext.html" class="tsd-signature-type">RuntimeContext</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L703">runtime.ts:703</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L703">runtime.ts:703</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -175,7 +175,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L715">runtime.ts:715</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L715">runtime.ts:715</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -192,7 +192,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L730">runtime.ts:730</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L730">runtime.ts:730</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -224,7 +224,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L738">runtime.ts:738</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L738">runtime.ts:738</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -246,7 +246,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L758">runtime.ts:758</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L758">runtime.ts:758</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/webgpucontext.html b/docs/reference/api/typedoc/classes/webgpucontext.html
index 3dd4083c07..811023016f 100644
--- a/docs/reference/api/typedoc/classes/webgpucontext.html
+++ b/docs/reference/api/typedoc/classes/webgpucontext.html
@@ -120,7 +120,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/webgpu.ts#L57">webgpu.ts:57</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/webgpu.ts#L57">webgpu.ts:57</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -145,7 +145,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">GPUDevice</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/webgpu.ts#L50">webgpu.ts:50</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/webgpu.ts#L50">webgpu.ts:50</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -155,7 +155,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/webgpu.ts#L51">webgpu.ts:51</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/webgpu.ts#L51">webgpu.ts:51</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -172,7 +172,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/webgpu.ts#L84">webgpu.ts:84</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/webgpu.ts#L84">webgpu.ts:84</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -209,7 +209,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/webgpu.ts#L172">webgpu.ts:172</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/webgpu.ts#L172">webgpu.ts:172</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -238,7 +238,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/webgpu.ts#L67">webgpu.ts:67</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/webgpu.ts#L67">webgpu.ts:67</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/enums/argtypecode.html b/docs/reference/api/typedoc/enums/argtypecode.html
index 65c96ac66d..1e906ffe3b 100644
--- a/docs/reference/api/typedoc/enums/argtypecode.html
+++ b/docs/reference/api/typedoc/enums/argtypecode.html
@@ -106,7 +106,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLDevice<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 6</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L242">ctypes.ts:242</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L242">ctypes.ts:242</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -116,7 +116,7 @@
 					<div class="tsd-signature tsd-kind-icon">Float<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L238">ctypes.ts:238</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L238">ctypes.ts:238</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -126,7 +126,7 @@
 					<div class="tsd-signature tsd-kind-icon">Int<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L236">ctypes.ts:236</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L236">ctypes.ts:236</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -136,7 +136,7 @@
 					<div class="tsd-signature tsd-kind-icon">Null<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L240">ctypes.ts:240</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L240">ctypes.ts:240</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -146,7 +146,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMBytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 12</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L248">ctypes.ts:248</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L248">ctypes.ts:248</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -156,7 +156,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMDLTensor<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 7</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L243">ctypes.ts:243</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L243">ctypes.ts:243</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -166,7 +166,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMData<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 5</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L241">ctypes.ts:241</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L241">ctypes.ts:241</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMModule<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 9</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L245">ctypes.ts:245</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L245">ctypes.ts:245</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -186,7 +186,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMNDArray<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 13</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L249">ctypes.ts:249</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L249">ctypes.ts:249</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -196,7 +196,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMObject<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L244">ctypes.ts:244</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L244">ctypes.ts:244</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -206,7 +206,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMObjectRValue<wbr>Ref<wbr>Arg<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 14</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L250">ctypes.ts:250</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L250">ctypes.ts:250</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -216,7 +216,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMOpaque<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 3</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L239">ctypes.ts:239</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L239">ctypes.ts:239</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -226,7 +226,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMPacked<wbr>Func<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 10</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L246">ctypes.ts:246</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L246">ctypes.ts:246</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -236,7 +236,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 11</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L247">ctypes.ts:247</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L247">ctypes.ts:247</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -246,7 +246,7 @@
 					<div class="tsd-signature tsd-kind-icon">UInt<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L237">ctypes.ts:237</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L237">ctypes.ts:237</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/aynccallbackcode.html b/docs/reference/api/typedoc/enums/aynccallbackcode.html
index 93485f16d9..132eafe075 100644
--- a/docs/reference/api/typedoc/enums/aynccallbackcode.html
+++ b/docs/reference/api/typedoc/enums/aynccallbackcode.html
@@ -93,7 +93,7 @@
 					<div class="tsd-signature tsd-kind-icon">k<wbr>Exception<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 5</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L812">runtime.ts:812</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L812">runtime.ts:812</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -103,7 +103,7 @@
 					<div class="tsd-signature tsd-kind-icon">k<wbr>Return<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L811">runtime.ts:811</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L811">runtime.ts:811</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/dldatatypecode.html b/docs/reference/api/typedoc/enums/dldatatypecode.html
index fd669c2207..0ab4e6e875 100644
--- a/docs/reference/api/typedoc/enums/dldatatypecode.html
+++ b/docs/reference/api/typedoc/enums/dldatatypecode.html
@@ -95,7 +95,7 @@
 					<div class="tsd-signature tsd-kind-icon">Float<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L339">runtime.ts:339</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L339">runtime.ts:339</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -105,7 +105,7 @@
 					<div class="tsd-signature tsd-kind-icon">Int<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L337">runtime.ts:337</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L337">runtime.ts:337</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -115,7 +115,7 @@
 					<div class="tsd-signature tsd-kind-icon">Opaque<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 3</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L340">runtime.ts:340</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L340">runtime.ts:340</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -125,7 +125,7 @@
 					<div class="tsd-signature tsd-kind-icon">UInt<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L338">runtime.ts:338</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L338">runtime.ts:338</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/rpcserverstate.html b/docs/reference/api/typedoc/enums/rpcserverstate.html
index 222d0977fa..ecd32f03b0 100644
--- a/docs/reference/api/typedoc/enums/rpcserverstate.html
+++ b/docs/reference/api/typedoc/enums/rpcserverstate.html
@@ -90,7 +90,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Header<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L29">rpc_server.ts:29</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L29">rpc_server.ts:29</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -100,7 +100,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Header<wbr>Key<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L30">rpc_server.ts:30</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L30">rpc_server.ts:30</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -110,7 +110,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Server<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L31">rpc_server.ts:31</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L31">rpc_server.ts:31</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -120,7 +120,7 @@
 					<div class="tsd-signature tsd-kind-icon">Receive<wbr>Packet<wbr>Body<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L34">rpc_server.ts:34</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L34">rpc_server.ts:34</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -130,7 +130,7 @@
 					<div class="tsd-signature tsd-kind-icon">Receive<wbr>Packet<wbr>Header<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L33">rpc_server.ts:33</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L33">rpc_server.ts:33</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -140,7 +140,7 @@
 					<div class="tsd-signature tsd-kind-icon">Wait<wbr>For<wbr>Callback<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L32">rpc_server.ts:32</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L32">rpc_server.ts:32</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/sizeof.html b/docs/reference/api/typedoc/enums/sizeof.html
index f77df53e9a..ad2fb587f4 100644
--- a/docs/reference/api/typedoc/enums/sizeof.html
+++ b/docs/reference/api/typedoc/enums/sizeof.html
@@ -100,7 +100,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLData<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = I32</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L228">ctypes.ts:228</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L228">ctypes.ts:228</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -110,7 +110,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLDevice<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = I32 + I32</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L229">ctypes.ts:229</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L229">ctypes.ts:229</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -120,7 +120,7 @@
 					<div class="tsd-signature tsd-kind-icon">F32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L225">ctypes.ts:225</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L225">ctypes.ts:225</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -130,7 +130,7 @@
 					<div class="tsd-signature tsd-kind-icon">F64<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L226">ctypes.ts:226</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L226">ctypes.ts:226</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -140,7 +140,7 @@
 					<div class="tsd-signature tsd-kind-icon">I32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L223">ctypes.ts:223</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L223">ctypes.ts:223</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -150,7 +150,7 @@
 					<div class="tsd-signature tsd-kind-icon">I64<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L224">ctypes.ts:224</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L224">ctypes.ts:224</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -160,7 +160,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMValue<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L227">ctypes.ts:227</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L227">ctypes.ts:227</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -170,7 +170,7 @@
 					<div class="tsd-signature tsd-kind-icon">U16<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L222">ctypes.ts:222</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L222">ctypes.ts:222</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -180,7 +180,7 @@
 					<div class="tsd-signature tsd-kind-icon">U8<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L221">ctypes.ts:221</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L221">ctypes.ts:221</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/index.html b/docs/reference/api/typedoc/index.html
index d09f9baead..affd834428 100644
--- a/docs/reference/api/typedoc/index.html
+++ b/docs/reference/api/typedoc/index.html
@@ -182,7 +182,7 @@
 					<div class="tsd-signature tsd-kind-icon">FObject<wbr>Constructor<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, lib<span class="tsd-signature-symbol">: </span><a href="classes/ffilibrary.html" class="tsd-signature-type">FFILibrary</a>, ctx<span class="tsd-signature-symbol">: </span><a href="classes/runtimecontext.html" class="t [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L778">runtime.ts:778</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L778">runtime.ts:778</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -224,7 +224,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Alloc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>shape<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, ndim<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, dtypeCode<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, dtypeBits<span class="tsd [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L113">ctypes.ts:113</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L113">ctypes.ts:113</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -288,7 +288,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>From<wbr>Bytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, data<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nbytes<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">num [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L129">ctypes.ts:129</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L129">ctypes.ts:129</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -332,7 +332,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>From<wbr>To<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>from<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, to<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, stream<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-sig [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L145">ctypes.ts:145</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L145">ctypes.ts:145</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -376,7 +376,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>ToBytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, data<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nbytes<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</sp [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L137">ctypes.ts:137</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L137">ctypes.ts:137</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -420,7 +420,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L122">ctypes.ts:122</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L122">ctypes.ts:122</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -456,7 +456,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMBackend<wbr>PackedCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>argValues<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, argCodes<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nargs<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number< [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L161">ctypes.ts:161</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L161">ctypes.ts:161</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -508,7 +508,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMCFunc<wbr>Set<wbr>Return<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>ret<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, value<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCode<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signa [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L78">ctypes.ts:78</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L78">ctypes.ts:78</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -556,7 +556,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMCb<wbr>Arg<wbr>ToReturn<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>value<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, code<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span c [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L84">ctypes.ts:84</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L84">ctypes.ts:84</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -595,7 +595,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Call<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>func<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, argValues<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCode<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-t [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L68">ctypes.ts:68</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L68">ctypes.ts:68</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -651,7 +651,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>func<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L58">ctypes.ts:58</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L58">ctypes.ts:58</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -687,7 +687,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Get<wbr>Global<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>name<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span cla [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L101">ctypes.ts:101</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L101">ctypes.ts:101</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -726,7 +726,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>List<wbr>Global<wbr>Names<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>outSize<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, outArray<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&g [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L89">ctypes.ts:89</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L89">ctypes.ts:89</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -765,7 +765,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Register<wbr>Global<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>name<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, f<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, override<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</spa [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L95">ctypes.ts:95</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L95">ctypes.ts:95</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -808,7 +808,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMGet<wbr>Last<wbr>Error<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L34">ctypes.ts:34</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L34">ctypes.ts:34</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -838,7 +838,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L53">ctypes.ts:53</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L53">ctypes.ts:53</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -874,7 +874,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Get<wbr>Function<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, funcName<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, queryImports<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">numbe [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L42">ctypes.ts:42</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L42">ctypes.ts:42</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -922,7 +922,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Import<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, dep<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-si [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L48">ctypes.ts:48</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L48">ctypes.ts:48</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -962,7 +962,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>obj<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L169">ctypes.ts:169</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L169">ctypes.ts:169</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -998,7 +998,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Get<wbr>Type<wbr>Index<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>obj<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out_tindex<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt;  [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L174">ctypes.ts:174</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L174">ctypes.ts:174</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1037,7 +1037,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Type<wbr>Index2<wbr>Key<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>type_index<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, out_type_key<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><spa [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L179">ctypes.ts:179</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L179">ctypes.ts:179</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1076,7 +1076,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Type<wbr>Key2<wbr>Index<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>type_key<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out_tindex<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol">  [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L184">ctypes.ts:184</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L184">ctypes.ts:184</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1115,7 +1115,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMSynchronize<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>deviceType<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, deviceId<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, stream<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signatur [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L151">ctypes.ts:151</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L151">ctypes.ts:151</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1157,7 +1157,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Alloc<wbr>Space<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>size<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L189">ctypes.ts:189</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L189">ctypes.ts:189</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1193,7 +1193,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Free<wbr>Space<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>ptr<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L192">ctypes.ts:192</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L192">ctypes.ts:192</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1229,7 +1229,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Func<wbr>Create<wbr>FromCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>resource<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&g [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L209">ctypes.ts:209</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L209">ctypes.ts:209</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1269,7 +1269,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>PackedCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>args<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCodes<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nargs<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L201">ctypes.ts:201</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L201">ctypes.ts:201</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1321,7 +1321,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>PackedCFunc<wbr>Finalizer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>resourceHandle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L215">ctypes.ts:215</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L215">ctypes.ts:215</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1357,7 +1357,7 @@
 					<div class="tsd-signature tsd-kind-icon">GPUPointer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/webgpu.ts#L25">webgpu.ts:25</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/webgpu.ts#L25">webgpu.ts:25</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1372,7 +1372,7 @@
 					<div class="tsd-signature tsd-kind-icon">Packed<wbr>Func<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">...</span>args<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol"> &amp; </span><a href="interfaces/disp [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L37">runtime.ts:37</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L37">runtime.ts:37</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1387,7 +1387,7 @@
 					<div class="tsd-signature tsd-kind-icon">Pointer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L25">ctypes.ts:25</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L25">ctypes.ts:25</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1402,7 +1402,7 @@
 					<div class="tsd-signature tsd-kind-icon">Ptr<wbr>Offset<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/ctypes.ts#L28">ctypes.ts:28</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/ctypes.ts#L28">ctypes.ts:28</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1417,7 +1417,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMObject<wbr>Base<span class="tsd-signature-symbol">:</span> <a href="classes/tvmobject.html" class="tsd-signature-type">TVMObject</a><span class="tsd-signature-symbol"> | </span><a href="classes/ndarray.html" class="tsd-signature-type">NDArray</a><span class="tsd-signature-symbol"> | </span><a href="classes/module.html" class="tsd-signature-type">Module</a><span class="tsd-signature-symbol"> | </span><a href="index.html#packedfunc" class="t [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L781">runtime.ts:781</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L781">runtime.ts:781</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1435,7 +1435,7 @@
 					<div class="tsd-signature tsd-kind-icon">RPC_<wbr>MAGIC<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">1045105</span><span class="tsd-signature-symbol"> = 1045105</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/rpc_server.ts#L38">rpc_server.ts:38</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/rpc_server.ts#L38">rpc_server.ts:38</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1457,7 +1457,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/support.ts#L25">support.ts:25</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/support.ts#L25">support.ts:25</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1489,7 +1489,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/support.ts#L39">support.ts:39</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/support.ts#L39">support.ts:39</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1518,7 +1518,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/support.ts#L52">support.ts:52</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/support.ts#L52">support.ts:52</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1555,7 +1555,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/compact.ts#L38">compact.ts:38</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/compact.ts#L38">compact.ts:38</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1586,7 +1586,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/webgpu.ts#L30">webgpu.ts:30</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/webgpu.ts#L30">webgpu.ts:30</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1608,7 +1608,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/environment.ts#L32">environment.ts:32</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/environment.ts#L32">environment.ts:32</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1639,7 +1639,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/compact.ts#L24">compact.ts:24</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/compact.ts#L24">compact.ts:24</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1661,7 +1661,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L1749">runtime.ts:1749</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L1749">runtime.ts:1749</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1726,7 +1726,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/support.ts#L62">support.ts:62</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/support.ts#L62">support.ts:62</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1748,7 +1748,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLData<wbr>Type<wbr>Code<wbr>ToStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L343">runtime.ts:343</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L343">runtime.ts:343</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1757,7 +1757,7 @@
 						<div class="tsd-signature tsd-kind-icon">0<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;int&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L344">runtime.ts:344</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L344">runtime.ts:344</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1767,7 +1767,7 @@
 						<div class="tsd-signature tsd-kind-icon">1<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;uint&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L345">runtime.ts:345</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L345">runtime.ts:345</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1777,7 +1777,7 @@
 						<div class="tsd-signature tsd-kind-icon">2<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;float&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L346">runtime.ts:346</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L346">runtime.ts:346</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1787,7 +1787,7 @@
 						<div class="tsd-signature tsd-kind-icon">3<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;handle&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L347">runtime.ts:347</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L347">runtime.ts:347</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1798,7 +1798,7 @@
 					<div class="tsd-signature tsd-kind-icon">Device<wbr>Enum<wbr>ToStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L272">runtime.ts:272</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L272">runtime.ts:272</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1807,7 +1807,7 @@
 						<div class="tsd-signature tsd-kind-icon">1<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;cpu&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L273">runtime.ts:273</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L273">runtime.ts:273</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1817,7 +1817,7 @@
 						<div class="tsd-signature tsd-kind-icon">15<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;webgpu&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L277">runtime.ts:277</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L277">runtime.ts:277</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1827,7 +1827,7 @@
 						<div class="tsd-signature tsd-kind-icon">2<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;cuda&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L274">runtime.ts:274</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L274">runtime.ts:274</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1837,7 +1837,7 @@
 						<div class="tsd-signature tsd-kind-icon">4<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;opencl&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L275">runtime.ts:275</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L275">runtime.ts:275</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1847,7 +1847,7 @@
 						<div class="tsd-signature tsd-kind-icon">8<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;metal&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L276">runtime.ts:276</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L276">runtime.ts:276</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1858,7 +1858,7 @@
 					<div class="tsd-signature tsd-kind-icon">Device<wbr>Str<wbr>ToEnum<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L280">runtime.ts:280</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L280">runtime.ts:280</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1867,7 +1867,7 @@
 						<div class="tsd-signature tsd-kind-icon">cl<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 4</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L283">runtime.ts:283</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L283">runtime.ts:283</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1877,7 +1877,7 @@
 						<div class="tsd-signature tsd-kind-icon">cpu<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 1</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L281">runtime.ts:281</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L281">runtime.ts:281</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1887,7 +1887,7 @@
 						<div class="tsd-signature tsd-kind-icon">cuda<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 2</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L282">runtime.ts:282</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L282">runtime.ts:282</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1897,7 +1897,7 @@
 						<div class="tsd-signature tsd-kind-icon">metal<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 8</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L286">runtime.ts:286</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L286">runtime.ts:286</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1907,7 +1907,7 @@
 						<div class="tsd-signature tsd-kind-icon">opencl<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 4</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L284">runtime.ts:284</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L284">runtime.ts:284</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1917,7 +1917,7 @@
 						<div class="tsd-signature tsd-kind-icon">vulkan<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 7</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L285">runtime.ts:285</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L285">runtime.ts:285</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1927,7 +1927,7 @@
 						<div class="tsd-signature tsd-kind-icon">webgpu<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 15</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/runtime.ts#L287">runtime.ts:287</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/runtime.ts#L287">runtime.ts:287</a></li>
 							</ul>
 						</aside>
 					</section>
diff --git a/docs/reference/api/typedoc/interfaces/disposable.html b/docs/reference/api/typedoc/interfaces/disposable.html
index f95e1e2954..d48a7328e0 100644
--- a/docs/reference/api/typedoc/interfaces/disposable.html
+++ b/docs/reference/api/typedoc/interfaces/disposable.html
@@ -115,7 +115,7 @@
 					<div class="tsd-signature tsd-kind-icon">dispose<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/types.ts#L52">types.ts:52</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/types.ts#L52">types.ts:52</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/interfaces/functioninfo.html b/docs/reference/api/typedoc/interfaces/functioninfo.html
index babe76001b..941e9f88f4 100644
--- a/docs/reference/api/typedoc/interfaces/functioninfo.html
+++ b/docs/reference/api/typedoc/interfaces/functioninfo.html
@@ -95,7 +95,7 @@
 					<div class="tsd-signature tsd-kind-icon">arg_<wbr>types<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/webgpu.ts#L41">webgpu.ts:41</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/webgpu.ts#L41">webgpu.ts:41</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -105,7 +105,7 @@
 					<div class="tsd-signature tsd-kind-icon">launch_<wbr>param_<wbr>tags<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/webgpu.ts#L42">webgpu.ts:42</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/webgpu.ts#L42">webgpu.ts:42</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -115,7 +115,7 @@
 					<div class="tsd-signature tsd-kind-icon">name<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/webgpu.ts#L40">webgpu.ts:40</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/webgpu.ts#L40">webgpu.ts:40</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/interfaces/libraryprovider.html b/docs/reference/api/typedoc/interfaces/libraryprovider.html
index dffeeb6922..54631be4a7 100644
--- a/docs/reference/api/typedoc/interfaces/libraryprovider.html
+++ b/docs/reference/api/typedoc/interfaces/libraryprovider.html
@@ -112,7 +112,7 @@
 					<div class="tsd-signature tsd-kind-icon">imports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/types.ts#L34">types.ts:34</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/types.ts#L34">types.ts:34</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -127,7 +127,7 @@
 					<div class="tsd-signature tsd-kind-icon">start<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>inst<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">Instance</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/7ad71e622/web/src/types.ts#L39">types.ts:39</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/6c63e0db5/web/src/types.ts#L39">types.ts:39</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/searchindex.js b/docs/searchindex.js
index 3c5575f728..8af368c23b 100644
--- a/docs/searchindex.js
+++ b/docs/searchindex.js
@@ -1 +1 @@
-Search.setIndex({docnames:["arch/benchmark","arch/convert_layout","arch/debugger","arch/device_target_interactions","arch/frontend/tensorflow","arch/hybrid_script","arch/index","arch/inferbound","arch/introduction_to_module_serialization","arch/microtvm_design","arch/microtvm_project_api","arch/model_library_format","arch/pass_infra","arch/relay_intro","arch/relay_op_strategy","arch/runtime","arch/runtimes/vulkan","arch/security","arch/virtual_machine","contribute/ci","contribute/code_gu [...]
\ No newline at end of file
+Search.setIndex({docnames:["arch/benchmark","arch/convert_layout","arch/debugger","arch/device_target_interactions","arch/frontend/tensorflow","arch/hybrid_script","arch/index","arch/inferbound","arch/introduction_to_module_serialization","arch/microtvm_design","arch/microtvm_project_api","arch/model_library_format","arch/pass_infra","arch/relay_intro","arch/relay_op_strategy","arch/runtime","arch/runtimes/vulkan","arch/security","arch/virtual_machine","contribute/ci","contribute/code_gu [...]
\ No newline at end of file
diff --git a/docs/topic/vta/tutorials/autotvm/sg_execution_times.html b/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
index a412b05b29..738bf27848 100644
--- a/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-autotvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:35.867</strong> total execution time for <strong>topic_vta_tutorials_autotvm</strong> files:</p>
+<p><strong>00:33.102</strong> total execution time for <strong>topic_vta_tutorials_autotvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 82%" />
@@ -359,11 +359,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_relay_vta.html#sphx-glr-topic-vta-tutorials-autotvm-tune-relay-vta-py"><span class="std std-ref">Auto-tuning a convolutional network on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_vta.py</span></code>)</p></td>
-<td><p>00:35.860</p></td>
+<td><p>00:33.095</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_alu_vta.html#sphx-glr-topic-vta-tutorials-autotvm-tune-alu-vta-py"><span class="std std-ref">Auto-tuning a ALU fused op on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_alu_vta.py</span></code>)</p></td>
-<td><p>00:00.008</p></td>
+<td><p>00:00.007</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/topic/vta/tutorials/frontend/deploy_classification.html b/docs/topic/vta/tutorials/frontend/deploy_classification.html
index 77fe611ecd..6545a9c553 100644
--- a/docs/topic/vta/tutorials/frontend/deploy_classification.html
+++ b/docs/topic/vta/tutorials/frontend/deploy_classification.html
@@ -593,7 +593,7 @@ and dense layer which will both be executed in fp32 on the CPU.</p></li>
   warnings.warn(
 /workspace/vta/tutorials/frontend/deploy_classification.py:212: DeprecationWarning: legacy graph executor behavior of producing json / lib / params will be removed in the next release. Please see documents of tvm.contrib.graph_executor.GraphModule for the  new recommended usage.
   graph, lib, params = relay.build(
-resnet18_v1 inference graph built in 38.41s!
+resnet18_v1 inference graph built in 35.27s!
 </pre></div>
 </div>
 </div>
@@ -690,7 +690,7 @@ resnet18_v1 prediction for sample 0
         #5: weasel
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  5.801 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  1.619 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-topic-vta-tutorials-frontend-deploy-classification-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../../../_downloads/9e8de33a5822b31748bfd76861009f92/deploy_classification.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_classification.py</span></code></a></p>
diff --git a/docs/topic/vta/tutorials/frontend/deploy_detection.html b/docs/topic/vta/tutorials/frontend/deploy_detection.html
index 005623bfd1..6f8f695186 100644
--- a/docs/topic/vta/tutorials/frontend/deploy_detection.html
+++ b/docs/topic/vta/tutorials/frontend/deploy_detection.html
@@ -611,7 +611,7 @@ and dense layer which will both be executed in fp32 on the CPU.</p></li>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/workspace/python/tvm/relay/build_module.py:345: DeprecationWarning: Please use input parameter mod (tvm.IRModule) instead of deprecated parameter mod (tvm.relay.function.Function)
   warnings.warn(
-yolov3-tiny inference graph built in 26.40s!
+yolov3-tiny inference graph built in 24.40s!
 </pre></div>
 </div>
 </div>
@@ -696,7 +696,7 @@ Download test image</p>
         alu_counter     :           849056
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  9.964 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  6.755 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-topic-vta-tutorials-frontend-deploy-detection-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../../../_downloads/65b9451c8de050d7cd9da2fe5a49acc6/deploy_detection.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_detection.py</span></code></a></p>
diff --git a/docs/topic/vta/tutorials/frontend/sg_execution_times.html b/docs/topic/vta/tutorials/frontend/sg_execution_times.html
index 029403a98a..20952b031e 100644
--- a/docs/topic/vta/tutorials/frontend/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/frontend/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-frontend-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>02:15.766</strong> total execution time for <strong>topic_vta_tutorials_frontend</strong> files:</p>
+<p><strong>02:08.373</strong> total execution time for <strong>topic_vta_tutorials_frontend</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,11 +359,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_detection.html#sphx-glr-topic-vta-tutorials-frontend-deploy-detection-py"><span class="std std-ref">Deploy Pretrained Vision Detection Model from Darknet on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_detection.py</span></code>)</p></td>
-<td><p>01:09.964</p></td>
+<td><p>01:06.755</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_classification.html#sphx-glr-topic-vta-tutorials-frontend-deploy-classification-py"><span class="std std-ref">Deploy Pretrained Vision Model from MxNet on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_classification.py</span></code>)</p></td>
-<td><p>01:05.801</p></td>
+<td><p>01:01.619</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/topic/vta/tutorials/optimize/sg_execution_times.html b/docs/topic/vta/tutorials/optimize/sg_execution_times.html
index a2e3a042ab..be4d7b7ef2 100644
--- a/docs/topic/vta/tutorials/optimize/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/optimize/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-optimize-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:03.483</strong> total execution time for <strong>topic_vta_tutorials_optimize</strong> files:</p>
+<p><strong>00:03.300</strong> total execution time for <strong>topic_vta_tutorials_optimize</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,11 +359,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="convolution_opt.html#sphx-glr-topic-vta-tutorials-optimize-convolution-opt-py"><span class="std std-ref">2D Convolution Optimization</span></a> (<code class="docutils literal notranslate"><span class="pre">convolution_opt.py</span></code>)</p></td>
-<td><p>00:02.908</p></td>
+<td><p>00:02.798</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="matrix_multiply_opt.html#sphx-glr-topic-vta-tutorials-optimize-matrix-multiply-opt-py"><span class="std std-ref">Matrix Multiply Blocking</span></a> (<code class="docutils literal notranslate"><span class="pre">matrix_multiply_opt.py</span></code>)</p></td>
-<td><p>00:00.576</p></td>
+<td><p>00:00.502</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/topic/vta/tutorials/sg_execution_times.html b/docs/topic/vta/tutorials/sg_execution_times.html
index 91bff7d087..64b09801db 100644
--- a/docs/topic/vta/tutorials/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:00.992</strong> total execution time for <strong>topic_vta_tutorials</strong> files:</p>
+<p><strong>00:00.825</strong> total execution time for <strong>topic_vta_tutorials</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 81%" />
@@ -359,11 +359,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="matrix_multiply.html#sphx-glr-topic-vta-tutorials-matrix-multiply-py"><span class="std std-ref">Simple Matrix Multiply</span></a> (<code class="docutils literal notranslate"><span class="pre">matrix_multiply.py</span></code>)</p></td>
-<td><p>00:00.508</p></td>
+<td><p>00:00.422</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="vta_get_started.html#sphx-glr-topic-vta-tutorials-vta-get-started-py"><span class="std std-ref">Get Started with VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">vta_get_started.py</span></code>)</p></td>
-<td><p>00:00.484</p></td>
+<td><p>00:00.403</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/tutorial/auto_scheduler_matmul_x86.html b/docs/tutorial/auto_scheduler_matmul_x86.html
index b818c0ba3b..ce5f005bd0 100644
--- a/docs/tutorial/auto_scheduler_matmul_x86.html
+++ b/docs/tutorial/auto_scheduler_matmul_x86.html
@@ -579,7 +579,7 @@ class Module:
 <span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 93.868 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 93.718 ms
 </pre></div>
 </div>
 </div>
@@ -651,7 +651,7 @@ automatically optimize a matrix multiplication, without the need to specify a
 search template.  It ends a series of examples that starts from the Tensor
 Expression (TE) language that demonstrates how TVM can optimize computational
 operations.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  14.973 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  30.142 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorial-auto-scheduler-matmul-x86-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../_downloads/eac4389b114db015e95cb3cdf8b86b83/auto_scheduler_matmul_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">auto_scheduler_matmul_x86.py</span></code></a></p>
diff --git a/docs/tutorial/autotvm_matmul_x86.html b/docs/tutorial/autotvm_matmul_x86.html
index 46996652ea..8665cd475a 100644
--- a/docs/tutorial/autotvm_matmul_x86.html
+++ b/docs/tutorial/autotvm_matmul_x86.html
@@ -690,16 +690,16 @@ reduce variance, we take 5 measurements and average them.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>waiting for device...
 device available
 Get devices for measurement successfully!
-No: 1   GFLOPS: 1.35/1.35       result: MeasureResult(costs=(0.19876891800000002,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.439051389694214, timestamp=1689605227.7057743) [(&#39;tile_y&#39;, [-1, 2]), (&#39;tile_x&#39;, [-1, 1])],None,1
-No: 2   GFLOPS: 1.02/1.35       result: MeasureResult(costs=(0.2622365216,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.434575080871582, timestamp=1689605232.1642838)        [(&#39;tile_y&#39;, [-1, 256]), (&#39;tile_x&#39;, [-1, 2])],None,18
-No: 3   GFLOPS: 3.51/3.51       result: MeasureResult(costs=(0.0764074588,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.487936019897461, timestamp=1689605233.6547983)        [(&#39;tile_y&#39;, [-1, 512]), (&#39;tile_x&#39;, [-1, 8])],None,39
-No: 4   GFLOPS: 10.07/10.07     result: MeasureResult(costs=(0.026649061799999995,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6629624366760254, timestamp=1689605234.3535738)       [(&#39;tile_y&#39;, [-1, 512]), (&#39;tile_x&#39;, [-1, 512])],None,99
-No: 5   GFLOPS: 2.03/10.07      result: MeasureResult(costs=(0.13247077440000002,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.3836374282836914, timestamp=1689605236.8544886)        [(&#39;tile_y&#39;, [-1, 256]), (&#39;tile_x&#39;, [-1, 4])],None,28
-No: 6   GFLOPS: 11.08/11.08     result: MeasureResult(costs=(0.024221291,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6559348106384277, timestamp=1689605237.5138962)        [(&#39;tile_y&#39;, [-1, 4]), (&#39;tile_x&#39;, [-1, 512])],None,92
-No: 7   GFLOPS: 11.15/11.15     result: MeasureResult(costs=(0.0240818594,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6616318225860596, timestamp=1689605238.17214) [(&#39;tile_y&#39;, [-1, 16]), (&#39;tile_x&#39;, [-1, 256])],None,84
-No: 8   GFLOPS: 11.26/11.26     result: MeasureResult(costs=(0.02384155,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6424837112426758, timestamp=1689605238.8267894) [(&#39;tile_y&#39;, [-1, 32]), (&#39;tile_x&#39;, [-1, 512])],None,95
-No: 9   GFLOPS: 8.18/11.26      result: MeasureResult(costs=(0.0328242486,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7511351108551025, timestamp=1689605239.705403)        [(&#39;tile_y&#39;, [-1, 1]), (&#39;tile_x&#39;, [-1, 8])],None,30
-No: 10  GFLOPS: 11.64/11.64     result: MeasureResult(costs=(0.023060854,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6159284114837646, timestamp=1689605240.3469317)        [(&#39;tile_y&#39;, [-1, 64]), (&#39;tile_x&#39;, [-1, 512])],None,96
+No: 1   GFLOPS: 10.49/10.49     result: MeasureResult(costs=(0.0255916892,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7722578048706055, timestamp=1689639231.3839726)       [(&#39;tile_y&#39;, [-1, 8]), (&#39;tile_x&#39;, [-1, 128])],None,73
+No: 2   GFLOPS: 12.78/12.78     result: MeasureResult(costs=(0.0210088066,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.5968716144561768, timestamp=1689639231.9832437)       [(&#39;tile_y&#39;, [-1, 4]), (&#39;tile_x&#39;, [-1, 512])],None,92
+No: 3   GFLOPS: 9.28/12.78      result: MeasureResult(costs=(0.028933529399999998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.735072135925293, timestamp=1689639232.698494) [(&#39;tile_y&#39;, [-1, 256]), (&#39;tile_x&#39;, [-1, 16])],None,48
+No: 4   GFLOPS: 3.68/12.78      result: MeasureResult(costs=(0.0728832788,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.423401117324829, timestamp=1689639234.127702) [(&#39;tile_y&#39;, [-1, 4]), (&#39;tile_x&#39;, [-1, 2])],None,12
+No: 5   GFLOPS: 12.70/12.78     result: MeasureResult(costs=(0.0211333772,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.608086109161377, timestamp=1689639234.908141) [(&#39;tile_y&#39;, [-1, 8]), (&#39;tile_x&#39;, [-1, 512])],None,93
+No: 6   GFLOPS: 8.92/12.78      result: MeasureResult(costs=(0.0300933974,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7456381320953369, timestamp=1689639235.6465182)       [(&#39;tile_y&#39;, [-1, 8]), (&#39;tile_x&#39;, [-1, 8])],None,33
+No: 7   GFLOPS: 14.86/14.86     result: MeasureResult(costs=(0.018067073,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.60986328125, timestamp=1689639236.1922264)     [(&#39;tile_y&#39;, [-1, 32]), (&#39;tile_x&#39;, [-1, 64])],None,65
+No: 8   GFLOPS: 10.82/14.86     result: MeasureResult(costs=(0.024803384,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6664876937866211, timestamp=1689639236.8437579)        [(&#39;tile_y&#39;, [-1, 8]), (&#39;tile_x&#39;, [-1, 16])],None,43
+No: 9   GFLOPS: 10.64/14.86     result: MeasureResult(costs=(0.025232743600000003,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.664832592010498, timestamp=1689639237.6277812)        [(&#39;tile_y&#39;, [-1, 4]), (&#39;tile_x&#39;, [-1, 64])],None,62
+No: 10  GFLOPS: 11.70/14.86     result: MeasureResult(costs=(0.0229420726,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6484942436218262, timestamp=1689639238.2583177)       [(&#39;tile_y&#39;, [-1, 4]), (&#39;tile_x&#39;, [-1, 128])],None,72
 </pre></div>
 </div>
 <p>With tuning completed, we can choose the configuration from the log file that
diff --git a/docs/tutorial/autotvm_relay_x86.html b/docs/tutorial/autotvm_relay_x86.html
index 41198ab54f..7418c6f248 100644
--- a/docs/tutorial/autotvm_relay_x86.html
+++ b/docs/tutorial/autotvm_relay_x86.html
@@ -568,7 +568,7 @@ standard deviation.</p>
 <span class="nb">print</span><span class="p">(</span><a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">unoptimized</span></a><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>{&#39;mean&#39;: 494.39252055002726, &#39;median&#39;: 494.9026871498063, &#39;std&#39;: 3.1817584438598923}
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>{&#39;mean&#39;: 495.16211137008213, &#39;median&#39;: 494.3268438999439, &#39;std&#39;: 3.6706157178015775}
 </pre></div>
 </div>
 </div>
@@ -757,179 +757,179 @@ depending on the specifics of the model and the target platform.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  1/25]  Current/Best:    9.33/   9.33 GFLOPS | Progress: (4/20) | 9.75 s
-[Task  1/25]  Current/Best:   22.64/  22.64 GFLOPS | Progress: (8/20) | 11.85 s
-[Task  1/25]  Current/Best:   21.04/  22.64 GFLOPS | Progress: (12/20) | 15.05 s
-[Task  1/25]  Current/Best:   13.22/  22.86 GFLOPS | Progress: (16/20) | 19.87 s
-[Task  1/25]  Current/Best:   13.53/  22.86 GFLOPS | Progress: (20/20) | 24.25 s Done.
+[Task  1/25]  Current/Best:   18.26/  18.26 GFLOPS | Progress: (4/20) | 9.58 s
+[Task  1/25]  Current/Best:   12.89/  20.03 GFLOPS | Progress: (8/20) | 13.84 s
+[Task  1/25]  Current/Best:    8.29/  20.03 GFLOPS | Progress: (12/20) | 20.23 s
+[Task  1/25]  Current/Best:   19.75/  20.81 GFLOPS | Progress: (16/20) | 22.89 s
+[Task  1/25]  Current/Best:    3.34/  20.81 GFLOPS | Progress: (20/20) | 26.05 s Done.
 
 [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  2/25]  Current/Best:   11.97/  22.81 GFLOPS | Progress: (4/20) | 4.60 s
-[Task  2/25]  Current/Best:   20.74/  22.81 GFLOPS | Progress: (8/20) | 6.23 s
-[Task  2/25]  Current/Best:    6.50/  22.81 GFLOPS | Progress: (12/20) | 7.76 s
-[Task  2/25]  Current/Best:   19.69/  22.81 GFLOPS | Progress: (16/20) | 9.34 s
-[Task  2/25]  Current/Best:   15.12/  22.81 GFLOPS | Progress: (20/20) | 10.94 s Done.
+[Task  2/25]  Current/Best:    6.26/  13.66 GFLOPS | Progress: (4/20) | 4.37 s
+[Task  2/25]  Current/Best:   15.07/  15.07 GFLOPS | Progress: (8/20) | 5.95 s
+[Task  2/25]  Current/Best:   10.40/  18.29 GFLOPS | Progress: (12/20) | 7.56 s
+[Task  2/25]  Current/Best:   20.14/  20.30 GFLOPS | Progress: (16/20) | 9.33 s
+[Task  2/25]  Current/Best:    8.01/  20.30 GFLOPS | Progress: (20/20) | 11.09 s Done.
 
 [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  3/25]  Current/Best:   11.72/  16.61 GFLOPS | Progress: (4/20) | 5.91 s
-[Task  3/25]  Current/Best:   13.26/  16.61 GFLOPS | Progress: (8/20) | 9.01 s
-[Task  3/25]  Current/Best:   12.77/  23.16 GFLOPS | Progress: (12/20) | 11.21 s
-[Task  3/25]  Current/Best:   10.97/  23.16 GFLOPS | Progress: (16/20) | 13.46 s
-[Task  3/25]  Current/Best:   10.50/  23.16 GFLOPS | Progress: (20/20) | 16.17 s Done.
+[Task  3/25]  Current/Best:    6.21/  19.28 GFLOPS | Progress: (4/20) | 5.37 s
+[Task  3/25]  Current/Best:    6.94/  19.51 GFLOPS | Progress: (8/20) | 7.90 s
+[Task  3/25]  Current/Best:    6.07/  19.51 GFLOPS | Progress: (12/20) | 10.13 s
+[Task  3/25]  Current/Best:    6.40/  19.51 GFLOPS | Progress: (16/20) | 13.05 s
+[Task  3/25]  Current/Best:    1.62/  19.51 GFLOPS | Progress: (20/20) | 16.84 s Done.
 
 [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  4/25]  Current/Best:    4.56/  21.47 GFLOPS | Progress: (4/20) | 4.97 s
-[Task  4/25]  Current/Best:   14.29/  21.47 GFLOPS | Progress: (8/20) | 7.17 s
-[Task  4/25]  Current/Best:   10.49/  21.47 GFLOPS | Progress: (12/20) | 14.87 s
-[Task  4/25]  Current/Best:    7.89/  21.47 GFLOPS | Progress: (16/20) | 22.75 s
-[Task  4/25]  Current/Best:   13.84/  21.47 GFLOPS | Progress: (20/20) | 25.48 s Done.
+[Task  4/25]  Current/Best:   12.15/  14.46 GFLOPS | Progress: (4/20) | 5.08 s
+[Task  4/25]  Current/Best:   14.99/  15.29 GFLOPS | Progress: (8/20) | 7.76 s
+[Task  4/25]  Current/Best:   11.57/  15.75 GFLOPS | Progress: (12/20) | 10.22 s
+[Task  4/25]  Current/Best:    2.21/  15.75 GFLOPS | Progress: (16/20) | 13.33 s
+[Task  4/25]  Current/Best:   20.38/  20.38 GFLOPS | Progress: (20/20) | 16.49 s Done.
 
 [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  5/25]  Current/Best:   17.67/  17.67 GFLOPS | Progress: (4/20) | 5.15 s
-[Task  5/25]  Current/Best:   17.67/  19.15 GFLOPS | Progress: (8/20) | 7.22 s
-[Task  5/25]  Current/Best:   19.82/  19.82 GFLOPS | Progress: (12/20) | 9.25 s
-[Task  5/25]  Current/Best:   16.12/  19.82 GFLOPS | Progress: (16/20) | 11.31 s
-[Task  5/25]  Current/Best:   22.42/  22.42 GFLOPS | Progress: (20/20) | 13.76 s Done.
+[Task  5/25]  Current/Best:   16.66/  22.27 GFLOPS | Progress: (4/20) | 4.64 s
+[Task  5/25]  Current/Best:   12.07/  22.27 GFLOPS | Progress: (8/20) | 7.33 s
+[Task  5/25]  Current/Best:   12.35/  22.27 GFLOPS | Progress: (12/20) | 9.80 s
+[Task  5/25]  Current/Best:   13.87/  22.27 GFLOPS | Progress: (16/20) | 11.52 s
+[Task  5/25]  Current/Best:    5.90/  22.27 GFLOPS | Progress: (20/20) | 13.43 s Done.
 
 [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  6/25]  Current/Best:   18.39/  18.39 GFLOPS | Progress: (4/20) | 4.83 s
-[Task  6/25]  Current/Best:   10.81/  18.39 GFLOPS | Progress: (8/20) | 7.11 s
-[Task  6/25]  Current/Best:    4.83/  18.39 GFLOPS | Progress: (12/20) | 10.09 s
-[Task  6/25]  Current/Best:    3.23/  18.39 GFLOPS | Progress: (16/20) | 13.09 s
-[Task  6/25]  Current/Best:   12.04/  18.39 GFLOPS | Progress: (20/20) | 15.23 s Done.
+[Task  6/25]  Current/Best:    5.48/  16.09 GFLOPS | Progress: (4/20) | 5.20 s
+[Task  6/25]  Current/Best:    3.08/  16.09 GFLOPS | Progress: (8/20) | 8.20 s
+[Task  6/25]  Current/Best:   13.57/  16.09 GFLOPS | Progress: (12/20) | 10.90 s
+[Task  6/25]  Current/Best:   12.37/  22.47 GFLOPS | Progress: (16/20) | 13.16 s
+[Task  6/25]  Current/Best:    7.62/  22.47 GFLOPS | Progress: (20/20) | 16.26 s Done.
 
 [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  7/25]  Current/Best:   15.33/  18.33 GFLOPS | Progress: (4/20) | 5.07 s
-[Task  7/25]  Current/Best:    6.46/  18.33 GFLOPS | Progress: (8/20) | 7.79 s
-[Task  7/25]  Current/Best:   10.47/  19.43 GFLOPS | Progress: (12/20) | 10.04 s
-[Task  7/25]  Current/Best:   16.45/  19.43 GFLOPS | Progress: (16/20) | 12.21 s
-[Task  7/25]  Current/Best:    6.27/  19.43 GFLOPS | Progress: (20/20) | 14.93 s Done.
+[Task  7/25]  Current/Best:   21.68/  21.68 GFLOPS | Progress: (4/20) | 4.99 s
+[Task  7/25]  Current/Best:   12.09/  21.68 GFLOPS | Progress: (8/20) | 8.56 s
+[Task  7/25]  Current/Best:   21.28/  21.68 GFLOPS | Progress: (12/20) | 10.69 s
+[Task  7/25]  Current/Best:    6.25/  21.68 GFLOPS | Progress: (16/20) | 13.48 s
+[Task  7/25]  Current/Best:   11.21/  21.68 GFLOPS | Progress: (20/20) | 16.47 s Done.
 
 [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  8/25]  Current/Best:   15.52/  15.52 GFLOPS | Progress: (4/20) | 8.82 s
-[Task  8/25]  Current/Best:   12.56/  15.52 GFLOPS | Progress: (8/20) | 17.80 s
-[Task  8/25]  Current/Best:    4.82/  17.50 GFLOPS | Progress: (12/20) | 20.30 s
-[Task  8/25]  Current/Best:    6.91/  19.16 GFLOPS | Progress: (16/20) | 27.74 s
-[Task  8/25]  Current/Best:    2.88/  19.16 GFLOPS | Progress: (20/20) | 32.82 s Done.
+[Task  8/25]  Current/Best:   14.80/  19.32 GFLOPS | Progress: (4/20) | 5.06 s
+[Task  8/25]  Current/Best:   11.09/  19.32 GFLOPS | Progress: (8/20) | 16.59 s
+[Task  8/25]  Current/Best:   14.83/  19.32 GFLOPS | Progress: (12/20) | 20.71 s
+[Task  8/25]  Current/Best:    9.54/  19.32 GFLOPS | Progress: (16/20) | 32.18 s
+[Task  8/25]  Current/Best:   12.13/  19.32 GFLOPS | Progress: (20/20) | 36.70 s Done.
 
 [Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  9/25]  Current/Best:   14.47/  14.47 GFLOPS | Progress: (4/20) | 7.00 s
-[Task  9/25]  Current/Best:   15.08/  16.10 GFLOPS | Progress: (8/20) | 8.79 s
-[Task  9/25]  Current/Best:   17.67/  19.05 GFLOPS | Progress: (12/20) | 10.83 s
-[Task  9/25]  Current/Best:    6.10/  19.05 GFLOPS | Progress: (16/20) | 12.96 s
-[Task  9/25]  Current/Best:    8.57/  19.05 GFLOPS | Progress: (20/20) | 21.04 s Done.
+[Task  9/25]  Current/Best:   12.39/  12.39 GFLOPS | Progress: (4/20) | 5.22 s
+[Task  9/25]  Current/Best:   13.99/  20.10 GFLOPS | Progress: (8/20) | 7.84 s
+[Task  9/25]  Current/Best:    7.55/  20.10 GFLOPS | Progress: (12/20) | 16.25 s
+[Task  9/25]  Current/Best:   13.44/  20.10 GFLOPS | Progress: (16/20) | 23.48 s
+[Task  9/25]  Current/Best:    6.51/  20.10 GFLOPS | Progress: (20/20) | 29.93 s Done.
 
 [Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 10/25]  Current/Best:   15.17/  15.17 GFLOPS | Progress: (4/20) | 5.05 s
-[Task 10/25]  Current/Best:   12.42/  15.17 GFLOPS | Progress: (8/20) | 6.98 s
-[Task 10/25]  Current/Best:   14.69/  15.17 GFLOPS | Progress: (12/20) | 8.96 s
-[Task 10/25]  Current/Best:   15.60/  17.13 GFLOPS | Progress: (16/20) | 11.07 s
-[Task 10/25]  Current/Best:   18.09/  18.09 GFLOPS | Progress: (20/20) | 15.24 s Done.
+[Task 10/25]  Current/Best:   12.59/  14.82 GFLOPS | Progress: (4/20) | 5.15 s
+[Task 10/25]  Current/Best:   21.69/  21.69 GFLOPS | Progress: (8/20) | 7.09 s
+[Task 10/25]  Current/Best:    9.62/  21.69 GFLOPS | Progress: (12/20) | 8.78 s
+[Task 10/25]  Current/Best:   10.59/  21.69 GFLOPS | Progress: (16/20) | 11.37 s
+[Task 10/25]  Current/Best:   13.54/  21.69 GFLOPS | Progress: (20/20) | 14.12 s Done.
 
 [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 11/25]  Current/Best:   22.61/  22.61 GFLOPS | Progress: (4/20) | 4.93 s
-[Task 11/25]  Current/Best:   23.38/  23.38 GFLOPS | Progress: (8/20) | 7.15 s
-[Task 11/25]  Current/Best:   10.12/  23.38 GFLOPS | Progress: (12/20) | 11.18 s
-[Task 11/25]  Current/Best:   12.07/  23.38 GFLOPS | Progress: (16/20) | 13.93 s
-[Task 11/25]  Current/Best:   11.34/  23.38 GFLOPS | Progress: (20/20) | 16.82 s Done.
+[Task 11/25]  Current/Best:   12.43/  21.94 GFLOPS | Progress: (4/20) | 5.11 s
+[Task 11/25]  Current/Best:   16.99/  21.94 GFLOPS | Progress: (8/20) | 7.43 s
+[Task 11/25]  Current/Best:    6.20/  21.94 GFLOPS | Progress: (12/20) | 9.89 s
+[Task 11/25]  Current/Best:   18.33/  21.94 GFLOPS | Progress: (16/20) | 13.16 s
+[Task 11/25]  Current/Best:   12.09/  21.94 GFLOPS | Progress: (20/20) | 16.26 s Done.
 
 [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 12/25]  Current/Best:   16.24/  16.24 GFLOPS | Progress: (4/20) | 6.44 s
-[Task 12/25]  Current/Best:    9.74/  16.24 GFLOPS | Progress: (8/20) | 10.83 s
-[Task 12/25]  Current/Best:   14.39/  16.24 GFLOPS | Progress: (12/20) | 17.03 s
-[Task 12/25]  Current/Best:   17.01/  17.01 GFLOPS | Progress: (16/20) | 20.04 s
-[Task 12/25]  Current/Best:   16.35/  17.21 GFLOPS | Progress: (20/20) | 22.24 s Done.
+[Task 12/25]  Current/Best:    7.94/  15.88 GFLOPS | Progress: (4/20) | 7.39 s
+[Task 12/25]  Current/Best:   11.97/  18.16 GFLOPS | Progress: (8/20) | 10.17 s
+[Task 12/25]  Current/Best:   10.86/  20.00 GFLOPS | Progress: (12/20) | 13.67 s
+[Task 12/25]  Current/Best:    5.77/  20.00 GFLOPS | Progress: (16/20) | 16.38 s
+[Task 12/25]  Current/Best:   15.28/  20.00 GFLOPS | Progress: (20/20) | 18.51 s Done.
 
 [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 13/25]  Current/Best:   14.94/  18.24 GFLOPS | Progress: (4/20) | 5.48 s
-[Task 13/25]  Current/Best:    9.41/  18.24 GFLOPS | Progress: (8/20) | 8.78 s
-[Task 13/25]  Current/Best:   20.74/  20.74 GFLOPS | Progress: (12/20) | 12.99 s
-[Task 13/25]  Current/Best:    3.09/  20.74 GFLOPS | Progress: (16/20) | 16.21 s
-[Task 13/25]  Current/Best:   18.82/  20.74 GFLOPS | Progress: (20/20) | 21.02 s Done.
+[Task 13/25]  Current/Best:    9.90/   9.90 GFLOPS | Progress: (4/20) | 6.13 s
+[Task 13/25]  Current/Best:   19.52/  19.52 GFLOPS | Progress: (8/20) | 8.65 s
+[Task 13/25]  Current/Best:    6.43/  19.52 GFLOPS | Progress: (12/20) | 11.38 s
+[Task 13/25]  Current/Best:    8.74/  19.52 GFLOPS | Progress: (16/20) | 13.87 s
+[Task 13/25]  Current/Best:   10.83/  19.52 GFLOPS | Progress: (20/20) | 17.56 s Done.
 
 [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 14/25]  Current/Best:   10.27/  20.55 GFLOPS | Progress: (4/20) | 10.80 s
-[Task 14/25]  Current/Best:    7.69/  20.55 GFLOPS | Progress: (8/20) | 17.64 s
-[Task 14/25]  Current/Best:    9.37/  20.55 GFLOPS | Progress: (12/20) | 21.36 s
-[Task 14/25]  Current/Best:   14.93/  20.70 GFLOPS | Progress: (16/20) | 23.44 s
-[Task 14/25]  Current/Best:    7.81/  23.24 GFLOPS | Progress: (20/20) | 34.93 s
-[Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
-
-[Task 15/25]  Current/Best:    7.64/  22.67 GFLOPS | Progress: (4/20) | 5.75 s
-[Task 15/25]  Current/Best:    6.73/  22.67 GFLOPS | Progress: (8/20) | 16.90 s
-[Task 15/25]  Current/Best:    7.94/  22.67 GFLOPS | Progress: (12/20) | 26.86 s
-[Task 15/25]  Current/Best:    6.56/  22.67 GFLOPS | Progress: (16/20) | 29.17 s
-[Task 15/25]  Current/Best:   18.87/  22.67 GFLOPS | Progress: (20/20) | 32.40 s
+[Task 14/25]  Current/Best:    9.60/  10.07 GFLOPS | Progress: (4/20) | 7.23 s
+[Task 14/25]  Current/Best:   14.96/  18.92 GFLOPS | Progress: (8/20) | 9.51 s
+[Task 14/25]  Current/Best:   16.04/  18.92 GFLOPS | Progress: (12/20) | 15.40 s
+[Task 14/25]  Current/Best:   17.19/  18.92 GFLOPS | Progress: (16/20) | 18.72 s
+[Task 14/25]  Current/Best:   10.15/  18.92 GFLOPS | Progress: (20/20) | 24.54 s Done.
+
+[Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
+[Task 15/25]  Current/Best:   22.53/  22.53 GFLOPS | Progress: (4/20) | 13.77 s
+[Task 15/25]  Current/Best:   19.62/  22.53 GFLOPS | Progress: (8/20) | 18.92 s
+[Task 15/25]  Current/Best:   17.45/  22.53 GFLOPS | Progress: (12/20) | 20.52 s
+[Task 15/25]  Current/Best:   17.10/  22.53 GFLOPS | Progress: (16/20) | 22.21 s
+[Task 15/25]  Current/Best:    6.74/  22.53 GFLOPS | Progress: (20/20) | 32.13 s Done.
+
 [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 16/25]  Current/Best:   10.91/  18.52 GFLOPS | Progress: (4/20) | 5.47 s
-[Task 16/25]  Current/Best:    5.81/  18.52 GFLOPS | Progress: (8/20) | 7.72 s
-[Task 16/25]  Current/Best:    9.26/  19.24 GFLOPS | Progress: (12/20) | 11.32 s
-[Task 16/25]  Current/Best:   15.05/  19.24 GFLOPS | Progress: (16/20) | 13.29 s
-[Task 16/25]  Current/Best:   19.80/  19.80 GFLOPS | Progress: (20/20) | 16.48 s Done.
+[Task 16/25]  Current/Best:    6.33/  19.16 GFLOPS | Progress: (4/20) | 4.34 s
+[Task 16/25]  Current/Best:   11.85/  19.16 GFLOPS | Progress: (8/20) | 6.41 s
+[Task 16/25]  Current/Best:   18.87/  20.57 GFLOPS | Progress: (12/20) | 8.28 s
+[Task 16/25]  Current/Best:   15.38/  20.57 GFLOPS | Progress: (16/20) | 10.16 s
+[Task 16/25]  Current/Best:   15.90/  22.25 GFLOPS | Progress: (20/20) | 11.65 s Done.
 
 [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 17/25]  Current/Best:   23.04/  23.04 GFLOPS | Progress: (4/20) | 5.30 s
-[Task 17/25]  Current/Best:   12.25/  23.04 GFLOPS | Progress: (8/20) | 8.07 s
-[Task 17/25]  Current/Best:   11.49/  23.04 GFLOPS | Progress: (12/20) | 11.05 s
-[Task 17/25]  Current/Best:   20.25/  23.04 GFLOPS | Progress: (16/20) | 13.83 s
-[Task 17/25]  Current/Best:   14.45/  23.04 GFLOPS | Progress: (20/20) | 16.67 s Done.
+[Task 17/25]  Current/Best:    6.36/  24.28 GFLOPS | Progress: (4/20) | 5.32 s
+[Task 17/25]  Current/Best:    3.20/  24.28 GFLOPS | Progress: (8/20) | 10.38 s
+[Task 17/25]  Current/Best:   10.33/  24.28 GFLOPS | Progress: (12/20) | 13.34 s
+[Task 17/25]  Current/Best:   12.86/  24.28 GFLOPS | Progress: (16/20) | 16.00 s
+[Task 17/25]  Current/Best:   17.75/  24.28 GFLOPS | Progress: (20/20) | 18.94 s Done.
 
 [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 18/25]  Current/Best:    9.12/  14.76 GFLOPS | Progress: (4/20) | 6.71 s
-[Task 18/25]  Current/Best:   12.21/  14.76 GFLOPS | Progress: (8/20) | 11.50 s
-[Task 18/25]  Current/Best:   18.14/  18.14 GFLOPS | Progress: (12/20) | 13.93 s
-[Task 18/25]  Current/Best:   19.33/  19.33 GFLOPS | Progress: (16/20) | 18.08 s
-[Task 18/25]  Current/Best:   18.26/  19.33 GFLOPS | Progress: (20/20) | 21.25 s Done.
+[Task 18/25]  Current/Best:    9.46/  20.65 GFLOPS | Progress: (4/20) | 5.74 s
+[Task 18/25]  Current/Best:   16.09/  20.65 GFLOPS | Progress: (8/20) | 10.05 s
+[Task 18/25]  Current/Best:   18.44/  20.65 GFLOPS | Progress: (12/20) | 12.51 s
+[Task 18/25]  Current/Best:   16.63/  20.65 GFLOPS | Progress: (16/20) | 17.39 s
+[Task 18/25]  Current/Best:    5.47/  20.65 GFLOPS | Progress: (20/20) | 21.46 s Done.
 
 [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 19/25]  Current/Best:   16.04/  20.00 GFLOPS | Progress: (4/20) | 5.58 s
-[Task 19/25]  Current/Best:   13.24/  20.00 GFLOPS | Progress: (8/20) | 8.56 s
-[Task 19/25]  Current/Best:    8.88/  20.00 GFLOPS | Progress: (12/20) | 11.70 s
-[Task 19/25]  Current/Best:    3.08/  20.10 GFLOPS | Progress: (16/20) | 16.74 s
-[Task 19/25]  Current/Best:    7.42/  20.10 GFLOPS | Progress: (20/20) | 22.47 s Done.
+[Task 19/25]  Current/Best:   10.04/  20.05 GFLOPS | Progress: (4/20) | 7.36 s
+[Task 19/25]  Current/Best:   19.10/  22.25 GFLOPS | Progress: (8/20) | 10.50 s
+[Task 19/25]  Current/Best:   22.94/  23.06 GFLOPS | Progress: (12/20) | 13.62 s
+[Task 19/25]  Current/Best:    3.19/  23.06 GFLOPS | Progress: (16/20) | 19.90 s
+[Task 19/25]  Current/Best:    6.62/  23.06 GFLOPS | Progress: (20/20) | 24.44 s Done.
 
 [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 20/25]  Current/Best:   17.21/  17.21 GFLOPS | Progress: (4/20) | 7.27 s
-[Task 20/25]  Current/Best:    9.30/  17.21 GFLOPS | Progress: (8/20) | 10.26 s
-[Task 20/25]  Current/Best:    4.75/  19.82 GFLOPS | Progress: (12/20) | 22.60 s
-[Task 20/25]  Current/Best:   18.35/  19.82 GFLOPS | Progress: (16/20) | 27.21 s
-[Task 20/25]  Current/Best:   12.26/  19.82 GFLOPS | Progress: (20/20) | 30.28 s Done.
-
+[Task 20/25]  Current/Best:    5.18/  22.42 GFLOPS | Progress: (4/20) | 5.19 s
+[Task 20/25]  Current/Best:    6.87/  22.42 GFLOPS | Progress: (8/20) | 12.52 s
+[Task 20/25]  Current/Best:    8.11/  22.42 GFLOPS | Progress: (12/20) | 20.59 s
+[Task 20/25]  Current/Best:   18.30/  22.42 GFLOPS | Progress: (16/20) | 31.70 s
+[Task 20/25]  Current/Best:    7.51/  22.42 GFLOPS | Progress: (20/20) | 36.74 s
 [Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 21/25]  Current/Best:   21.05/  21.05 GFLOPS | Progress: (4/20) | 4.85 s
-[Task 21/25]  Current/Best:   17.59/  21.40 GFLOPS | Progress: (8/20) | 6.40 s
-[Task 21/25]  Current/Best:   13.88/  21.40 GFLOPS | Progress: (12/20) | 14.28 s
-[Task 21/25]  Current/Best:   17.14/  21.40 GFLOPS | Progress: (16/20) | 20.21 s
-[Task 21/25]  Current/Best:    5.56/  21.40 GFLOPS | Progress: (20/20) | 26.21 s Done.
+[Task 21/25]  Current/Best:   18.39/  20.89 GFLOPS | Progress: (4/20) | 6.08 s
+[Task 21/25]  Current/Best:   21.32/  21.32 GFLOPS | Progress: (8/20) | 8.02 s
+[Task 21/25]  Current/Best:   14.96/  21.32 GFLOPS | Progress: (12/20) | 11.35 s
+[Task 21/25]  Current/Best:    5.47/  21.32 GFLOPS | Progress: (16/20) | 17.32 s
+[Task 21/25]  Current/Best:    2.81/  21.32 GFLOPS | Progress: (20/20) | 22.24 s Done.
 
 [Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 22/25]  Current/Best:   18.44/  21.60 GFLOPS | Progress: (4/20) | 5.08 s
-[Task 22/25]  Current/Best:   12.29/  21.60 GFLOPS | Progress: (8/20) | 7.11 s
-[Task 22/25]  Current/Best:   10.60/  21.60 GFLOPS | Progress: (12/20) | 9.12 s
-[Task 22/25]  Current/Best:   12.20/  21.60 GFLOPS | Progress: (16/20) | 11.44 s
-[Task 22/25]  Current/Best:   18.01/  21.60 GFLOPS | Progress: (20/20) | 13.07 s Done.
+[Task 22/25]  Current/Best:   10.94/  22.33 GFLOPS | Progress: (4/20) | 4.89 s
+[Task 22/25]  Current/Best:    7.54/  22.33 GFLOPS | Progress: (8/20) | 7.99 s
+[Task 22/25]  Current/Best:   13.25/  22.33 GFLOPS | Progress: (12/20) | 10.82 s
+[Task 22/25]  Current/Best:   22.17/  22.33 GFLOPS | Progress: (16/20) | 13.69 s
+[Task 22/25]  Current/Best:   10.77/  22.33 GFLOPS | Progress: (20/20) | 15.84 s Done.
 
 [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 23/25]  Current/Best:   19.79/  19.79 GFLOPS | Progress: (4/20) | 5.61 s
-[Task 23/25]  Current/Best:   11.36/  21.79 GFLOPS | Progress: (8/20) | 9.25 s
-[Task 23/25]  Current/Best:   12.07/  21.79 GFLOPS | Progress: (12/20) | 11.69 s
-[Task 23/25]  Current/Best:    3.09/  21.79 GFLOPS | Progress: (16/20) | 17.68 s
-[Task 23/25]  Current/Best:   12.07/  21.79 GFLOPS | Progress: (20/20) | 21.71 s Done.
+[Task 23/25]  Current/Best:    9.79/  17.30 GFLOPS | Progress: (4/20) | 5.72 s
+[Task 23/25]  Current/Best:   24.20/  24.20 GFLOPS | Progress: (8/20) | 8.18 s
+[Task 23/25]  Current/Best:    1.60/  24.20 GFLOPS | Progress: (12/20) | 11.88 s
+[Task 23/25]  Current/Best:   18.06/  24.20 GFLOPS | Progress: (16/20) | 14.77 s
+[Task 23/25]  Current/Best:    9.08/  24.20 GFLOPS | Progress: (20/20) | 18.21 s Done.
 
 [Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 24/25]  Current/Best:    1.81/   7.38 GFLOPS | Progress: (4/20) | 13.83 s
-[Task 24/25]  Current/Best:    3.19/   7.38 GFLOPS | Progress: (8/20) | 22.71 s
-[Task 24/25]  Current/Best:    3.39/   8.83 GFLOPS | Progress: (12/20) | 25.30 s
-[Task 24/25]  Current/Best:    0.59/   8.83 GFLOPS | Progress: (16/20) | 31.91 s
-[Task 24/25]  Current/Best:    3.57/   8.83 GFLOPS | Progress: (20/20) | 42.94 s
-[Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 25/25]  Current/Best:    7.16/   7.16 GFLOPS | Progress: (4/20) | 13.89 s
-[Task 25/25]  Current/Best:    3.27/   9.16 GFLOPS | Progress: (8/20) | 17.17 s
-[Task 25/25]  Current/Best:    1.55/   9.16 GFLOPS | Progress: (12/20) | 20.12 s
-[Task 25/25]  Current/Best:    7.91/   9.16 GFLOPS | Progress: (16/20) | 28.68 s Done.
+[Task 24/25]  Current/Best:    4.20/   5.92 GFLOPS | Progress: (4/20) | 13.37 s
+[Task 24/25]  Current/Best:    8.26/   8.26 GFLOPS | Progress: (8/20) | 18.03 s
+[Task 24/25]  Current/Best:    2.53/   8.26 GFLOPS | Progress: (12/20) | 29.01 s
+[Task 24/25]  Current/Best:    7.42/   8.26 GFLOPS | Progress: (16/20) | 31.58 s
+[Task 24/25]  Current/Best:    7.84/   8.26 GFLOPS | Progress: (20/20) | 37.56 s
+[Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
  Done.
 
-[Task 25/25]  Current/Best:    6.95/   9.65 GFLOPS | Progress: (20/20) | 39.37 s
+[Task 25/25]  Current/Best:    8.36/   9.03 GFLOPS | Progress: (4/20) | 13.66 s
+[Task 25/25]  Current/Best:    6.47/   9.87 GFLOPS | Progress: (8/20) | 19.03 s
+[Task 25/25]  Current/Best:    7.65/   9.87 GFLOPS | Progress: (12/20) | 25.25 s
+[Task 25/25]  Current/Best:    5.07/  10.06 GFLOPS | Progress: (16/20) | 36.18 s
+[Task 25/25]  Current/Best:    5.75/  10.06 GFLOPS | Progress: (20/20) | 39.05 s
 </pre></div>
 </div>
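For context, per-task progress bars like the ones above are typically driven by a tuning loop along the following lines. This is only a sketch: it assumes `mod`, `params`, and `target` come from the earlier Relay import step, and the trial count, tuner choice, and log file name are illustrative rather than the exact values used here.

    from tvm import autotvm
    from tvm.autotvm.tuner import XGBTuner

    # Extract one tuning task per tunable operator in the imported model.
    tasks = autotvm.task.extract_from_program(mod["main"], target=target, params=params)

    measure_option = autotvm.measure_option(
        builder=autotvm.LocalBuilder(build_func="default"),
        runner=autotvm.LocalRunner(number=10, repeat=1, timeout=10, min_repeat_ms=0),
    )

    for i, task in enumerate(tasks):
        prefix = "[Task %2d/%2d] " % (i + 1, len(tasks))
        tuner = XGBTuner(task, loss_type="rank")
        tuner.tune(
            n_trial=20,  # 20 trials per task matches the (x/20) progress above
            measure_option=measure_option,
            callbacks=[
                autotvm.callback.progress_bar(20, prefix=prefix),
                autotvm.callback.log_to_file("autotuning-records.json"),  # illustrative file name
            ],
        )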
 <p>The output from this tuning process will look something like this:</p>
@@ -993,7 +993,7 @@ model using optimized operators to speed up our computations.</p>
     <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;class=&#39;</span><span class="si">%s</span><span class="s2">&#39; with probability=</span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="p">(</span><a href="https://docs.python.org/3/library/stdtypes.html#list" title="builtins.list" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">labels</span></a [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>class=&#39;n02123045 tabby, tabby cat&#39; with probability=0.621104
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>class=&#39;n02123045 tabby, tabby cat&#39; with probability=0.621105
 class=&#39;n02123159 tiger cat&#39; with probability=0.356378
 class=&#39;n02124075 Egyptian cat&#39; with probability=0.019712
 class=&#39;n02129604 tiger, Panthera tigris&#39; with probability=0.001215
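The ranked labels above come from a small post-processing step over the model's output logits. A minimal sketch of that step is shown below; `tvm_output` (the (1, 1000) logits array from the compiled module) and `labels` (the matching ImageNet class names) are assumed inputs, and the plain-numpy softmax stands in for whatever normalization the tutorial itself uses.

    import numpy as np

    def report_top5(tvm_output, labels):
        """Print the five most likely classes for a (1, 1000) logits array."""
        scores = np.squeeze(tvm_output)                   # drop the batch dimension
        scores = np.exp(scores) / np.sum(np.exp(scores))  # softmax -> probabilities
        for rank in np.argsort(scores)[::-1][:5]:         # highest probability first
            print("class='%s' with probability=%f" % (labels[rank], scores[rank]))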
@@ -1031,8 +1031,8 @@ improvement in comparing the optimized model to the unoptimized model.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;unoptimized: </span><span class="si">%s</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="p">(</span><a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">unoptimized</span></a><span class="p">))</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>optimized: {&#39;mean&#39;: 410.5118951500481, &#39;median&#39;: 410.435267000139, &#39;std&#39;: 1.4656197114556417}
-unoptimized: {&#39;mean&#39;: 494.39252055002726, &#39;median&#39;: 494.9026871498063, &#39;std&#39;: 3.1817584438598923}
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>optimized: {&#39;mean&#39;: 382.5639265501013, &#39;median&#39;: 382.085147250109, &#39;std&#39;: 2.9130401392340906}
+unoptimized: {&#39;mean&#39;: 495.16211137008213, &#39;median&#39;: 494.3268438999439, &#39;std&#39;: 3.6706157178015775}
 </pre></div>
 </div>
 </div>
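Summaries like the mean/median/std pairs above can be collected with a small wrapper around `timeit`; the sketch below assumes `module` is a compiled GraphModule and reports milliseconds, with the trial counts chosen only for illustration.

    import timeit
    import numpy as np

    def profile(module, number=10, repeat=10):
        """Return mean/median/std of module.run() in milliseconds."""
        timer = timeit.Timer(lambda: module.run())
        raw = np.array(timer.repeat(repeat=repeat, number=number)) / number * 1000
        return {"mean": np.mean(raw), "median": np.median(raw), "std": np.std(raw)}

Running such a helper once on the module built with the tuning records and once on the module built without them yields a comparison like the one above.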
@@ -1046,7 +1046,7 @@ models.</p>
 <p>Here we presented a simple example using ResNet-50 v2 locally. However, TVM
 supports many more features including cross-compilation, remote execution and
 profiling/benchmarking.</p>
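As a pointer toward those features, the sketch below runs and times a trivial kernel on a remote device over RPC. It is only a sketch: the device address is hypothetical, the `llvm` target is a stand-in for a real cross-compilation target, and it assumes an RPC server is already running on the device.

    import numpy as np
    import tvm
    from tvm import rpc, te

    # Build a trivial kernel locally (a real setup would cross-compile here).
    n = 1024
    A = te.placeholder((n,), name="A")
    B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
    s = te.create_schedule(B.op)
    lib = tvm.build(s, [A, B], target="llvm", name="add_one")

    # Ship the module to the device and load it there.
    remote = rpc.connect("192.168.1.100", 9090)  # hypothetical device address
    lib.export_library("add_one.tar")
    remote.upload("add_one.tar")
    rlib = remote.load_module("add_one.tar")

    # Run and benchmark on the remote device.
    dev = remote.cpu()
    a = tvm.nd.array(np.random.uniform(size=n).astype("float32"), dev)
    b = tvm.nd.array(np.zeros(n, dtype="float32"), dev)
    rlib(a, b)
    timer = rlib.time_evaluator("add_one", dev, number=10)
    print("%g secs/op" % timer(a, b).mean)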
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 13 minutes  40.901 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 13 minutes  22.489 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorial-autotvm-relay-x86-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../_downloads/57a45d9bef1af358191e7d50043e652c/autotvm_relay_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">autotvm_relay_x86.py</span></code></a></p>
diff --git a/docs/tutorial/cross_compilation_and_rpc.html b/docs/tutorial/cross_compilation_and_rpc.html
index b734cb465c..9bba26ad02 100644
--- a/docs/tutorial/cross_compilation_and_rpc.html
+++ b/docs/tutorial/cross_compilation_and_rpc.html
@@ -548,7 +548,7 @@ device and returns the measured cost. Network overhead is excluded.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;</span><span class="si">%g</span><span class="s2"> secs/op&quot;</span> <span class="o">%</span> <span class="n">cost</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>1.289e-07 secs/op
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>1.279e-07 secs/op
 </pre></div>
 </div>
 </div>
diff --git a/docs/tutorial/intro_topi.html b/docs/tutorial/intro_topi.html
index 9ddd9b2334..a5d1a63a45 100644
--- a/docs/tutorial/intro_topi.html
+++ b/docs/tutorial/intro_topi.html
@@ -518,7 +518,7 @@ class Module:
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/ir.html#tvm.ir.Array" title="tvm.ir.Array" class="sphx-glr-backref-module-tvm-ir sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">sg</span><span class="o">.</span><span class="n">stages</span></a><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[stage(a, placeholder(a, 0xc7ecda0)), stage(b, placeholder(b, 0x12710350)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax1, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax2, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;)], reduce_axis=[], tag=broadcast, attrs [...]
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[stage(a, placeholder(a, 0xd4e8040)), stage(b, placeholder(b, 0x9c3cc70)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax1, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax2, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;)], reduce_axis=[], tag=broadcast, attrs= [...]
 </pre></div>
 </div>
 <p>We can test the correctness by comparing with <code class="code docutils literal notranslate"><span class="pre">numpy</span></code> result as follows</p>
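That comparison boils down to a check along these lines; the sketch below rebuilds the broadcast add from the stage list above ((100, 10, 10) plus (10, 10)) and verifies it against numpy. The explicit `topi.add` call and default schedule are stand-ins for the tutorial's own construction.

    import numpy as np
    import tvm
    from tvm import te, topi

    a = te.placeholder((100, 10, 10), name="a")
    b = te.placeholder((10, 10), name="b")
    c = topi.add(a, b)                      # broadcast add, as in the stage list above
    sg = te.create_schedule(c.op)
    func = tvm.build(sg, [a, b, c], target="llvm")

    dev = tvm.cpu(0)
    a_np = np.random.uniform(size=(100, 10, 10)).astype("float32")
    b_np = np.random.uniform(size=(10, 10)).astype("float32")
    a_nd, b_nd = tvm.nd.array(a_np, dev), tvm.nd.array(b_np, dev)
    c_nd = tvm.nd.array(np.zeros((100, 10, 10), dtype="float32"), dev)
    func(a_nd, b_nd, c_nd)
    np.testing.assert_allclose(c_nd.numpy(), a_np + b_np, rtol=1e-5)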
diff --git a/docs/tutorial/sg_execution_times.html b/docs/tutorial/sg_execution_times.html
index 8bc6097fce..bf84e6b242 100644
--- a/docs/tutorial/sg_execution_times.html
+++ b/docs/tutorial/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-tutorial-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>17:05.228</strong> total execution time for <strong>tutorial</strong> files:</p>
+<p><strong>16:48.162</strong> total execution time for <strong>tutorial</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -359,35 +359,35 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="autotvm_relay_x86.html#sphx-glr-tutorial-autotvm-relay-x86-py"><span class="std std-ref">Compiling and Optimizing a Model with the Python Interface (AutoTVM)</span></a> (<code class="docutils literal notranslate"><span class="pre">autotvm_relay_x86.py</span></code>)</p></td>
-<td><p>13:40.901</p></td>
+<td><p>13:22.489</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="auto_scheduler_matmul_x86.html#sphx-glr-tutorial-auto-scheduler-matmul-x86-py"><span class="std std-ref">Optimizing Operators with Auto-scheduling</span></a> (<code class="docutils literal notranslate"><span class="pre">auto_scheduler_matmul_x86.py</span></code>)</p></td>
-<td><p>01:14.973</p></td>
+<td><p>01:30.142</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tensor_expr_get_started.html#sphx-glr-tutorial-tensor-expr-get-started-py"><span class="std std-ref">Working with Operators Using Tensor Expression</span></a> (<code class="docutils literal notranslate"><span class="pre">tensor_expr_get_started.py</span></code>)</p></td>
-<td><p>01:00.605</p></td>
+<td><p>00:58.312</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="relay_quick_start.html#sphx-glr-tutorial-relay-quick-start-py"><span class="std std-ref">Quick Start Tutorial for Compiling Deep Learning Models</span></a> (<code class="docutils literal notranslate"><span class="pre">relay_quick_start.py</span></code>)</p></td>
-<td><p>00:43.767</p></td>
+<td><p>00:41.329</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="autotvm_matmul_x86.html#sphx-glr-tutorial-autotvm-matmul-x86-py"><span class="std std-ref">Optimizing Operators with Schedule Templates and AutoTVM</span></a> (<code class="docutils literal notranslate"><span class="pre">autotvm_matmul_x86.py</span></code>)</p></td>
-<td><p>00:22.816</p></td>
+<td><p>00:13.947</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="intro_topi.html#sphx-glr-tutorial-intro-topi-py"><span class="std std-ref">Introduction to TOPI</span></a> (<code class="docutils literal notranslate"><span class="pre">intro_topi.py</span></code>)</p></td>
-<td><p>00:01.028</p></td>
+<td><p>00:00.965</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tensor_ir_blitz_course.html#sphx-glr-tutorial-tensor-ir-blitz-course-py"><span class="std std-ref">Blitz Course to TensorIR</span></a> (<code class="docutils literal notranslate"><span class="pre">tensor_ir_blitz_course.py</span></code>)</p></td>
-<td><p>00:00.928</p></td>
+<td><p>00:00.818</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="cross_compilation_and_rpc.html#sphx-glr-tutorial-cross-compilation-and-rpc-py"><span class="std std-ref">Cross Compilation and RPC</span></a> (<code class="docutils literal notranslate"><span class="pre">cross_compilation_and_rpc.py</span></code>)</p></td>
-<td><p>00:00.212</p></td>
+<td><p>00:00.160</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="uma.html#sphx-glr-tutorial-uma-py"><span class="std std-ref">Making your Hardware Accelerator TVM-ready with UMA</span></a> (<code class="docutils literal notranslate"><span class="pre">uma.py</span></code>)</p></td>
diff --git a/docs/tutorial/tensor_expr_get_started.html b/docs/tutorial/tensor_expr_get_started.html
index 1c75703c5d..527e1d9291 100644
--- a/docs/tutorial/tensor_expr_get_started.html
+++ b/docs/tutorial/tensor_expr_get_started.html
@@ -559,8 +559,8 @@ helper function to run a profile of the TVM generated code.</p>
 <span class="n">evaluate_addition</span><span class="p">(</span><span class="n">fadd</span><span class="p">,</span> <a href="../reference/api/python/target.html#tvm.target.Target" title="tvm.target.Target" class="sphx-glr-backref-module-tvm-target sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">tgt</span></a><span class="p">,</span> <span class="s2">&quot;naive&quot;</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.html#list" ti [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.000009
-naive: 0.000007
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.000006
+naive: 0.000006
 </pre></div>
 </div>
 </div>
@@ -615,7 +615,7 @@ compile and run this new schedule with the parallel operation applied:</p>
 <span class="n">evaluate_addition</span><span class="p">(</span><span class="n">fadd_parallel</span><span class="p">,</span> <a href="../reference/api/python/target.html#tvm.target.Target" title="tvm.target.Target" class="sphx-glr-backref-module-tvm-target sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">tgt</span></a><span class="p">,</span> <span class="s2">&quot;parallel&quot;</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.h [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallel: 0.000007
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallel: 0.000006
 </pre></div>
 </div>
 </div>
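The parallel number above comes from asking TVM to spread the single addition loop across CPU threads. A minimal sketch of that schedule, assuming the same one-dimensional add as the tutorial (names are illustrative):

    import tvm
    from tvm import te

    n = te.var("n")
    A = te.placeholder((n,), name="A")
    B = te.placeholder((n,), name="B")
    C = te.compute((n,), lambda i: A[i] + B[i], name="C")

    s = te.create_schedule(C.op)
    s[C].parallel(C.op.axis[0])  # run the whole loop across CPU threads
    fadd_parallel = tvm.build(s, [A, B, C], target="llvm", name="myadd_parallel")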
@@ -654,7 +654,7 @@ factor to be the number of threads on your CPU.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vector: 0.000039
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vector: 0.000038
 # from tvm.script import ir as I
 # from tvm.script import tir as T
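The vector result above comes from splitting the loop by a fixed factor and vectorizing the inner part. A minimal sketch, assuming the same one-dimensional add as before; the factor of 4 is chosen only for illustration and would normally match your CPU's SIMD width or thread count:

    import tvm
    from tvm import te

    n = te.var("n")
    A = te.placeholder((n,), name="A")
    B = te.placeholder((n,), name="B")
    C = te.compute((n,), lambda i: A[i] + B[i], name="C")

    s = te.create_schedule(C.op)
    outer, inner = s[C].split(C.op.axis[0], factor=4)  # split into chunks of 4
    s[C].parallel(outer)                               # threads over the outer loop
    s[C].vectorize(inner)                              # SIMD over the inner loop
    fadd_vector = tvm.build(s, [A, B, C], target="llvm", name="myadd_vector")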
 
@@ -691,10 +691,10 @@ class Module:
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Operator                  Timing             Performance
-   numpy    8.603850001236423e-06                    1.0
-   naive              6.6914e-06      0.7777216012643651
-parallel    7.0907000000000006e-06     0.824131057489499
-  vector    3.9132000000000004e-05      4.54819644628585
+   numpy    5.648829974234104e-06                    1.0
+   naive              5.7964e-06       1.026123998498628
+parallel              6.0328e-06      1.0679733728076948
+  vector    3.790319999999999e-05      6.709920492011108
 </pre></div>
 </div>
 <div class="admonition-code-specialization admonition">
@@ -1010,7 +1010,7 @@ matrix multiplication.</p>
 <span class="n">answer</span> <span class="o">=</span> <span class="n">numpy</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">a</span><span class="o">.</span><span class="n">numpy</span><span class="p">(),</span> <span class="n">b</span><span class="o">.</span><span class="n">numpy</span><span class="p">())</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.018497
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.015701
 </pre></div>
 </div>
 <p>Now we write a basic matrix multiplication using TVM TE and verify that it
@@ -1051,7 +1051,7 @@ optimizations.</p>
 <span class="n">evaluate_operation</span><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">s</span></a><span class="p">,</span> <span class="p">[</span><a href="../reference/api/python/te.html#tvm.te.Tensor" title="tvm.te.Tensor" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>none: 3.437607
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>none: 3.355358
 </pre></div>
 </div>
 <p>Let’s take a look at the intermediate representation of the operator and
@@ -1115,7 +1115,7 @@ schedule.</p>
 <span class="n">evaluate_operation</span><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">s</span></a><span class="p">,</span> <span class="p">[</span><a href="../reference/api/python/te.html#tvm.te.Tensor" title="tvm.te.Tensor" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>blocking: 0.297377
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>blocking: 0.285455
 </pre></div>
 </div>
 <p>By reordering the computation to take advantage of caching, you should see a
@@ -1164,7 +1164,7 @@ already cache friendly from our previous optimizations.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vectorization: 0.283909
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vectorization: 0.267830
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1213,7 +1213,7 @@ more cache friendly.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>loop permutation: 0.119010
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>loop permutation: 0.111910
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1283,7 +1283,7 @@ optimized schedule.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>array packing: 0.107144
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>array packing: 0.100887
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1349,7 +1349,7 @@ to `C</cite> when all the block results are ready.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>block caching: 0.111754
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>block caching: 0.096841
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1406,7 +1406,7 @@ of thread-level parallelization.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallelization: 0.132169
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallelization: 0.114638
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1459,13 +1459,13 @@ working, we can compare the results.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>        Operator                  Timing             Performance
-            none            3.4376067333                     1.0
-        blocking     0.29737730780000005     0.08650707625142644
-   vectorization     0.28390941830000005     0.08258926640728781
-loop permutation            0.1190097843     0.03461995322127909
-   array packing     0.10714365569999999    0.031168095716738745
-   block caching            0.1117543183    0.032509337737048005
- parallelization            0.1321686653    0.038447872474674266
+            none            3.3553575343                     1.0
+        blocking            0.2854547764     0.08507432471262769
+   vectorization            0.2678296384      0.0798214901577918
+loop permutation     0.11190976379999999     0.03335256009412028
+   array packing     0.10088720389999999    0.030067497388485378
+   block caching            0.0968410304    0.028861612930975812
+ parallelization            0.1146384096     0.03416578067407534
 </pre></div>
 </div>
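The schedules compared in that table stack a handful of loop transformations on top of the naive matmul. A rough sketch of the combined idea, assuming 1024x1024 float32 matrices and a block size of 32 (the exact factors used in the tutorial may differ):

    import tvm
    from tvm import te

    M = K = N = 1024
    bn = 32
    A = te.placeholder((M, K), name="A")
    B = te.placeholder((K, N), name="B")
    k = te.reduce_axis((0, K), name="k")
    C = te.compute((M, N), lambda x, y: te.sum(A[x, k] * B[k, y], axis=k), name="C")

    s = te.create_schedule(C.op)
    xo, yo, xi, yi = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)  # blocking
    (kaxis,) = s[C].op.reduce_axis
    ko, ki = s[C].split(kaxis, factor=4)
    s[C].reorder(xo, yo, ko, ki, xi, yi)  # loop permutation for cache locality
    s[C].vectorize(yi)                    # vectorize the innermost loop
    s[C].parallel(xo)                     # thread-level parallelism on the outer loop
    func = tvm.build(s, [A, B, C], target="llvm", name="mmult")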
 <p>Note that the outputs on the web page reflect the running times on a
@@ -1497,7 +1497,6 @@ is</p>
 you can build generic templates of the matrix multiplication and other
operations with tunable parameters that allow you to automatically optimize
 the computation for specific platforms.</p>
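A tunable template in that style can be as small as the sketch below; the template name, knobs, and problem size are illustrative, and only two split knobs are exposed to the tuner.

    import tvm
    from tvm import autotvm, te

    @autotvm.template("sketch/matmul")
    def matmul(N, L, M, dtype):
        A = te.placeholder((N, L), name="A", dtype=dtype)
        B = te.placeholder((L, M), name="B", dtype=dtype)
        k = te.reduce_axis((0, L), name="k")
        C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
        s = te.create_schedule(C.op)

        # Declare the tunable knobs: how to tile the two spatial loops.
        cfg = autotvm.get_config()
        y, x = s[C].op.axis
        (kaxis,) = s[C].op.reduce_axis
        cfg.define_split("tile_y", y, num_outputs=2)
        cfg.define_split("tile_x", x, num_outputs=2)

        # Apply whichever split the tuner picked and reorder for locality.
        yo, yi = cfg["tile_y"].apply(s, C, y)
        xo, xi = cfg["tile_x"].apply(s, C, x)
        s[C].reorder(yo, xo, kaxis, yi, xi)
        return s, [A, B, C]

    task = autotvm.task.create("sketch/matmul", args=(512, 512, 512, "float32"), target="llvm")
    print(task.config_space)

A tuner such as XGBTuner can then search `task.config_space`, which is how per-task logs like the ones earlier on this page are produced.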
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  0.605 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorial-tensor-expr-get-started-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../_downloads/40a01cffb015a67aaec0fad7e27cf80d/tensor_expr_get_started.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tensor_expr_get_started.py</span></code></a></p>