Posted to commits@tvm.apache.org by tq...@apache.org on 2023/05/10 21:21:20 UTC

[tvm-site] branch asf-site updated: deploying docs (apache/tvm@a1c1ccafa16cfdc155519fa38f9a5b782a1a5571)

This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/tvm-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 2b9b8e65d0 deploying docs (apache/tvm@a1c1ccafa16cfdc155519fa38f9a5b782a1a5571)
2b9b8e65d0 is described below

commit 2b9b8e65d09ffb6dbe59f559e854f60b8a879f65
Author: tvm-bot <95...@users.noreply.github.com>
AuthorDate: Wed May 10 21:21:14 2023 +0000

    deploying docs (apache/tvm@a1c1ccafa16cfdc155519fa38f9a5b782a1a5571)
---
 .../how_to/compile_models/from_darknet.rst.txt     |   2 +-
 .../how_to/compile_models/from_mxnet.rst.txt       |   2 +-
 .../how_to/compile_models/from_oneflow.rst.txt     |   2 +-
 .../how_to/compile_models/from_paddle.rst.txt      |   2 +-
 .../how_to/compile_models/from_pytorch.rst.txt     |   2 +-
 .../how_to/compile_models/from_tensorflow.rst.txt  |   2 +-
 .../compile_models/sg_execution_times.rst.txt      |  22 +-
 .../deploy_models/deploy_model_on_adreno.rst.txt   |   4 +-
 .../deploy_model_on_adreno_tvmc.rst.txt            |   2 +-
 .../deploy_models/deploy_model_on_android.rst.txt  |   2 +-
 .../deploy_object_detection_pytorch.rst.txt        |   4 +-
 .../deploy_models/deploy_prequantized.rst.txt      |   6 +-
 .../deploy_prequantized_tflite.rst.txt             |   2 +-
 .../how_to/deploy_models/deploy_quantized.rst.txt  |   2 +-
 .../deploy_models/sg_execution_times.rst.txt       |  22 +-
 .../extend_tvm/bring_your_own_datatypes.rst.txt    |   2 +-
 .../how_to/extend_tvm/sg_execution_times.rst.txt   |  10 +-
 .../how_to/extend_tvm/use_pass_instrument.rst.txt  |  16 +-
 .../optimize_operators/opt_conv_cuda.rst.txt       |   2 +-
 .../optimize_operators/opt_conv_tensorcore.rst.txt |   2 +-
 .../how_to/optimize_operators/opt_gemm.rst.txt     |  16 +-
 .../optimize_operators/sg_execution_times.rst.txt  |   8 +-
 .../sg_execution_times.rst.txt                     |  12 +-
 .../tune_conv2d_layer_cuda.rst.txt                 |   2 +-
 .../tune_network_cuda.rst.txt                      |   4 +-
 .../tune_network_x86.rst.txt                       |   4 +-
 .../tune_with_autotvm/sg_execution_times.rst.txt   |   6 +-
 .../tune_with_autotvm/tune_conv2d_cuda.rst.txt     |   2 +-
 .../work_with_microtvm/micro_autotune.rst.txt      |  18 +-
 .../work_with_microtvm/micro_pytorch.rst.txt       |   4 +-
 .../how_to/work_with_microtvm/micro_train.rst.txt  |  16 +-
 .../work_with_microtvm/sg_execution_times.rst.txt  |  14 +-
 .../work_with_relay/sg_execution_times.rst.txt     |   8 +-
 .../how_to/work_with_schedules/intrin_math.rst.txt |   2 +-
 .../work_with_schedules/sg_execution_times.rst.txt |  16 +-
 .../tutorials/autotvm/sg_execution_times.rst.txt   |   4 +-
 .../frontend/deploy_classification.rst.txt         |   4 +-
 .../tutorials/frontend/deploy_detection.rst.txt    |   4 +-
 .../tutorials/frontend/sg_execution_times.rst.txt  |   6 +-
 .../tutorials/optimize/sg_execution_times.rst.txt  |   6 +-
 .../topic/vta/tutorials/sg_execution_times.rst.txt |   6 +-
 .../tutorial/auto_scheduler_matmul_x86.rst.txt     |  11 +-
 docs/_sources/tutorial/autotvm_matmul_x86.rst.txt  | 177 +++++++++++++-
 docs/_sources/tutorial/autotvm_relay_x86.rst.txt   |  52 ++--
 .../tutorial/cross_compilation_and_rpc.rst.txt     |   2 +-
 docs/_sources/tutorial/intro_topi.rst.txt          |   2 +-
 docs/_sources/tutorial/sg_execution_times.rst.txt  |  22 +-
 .../tutorial/tensor_expr_get_started.rst.txt       |  40 ++--
 docs/commit_hash                                   |   2 +-
 docs/how_to/compile_models/from_darknet.html       |   2 +-
 docs/how_to/compile_models/from_mxnet.html         |   2 +-
 docs/how_to/compile_models/from_oneflow.html       |  14 +-
 docs/how_to/compile_models/from_paddle.html        |   2 +-
 docs/how_to/compile_models/from_pytorch.html       |  14 +-
 docs/how_to/compile_models/from_tensorflow.html    |   2 +-
 docs/how_to/compile_models/sg_execution_times.html |  22 +-
 .../deploy_models/deploy_model_on_adreno.html      |   4 +-
 .../deploy_models/deploy_model_on_adreno_tvmc.html |  16 +-
 .../deploy_models/deploy_model_on_android.html     |   2 +-
 .../deploy_object_detection_pytorch.html           |  55 ++---
 docs/how_to/deploy_models/deploy_prequantized.html |   8 +-
 .../deploy_models/deploy_prequantized_tflite.html  |   2 +-
 docs/how_to/deploy_models/deploy_quantized.html    |   2 +-
 docs/how_to/deploy_models/sg_execution_times.html  |  22 +-
 .../extend_tvm/bring_your_own_datatypes.html       |   2 +-
 docs/how_to/extend_tvm/sg_execution_times.html     |  10 +-
 docs/how_to/extend_tvm/use_pass_instrument.html    |  16 +-
 docs/how_to/optimize_operators/opt_conv_cuda.html  |   2 +-
 .../optimize_operators/opt_conv_tensorcore.html    |   2 +-
 docs/how_to/optimize_operators/opt_gemm.html       |  16 +-
 .../optimize_operators/sg_execution_times.html     |   8 +-
 .../sg_execution_times.html                        |  12 +-
 .../tune_conv2d_layer_cuda.html                    |   2 +-
 .../tune_with_autoscheduler/tune_network_cuda.html |   4 +-
 .../tune_with_autoscheduler/tune_network_x86.html  |   4 +-
 .../tune_with_autotvm/sg_execution_times.html      |   6 +-
 .../how_to/tune_with_autotvm/tune_conv2d_cuda.html |   2 +-
 docs/how_to/work_with_microtvm/micro_autotune.html |  18 +-
 docs/how_to/work_with_microtvm/micro_pytorch.html  |   5 +-
 docs/how_to/work_with_microtvm/micro_train.html    |  16 +-
 .../work_with_microtvm/sg_execution_times.html     |  14 +-
 .../how_to/work_with_relay/sg_execution_times.html |   8 +-
 docs/how_to/work_with_schedules/intrin_math.html   |   2 +-
 .../work_with_schedules/sg_execution_times.html    |  16 +-
 docs/install/nnpack.html                           |  12 +-
 docs/reference/api/python/auto_scheduler.html      |   4 +-
 .../api/typedoc/classes/bytestreamreader.html      |  12 +-
 .../api/typedoc/classes/cachedcallstack.html       |  34 +--
 docs/reference/api/typedoc/classes/dldatatype.html |  12 +-
 docs/reference/api/typedoc/classes/dldevice.html   |  10 +-
 .../reference/api/typedoc/classes/environment.html |  12 +-
 docs/reference/api/typedoc/classes/ffilibrary.html |  20 +-
 docs/reference/api/typedoc/classes/instance.html   |  58 ++---
 docs/reference/api/typedoc/classes/memory.html     |  34 +--
 docs/reference/api/typedoc/classes/module.html     |  10 +-
 docs/reference/api/typedoc/classes/ndarray.html    |  22 +-
 .../api/typedoc/classes/packedfunccell.html        |   6 +-
 docs/reference/api/typedoc/classes/rpcserver.html  |  14 +-
 .../api/typedoc/classes/runtimecontext.html        |  22 +-
 docs/reference/api/typedoc/classes/scalar.html     |   6 +-
 docs/reference/api/typedoc/classes/tvmarray.html   |  16 +-
 docs/reference/api/typedoc/classes/tvmobject.html  |  12 +-
 .../api/typedoc/classes/webgpucontext.html         |  12 +-
 docs/reference/api/typedoc/enums/argtypecode.html  |  30 +--
 .../api/typedoc/enums/aynccallbackcode.html        |   4 +-
 .../api/typedoc/enums/dldatatypecode.html          |   8 +-
 .../api/typedoc/enums/rpcserverstate.html          |  12 +-
 docs/reference/api/typedoc/enums/sizeof.html       |  18 +-
 docs/reference/api/typedoc/index.html              | 124 +++++-----
 .../api/typedoc/interfaces/disposable.html         |   2 +-
 .../api/typedoc/interfaces/functioninfo.html       |   6 +-
 .../api/typedoc/interfaces/libraryprovider.html    |   4 +-
 docs/searchindex.js                                |   2 +-
 .../vta/tutorials/autotvm/sg_execution_times.html  |   4 +-
 .../tutorials/frontend/deploy_classification.html  |   4 +-
 .../vta/tutorials/frontend/deploy_detection.html   |   4 +-
 .../vta/tutorials/frontend/sg_execution_times.html |   6 +-
 .../vta/tutorials/optimize/sg_execution_times.html |   6 +-
 docs/topic/vta/tutorials/sg_execution_times.html   |   6 +-
 docs/tutorial/auto_scheduler_matmul_x86.html       |   7 +-
 docs/tutorial/autotvm_matmul_x86.html              | 177 +++++++++++++-
 docs/tutorial/autotvm_relay_x86.html               | 264 ++++++++++-----------
 docs/tutorial/cross_compilation_and_rpc.html       |   2 +-
 docs/tutorial/intro_topi.html                      |   2 +-
 docs/tutorial/sg_execution_times.html              |  22 +-
 docs/tutorial/tensor_expr_get_started.html         |  40 ++--
 126 files changed, 1156 insertions(+), 840 deletions(-)

diff --git a/docs/_sources/how_to/compile_models/from_darknet.rst.txt b/docs/_sources/how_to/compile_models/from_darknet.rst.txt
index 52b298e046..7ff73c67b0 100644
--- a/docs/_sources/how_to/compile_models/from_darknet.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_darknet.rst.txt
@@ -318,7 +318,7 @@ The process is no different from other examples.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  30.217 seconds)
+   **Total running time of the script:** ( 1 minutes  30.774 seconds)
 
 
 .. _sphx_glr_download_how_to_compile_models_from_darknet.py:
diff --git a/docs/_sources/how_to/compile_models/from_mxnet.rst.txt b/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
index ba0ac64e47..a2d32d2c1f 100644
--- a/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
@@ -116,7 +116,7 @@ In this section, we download a pretrained imagenet model and classify an image.
 
  .. code-block:: none
 
-    Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip994aa5d7-bcb2-4090-a662-c279738f84d3 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
+    Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zipa81aa688-ee2d-4db9-8c75-4fbb38864279 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
     x (1, 3, 224, 224)
 
 
diff --git a/docs/_sources/how_to/compile_models/from_oneflow.rst.txt b/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
index 1db2a45b13..f5bb8f0c90 100644
--- a/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
@@ -121,7 +121,7 @@ Load a pretrained OneFlow model and save model
  .. code-block:: none
 
     Downloading: "https://oneflow-public.oss-cn-beijing.aliyuncs.com/model_zoo/flowvision/classification/ResNet/resnet18.zip" to /workspace/.oneflow/flowvision_cache/resnet18.zip
-
      0%|          | 0.00/41.5M [00:00<?, ?B/s]
     17%|#7        | 7.16M/41.5M [00:00<00:00, 75.0MB/s]
     34%|###4      | 14.3M/41.5M [00:00<00:00, 56.4MB/s]
     48%|####8     | 20.0M/41.5M [00:00<00:00, 40.1MB/s]
     58%|#####8    | 24.2M/41.5M [00:00<00:00, 33.6MB/s]
     77%|#######7  | 32.0M/41.5M [00:00<00:00, 44.3MB/s]
     91%|######### | 37.6M/41.5M [00:00<00:00, 47.9MB/s]
    100%|##########| 41.5M/41.5M [00:01<00:00, 43.5MB/s]
+
      0%|          | 0.00/41.5M [00:00<?, ?B/s]
     15%|#5        | 6.33M/41.5M [00:00<00:00, 61.5MB/s]
     29%|##9       | 12.2M/41.5M [00:00<00:00, 54.2MB/s]
     42%|####1     | 17.4M/41.5M [00:00<00:00, 45.8MB/s]
     58%|#####7    | 24.0M/41.5M [00:00<00:00, 37.1MB/s]
     77%|#######7  | 32.0M/41.5M [00:00<00:00, 40.8MB/s]
     96%|#########6| 40.0M/41.5M [00:00<00:00, 47.6MB/s]
    100%|##########| 41.5M/41.5M [00:00<00:00, 47.2MB/s]
 
 
 
diff --git a/docs/_sources/how_to/compile_models/from_paddle.rst.txt b/docs/_sources/how_to/compile_models/from_paddle.rst.txt
index 0f8476235c..358104cecf 100644
--- a/docs/_sources/how_to/compile_models/from_paddle.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_paddle.rst.txt
@@ -209,7 +209,7 @@ Look up prediction top 1 index in 1000 class synset.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  3.082 seconds)
+   **Total running time of the script:** ( 1 minutes  1.081 seconds)
 
 
 .. _sphx_glr_download_how_to_compile_models_from_paddle.py:
diff --git a/docs/_sources/how_to/compile_models/from_pytorch.rst.txt b/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
index eb4676d0db..0a1156c1f4 100644
--- a/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
@@ -101,7 +101,7 @@ Load a pretrained PyTorch model
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet18_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet18_Weights.DEFAULT` to get the most up-to-date weights.
       warnings.warn(msg)
     Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
-
      0%|          | 0.00/44.7M [00:00<?, ?B/s]
     18%|#7        | 7.99M/44.7M [00:00<00:00, 64.9MB/s]
     36%|###5      | 16.0M/44.7M [00:00<00:00, 70.6MB/s]
     54%|#####3    | 24.1M/44.7M [00:00<00:00, 74.9MB/s]
     70%|#######   | 31.3M/44.7M [00:00<00:00, 69.2MB/s]
     85%|########4 | 38.0M/44.7M [00:00<00:00, 66.0MB/s]
     99%|#########9| 44.3M/44.7M [00:00<00:00, 53.3MB/s]
    100%|##########| 44.7M/44.7M [00:00<00:00, 61.2MB/s]
+
      0%|          | 0.00/44.7M [00:00<?, ?B/s]
     18%|#7        | 7.99M/44.7M [00:00<00:00, 71.8MB/s]
     33%|###3      | 14.8M/44.7M [00:00<00:00, 47.6MB/s]
     44%|####4     | 19.8M/44.7M [00:00<00:00, 48.9MB/s]
     58%|#####8    | 26.1M/44.7M [00:00<00:00, 50.6MB/s]
     72%|#######1  | 32.0M/44.7M [00:00<00:00, 40.6MB/s]
     90%|########9 | 40.0M/44.7M [00:00<00:00, 44.0MB/s]
    100%|##########| 44.7M/44.7M [00:00<00:00, 49.7MB/s]
 
 
 
diff --git a/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt b/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
index 04072d4534..8ce658f80c 100644
--- a/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
@@ -430,7 +430,7 @@ Run the corresponding model on tensorflow
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  31.594 seconds)
+   **Total running time of the script:** ( 1 minutes  30.903 seconds)
 
 
 .. _sphx_glr_download_how_to_compile_models_from_tensorflow.py:
diff --git a/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt b/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
index 3d0d84c893..d492255e2e 100644
--- a/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
@@ -5,26 +5,26 @@
 
 Computation times
 =================
-**07:01.605** total execution time for **how_to_compile_models** files:
+**06:57.822** total execution time for **how_to_compile_models** files:
 
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_tensorflow.py` (``from_tensorflow.py``) | 01:31.594 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_tensorflow.py` (``from_tensorflow.py``) | 01:30.903 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_darknet.py` (``from_darknet.py``)       | 01:30.217 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_darknet.py` (``from_darknet.py``)       | 01:30.774 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_paddle.py` (``from_paddle.py``)         | 01:03.082 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_paddle.py` (``from_paddle.py``)         | 01:01.081 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_oneflow.py` (``from_oneflow.py``)       | 00:40.766 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_oneflow.py` (``from_oneflow.py``)       | 00:39.926 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_coreml.py` (``from_coreml.py``)         | 00:36.563 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_coreml.py` (``from_coreml.py``)         | 00:35.969 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_mxnet.py` (``from_mxnet.py``)           | 00:32.386 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_mxnet.py` (``from_mxnet.py``)           | 00:32.369 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_pytorch.py` (``from_pytorch.py``)       | 00:27.561 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_pytorch.py` (``from_pytorch.py``)       | 00:27.840 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_keras.py` (``from_keras.py``)           | 00:25.426 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_keras.py` (``from_keras.py``)           | 00:24.911 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_tflite.py` (``from_tflite.py``)         | 00:11.222 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_tflite.py` (``from_tflite.py``)         | 00:11.208 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_onnx.py` (``from_onnx.py``)             | 00:02.787 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_onnx.py` (``from_onnx.py``)             | 00:02.841 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
index b519c84a9b..0c8259387e 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
@@ -673,7 +673,7 @@ well as provides information about the model's performance
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-     4226.3831    4226.3092    4229.8331    4223.6465      2.2606   
+     4223.1643    4223.5665    4226.1886    4218.2129      2.3249   
                
 
 
@@ -682,7 +682,7 @@ well as provides information about the model's performance
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  19.527 seconds)
+   **Total running time of the script:** ( 1 minutes  19.724 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_model_on_adreno.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt
index 4076d3bbc5..f4f0ebe8fd 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt
@@ -127,7 +127,7 @@ Make a Keras Resnet50 Model
  .. code-block:: none
 
     Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels.h5
-
         8192/102967424 [..............................] - ETA: 0s
      8380416/102967424 [=>............................] - ETA: 1s
     16769024/102967424 [===>..........................] - ETA: 1s
     17481728/102967424 [====>.........................] - ETA: 1s
     22216704/102967424 [=====>........................] - ETA: 1s
     25157632/102967424 [======>.......................] - ETA: 1s
     33546240/102967424 [========>.....................] - ETA: 1s
     40189952/102967424 [==========>...................] - ETA: 1s
 
     41934848/102967424 [===========>..................] - ETA: 1s
     50323456/102967424 [=============>................] - ETA: 0s
     58712064/102967424 [================>.............] - ETA: 0s
     65355776/102967424 [==================>...........] - ETA: 0s
     67100672/102967424 [==================>...........] - ETA: 0s
     69296128/102967424 [===================>..........] - ETA: 0s
     75489280/102967424 [====================>.........] - ETA: 0s
     82944000/102967424 [=======================>......] - ETA: 0s
     83877888/102967424 [=======================>......] - ETA: 0s
     92266496/102967424 [=========================>....] - ETA: 0s
    100646912/102967424 [============================>.] - ETA: 0s
    102967424/102967424 [==============================] - 2s 0us/step
+
         8192/102967424 [..............................] - ETA: 0s
      8380416/102967424 [=>............................] - ETA: 0s
     16769024/102967424 [===>..........................] - ETA: 0s
     25157632/102967424 [======>.......................] - ETA: 1s
     33546240/102967424 [========>.....................] - ETA: 1s
     41934848/102967424 [===========>..................] - ETA: 0s
     50323456/102967424 [=============>................] - ETA: 0s
     56967168/102967424 [===============>..............] - ETA: 0s
 
     58712064/102967424 [================>.............] - ETA: 0s
     65355776/102967424 [==================>...........] - ETA: 0s
     67100672/102967424 [==================>...........] - ETA: 0s
     73482240/102967424 [====================>.........] - ETA: 0s
     75489280/102967424 [====================>.........] - ETA: 0s
     82124800/102967424 [======================>.......] - ETA: 0s
     83877888/102967424 [=======================>......] - ETA: 0s
     89325568/102967424 [=========================>....] - ETA: 0s
     92266496/102967424 [=========================>....] - ETA: 0s
     98910208/102967424 [===========================>..] - ETA: 0s
    100646912/102967424 [============================>.] - ETA: 0s
    102967424/102967424 [==============================] - 2s 0us/step
 
 
 
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
index b94849a1f0..dcad7cf6ef 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
@@ -437,7 +437,7 @@ Execute on TVM
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      16.0945      16.1149      16.8290      15.2921       0.4424   
+      16.0708      15.8173      17.3136      15.6391       0.5556   
                
 
 
diff --git a/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt b/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
index 5f51431559..7b5acfbb7d 100644
--- a/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
@@ -130,7 +130,7 @@ Load pre-trained maskrcnn from torchvision and do tracing
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=MaskRCNN_ResNet50_FPN_Weights.COCO_V1`. You can also use `weights=MaskRCNN_ResNet50_FPN_Weights.DEFAULT` to get the most up-to-date weights.
       warnings.warn(msg)
     Downloading: "https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth" to /workspace/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
-
      0%|          | 0.00/170M [00:00<?, ?B/s]
      5%|4         | 7.99M/170M [00:00<00:03, 51.5MB/s]
      8%|8         | 14.3M/170M [00:00<00:03, 45.1MB/s]
     13%|#3        | 22.3M/170M [00:00<00:03, 44.0MB/s]
     16%|#5        | 26.5M/170M [00:00<00:04, 36.6MB/s]
     18%|#7        | 30.3M/170M [00:00<00:04, 35.7MB/s]
     21%|##1       | 36.2M/170M [00:00<00:03, 42.2MB/s]
     24%|##3       | 40.4M/170M [00:01<00:03, 40.7MB/s]
     28%|##8       | 48.0M/170M [00:01<00:03, 42.3MB/s]
     33%|###2      | 56.0M/170M [00:01<00:02, 41.7MB/s]
     38%|###7      | 64.0M/170M [00:01<00:02, 43.5MB/s]
     42%|####2     | 72.0M/170M [00:01<00:02, 47.8MB/s]
     47%|####7     | 80.0M/170M [00:01<00:01, 47.8MB/s]
     52%|#####1    | 87.7M/170M [00:02<00:01, 54.7MB/s]
     56%|#####5    | 94.3M/170M [00:02<00:01, 55.4MB/s]
     59%|#####9    | 101M/170M [00:02<00:01, 58.5MB/s] 
     63%|######2   | 107M/170M [00:02<00:01, 47.5MB/s]
     67%|######7   | 114M/170M [00:02<00:01, 54.3MB/s]
 
     71%|#######   | 120M/170M [00:02<00:00, 52.6MB/s]
     75%|#######5  | 128M/170M [00:02<00:00, 53.2MB/s]
     80%|########  | 136M/170M [00:02<00:00, 55.1MB/s]
     85%|########4 | 144M/170M [00:03<00:00, 54.8MB/s]
     88%|########8 | 150M/170M [00:03<00:00, 54.3MB/s]
     92%|#########1| 156M/170M [00:03<00:00, 47.5MB/s]
     94%|#########4| 160M/170M [00:03<00:00, 30.4MB/s]
     98%|#########7| 166M/170M [00:03<00:00, 35.8MB/s]
    100%|##########| 170M/170M [00:03<00:00, 45.5MB/s]
+
      0%|          | 0.00/170M [00:00<?, ?B/s]
      5%|4         | 7.99M/170M [00:00<00:03, 44.7MB/s]
      8%|8         | 14.3M/170M [00:00<00:03, 43.4MB/s]
     11%|#         | 18.4M/170M [00:00<00:03, 41.5MB/s]
     14%|#4        | 24.0M/170M [00:00<00:03, 38.4MB/s]
     19%|#8        | 32.0M/170M [00:00<00:03, 40.0MB/s]
     24%|##3       | 40.0M/170M [00:01<00:03, 40.9MB/s]
     28%|##7       | 47.3M/170M [00:01<00:02, 48.6MB/s]
     31%|###       | 52.3M/170M [00:01<00:02, 42.2MB/s]
     35%|###4      | 58.7M/170M [00:01<00:02, 47.5MB/s]
     38%|###7      | 64.0M/170M [00:01<00:02, 40.0MB/s]
     42%|####2     | 72.0M/170M [00:01<00:02, 45.9MB/s]
     47%|####6     | 79.5M/170M [00:01<00:01, 53.2MB/s]
     50%|#####     | 85.1M/170M [00:01<00:01, 50.9MB/s]
     53%|#####3    | 90.3M/170M [00:02<00:02, 41.0MB/s]
     57%|#####6    | 96.0M/170M [00:02<00:02, 38.7MB/s]
     61%|######1   | 104M/170M [00:02<00:01, 41.8MB/s] 
     66%|######5   | 112M/170M [00:02<00:01, 47.7MB/s]
     71%|#######   | 120M/170M [00:02<00:01, 51.8MB/s]
     75%|#######5  | 128M/170M [00:02<00:00, 51.5MB/s]
     79%|#######9  | 134M/170M [00:03<00:00, 52.7MB/s]
     82%|########2 | 139M/170M [00:03<00:00, 50.9MB/s]
     85%|########5 | 144M/170M [00:03<00:00, 49.3MB/s]
     89%|########9 | 152M/170M [00:03<00:00, 50.6MB/s]
     93%|#########3| 158M/170M [00:03<00:00, 52.7MB/s]
     96%|#########6| 163M/170M [00:03<00:00, 45.0MB/s]
     99%|#########8| 168M/170M [00:03<00:00, 42.7MB/s]
    100%|##########| 170M/170M [00:03<00:00, 46.0MB/s]
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/nn/functional.py:3912: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
       (torch.floor((input.size(i + 2).float() * torch.tensor(scale_factors[i], dtype=torch.float32)).float()))
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/ops/boxes.py:157: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
@@ -295,7 +295,7 @@ Get boxes with score larger than 0.9
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 3 minutes  40.156 seconds)
+   **Total running time of the script:** ( 3 minutes  39.367 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_object_detection_pytorch.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt b/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
index 7882426fe3..6fe7a094f9 100644
--- a/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
@@ -227,7 +227,7 @@ training. Other models require a full post training calibration.
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=MobileNet_V2_Weights.IMAGENET1K_V1`. You can also use `weights=MobileNet_V2_Weights.DEFAULT` to get the most up-to-date weights.
       warnings.warn(msg)
     Downloading: "https://download.pytorch.org/models/mobilenet_v2-b0353104.pth" to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth
-
      0%|          | 0.00/13.6M [00:00<?, ?B/s]
     59%|#####8    | 7.99M/13.6M [00:00<00:00, 77.9MB/s]
    100%|##########| 13.6M/13.6M [00:00<00:00, 95.9MB/s]
+
      0%|          | 0.00/13.6M [00:00<?, ?B/s]
     59%|#####9    | 8.04M/13.6M [00:00<00:00, 84.3MB/s]
    100%|##########| 13.6M/13.6M [00:00<00:00, 61.9MB/s]
 
 
 
@@ -409,7 +409,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      89.1452      89.1043      91.6734      88.5759       0.4102   
+      88.7922      88.7505      90.0318      88.3844       0.2541   
                
 
 
@@ -458,7 +458,7 @@ TODO
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  25.446 seconds)
+   **Total running time of the script:** ( 1 minutes  25.313 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_prequantized.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt b/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
index 7982e342a7..9d0a90af5b 100644
--- a/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
@@ -423,7 +423,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      109.4427     109.4240     110.9627     108.2313      0.4822   
+      110.0824     110.0108     114.0589     109.1821      0.6752   
                
 
 
diff --git a/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt b/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
index 18caf6f907..24b24a960d 100644
--- a/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
@@ -257,7 +257,7 @@ We create a Relay VM to build and execute the model.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  42.016 seconds)
+   **Total running time of the script:** ( 2 minutes  3.466 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_quantized.py:
diff --git a/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt b/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
index 47330ff6bb..11536f93bf 100644
--- a/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
@@ -5,26 +5,26 @@
 
 Computation times
 =================
-**11:32.973** total execution time for **how_to_deploy_models** files:
+**11:53.613** total execution time for **how_to_deploy_models** files:
 
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``) | 03:40.156 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``) | 03:39.367 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_quantized.py` (``deploy_quantized.py``)                               | 01:42.016 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_quantized.py` (``deploy_quantized.py``)                               | 02:03.466 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized.py` (``deploy_prequantized.py``)                         | 01:25.446 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized.py` (``deploy_prequantized.py``)                         | 01:25.313 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno.py` (``deploy_model_on_adreno.py``)                   | 01:19.527 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno.py` (``deploy_model_on_adreno.py``)                   | 01:19.724 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)           | 00:51.482 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)           | 00:51.778 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_android.py` (``deploy_model_on_android.py``)                 | 00:49.082 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_android.py` (``deploy_model_on_android.py``)                 | 00:49.045 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno_tvmc.py` (``deploy_model_on_adreno_tvmc.py``)         | 00:45.431 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno_tvmc.py` (``deploy_model_on_adreno_tvmc.py``)         | 00:44.992 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_nano.py` (``deploy_model_on_nano.py``)                       | 00:30.221 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_nano.py` (``deploy_model_on_nano.py``)                       | 00:30.243 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)                       | 00:29.605 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)                       | 00:29.679 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_sparse.py` (``deploy_sparse.py``)                                     | 00:00.007 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_sparse.py` (``deploy_sparse.py``)                                     | 00:00.006 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt b/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
index 2a87f6d66f..e2e0fc7b30 100644
--- a/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
@@ -463,7 +463,7 @@ First let us define two helper functions to get the mobilenet model and a cat im
 
  .. code-block:: none
 
-    Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zipa6115f43-09d1-4ce3-8870-1a27fbd3c6d5 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
+    Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zipdf54d405-fcae-450c-91dc-ad980f825cba from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
 
 
 
diff --git a/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt b/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
index e8d6e3762b..0041889d76 100644
--- a/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
@@ -5,14 +5,14 @@
 
 Computation times
 =================
-**00:56.661** total execution time for **how_to_extend_tvm** files:
+**00:55.828** total execution time for **how_to_extend_tvm** files:
 
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``) | 00:52.800 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``) | 00:51.997 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_use_pass_instrument.py` (``use_pass_instrument.py``)           | 00:02.681 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_use_pass_instrument.py` (``use_pass_instrument.py``)           | 00:02.672 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_use_pass_infra.py` (``use_pass_infra.py``)                     | 00:01.173 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_use_pass_infra.py` (``use_pass_infra.py``)                     | 00:01.151 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_low_level_custom_pass.py` (``low_level_custom_pass.py``)       | 00:00.007 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_low_level_custom_pass.py` (``low_level_custom_pass.py``)       | 00:00.008 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt b/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
index 990ab6195b..e2724906cb 100644
--- a/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
@@ -220,10 +220,10 @@ profile the execution time of each passes.
  .. code-block:: none
 
     Printing results of timing profile...
-    InferType: 23997us [23997us] (48.92%; 48.92%)
-    FoldScaleAxis: 25060us [9us] (51.08%; 51.08%)
-            FoldConstant: 25050us [1864us] (51.06%; 99.96%)
-                    InferType: 23187us [23187us] (47.27%; 92.56%)
+    InferType: 23353us [23353us] (47.64%; 47.64%)
+    FoldScaleAxis: 25667us [9us] (52.36%; 52.36%)
+            FoldConstant: 25658us [2134us] (52.34%; 99.96%)
+                    InferType: 23524us [23524us] (47.99%; 91.68%)
 
 
 
@@ -262,10 +262,10 @@ Refer to following sections and :py:func:`tvm.instrument.pass_instrument` for th
  .. code-block:: none
 
     Printing results of timing profile...
-    InferType: 23117us [23117us] (48.42%; 48.42%)
-    FoldScaleAxis: 24624us [6us] (51.58%; 51.58%)
-            FoldConstant: 24618us [1823us] (51.57%; 99.97%)
-                    InferType: 22795us [22795us] (47.75%; 92.59%)
+    InferType: 23119us [23119us] (47.39%; 47.39%)
+    FoldScaleAxis: 25661us [8us] (52.61%; 52.61%)
+            FoldConstant: 25653us [1813us] (52.59%; 99.97%)
+                    InferType: 23840us [23840us] (48.87%; 92.93%)
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt b/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
index a4c36a7382..49a37d96bd 100644
--- a/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
@@ -331,7 +331,7 @@ latency of convolution.
 
  .. code-block:: none
 
-    Convolution: 47.510208 ms
+    Convolution: 53.594974 ms
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt b/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
index 5702768af8..e9d0acd21a 100644
--- a/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
@@ -598,7 +598,7 @@ be able to run on our build server
 
  .. code-block:: none
 
-    conv2d with tensor core: 11.556883 ms
+    conv2d with tensor core: 12.268387 ms
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt b/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
index 53b022d86d..48034c76d3 100644
--- a/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
@@ -134,8 +134,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
 
  .. code-block:: none
 
-    Numpy running time: 0.018083
-    Baseline: 3.314446
+    Numpy running time: 0.018092
+    Baseline: 3.298170
 
 
 
@@ -227,7 +227,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
 
  .. code-block:: none
 
-    Opt1: 0.296930
+    Opt1: 0.297649
 
 
 
@@ -318,7 +318,7 @@ In this tutorial, we chose to vectorize the inner loop row data since it is cach
 
  .. code-block:: none
 
-    Opt2: 0.283093
+    Opt2: 0.280033
 
 
 
@@ -406,7 +406,7 @@ the access pattern for A matrix is more cache friendly.
 
  .. code-block:: none
 
-    Opt3: 0.116886
+    Opt3: 0.119562
 
 
 
@@ -523,7 +523,7 @@ flattening.
 
  .. code-block:: none
 
-    Opt4: 0.107717
+    Opt4: 0.107184
 
 
 
@@ -635,7 +635,7 @@ write to C when all the block results are ready.
 
  .. code-block:: none
 
-    Opt5: 0.112268
+    Opt5: 0.112017
 
 
 
@@ -748,7 +748,7 @@ Furthermore, we can also utilize multi-core processors to do the thread-level pa
 
  .. code-block:: none
 
-    Opt6: 0.132823
+    Opt6: 0.132650
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt b/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
index 1064c24602..93c3bb5ba2 100644
--- a/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
@@ -5,12 +5,12 @@
 
 Computation times
 =================
-**00:34.167** total execution time for **how_to_optimize_operators** files:
+**00:34.028** total execution time for **how_to_optimize_operators** files:
 
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_gemm.py` (``opt_gemm.py``)                       | 00:30.635 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_gemm.py` (``opt_gemm.py``)                       | 00:30.546 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``) | 00:02.078 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``) | 00:02.048 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_cuda.py` (``opt_conv_cuda.py``)             | 00:01.454 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_cuda.py` (``opt_conv_cuda.py``)             | 00:01.434 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
index 121c9d4bc3..de0287fcfc 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
@@ -5,18 +5,18 @@
 
 Computation times
 =================
-**03:33.170** total execution time for **how_to_tune_with_autoscheduler** files:
+**03:33.098** total execution time for **how_to_tune_with_autoscheduler** files:
 
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_x86.py` (``tune_network_x86.py``)             | 01:29.491 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_x86.py` (``tune_network_x86.py``)             | 01:29.073 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_cuda.py` (``tune_network_cuda.py``)           | 01:15.002 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_cuda.py` (``tune_network_cuda.py``)           | 01:14.843 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``) | 00:17.245 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``) | 00:17.760 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_arm.py` (``tune_network_arm.py``)             | 00:15.871 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_arm.py` (``tune_network_arm.py``)             | 00:15.940 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_mali.py` (``tune_network_mali.py``)           | 00:15.457 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_mali.py` (``tune_network_mali.py``)           | 00:15.378 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_sparse_x86.py` (``tune_sparse_x86.py``)               | 00:00.102 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
index aa730c783d..2e4970cc4e 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
@@ -767,7 +767,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 0.350 ms
+    Execution time of this operator: 0.352 ms
 
 
 
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
index f8900b3144..1270f4a845 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
@@ -647,7 +647,7 @@ so we can read the log file and load the best schedules.
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-       8.0976       8.0975       8.1085       8.0868       0.0089   
+       8.1058       8.1094       8.1205       8.0874       0.0137   
                
 
 
@@ -675,7 +675,7 @@ Other Tips
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  15.002 seconds)
+   **Total running time of the script:** ( 1 minutes  14.843 seconds)
 
 
 .. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_network_cuda.py:
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
index 301b32ba4c..91c9877b69 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
@@ -666,7 +666,7 @@ so we can read the log file and load the best schedules.
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      761.3511     759.2872     766.4372     758.3289      3.6176   
+      757.4948     756.7690     759.2098     756.5055      1.2175   
                
 
 
@@ -694,7 +694,7 @@ Other Tips
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  29.491 seconds)
+   **Total running time of the script:** ( 1 minutes  29.073 seconds)
 
 
 .. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_network_x86.py:
diff --git a/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt b/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
index b2914f9c8d..47df593c4b 100644
--- a/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
@@ -5,12 +5,12 @@
 
 Computation times
 =================
-**00:23.771** total execution time for **how_to_tune_with_autotvm** files:
+**00:23.462** total execution time for **how_to_tune_with_autotvm** files:
 
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)           | 00:23.734 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)           | 00:23.423 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``)               | 00:00.021 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``)               | 00:00.023 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_cuda.py` (``tune_relay_cuda.py``)             | 00:00.006 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt b/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
index 3d6f37b1db..37433c1466 100644
--- a/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
@@ -326,7 +326,7 @@ and measure running time.
 
     Best config:
     ,None
-    Time cost of this operator: 0.037135
+    Time cost of this operator: 0.037083
 
 
 
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
index 7be886560d..f131b0409c 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
@@ -360,10 +360,10 @@ Timing the untuned program
     ########## Build without Autotuning ##########
     Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)  
     ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------  
-    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  302.5     98.737   (1, 2, 10, 10, 3)  2       1        [302.5]           
-    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.901     0.947    (1, 6, 10, 10)     1       1        [2.901]           
-    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.967     0.316    (1, 1, 10, 10, 3)  1       1        [0.967]           
-    Total_time                                    -                                             306.368   -        -                  -       -        -                 
+    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  303.5     98.738   (1, 2, 10, 10, 3)  2       1        [303.5]           
+    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.923     0.951    (1, 6, 10, 10)     1       1        [2.923]           
+    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.956     0.311    (1, 1, 10, 10, 3)  1       1        [0.956]           
+    Total_time                                    -                                             307.379   -        -                  -       -        -                 
 
 
 
@@ -428,10 +428,10 @@ Timing the tuned program
     ########## Build with Autotuning ##########
     Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)  
     ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------  
-    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  101.8     97.499   (1, 6, 10, 10, 1)  2       1        [101.8]           
-    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.752     1.678    (1, 6, 10, 10)     1       1        [1.752]           
-    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.859     0.823    (1, 3, 10, 10, 1)  1       1        [0.859]           
-    Total_time                                    -                                             104.411   -        -                  -       -        -                 
+    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  101.6     97.439   (1, 6, 10, 10, 1)  2       1        [101.6]           
+    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.817     1.742    (1, 6, 10, 10)     1       1        [1.817]           
+    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.854     0.819    (1, 3, 10, 10, 1)  1       1        [0.854]           
+    Total_time                                    -                                             104.27    -        -                  -       -        -                 
 
 
 
@@ -439,7 +439,7 @@ Timing the tuned program
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  26.205 seconds)
+   **Total running time of the script:** ( 1 minutes  26.302 seconds)
 
 
 .. _sphx_glr_download_how_to_work_with_microtvm_micro_autotune.py:
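
The two per-node profiles above differ only in whether the collected tuning
records are applied when the module is compiled. A minimal sketch of that
pattern, assuming a tiny conv2d workload similar in shape to the one profiled
above and a hypothetical log file ``autotune_log.json`` from a prior tuning
run; the host ``llvm`` target stands in for the full microTVM flow:

.. code-block:: python

    import numpy as np
    import tvm
    from tvm import autotvm, relay

    # Tiny stand-in model: a single 3->6 channel conv2d on a 10x10 input.
    data = relay.var("data", shape=(1, 3, 10, 10), dtype="float32")
    weight = relay.var("weight", shape=(6, 3, 5, 5), dtype="float32")
    out = relay.nn.conv2d(data, weight, padding=(2, 2), channels=6, kernel_size=(5, 5))
    mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))
    params = {"weight": np.random.uniform(size=(6, 3, 5, 5)).astype("float32")}
    target = tvm.target.Target("llvm")

    # Baseline build: no tuning records, so TVM falls back to default schedules
    # (the "Build without Autotuning" profile).
    with tvm.transform.PassContext(opt_level=3):
        baseline_lib = relay.build(mod, target=target, params=params)

    # Tuned build: the same call wrapped in apply_history_best selects the best
    # measured schedule for each task recorded in the log
    # (the "Build with Autotuning" profile). The log path is an assumption.
    with autotvm.apply_history_best("autotune_log.json"):
        with tvm.transform.PassContext(opt_level=3):
            tuned_lib = relay.build(mod, target=target, params=params)
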
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
index 7296d661c7..8e64ba1026 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
@@ -118,7 +118,7 @@ download a cat image and preprocess it to use as the model input.
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/ao/quantization/utils.py:310: UserWarning: must run observer before calling calculate_qparams. Returning default values.
       warnings.warn(
     Downloading: "https://download.pytorch.org/models/quantized/mobilenet_v2_qnnpack_37f702c5.pth" to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2_qnnpack_37f702c5.pth
-
      0%|          | 0.00/3.42M [00:00<?, ?B/s]
    100%|##########| 3.42M/3.42M [00:00<00:00, 36.2MB/s]
+
      0%|          | 0.00/3.42M [00:00<?, ?B/s]
     61%|######    | 2.09M/3.42M [00:00<00:00, 19.1MB/s]
    100%|##########| 3.42M/3.42M [00:00<00:00, 30.0MB/s]
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/_utils.py:314: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
       device=storage.device,
     /workspace/python/tvm/relay/frontend/pytorch_utils.py:47: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
@@ -326,7 +326,7 @@ Look up prediction top 1 index in 1000 class synset.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  26.374 seconds)
+   **Total running time of the script:** ( 1 minutes  26.805 seconds)
 
 
 .. _sphx_glr_download_how_to_work_with_microtvm_micro_pytorch.py:
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
index 59b5ab6035..807be2b736 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
@@ -217,7 +217,7 @@ take about **2 minutes** to download the Stanford Cars, while COCO 2017 validati
  .. code-block:: none
 
 
-    '/tmp/tmp7no100ul/images/random'
+    '/tmp/tmp2p4dqbop/images/random'
 
 
 
@@ -317,8 +317,8 @@ objects to other stuff? We can display some examples from our datasets using ``m
 
  .. code-block:: none
 
-    /tmp/tmp7no100ul/images/target contains 8144 images
-    /tmp/tmp7no100ul/images/random contains 5000 images
+    /tmp/tmp2p4dqbop/images/target contains 8144 images
+    /tmp/tmp2p4dqbop/images/random contains 5000 images
 
 
 
@@ -493,13 +493,13 @@ the time on our validation set).
  .. code-block:: none
 
     Epoch 1/3
-    328/328 - 41s - loss: 0.2290 - accuracy: 0.9233 - val_loss: 0.1310 - val_accuracy: 0.9558 - 41s/epoch - 125ms/step
+    328/328 - 41s - loss: 0.2254 - accuracy: 0.9230 - val_loss: 0.1422 - val_accuracy: 0.9494 - 41s/epoch - 125ms/step
     Epoch 2/3
-    328/328 - 35s - loss: 0.1034 - accuracy: 0.9618 - val_loss: 0.1087 - val_accuracy: 0.9641 - 35s/epoch - 108ms/step
+    328/328 - 35s - loss: 0.1025 - accuracy: 0.9644 - val_loss: 0.1077 - val_accuracy: 0.9645 - 35s/epoch - 108ms/step
     Epoch 3/3
-    328/328 - 35s - loss: 0.0641 - accuracy: 0.9774 - val_loss: 0.1012 - val_accuracy: 0.9687 - 35s/epoch - 108ms/step
+    328/328 - 35s - loss: 0.0655 - accuracy: 0.9750 - val_loss: 0.0967 - val_accuracy: 0.9705 - 35s/epoch - 108ms/step
 
-    <keras.callbacks.History object at 0x7f8d8c558850>
+    <keras.callbacks.History object at 0x7fb2b4992d30>
 
 
 
@@ -860,7 +860,7 @@ Arduino tutorial for how to do that `on GitHub <https://github.com/guberti/tvm-a
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 4 minutes  33.356 seconds)
+   **Total running time of the script:** ( 4 minutes  26.868 seconds)
 
 
 .. _sphx_glr_download_how_to_work_with_microtvm_micro_train.py:
diff --git a/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
index 139113fdff..d91d5832fc 100644
--- a/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
@@ -5,20 +5,20 @@
 
 Computation times
 =================
-**07:55.277** total execution time for **how_to_work_with_microtvm** files:
+**07:49.074** total execution time for **how_to_work_with_microtvm** files:
 
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_train.py` (``micro_train.py``)           | 04:33.356 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_train.py` (``micro_train.py``)           | 04:26.868 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_pytorch.py` (``micro_pytorch.py``)       | 01:26.374 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_pytorch.py` (``micro_pytorch.py``)       | 01:26.805 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_autotune.py` (``micro_autotune.py``)     | 01:26.205 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_autotune.py` (``micro_autotune.py``)     | 01:26.302 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_aot.py` (``micro_aot.py``)               | 00:11.779 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_aot.py` (``micro_aot.py``)               | 00:11.821 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_tflite.py` (``micro_tflite.py``)         | 00:09.098 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_tflite.py` (``micro_tflite.py``)         | 00:09.042 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_custom_ide.py` (``micro_custom_ide.py``) | 00:08.464 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_custom_ide.py` (``micro_custom_ide.py``) | 00:08.236 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_work_with_microtvm_micro_ethosu.py` (``micro_ethosu.py``)         | 00:00.000 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
index 84bc01f27f..389db55308 100644
--- a/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
@@ -5,14 +5,14 @@
 
 Computation times
 =================
-**00:40.588** total execution time for **how_to_work_with_relay** files:
+**00:40.353** total execution time for **how_to_work_with_relay** files:
 
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_using_pipeline_executor.py` (``using_pipeline_executor.py``) | 00:35.473 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_using_pipeline_executor.py` (``using_pipeline_executor.py``) | 00:35.249 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_using_external_lib.py` (``using_external_lib.py``)           | 00:03.228 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_using_external_lib.py` (``using_external_lib.py``)           | 00:03.274 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_build_gcn.py` (``build_gcn.py``)                             | 00:01.881 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_build_gcn.py` (``build_gcn.py``)                             | 00:01.824 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_work_with_relay_using_relay_viz.py` (``using_relay_viz.py``)                 | 00:00.006 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt b/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
index 5a5789dd1d..2feb028fd5 100644
--- a/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
@@ -278,7 +278,7 @@ The following example customizes CUDA lowering rule for :code:`exp`.
  .. code-block:: none
 
 
-    <function my_cuda_math_rule at 0x7f8c104d91f0>
+    <function my_cuda_math_rule at 0x7fb14837eaf0>
 
 
 
diff --git a/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
index 5451041e64..c0f4e83192 100644
--- a/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
@@ -5,22 +5,22 @@
 
 Computation times
 =================
-**00:06.477** total execution time for **how_to_work_with_schedules** files:
+**00:06.424** total execution time for **how_to_work_with_schedules** files:
 
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_intrin_math.py` (``intrin_math.py``)                 | 00:03.426 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_intrin_math.py` (``intrin_math.py``)                 | 00:03.385 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tensorize.py` (``tensorize.py``)                     | 00:01.258 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tensorize.py` (``tensorize.py``)                     | 00:01.230 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_reduction.py` (``reduction.py``)                     | 00:00.775 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_reduction.py` (``reduction.py``)                     | 00:00.777 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_scan.py` (``scan.py``)                               | 00:00.763 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_scan.py` (``scan.py``)                               | 00:00.770 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_extern_op.py` (``extern_op.py``)                     | 00:00.114 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_extern_op.py` (``extern_op.py``)                     | 00:00.120 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tedd.py` (``tedd.py``)                               | 00:00.059 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tedd.py` (``tedd.py``)                               | 00:00.063 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_work_with_schedules_schedule_primitives.py` (``schedule_primitives.py``) | 00:00.053 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tuple_inputs.py` (``tuple_inputs.py``)               | 00:00.028 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tuple_inputs.py` (``tuple_inputs.py``)               | 00:00.027 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
index fc04426593..5fd8125d0f 100644
--- a/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:34.356** total execution time for **topic_vta_tutorials_autotvm** files:
+**00:34.583** total execution time for **topic_vta_tutorials_autotvm** files:
 
 +---------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``) | 00:34.349 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``) | 00:34.575 | 0.0 MB |
 +---------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_alu_vta.py` (``tune_alu_vta.py``)     | 00:00.007 | 0.0 MB |
 +---------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
index baa38153c7..812d276d23 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
@@ -293,7 +293,7 @@ The compilation steps are:
       warnings.warn(
     /workspace/vta/tutorials/frontend/deploy_classification.py:212: DeprecationWarning: legacy graph executor behavior of producing json / lib / params will be removed in the next release. Please see documents of tvm.contrib.graph_executor.GraphModule for the  new recommended usage.
       graph, lib, params = relay.build(
-    resnet18_v1 inference graph built in 36.86s!
+    resnet18_v1 inference graph built in 36.40s!
 
 
 
@@ -416,7 +416,7 @@ and an input test image.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  4.176 seconds)
+   **Total running time of the script:** ( 1 minutes  3.760 seconds)
 
 
 .. _sphx_glr_download_topic_vta_tutorials_frontend_deploy_classification.py:
diff --git a/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
index 4bf7a9391b..ebc89d79ac 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
@@ -337,7 +337,7 @@ The compilation steps are:
 
     /workspace/python/tvm/relay/build_module.py:345: DeprecationWarning: Please use input parameter mod (tvm.IRModule) instead of deprecated parameter mod (tvm.relay.function.Function)
       warnings.warn(
-    yolov3-tiny inference graph built in 24.90s!
+    yolov3-tiny inference graph built in 25.12s!
 
 
 
@@ -447,7 +447,7 @@ Download test image
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  8.251 seconds)
+   **Total running time of the script:** ( 1 minutes  8.513 seconds)
 
 
 .. _sphx_glr_download_topic_vta_tutorials_frontend_deploy_detection.py:
diff --git a/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
index c1eecb4f74..d902e4deb6 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**02:12.427** total execution time for **topic_vta_tutorials_frontend** files:
+**02:12.273** total execution time for **topic_vta_tutorials_frontend** files:
 
 +------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_detection.py` (``deploy_detection.py``)           | 01:08.251 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_detection.py` (``deploy_detection.py``)           | 01:08.513 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``) | 01:04.176 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``) | 01:03.760 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
index a782d3dcbf..e34531ff77 100644
--- a/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:03.408** total execution time for **topic_vta_tutorials_optimize** files:
+**00:03.451** total execution time for **topic_vta_tutorials_optimize** files:
 
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)         | 00:02.856 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)         | 00:02.897 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``) | 00:00.553 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``) | 00:00.554 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
index e44c3ef280..f778e60a49 100644
--- a/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:00.952** total execution time for **topic_vta_tutorials** files:
+**00:00.969** total execution time for **topic_vta_tutorials** files:
 
 +---------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``) | 00:00.487 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``) | 00:00.498 | 0.0 MB |
 +---------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_vta_get_started.py` (``vta_get_started.py``) | 00:00.465 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_vta_get_started.py` (``vta_get_started.py``) | 00:00.471 | 0.0 MB |
 +---------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt b/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
index b69ec883e3..c10fb9caae 100644
--- a/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
+++ b/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
@@ -207,6 +207,13 @@ trials, we can load the best schedule from the log file and apply it.
 
 
 
+.. rst-class:: sphx-glr-script-out
+
+ .. code-block:: none
+
+    *E
+
+
 
 
 
@@ -318,7 +325,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 96.577 ms
+    Execution time of this operator: 98.264 ms
 
 
 
@@ -434,7 +441,7 @@ operations.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  40.584 seconds)
+   **Total running time of the script:** ( 1 minutes  51.453 seconds)
 
 
 .. _sphx_glr_download_tutorial_auto_scheduler_matmul_x86.py:
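
The measured operator time above comes from applying the best record found in
the tuning log. A compact sketch of that flow, assuming an illustrative plain
matmul workload, a log path of ``matmul.json``, and a small trial budget:

.. code-block:: python

    import tvm
    from tvm import auto_scheduler, te


    @auto_scheduler.register_workload
    def matmul(N, L, M, dtype):
        A = te.placeholder((N, L), name="A", dtype=dtype)
        B = te.placeholder((L, M), name="B", dtype=dtype)
        k = te.reduce_axis((0, L), name="k")
        C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
        return [A, B, C]


    target = tvm.target.Target("llvm")
    task = auto_scheduler.SearchTask(
        func=matmul, args=(1024, 1024, 1024, "float32"), target=target
    )

    log_file = "matmul.json"  # assumed path
    tune_option = auto_scheduler.TuningOptions(
        num_measure_trials=10,  # small budget for illustration
        measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
        verbose=0,
    )
    task.tune(tune_option)

    # Load the best schedule from the log, then build and time it.
    sch, args = task.apply_best(log_file)
    func = tvm.build(sch, args, target)
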
diff --git a/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt b/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
index bf06d7d45d..cd4b196695 100644
--- a/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
+++ b/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
@@ -454,16 +454,173 @@ reduce variance, we take 5 measurements and average them.
     waiting for device...
     device available
     Get devices for measurement successfully!
-    No: 1   GFLOPS: 1.02/1.02       result: MeasureResult(costs=(0.2624888102,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.4670820236206055, timestamp=1683740196.5253062)       [('tile_y', [-1, 256]), ('tile_x', [-1, 2])],None,18
-    No: 2   GFLOPS: 10.71/10.71     result: MeasureResult(costs=(0.0250643258,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6679961681365967, timestamp=1683740197.1942677)       [('tile_y', [-1, 8]), ('tile_x', [-1, 16])],None,43
-    No: 3   GFLOPS: 4.98/10.71      result: MeasureResult(costs=(0.05387433740000001,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.1240603923797607, timestamp=1683740198.3227458)        [('tile_y', [-1, 4]), ('tile_x', [-1, 2])],None,12
-    No: 4   GFLOPS: 10.06/10.71     result: MeasureResult(costs=(0.0266928622,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6939327716827393, timestamp=1683740199.0207431)       [('tile_y', [-1, 2]), ('tile_x', [-1, 32])],None,51
-    No: 5   GFLOPS: 10.42/10.71     result: MeasureResult(costs=(0.025769643200000004,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7046566009521484, timestamp=1683740199.879057)        [('tile_y', [-1, 2]), ('tile_x', [-1, 16])],None,41
-    No: 6   GFLOPS: 10.18/10.71     result: MeasureResult(costs=(0.026357603999999996,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7504997253417969, timestamp=1683740200.5699663)       [('tile_y', [-1, 1]), ('tile_x', [-1, 64])],None,60
-    No: 7   GFLOPS: 4.53/10.71      result: MeasureResult(costs=(0.0592993322,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.221484661102295, timestamp=1683740201.78052)  [('tile_y', [-1, 8]), ('tile_x', [-1, 2])],None,13
-    No: 8   GFLOPS: 11.76/11.76     result: MeasureResult(costs=(0.022832740799999997,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6648235321044922, timestamp=1683740202.4155614)       [('tile_y', [-1, 8]), ('tile_x', [-1, 256])],None,83
-    No: 9   GFLOPS: 10.54/11.76     result: MeasureResult(costs=(0.025470962800000004,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6552300453186035, timestamp=1683740203.180015)        [('tile_y', [-1, 2]), ('tile_x', [-1, 256])],None,81
-    No: 10  GFLOPS: 0.51/11.76      result: MeasureResult(costs=(0.5307238545999999,), error_no=MeasureErrorNo.NO_ERROR, all_cost=8.736793756484985, timestamp=1683740211.9417796)  [('tile_y', [-1, 256]), ('tile_x', [-1, 1])],None,8
+    No: 1   GFLOPS: 0.00/0.00       result: Traceback (most recent call last):
+      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 742, in __call__
+        yield remote, remote.load_module(os.path.split(build_result.filename)[1])
+      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 706, in run_through_rpc
+        costs = time_f(*args).results
+      File "/workspace/python/tvm/runtime/module.py", line 399, in evaluator
+        blob = feval(*args)
+      File "tvm/_ffi/_cython/./packed_func.pxi", line 331, in tvm._ffi._cy3.core.PackedFuncBase.__call__
+      File "tvm/_ffi/_cython/./packed_func.pxi", line 262, in tvm._ffi._cy3.core.FuncCall
+      File "tvm/_ffi/_cython/./packed_func.pxi", line 251, in tvm._ffi._cy3.core.FuncCall3
+      File "tvm/_ffi/_cython/./base.pxi", line 181, in tvm._ffi._cy3.core.CHECK_CALL
+    tvm._ffi.base.TVMError: Traceback (most recent call last):
+      4: TVMFuncCall
+            at /workspace/src/runtime/c_runtime_api.cc:477
+      3: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+            at /workspace/include/tvm/runtime/packed_func.h:1217
+      2: tvm::runtime::RPCWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+            at /workspace/src/runtime/rpc/rpc_module.cc:129
+      1: tvm::runtime::RPCClientSession::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)> const&)
+            at /workspace/src/runtime/rpc/rpc_endpoint.cc:1012
+      0: tvm::runtime::RPCEndpoint::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)>)
+            at /workspace/src/runtime/rpc/rpc_endpoint.cc:804
+      File "/workspace/src/runtime/rpc/rpc_endpoint.cc", line 804
+    TVMError: 
+    ---------------------------------------------------------------
+    An error occurred during the execution of TVM.
+    For more information, please see: https://tvm.apache.org/docs/errors.html
+    ---------------------------------------------------------------
+      Check failed: (code == RPCCode::kReturn) is false: code=kShutdown
+
+    During handling of the above exception, another exception occurred:
+
+    Traceback (most recent call last):
+      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 706, in run_through_rpc
+        costs = time_f(*args).results
+      File "/usr/lib/python3.8/contextlib.py", line 131, in __exit__
+        self.gen.throw(type, value, traceback)
+      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 746, in __call__
+        remote.remove(build_result.filename)
+      File "/workspace/python/tvm/rpc/client.py", line 145, in remove
+        self._remote_funcs["remove"] = self.get_function("tvm.rpc.server.remove")
+      File "/workspace/python/tvm/rpc/client.py", line 73, in get_function
+        return self._sess.get_function(name)
+      File "/workspace/python/tvm/runtime/module.py", line 177, in get_function
+        check_call(
+      File "/workspace/python/tvm/_ffi/base.py", line 348, in check_call
+        raise get_last_ffi_error()
+    tvm._ffi.base.TVMError: Traceback (most recent call last):
+      54: 0xffffffffffffffff
+      53: _start
+      52: __libc_start_main
+      51: 0x00007f9ade2d2d8f
+      50: Py_BytesMain
+      49: Py_RunMain
+      48: 0x00000000005f3021
+      47: PyObject_Call
+      46: _PyFunction_Vectorcall
+      45: _PyEval_EvalCodeWithName
+      44: _PyEval_EvalFrameDefault
+      43: _PyFunction_Vectorcall
+      42: _PyEval_EvalCodeWithName
+      41: _PyEval_EvalFrameDefault
+      40: 0x000000000051546f
+      39: 0x00000000005dabd0
+      38: PyEval_EvalCode
+      37: _PyEval_EvalCodeWithName
+      36: _PyEval_EvalFrameDefault
+      35: _PyFunction_Vectorcall
+      34: _PyEval_EvalCodeWithName
+      33: _PyEval_EvalFrameDefault
+      32: PyObject_Call
+      31: _PyFunction_Vectorcall
+      30: _PyEval_EvalCodeWithName
+      29: _PyEval_EvalFrameDefault
+      28: 0x0000000000521f82
+      27: _PyFunction_Vectorcall
+      26: _PyEval_EvalFrameDefault
+      25: 0x000000000052f0a9
+      24: 0x0000000000626e1c
+      23: 0x0000000000626f00
+      22: 0x00000000005291f0
+      21: _PyEval_EvalFrameDefault
+      20: _PyFunction_Vectorcall
+      19: _PyEval_EvalFrameDefault
+      18: _PyFunction_Vectorcall
+      17: _PyEval_EvalFrameDefault
+      16: _PyFunction_Vectorcall
+      15: _PyEval_EvalCodeWithName
+      14: _PyEval_EvalFrameDefault
+      13: _PyObject_MakeTpCall
+      12: 0x00007f9ade0ee429
+      11: _ctypes_callproc
+      10: 0x00007f9adde4f492
+      9: 0x00007f9adde52e2d
+      8: TVMModGetFunction
+            at /workspace/src/runtime/c_runtime_api.cc:408
+      7: tvm::runtime::ModuleNode::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool)
+            at /workspace/src/runtime/module.cc:66
+      6: tvm::runtime::RPCModuleNode::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)
+            at /workspace/src/runtime/rpc/rpc_module.cc:187
+      5: tvm::runtime::RPCClientSession::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
+            at /workspace/src/runtime/rpc/rpc_endpoint.cc:1007
+      4: tvm::runtime::TVMRetValue tvm::runtime::RPCEndpoint::SysCallRemote<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>(tvm::runtime::RPCCode, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
+            at /workspace/src/runtime/rpc/rpc_endpoint.h:223
+      3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>(int&&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
+            at /workspace/include/tvm/runtime/packed_func.h:1621
+      2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+            at /workspace/include/tvm/runtime/packed_func.h:1217
+      1: Call
+            at /workspace/include/tvm/runtime/packed_func.h:1213
+      0: operator()
+            at /workspace/src/runtime/rpc/rpc_endpoint.cc:684
+      File "/workspace/src/runtime/rpc/rpc_endpoint.cc", line 684
+    TVMError: 
+    ---------------------------------------------------------------
+    An error occurred during the execution of TVM.
+    For more information, please see: https://tvm.apache.org/docs/errors.html
+    ---------------------------------------------------------------
+      Check failed: (code == RPCCode::kReturn) is false: code=1
+
+    Traceback (most recent call last):
+      54: 0xffffffffffffffff
+      53: _start
+      52: __libc_start_main
+      51: 0x00007f9ade2d2d8f
+      50: Py_BytesMain
+      49: Py_RunMain
+      48: 0x00000000005f3021
+      47: PyObject_Call
+      46: _PyFunction_Vectorcall
+      45: _PyEval_EvalCodeWithName
+      44: _PyEval_EvalFrameDefault
+      43: _PyFunction_Vectorcall
+      42: _PyEval_EvalCodeWithName
+      41: _PyEval_EvalFrameDefault
+      40: 0x000000000051546f
+      39: 0x00000000005dabd0
+      38: PyEval_EvalCode
+      37: _PyEval_EvalCodeWithName
+      36: _PyEval_EvalFrameDefault
+      35: _PyFunction_Vectorcall
+      34: _PyEval_EvalCodeWithName
+      33: _PyEval_EvalFrameDefault
+      32: PyObject_Call
+      31: _PyFunction_Vectorcall
+      30: _PyEval_EvalCodeWithName
+      29: _PyEval_EvalFrameDefault
+      28: 0x0000000000521f82
+      27: _PyFunction_Vectorcall
+      26: _PyEval_EvalFrameDefault
+      25: 0x000000000052f0a9
+      24: 0x0000000000626e1c
+      23: 0x0000000000626f00
+      22: 0x00000000005291f0
+      21: _PyEval_EvalFrameDefault
+      20: _PyFunction_Vectorcall
+      19: _PyEval_EvalFrameDefault
+      18: _PyFunction_Vectorcal     [('tile_y', [-1, 512]), ('tile_x', [-1, 1])],None,9
+    No: 2   GFLOPS: 8.06/8.06       result: MeasureResult(costs=(0.033303479399999995,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7889261245727539, timestamp=1683750991.224603)        [('tile_y', [-1, 1]), ('tile_x', [-1, 16])],None,40
+    No: 3   GFLOPS: 2.03/8.06       result: MeasureResult(costs=(0.13210675519999998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.3505711555480957, timestamp=1683750993.597124) [('tile_y', [-1, 256]), ('tile_x', [-1, 4])],None,28
+    No: 4   GFLOPS: 11.41/11.41     result: MeasureResult(costs=(0.0235300276,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6041018962860107, timestamp=1683750994.2387557)       [('tile_y', [-1, 32]), ('tile_x', [-1, 512])],None,95
+    No: 5   GFLOPS: 9.37/11.41      result: MeasureResult(costs=(0.0286352852,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7317562103271484, timestamp=1683750995.1117728)       [('tile_y', [-1, 2]), ('tile_x', [-1, 64])],None,61
+    No: 6   GFLOPS: 3.76/11.41      result: MeasureResult(costs=(0.0713379094,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.4077887535095215, timestamp=1683750996.5105119)       [('tile_y', [-1, 32]), ('tile_x', [-1, 8])],None,35
+    No: 7   GFLOPS: 16.69/16.69     result: MeasureResult(costs=(0.0160871336,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.5407025814056396, timestamp=1683750997.0280185)       [('tile_y', [-1, 16]), ('tile_x', [-1, 64])],None,64
+    No: 8   GFLOPS: 11.22/16.69     result: MeasureResult(costs=(0.0239241816,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6765973567962646, timestamp=1683750997.6746368)       [('tile_y', [-1, 128]), ('tile_x', [-1, 256])],None,87
+    No: 9   GFLOPS: 1.98/16.69      result: MeasureResult(costs=(0.135866018,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.3858609199523926, timestamp=1683751000.1886578)        [('tile_y', [-1, 4]), ('tile_x', [-1, 1])],None,2
+    No: 10  GFLOPS: 12.55/16.69     result: MeasureResult(costs=(0.0213917062,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.5771560668945312, timestamp=1683751000.7944303)       [('tile_y', [-1, 32]), ('tile_x', [-1, 128])],None,75
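
Each ``No:`` line above records one sampled configuration: the ``tile_y`` and
``tile_x`` split factors, the averaged measured cost, and the resulting GFLOPS.
A sketch of the kind of tunable template and tuning loop that produces such a
log, with sizes of 512 consistent with the costs and GFLOPS shown, and with the
template name, log path, and trial budget as illustrative assumptions:

.. code-block:: python

    import tvm
    from tvm import autotvm, te


    @autotvm.template("tutorial/matmul_sketch")  # hypothetical template name
    def matmul(N, L, M, dtype):
        A = te.placeholder((N, L), name="A", dtype=dtype)
        B = te.placeholder((L, M), name="B", dtype=dtype)
        k = te.reduce_axis((0, L), name="k")
        C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
        s = te.create_schedule(C.op)

        y, x = s[C].op.axis
        k = s[C].op.reduce_axis[0]

        # The two knobs that appear in the log: 2-way splits of the y and x loops.
        cfg = autotvm.get_config()
        cfg.define_split("tile_y", y, num_outputs=2)
        cfg.define_split("tile_x", x, num_outputs=2)

        yo, yi = cfg["tile_y"].apply(s, C, y)
        xo, xi = cfg["tile_x"].apply(s, C, x)
        s[C].reorder(yo, xo, k, yi, xi)
        return s, [A, B, C]


    N, L, M = 512, 512, 512
    task = autotvm.task.create(
        "tutorial/matmul_sketch", args=(N, L, M, "float32"), target="llvm"
    )
    measure_option = autotvm.measure_option(
        builder="local", runner=autotvm.LocalRunner(number=5)  # 5 runs averaged
    )
    tuner = autotvm.tuner.RandomTuner(task)
    tuner.tune(
        n_trial=10,
        measure_option=measure_option,
        callbacks=[autotvm.callback.log_to_file("matmul.log")],
    )
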
 
 
 
diff --git a/docs/_sources/tutorial/autotvm_relay_x86.rst.txt b/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
index 249a6b2b0b..62238c399f 100644
--- a/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
+++ b/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
@@ -311,7 +311,7 @@ standard deviation.
 
  .. code-block:: none
 
-    {'mean': 495.85341244997835, 'median': 495.2317347000644, 'std': 4.122248126917765}
+    {'mean': 493.6643143499532, 'median': 494.8347487999854, 'std': 3.130444482823425}
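
A sketch of how a summary like this is typically gathered, assuming ``module``
is a graph-executor module already created from the compiled library; each
repeat times ``timing_number`` runs and the figures are milliseconds per
inference:

.. code-block:: python

    import timeit

    import numpy as np

    # `module` is assumed to be a tvm.contrib.graph_executor.GraphModule
    # created beforehand from the compiled library and device.
    timing_number = 10
    timing_repeat = 10
    times_ms = (
        np.array(
            timeit.Timer(lambda: module.run()).repeat(
                repeat=timing_repeat, number=timing_number
            )
        )
        * 1000
        / timing_number
    )
    print({"mean": np.mean(times_ms), "median": np.median(times_ms), "std": np.std(times_ms)})
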
 
 
 
@@ -582,30 +582,30 @@ the tuning data to.
 
  .. code-block:: none
 
-
    [Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  1/25]  Current/Best:   11.67/  11.67 GFLOPS | Progress: (4/20) | 13.21 s
    [Task  1/25]  Current/Best:   11.10/  23.94 GFLOPS | Progress: (8/20) | 16.62 s
    [Task  1/25]  Current/Best:   13.63/  23.98 GFLOPS | Progress: (12/20) | 20.51 s
    [Task  1/25]  Current/Best:   12.93/  23.98 GFLOPS | Progress: (16/20) | 22.77 s
    [Task  1/25]  Current/Best:   20.64/  23.98 GFLOPS | Progress: (20/20) | 25.89 s Done.
-
    [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  2/25]  Current/Best:   19.47/  19.47 GFLOPS | Progress: (4/20) | 4.51 s
    [Task  2/25]  Current/Best:    8.08/  19.47 GFLOPS | Progress: (8/20) | 6.17 s
    [Task  2/25]  Current/Best:   19.80/  19.80 GFLOPS | Progress: (12/20) | 7.61 s
    [Task  2/25]  Current/Best:   20.48/  20.48 GFLOPS | Progress: (16/20) | 9.01 s
    [Task  2/25]  Current/Best:    6.40/  20.48 GFLOPS | Progress: (20/20) | 10.37 s Done.
-
    [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  3/25]  Current/Best:   19.31/  19.31 GFLOPS | Progress: (4/20) | 5.38 s
    [Task  3/25]  Current/Best:   19.16/  19.31 GFLOPS | Progress: (8/20) | 8.08 s
    [Task  3/25]  Current/Best:   12.73/  20.20 GFLOPS | Progress: (12/20) | 10.39 s
    [Task  3/25]  Current/Best:   12.59/  20.20 GFLOPS | Progress: (16/20) | 12.52 s
    [Task  3/25]  Current/Best:   21.89/  21.89 GFLOPS | Progress: (20/20) | 14.62 s Done.
-
    [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  4/25]  Current/Best:   20.34/  20.34 GFLOPS | Progress: (4/20) | 4.87 s
    [Task  4/25]  Current/Best:   14.09/  20.34 GFLOPS | Progress: (8/20) | 9.00 s
    [Task  4/25]  Current/Best:    7.29/  20.34 GFLOPS | Progress: (12/20) | 13.24 s
    [Task  4/25]  Current/Best:    9.49/  20.34 GFLOPS | Progress: (16/20) | 15.21 s
    [Task  4/25]  Current/Best:   14.97/  21.55 GFLOPS | Progress: (20/20) | 21.55 s Done.
-
    [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  5/25]  Current/Best:   11.43/  13.59 GFLOPS | Progress: (4/20) | 5.50 s
    [Task  5/25]  Current/Best:   17.89/  17.89 GFLOPS | Progress: (8/20) | 7.79 s
    [Task  5/25]  Current/Best:   16.28/  23.41 GFLOPS | Progress: (12/20) | 9.37 s
    [Task  5/25]  Current/Best:   17.68/  23.41 GFLOPS | Progress: (16/20) | 11.55 s
    [Task  5/25]  Current/Best:   19.02/  23.41 GFLOPS | Progress: (20/20) | 13.86 s Done.
-
    [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  6/25]  Current/Best:   11.82/  16.23 GFLOPS | Progress: (4/20) | 6.39 s
    [Task  6/25]  Current/Best:   12.77/  16.23 GFLOPS | Progress: (8/20) | 9.66 s
    [Task  6/25]  Current/Best:    3.06/  16.23 GFLOPS | Progress: (12/20) | 13.07 s
    [Task  6/25]  Current/Best:   20.45/  20.45 GFLOPS | Progress: (16/20) | 15.64 s
    [Task  6/25]  Current/Best:   14.18/  20.45 GFLOPS | Progress: (20/20) | 18.24 s Done.
-
    [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  7/25]  Current/Best:    7.05/  17.37 GFLOPS | Progress: (4/20) | 5.04 s
    [Task  7/25]  Current/Best:   14.90/  19.06 GFLOPS | Progress: (8/20) | 7.36 s
    [Task  7/25]  Current/Best:   12.26/  19.06 GFLOPS | Progress: (12/20) | 9.75 s
    [Task  7/25]  Current/Best:   22.99/  22.99 GFLOPS | Progress: (16/20) | 11.84 s
    [Task  7/25]  Current/Best:   10.82/  22.99 GFLOPS | Progress: (20/20) | 14.63 s Done.
-
    [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  8/25]  Current/Best:    7.66/  13.61 GFLOPS | Progress: (4/20) | 9.14 s
    [Task  8/25]  Current/Best:   10.90/  15.56 GFLOPS | Progress: (8/20) | 11.52 s
    [Task  8/25]  Current/Best:   12.89/  15.56 GFLOPS | Progress: (12/20) | 15.84 s
    [Task  8/25]  Current/Best:    9.62/  15.56 GFLOPS | Progress: (16/20) | 19.89 s
    [Task  8/25]  Current/Best:    3.88/  15.56 GFLOPS | Progress: (20/20) | 24.70 s Done.
-
    [Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  9/25]  Current/Best:   19.96/  19.96 GFLOPS | Progress: (4/20) | 4.32 s
    [Task  9/25]  Current/Best:   15.73/  19.96 GFLOPS | Progress: (8/20) | 15.38 s
    [Task  9/25]  Current/Best:   18.27/  19.96 GFLOPS | Progress: (12/20) | 18.27 s
    [Task  9/25]  Current/Best:   12.29/  19.96 GFLOPS | Progress: (16/20) | 21.10 s
    [Task  9/25]  Current/Best:   16.03/  21.28 GFLOPS | Progress: (20/20) | 23.04 s
    [Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
-
    [Task 10/25]  Current/Best:    5.78/  18.10 GFLOPS | Progress: (4/20) | 4.78 s
    [Task 10/25]  Current/Best:   13.59/  18.10 GFLOPS | Progress: (8/20) | 6.57 s
    [Task 10/25]  Current/Best:   14.90/  18.10 GFLOPS | Progress: (12/20) | 9.45 s
    [Task 10/25]  Current/Best:   14.46/  18.10 GFLOPS | Progress: (16/20) | 11.93 s
    [Task 10/25]  Current/Best:   14.16/  18.10 GFLOPS | Progress: (20/20) | 14.17 s Done.
-
    [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 11/25]  Current/Best:   12.29/  20.05 GFLOPS | Progress: (4/20) | 5.11 s
    [Task 11/25]  Current/Best:   10.34/  22.44 GFLOPS | Progress: (8/20) | 7.08 s
    [Task 11/25]  Current/Best:   19.24/  22.44 GFLOPS | Progress: (12/20) | 9.75 s
    [Task 11/25]  Current/Best:   20.80/  22.91 GFLOPS | Progress: (16/20) | 12.15 s
    [Task 11/25]  Current/Best:    9.36/  23.53 GFLOPS | Progress: (20/20) | 14.23 s Done.
-
    [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 12/25]  Current/Best:   10.84/  11.39 GFLOPS | Progress: (4/20) | 6.29 s
    [Task 12/25]  Current/Best:    4.58/  15.96 GFLOPS | Progress: (8/20) | 9.04 s
    [Task 12/25]  Current/Best:   12.62/  15.96 GFLOPS | Progress: (12/20) | 11.50 s
    [Task 12/25]  Current/Best:    9.08/  16.53 GFLOPS | Progress: (16/20) | 14.40 s
    [Task 12/25]  Current/Best:   16.58/  16.58 GFLOPS | Progress: (20/20) | 17.84 s Done.
-
    [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 13/25]  Current/Best:    6.17/  20.69 GFLOPS | Progress: (4/20) | 6.35 s
    [Task 13/25]  Current/Best:   19.18/  20.69 GFLOPS | Progress: (8/20) | 8.91 s
    [Task 13/25]  Current/Best:   18.44/  20.69 GFLOPS | Progress: (12/20) | 11.73 s
    [Task 13/25]  Current/Best:    9.68/  20.69 GFLOPS | Progress: (16/20) | 17.32 s
    [Task 13/25]  Current/Best:    3.08/  22.73 GFLOPS | Progress: (20/20) | 21.39 s Done.
-
    [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 14/25]  Current/Best:   10.45/  13.74 GFLOPS | Progress: (4/20) | 5.30 s
    [Task 14/25]  Current/Best:    7.20/  17.73 GFLOPS | Progress: (8/20) | 12.71 s
    [Task 14/25]  Current/Best:   13.04/  17.73 GFLOPS | Progress: (12/20) | 15.84 s
    [Task 14/25]  Current/Best:   20.63/  20.63 GFLOPS | Progress: (16/20) | 18.49 s
    [Task 14/25]  Current/Best:   17.57/  20.63 GFLOPS | Progress: (20/20) | 25.81 s Done.
-
    [Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 15/25]  Current/Best:   18.97/  18.97 GFLOPS | Progress: (4/20) | 7.18 s
    [Task 15/25]  Current/Best:   19.34/  20.58 GFLOPS | Progress: (8/20) | 18.27 s
    [Task 15/25]  Current/Best:   10.33/  20.58 GFLOPS | Progress: (12/20) | 21.23 s
    [Task 15/25]  Current/Best:   14.29/  20.58 GFLOPS | Progress: (16/20) | 23.15 s
    [Task 15/25]  Current/Best:    7.12/  22.43 GFLOPS | Progress: (20/20) | 30.28 s
    [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 16/25]  Current/Best:    5.68/  16.23 GFLOPS | Progress: (4/20) | 4.63 s
    [Task 16/25]  Current/Best:   19.28/  19.28 GFLOPS | Progress: (8/20) | 6.80 s
    [Task 16/25]  Current/Best:   14.31/  19.28 GFLOPS | Progress: (12/20) | 9.12 s
    [Task 16/25]  Current/Best:   12.45/  19.28 GFLOPS | Progress: (16/20) | 11.27 s
   [Task 16/25]  Current/Best:    6.50/  19.28 GFLOPS | Progress: (20/20) | 13.54 s Done.
-
    [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 17/25]  Current/Best:   23.27/  23.27 GFLOPS | Progress: (4/20) | 5.36 s
    [Task 17/25]  Current/Best:   11.68/  23.27 GFLOPS | Progress: (8/20) | 8.26 s
    [Task 17/25]  Current/Best:    1.56/  23.27 GFLOPS | Progress: (12/20) | 11.85 s
    [Task 17/25]  Current/Best:    9.76/  23.41 GFLOPS | Progress: (16/20) | 16.37 s
    [Task 17/25]  Current/Best:   11.23/  23.41 GFLOPS | Progress: (20/20) | 20.02 s Done.
-
    [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 18/25]  Current/Best:    3.93/  18.68 GFLOPS | Progress: (4/20) | 10.30 s
    [Task 18/25]  Current/Best:   15.37/  18.68 GFLOPS | Progress: (8/20) | 13.34 s
    [Task 18/25]  Current/Best:   15.77/  19.17 GFLOPS | Progress: (12/20) | 15.62 s
    [Task 18/25]  Current/Best:   15.15/  19.17 GFLOPS | Progress: (16/20) | 17.98 s
    [Task 18/25]  Current/Best:    6.26/  19.17 GFLOPS | Progress: (20/20) | 20.61 s Done.
-
    [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 19/25]  Current/Best:    7.83/  12.29 GFLOPS | Progress: (4/20) | 5.61 s
    [Task 19/25]  Current/Best:   20.90/  20.90 GFLOPS | Progress: (8/20) | 9.39 s
    [Task 19/25]  Current/Best:    6.66/  20.90 GFLOPS | Progress: (12/20) | 13.59 s
    [Task 19/25]  Current/Best:    5.94/  20.90 GFLOPS | Progress: (16/20) | 19.41 s
    [Task 19/25]  Current/Best:    9.20/  20.90 GFLOPS | Progress: (20/20) | 23.77 s Done.
-
    [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 20/25]  Current/Best:   12.05/  21.42 GFLOPS | Progress: (4/20) | 5.22 s
    [Task 20/25]  Current/Best:    2.70/  21.42 GFLOPS | Progress: (8/20) | 8.87 s
    [Task 20/25]  Current/Best:    2.68/  21.42 GFLOPS | Progress: (12/20) | 21.15 s
    [Task 20/25]  Current/Best:   12.52/  21.42 GFLOPS | Progress: (16/20) | 25.92 s
    [Task 20/25]  Current/Best:    5.96/  21.42 GFLOPS | Progress: (20/20) | 28.31 s
    [Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 21/25]  Current/Best:   18.80/  18.80 GFLOPS | Progress: (4/20) | 5.47 s
    [Task 21/25]  Current/Best:    7.21/  18.80 GFLOPS | Progress: (8/20) | 8.29 s
    [Task 21/25]  Current/Best:    1.61/  18.80 GFLOPS | Progress: (12/20) | 19.74 s
    [Task 21/25]  Current/Best:    9.15/  18.80 GFLOPS | Progress: (16/20) | 23.50 s
   [Task 21/25]  Current/Best:    2.74/  18.80 GFLOPS | Progress: (20/20) | 34.90 s
    [Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
+
    [Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  1/25]  Current/Best:   17.63/  17.63 GFLOPS | Progress: (4/20) | 10.21 s
    [Task  1/25]  Current/Best:   14.17/  21.69 GFLOPS | Progress: (8/20) | 12.92 s
    [Task  1/25]  Current/Best:   13.35/  24.22 GFLOPS | Progress: (12/20) | 15.64 s
    [Task  1/25]  Current/Best:   14.84/  24.22 GFLOPS | Progress: (16/20) | 18.95 s
    [Task  1/25]  Current/Best:   11.99/  24.22 GFLOPS | Progress: (20/20) | 21.72 s Done.
+
    [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  2/25]  Current/Best:   16.20/  16.20 GFLOPS | Progress: (4/20) | 4.80 s
    [Task  2/25]  Current/Best:   11.30/  16.20 GFLOPS | Progress: (8/20) | 6.57 s
    [Task  2/25]  Current/Best:   15.00/  16.20 GFLOPS | Progress: (12/20) | 8.35 s
    [Task  2/25]  Current/Best:   13.84/  16.20 GFLOPS | Progress: (16/20) | 10.37 s
    [Task  2/25]  Current/Best:   17.71/  17.71 GFLOPS | Progress: (20/20) | 11.83 s Done.
+
    [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  3/25]  Current/Best:    5.36/  22.51 GFLOPS | Progress: (4/20) | 5.53 s
    [Task  3/25]  Current/Best:   15.06/  22.51 GFLOPS | Progress: (8/20) | 7.94 s
    [Task  3/25]  Current/Best:   16.98/  22.51 GFLOPS | Progress: (12/20) | 9.95 s
    [Task  3/25]  Current/Best:    1.63/  24.22 GFLOPS | Progress: (16/20) | 13.36 s
    [Task  3/25]  Current/Best:    9.07/  24.22 GFLOPS | Progress: (20/20) | 15.71 s Done.
+
    [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  4/25]  Current/Best:   14.51/  20.45 GFLOPS | Progress: (4/20) | 10.52 s
    [Task  4/25]  Current/Best:   16.53/  20.45 GFLOPS | Progress: (8/20) | 13.46 s
    [Task  4/25]  Current/Best:   12.21/  20.45 GFLOPS | Progress: (12/20) | 16.09 s
    [Task  4/25]  Current/Best:   14.53/  20.45 GFLOPS | Progress: (16/20) | 18.18 s
    [Task  4/25]  Current/Best:   17.87/  20.45 GFLOPS | Progress: (20/20) | 20.05 s Done.
+
    [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  5/25]  Current/Best:   13.33/  19.62 GFLOPS | Progress: (4/20) | 5.15 s
    [Task  5/25]  Current/Best:   10.99/  19.62 GFLOPS | Progress: (8/20) | 7.44 s
    [Task  5/25]  Current/Best:   12.99/  19.62 GFLOPS | Progress: (12/20) | 9.73 s
    [Task  5/25]  Current/Best:    9.33/  20.57 GFLOPS | Progress: (16/20) | 12.08 s
    [Task  5/25]  Current/Best:    4.85/  20.57 GFLOPS | Progress: (20/20) | 14.09 s Done.
+
    [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  6/25]  Current/Best:   13.91/  15.06 GFLOPS | Progress: (4/20) | 6.56 s
    [Task  6/25]  Current/Best:   11.59/  19.38 GFLOPS | Progress: (8/20) | 9.17 s
    [Task  6/25]  Current/Best:   15.66/  19.38 GFLOPS | Progress: (12/20) | 13.93 s
    [Task  6/25]  Current/Best:   15.06/  19.38 GFLOPS | Progress: (16/20) | 15.95 s
    [Task  6/25]  Current/Best:   14.46/  19.38 GFLOPS | Progress: (20/20) | 18.64 s Done.
+
    [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  7/25]  Current/Best:    6.32/  18.15 GFLOPS | Progress: (4/20) | 5.06 s
    [Task  7/25]  Current/Best:    8.42/  21.09 GFLOPS | Progress: (8/20) | 7.64 s
    [Task  7/25]  Current/Best:   10.39/  21.09 GFLOPS | Progress: (12/20) | 12.54 s
    [Task  7/25]  Current/Best:    8.56/  21.09 GFLOPS | Progress: (16/20) | 15.51 s
    [Task  7/25]  Current/Best:   15.01/  21.09 GFLOPS | Progress: (20/20) | 18.30 s Done.
+
    [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  8/25]  Current/Best:   17.07/  17.07 GFLOPS | Progress: (4/20) | 5.20 s
    [Task  8/25]  Current/Best:    4.77/  17.07 GFLOPS | Progress: (8/20) | 13.23 s
    [Task  8/25]  Current/Best:    3.21/  17.07 GFLOPS | Progress: (12/20) | 21.52 s
    [Task  8/25]  Current/Best:   14.93/  17.07 GFLOPS | Progress: (16/20) | 29.86 s
    [Task  8/25]  Current/Best:   13.43/  17.07 GFLOPS | Progress: (20/20) | 33.04 s Done.
+
    [Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  9/25]  Current/Best:   18.69/  18.69 GFLOPS | Progress: (4/20) | 4.65 s
    [Task  9/25]  Current/Best:   10.28/  18.69 GFLOPS | Progress: (8/20) | 7.30 s
    [Task  9/25]  Current/Best:    7.48/  22.13 GFLOPS | Progress: (12/20) | 15.28 s
    [Task  9/25]  Current/Best:   14.78/  22.13 GFLOPS | Progress: (16/20) | 17.08 s
    [Task  9/25]  Current/Best:   14.86/  22.13 GFLOPS | Progress: (20/20) | 28.18 s
    [Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 10/25]  Current/Best:    7.45/  20.95 GFLOPS | Progress: (4/20) | 5.80 s Done.
+
    [Task 10/25]  Current/Best:   13.86/  20.95 GFLOPS | Progress: (8/20) | 8.72 s
    [Task 10/25]  Current/Best:   17.51/  21.90 GFLOPS | Progress: (12/20) | 10.48 s
    [Task 10/25]  Current/Best:   15.72/  21.90 GFLOPS | Progress: (16/20) | 12.64 s
    [Task 10/25]  Current/Best:   14.26/  21.90 GFLOPS | Progress: (20/20) | 14.76 s Done.
+
    [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 11/25]  Current/Best:   19.90/  20.79 GFLOPS | Progress: (4/20) | 5.31 s
    [Task 11/25]  Current/Best:    8.50/  20.79 GFLOPS | Progress: (8/20) | 8.17 s
    [Task 11/25]  Current/Best:   16.95/  20.80 GFLOPS | Progress: (12/20) | 10.48 s
    [Task 11/25]  Current/Best:    6.29/  20.80 GFLOPS | Progress: (16/20) | 12.80 s
    [Task 11/25]  Current/Best:   19.27/  20.80 GFLOPS | Progress: (20/20) | 14.97 s Done.
+
    [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 12/25]  Current/Best:   13.47/  17.38 GFLOPS | Progress: (4/20) | 5.12 s
    [Task 12/25]  Current/Best:    5.42/  19.45 GFLOPS | Progress: (8/20) | 9.24 s
    [Task 12/25]  Current/Best:   14.05/  19.45 GFLOPS | Progress: (12/20) | 12.81 s
    [Task 12/25]  Current/Best:   15.88/  19.45 GFLOPS | Progress: (16/20) | 16.55 s
    [Task 12/25]  Current/Best:   14.00/  19.45 GFLOPS | Progress: (20/20) | 19.22 s Done.
+
    [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 13/25]  Current/Best:    8.53/  19.45 GFLOPS | Progress: (4/20) | 5.59 s
    [Task 13/25]  Current/Best:   11.67/  20.57 GFLOPS | Progress: (8/20) | 9.58 s
    [Task 13/25]  Current/Best:   18.36/  22.66 GFLOPS | Progress: (12/20) | 12.58 s
    [Task 13/25]  Current/Best:   16.65/  22.66 GFLOPS | Progress: (16/20) | 15.33 s
    [Task 13/25]  Current/Best:    9.60/  22.66 GFLOPS | Progress: (20/20) | 17.65 s Done.
+
    [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 14/25]  Current/Best:   18.30/  18.30 GFLOPS | Progress: (4/20) | 4.91 s
    [Task 14/25]  Current/Best:    1.62/  18.30 GFLOPS | Progress: (8/20) | 18.47 s
    [Task 14/25]  Current/Best:    6.33/  21.36 GFLOPS | Progress: (12/20) | 30.58 s
    [Task 14/25]  Current/Best:   20.41/  21.36 GFLOPS | Progress: (16/20) | 33.11 s
    [Task 14/25]  Current/Best:   10.31/  21.36 GFLOPS | Progress: (20/20) | 40.45 s Done.
+
    [Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 15/25]  Current/Best:   12.40/  21.51 GFLOPS | Progress: (4/20) | 13.43 s
    [Task 15/25]  Current/Best:   14.64/  21.51 GFLOPS | Progress: (8/20) | 15.71 s
    [Task 15/25]  Current/Best:    6.46/  21.51 GFLOPS | Progress: (12/20) | 21.34 s
    [Task 15/25]  Current/Best:    3.15/  21.51 GFLOPS | Progress: (16/20) | 29.28 s
    [Task 15/25]  Current/Best:   16.07/  21.51 GFLOPS | Progress: (20/20) | 31.67 s Done.
+
    [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 16/25]  Current/Best:   12.90/  12.90 GFLOPS | Progress: (4/20) | 5.46 s
    [Task 16/25]  Current/Best:    6.89/  16.17 GFLOPS | Progress: (8/20) | 7.17 s
    [Task 16/25]  Current/Best:    4.95/  18.91 GFLOPS | Progress: (12/20) | 9.14 s
    [Task 16/25]  Current/Best:   12.38/  18.91 GFLOPS | Progress: (16/20) | 11.22 s
    [Task 16/25]  Current/Best:   15.48/  20.55 GFLOPS | Progress: (20/20) | 13.56 s Done.
+
    [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 17/25]  Current/Best:   19.21/  19.68 GFLOPS | Progress: (4/20) | 5.41 s
    [Task 17/25]  Current/Best:   21.04/  21.77 GFLOPS | Progress: (8/20) | 7.83 s
    [Task 17/25]  Current/Best:   10.90/  21.77 GFLOPS | Progress: (12/20) | 11.26 s
    [Task 17/25]  Current/Best:   20.00/  21.77 GFLOPS | Progress: (16/20) | 14.25 s
    [Task 17/25]  Current/Best:    9.30/  22.72 GFLOPS | Progress: (20/20) | 17.43 s Done.
+
    [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 18/25]  Current/Best:    5.15/  10.00 GFLOPS | Progress: (4/20) | 8.03 s
    [Task 18/25]  Current/Best:    7.97/  16.40 GFLOPS | Progress: (8/20) | 11.68 s
    [Task 18/25]  Current/Best:   14.65/  16.40 GFLOPS | Progress: (12/20) | 14.17 s
    [Task 18/25]  Current/Best:    7.36/  18.97 GFLOPS | Progress: (16/20) | 16.20 s
    [Task 18/25]  Current/Best:    1.57/  20.49 GFLOPS | Progress: (20/20) | 19.36 s Done.
+
    [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 19/25]  Current/Best:   19.98/  19.98 GFLOPS | Progress: (4/20) | 8.27 s
    [Task 19/25]  Current/Best:   18.18/  19.98 GFLOPS | Progress: (8/20) | 13.08 s
    [Task 19/25]  Current/Best:   21.18/  21.18 GFLOPS | Progress: (12/20) | 24.58 s
    [Task 19/25]  Current/Best:    2.70/  21.18 GFLOPS | Progress: (16/20) | 28.75 s
    [Task 19/25]  Current/Best:    2.65/  21.18 GFLOPS | Progress: (20/20) | 33.88 s
    [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 20/25]  Current/Best:    6.57/  17.99 GFLOPS | Progress: (4/20) | 8.44 s
    [Task 20/25]  Current/Best:    6.20/  17.99 GFLOPS | Progress: (8/20) | 17.63 s
    [Task 20/25]  Current/Best:   13.53/  17.99 GFLOPS | Progress: (12/20) | 29.43 s
    [Task 20/25]  Current/Best:    9.85/  17.99 GFLOPS | Progress: (16/20) | 33.03 s
   [Task 20/25]  Current/Best:    9.37/  21.86 GFLOPS | Progress: (20/20) | 35.30 s
    [Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 21/25]  Current/Best:    7.35/  19.18 GFLOPS | Progress: (4/20) | 10.07 s
    [Task 21/25]  Current/Best:   17.61/  23.15 GFLOPS | Progress: (8/20) | 21.11 s
    [Task 21/25]  Current/Best:   12.17/  23.15 GFLOPS | Progress: (12/20) | 32.83 s
    [Task 21/25]  Current/Best:    8.43/  23.15 GFLOPS | Progress: (16/20) | 35.34 s
    [Task 21/25]  Current/Best:    9.64/  23.15 GFLOPS | Progress: (20/20) | 46.76 s
    [Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
      Done.
      Done.
-
    [Task 22/25]  Current/Best:   20.57/  20.57 GFLOPS | Progress: (4/20) | 5.73 s
    [Task 22/25]  Current/Best:    5.16/  21.47 GFLOPS | Progress: (8/20) | 8.38 s
    [Task 22/25]  Current/Best:   20.11/  21.47 GFLOPS | Progress: (12/20) | 13.05 s
    [Task 22/25]  Current/Best:   12.73/  21.47 GFLOPS | Progress: (16/20) | 14.93 s
    [Task 22/25]  Current/Best:    7.04/  21.47 GFLOPS | Progress: (20/20) | 19.34 s Done.
-
    [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 23/25]  Current/Best:   20.99/  20.99 GFLOPS | Progress: (4/20) | 6.50 s
    [Task 23/25]  Current/Best:    9.59/  20.99 GFLOPS | Progress: (8/20) | 10.47 s
    [Task 23/25]  Current/Best:   12.45/  20.99 GFLOPS | Progress: (12/20) | 13.55 s
    [Task 23/25]  Current/Best:   10.16/  20.99 GFLOPS | Progress: (16/20) | 16.51 s
    [Task 23/25]  Current/Best:    8.19/  20.99 GFLOPS | Progress: (20/20) | 21.15 s Done.
-
    [Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 24/25]  Current/Best:    3.98/   9.52 GFLOPS | Progress: (4/20) | 6.73 s
    [Task 24/25]  Current/Best:    2.96/   9.52 GFLOPS | Progress: (8/20) | 15.73 s
    [Task 24/25]  Current/Best:    7.02/   9.52 GFLOPS | Progress: (12/20) | 26.76 s
    [Task 24/25]  Current/Best:    3.70/   9.52 GFLOPS | Progress: (16/20) | 37.81 s
    [Task 24/25]  Current/Best:    3.41/   9.52 GFLOPS | Progress: (20/20) | 49.98 s
    [Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 25/25]  Current/Best:    5.99/   7.16 GFLOPS | Progress: (4/20) | 15.86 s
    [Task 25/25]  Current/Best:    1.55/   8.82 GFLOPS | Progress: (8/20) | 18.50 s
    [Task 25/25]  Current/Best:    7.38/   8.82 GFLOPS | Progress: (12/20) | 21.40 s
    [Task 25/25]  Current/Best:    1.51/   8.82 GFLOPS | Progress: (16/20) | 27.44 s
    [Task 25/25]  Current/Best:    2.76/   8.82 GFLOPS | Progress: (20/20) | 38.41 s
+
    [Task 22/25]  Current/Best:   14.46/  18.35 GFLOPS | Progress: (4/20) | 5.61 s
    [Task 22/25]  Current/Best:   10.56/  18.35 GFLOPS | Progress: (8/20) | 8.58 s
    [Task 22/25]  Current/Best:   17.83/  18.35 GFLOPS | Progress: (12/20) | 14.14 s
    [Task 22/25]  Current/Best:   10.03/  21.45 GFLOPS | Progress: (16/20) | 16.47 s
    [Task 22/25]  Current/Best:   17.95/  21.45 GFLOPS | Progress: (20/20) | 18.98 s Done.
+
    [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 23/25]  Current/Best:   20.89/  20.89 GFLOPS | Progress: (4/20) | 5.11 s
    [Task 23/25]  Current/Best:   18.49/  20.96 GFLOPS | Progress: (8/20) | 7.77 s
    [Task 23/25]  Current/Best:    9.51/  20.96 GFLOPS | Progress: (12/20) | 16.69 s
    [Task 23/25]  Current/Best:   12.09/  21.98 GFLOPS | Progress: (16/20) | 19.33 s
    [Task 23/25]  Current/Best:   14.46/  21.98 GFLOPS | Progress: (20/20) | 23.03 s Done.
+
    [Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 24/25]  Current/Best:    1.32/   1.32 GFLOPS | Progress: (4/20) | 13.91 s
    [Task 24/25]  Current/Best:    5.89/   6.93 GFLOPS | Progress: (8/20) | 26.29 s
    [Task 24/25]  Current/Best:    2.69/   8.14 GFLOPS | Progress: (12/20) | 32.11 s
    [Task 24/25]  Current/Best:    3.12/   8.14 GFLOPS | Progress: (16/20) | 43.14 s
    [Task 24/25]  Current/Best:    4.53/   8.14 GFLOPS | Progress: (20/20) | 55.23 s
    [Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 25/25]  Current/Best:    8.84/   8.84 GFLOPS | Progress: (4/20) | 11.75 s
    [Task 25/25]  Current/Best:    1.55/   8.84 GFLOPS | Progress: (8/20) | 22.81 s
    [Task 25/25]  Current/Best:    1.49/   9.13 GFLOPS | Progress: (12/20) | 25.72 s
    [Task 25/25]  Current/Best:    1.55/   9.13 GFLOPS | Progress: (16/20) | 27.38 s
    [Task 25/25]  Current/Best:    8.10/   9.25 GFLOPS | Progress: (20/20) | 38.39 s
 
 
 
@@ -766,8 +766,8 @@ improvement in comparing the optimized model to the unoptimized model.
 
  .. code-block:: none
 
-    optimized: {'mean': 414.8339746599959, 'median': 413.5761416000605, 'std': 2.7188403795600826}
-    unoptimized: {'mean': 495.85341244997835, 'median': 495.2317347000644, 'std': 4.122248126917765}
+    optimized: {'mean': 405.46805702011625, 'median': 405.5535070001497, 'std': 1.8920621967570168}
+    unoptimized: {'mean': 493.6643143499532, 'median': 494.8347487999854, 'std': 3.130444482823425}
 
 
 
@@ -790,7 +790,7 @@ profiling/benchmarking.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 13 minutes  26.830 seconds)
+   **Total running time of the script:** ( 14 minutes  29.607 seconds)
 
 
 .. _sphx_glr_download_tutorial_autotvm_relay_x86.py:
diff --git a/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt b/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
index 47721c1e7a..b2959d824a 100644
--- a/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
+++ b/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
@@ -274,7 +274,7 @@ device and returns the measured cost. Network overhead is excluded.
 
  .. code-block:: none
 
-    1.196e-07 secs/op
+    1.282e-07 secs/op
 
 
 
diff --git a/docs/_sources/tutorial/intro_topi.rst.txt b/docs/_sources/tutorial/intro_topi.rst.txt
index 103de77b7d..d3fa2b9e3f 100644
--- a/docs/_sources/tutorial/intro_topi.rst.txt
+++ b/docs/_sources/tutorial/intro_topi.rst.txt
@@ -270,7 +270,7 @@ As you can see, scheduled stages of computation have been accumulated and we can
 
  .. code-block:: none
 
-    [stage(a, placeholder(a, 0x1c829ba0)), stage(b, placeholder(b, 0x1d948770)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T.Range(0, 10), "DataPar", ""), T.iter_var(ax2, T.Range(0, 10), "DataPar", "")], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[a[ax0, ax1, ax2] * b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T [...]
+    [stage(a, placeholder(a, 0xe6f1590)), stage(b, placeholder(b, 0x1c5ca650)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T.Range(0, 10), "DataPar", ""), T.iter_var(ax2, T.Range(0, 10), "DataPar", "")], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[a[ax0, ax1, ax2] * b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T. [...]
 
 
 
diff --git a/docs/_sources/tutorial/sg_execution_times.rst.txt b/docs/_sources/tutorial/sg_execution_times.rst.txt
index 720a5bc37c..962dbca398 100644
--- a/docs/_sources/tutorial/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorial/sg_execution_times.rst.txt
@@ -5,31 +5,31 @@
 
 Computation times
 =================
-**17:14.930** total execution time for **tutorial** files:
+**18:29.900** total execution time for **tutorial** files:
 
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_autotvm_relay_x86.py` (``autotvm_relay_x86.py``)                 | 13:26.830 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_autotvm_relay_x86.py` (``autotvm_relay_x86.py``)                 | 14:29.607 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_auto_scheduler_matmul_x86.py` (``auto_scheduler_matmul_x86.py``) | 01:40.584 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_auto_scheduler_matmul_x86.py` (``auto_scheduler_matmul_x86.py``) | 01:51.453 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)     | 00:58.807 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)     | 00:59.197 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_relay_quick_start.py` (``relay_quick_start.py``)                 | 00:40.103 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_relay_quick_start.py` (``relay_quick_start.py``)                 | 00:40.122 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_autotvm_matmul_x86.py` (``autotvm_matmul_x86.py``)               | 00:26.583 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_autotvm_matmul_x86.py` (``autotvm_matmul_x86.py``)               | 00:27.510 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_intro_topi.py` (``intro_topi.py``)                               | 00:00.969 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_intro_topi.py` (``intro_topi.py``)                               | 00:00.967 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_tensor_ir_blitz_course.py` (``tensor_ir_blitz_course.py``)       | 00:00.860 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_tensor_ir_blitz_course.py` (``tensor_ir_blitz_course.py``)       | 00:00.853 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``) | 00:00.193 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``) | 00:00.191 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_tutorial_uma.py` (``uma.py``)                                             | 00:00.000 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_tvmc_python.py` (``tvmc_python.py``)                             | 00:00.000 | 0.0 MB |
-+------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_tutorial_tvmc_command_line_driver.py` (``tvmc_command_line_driver.py``)   | 00:00.000 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
+| :ref:`sphx_glr_tutorial_tvmc_python.py` (``tvmc_python.py``)                             | 00:00.000 | 0.0 MB |
++------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_tutorial_introduction.py` (``introduction.py``)                           | 00:00.000 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_tutorial_install.py` (``install.py``)                                     | 00:00.000 | 0.0 MB |
diff --git a/docs/_sources/tutorial/tensor_expr_get_started.rst.txt b/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
index 2f4c22fda4..df66b1ffdf 100644
--- a/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
+++ b/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
@@ -286,7 +286,7 @@ helper function to run a profile of the TVM generated code.
  .. code-block:: none
 
     Numpy running time: 0.000007
-    naive: 0.000007
+    naive: 0.000008
 
 
 
@@ -498,10 +498,10 @@ We can now compare the different schedules
  .. code-block:: none
 
                 Operator                  Timing             Performance
-                   numpy    7.37280999601353e-06                     1.0
-                   naive              6.6388e-06      0.9004436576542175
-                parallel              6.9603e-06      0.9440498268317543
-                  vector    3.9225500000000006e-05     5.320291723401144
+                   numpy    7.104640026227571e-06                    1.0
+                   naive              7.9066e-06      1.1128783401849922
+                parallel    7.033200000000001e-06      0.989944595931132
+                  vector    3.9490500000000004e-05     5.558409694821472
 
 
 
@@ -922,7 +922,7 @@ matrix multiplication.
 
  .. code-block:: none
 
-    Numpy running time: 0.018423
+    Numpy running time: 0.018047
 
 
 
@@ -980,7 +980,7 @@ optimizations.
 
  .. code-block:: none
 
-    none: 3.299926
+    none: 3.314150
 
 
 
@@ -1080,7 +1080,7 @@ schedule.
 
  .. code-block:: none
 
-    blocking: 0.304426
+    blocking: 0.306366
 
 
 
@@ -1164,7 +1164,7 @@ already cache friendly from our previous optimizations.
 
  .. code-block:: none
 
-    vectorization: 0.283520
+    vectorization: 0.296275
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1230,7 +1230,7 @@ more cache friendly.
 
  .. code-block:: none
 
-    loop permutation: 0.114691
+    loop permutation: 0.118549
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1321,7 +1321,7 @@ optimized schedule.
 
  .. code-block:: none
 
-    array packing: 0.104664
+    array packing: 0.106734
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1404,7 +1404,7 @@ to `C` when all the block results are ready.
 
  .. code-block:: none
 
-    block caching: 0.111098
+    block caching: 0.111422
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1478,7 +1478,7 @@ of thread-level parallelization.
 
  .. code-block:: none
 
-    parallelization: 0.131157
+    parallelization: 0.132259
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1548,13 +1548,13 @@ working, we can compare the results.
  .. code-block:: none
 
                 Operator                  Timing             Performance
-                    none            3.2999255782                     1.0
-                blocking             0.304425778     0.09225231623740258
-           vectorization            0.2835203943     0.08591720861009568
-        loop permutation     0.11469136309999999     0.03475574232875892
-           array packing     0.10466361099999999     0.03171696104040338
-           block caching     0.11109770830000001     0.03366673146628965
-         parallelization            0.1311569975     0.03974544103856482
+                    none      3.3141503092999995                     1.0
+                blocking     0.30636633280000003     0.09244189436438367
+           vectorization            0.2962752545     0.08939704806647045
+        loop permutation            0.1185491122    0.035770590086796464
+           array packing     0.10673433839999999     0.03220564200135628
+           block caching     0.11142193279999998     0.03362006016665371
+         parallelization     0.13225905990000003    0.039907381246065216
 
 
 
diff --git a/docs/commit_hash b/docs/commit_hash
index b1730c80b3..3a442adcbd 100644
--- a/docs/commit_hash
+++ b/docs/commit_hash
@@ -1 +1 @@
-cca7d78334e6c3f12d11926df25ae90cd3740271
+a1c1ccafa16cfdc155519fa38f9a5b782a1a5571
diff --git a/docs/how_to/compile_models/from_darknet.html b/docs/how_to/compile_models/from_darknet.html
index f012c0ce96..180858ed99 100644
--- a/docs/how_to/compile_models/from_darknet.html
+++ b/docs/how_to/compile_models/from_darknet.html
@@ -590,7 +590,7 @@ class:[&#39;truck 0.9266&#39;] left:471 top:83 right:689 bottom:169
 class:[&#39;bicycle 0.9984&#39;] left:111 top:113 right:577 bottom:447
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  30.217 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  30.774 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-darknet-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7716f96385bd5abb6e822041e285be54/from_darknet.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_darknet.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/from_mxnet.html b/docs/how_to/compile_models/from_mxnet.html
index 6170e6e1a2..1a988eb864 100644
--- a/docs/how_to/compile_models/from_mxnet.html
+++ b/docs/how_to/compile_models/from_mxnet.html
@@ -444,7 +444,7 @@
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;x&quot;</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.html#tuple" title="builtins.tuple" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">x</span><span class="o">.</span><span class="n">shape</span></a><span class="p">)</span>
 </pre></div>
 </div>
-<img src="../../_images/sphx_glr_from_mxnet_001.png" srcset="../../_images/sphx_glr_from_mxnet_001.png" alt="from mxnet" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip994aa5d7-bcb2-4090-a662-c279738f84d3 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
+<img src="../../_images/sphx_glr_from_mxnet_001.png" srcset="../../_images/sphx_glr_from_mxnet_001.png" alt="from mxnet" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zipa81aa688-ee2d-4db9-8c75-4fbb38864279 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
 x (1, 3, 224, 224)
 </pre></div>
 </div>
diff --git a/docs/how_to/compile_models/from_oneflow.html b/docs/how_to/compile_models/from_oneflow.html
index 18e35b6b36..53e02c95cf 100644
--- a/docs/how_to/compile_models/from_oneflow.html
+++ b/docs/how_to/compile_models/from_oneflow.html
@@ -454,13 +454,13 @@ Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdo
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading: &quot;https://oneflow-public.oss-cn-beijing.aliyuncs.com/model_zoo/flowvision/classification/ResNet/resnet18.zip&quot; to /workspace/.oneflow/flowvision_cache/resnet18.zip
 
   0%|          | 0.00/41.5M [00:00&lt;?, ?B/s]
- 17%|#7        | 7.16M/41.5M [00:00&lt;00:00, 75.0MB/s]
- 34%|###4      | 14.3M/41.5M [00:00&lt;00:00, 56.4MB/s]
- 48%|####8     | 20.0M/41.5M [00:00&lt;00:00, 40.1MB/s]
- 58%|#####8    | 24.2M/41.5M [00:00&lt;00:00, 33.6MB/s]
- 77%|#######7  | 32.0M/41.5M [00:00&lt;00:00, 44.3MB/s]
- 91%|######### | 37.6M/41.5M [00:00&lt;00:00, 47.9MB/s]
-100%|##########| 41.5M/41.5M [00:01&lt;00:00, 43.5MB/s]
+ 15%|#5        | 6.33M/41.5M [00:00&lt;00:00, 61.5MB/s]
+ 29%|##9       | 12.2M/41.5M [00:00&lt;00:00, 54.2MB/s]
+ 42%|####1     | 17.4M/41.5M [00:00&lt;00:00, 45.8MB/s]
+ 58%|#####7    | 24.0M/41.5M [00:00&lt;00:00, 37.1MB/s]
+ 77%|#######7  | 32.0M/41.5M [00:00&lt;00:00, 40.8MB/s]
+ 96%|#########6| 40.0M/41.5M [00:00&lt;00:00, 47.6MB/s]
+100%|##########| 41.5M/41.5M [00:00&lt;00:00, 47.2MB/s]
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/compile_models/from_paddle.html b/docs/how_to/compile_models/from_paddle.html
index 022c738886..9926a7247e 100644
--- a/docs/how_to/compile_models/from_paddle.html
+++ b/docs/how_to/compile_models/from_paddle.html
@@ -489,7 +489,7 @@ To begin, we’ll install PaddlePaddle&gt;=2.1.3:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>TVM prediction top-1 id: 282, class name:  282: &#39;tiger cat&#39;,
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  3.082 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  1.081 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-paddle-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/16269b77359771348d507395692524cf/from_paddle.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_paddle.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/from_pytorch.html b/docs/how_to/compile_models/from_pytorch.html
index f625271129..29f610e989 100644
--- a/docs/how_to/compile_models/from_pytorch.html
+++ b/docs/how_to/compile_models/from_pytorch.html
@@ -437,13 +437,13 @@ be unstable.</p>
 Downloading: &quot;https://download.pytorch.org/models/resnet18-f37072fd.pth&quot; to /workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
 
   0%|          | 0.00/44.7M [00:00&lt;?, ?B/s]
- 18%|#7        | 7.99M/44.7M [00:00&lt;00:00, 64.9MB/s]
- 36%|###5      | 16.0M/44.7M [00:00&lt;00:00, 70.6MB/s]
- 54%|#####3    | 24.1M/44.7M [00:00&lt;00:00, 74.9MB/s]
- 70%|#######   | 31.3M/44.7M [00:00&lt;00:00, 69.2MB/s]
- 85%|########4 | 38.0M/44.7M [00:00&lt;00:00, 66.0MB/s]
- 99%|#########9| 44.3M/44.7M [00:00&lt;00:00, 53.3MB/s]
-100%|##########| 44.7M/44.7M [00:00&lt;00:00, 61.2MB/s]
+ 18%|#7        | 7.99M/44.7M [00:00&lt;00:00, 71.8MB/s]
+ 33%|###3      | 14.8M/44.7M [00:00&lt;00:00, 47.6MB/s]
+ 44%|####4     | 19.8M/44.7M [00:00&lt;00:00, 48.9MB/s]
+ 58%|#####8    | 26.1M/44.7M [00:00&lt;00:00, 50.6MB/s]
+ 72%|#######1  | 32.0M/44.7M [00:00&lt;00:00, 40.6MB/s]
+ 90%|########9 | 40.0M/44.7M [00:00&lt;00:00, 44.0MB/s]
+100%|##########| 44.7M/44.7M [00:00&lt;00:00, 49.7MB/s]
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/compile_models/from_tensorflow.html b/docs/how_to/compile_models/from_tensorflow.html
index 94ac6aa713..360f8578f2 100644
--- a/docs/how_to/compile_models/from_tensorflow.html
+++ b/docs/how_to/compile_models/from_tensorflow.html
@@ -657,7 +657,7 @@ banana (score = 0.00022)
 desk (score = 0.00019)
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  31.594 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  30.903 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-tensorflow-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7f1d3d1b878694c201c614c807cdebc8/from_tensorflow.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_tensorflow.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/sg_execution_times.html b/docs/how_to/compile_models/sg_execution_times.html
index d486ec1c9f..58719a4f42 100644
--- a/docs/how_to/compile_models/sg_execution_times.html
+++ b/docs/how_to/compile_models/sg_execution_times.html
@@ -345,7 +345,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-compile-models-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>07:01.605</strong> total execution time for <strong>how_to_compile_models</strong> files:</p>
+<p><strong>06:57.822</strong> total execution time for <strong>how_to_compile_models</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 81%" />
@@ -354,43 +354,43 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_tensorflow.html#sphx-glr-how-to-compile-models-from-tensorflow-py"><span class="std std-ref">Compile Tensorflow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tensorflow.py</span></code>)</p></td>
-<td><p>01:31.594</p></td>
+<td><p>01:30.903</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_darknet.html#sphx-glr-how-to-compile-models-from-darknet-py"><span class="std std-ref">Compile YOLO-V2 and YOLO-V3 in DarkNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_darknet.py</span></code>)</p></td>
-<td><p>01:30.217</p></td>
+<td><p>01:30.774</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_paddle.html#sphx-glr-how-to-compile-models-from-paddle-py"><span class="std std-ref">Compile PaddlePaddle Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_paddle.py</span></code>)</p></td>
-<td><p>01:03.082</p></td>
+<td><p>01:01.081</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_oneflow.html#sphx-glr-how-to-compile-models-from-oneflow-py"><span class="std std-ref">Compile OneFlow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_oneflow.py</span></code>)</p></td>
-<td><p>00:40.766</p></td>
+<td><p>00:39.926</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_coreml.html#sphx-glr-how-to-compile-models-from-coreml-py"><span class="std std-ref">Compile CoreML Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_coreml.py</span></code>)</p></td>
-<td><p>00:36.563</p></td>
+<td><p>00:35.969</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_mxnet.html#sphx-glr-how-to-compile-models-from-mxnet-py"><span class="std std-ref">Compile MXNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_mxnet.py</span></code>)</p></td>
-<td><p>00:32.386</p></td>
+<td><p>00:32.369</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_pytorch.html#sphx-glr-how-to-compile-models-from-pytorch-py"><span class="std std-ref">Compile PyTorch Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_pytorch.py</span></code>)</p></td>
-<td><p>00:27.561</p></td>
+<td><p>00:27.840</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_keras.html#sphx-glr-how-to-compile-models-from-keras-py"><span class="std std-ref">Compile Keras Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_keras.py</span></code>)</p></td>
-<td><p>00:25.426</p></td>
+<td><p>00:24.911</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_tflite.html#sphx-glr-how-to-compile-models-from-tflite-py"><span class="std std-ref">Compile TFLite Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tflite.py</span></code>)</p></td>
-<td><p>00:11.222</p></td>
+<td><p>00:11.208</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_onnx.html#sphx-glr-how-to-compile-models-from-onnx-py"><span class="std std-ref">Compile ONNX Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_onnx.py</span></code>)</p></td>
-<td><p>00:02.787</p></td>
+<td><p>00:02.841</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/deploy_models/deploy_model_on_adreno.html b/docs/how_to/deploy_models/deploy_model_on_adreno.html
index 27ce7534aa..a441f14bd4 100644
--- a/docs/how_to/deploy_models/deploy_model_on_adreno.html
+++ b/docs/how_to/deploy_models/deploy_model_on_adreno.html
@@ -835,10 +835,10 @@ Top5 predictions:
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
- 4226.3831    4226.3092    4229.8331    4223.6465      2.2606
+ 4223.1643    4223.5665    4226.1886    4218.2129      2.3249
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  19.527 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  19.724 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-model-on-adreno-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/2387d8448da213eb625e6b3d916327d4/deploy_model_on_adreno.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_model_on_adreno.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html b/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html
index ae6c4fcd18..8a937d2db6 100644
--- a/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html
+++ b/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html
@@ -443,23 +443,23 @@ to run this tutorial with a real device over rpc.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels.h5
 
      8192/102967424 [..............................] - ETA: 0s
-  8380416/102967424 [=&gt;............................] - ETA: 1s
- 16769024/102967424 [===&gt;..........................] - ETA: 1s
- 17481728/102967424 [====&gt;.........................] - ETA: 1s
- 22216704/102967424 [=====&gt;........................] - ETA: 1s
+  8380416/102967424 [=&gt;............................] - ETA: 0s
+ 16769024/102967424 [===&gt;..........................] - ETA: 0s
  25157632/102967424 [======&gt;.......................] - ETA: 1s
  33546240/102967424 [========&gt;.....................] - ETA: 1s
- 40189952/102967424 [==========&gt;...................] - ETA: 1s
- 41934848/102967424 [===========&gt;..................] - ETA: 1s
+ 41934848/102967424 [===========&gt;..................] - ETA: 0s
  50323456/102967424 [=============&gt;................] - ETA: 0s
+ 56967168/102967424 [===============&gt;..............] - ETA: 0s
  58712064/102967424 [================&gt;.............] - ETA: 0s
  65355776/102967424 [==================&gt;...........] - ETA: 0s
  67100672/102967424 [==================&gt;...........] - ETA: 0s
- 69296128/102967424 [===================&gt;..........] - ETA: 0s
+ 73482240/102967424 [====================&gt;.........] - ETA: 0s
  75489280/102967424 [====================&gt;.........] - ETA: 0s
- 82944000/102967424 [=======================&gt;......] - ETA: 0s
+ 82124800/102967424 [======================&gt;.......] - ETA: 0s
  83877888/102967424 [=======================&gt;......] - ETA: 0s
+ 89325568/102967424 [=========================&gt;....] - ETA: 0s
  92266496/102967424 [=========================&gt;....] - ETA: 0s
+ 98910208/102967424 [===========================&gt;..] - ETA: 0s
 100646912/102967424 [============================&gt;.] - ETA: 0s
 102967424/102967424 [==============================] - 2s 0us/step
 </pre></div>
diff --git a/docs/how_to/deploy_models/deploy_model_on_android.html b/docs/how_to/deploy_models/deploy_model_on_android.html
index ea2e164da1..b6ac2fdeda 100644
--- a/docs/how_to/deploy_models/deploy_model_on_android.html
+++ b/docs/how_to/deploy_models/deploy_model_on_android.html
@@ -667,7 +667,7 @@ to the remote android device.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  16.0945      16.1149      16.8290      15.2921       0.4424
+  16.0708      15.8173      17.3136      15.6391       0.5556
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/deploy_models/deploy_object_detection_pytorch.html b/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
index 8fdfd0e060..89a7b164bf 100644
--- a/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
+++ b/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
@@ -459,32 +459,33 @@ be unstable.</p>
 Downloading: &quot;https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth&quot; to /workspace/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
 
   0%|          | 0.00/170M [00:00&lt;?, ?B/s]
-  5%|4         | 7.99M/170M [00:00&lt;00:03, 51.5MB/s]
-  8%|8         | 14.3M/170M [00:00&lt;00:03, 45.1MB/s]
- 13%|#3        | 22.3M/170M [00:00&lt;00:03, 44.0MB/s]
- 16%|#5        | 26.5M/170M [00:00&lt;00:04, 36.6MB/s]
- 18%|#7        | 30.3M/170M [00:00&lt;00:04, 35.7MB/s]
- 21%|##1       | 36.2M/170M [00:00&lt;00:03, 42.2MB/s]
- 24%|##3       | 40.4M/170M [00:01&lt;00:03, 40.7MB/s]
- 28%|##8       | 48.0M/170M [00:01&lt;00:03, 42.3MB/s]
- 33%|###2      | 56.0M/170M [00:01&lt;00:02, 41.7MB/s]
- 38%|###7      | 64.0M/170M [00:01&lt;00:02, 43.5MB/s]
- 42%|####2     | 72.0M/170M [00:01&lt;00:02, 47.8MB/s]
- 47%|####7     | 80.0M/170M [00:01&lt;00:01, 47.8MB/s]
- 52%|#####1    | 87.7M/170M [00:02&lt;00:01, 54.7MB/s]
- 56%|#####5    | 94.3M/170M [00:02&lt;00:01, 55.4MB/s]
- 59%|#####9    | 101M/170M [00:02&lt;00:01, 58.5MB/s]
- 63%|######2   | 107M/170M [00:02&lt;00:01, 47.5MB/s]
- 67%|######7   | 114M/170M [00:02&lt;00:01, 54.3MB/s]
- 71%|#######   | 120M/170M [00:02&lt;00:00, 52.6MB/s]
- 75%|#######5  | 128M/170M [00:02&lt;00:00, 53.2MB/s]
- 80%|########  | 136M/170M [00:02&lt;00:00, 55.1MB/s]
- 85%|########4 | 144M/170M [00:03&lt;00:00, 54.8MB/s]
- 88%|########8 | 150M/170M [00:03&lt;00:00, 54.3MB/s]
- 92%|#########1| 156M/170M [00:03&lt;00:00, 47.5MB/s]
- 94%|#########4| 160M/170M [00:03&lt;00:00, 30.4MB/s]
- 98%|#########7| 166M/170M [00:03&lt;00:00, 35.8MB/s]
-100%|##########| 170M/170M [00:03&lt;00:00, 45.5MB/s]
+  5%|4         | 7.99M/170M [00:00&lt;00:03, 44.7MB/s]
+  8%|8         | 14.3M/170M [00:00&lt;00:03, 43.4MB/s]
+ 11%|#         | 18.4M/170M [00:00&lt;00:03, 41.5MB/s]
+ 14%|#4        | 24.0M/170M [00:00&lt;00:03, 38.4MB/s]
+ 19%|#8        | 32.0M/170M [00:00&lt;00:03, 40.0MB/s]
+ 24%|##3       | 40.0M/170M [00:01&lt;00:03, 40.9MB/s]
+ 28%|##7       | 47.3M/170M [00:01&lt;00:02, 48.6MB/s]
+ 31%|###       | 52.3M/170M [00:01&lt;00:02, 42.2MB/s]
+ 35%|###4      | 58.7M/170M [00:01&lt;00:02, 47.5MB/s]
+ 38%|###7      | 64.0M/170M [00:01&lt;00:02, 40.0MB/s]
+ 42%|####2     | 72.0M/170M [00:01&lt;00:02, 45.9MB/s]
+ 47%|####6     | 79.5M/170M [00:01&lt;00:01, 53.2MB/s]
+ 50%|#####     | 85.1M/170M [00:01&lt;00:01, 50.9MB/s]
+ 53%|#####3    | 90.3M/170M [00:02&lt;00:02, 41.0MB/s]
+ 57%|#####6    | 96.0M/170M [00:02&lt;00:02, 38.7MB/s]
+ 61%|######1   | 104M/170M [00:02&lt;00:01, 41.8MB/s]
+ 66%|######5   | 112M/170M [00:02&lt;00:01, 47.7MB/s]
+ 71%|#######   | 120M/170M [00:02&lt;00:01, 51.8MB/s]
+ 75%|#######5  | 128M/170M [00:02&lt;00:00, 51.5MB/s]
+ 79%|#######9  | 134M/170M [00:03&lt;00:00, 52.7MB/s]
+ 82%|########2 | 139M/170M [00:03&lt;00:00, 50.9MB/s]
+ 85%|########5 | 144M/170M [00:03&lt;00:00, 49.3MB/s]
+ 89%|########9 | 152M/170M [00:03&lt;00:00, 50.6MB/s]
+ 93%|#########3| 158M/170M [00:03&lt;00:00, 52.7MB/s]
+ 96%|#########6| 163M/170M [00:03&lt;00:00, 45.0MB/s]
+ 99%|#########8| 168M/170M [00:03&lt;00:00, 42.7MB/s]
+100%|##########| 170M/170M [00:03&lt;00:00, 46.0MB/s]
 /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/nn/functional.py:3912: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
   (torch.floor((input.size(i + 2).float() * torch.tensor(scale_factors[i], dtype=torch.float32)).float()))
 /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/ops/boxes.py:157: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
@@ -578,7 +579,7 @@ torchvision rcnn models.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Get 9 valid boxes
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  40.156 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  39.367 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-object-detection-pytorch-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7795da4b258c8feff986668b95ef57ad/deploy_object_detection_pytorch.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_object_detection_pytorch.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_prequantized.html b/docs/how_to/deploy_models/deploy_prequantized.html
index 12f29bef85..9264cfd07a 100644
--- a/docs/how_to/deploy_models/deploy_prequantized.html
+++ b/docs/how_to/deploy_models/deploy_prequantized.html
@@ -500,8 +500,8 @@ training. Other models require a full post training calibration.</p>
 Downloading: &quot;https://download.pytorch.org/models/mobilenet_v2-b0353104.pth&quot; to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth
 
   0%|          | 0.00/13.6M [00:00&lt;?, ?B/s]
- 59%|#####8    | 7.99M/13.6M [00:00&lt;00:00, 77.9MB/s]
-100%|##########| 13.6M/13.6M [00:00&lt;00:00, 95.9MB/s]
+ 59%|#####9    | 8.04M/13.6M [00:00&lt;00:00, 84.3MB/s]
+100%|##########| 13.6M/13.6M [00:00&lt;00:00, 61.9MB/s]
 </pre></div>
 </div>
 </div>
@@ -592,7 +592,7 @@ output values are identical out of 1000 outputs from mobilenet v2.</p>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  89.1452      89.1043      91.6734      88.5759       0.4102
+  88.7922      88.7505      90.0318      88.3844       0.2541
 </pre></div>
 </div>
 <div class="admonition note">
@@ -631,7 +631,7 @@ This includes support for the VNNI 8 bit dot product instruction (CascadeLake or
 <div class="section" id="deploy-a-quantized-tflite-model">
 <h2>Deploy a quantized TFLite Model<a class="headerlink" href="#deploy-a-quantized-tflite-model" title="Permalink to this headline">¶</a></h2>
 <p>TODO</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  25.446 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  25.313 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-prequantized-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/fb8217c13f4351224c6cf3aacf1a87fc/deploy_prequantized.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_prequantized.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_prequantized_tflite.html b/docs/how_to/deploy_models/deploy_prequantized_tflite.html
index 9a6429c0b9..570a89b48a 100644
--- a/docs/how_to/deploy_models/deploy_prequantized_tflite.html
+++ b/docs/how_to/deploy_models/deploy_prequantized_tflite.html
@@ -585,7 +585,7 @@ TFLite Top-5 labels: [387 102 386 341 349]
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  109.4427     109.4240     110.9627     108.2313      0.4822
+  110.0824     110.0108     114.0589     109.1821      0.6752
 </pre></div>
 </div>
 <div class="admonition note">
diff --git a/docs/how_to/deploy_models/deploy_quantized.html b/docs/how_to/deploy_models/deploy_quantized.html
index 68be4cf233..52b856dffa 100644
--- a/docs/how_to/deploy_models/deploy_quantized.html
+++ b/docs/how_to/deploy_models/deploy_quantized.html
@@ -526,7 +526,7 @@ for calibration. But the accuracy might be impacted.</p>
   warnings.warn(
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  42.016 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  3.466 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-quantized-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7810ecf51bfc05f7d5e8a400ac3e815d/deploy_quantized.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_quantized.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/sg_execution_times.html b/docs/how_to/deploy_models/sg_execution_times.html
index 94820c9a63..bc164879ba 100644
--- a/docs/how_to/deploy_models/sg_execution_times.html
+++ b/docs/how_to/deploy_models/sg_execution_times.html
@@ -345,7 +345,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-deploy-models-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>11:32.973</strong> total execution time for <strong>how_to_deploy_models</strong> files:</p>
+<p><strong>11:53.613</strong> total execution time for <strong>how_to_deploy_models</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 86%" />
@@ -354,43 +354,43 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_object_detection_pytorch.html#sphx-glr-how-to-deploy-models-deploy-object-detection-pytorch-py"><span class="std std-ref">Compile PyTorch Object Detection Models</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_object_detection_pytorch.py</span></code>)</p></td>
-<td><p>03:40.156</p></td>
+<td><p>03:39.367</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_quantized.html#sphx-glr-how-to-deploy-models-deploy-quantized-py"><span class="std std-ref">Deploy a Quantized Model on Cuda</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_quantized.py</span></code>)</p></td>
-<td><p>01:42.016</p></td>
+<td><p>02:03.466</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_prequantized.html#sphx-glr-how-to-deploy-models-deploy-prequantized-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized.py</span></code>)</p></td>
-<td><p>01:25.446</p></td>
+<td><p>01:25.313</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_adreno.html#sphx-glr-how-to-deploy-models-deploy-model-on-adreno-py"><span class="std std-ref">Deploy the Pretrained Model on Adreno™</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_adreno.py</span></code>)</p></td>
-<td><p>01:19.527</p></td>
+<td><p>01:19.724</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_prequantized_tflite.html#sphx-glr-how-to-deploy-models-deploy-prequantized-tflite-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM - Part 3 (TFLite)</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized_tflite.py</span></code>)</p></td>
-<td><p>00:51.482</p></td>
+<td><p>00:51.778</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_android.html#sphx-glr-how-to-deploy-models-deploy-model-on-android-py"><span class="std std-ref">Deploy the Pretrained Model on Android</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_android.py</span></code>)</p></td>
-<td><p>00:49.082</p></td>
+<td><p>00:49.045</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_model_on_adreno_tvmc.html#sphx-glr-how-to-deploy-models-deploy-model-on-adreno-tvmc-py"><span class="std std-ref">Deploy the Pretrained Model on Adreno™ with tvmc Interface</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_adreno_tvmc.py</span></code>)</p></td>
-<td><p>00:45.431</p></td>
+<td><p>00:44.992</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_nano.html#sphx-glr-how-to-deploy-models-deploy-model-on-nano-py"><span class="std std-ref">Deploy the Pretrained Model on Jetson Nano</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_nano.py</span></code>)</p></td>
-<td><p>00:30.221</p></td>
+<td><p>00:30.243</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_model_on_rasp.html#sphx-glr-how-to-deploy-models-deploy-model-on-rasp-py"><span class="std std-ref">Deploy the Pretrained Model on Raspberry Pi</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_rasp.py</span></code>)</p></td>
-<td><p>00:29.605</p></td>
+<td><p>00:29.679</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_sparse.html#sphx-glr-how-to-deploy-models-deploy-sparse-py"><span class="std std-ref">Deploy a Hugging Face Pruned Model on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_sparse.py</span></code>)</p></td>
-<td><p>00:00.007</p></td>
+<td><p>00:00.006</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/extend_tvm/bring_your_own_datatypes.html b/docs/how_to/extend_tvm/bring_your_own_datatypes.html
index e34348435e..a4f0f6f802 100644
--- a/docs/how_to/extend_tvm/bring_your_own_datatypes.html
+++ b/docs/how_to/extend_tvm/bring_your_own_datatypes.html
@@ -624,7 +624,7 @@ In this alpha state of the Bring Your Own Datatypes framework, we have not imple
 <span class="n">module</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">params</span></a> <span class="o">=</span> <span class="n">get_mobilenet</span><span class="p">()</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zipa6115f43-09d1-4ce3-8870-1a27fbd3c6d5 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zipdf54d405-fcae-450c-91dc-ad980f825cba from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
 </pre></div>
 </div>
 <p>It’s easy to execute MobileNet with native TVM:</p>
diff --git a/docs/how_to/extend_tvm/sg_execution_times.html b/docs/how_to/extend_tvm/sg_execution_times.html
index ec717230ee..b43f76b437 100644
--- a/docs/how_to/extend_tvm/sg_execution_times.html
+++ b/docs/how_to/extend_tvm/sg_execution_times.html
@@ -345,7 +345,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-extend-tvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:56.661</strong> total execution time for <strong>how_to_extend_tvm</strong> files:</p>
+<p><strong>00:55.828</strong> total execution time for <strong>how_to_extend_tvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -354,19 +354,19 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="bring_your_own_datatypes.html#sphx-glr-how-to-extend-tvm-bring-your-own-datatypes-py"><span class="std std-ref">Bring Your Own Datatypes to TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">bring_your_own_datatypes.py</span></code>)</p></td>
-<td><p>00:52.800</p></td>
+<td><p>00:51.997</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="use_pass_instrument.html#sphx-glr-how-to-extend-tvm-use-pass-instrument-py"><span class="std std-ref">How to Use TVM Pass Instrument</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_instrument.py</span></code>)</p></td>
-<td><p>00:02.681</p></td>
+<td><p>00:02.672</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="use_pass_infra.html#sphx-glr-how-to-extend-tvm-use-pass-infra-py"><span class="std std-ref">How to Use TVM Pass Infra</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_infra.py</span></code>)</p></td>
-<td><p>00:01.173</p></td>
+<td><p>00:01.151</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="low_level_custom_pass.html#sphx-glr-how-to-extend-tvm-low-level-custom-pass-py"><span class="std std-ref">Writing a Customized Pass</span></a> (<code class="docutils literal notranslate"><span class="pre">low_level_custom_pass.py</span></code>)</p></td>
-<td><p>00:00.007</p></td>
+<td><p>00:00.008</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/extend_tvm/use_pass_instrument.html b/docs/how_to/extend_tvm/use_pass_instrument.html
index 2ad2b8bb43..45a950505d 100644
--- a/docs/how_to/extend_tvm/use_pass_instrument.html
+++ b/docs/how_to/extend_tvm/use_pass_instrument.html
@@ -531,10 +531,10 @@ profile the execution time of each passes.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Printing results of timing profile...
-InferType: 23997us [23997us] (48.92%; 48.92%)
-FoldScaleAxis: 25060us [9us] (51.08%; 51.08%)
-        FoldConstant: 25050us [1864us] (51.06%; 99.96%)
-                InferType: 23187us [23187us] (47.27%; 92.56%)
+InferType: 23353us [23353us] (47.64%; 47.64%)
+FoldScaleAxis: 25667us [9us] (52.36%; 52.36%)
+        FoldConstant: 25658us [2134us] (52.34%; 99.96%)
+                InferType: 23524us [23524us] (47.99%; 91.68%)
 </pre></div>
 </div>
 </div>
@@ -556,10 +556,10 @@ Refer to following sections and <a class="reference internal" href="../../refere
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Printing results of timing profile...
-InferType: 23117us [23117us] (48.42%; 48.42%)
-FoldScaleAxis: 24624us [6us] (51.58%; 51.58%)
-        FoldConstant: 24618us [1823us] (51.57%; 99.97%)
-                InferType: 22795us [22795us] (47.75%; 92.59%)
+InferType: 23119us [23119us] (47.39%; 47.39%)
+FoldScaleAxis: 25661us [8us] (52.61%; 52.61%)
+        FoldConstant: 25653us [1813us] (52.59%; 99.97%)
+                InferType: 23840us [23840us] (48.87%; 92.93%)
 </pre></div>
 </div>
 <p>Register empty list to clear existing instruments.</p>
diff --git a/docs/how_to/optimize_operators/opt_conv_cuda.html b/docs/how_to/optimize_operators/opt_conv_cuda.html
index a96957a8f7..45f7609085 100644
--- a/docs/how_to/optimize_operators/opt_conv_cuda.html
+++ b/docs/how_to/optimize_operators/opt_conv_cuda.html
@@ -580,7 +580,7 @@ latency of convolution.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Convolution: </span><span class="si">%f</span><span class="s2"> ms&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">b</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span> <span class="o">*</span> <span cl [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Convolution: 47.510208 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Convolution: 53.594974 ms
 </pre></div>
 </div>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-optimize-operators-opt-conv-cuda-py">
diff --git a/docs/how_to/optimize_operators/opt_conv_tensorcore.html b/docs/how_to/optimize_operators/opt_conv_tensorcore.html
index 86e84e929e..a007d4f6ea 100644
--- a/docs/how_to/optimize_operators/opt_conv_tensorcore.html
+++ b/docs/how_to/optimize_operators/opt_conv_tensorcore.html
@@ -862,7 +862,7 @@ be able to run on our build server</p>
     <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;conv2d with tensor core: </span><span class="si">%f</span><span class="s2"> ms&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span> <span class="o">* [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>conv2d with tensor core: 11.556883 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>conv2d with tensor core: 12.268387 ms
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/optimize_operators/opt_gemm.html b/docs/how_to/optimize_operators/opt_gemm.html
index 11d92e3cfa..d6ff09d662 100644
--- a/docs/how_to/optimize_operators/opt_gemm.html
+++ b/docs/how_to/optimize_operators/opt_gemm.html
@@ -477,8 +477,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Baseline: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.018083
-Baseline: 3.314446
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.018092
+Baseline: 3.298170
 </pre></div>
 </div>
 <p>In TVM, we can always inspect lower level IR to debug or optimize our schedule.
@@ -537,7 +537,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt1: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt1: 0.296930
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt1: 0.297649
 </pre></div>
 </div>
 <p>Here is the generated IR after blocking.</p>
@@ -594,7 +594,7 @@ vastly.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt2: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt2: 0.283093
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt2: 0.280033
 </pre></div>
 </div>
 <p>Here is the generated IR after vectorization.</p>
@@ -649,7 +649,7 @@ the access pattern for A matrix is more cache friendly.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt3: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt3: 0.116886
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt3: 0.119562
 </pre></div>
 </div>
 <p>Here is the generated IR after loop permutation.</p>
@@ -726,7 +726,7 @@ flattening.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt4: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt4: 0.107717
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt4: 0.107184
 </pre></div>
 </div>
 <p>Here is the generated IR after array packing.</p>
@@ -804,7 +804,7 @@ write to C when all the block results are ready.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt5: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt5: 0.112268
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt5: 0.112017
 </pre></div>
 </div>
 <p>Here is the generated IR after blocking.</p>
@@ -884,7 +884,7 @@ class Module:
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt6: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">opt6_time</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt6: 0.132823
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt6: 0.132650
 </pre></div>
 </div>
 <p>Here is the generated IR after parallelization.</p>
diff --git a/docs/how_to/optimize_operators/sg_execution_times.html b/docs/how_to/optimize_operators/sg_execution_times.html
index b7f0059f82..724cb43cc7 100644
--- a/docs/how_to/optimize_operators/sg_execution_times.html
+++ b/docs/how_to/optimize_operators/sg_execution_times.html
@@ -345,7 +345,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-optimize-operators-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:34.167</strong> total execution time for <strong>how_to_optimize_operators</strong> files:</p>
+<p><strong>00:34.028</strong> total execution time for <strong>how_to_optimize_operators</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -354,15 +354,15 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="opt_gemm.html#sphx-glr-how-to-optimize-operators-opt-gemm-py"><span class="std std-ref">How to optimize GEMM on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_gemm.py</span></code>)</p></td>
-<td><p>00:30.635</p></td>
+<td><p>00:30.546</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="opt_conv_tensorcore.html#sphx-glr-how-to-optimize-operators-opt-conv-tensorcore-py"><span class="std std-ref">How to optimize convolution using TensorCores</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_tensorcore.py</span></code>)</p></td>
-<td><p>00:02.078</p></td>
+<td><p>00:02.048</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="opt_conv_cuda.html#sphx-glr-how-to-optimize-operators-opt-conv-cuda-py"><span class="std std-ref">How to optimize convolution on GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_cuda.py</span></code>)</p></td>
-<td><p>00:01.454</p></td>
+<td><p>00:01.434</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/tune_with_autoscheduler/sg_execution_times.html b/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
index 07a0a066b0..157c976d06 100644
--- a/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
+++ b/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
@@ -345,7 +345,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-tune-with-autoscheduler-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>03:33.170</strong> total execution time for <strong>how_to_tune_with_autoscheduler</strong> files:</p>
+<p><strong>03:33.098</strong> total execution time for <strong>how_to_tune_with_autoscheduler</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 85%" />
@@ -354,23 +354,23 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_network_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-x86-py"><span class="std std-ref">Auto-scheduling a Neural Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_x86.py</span></code>)</p></td>
-<td><p>01:29.491</p></td>
+<td><p>01:29.073</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_network_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-cuda-py"><span class="std std-ref">Auto-scheduling a Neural Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_cuda.py</span></code>)</p></td>
-<td><p>01:15.002</p></td>
+<td><p>01:14.843</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_conv2d_layer_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-conv2d-layer-cuda-py"><span class="std std-ref">Auto-scheduling a Convolution Layer for GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_layer_cuda.py</span></code>)</p></td>
-<td><p>00:17.245</p></td>
+<td><p>00:17.760</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_network_arm.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-arm-py"><span class="std std-ref">Auto-scheduling a Neural Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_arm.py</span></code>)</p></td>
-<td><p>00:15.871</p></td>
+<td><p>00:15.940</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_network_mali.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-mali-py"><span class="std std-ref">Auto-scheduling a Neural Network for mali GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_mali.py</span></code>)</p></td>
-<td><p>00:15.457</p></td>
+<td><p>00:15.378</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_sparse_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-sparse-x86-py"><span class="std std-ref">Auto-scheduling Sparse Matrix Multiplication on CPU with Custom Sketch Rule</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_sparse_x86.py</span></code>)</p></td>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html b/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
index 29bb7a93f9..475811d0dd 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
@@ -1018,7 +1018,7 @@ class Module:
 <span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 0.350 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 0.352 ms
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html b/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
index d9a12879c2..45bbc874ae 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
@@ -921,7 +921,7 @@ so we can read the log file and load the best schedules.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-   8.0976       8.0975       8.1085       8.0868       0.0089
+   8.1058       8.1094       8.1205       8.0874       0.0137
 </pre></div>
 </div>
 </div>
@@ -943,7 +943,7 @@ to learn how to use the RPC Tracker and RPC Server.
 To use the RPC Tracker in auto-scheduler, replace the runner in <code class="code docutils literal notranslate"><span class="pre">TuningOptions</span></code>
 with <a class="reference internal" href="../../reference/api/python/auto_scheduler.html#tvm.auto_scheduler.RPCRunner" title="tvm.auto_scheduler.RPCRunner"><code class="xref any py py-class docutils literal notranslate"><span class="pre">auto_scheduler.RPCRunner</span></code></a>.</p></li>
 </ol>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  15.002 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  14.843 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-network-cuda-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/eafe360d52540634c9eea0fa89e804bd/tune_network_cuda.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_network_cuda.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_network_x86.html b/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
index 5a6936f786..428e86d163 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
@@ -940,7 +940,7 @@ so we can read the log file and load the best schedules.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  761.3511     759.2872     766.4372     758.3289      3.6176
+  757.4948     756.7690     759.2098     756.5055      1.2175
 </pre></div>
 </div>
 </div>
@@ -962,7 +962,7 @@ to learn how to use the RPC Tracker and RPC Server.
 To use the RPC Tracker in auto-scheduler, replace the runner in <code class="code docutils literal notranslate"><span class="pre">TuningOptions</span></code>
 with <a class="reference internal" href="../../reference/api/python/auto_scheduler.html#tvm.auto_scheduler.RPCRunner" title="tvm.auto_scheduler.RPCRunner"><code class="xref any py py-class docutils literal notranslate"><span class="pre">auto_scheduler.RPCRunner</span></code></a>.</p></li>
 </ol>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  29.491 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  29.073 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-network-x86-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/e416b94ca1090b0897c0f6e0df95b911/tune_network_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_network_x86.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autotvm/sg_execution_times.html b/docs/how_to/tune_with_autotvm/sg_execution_times.html
index 8008c1de13..1a664feee2 100644
--- a/docs/how_to/tune_with_autotvm/sg_execution_times.html
+++ b/docs/how_to/tune_with_autotvm/sg_execution_times.html
@@ -345,7 +345,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-tune-with-autotvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:23.771</strong> total execution time for <strong>how_to_tune_with_autotvm</strong> files:</p>
+<p><strong>00:23.462</strong> total execution time for <strong>how_to_tune_with_autotvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -354,11 +354,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_conv2d_cuda.html#sphx-glr-how-to-tune-with-autotvm-tune-conv2d-cuda-py"><span class="std std-ref">Tuning High Performance Convolution on NVIDIA GPUs</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_cuda.py</span></code>)</p></td>
-<td><p>00:23.734</p></td>
+<td><p>00:23.423</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_relay_x86.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-x86-py"><span class="std std-ref">Auto-tuning a Convolutional Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_x86.py</span></code>)</p></td>
-<td><p>00:00.021</p></td>
+<td><p>00:00.023</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_relay_cuda.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-cuda-py"><span class="std std-ref">Auto-tuning a Convolutional Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_cuda.py</span></code>)</p></td>
diff --git a/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html b/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
index 1f381e62c4..7f11416333 100644
--- a/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
+++ b/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
@@ -615,7 +615,7 @@ and measure running time.</p>
 
 Best config:
 ,None
-Time cost of this operator: 0.037135
+Time cost of this operator: 0.037083
 </pre></div>
 </div>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autotvm-tune-conv2d-cuda-py">
diff --git a/docs/how_to/work_with_microtvm/micro_autotune.html b/docs/how_to/work_with_microtvm/micro_autotune.html
index db54124fe4..328fbd86f3 100644
--- a/docs/how_to/work_with_microtvm/micro_autotune.html
+++ b/docs/how_to/work_with_microtvm/micro_autotune.html
@@ -649,10 +649,10 @@ the tuned operator.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>########## Build without Autotuning ##########
 Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)
 ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------
-tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  302.5     98.737   (1, 2, 10, 10, 3)  2       1        [302.5]
-tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.901     0.947    (1, 6, 10, 10)     1       1        [2.901]
-tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.967     0.316    (1, 1, 10, 10, 3)  1       1        [0.967]
-Total_time                                    -                                             306.368   -        -                  -       -        -
+tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  303.5     98.738   (1, 2, 10, 10, 3)  2       1        [303.5]
+tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.923     0.951    (1, 6, 10, 10)     1       1        [2.923]
+tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.956     0.311    (1, 1, 10, 10, 3)  1       1        [0.956]
+Total_time                                    -                                             307.379   -        -                  -       -        -
 </pre></div>
 </div>
 </div>
@@ -704,13 +704,13 @@ Total_time                                    -
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>########## Build with Autotuning ##########
 Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)
 ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------
-tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  101.8     97.499   (1, 6, 10, 10, 1)  2       1        [101.8]
-tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.752     1.678    (1, 6, 10, 10)     1       1        [1.752]
-tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.859     0.823    (1, 3, 10, 10, 1)  1       1        [0.859]
-Total_time                                    -                                             104.411   -        -                  -       -        -
+tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  101.6     97.439   (1, 6, 10, 10, 1)  2       1        [101.6]
+tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.817     1.742    (1, 6, 10, 10)     1       1        [1.817]
+tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.854     0.819    (1, 3, 10, 10, 1)  1       1        [0.854]
+Total_time                                    -                                             104.27    -        -                  -       -        -
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  26.205 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  26.302 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-autotune-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/9ccca8fd489a1486ac71b55a55c320c5/micro_autotune.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_autotune.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/micro_pytorch.html b/docs/how_to/work_with_microtvm/micro_pytorch.html
index 466e467fb2..b0aaf01d37 100644
--- a/docs/how_to/work_with_microtvm/micro_pytorch.html
+++ b/docs/how_to/work_with_microtvm/micro_pytorch.html
@@ -460,7 +460,8 @@ download a cat image and preprocess it to use as the model input.</p>
 Downloading: &quot;https://download.pytorch.org/models/quantized/mobilenet_v2_qnnpack_37f702c5.pth&quot; to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2_qnnpack_37f702c5.pth
 
   0%|          | 0.00/3.42M [00:00&lt;?, ?B/s]
-100%|##########| 3.42M/3.42M [00:00&lt;00:00, 36.2MB/s]
+ 61%|######    | 2.09M/3.42M [00:00&lt;00:00, 19.1MB/s]
+100%|##########| 3.42M/3.42M [00:00&lt;00:00, 30.0MB/s]
 /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/_utils.py:314: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
   device=storage.device,
 /workspace/python/tvm/relay/frontend/pytorch_utils.py:47: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
@@ -588,7 +589,7 @@ via the host <cite>main.cc`</cite> or if a Zephyr emulated board is selected as
 Torch top-1 id: 282, class name: tiger cat
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  26.374 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  26.805 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-pytorch-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/12b9ecc04c41abaa12022061771821d1/micro_pytorch.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_pytorch.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/micro_train.html b/docs/how_to/work_with_microtvm/micro_train.html
index 67f70bd124..8e6c385493 100644
--- a/docs/how_to/work_with_microtvm/micro_train.html
+++ b/docs/how_to/work_with_microtvm/micro_train.html
@@ -528,7 +528,7 @@ take about <strong>2 minutes</strong> to download the Stanford Cars, while COCO
 <a href="https://docs.python.org/3/library/shutil.html#shutil.move" title="shutil.move" class="sphx-glr-backref-module-shutil sphx-glr-backref-type-py-function"><span class="n">shutil</span><span class="o">.</span><span class="n">move</span></a><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;</span><span class="si">{</span><a href="https://docs.python.org/3/library/stdtypes.html#str" title="builtins.str" class="sphx-glr-backref-module-builtins sphx-glr-backref-typ [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&#39;/tmp/tmp7no100ul/images/random&#39;
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&#39;/tmp/tmp2p4dqbop/images/random&#39;
 </pre></div>
 </div>
 </div>
@@ -588,8 +588,8 @@ objects to other stuff? We can display some examples from our datasets using <co
     <span class="n">plt</span><span class="o">.</span><span class="n">axis</span><span class="p">(</span><span class="s2">&quot;off&quot;</span><span class="p">)</span>
 </pre></div>
 </div>
-<img src="../../_images/sphx_glr_micro_train_001.png" srcset="../../_images/sphx_glr_micro_train_001.png" alt="[1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/tmp/tmp7no100ul/images/target contains 8144 images
-/tmp/tmp7no100ul/images/random contains 5000 images
+<img src="../../_images/sphx_glr_micro_train_001.png" srcset="../../_images/sphx_glr_micro_train_001.png" alt="[1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/tmp/tmp2p4dqbop/images/target contains 8144 images
+/tmp/tmp2p4dqbop/images/random contains 5000 images
 </pre></div>
 </div>
 </div>
@@ -701,13 +701,13 @@ the time on our validation set).</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Epoch 1/3
-328/328 - 41s - loss: 0.2290 - accuracy: 0.9233 - val_loss: 0.1310 - val_accuracy: 0.9558 - 41s/epoch - 125ms/step
+328/328 - 41s - loss: 0.2254 - accuracy: 0.9230 - val_loss: 0.1422 - val_accuracy: 0.9494 - 41s/epoch - 125ms/step
 Epoch 2/3
-328/328 - 35s - loss: 0.1034 - accuracy: 0.9618 - val_loss: 0.1087 - val_accuracy: 0.9641 - 35s/epoch - 108ms/step
+328/328 - 35s - loss: 0.1025 - accuracy: 0.9644 - val_loss: 0.1077 - val_accuracy: 0.9645 - 35s/epoch - 108ms/step
 Epoch 3/3
-328/328 - 35s - loss: 0.0641 - accuracy: 0.9774 - val_loss: 0.1012 - val_accuracy: 0.9687 - 35s/epoch - 108ms/step
+328/328 - 35s - loss: 0.0655 - accuracy: 0.9750 - val_loss: 0.0967 - val_accuracy: 0.9705 - 35s/epoch - 108ms/step
 
-&lt;keras.callbacks.History object at 0x7f8d8c558850&gt;
+&lt;keras.callbacks.History object at 0x7fb2b4992d30&gt;
 </pre></div>
 </div>
 </div>
@@ -971,7 +971,7 @@ as intended.</p>
 <p>From here, we could modify the model to read live images from the camera - we have another
 Arduino tutorial for how to do that <a class="reference external" href="https://github.com/guberti/tvm-arduino-demos/tree/master/examples/person_detection">on GitHub</a>. Alternatively, we could also
 <a class="reference external" href="https://tvm.apache.org/docs/how_to/work_with_microtvm/micro_autotune.html">use TVM’s autotuning capabilities</a> to dramatically improve the model’s performance.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 4 minutes  33.356 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 4 minutes  26.868 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-train-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/b52cec46baf4f78d6bcd94cbe269c8a6/micro_train.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_train.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/sg_execution_times.html b/docs/how_to/work_with_microtvm/sg_execution_times.html
index d1a738c895..1d4a0e745a 100644
--- a/docs/how_to/work_with_microtvm/sg_execution_times.html
+++ b/docs/how_to/work_with_microtvm/sg_execution_times.html
@@ -345,7 +345,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-microtvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>07:55.277</strong> total execution time for <strong>how_to_work_with_microtvm</strong> files:</p>
+<p><strong>07:49.074</strong> total execution time for <strong>how_to_work_with_microtvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 82%" />
@@ -354,27 +354,27 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_train.html#sphx-glr-how-to-work-with-microtvm-micro-train-py"><span class="std std-ref">5. Training Vision Models for microTVM on Arduino</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_train.py</span></code>)</p></td>
-<td><p>04:33.356</p></td>
+<td><p>04:26.868</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="micro_pytorch.html#sphx-glr-how-to-work-with-microtvm-micro-pytorch-py"><span class="std std-ref">4. microTVM PyTorch Tutorial</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_pytorch.py</span></code>)</p></td>
-<td><p>01:26.374</p></td>
+<td><p>01:26.805</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_autotune.html#sphx-glr-how-to-work-with-microtvm-micro-autotune-py"><span class="std std-ref">6. Model Tuning with microTVM</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_autotune.py</span></code>)</p></td>
-<td><p>01:26.205</p></td>
+<td><p>01:26.302</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="micro_aot.html#sphx-glr-how-to-work-with-microtvm-micro-aot-py"><span class="std std-ref">3. microTVM Ahead-of-Time (AOT) Compilation</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_aot.py</span></code>)</p></td>
-<td><p>00:11.779</p></td>
+<td><p>00:11.821</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_tflite.html#sphx-glr-how-to-work-with-microtvm-micro-tflite-py"><span class="std std-ref">2. microTVM TFLite Tutorial</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_tflite.py</span></code>)</p></td>
-<td><p>00:09.098</p></td>
+<td><p>00:09.042</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="micro_custom_ide.html#sphx-glr-how-to-work-with-microtvm-micro-custom-ide-py"><span class="std std-ref">9. Bring microTVM to your own development environment</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_custom_ide.py</span></code>)</p></td>
-<td><p>00:08.464</p></td>
+<td><p>00:08.236</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_ethosu.html#sphx-glr-how-to-work-with-microtvm-micro-ethosu-py"><span class="std std-ref">7. Running TVM on bare metal Arm(R) Cortex(R)-M55 CPU and Ethos(TM)-U55 NPU with CMSIS-NN</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_ethosu.py</span></code>)</p></td>
diff --git a/docs/how_to/work_with_relay/sg_execution_times.html b/docs/how_to/work_with_relay/sg_execution_times.html
index de76833b66..729da2a5bf 100644
--- a/docs/how_to/work_with_relay/sg_execution_times.html
+++ b/docs/how_to/work_with_relay/sg_execution_times.html
@@ -345,7 +345,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-relay-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:40.588</strong> total execution time for <strong>how_to_work_with_relay</strong> files:</p>
+<p><strong>00:40.353</strong> total execution time for <strong>how_to_work_with_relay</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -354,15 +354,15 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="using_pipeline_executor.html#sphx-glr-how-to-work-with-relay-using-pipeline-executor-py"><span class="std std-ref">Using Pipeline Executor in Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_pipeline_executor.py</span></code>)</p></td>
-<td><p>00:35.473</p></td>
+<td><p>00:35.249</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="using_external_lib.html#sphx-glr-how-to-work-with-relay-using-external-lib-py"><span class="std std-ref">Using External Libraries in Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_external_lib.py</span></code>)</p></td>
-<td><p>00:03.228</p></td>
+<td><p>00:03.274</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="build_gcn.html#sphx-glr-how-to-work-with-relay-build-gcn-py"><span class="std std-ref">Building a Graph Convolutional Network</span></a> (<code class="docutils literal notranslate"><span class="pre">build_gcn.py</span></code>)</p></td>
-<td><p>00:01.881</p></td>
+<td><p>00:01.824</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="using_relay_viz.html#sphx-glr-how-to-work-with-relay-using-relay-viz-py"><span class="std std-ref">Use Relay Visualizer to Visualize Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_relay_viz.py</span></code>)</p></td>
diff --git a/docs/how_to/work_with_schedules/intrin_math.html b/docs/how_to/work_with_schedules/intrin_math.html
index 0fd762f486..e190909a85 100644
--- a/docs/how_to/work_with_schedules/intrin_math.html
+++ b/docs/how_to/work_with_schedules/intrin_math.html
@@ -554,7 +554,7 @@ The following example customizes CUDA lowering rule for <code class="code docuti
 <a href="../../reference/api/python/ir.html#tvm.ir.register_intrin_lowering" title="tvm.ir.register_intrin_lowering" class="sphx-glr-backref-module-tvm-ir sphx-glr-backref-type-py-function"><span class="n">register_intrin_lowering</span></a><span class="p">(</span><span class="s2">&quot;tir.exp&quot;</span><span class="p">,</span> <span class="n">target</span><span class="o">=</span><span class="s2">&quot;cuda&quot;</span><span class="p">,</span> <span class="n">f</span><span class="o">= [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&lt;function my_cuda_math_rule at 0x7f8c104d91f0&gt;
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&lt;function my_cuda_math_rule at 0x7fb14837eaf0&gt;
 </pre></div>
 </div>
 <p>Register the rule to TVM with override option to override existing rule.
diff --git a/docs/how_to/work_with_schedules/sg_execution_times.html b/docs/how_to/work_with_schedules/sg_execution_times.html
index bdf0a0d58f..435c8b5d91 100644
--- a/docs/how_to/work_with_schedules/sg_execution_times.html
+++ b/docs/how_to/work_with_schedules/sg_execution_times.html
@@ -345,7 +345,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-schedules-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:06.477</strong> total execution time for <strong>how_to_work_with_schedules</strong> files:</p>
+<p><strong>00:06.424</strong> total execution time for <strong>how_to_work_with_schedules</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -354,27 +354,27 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="intrin_math.html#sphx-glr-how-to-work-with-schedules-intrin-math-py"><span class="std std-ref">Intrinsics and Math Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">intrin_math.py</span></code>)</p></td>
-<td><p>00:03.426</p></td>
+<td><p>00:03.385</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tensorize.html#sphx-glr-how-to-work-with-schedules-tensorize-py"><span class="std std-ref">Use Tensorize to Leverage Hardware Intrinsics</span></a> (<code class="docutils literal notranslate"><span class="pre">tensorize.py</span></code>)</p></td>
-<td><p>00:01.258</p></td>
+<td><p>00:01.230</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="reduction.html#sphx-glr-how-to-work-with-schedules-reduction-py"><span class="std std-ref">Reduction</span></a> (<code class="docutils literal notranslate"><span class="pre">reduction.py</span></code>)</p></td>
-<td><p>00:00.775</p></td>
+<td><p>00:00.777</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="scan.html#sphx-glr-how-to-work-with-schedules-scan-py"><span class="std std-ref">Scan and Recurrent Kernel</span></a> (<code class="docutils literal notranslate"><span class="pre">scan.py</span></code>)</p></td>
-<td><p>00:00.763</p></td>
+<td><p>00:00.770</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="extern_op.html#sphx-glr-how-to-work-with-schedules-extern-op-py"><span class="std std-ref">External Tensor Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">extern_op.py</span></code>)</p></td>
-<td><p>00:00.114</p></td>
+<td><p>00:00.120</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tedd.html#sphx-glr-how-to-work-with-schedules-tedd-py"><span class="std std-ref">Use Tensor Expression Debug Display (TEDD) for Visualization</span></a> (<code class="docutils literal notranslate"><span class="pre">tedd.py</span></code>)</p></td>
-<td><p>00:00.059</p></td>
+<td><p>00:00.063</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="schedule_primitives.html#sphx-glr-how-to-work-with-schedules-schedule-primitives-py"><span class="std std-ref">Schedule Primitives in TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">schedule_primitives.py</span></code>)</p></td>
@@ -382,7 +382,7 @@
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tuple_inputs.html#sphx-glr-how-to-work-with-schedules-tuple-inputs-py"><span class="std std-ref">Compute and Reduce with Tuple Inputs</span></a> (<code class="docutils literal notranslate"><span class="pre">tuple_inputs.py</span></code>)</p></td>
-<td><p>00:00.028</p></td>
+<td><p>00:00.027</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/install/nnpack.html b/docs/install/nnpack.html
index 60011897a9..2412542d0d 100644
--- a/docs/install/nnpack.html
+++ b/docs/install/nnpack.html
@@ -234,17 +234,7 @@
               <p class="caption" role="heading"><span class="caption-text">Getting Started</span></p>
 <ul class="current">
 <li class="toctree-l1 current"><a class="reference internal" href="index.html">Installing TVM</a><ul class="current">
-<li class="toctree-l2 current"><a class="reference internal" href="from_source.html">Install from Source</a><ul class="current">
-<li class="toctree-l3"><a class="reference internal" href="from_source.html#developers-get-source-from-github">Developers: Get Source from Github</a></li>
-<li class="toctree-l3"><a class="reference internal" href="from_source.html#build-the-shared-library">Build the Shared Library</a></li>
-<li class="toctree-l3"><a class="reference internal" href="from_source.html#python-package-installation">Python Package Installation</a></li>
-<li class="toctree-l3 current"><a class="reference internal" href="from_source.html#install-contrib-libraries">Install Contrib Libraries</a><ul class="current">
-<li class="toctree-l4 current"><a class="current reference internal" href="#">NNPACK Contrib Installation</a></li>
-</ul>
-</li>
-<li class="toctree-l3"><a class="reference internal" href="from_source.html#enable-c-tests">Enable C++ Tests</a></li>
-</ul>
-</li>
+<li class="toctree-l2"><a class="reference internal" href="from_source.html">Install from Source</a></li>
 <li class="toctree-l2"><a class="reference internal" href="docker.html">Docker Images</a></li>
 <li class="toctree-l2 current"><a class="current reference internal" href="#">NNPACK Contrib Installation</a><ul>
 <li class="toctree-l3"><a class="reference internal" href="#conditions">Conditions</a></li>
diff --git a/docs/reference/api/python/auto_scheduler.html b/docs/reference/api/python/auto_scheduler.html
index 1813171e47..239e83d464 100644
--- a/docs/reference/api/python/auto_scheduler.html
+++ b/docs/reference/api/python/auto_scheduler.html
@@ -1622,7 +1622,7 @@ history states as starting point to perform Evolutionary Search).</p></li>
 
 <dl class="py class">
 <dt class="sig sig-object py" id="tvm.auto_scheduler.SketchPolicy">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">SketchPolicy</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">program_cost_model</span></span><span class="o"><span class="pre">=</span></span><span class="defau [...]
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">SketchPolicy</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">program_cost_model</span></span><span class="o"><span class="pre">=</span></span><span class="defau [...]
 <dd><p>The search policy that searches in a hierarchical search space defined by sketches.
 The policy randomly samples programs from the space defined by sketches and use evolutionary
 search to fine-tune them.</p>
@@ -1906,7 +1906,7 @@ Candidates:
 
 <dl class="py function">
 <dt class="sig sig-object py" id="tvm.auto_scheduler.auto_schedule">
-<span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">auto_schedule</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">search_policy</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em clas [...]
+<span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">auto_schedule</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">search_policy</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em clas [...]
 <dd><p>THIS API IS DEPRECATED.</p>
 <p>Run auto scheduling search for a task.</p>
 <dl class="field-list simple">
diff --git a/docs/reference/api/typedoc/classes/bytestreamreader.html b/docs/reference/api/typedoc/classes/bytestreamreader.html
index 58a781396b..2605c97755 100644
--- a/docs/reference/api/typedoc/classes/bytestreamreader.html
+++ b/docs/reference/api/typedoc/classes/bytestreamreader.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -141,7 +141,7 @@
 					<div class="tsd-signature tsd-kind-icon">bytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Uint8Array</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -151,7 +151,7 @@
 					<div class="tsd-signature tsd-kind-icon">offset<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L44">rpc_server.ts:44</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L44">rpc_server.ts:44</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -168,7 +168,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L65">rpc_server.ts:65</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L65">rpc_server.ts:65</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">Uint8Array</span></h4>
@@ -185,7 +185,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L51">rpc_server.ts:51</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L51">rpc_server.ts:51</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -202,7 +202,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L59">rpc_server.ts:59</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L59">rpc_server.ts:59</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
diff --git a/docs/reference/api/typedoc/classes/cachedcallstack.html b/docs/reference/api/typedoc/classes/cachedcallstack.html
index c084a1dc18..fb6b677039 100644
--- a/docs/reference/api/typedoc/classes/cachedcallstack.html
+++ b/docs/reference/api/typedoc/classes/cachedcallstack.html
@@ -144,7 +144,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L223">memory.ts:223</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L223">memory.ts:223</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -172,7 +172,7 @@
 					<div class="tsd-signature tsd-kind-icon">temp<wbr>Args<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><a href="../interfaces/disposable.html" class="tsd-signature-type">Disposable</a><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = []</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L208">memory.ts:208</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L208">memory.ts:208</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -194,7 +194,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L312">memory.ts:312</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L312">memory.ts:312</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -226,7 +226,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L284">memory.ts:284</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L284">memory.ts:284</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -262,7 +262,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L388">memory.ts:388</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L388">memory.ts:388</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -300,7 +300,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L376">memory.ts:376</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L376">memory.ts:376</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -340,7 +340,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L267">memory.ts:267</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L267">memory.ts:267</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -373,7 +373,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L243">memory.ts:243</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L243">memory.ts:243</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -390,7 +390,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L321">memory.ts:321</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L321">memory.ts:321</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -422,7 +422,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L252">memory.ts:252</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L252">memory.ts:252</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -444,7 +444,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L359">memory.ts:359</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L359">memory.ts:359</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -470,7 +470,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L342">memory.ts:342</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L342">memory.ts:342</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -496,7 +496,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L350">memory.ts:350</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L350">memory.ts:350</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -522,7 +522,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L326">memory.ts:326</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L326">memory.ts:326</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -548,7 +548,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L363">memory.ts:363</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L363">memory.ts:363</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -574,7 +574,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L346">memory.ts:346</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L346">memory.ts:346</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -600,7 +600,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L334">memory.ts:334</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L334">memory.ts:334</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
diff --git a/docs/reference/api/typedoc/classes/dldatatype.html b/docs/reference/api/typedoc/classes/dldatatype.html
index 3a0f9603ec..584812fb11 100644
--- a/docs/reference/api/typedoc/classes/dldatatype.html
+++ b/docs/reference/api/typedoc/classes/dldatatype.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L359">runtime.ts:359</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L359">runtime.ts:359</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -147,7 +147,7 @@
 					<div class="tsd-signature tsd-kind-icon">bits<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L357">runtime.ts:357</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L357">runtime.ts:357</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -162,7 +162,7 @@
 					<div class="tsd-signature tsd-kind-icon">code<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L355">runtime.ts:355</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L355">runtime.ts:355</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -177,7 +177,7 @@
 					<div class="tsd-signature tsd-kind-icon">lanes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L359">runtime.ts:359</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L359">runtime.ts:359</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -199,7 +199,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L376">runtime.ts:376</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L376">runtime.ts:376</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -216,7 +216,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L367">runtime.ts:367</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L367">runtime.ts:367</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">string</span></h4>
diff --git a/docs/reference/api/typedoc/classes/dldevice.html b/docs/reference/api/typedoc/classes/dldevice.html
index 4a81332a50..ed8fe46ee4 100644
--- a/docs/reference/api/typedoc/classes/dldevice.html
+++ b/docs/reference/api/typedoc/classes/dldevice.html
@@ -118,7 +118,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L299">runtime.ts:299</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L299">runtime.ts:299</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -146,7 +146,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<wbr>Id<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L297">runtime.ts:297</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L297">runtime.ts:297</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -161,7 +161,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L295">runtime.ts:295</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L295">runtime.ts:295</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -183,7 +183,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L320">runtime.ts:320</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L320">runtime.ts:320</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -205,7 +205,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L327">runtime.ts:327</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L327">runtime.ts:327</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">string</span></h4>
diff --git a/docs/reference/api/typedoc/classes/environment.html b/docs/reference/api/typedoc/classes/environment.html
index 0c1fa188b8..3301f7412d 100644
--- a/docs/reference/api/typedoc/classes/environment.html
+++ b/docs/reference/api/typedoc/classes/environment.html
@@ -125,7 +125,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/environment.ts#L86">environment.ts:86</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/environment.ts#L86">environment.ts:86</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -169,7 +169,7 @@
 					<aside class="tsd-sources">
 						<p>Implementation of <a href="../interfaces/libraryprovider.html">LibraryProvider</a>.<a href="../interfaces/libraryprovider.html#imports">imports</a></p>
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/environment.ts#L70">environment.ts:70</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/environment.ts#L70">environment.ts:70</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -179,7 +179,7 @@
 					<div class="tsd-signature tsd-kind-icon">logger<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>msg<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/environment.ts#L69">environment.ts:69</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/environment.ts#L69">environment.ts:69</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -210,7 +210,7 @@
 					<div class="tsd-signature tsd-kind-icon">packedCFunc<wbr>Table<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">ctypes.FTVMWasmPackedCFunc</span><span class="tsd-signature-symbol"> | </span><span class="tsd-signature-type">undefined</span><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = [undefined,]</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/environment.ts#L78">environment.ts:78</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/environment.ts#L78">environment.ts:78</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -228,7 +228,7 @@
 					<div class="tsd-signature tsd-kind-icon">packedCFunc<wbr>Table<wbr>Free<wbr>Id<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = []</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/environment.ts#L84">environment.ts:84</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/environment.ts#L84">environment.ts:84</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -250,7 +250,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/environment.ts#L105">environment.ts:105</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/environment.ts#L105">environment.ts:105</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/ffilibrary.html b/docs/reference/api/typedoc/classes/ffilibrary.html
index bbdb6d20a5..f40cf3da6a 100644
--- a/docs/reference/api/typedoc/classes/ffilibrary.html
+++ b/docs/reference/api/typedoc/classes/ffilibrary.html
@@ -131,7 +131,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L50">runtime.ts:50</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L50">runtime.ts:50</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -156,7 +156,7 @@
 					<div class="tsd-signature tsd-kind-icon">exports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">Function</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L47">runtime.ts:47</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L47">runtime.ts:47</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -166,7 +166,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L46">runtime.ts:46</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L46">runtime.ts:46</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">wasm32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">boolean</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L45">runtime.ts:45</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L45">runtime.ts:45</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -186,7 +186,7 @@
 					<div class="tsd-signature tsd-kind-icon">webGPUContext<span class="tsd-signature-symbol">:</span> <a href="webgpucontext.html" class="tsd-signature-type">WebGPUContext</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L48">runtime.ts:48</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L48">runtime.ts:48</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -203,7 +203,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L77">runtime.ts:77</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L77">runtime.ts:77</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -226,7 +226,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L67">runtime.ts:67</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L67">runtime.ts:67</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -243,7 +243,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L85">runtime.ts:85</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L85">runtime.ts:85</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <a href="cachedcallstack.html" class="tsd-signature-type">CachedCallStack</a></h4>
@@ -260,7 +260,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L96">runtime.ts:96</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L96">runtime.ts:96</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -283,7 +283,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L73">runtime.ts:73</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L73">runtime.ts:73</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
diff --git a/docs/reference/api/typedoc/classes/instance.html b/docs/reference/api/typedoc/classes/instance.html
index 1b37d3ba64..a7813cea18 100644
--- a/docs/reference/api/typedoc/classes/instance.html
+++ b/docs/reference/api/typedoc/classes/instance.html
@@ -161,7 +161,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L844">runtime.ts:844</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L844">runtime.ts:844</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -224,7 +224,7 @@
 					<div class="tsd-signature tsd-kind-icon">exports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">Function</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L834">runtime.ts:834</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L834">runtime.ts:834</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -234,7 +234,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L833">runtime.ts:833</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L833">runtime.ts:833</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -251,7 +251,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L973">runtime.ts:973</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L973">runtime.ts:973</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -296,7 +296,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L932">runtime.ts:932</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L932">runtime.ts:932</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -318,7 +318,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L901">runtime.ts:901</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L901">runtime.ts:901</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -381,7 +381,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1215">runtime.ts:1215</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1215">runtime.ts:1215</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -412,7 +412,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1000">runtime.ts:1000</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1000">runtime.ts:1000</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -453,7 +453,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1207">runtime.ts:1207</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1207">runtime.ts:1207</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -491,7 +491,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L922">runtime.ts:922</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L922">runtime.ts:922</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -508,7 +508,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1235">runtime.ts:1235</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1235">runtime.ts:1235</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -552,7 +552,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L943">runtime.ts:943</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L943">runtime.ts:943</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -577,7 +577,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1088">runtime.ts:1088</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1088">runtime.ts:1088</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -609,7 +609,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1363">runtime.ts:1363</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1363">runtime.ts:1363</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -640,7 +640,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1123">runtime.ts:1123</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1123">runtime.ts:1123</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -672,7 +672,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1016">runtime.ts:1016</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1016">runtime.ts:1016</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -695,7 +695,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1281">runtime.ts:1281</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1281">runtime.ts:1281</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -729,7 +729,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L986">runtime.ts:986</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L986">runtime.ts:986</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -769,7 +769,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1341">runtime.ts:1341</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1341">runtime.ts:1341</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -817,7 +817,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1055">runtime.ts:1055</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1055">runtime.ts:1055</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -857,7 +857,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1320">runtime.ts:1320</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1320">runtime.ts:1320</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -900,7 +900,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1197">runtime.ts:1197</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1197">runtime.ts:1197</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -938,7 +938,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1491">runtime.ts:1491</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1491">runtime.ts:1491</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -990,7 +990,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1009">runtime.ts:1009</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1009">runtime.ts:1009</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1014,7 +1014,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1151">runtime.ts:1151</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1151">runtime.ts:1151</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1046,7 +1046,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1134">runtime.ts:1134</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1134">runtime.ts:1134</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1078,7 +1078,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1292">runtime.ts:1292</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1292">runtime.ts:1292</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1110,7 +1110,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1223">runtime.ts:1223</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1223">runtime.ts:1223</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1141,7 +1141,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L957">runtime.ts:957</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L957">runtime.ts:957</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/memory.html b/docs/reference/api/typedoc/classes/memory.html
index f63040e126..6bcd284375 100644
--- a/docs/reference/api/typedoc/classes/memory.html
+++ b/docs/reference/api/typedoc/classes/memory.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L40">memory.ts:40</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L40">memory.ts:40</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -152,7 +152,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Memory</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L32">memory.ts:32</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L32">memory.ts:32</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -162,7 +162,7 @@
 					<div class="tsd-signature tsd-kind-icon">wasm32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">boolean</span><span class="tsd-signature-symbol"> = true</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L33">memory.ts:33</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L33">memory.ts:33</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -179,7 +179,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L154">memory.ts:154</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L154">memory.ts:154</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -210,7 +210,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L90">memory.ts:90</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L90">memory.ts:90</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -233,7 +233,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L97">memory.ts:97</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L97">memory.ts:97</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -256,7 +256,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L74">memory.ts:74</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L74">memory.ts:74</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -279,7 +279,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L81">memory.ts:81</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L81">memory.ts:81</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -302,7 +302,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L104">memory.ts:104</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L104">memory.ts:104</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -325,7 +325,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L132">memory.ts:132</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L132">memory.ts:132</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -362,7 +362,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L145">memory.ts:145</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L145">memory.ts:145</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -393,7 +393,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L60">memory.ts:60</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L60">memory.ts:60</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -416,7 +416,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L67">memory.ts:67</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L67">memory.ts:67</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -439,7 +439,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L53">memory.ts:53</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L53">memory.ts:53</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -462,7 +462,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L114">memory.ts:114</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L114">memory.ts:114</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -485,7 +485,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L124">memory.ts:124</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L124">memory.ts:124</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -502,7 +502,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/memory.ts#L175">memory.ts:175</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/memory.ts#L175">memory.ts:175</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/module.html b/docs/reference/api/typedoc/classes/module.html
index f51101c25d..a8fd5ec583 100644
--- a/docs/reference/api/typedoc/classes/module.html
+++ b/docs/reference/api/typedoc/classes/module.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L614">runtime.ts:614</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L614">runtime.ts:614</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -169,7 +169,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L626">runtime.ts:626</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L626">runtime.ts:626</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -186,7 +186,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L653">runtime.ts:653</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L653">runtime.ts:653</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -218,7 +218,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L641">runtime.ts:641</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L641">runtime.ts:641</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -250,7 +250,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L687">runtime.ts:687</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L687">runtime.ts:687</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/ndarray.html b/docs/reference/api/typedoc/classes/ndarray.html
index 4ac685833e..8466dbf486 100644
--- a/docs/reference/api/typedoc/classes/ndarray.html
+++ b/docs/reference/api/typedoc/classes/ndarray.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L401">runtime.ts:401</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L401">runtime.ts:401</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -158,7 +158,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<span class="tsd-signature-symbol">:</span> <a href="dldevice.html" class="tsd-signature-type">DLDevice</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L394">runtime.ts:394</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L394">runtime.ts:394</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -173,7 +173,7 @@
 					<div class="tsd-signature tsd-kind-icon">dtype<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L390">runtime.ts:390</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L390">runtime.ts:390</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -188,7 +188,7 @@
 					<div class="tsd-signature tsd-kind-icon">ndim<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L388">runtime.ts:388</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L388">runtime.ts:388</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -203,7 +203,7 @@
 					<div class="tsd-signature tsd-kind-icon">shape<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L392">runtime.ts:392</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L392">runtime.ts:392</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -225,7 +225,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L480">runtime.ts:480</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L480">runtime.ts:480</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -258,7 +258,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L524">runtime.ts:524</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L524">runtime.ts:524</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -290,7 +290,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L465">runtime.ts:465</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L465">runtime.ts:465</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -307,7 +307,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L458">runtime.ts:458</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L458">runtime.ts:458</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -339,7 +339,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L584">runtime.ts:584</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L584">runtime.ts:584</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -363,7 +363,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L553">runtime.ts:553</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L553">runtime.ts:553</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/packedfunccell.html b/docs/reference/api/typedoc/classes/packedfunccell.html
index c35c3b1660..ef2f1cb991 100644
--- a/docs/reference/api/typedoc/classes/packedfunccell.html
+++ b/docs/reference/api/typedoc/classes/packedfunccell.html
@@ -117,7 +117,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L248">runtime.ts:248</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L248">runtime.ts:248</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -146,7 +146,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L255">runtime.ts:255</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L255">runtime.ts:255</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -163,7 +163,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L264">runtime.ts:264</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L264">runtime.ts:264</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
diff --git a/docs/reference/api/typedoc/classes/rpcserver.html b/docs/reference/api/typedoc/classes/rpcserver.html
index 220aedfa75..3edc2c825e 100644
--- a/docs/reference/api/typedoc/classes/rpcserver.html
+++ b/docs/reference/api/typedoc/classes/rpcserver.html
@@ -115,7 +115,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L95">rpc_server.ts:95</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L95">rpc_server.ts:95</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">get<wbr>Imports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">unknown</span><span class="tsd-signat [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L84">rpc_server.ts:84</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L84">rpc_server.ts:84</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -201,7 +201,7 @@
 					<div class="tsd-signature tsd-kind-icon">key<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L80">rpc_server.ts:80</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L80">rpc_server.ts:80</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -211,7 +211,7 @@
 					<div class="tsd-signature tsd-kind-icon">logger<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>msg<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L83">rpc_server.ts:83</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L83">rpc_server.ts:83</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -242,7 +242,7 @@
 					<div class="tsd-signature tsd-kind-icon">socket<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">WebSocket</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L81">rpc_server.ts:81</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L81">rpc_server.ts:81</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -252,7 +252,7 @@
 					<div class="tsd-signature tsd-kind-icon">state<span class="tsd-signature-symbol">:</span> <a href="../enums/rpcserverstate.html" class="tsd-signature-type">RPCServerState</a><span class="tsd-signature-symbol"> = RPCServerState.InitHeader</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L82">rpc_server.ts:82</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L82">rpc_server.ts:82</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -262,7 +262,7 @@
 					<div class="tsd-signature tsd-kind-icon">url<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L79">rpc_server.ts:79</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L79">rpc_server.ts:79</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/classes/runtimecontext.html b/docs/reference/api/typedoc/classes/runtimecontext.html
index 82a87974c7..ed6d19b74a 100644
--- a/docs/reference/api/typedoc/classes/runtimecontext.html
+++ b/docs/reference/api/typedoc/classes/runtimecontext.html
@@ -132,7 +132,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L148">runtime.ts:148</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L148">runtime.ts:148</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -172,7 +172,7 @@
 					<div class="tsd-signature tsd-kind-icon">array<wbr>Get<wbr>Item<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L143">runtime.ts:143</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L143">runtime.ts:143</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -182,7 +182,7 @@
 					<div class="tsd-signature tsd-kind-icon">array<wbr>Get<wbr>Size<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L144">runtime.ts:144</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L144">runtime.ts:144</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -192,7 +192,7 @@
 					<div class="tsd-signature tsd-kind-icon">array<wbr>Make<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L145">runtime.ts:145</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L145">runtime.ts:145</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -202,7 +202,7 @@
 					<div class="tsd-signature tsd-kind-icon">get<wbr>Sys<wbr>Lib<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L146">runtime.ts:146</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L146">runtime.ts:146</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -219,7 +219,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L189">runtime.ts:189</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L189">runtime.ts:189</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -263,7 +263,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L163">runtime.ts:163</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L163">runtime.ts:163</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -280,7 +280,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L208">runtime.ts:208</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L208">runtime.ts:208</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-type-parameters-title">Type parameters</h4>
@@ -309,7 +309,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L157">runtime.ts:157</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L157">runtime.ts:157</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -326,7 +326,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L167">runtime.ts:167</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L167">runtime.ts:167</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -343,7 +343,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L198">runtime.ts:198</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L198">runtime.ts:198</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-type-parameters-title">Type parameters</h4>
diff --git a/docs/reference/api/typedoc/classes/scalar.html b/docs/reference/api/typedoc/classes/scalar.html
index 4a1bbf5da1..b950467eae 100644
--- a/docs/reference/api/typedoc/classes/scalar.html
+++ b/docs/reference/api/typedoc/classes/scalar.html
@@ -112,7 +112,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L235">runtime.ts:235</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L235">runtime.ts:235</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -137,7 +137,7 @@
 					<div class="tsd-signature tsd-kind-icon">dtype<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L235">runtime.ts:235</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L235">runtime.ts:235</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -152,7 +152,7 @@
 					<div class="tsd-signature tsd-kind-icon">value<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L233">runtime.ts:233</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L233">runtime.ts:233</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/tvmarray.html b/docs/reference/api/typedoc/classes/tvmarray.html
index 96e62c980e..fc35c0b414 100644
--- a/docs/reference/api/typedoc/classes/tvmarray.html
+++ b/docs/reference/api/typedoc/classes/tvmarray.html
@@ -133,7 +133,7 @@
 							<aside class="tsd-sources">
 								<p>Overrides <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#constructor">constructor</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L784">runtime.ts:784</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L784">runtime.ts:784</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -162,7 +162,7 @@
 					<aside class="tsd-sources">
 						<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#ctx">ctx</a></p>
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L703">runtime.ts:703</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L703">runtime.ts:703</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -180,7 +180,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#dispose">dispose</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L715">runtime.ts:715</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L715">runtime.ts:715</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -197,7 +197,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L804">runtime.ts:804</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L804">runtime.ts:804</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -230,7 +230,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#gethandle">getHandle</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L730">runtime.ts:730</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L730">runtime.ts:730</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -262,7 +262,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L796">runtime.ts:796</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L796">runtime.ts:796</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -283,7 +283,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#typeindex">typeIndex</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L738">runtime.ts:738</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L738">runtime.ts:738</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -306,7 +306,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#typekey">typeKey</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L758">runtime.ts:758</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L758">runtime.ts:758</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/tvmobject.html b/docs/reference/api/typedoc/classes/tvmobject.html
index fa971f5606..a92ea6f5dd 100644
--- a/docs/reference/api/typedoc/classes/tvmobject.html
+++ b/docs/reference/api/typedoc/classes/tvmobject.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L703">runtime.ts:703</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L703">runtime.ts:703</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -158,7 +158,7 @@
 					<div class="tsd-signature tsd-kind-icon">ctx<span class="tsd-signature-symbol">:</span> <a href="runtimecontext.html" class="tsd-signature-type">RuntimeContext</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L703">runtime.ts:703</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L703">runtime.ts:703</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -175,7 +175,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L715">runtime.ts:715</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L715">runtime.ts:715</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -192,7 +192,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L730">runtime.ts:730</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L730">runtime.ts:730</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -224,7 +224,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L738">runtime.ts:738</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L738">runtime.ts:738</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -246,7 +246,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L758">runtime.ts:758</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L758">runtime.ts:758</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/webgpucontext.html b/docs/reference/api/typedoc/classes/webgpucontext.html
index 9ab7b7ea0f..92a986aebb 100644
--- a/docs/reference/api/typedoc/classes/webgpucontext.html
+++ b/docs/reference/api/typedoc/classes/webgpucontext.html
@@ -120,7 +120,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/webgpu.ts#L57">webgpu.ts:57</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/webgpu.ts#L57">webgpu.ts:57</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -145,7 +145,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">GPUDevice</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/webgpu.ts#L50">webgpu.ts:50</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/webgpu.ts#L50">webgpu.ts:50</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -155,7 +155,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/webgpu.ts#L51">webgpu.ts:51</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/webgpu.ts#L51">webgpu.ts:51</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -172,7 +172,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/webgpu.ts#L84">webgpu.ts:84</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/webgpu.ts#L84">webgpu.ts:84</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -209,7 +209,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/webgpu.ts#L172">webgpu.ts:172</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/webgpu.ts#L172">webgpu.ts:172</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -238,7 +238,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/webgpu.ts#L67">webgpu.ts:67</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/webgpu.ts#L67">webgpu.ts:67</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/enums/argtypecode.html b/docs/reference/api/typedoc/enums/argtypecode.html
index 0c5859c80c..b7ace6c1e1 100644
--- a/docs/reference/api/typedoc/enums/argtypecode.html
+++ b/docs/reference/api/typedoc/enums/argtypecode.html
@@ -106,7 +106,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLDevice<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 6</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L242">ctypes.ts:242</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L242">ctypes.ts:242</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -116,7 +116,7 @@
 					<div class="tsd-signature tsd-kind-icon">Float<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L238">ctypes.ts:238</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L238">ctypes.ts:238</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -126,7 +126,7 @@
 					<div class="tsd-signature tsd-kind-icon">Int<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L236">ctypes.ts:236</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L236">ctypes.ts:236</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -136,7 +136,7 @@
 					<div class="tsd-signature tsd-kind-icon">Null<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L240">ctypes.ts:240</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L240">ctypes.ts:240</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -146,7 +146,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMBytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 12</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L248">ctypes.ts:248</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L248">ctypes.ts:248</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -156,7 +156,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMDLTensor<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 7</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L243">ctypes.ts:243</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L243">ctypes.ts:243</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -166,7 +166,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMData<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 5</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L241">ctypes.ts:241</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L241">ctypes.ts:241</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMModule<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 9</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L245">ctypes.ts:245</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L245">ctypes.ts:245</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -186,7 +186,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMNDArray<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 13</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L249">ctypes.ts:249</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L249">ctypes.ts:249</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -196,7 +196,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMObject<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L244">ctypes.ts:244</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L244">ctypes.ts:244</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -206,7 +206,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMObjectRValue<wbr>Ref<wbr>Arg<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 14</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L250">ctypes.ts:250</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L250">ctypes.ts:250</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -216,7 +216,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMOpaque<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 3</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L239">ctypes.ts:239</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L239">ctypes.ts:239</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -226,7 +226,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMPacked<wbr>Func<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 10</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L246">ctypes.ts:246</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L246">ctypes.ts:246</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -236,7 +236,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 11</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L247">ctypes.ts:247</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L247">ctypes.ts:247</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -246,7 +246,7 @@
 					<div class="tsd-signature tsd-kind-icon">UInt<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L237">ctypes.ts:237</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L237">ctypes.ts:237</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/aynccallbackcode.html b/docs/reference/api/typedoc/enums/aynccallbackcode.html
index c0210588fe..20c5c23dbb 100644
--- a/docs/reference/api/typedoc/enums/aynccallbackcode.html
+++ b/docs/reference/api/typedoc/enums/aynccallbackcode.html
@@ -93,7 +93,7 @@
 					<div class="tsd-signature tsd-kind-icon">k<wbr>Exception<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 5</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L812">runtime.ts:812</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L812">runtime.ts:812</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -103,7 +103,7 @@
 					<div class="tsd-signature tsd-kind-icon">k<wbr>Return<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L811">runtime.ts:811</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L811">runtime.ts:811</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/dldatatypecode.html b/docs/reference/api/typedoc/enums/dldatatypecode.html
index 8af9ebaeb8..d7ba61cb14 100644
--- a/docs/reference/api/typedoc/enums/dldatatypecode.html
+++ b/docs/reference/api/typedoc/enums/dldatatypecode.html
@@ -95,7 +95,7 @@
 					<div class="tsd-signature tsd-kind-icon">Float<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L339">runtime.ts:339</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L339">runtime.ts:339</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -105,7 +105,7 @@
 					<div class="tsd-signature tsd-kind-icon">Int<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L337">runtime.ts:337</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L337">runtime.ts:337</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -115,7 +115,7 @@
 					<div class="tsd-signature tsd-kind-icon">Opaque<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 3</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L340">runtime.ts:340</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L340">runtime.ts:340</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -125,7 +125,7 @@
 					<div class="tsd-signature tsd-kind-icon">UInt<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L338">runtime.ts:338</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L338">runtime.ts:338</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/rpcserverstate.html b/docs/reference/api/typedoc/enums/rpcserverstate.html
index 7bf9a0e4ca..12a0ad1fb6 100644
--- a/docs/reference/api/typedoc/enums/rpcserverstate.html
+++ b/docs/reference/api/typedoc/enums/rpcserverstate.html
@@ -90,7 +90,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Header<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L29">rpc_server.ts:29</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L29">rpc_server.ts:29</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -100,7 +100,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Header<wbr>Key<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L30">rpc_server.ts:30</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L30">rpc_server.ts:30</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -110,7 +110,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Server<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L31">rpc_server.ts:31</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L31">rpc_server.ts:31</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -120,7 +120,7 @@
 					<div class="tsd-signature tsd-kind-icon">Receive<wbr>Packet<wbr>Body<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L34">rpc_server.ts:34</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L34">rpc_server.ts:34</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -130,7 +130,7 @@
 					<div class="tsd-signature tsd-kind-icon">Receive<wbr>Packet<wbr>Header<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L33">rpc_server.ts:33</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L33">rpc_server.ts:33</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -140,7 +140,7 @@
 					<div class="tsd-signature tsd-kind-icon">Wait<wbr>For<wbr>Callback<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L32">rpc_server.ts:32</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L32">rpc_server.ts:32</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/sizeof.html b/docs/reference/api/typedoc/enums/sizeof.html
index eb3c8ce833..49ce9644eb 100644
--- a/docs/reference/api/typedoc/enums/sizeof.html
+++ b/docs/reference/api/typedoc/enums/sizeof.html
@@ -100,7 +100,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLData<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = I32</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L228">ctypes.ts:228</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L228">ctypes.ts:228</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -110,7 +110,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLDevice<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = I32 + I32</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L229">ctypes.ts:229</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L229">ctypes.ts:229</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -120,7 +120,7 @@
 					<div class="tsd-signature tsd-kind-icon">F32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L225">ctypes.ts:225</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L225">ctypes.ts:225</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -130,7 +130,7 @@
 					<div class="tsd-signature tsd-kind-icon">F64<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L226">ctypes.ts:226</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L226">ctypes.ts:226</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -140,7 +140,7 @@
 					<div class="tsd-signature tsd-kind-icon">I32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L223">ctypes.ts:223</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L223">ctypes.ts:223</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -150,7 +150,7 @@
 					<div class="tsd-signature tsd-kind-icon">I64<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L224">ctypes.ts:224</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L224">ctypes.ts:224</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -160,7 +160,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMValue<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L227">ctypes.ts:227</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L227">ctypes.ts:227</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -170,7 +170,7 @@
 					<div class="tsd-signature tsd-kind-icon">U16<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L222">ctypes.ts:222</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L222">ctypes.ts:222</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -180,7 +180,7 @@
 					<div class="tsd-signature tsd-kind-icon">U8<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L221">ctypes.ts:221</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L221">ctypes.ts:221</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/index.html b/docs/reference/api/typedoc/index.html
index c94ee9acbe..8e0d550bc1 100644
--- a/docs/reference/api/typedoc/index.html
+++ b/docs/reference/api/typedoc/index.html
@@ -182,7 +182,7 @@
 					<div class="tsd-signature tsd-kind-icon">FObject<wbr>Constructor<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, lib<span class="tsd-signature-symbol">: </span><a href="classes/ffilibrary.html" class="tsd-signature-type">FFILibrary</a>, ctx<span class="tsd-signature-symbol">: </span><a href="classes/runtimecontext.html" class="t [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L778">runtime.ts:778</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L778">runtime.ts:778</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -224,7 +224,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Alloc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>shape<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, ndim<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, dtypeCode<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, dtypeBits<span class="tsd [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L113">ctypes.ts:113</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L113">ctypes.ts:113</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -288,7 +288,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>From<wbr>Bytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, data<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nbytes<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">num [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L129">ctypes.ts:129</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L129">ctypes.ts:129</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -332,7 +332,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>From<wbr>To<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>from<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, to<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, stream<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-sig [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L145">ctypes.ts:145</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L145">ctypes.ts:145</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -376,7 +376,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>ToBytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, data<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nbytes<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</sp [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L137">ctypes.ts:137</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L137">ctypes.ts:137</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -420,7 +420,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L122">ctypes.ts:122</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L122">ctypes.ts:122</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -456,7 +456,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMBackend<wbr>PackedCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>argValues<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, argCodes<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nargs<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number< [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L161">ctypes.ts:161</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L161">ctypes.ts:161</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -508,7 +508,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMCFunc<wbr>Set<wbr>Return<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>ret<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, value<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCode<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signa [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L78">ctypes.ts:78</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L78">ctypes.ts:78</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -556,7 +556,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMCb<wbr>Arg<wbr>ToReturn<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>value<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, code<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span c [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L84">ctypes.ts:84</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L84">ctypes.ts:84</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -595,7 +595,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Call<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>func<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, argValues<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCode<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-t [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L68">ctypes.ts:68</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L68">ctypes.ts:68</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -651,7 +651,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>func<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L58">ctypes.ts:58</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L58">ctypes.ts:58</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -687,7 +687,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Get<wbr>Global<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>name<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span cla [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L101">ctypes.ts:101</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L101">ctypes.ts:101</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -726,7 +726,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>List<wbr>Global<wbr>Names<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>outSize<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, outArray<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&g [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L89">ctypes.ts:89</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L89">ctypes.ts:89</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -765,7 +765,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Register<wbr>Global<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>name<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, f<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, override<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</spa [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L95">ctypes.ts:95</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L95">ctypes.ts:95</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -808,7 +808,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMGet<wbr>Last<wbr>Error<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L34">ctypes.ts:34</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L34">ctypes.ts:34</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -838,7 +838,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L53">ctypes.ts:53</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L53">ctypes.ts:53</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -874,7 +874,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Get<wbr>Function<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, funcName<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, queryImports<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">numbe [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L42">ctypes.ts:42</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L42">ctypes.ts:42</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -922,7 +922,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Import<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, dep<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-si [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L48">ctypes.ts:48</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L48">ctypes.ts:48</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -962,7 +962,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>obj<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L169">ctypes.ts:169</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L169">ctypes.ts:169</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -998,7 +998,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Get<wbr>Type<wbr>Index<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>obj<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out_tindex<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt;  [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L174">ctypes.ts:174</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L174">ctypes.ts:174</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1037,7 +1037,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Type<wbr>Index2<wbr>Key<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>type_index<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, out_type_key<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><spa [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L179">ctypes.ts:179</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L179">ctypes.ts:179</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1076,7 +1076,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Type<wbr>Key2<wbr>Index<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>type_key<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out_tindex<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol">  [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L184">ctypes.ts:184</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L184">ctypes.ts:184</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1115,7 +1115,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMSynchronize<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>deviceType<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, deviceId<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, stream<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signatur [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L151">ctypes.ts:151</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L151">ctypes.ts:151</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1157,7 +1157,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Alloc<wbr>Space<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>size<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L189">ctypes.ts:189</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L189">ctypes.ts:189</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1193,7 +1193,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Free<wbr>Space<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>ptr<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L192">ctypes.ts:192</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L192">ctypes.ts:192</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1229,7 +1229,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Func<wbr>Create<wbr>FromCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>resource<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&g [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L209">ctypes.ts:209</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L209">ctypes.ts:209</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1269,7 +1269,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>PackedCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>args<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCodes<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nargs<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L201">ctypes.ts:201</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L201">ctypes.ts:201</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1321,7 +1321,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>PackedCFunc<wbr>Finalizer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>resourceHandle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L215">ctypes.ts:215</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L215">ctypes.ts:215</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1357,7 +1357,7 @@
 					<div class="tsd-signature tsd-kind-icon">GPUPointer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/webgpu.ts#L25">webgpu.ts:25</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/webgpu.ts#L25">webgpu.ts:25</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1372,7 +1372,7 @@
 					<div class="tsd-signature tsd-kind-icon">Packed<wbr>Func<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">...</span>args<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol"> &amp; </span><a href="interfaces/disp [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L37">runtime.ts:37</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L37">runtime.ts:37</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1387,7 +1387,7 @@
 					<div class="tsd-signature tsd-kind-icon">Pointer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L25">ctypes.ts:25</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L25">ctypes.ts:25</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1402,7 +1402,7 @@
 					<div class="tsd-signature tsd-kind-icon">Ptr<wbr>Offset<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/ctypes.ts#L28">ctypes.ts:28</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/ctypes.ts#L28">ctypes.ts:28</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1417,7 +1417,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMObject<wbr>Base<span class="tsd-signature-symbol">:</span> <a href="classes/tvmobject.html" class="tsd-signature-type">TVMObject</a><span class="tsd-signature-symbol"> | </span><a href="classes/ndarray.html" class="tsd-signature-type">NDArray</a><span class="tsd-signature-symbol"> | </span><a href="classes/module.html" class="tsd-signature-type">Module</a><span class="tsd-signature-symbol"> | </span><a href="index.html#packedfunc" class="t [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L781">runtime.ts:781</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L781">runtime.ts:781</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1435,7 +1435,7 @@
 					<div class="tsd-signature tsd-kind-icon">RPC_<wbr>MAGIC<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">1045105</span><span class="tsd-signature-symbol"> = 1045105</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/rpc_server.ts#L38">rpc_server.ts:38</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/rpc_server.ts#L38">rpc_server.ts:38</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1457,7 +1457,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/support.ts#L25">support.ts:25</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/support.ts#L25">support.ts:25</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1489,7 +1489,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/support.ts#L39">support.ts:39</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/support.ts#L39">support.ts:39</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1518,7 +1518,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/support.ts#L52">support.ts:52</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/support.ts#L52">support.ts:52</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1555,7 +1555,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/compact.ts#L38">compact.ts:38</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/compact.ts#L38">compact.ts:38</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1586,7 +1586,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/webgpu.ts#L30">webgpu.ts:30</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/webgpu.ts#L30">webgpu.ts:30</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1608,7 +1608,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/environment.ts#L32">environment.ts:32</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/environment.ts#L32">environment.ts:32</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1639,7 +1639,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/compact.ts#L24">compact.ts:24</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/compact.ts#L24">compact.ts:24</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1661,7 +1661,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L1749">runtime.ts:1749</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L1749">runtime.ts:1749</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1726,7 +1726,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/support.ts#L62">support.ts:62</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/support.ts#L62">support.ts:62</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1748,7 +1748,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLData<wbr>Type<wbr>Code<wbr>ToStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L343">runtime.ts:343</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L343">runtime.ts:343</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1757,7 +1757,7 @@
 						<div class="tsd-signature tsd-kind-icon">0<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;int&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L344">runtime.ts:344</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L344">runtime.ts:344</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1767,7 +1767,7 @@
 						<div class="tsd-signature tsd-kind-icon">1<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;uint&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L345">runtime.ts:345</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L345">runtime.ts:345</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1777,7 +1777,7 @@
 						<div class="tsd-signature tsd-kind-icon">2<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;float&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L346">runtime.ts:346</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L346">runtime.ts:346</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1787,7 +1787,7 @@
 						<div class="tsd-signature tsd-kind-icon">3<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;handle&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L347">runtime.ts:347</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L347">runtime.ts:347</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1798,7 +1798,7 @@
 					<div class="tsd-signature tsd-kind-icon">Device<wbr>Enum<wbr>ToStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L272">runtime.ts:272</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L272">runtime.ts:272</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1807,7 +1807,7 @@
 						<div class="tsd-signature tsd-kind-icon">1<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;cpu&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L273">runtime.ts:273</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L273">runtime.ts:273</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1817,7 +1817,7 @@
 						<div class="tsd-signature tsd-kind-icon">15<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;webgpu&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L277">runtime.ts:277</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L277">runtime.ts:277</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1827,7 +1827,7 @@
 						<div class="tsd-signature tsd-kind-icon">2<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;cuda&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L274">runtime.ts:274</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L274">runtime.ts:274</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1837,7 +1837,7 @@
 						<div class="tsd-signature tsd-kind-icon">4<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;opencl&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L275">runtime.ts:275</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L275">runtime.ts:275</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1847,7 +1847,7 @@
 						<div class="tsd-signature tsd-kind-icon">8<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;metal&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L276">runtime.ts:276</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L276">runtime.ts:276</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1858,7 +1858,7 @@
 					<div class="tsd-signature tsd-kind-icon">Device<wbr>Str<wbr>ToEnum<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L280">runtime.ts:280</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L280">runtime.ts:280</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1867,7 +1867,7 @@
 						<div class="tsd-signature tsd-kind-icon">cl<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 4</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L283">runtime.ts:283</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L283">runtime.ts:283</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1877,7 +1877,7 @@
 						<div class="tsd-signature tsd-kind-icon">cpu<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 1</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L281">runtime.ts:281</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L281">runtime.ts:281</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1887,7 +1887,7 @@
 						<div class="tsd-signature tsd-kind-icon">cuda<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 2</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L282">runtime.ts:282</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L282">runtime.ts:282</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1897,7 +1897,7 @@
 						<div class="tsd-signature tsd-kind-icon">metal<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 8</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L286">runtime.ts:286</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L286">runtime.ts:286</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1907,7 +1907,7 @@
 						<div class="tsd-signature tsd-kind-icon">opencl<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 4</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L284">runtime.ts:284</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L284">runtime.ts:284</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1917,7 +1917,7 @@
 						<div class="tsd-signature tsd-kind-icon">vulkan<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 7</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L285">runtime.ts:285</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L285">runtime.ts:285</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1927,7 +1927,7 @@
 						<div class="tsd-signature tsd-kind-icon">webgpu<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 15</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/runtime.ts#L287">runtime.ts:287</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/runtime.ts#L287">runtime.ts:287</a></li>
 							</ul>
 						</aside>
 					</section>
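Note for readers skimming this diff: the DeviceEnumToStr / DeviceStrToEnum and DLDataTypeCodeToStr entries above are plain lookup tables in web/src/runtime.ts whose documented values appear verbatim in the hunks. A minimal TypeScript sketch that mirrors those literal values follows; it is illustrative only and is not the tvmjs source.

    // Illustrative lookup tables; names and numeric values are copied from the
    // typedoc hunks above (web/src/runtime.ts), not re-derived.
    const DeviceStrToEnum: Record<string, number> = {
      cpu: 1,
      cuda: 2,
      cl: 4,       // shorthand alias documented alongside opencl
      opencl: 4,
      vulkan: 7,
      metal: 8,
      webgpu: 15,
    };

    const DeviceEnumToStr: Record<number, string> = {
      1: "cpu",
      2: "cuda",
      4: "opencl",
      8: "metal",
      15: "webgpu",
    };

    const DLDataTypeCodeToStr: Record<number, string> = {
      0: "int",
      1: "uint",
      2: "float",
      3: "handle",
    };

    // Round trip: "webgpu" -> 15 -> "webgpu"
    console.log(DeviceEnumToStr[DeviceStrToEnum["webgpu"]]);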
diff --git a/docs/reference/api/typedoc/interfaces/disposable.html b/docs/reference/api/typedoc/interfaces/disposable.html
index 303c4e95e0..89659e8198 100644
--- a/docs/reference/api/typedoc/interfaces/disposable.html
+++ b/docs/reference/api/typedoc/interfaces/disposable.html
@@ -115,7 +115,7 @@
 					<div class="tsd-signature tsd-kind-icon">dispose<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/types.ts#L52">types.ts:52</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/types.ts#L52">types.ts:52</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/interfaces/functioninfo.html b/docs/reference/api/typedoc/interfaces/functioninfo.html
index 958a686845..e998a4084d 100644
--- a/docs/reference/api/typedoc/interfaces/functioninfo.html
+++ b/docs/reference/api/typedoc/interfaces/functioninfo.html
@@ -95,7 +95,7 @@
 					<div class="tsd-signature tsd-kind-icon">arg_<wbr>types<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/webgpu.ts#L41">webgpu.ts:41</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/webgpu.ts#L41">webgpu.ts:41</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -105,7 +105,7 @@
 					<div class="tsd-signature tsd-kind-icon">launch_<wbr>param_<wbr>tags<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/webgpu.ts#L42">webgpu.ts:42</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/webgpu.ts#L42">webgpu.ts:42</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -115,7 +115,7 @@
 					<div class="tsd-signature tsd-kind-icon">name<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/webgpu.ts#L40">webgpu.ts:40</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/webgpu.ts#L40">webgpu.ts:40</a></li>
 						</ul>
 					</aside>
 				</section>
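The FunctionInfo hunks above only update source links; for orientation, the documented fields (name, arg_types, launch_param_tags in web/src/webgpu.ts) amount to the shape sketched below. The field names come from the hunks; the sample values are hypothetical.

    // Rough shape of FunctionInfo as documented above; sample values are
    // invented for illustration and do not come from the TVM sources.
    interface FunctionInfoSketch {
      name: string;
      arg_types: Array<string>;
      launch_param_tags: Array<string>;
    }

    const example: FunctionInfoSketch = {
      name: "fused_conv2d_kernel0",                     // hypothetical kernel name
      arg_types: ["handle", "handle", "handle"],        // hypothetical
      launch_param_tags: ["blockIdx.x", "threadIdx.x"], // hypothetical
    };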
diff --git a/docs/reference/api/typedoc/interfaces/libraryprovider.html b/docs/reference/api/typedoc/interfaces/libraryprovider.html
index b3b5c9ee25..e57dcdbca9 100644
--- a/docs/reference/api/typedoc/interfaces/libraryprovider.html
+++ b/docs/reference/api/typedoc/interfaces/libraryprovider.html
@@ -112,7 +112,7 @@
 					<div class="tsd-signature tsd-kind-icon">imports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/types.ts#L34">types.ts:34</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/types.ts#L34">types.ts:34</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -127,7 +127,7 @@
 					<div class="tsd-signature tsd-kind-icon">start<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>inst<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">Instance</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/cca7d7833/web/src/types.ts#L39">types.ts:39</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/a1c1ccafa/web/src/types.ts#L39">types.ts:39</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
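As a quick orientation for the LibraryProvider hunks above (web/src/types.ts documents an imports record plus a start callback), here is a minimal sketch of an object with that shape. Everything named here is illustrative, assumed for the example rather than taken from the TVM sources.

    // Minimal object matching the documented LibraryProvider shape:
    // `imports` is merged into the WebAssembly import object, and `start`
    // is called with the created runtime instance.
    const myLibProvider = {
      imports: {
        env: {
          // hypothetical host function exposed to the wasm module
          custom_log: (code: number) => console.log("code:", code),
        },
      },
      start: (inst: unknown) => {
        // invoked once the runtime instance exists
        console.log("runtime started", inst);
      },
    };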
diff --git a/docs/searchindex.js b/docs/searchindex.js
index 157a03e4b6..0382167654 100644
--- a/docs/searchindex.js
+++ b/docs/searchindex.js
@@ -1 +1 @@
-Search.setIndex({docnames:["arch/benchmark","arch/convert_layout","arch/debugger","arch/device_target_interactions","arch/frontend/tensorflow","arch/hybrid_script","arch/index","arch/inferbound","arch/introduction_to_module_serialization","arch/microtvm_design","arch/microtvm_project_api","arch/model_library_format","arch/pass_infra","arch/relay_intro","arch/relay_op_strategy","arch/runtime","arch/runtimes/vulkan","arch/security","arch/virtual_machine","contribute/ci","contribute/code_gu [...]
\ No newline at end of file
+Search.setIndex({docnames:["arch/benchmark","arch/convert_layout","arch/debugger","arch/device_target_interactions","arch/frontend/tensorflow","arch/hybrid_script","arch/index","arch/inferbound","arch/introduction_to_module_serialization","arch/microtvm_design","arch/microtvm_project_api","arch/model_library_format","arch/pass_infra","arch/relay_intro","arch/relay_op_strategy","arch/runtime","arch/runtimes/vulkan","arch/security","arch/virtual_machine","contribute/ci","contribute/code_gu [...]
\ No newline at end of file
diff --git a/docs/topic/vta/tutorials/autotvm/sg_execution_times.html b/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
index 49f14cdd5e..4c4eddd8c6 100644
--- a/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
@@ -345,7 +345,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-autotvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:34.356</strong> total execution time for <strong>topic_vta_tutorials_autotvm</strong> files:</p>
+<p><strong>00:34.583</strong> total execution time for <strong>topic_vta_tutorials_autotvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 82%" />
@@ -354,7 +354,7 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_relay_vta.html#sphx-glr-topic-vta-tutorials-autotvm-tune-relay-vta-py"><span class="std std-ref">Auto-tuning a convolutional network on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_vta.py</span></code>)</p></td>
-<td><p>00:34.349</p></td>
+<td><p>00:34.575</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_alu_vta.html#sphx-glr-topic-vta-tutorials-autotvm-tune-alu-vta-py"><span class="std std-ref">Auto-tuning a ALU fused op on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_alu_vta.py</span></code>)</p></td>
diff --git a/docs/topic/vta/tutorials/frontend/deploy_classification.html b/docs/topic/vta/tutorials/frontend/deploy_classification.html
index 65ff23838a..6912e9cb5c 100644
--- a/docs/topic/vta/tutorials/frontend/deploy_classification.html
+++ b/docs/topic/vta/tutorials/frontend/deploy_classification.html
@@ -588,7 +588,7 @@ and dense layer which will both be executed in fp32 on the CPU.</p></li>
   warnings.warn(
 /workspace/vta/tutorials/frontend/deploy_classification.py:212: DeprecationWarning: legacy graph executor behavior of producing json / lib / params will be removed in the next release. Please see documents of tvm.contrib.graph_executor.GraphModule for the  new recommended usage.
   graph, lib, params = relay.build(
-resnet18_v1 inference graph built in 36.86s!
+resnet18_v1 inference graph built in 36.40s!
 </pre></div>
 </div>
 </div>
@@ -685,7 +685,7 @@ resnet18_v1 prediction for sample 0
         #5: weasel
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  4.176 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  3.760 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-topic-vta-tutorials-frontend-deploy-classification-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../../../_downloads/9e8de33a5822b31748bfd76861009f92/deploy_classification.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_classification.py</span></code></a></p>
diff --git a/docs/topic/vta/tutorials/frontend/deploy_detection.html b/docs/topic/vta/tutorials/frontend/deploy_detection.html
index 5fda82815e..210d1f681a 100644
--- a/docs/topic/vta/tutorials/frontend/deploy_detection.html
+++ b/docs/topic/vta/tutorials/frontend/deploy_detection.html
@@ -606,7 +606,7 @@ and dense layer which will both be executed in fp32 on the CPU.</p></li>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/workspace/python/tvm/relay/build_module.py:345: DeprecationWarning: Please use input parameter mod (tvm.IRModule) instead of deprecated parameter mod (tvm.relay.function.Function)
   warnings.warn(
-yolov3-tiny inference graph built in 24.90s!
+yolov3-tiny inference graph built in 25.12s!
 </pre></div>
 </div>
 </div>
@@ -691,7 +691,7 @@ Download test image</p>
         alu_counter     :           849056
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  8.251 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  8.513 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-topic-vta-tutorials-frontend-deploy-detection-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../../../_downloads/65b9451c8de050d7cd9da2fe5a49acc6/deploy_detection.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_detection.py</span></code></a></p>
diff --git a/docs/topic/vta/tutorials/frontend/sg_execution_times.html b/docs/topic/vta/tutorials/frontend/sg_execution_times.html
index 775d6a1c4b..78bdd12649 100644
--- a/docs/topic/vta/tutorials/frontend/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/frontend/sg_execution_times.html
@@ -345,7 +345,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-frontend-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>02:12.427</strong> total execution time for <strong>topic_vta_tutorials_frontend</strong> files:</p>
+<p><strong>02:12.273</strong> total execution time for <strong>topic_vta_tutorials_frontend</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -354,11 +354,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_detection.html#sphx-glr-topic-vta-tutorials-frontend-deploy-detection-py"><span class="std std-ref">Deploy Pretrained Vision Detection Model from Darknet on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_detection.py</span></code>)</p></td>
-<td><p>01:08.251</p></td>
+<td><p>01:08.513</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_classification.html#sphx-glr-topic-vta-tutorials-frontend-deploy-classification-py"><span class="std std-ref">Deploy Pretrained Vision Model from MxNet on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_classification.py</span></code>)</p></td>
-<td><p>01:04.176</p></td>
+<td><p>01:03.760</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/topic/vta/tutorials/optimize/sg_execution_times.html b/docs/topic/vta/tutorials/optimize/sg_execution_times.html
index 6b188e114c..b177760c1c 100644
--- a/docs/topic/vta/tutorials/optimize/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/optimize/sg_execution_times.html
@@ -345,7 +345,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-optimize-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:03.408</strong> total execution time for <strong>topic_vta_tutorials_optimize</strong> files:</p>
+<p><strong>00:03.451</strong> total execution time for <strong>topic_vta_tutorials_optimize</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -354,11 +354,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="convolution_opt.html#sphx-glr-topic-vta-tutorials-optimize-convolution-opt-py"><span class="std std-ref">2D Convolution Optimization</span></a> (<code class="docutils literal notranslate"><span class="pre">convolution_opt.py</span></code>)</p></td>
-<td><p>00:02.856</p></td>
+<td><p>00:02.897</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="matrix_multiply_opt.html#sphx-glr-topic-vta-tutorials-optimize-matrix-multiply-opt-py"><span class="std std-ref">Matrix Multiply Blocking</span></a> (<code class="docutils literal notranslate"><span class="pre">matrix_multiply_opt.py</span></code>)</p></td>
-<td><p>00:00.553</p></td>
+<td><p>00:00.554</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/topic/vta/tutorials/sg_execution_times.html b/docs/topic/vta/tutorials/sg_execution_times.html
index 7562ae2bce..9acd0480e4 100644
--- a/docs/topic/vta/tutorials/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/sg_execution_times.html
@@ -345,7 +345,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:00.952</strong> total execution time for <strong>topic_vta_tutorials</strong> files:</p>
+<p><strong>00:00.969</strong> total execution time for <strong>topic_vta_tutorials</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 81%" />
@@ -354,11 +354,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="matrix_multiply.html#sphx-glr-topic-vta-tutorials-matrix-multiply-py"><span class="std std-ref">Simple Matrix Multiply</span></a> (<code class="docutils literal notranslate"><span class="pre">matrix_multiply.py</span></code>)</p></td>
-<td><p>00:00.487</p></td>
+<td><p>00:00.498</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="vta_get_started.html#sphx-glr-topic-vta-tutorials-vta-get-started-py"><span class="std std-ref">Get Started with VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">vta_get_started.py</span></code>)</p></td>
-<td><p>00:00.465</p></td>
+<td><p>00:00.471</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/tutorial/auto_scheduler_matmul_x86.html b/docs/tutorial/auto_scheduler_matmul_x86.html
index b4fdbe961f..4ee47b0daa 100644
--- a/docs/tutorial/auto_scheduler_matmul_x86.html
+++ b/docs/tutorial/auto_scheduler_matmul_x86.html
@@ -497,6 +497,9 @@ trials, we can load the best schedule from the log file and apply it.</p>
 <a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">sch</span></a><span class="p">,</span> <a href="../reference/api/python/ir.html#tvm.ir.Array" title="tvm.ir.Array" class="sphx-glr-backref-module-tvm-ir sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">args</span></a> <span class="o">=</span> <a href="../reference/api/pyth [...]
 </pre></div>
 </div>
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>*E
+</pre></div>
+</div>
 </div>
 <div class="section" id="inspecting-the-optimized-schedule">
 <h2>Inspecting the Optimized Schedule<a class="headerlink" href="#inspecting-the-optimized-schedule" title="Permalink to this headline">¶</a></h2>
@@ -574,7 +577,7 @@ class Module:
 <span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 96.577 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 98.264 ms
 </pre></div>
 </div>
 </div>
@@ -646,7 +649,7 @@ automatically optimize a matrix multiplication, without the need to specify a
 search template.  It ends a series of examples that starts from the Tensor
 Expression (TE) language that demonstrates how TVM can optimize computational
 operations.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  40.584 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  51.453 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorial-auto-scheduler-matmul-x86-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../_downloads/eac4389b114db015e95cb3cdf8b86b83/auto_scheduler_matmul_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">auto_scheduler_matmul_x86.py</span></code></a></p>
diff --git a/docs/tutorial/autotvm_matmul_x86.html b/docs/tutorial/autotvm_matmul_x86.html
index 230c2bfbb3..c59acea88d 100644
--- a/docs/tutorial/autotvm_matmul_x86.html
+++ b/docs/tutorial/autotvm_matmul_x86.html
@@ -685,16 +685,173 @@ reduce variance, we take 5 measurements and average them.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>waiting for device...
 device available
 Get devices for measurement successfully!
-No: 1   GFLOPS: 1.02/1.02       result: MeasureResult(costs=(0.2624888102,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.4670820236206055, timestamp=1683740196.5253062)       [(&#39;tile_y&#39;, [-1, 256]), (&#39;tile_x&#39;, [-1, 2])],None,18
-No: 2   GFLOPS: 10.71/10.71     result: MeasureResult(costs=(0.0250643258,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6679961681365967, timestamp=1683740197.1942677)       [(&#39;tile_y&#39;, [-1, 8]), (&#39;tile_x&#39;, [-1, 16])],None,43
-No: 3   GFLOPS: 4.98/10.71      result: MeasureResult(costs=(0.05387433740000001,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.1240603923797607, timestamp=1683740198.3227458)        [(&#39;tile_y&#39;, [-1, 4]), (&#39;tile_x&#39;, [-1, 2])],None,12
-No: 4   GFLOPS: 10.06/10.71     result: MeasureResult(costs=(0.0266928622,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6939327716827393, timestamp=1683740199.0207431)       [(&#39;tile_y&#39;, [-1, 2]), (&#39;tile_x&#39;, [-1, 32])],None,51
-No: 5   GFLOPS: 10.42/10.71     result: MeasureResult(costs=(0.025769643200000004,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7046566009521484, timestamp=1683740199.879057)        [(&#39;tile_y&#39;, [-1, 2]), (&#39;tile_x&#39;, [-1, 16])],None,41
-No: 6   GFLOPS: 10.18/10.71     result: MeasureResult(costs=(0.026357603999999996,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7504997253417969, timestamp=1683740200.5699663)       [(&#39;tile_y&#39;, [-1, 1]), (&#39;tile_x&#39;, [-1, 64])],None,60
-No: 7   GFLOPS: 4.53/10.71      result: MeasureResult(costs=(0.0592993322,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.221484661102295, timestamp=1683740201.78052)  [(&#39;tile_y&#39;, [-1, 8]), (&#39;tile_x&#39;, [-1, 2])],None,13
-No: 8   GFLOPS: 11.76/11.76     result: MeasureResult(costs=(0.022832740799999997,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6648235321044922, timestamp=1683740202.4155614)       [(&#39;tile_y&#39;, [-1, 8]), (&#39;tile_x&#39;, [-1, 256])],None,83
-No: 9   GFLOPS: 10.54/11.76     result: MeasureResult(costs=(0.025470962800000004,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6552300453186035, timestamp=1683740203.180015)        [(&#39;tile_y&#39;, [-1, 2]), (&#39;tile_x&#39;, [-1, 256])],None,81
-No: 10  GFLOPS: 0.51/11.76      result: MeasureResult(costs=(0.5307238545999999,), error_no=MeasureErrorNo.NO_ERROR, all_cost=8.736793756484985, timestamp=1683740211.9417796)  [(&#39;tile_y&#39;, [-1, 256]), (&#39;tile_x&#39;, [-1, 1])],None,8
+No: 1   GFLOPS: 0.00/0.00       result: Traceback (most recent call last):
+  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 742, in __call__
+    yield remote, remote.load_module(os.path.split(build_result.filename)[1])
+  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 706, in run_through_rpc
+    costs = time_f(*args).results
+  File &quot;/workspace/python/tvm/runtime/module.py&quot;, line 399, in evaluator
+    blob = feval(*args)
+  File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 331, in tvm._ffi._cy3.core.PackedFuncBase.__call__
+  File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 262, in tvm._ffi._cy3.core.FuncCall
+  File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 251, in tvm._ffi._cy3.core.FuncCall3
+  File &quot;tvm/_ffi/_cython/./base.pxi&quot;, line 181, in tvm._ffi._cy3.core.CHECK_CALL
+tvm._ffi.base.TVMError: Traceback (most recent call last):
+  4: TVMFuncCall
+        at /workspace/src/runtime/c_runtime_api.cc:477
+  3: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+        at /workspace/include/tvm/runtime/packed_func.h:1217
+  2: tvm::runtime::RPCWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+        at /workspace/src/runtime/rpc/rpc_module.cc:129
+  1: tvm::runtime::RPCClientSession::CallFunc(void*, TVMValue const*, int const*, int, std::function&lt;void (tvm::runtime::TVMArgs)&gt; const&amp;)
+        at /workspace/src/runtime/rpc/rpc_endpoint.cc:1012
+  0: tvm::runtime::RPCEndpoint::CallFunc(void*, TVMValue const*, int const*, int, std::function&lt;void (tvm::runtime::TVMArgs)&gt;)
+        at /workspace/src/runtime/rpc/rpc_endpoint.cc:804
+  File &quot;/workspace/src/runtime/rpc/rpc_endpoint.cc&quot;, line 804
+TVMError:
+---------------------------------------------------------------
+An error occurred during the execution of TVM.
+For more information, please see: https://tvm.apache.org/docs/errors.html
+---------------------------------------------------------------
+  Check failed: (code == RPCCode::kReturn) is false: code=kShutdown
+
+During handling of the above exception, another exception occurred:
+
+Traceback (most recent call last):
+  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 706, in run_through_rpc
+    costs = time_f(*args).results
+  File &quot;/usr/lib/python3.8/contextlib.py&quot;, line 131, in __exit__
+    self.gen.throw(type, value, traceback)
+  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 746, in __call__
+    remote.remove(build_result.filename)
+  File &quot;/workspace/python/tvm/rpc/client.py&quot;, line 145, in remove
+    self._remote_funcs[&quot;remove&quot;] = self.get_function(&quot;tvm.rpc.server.remove&quot;)
+  File &quot;/workspace/python/tvm/rpc/client.py&quot;, line 73, in get_function
+    return self._sess.get_function(name)
+  File &quot;/workspace/python/tvm/runtime/module.py&quot;, line 177, in get_function
+    check_call(
+  File &quot;/workspace/python/tvm/_ffi/base.py&quot;, line 348, in check_call
+    raise get_last_ffi_error()
+tvm._ffi.base.TVMError: Traceback (most recent call last):
+  54: 0xffffffffffffffff
+  53: _start
+  52: __libc_start_main
+  51: 0x00007f9ade2d2d8f
+  50: Py_BytesMain
+  49: Py_RunMain
+  48: 0x00000000005f3021
+  47: PyObject_Call
+  46: _PyFunction_Vectorcall
+  45: _PyEval_EvalCodeWithName
+  44: _PyEval_EvalFrameDefault
+  43: _PyFunction_Vectorcall
+  42: _PyEval_EvalCodeWithName
+  41: _PyEval_EvalFrameDefault
+  40: 0x000000000051546f
+  39: 0x00000000005dabd0
+  38: PyEval_EvalCode
+  37: _PyEval_EvalCodeWithName
+  36: _PyEval_EvalFrameDefault
+  35: _PyFunction_Vectorcall
+  34: _PyEval_EvalCodeWithName
+  33: _PyEval_EvalFrameDefault
+  32: PyObject_Call
+  31: _PyFunction_Vectorcall
+  30: _PyEval_EvalCodeWithName
+  29: _PyEval_EvalFrameDefault
+  28: 0x0000000000521f82
+  27: _PyFunction_Vectorcall
+  26: _PyEval_EvalFrameDefault
+  25: 0x000000000052f0a9
+  24: 0x0000000000626e1c
+  23: 0x0000000000626f00
+  22: 0x00000000005291f0
+  21: _PyEval_EvalFrameDefault
+  20: _PyFunction_Vectorcall
+  19: _PyEval_EvalFrameDefault
+  18: _PyFunction_Vectorcall
+  17: _PyEval_EvalFrameDefault
+  16: _PyFunction_Vectorcall
+  15: _PyEval_EvalCodeWithName
+  14: _PyEval_EvalFrameDefault
+  13: _PyObject_MakeTpCall
+  12: 0x00007f9ade0ee429
+  11: _ctypes_callproc
+  10: 0x00007f9adde4f492
+  9: 0x00007f9adde52e2d
+  8: TVMModGetFunction
+        at /workspace/src/runtime/c_runtime_api.cc:408
+  7: tvm::runtime::ModuleNode::GetFunction(std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;, bool)
+        at /workspace/src/runtime/module.cc:66
+  6: tvm::runtime::RPCModuleNode::GetFunction(std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;, tvm::runtime::ObjectPtr&lt;tvm::runtime::Object&gt; const&amp;)
+        at /workspace/src/runtime/rpc/rpc_module.cc:187
+  5: tvm::runtime::RPCClientSession::GetFunction(std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;)
+        at /workspace/src/runtime/rpc/rpc_endpoint.cc:1007
+  4: tvm::runtime::TVMRetValue tvm::runtime::RPCEndpoint::SysCallRemote&lt;std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;&gt;(tvm::runtime::RPCCode, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;)
+        at /workspace/src/runtime/rpc/rpc_endpoint.h:223
+  3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()&lt;int, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;&gt;(int&amp;&amp;, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;) const
+        at /workspace/include/tvm/runtime/packed_func.h:1621
+  2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+        at /workspace/include/tvm/runtime/packed_func.h:1217
+  1: Call
+        at /workspace/include/tvm/runtime/packed_func.h:1213
+  0: operator()
+        at /workspace/src/runtime/rpc/rpc_endpoint.cc:684
+  File &quot;/workspace/src/runtime/rpc/rpc_endpoint.cc&quot;, line 684
+TVMError:
+---------------------------------------------------------------
+An error occurred during the execution of TVM.
+For more information, please see: https://tvm.apache.org/docs/errors.html
+---------------------------------------------------------------
+  Check failed: (code == RPCCode::kReturn) is false: code=1
+
+Traceback (most recent call last):
+  54: 0xffffffffffffffff
+  53: _start
+  52: __libc_start_main
+  51: 0x00007f9ade2d2d8f
+  50: Py_BytesMain
+  49: Py_RunMain
+  48: 0x00000000005f3021
+  47: PyObject_Call
+  46: _PyFunction_Vectorcall
+  45: _PyEval_EvalCodeWithName
+  44: _PyEval_EvalFrameDefault
+  43: _PyFunction_Vectorcall
+  42: _PyEval_EvalCodeWithName
+  41: _PyEval_EvalFrameDefault
+  40: 0x000000000051546f
+  39: 0x00000000005dabd0
+  38: PyEval_EvalCode
+  37: _PyEval_EvalCodeWithName
+  36: _PyEval_EvalFrameDefault
+  35: _PyFunction_Vectorcall
+  34: _PyEval_EvalCodeWithName
+  33: _PyEval_EvalFrameDefault
+  32: PyObject_Call
+  31: _PyFunction_Vectorcall
+  30: _PyEval_EvalCodeWithName
+  29: _PyEval_EvalFrameDefault
+  28: 0x0000000000521f82
+  27: _PyFunction_Vectorcall
+  26: _PyEval_EvalFrameDefault
+  25: 0x000000000052f0a9
+  24: 0x0000000000626e1c
+  23: 0x0000000000626f00
+  22: 0x00000000005291f0
+  21: _PyEval_EvalFrameDefault
+  20: _PyFunction_Vectorcall
+  19: _PyEval_EvalFrameDefault
+  18: _PyFunction_Vectorcal     [(&#39;tile_y&#39;, [-1, 512]), (&#39;tile_x&#39;, [-1, 1])],None,9
+No: 2   GFLOPS: 8.06/8.06       result: MeasureResult(costs=(0.033303479399999995,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7889261245727539, timestamp=1683750991.224603)        [(&#39;tile_y&#39;, [-1, 1]), (&#39;tile_x&#39;, [-1, 16])],None,40
+No: 3   GFLOPS: 2.03/8.06       result: MeasureResult(costs=(0.13210675519999998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.3505711555480957, timestamp=1683750993.597124) [(&#39;tile_y&#39;, [-1, 256]), (&#39;tile_x&#39;, [-1, 4])],None,28
+No: 4   GFLOPS: 11.41/11.41     result: MeasureResult(costs=(0.0235300276,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6041018962860107, timestamp=1683750994.2387557)       [(&#39;tile_y&#39;, [-1, 32]), (&#39;tile_x&#39;, [-1, 512])],None,95
+No: 5   GFLOPS: 9.37/11.41      result: MeasureResult(costs=(0.0286352852,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7317562103271484, timestamp=1683750995.1117728)       [(&#39;tile_y&#39;, [-1, 2]), (&#39;tile_x&#39;, [-1, 64])],None,61
+No: 6   GFLOPS: 3.76/11.41      result: MeasureResult(costs=(0.0713379094,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.4077887535095215, timestamp=1683750996.5105119)       [(&#39;tile_y&#39;, [-1, 32]), (&#39;tile_x&#39;, [-1, 8])],None,35
+No: 7   GFLOPS: 16.69/16.69     result: MeasureResult(costs=(0.0160871336,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.5407025814056396, timestamp=1683750997.0280185)       [(&#39;tile_y&#39;, [-1, 16]), (&#39;tile_x&#39;, [-1, 64])],None,64
+No: 8   GFLOPS: 11.22/16.69     result: MeasureResult(costs=(0.0239241816,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6765973567962646, timestamp=1683750997.6746368)       [(&#39;tile_y&#39;, [-1, 128]), (&#39;tile_x&#39;, [-1, 256])],None,87
+No: 9   GFLOPS: 1.98/16.69      result: MeasureResult(costs=(0.135866018,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.3858609199523926, timestamp=1683751000.1886578)        [(&#39;tile_y&#39;, [-1, 4]), (&#39;tile_x&#39;, [-1, 1])],None,2
+No: 10  GFLOPS: 12.55/16.69     result: MeasureResult(costs=(0.0213917062,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.5771560668945312, timestamp=1683751000.7944303)       [(&#39;tile_y&#39;, [-1, 32]), (&#39;tile_x&#39;, [-1, 128])],None,75
 </pre></div>
 </div>
 <p>With tuning completed, we can choose the configuration from the log file that
diff --git a/docs/tutorial/autotvm_relay_x86.html b/docs/tutorial/autotvm_relay_x86.html
index 696c8631a1..1fd4ac3d25 100644
--- a/docs/tutorial/autotvm_relay_x86.html
+++ b/docs/tutorial/autotvm_relay_x86.html
@@ -563,7 +563,7 @@ standard deviation.</p>
 <span class="nb">print</span><span class="p">(</span><a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">unoptimized</span></a><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>{&#39;mean&#39;: 495.85341244997835, &#39;median&#39;: 495.2317347000644, &#39;std&#39;: 4.122248126917765}
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>{&#39;mean&#39;: 493.6643143499532, &#39;median&#39;: 494.8347487999854, &#39;std&#39;: 3.130444482823425}
 </pre></div>
 </div>
 </div>
@@ -752,178 +752,178 @@ depending on the specifics of the model and the target platform.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  1/25]  Current/Best:   11.67/  11.67 GFLOPS | Progress: (4/20) | 13.21 s
-[Task  1/25]  Current/Best:   11.10/  23.94 GFLOPS | Progress: (8/20) | 16.62 s
-[Task  1/25]  Current/Best:   13.63/  23.98 GFLOPS | Progress: (12/20) | 20.51 s
-[Task  1/25]  Current/Best:   12.93/  23.98 GFLOPS | Progress: (16/20) | 22.77 s
-[Task  1/25]  Current/Best:   20.64/  23.98 GFLOPS | Progress: (20/20) | 25.89 s Done.
+[Task  1/25]  Current/Best:   17.63/  17.63 GFLOPS | Progress: (4/20) | 10.21 s
+[Task  1/25]  Current/Best:   14.17/  21.69 GFLOPS | Progress: (8/20) | 12.92 s
+[Task  1/25]  Current/Best:   13.35/  24.22 GFLOPS | Progress: (12/20) | 15.64 s
+[Task  1/25]  Current/Best:   14.84/  24.22 GFLOPS | Progress: (16/20) | 18.95 s
+[Task  1/25]  Current/Best:   11.99/  24.22 GFLOPS | Progress: (20/20) | 21.72 s Done.
 
 [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  2/25]  Current/Best:   19.47/  19.47 GFLOPS | Progress: (4/20) | 4.51 s
-[Task  2/25]  Current/Best:    8.08/  19.47 GFLOPS | Progress: (8/20) | 6.17 s
-[Task  2/25]  Current/Best:   19.80/  19.80 GFLOPS | Progress: (12/20) | 7.61 s
-[Task  2/25]  Current/Best:   20.48/  20.48 GFLOPS | Progress: (16/20) | 9.01 s
-[Task  2/25]  Current/Best:    6.40/  20.48 GFLOPS | Progress: (20/20) | 10.37 s Done.
+[Task  2/25]  Current/Best:   16.20/  16.20 GFLOPS | Progress: (4/20) | 4.80 s
+[Task  2/25]  Current/Best:   11.30/  16.20 GFLOPS | Progress: (8/20) | 6.57 s
+[Task  2/25]  Current/Best:   15.00/  16.20 GFLOPS | Progress: (12/20) | 8.35 s
+[Task  2/25]  Current/Best:   13.84/  16.20 GFLOPS | Progress: (16/20) | 10.37 s
+[Task  2/25]  Current/Best:   17.71/  17.71 GFLOPS | Progress: (20/20) | 11.83 s Done.
 
 [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  3/25]  Current/Best:   19.31/  19.31 GFLOPS | Progress: (4/20) | 5.38 s
-[Task  3/25]  Current/Best:   19.16/  19.31 GFLOPS | Progress: (8/20) | 8.08 s
-[Task  3/25]  Current/Best:   12.73/  20.20 GFLOPS | Progress: (12/20) | 10.39 s
-[Task  3/25]  Current/Best:   12.59/  20.20 GFLOPS | Progress: (16/20) | 12.52 s
-[Task  3/25]  Current/Best:   21.89/  21.89 GFLOPS | Progress: (20/20) | 14.62 s Done.
+[Task  3/25]  Current/Best:    5.36/  22.51 GFLOPS | Progress: (4/20) | 5.53 s
+[Task  3/25]  Current/Best:   15.06/  22.51 GFLOPS | Progress: (8/20) | 7.94 s
+[Task  3/25]  Current/Best:   16.98/  22.51 GFLOPS | Progress: (12/20) | 9.95 s
+[Task  3/25]  Current/Best:    1.63/  24.22 GFLOPS | Progress: (16/20) | 13.36 s
+[Task  3/25]  Current/Best:    9.07/  24.22 GFLOPS | Progress: (20/20) | 15.71 s Done.
 
 [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  4/25]  Current/Best:   20.34/  20.34 GFLOPS | Progress: (4/20) | 4.87 s
-[Task  4/25]  Current/Best:   14.09/  20.34 GFLOPS | Progress: (8/20) | 9.00 s
-[Task  4/25]  Current/Best:    7.29/  20.34 GFLOPS | Progress: (12/20) | 13.24 s
-[Task  4/25]  Current/Best:    9.49/  20.34 GFLOPS | Progress: (16/20) | 15.21 s
-[Task  4/25]  Current/Best:   14.97/  21.55 GFLOPS | Progress: (20/20) | 21.55 s Done.
+[Task  4/25]  Current/Best:   14.51/  20.45 GFLOPS | Progress: (4/20) | 10.52 s
+[Task  4/25]  Current/Best:   16.53/  20.45 GFLOPS | Progress: (8/20) | 13.46 s
+[Task  4/25]  Current/Best:   12.21/  20.45 GFLOPS | Progress: (12/20) | 16.09 s
+[Task  4/25]  Current/Best:   14.53/  20.45 GFLOPS | Progress: (16/20) | 18.18 s
+[Task  4/25]  Current/Best:   17.87/  20.45 GFLOPS | Progress: (20/20) | 20.05 s Done.
 
 [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  5/25]  Current/Best:   11.43/  13.59 GFLOPS | Progress: (4/20) | 5.50 s
-[Task  5/25]  Current/Best:   17.89/  17.89 GFLOPS | Progress: (8/20) | 7.79 s
-[Task  5/25]  Current/Best:   16.28/  23.41 GFLOPS | Progress: (12/20) | 9.37 s
-[Task  5/25]  Current/Best:   17.68/  23.41 GFLOPS | Progress: (16/20) | 11.55 s
-[Task  5/25]  Current/Best:   19.02/  23.41 GFLOPS | Progress: (20/20) | 13.86 s Done.
+[Task  5/25]  Current/Best:   13.33/  19.62 GFLOPS | Progress: (4/20) | 5.15 s
+[Task  5/25]  Current/Best:   10.99/  19.62 GFLOPS | Progress: (8/20) | 7.44 s
+[Task  5/25]  Current/Best:   12.99/  19.62 GFLOPS | Progress: (12/20) | 9.73 s
+[Task  5/25]  Current/Best:    9.33/  20.57 GFLOPS | Progress: (16/20) | 12.08 s
+[Task  5/25]  Current/Best:    4.85/  20.57 GFLOPS | Progress: (20/20) | 14.09 s Done.
 
 [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  6/25]  Current/Best:   11.82/  16.23 GFLOPS | Progress: (4/20) | 6.39 s
-[Task  6/25]  Current/Best:   12.77/  16.23 GFLOPS | Progress: (8/20) | 9.66 s
-[Task  6/25]  Current/Best:    3.06/  16.23 GFLOPS | Progress: (12/20) | 13.07 s
-[Task  6/25]  Current/Best:   20.45/  20.45 GFLOPS | Progress: (16/20) | 15.64 s
-[Task  6/25]  Current/Best:   14.18/  20.45 GFLOPS | Progress: (20/20) | 18.24 s Done.
+[Task  6/25]  Current/Best:   13.91/  15.06 GFLOPS | Progress: (4/20) | 6.56 s
+[Task  6/25]  Current/Best:   11.59/  19.38 GFLOPS | Progress: (8/20) | 9.17 s
+[Task  6/25]  Current/Best:   15.66/  19.38 GFLOPS | Progress: (12/20) | 13.93 s
+[Task  6/25]  Current/Best:   15.06/  19.38 GFLOPS | Progress: (16/20) | 15.95 s
+[Task  6/25]  Current/Best:   14.46/  19.38 GFLOPS | Progress: (20/20) | 18.64 s Done.
 
 [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  7/25]  Current/Best:    7.05/  17.37 GFLOPS | Progress: (4/20) | 5.04 s
-[Task  7/25]  Current/Best:   14.90/  19.06 GFLOPS | Progress: (8/20) | 7.36 s
-[Task  7/25]  Current/Best:   12.26/  19.06 GFLOPS | Progress: (12/20) | 9.75 s
-[Task  7/25]  Current/Best:   22.99/  22.99 GFLOPS | Progress: (16/20) | 11.84 s
-[Task  7/25]  Current/Best:   10.82/  22.99 GFLOPS | Progress: (20/20) | 14.63 s Done.
+[Task  7/25]  Current/Best:    6.32/  18.15 GFLOPS | Progress: (4/20) | 5.06 s
+[Task  7/25]  Current/Best:    8.42/  21.09 GFLOPS | Progress: (8/20) | 7.64 s
+[Task  7/25]  Current/Best:   10.39/  21.09 GFLOPS | Progress: (12/20) | 12.54 s
+[Task  7/25]  Current/Best:    8.56/  21.09 GFLOPS | Progress: (16/20) | 15.51 s
+[Task  7/25]  Current/Best:   15.01/  21.09 GFLOPS | Progress: (20/20) | 18.30 s Done.
 
 [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  8/25]  Current/Best:    7.66/  13.61 GFLOPS | Progress: (4/20) | 9.14 s
-[Task  8/25]  Current/Best:   10.90/  15.56 GFLOPS | Progress: (8/20) | 11.52 s
-[Task  8/25]  Current/Best:   12.89/  15.56 GFLOPS | Progress: (12/20) | 15.84 s
-[Task  8/25]  Current/Best:    9.62/  15.56 GFLOPS | Progress: (16/20) | 19.89 s
-[Task  8/25]  Current/Best:    3.88/  15.56 GFLOPS | Progress: (20/20) | 24.70 s Done.
+[Task  8/25]  Current/Best:   17.07/  17.07 GFLOPS | Progress: (4/20) | 5.20 s
+[Task  8/25]  Current/Best:    4.77/  17.07 GFLOPS | Progress: (8/20) | 13.23 s
+[Task  8/25]  Current/Best:    3.21/  17.07 GFLOPS | Progress: (12/20) | 21.52 s
+[Task  8/25]  Current/Best:   14.93/  17.07 GFLOPS | Progress: (16/20) | 29.86 s
+[Task  8/25]  Current/Best:   13.43/  17.07 GFLOPS | Progress: (20/20) | 33.04 s Done.
 
 [Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  9/25]  Current/Best:   19.96/  19.96 GFLOPS | Progress: (4/20) | 4.32 s
-[Task  9/25]  Current/Best:   15.73/  19.96 GFLOPS | Progress: (8/20) | 15.38 s
-[Task  9/25]  Current/Best:   18.27/  19.96 GFLOPS | Progress: (12/20) | 18.27 s
-[Task  9/25]  Current/Best:   12.29/  19.96 GFLOPS | Progress: (16/20) | 21.10 s
-[Task  9/25]  Current/Best:   16.03/  21.28 GFLOPS | Progress: (20/20) | 23.04 s
-[Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
-
-[Task 10/25]  Current/Best:    5.78/  18.10 GFLOPS | Progress: (4/20) | 4.78 s
-[Task 10/25]  Current/Best:   13.59/  18.10 GFLOPS | Progress: (8/20) | 6.57 s
-[Task 10/25]  Current/Best:   14.90/  18.10 GFLOPS | Progress: (12/20) | 9.45 s
-[Task 10/25]  Current/Best:   14.46/  18.10 GFLOPS | Progress: (16/20) | 11.93 s
-[Task 10/25]  Current/Best:   14.16/  18.10 GFLOPS | Progress: (20/20) | 14.17 s Done.
+[Task  9/25]  Current/Best:   18.69/  18.69 GFLOPS | Progress: (4/20) | 4.65 s
+[Task  9/25]  Current/Best:   10.28/  18.69 GFLOPS | Progress: (8/20) | 7.30 s
+[Task  9/25]  Current/Best:    7.48/  22.13 GFLOPS | Progress: (12/20) | 15.28 s
+[Task  9/25]  Current/Best:   14.78/  22.13 GFLOPS | Progress: (16/20) | 17.08 s
+[Task  9/25]  Current/Best:   14.86/  22.13 GFLOPS | Progress: (20/20) | 28.18 s
+[Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
+[Task 10/25]  Current/Best:    7.45/  20.95 GFLOPS | Progress: (4/20) | 5.80 s Done.
+
+[Task 10/25]  Current/Best:   13.86/  20.95 GFLOPS | Progress: (8/20) | 8.72 s
+[Task 10/25]  Current/Best:   17.51/  21.90 GFLOPS | Progress: (12/20) | 10.48 s
+[Task 10/25]  Current/Best:   15.72/  21.90 GFLOPS | Progress: (16/20) | 12.64 s
+[Task 10/25]  Current/Best:   14.26/  21.90 GFLOPS | Progress: (20/20) | 14.76 s Done.
 
 [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 11/25]  Current/Best:   12.29/  20.05 GFLOPS | Progress: (4/20) | 5.11 s
-[Task 11/25]  Current/Best:   10.34/  22.44 GFLOPS | Progress: (8/20) | 7.08 s
-[Task 11/25]  Current/Best:   19.24/  22.44 GFLOPS | Progress: (12/20) | 9.75 s
-[Task 11/25]  Current/Best:   20.80/  22.91 GFLOPS | Progress: (16/20) | 12.15 s
-[Task 11/25]  Current/Best:    9.36/  23.53 GFLOPS | Progress: (20/20) | 14.23 s Done.
+[Task 11/25]  Current/Best:   19.90/  20.79 GFLOPS | Progress: (4/20) | 5.31 s
+[Task 11/25]  Current/Best:    8.50/  20.79 GFLOPS | Progress: (8/20) | 8.17 s
+[Task 11/25]  Current/Best:   16.95/  20.80 GFLOPS | Progress: (12/20) | 10.48 s
+[Task 11/25]  Current/Best:    6.29/  20.80 GFLOPS | Progress: (16/20) | 12.80 s
+[Task 11/25]  Current/Best:   19.27/  20.80 GFLOPS | Progress: (20/20) | 14.97 s Done.
 
 [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 12/25]  Current/Best:   10.84/  11.39 GFLOPS | Progress: (4/20) | 6.29 s
-[Task 12/25]  Current/Best:    4.58/  15.96 GFLOPS | Progress: (8/20) | 9.04 s
-[Task 12/25]  Current/Best:   12.62/  15.96 GFLOPS | Progress: (12/20) | 11.50 s
-[Task 12/25]  Current/Best:    9.08/  16.53 GFLOPS | Progress: (16/20) | 14.40 s
-[Task 12/25]  Current/Best:   16.58/  16.58 GFLOPS | Progress: (20/20) | 17.84 s Done.
+[Task 12/25]  Current/Best:   13.47/  17.38 GFLOPS | Progress: (4/20) | 5.12 s
+[Task 12/25]  Current/Best:    5.42/  19.45 GFLOPS | Progress: (8/20) | 9.24 s
+[Task 12/25]  Current/Best:   14.05/  19.45 GFLOPS | Progress: (12/20) | 12.81 s
+[Task 12/25]  Current/Best:   15.88/  19.45 GFLOPS | Progress: (16/20) | 16.55 s
+[Task 12/25]  Current/Best:   14.00/  19.45 GFLOPS | Progress: (20/20) | 19.22 s Done.
 
 [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 13/25]  Current/Best:    6.17/  20.69 GFLOPS | Progress: (4/20) | 6.35 s
-[Task 13/25]  Current/Best:   19.18/  20.69 GFLOPS | Progress: (8/20) | 8.91 s
-[Task 13/25]  Current/Best:   18.44/  20.69 GFLOPS | Progress: (12/20) | 11.73 s
-[Task 13/25]  Current/Best:    9.68/  20.69 GFLOPS | Progress: (16/20) | 17.32 s
-[Task 13/25]  Current/Best:    3.08/  22.73 GFLOPS | Progress: (20/20) | 21.39 s Done.
+[Task 13/25]  Current/Best:    8.53/  19.45 GFLOPS | Progress: (4/20) | 5.59 s
+[Task 13/25]  Current/Best:   11.67/  20.57 GFLOPS | Progress: (8/20) | 9.58 s
+[Task 13/25]  Current/Best:   18.36/  22.66 GFLOPS | Progress: (12/20) | 12.58 s
+[Task 13/25]  Current/Best:   16.65/  22.66 GFLOPS | Progress: (16/20) | 15.33 s
+[Task 13/25]  Current/Best:    9.60/  22.66 GFLOPS | Progress: (20/20) | 17.65 s Done.
 
 [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 14/25]  Current/Best:   10.45/  13.74 GFLOPS | Progress: (4/20) | 5.30 s
-[Task 14/25]  Current/Best:    7.20/  17.73 GFLOPS | Progress: (8/20) | 12.71 s
-[Task 14/25]  Current/Best:   13.04/  17.73 GFLOPS | Progress: (12/20) | 15.84 s
-[Task 14/25]  Current/Best:   20.63/  20.63 GFLOPS | Progress: (16/20) | 18.49 s
-[Task 14/25]  Current/Best:   17.57/  20.63 GFLOPS | Progress: (20/20) | 25.81 s Done.
+[Task 14/25]  Current/Best:   18.30/  18.30 GFLOPS | Progress: (4/20) | 4.91 s
+[Task 14/25]  Current/Best:    1.62/  18.30 GFLOPS | Progress: (8/20) | 18.47 s
+[Task 14/25]  Current/Best:    6.33/  21.36 GFLOPS | Progress: (12/20) | 30.58 s
+[Task 14/25]  Current/Best:   20.41/  21.36 GFLOPS | Progress: (16/20) | 33.11 s
+[Task 14/25]  Current/Best:   10.31/  21.36 GFLOPS | Progress: (20/20) | 40.45 s Done.
 
 [Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 15/25]  Current/Best:   18.97/  18.97 GFLOPS | Progress: (4/20) | 7.18 s
-[Task 15/25]  Current/Best:   19.34/  20.58 GFLOPS | Progress: (8/20) | 18.27 s
-[Task 15/25]  Current/Best:   10.33/  20.58 GFLOPS | Progress: (12/20) | 21.23 s
-[Task 15/25]  Current/Best:   14.29/  20.58 GFLOPS | Progress: (16/20) | 23.15 s
-[Task 15/25]  Current/Best:    7.12/  22.43 GFLOPS | Progress: (20/20) | 30.28 s
+[Task 15/25]  Current/Best:   12.40/  21.51 GFLOPS | Progress: (4/20) | 13.43 s
+[Task 15/25]  Current/Best:   14.64/  21.51 GFLOPS | Progress: (8/20) | 15.71 s
+[Task 15/25]  Current/Best:    6.46/  21.51 GFLOPS | Progress: (12/20) | 21.34 s
+[Task 15/25]  Current/Best:    3.15/  21.51 GFLOPS | Progress: (16/20) | 29.28 s
+[Task 15/25]  Current/Best:   16.07/  21.51 GFLOPS | Progress: (20/20) | 31.67 s Done.
+
 [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 16/25]  Current/Best:    5.68/  16.23 GFLOPS | Progress: (4/20) | 4.63 s
-[Task 16/25]  Current/Best:   19.28/  19.28 GFLOPS | Progress: (8/20) | 6.80 s
-[Task 16/25]  Current/Best:   14.31/  19.28 GFLOPS | Progress: (12/20) | 9.12 s
-[Task 16/25]  Current/Best:   12.45/  19.28 GFLOPS | Progress: (16/20) | 11.27 s
-[Task 16/25]  Current/Best:    6.50/  19.28 GFLOPS | Progress: (20/20) | 13.54 s Done.
+[Task 16/25]  Current/Best:   12.90/  12.90 GFLOPS | Progress: (4/20) | 5.46 s
+[Task 16/25]  Current/Best:    6.89/  16.17 GFLOPS | Progress: (8/20) | 7.17 s
+[Task 16/25]  Current/Best:    4.95/  18.91 GFLOPS | Progress: (12/20) | 9.14 s
+[Task 16/25]  Current/Best:   12.38/  18.91 GFLOPS | Progress: (16/20) | 11.22 s
+[Task 16/25]  Current/Best:   15.48/  20.55 GFLOPS | Progress: (20/20) | 13.56 s Done.
 
 [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 17/25]  Current/Best:   23.27/  23.27 GFLOPS | Progress: (4/20) | 5.36 s
-[Task 17/25]  Current/Best:   11.68/  23.27 GFLOPS | Progress: (8/20) | 8.26 s
-[Task 17/25]  Current/Best:    1.56/  23.27 GFLOPS | Progress: (12/20) | 11.85 s
-[Task 17/25]  Current/Best:    9.76/  23.41 GFLOPS | Progress: (16/20) | 16.37 s
-[Task 17/25]  Current/Best:   11.23/  23.41 GFLOPS | Progress: (20/20) | 20.02 s Done.
+[Task 17/25]  Current/Best:   19.21/  19.68 GFLOPS | Progress: (4/20) | 5.41 s
+[Task 17/25]  Current/Best:   21.04/  21.77 GFLOPS | Progress: (8/20) | 7.83 s
+[Task 17/25]  Current/Best:   10.90/  21.77 GFLOPS | Progress: (12/20) | 11.26 s
+[Task 17/25]  Current/Best:   20.00/  21.77 GFLOPS | Progress: (16/20) | 14.25 s
+[Task 17/25]  Current/Best:    9.30/  22.72 GFLOPS | Progress: (20/20) | 17.43 s Done.
 
 [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 18/25]  Current/Best:    3.93/  18.68 GFLOPS | Progress: (4/20) | 10.30 s
-[Task 18/25]  Current/Best:   15.37/  18.68 GFLOPS | Progress: (8/20) | 13.34 s
-[Task 18/25]  Current/Best:   15.77/  19.17 GFLOPS | Progress: (12/20) | 15.62 s
-[Task 18/25]  Current/Best:   15.15/  19.17 GFLOPS | Progress: (16/20) | 17.98 s
-[Task 18/25]  Current/Best:    6.26/  19.17 GFLOPS | Progress: (20/20) | 20.61 s Done.
+[Task 18/25]  Current/Best:    5.15/  10.00 GFLOPS | Progress: (4/20) | 8.03 s
+[Task 18/25]  Current/Best:    7.97/  16.40 GFLOPS | Progress: (8/20) | 11.68 s
+[Task 18/25]  Current/Best:   14.65/  16.40 GFLOPS | Progress: (12/20) | 14.17 s
+[Task 18/25]  Current/Best:    7.36/  18.97 GFLOPS | Progress: (16/20) | 16.20 s
+[Task 18/25]  Current/Best:    1.57/  20.49 GFLOPS | Progress: (20/20) | 19.36 s Done.
 
 [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 19/25]  Current/Best:    7.83/  12.29 GFLOPS | Progress: (4/20) | 5.61 s
-[Task 19/25]  Current/Best:   20.90/  20.90 GFLOPS | Progress: (8/20) | 9.39 s
-[Task 19/25]  Current/Best:    6.66/  20.90 GFLOPS | Progress: (12/20) | 13.59 s
-[Task 19/25]  Current/Best:    5.94/  20.90 GFLOPS | Progress: (16/20) | 19.41 s
-[Task 19/25]  Current/Best:    9.20/  20.90 GFLOPS | Progress: (20/20) | 23.77 s Done.
-
+[Task 19/25]  Current/Best:   19.98/  19.98 GFLOPS | Progress: (4/20) | 8.27 s
+[Task 19/25]  Current/Best:   18.18/  19.98 GFLOPS | Progress: (8/20) | 13.08 s
+[Task 19/25]  Current/Best:   21.18/  21.18 GFLOPS | Progress: (12/20) | 24.58 s
+[Task 19/25]  Current/Best:    2.70/  21.18 GFLOPS | Progress: (16/20) | 28.75 s
+[Task 19/25]  Current/Best:    2.65/  21.18 GFLOPS | Progress: (20/20) | 33.88 s
 [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 20/25]  Current/Best:   12.05/  21.42 GFLOPS | Progress: (4/20) | 5.22 s
-[Task 20/25]  Current/Best:    2.70/  21.42 GFLOPS | Progress: (8/20) | 8.87 s
-[Task 20/25]  Current/Best:    2.68/  21.42 GFLOPS | Progress: (12/20) | 21.15 s
-[Task 20/25]  Current/Best:   12.52/  21.42 GFLOPS | Progress: (16/20) | 25.92 s
-[Task 20/25]  Current/Best:    5.96/  21.42 GFLOPS | Progress: (20/20) | 28.31 s
+[Task 20/25]  Current/Best:    6.57/  17.99 GFLOPS | Progress: (4/20) | 8.44 s
+[Task 20/25]  Current/Best:    6.20/  17.99 GFLOPS | Progress: (8/20) | 17.63 s
+[Task 20/25]  Current/Best:   13.53/  17.99 GFLOPS | Progress: (12/20) | 29.43 s
+[Task 20/25]  Current/Best:    9.85/  17.99 GFLOPS | Progress: (16/20) | 33.03 s
+[Task 20/25]  Current/Best:    9.37/  21.86 GFLOPS | Progress: (20/20) | 35.30 s
 [Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 21/25]  Current/Best:   18.80/  18.80 GFLOPS | Progress: (4/20) | 5.47 s
-[Task 21/25]  Current/Best:    7.21/  18.80 GFLOPS | Progress: (8/20) | 8.29 s
-[Task 21/25]  Current/Best:    1.61/  18.80 GFLOPS | Progress: (12/20) | 19.74 s
-[Task 21/25]  Current/Best:    9.15/  18.80 GFLOPS | Progress: (16/20) | 23.50 s
-[Task 21/25]  Current/Best:    2.74/  18.80 GFLOPS | Progress: (20/20) | 34.90 s
+[Task 21/25]  Current/Best:    7.35/  19.18 GFLOPS | Progress: (4/20) | 10.07 s
+[Task 21/25]  Current/Best:   17.61/  23.15 GFLOPS | Progress: (8/20) | 21.11 s
+[Task 21/25]  Current/Best:   12.17/  23.15 GFLOPS | Progress: (12/20) | 32.83 s
+[Task 21/25]  Current/Best:    8.43/  23.15 GFLOPS | Progress: (16/20) | 35.34 s
+[Task 21/25]  Current/Best:    9.64/  23.15 GFLOPS | Progress: (20/20) | 46.76 s
 [Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
  Done.
  Done.
 
-[Task 22/25]  Current/Best:   20.57/  20.57 GFLOPS | Progress: (4/20) | 5.73 s
-[Task 22/25]  Current/Best:    5.16/  21.47 GFLOPS | Progress: (8/20) | 8.38 s
-[Task 22/25]  Current/Best:   20.11/  21.47 GFLOPS | Progress: (12/20) | 13.05 s
-[Task 22/25]  Current/Best:   12.73/  21.47 GFLOPS | Progress: (16/20) | 14.93 s
-[Task 22/25]  Current/Best:    7.04/  21.47 GFLOPS | Progress: (20/20) | 19.34 s Done.
+[Task 22/25]  Current/Best:   14.46/  18.35 GFLOPS | Progress: (4/20) | 5.61 s
+[Task 22/25]  Current/Best:   10.56/  18.35 GFLOPS | Progress: (8/20) | 8.58 s
+[Task 22/25]  Current/Best:   17.83/  18.35 GFLOPS | Progress: (12/20) | 14.14 s
+[Task 22/25]  Current/Best:   10.03/  21.45 GFLOPS | Progress: (16/20) | 16.47 s
+[Task 22/25]  Current/Best:   17.95/  21.45 GFLOPS | Progress: (20/20) | 18.98 s Done.
 
 [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 23/25]  Current/Best:   20.99/  20.99 GFLOPS | Progress: (4/20) | 6.50 s
-[Task 23/25]  Current/Best:    9.59/  20.99 GFLOPS | Progress: (8/20) | 10.47 s
-[Task 23/25]  Current/Best:   12.45/  20.99 GFLOPS | Progress: (12/20) | 13.55 s
-[Task 23/25]  Current/Best:   10.16/  20.99 GFLOPS | Progress: (16/20) | 16.51 s
-[Task 23/25]  Current/Best:    8.19/  20.99 GFLOPS | Progress: (20/20) | 21.15 s Done.
+[Task 23/25]  Current/Best:   20.89/  20.89 GFLOPS | Progress: (4/20) | 5.11 s
+[Task 23/25]  Current/Best:   18.49/  20.96 GFLOPS | Progress: (8/20) | 7.77 s
+[Task 23/25]  Current/Best:    9.51/  20.96 GFLOPS | Progress: (12/20) | 16.69 s
+[Task 23/25]  Current/Best:   12.09/  21.98 GFLOPS | Progress: (16/20) | 19.33 s
+[Task 23/25]  Current/Best:   14.46/  21.98 GFLOPS | Progress: (20/20) | 23.03 s Done.
 
 [Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 24/25]  Current/Best:    3.98/   9.52 GFLOPS | Progress: (4/20) | 6.73 s
-[Task 24/25]  Current/Best:    2.96/   9.52 GFLOPS | Progress: (8/20) | 15.73 s
-[Task 24/25]  Current/Best:    7.02/   9.52 GFLOPS | Progress: (12/20) | 26.76 s
-[Task 24/25]  Current/Best:    3.70/   9.52 GFLOPS | Progress: (16/20) | 37.81 s
-[Task 24/25]  Current/Best:    3.41/   9.52 GFLOPS | Progress: (20/20) | 49.98 s
+[Task 24/25]  Current/Best:    1.32/   1.32 GFLOPS | Progress: (4/20) | 13.91 s
+[Task 24/25]  Current/Best:    5.89/   6.93 GFLOPS | Progress: (8/20) | 26.29 s
+[Task 24/25]  Current/Best:    2.69/   8.14 GFLOPS | Progress: (12/20) | 32.11 s
+[Task 24/25]  Current/Best:    3.12/   8.14 GFLOPS | Progress: (16/20) | 43.14 s
+[Task 24/25]  Current/Best:    4.53/   8.14 GFLOPS | Progress: (20/20) | 55.23 s
 [Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 25/25]  Current/Best:    5.99/   7.16 GFLOPS | Progress: (4/20) | 15.86 s
-[Task 25/25]  Current/Best:    1.55/   8.82 GFLOPS | Progress: (8/20) | 18.50 s
-[Task 25/25]  Current/Best:    7.38/   8.82 GFLOPS | Progress: (12/20) | 21.40 s
-[Task 25/25]  Current/Best:    1.51/   8.82 GFLOPS | Progress: (16/20) | 27.44 s
-[Task 25/25]  Current/Best:    2.76/   8.82 GFLOPS | Progress: (20/20) | 38.41 s
+[Task 25/25]  Current/Best:    8.84/   8.84 GFLOPS | Progress: (4/20) | 11.75 s
+[Task 25/25]  Current/Best:    1.55/   8.84 GFLOPS | Progress: (8/20) | 22.81 s
+[Task 25/25]  Current/Best:    1.49/   9.13 GFLOPS | Progress: (12/20) | 25.72 s
+[Task 25/25]  Current/Best:    1.55/   9.13 GFLOPS | Progress: (16/20) | 27.38 s
+[Task 25/25]  Current/Best:    8.10/   9.25 GFLOPS | Progress: (20/20) | 38.39 s
 </pre></div>
 </div>
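For context, the "[Task  N/25]  Current/Best ..." progress bars above are what AutoTVM prints while tuning each extracted task. The following is only a minimal sketch of such a tuning loop, not the code from this build: the toy conv2d Relay module stands in for the ResNet-50 v2 workload, and the "llvm" target, the 20-trial budget and the "tuning_records.json" file name are placeholders.

# Sketch of an AutoTVM per-task tuning loop (assumptions noted above).
import numpy as np
import tvm
from tvm import relay, autotvm
from tvm.autotvm.tuner import XGBTuner

# Tiny stand-in model; the tutorial tunes ResNet-50 v2 instead.
data = relay.var("data", shape=(1, 3, 224, 224), dtype="float32")
weight = relay.var("weight", shape=(16, 3, 3, 3), dtype="float32")
out = relay.nn.conv2d(data, weight, channels=16, kernel_size=(3, 3), padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))
params = {"weight": np.random.uniform(size=(16, 3, 3, 3)).astype("float32")}

target = "llvm"
tasks = autotvm.task.extract_from_program(mod["main"], target=target, params=params)

measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(build_func="default"),
    runner=autotvm.LocalRunner(number=10, repeat=1, timeout=10, min_repeat_ms=0),
)

for i, task in enumerate(tasks):
    prefix = "[Task %2d/%2d] " % (i + 1, len(tasks))
    tuner = XGBTuner(task, loss_type="rank")
    tuner.tune(
        n_trial=min(20, len(task.config_space)),  # 20 trials per task, as in the bars above
        measure_option=measure_option,
        callbacks=[
            autotvm.callback.progress_bar(20, prefix=prefix),
            autotvm.callback.log_to_file("tuning_records.json"),
        ],
    )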
 <p>The output from this tuning process will look something like this:</p>
@@ -1026,8 +1026,8 @@ improvement in comparing the optimized model to the unoptimized model.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;unoptimized: </span><span class="si">%s</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="p">(</span><a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">unoptimized</span></a><span class="p">))</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>optimized: {&#39;mean&#39;: 414.8339746599959, &#39;median&#39;: 413.5761416000605, &#39;std&#39;: 2.7188403795600826}
-unoptimized: {&#39;mean&#39;: 495.85341244997835, &#39;median&#39;: 495.2317347000644, &#39;std&#39;: 4.122248126917765}
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>optimized: {&#39;mean&#39;: 405.46805702011625, &#39;median&#39;: 405.5535070001497, &#39;std&#39;: 1.8920621967570168}
+unoptimized: {&#39;mean&#39;: 493.6643143499532, &#39;median&#39;: 494.8347487999854, &#39;std&#39;: 3.130444482823425}
 </pre></div>
 </div>
 </div>
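The mean/median/std dictionaries above are wall-clock statistics collected with timeit over repeated graph executions. A rough, self-contained sketch of that measurement pattern follows; the tiny dense-layer module built here is only a stand-in for the compiled ResNet-50 v2 graph that the tutorial actually benchmarks.

# Sketch: timing a compiled Relay module with timeit (stand-in model, see above).
import timeit
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

x = relay.var("x", shape=(1, 64), dtype="float32")
w = relay.var("w", shape=(64, 64), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x, w], relay.nn.dense(x, w)))
params = {"w": np.random.uniform(size=(64, 64)).astype("float32")}
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

dev = tvm.device("llvm", 0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("x", np.random.uniform(size=(1, 64)).astype("float32"))

timing_number = 10  # runs per measurement
timing_repeat = 10  # number of measurements
raw = (
    np.array(timeit.Timer(lambda: module.run()).repeat(repeat=timing_repeat, number=timing_number))
    * 1000
    / timing_number
)  # milliseconds per run
print({"mean": np.mean(raw), "median": np.median(raw), "std": np.std(raw)})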
@@ -1041,7 +1041,7 @@ models.</p>
 <p>Here we presented a simple example using ResNet-50 v2 locally. However, TVM
 supports many more features including cross-compilation, remote execution and
 profiling/benchmarking.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 13 minutes  26.830 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 14 minutes  29.607 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorial-autotvm-relay-x86-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../_downloads/57a45d9bef1af358191e7d50043e652c/autotvm_relay_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">autotvm_relay_x86.py</span></code></a></p>
diff --git a/docs/tutorial/cross_compilation_and_rpc.html b/docs/tutorial/cross_compilation_and_rpc.html
index 39507695e0..35e71c66fb 100644
--- a/docs/tutorial/cross_compilation_and_rpc.html
+++ b/docs/tutorial/cross_compilation_and_rpc.html
@@ -543,7 +543,7 @@ device and returns the measured cost. Network overhead is excluded.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;</span><span class="si">%g</span><span class="s2"> secs/op&quot;</span> <span class="o">%</span> <span class="n">cost</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>1.196e-07 secs/op
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>1.282e-07 secs/op
 </pre></div>
 </div>
 </div>
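The secs/op figure above is produced by TVM's remote time evaluator, which runs the kernel repeatedly on the device and excludes network overhead. A sketch of that flow, using a local RPC server as a stand-in for a real remote board (the tutorial cross-compiles for the target CPU rather than building plain "llvm"):

# Sketch: timing a kernel over RPC (local server as a stand-in remote).
import numpy as np
import tvm
from tvm import te, rpc
from tvm.contrib import utils

server = rpc.Server(key="example")
remote = rpc.connect("127.0.0.1", server.port, key="example")

n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)
func = tvm.build(s, [A, B], target="llvm", name="add_one")

# Ship the compiled module to the remote and time it there.
temp = utils.tempdir()
func.export_library(temp.relpath("lib.tar"))
remote.upload(temp.relpath("lib.tar"))
rlib = remote.load_module("lib.tar")

dev = remote.cpu()
a = tvm.nd.array(np.random.uniform(size=n).astype(A.dtype), dev)
b = tvm.nd.array(np.zeros(n, dtype=B.dtype), dev)
time_f = rlib.time_evaluator("add_one", dev, number=10)
print("%g secs/op" % time_f(a, b).mean)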
diff --git a/docs/tutorial/intro_topi.html b/docs/tutorial/intro_topi.html
index 5fbb5644c0..40cb357ea7 100644
--- a/docs/tutorial/intro_topi.html
+++ b/docs/tutorial/intro_topi.html
@@ -513,7 +513,7 @@ class Module:
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/ir.html#tvm.ir.Array" title="tvm.ir.Array" class="sphx-glr-backref-module-tvm-ir sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">sg</span><span class="o">.</span><span class="n">stages</span></a><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[stage(a, placeholder(a, 0x1c829ba0)), stage(b, placeholder(b, 0x1d948770)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax1, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax2, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;)], reduce_axis=[], tag=broadcast, attr [...]
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[stage(a, placeholder(a, 0xe6f1590)), stage(b, placeholder(b, 0x1c5ca650)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax1, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax2, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;)], reduce_axis=[], tag=broadcast, attrs [...]
 </pre></div>
 </div>
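The stage list above is the kind of output you get from printing a schedule's stages for a broadcast add built through TOPI. A minimal sketch that reproduces the same sort of listing (the exact operator composition in the tutorial may differ):

# Sketch: broadcast add via TOPI, then inspect the schedule's stages.
import tvm
from tvm import te, topi

a = te.placeholder((100, 10, 10), name="a")
b = te.placeholder((10, 10), name="b")
c = topi.add(a, b)  # broadcasts b over the leading axis of a, producing T_add

sg = te.create_schedule(c.op)
print(sg.stages)  # placeholder stages for a and b plus the T_add compute stage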
 <p>We can test the correctness by comparing with <code class="code docutils literal notranslate"><span class="pre">numpy</span></code> result as follows</p>
diff --git a/docs/tutorial/sg_execution_times.html b/docs/tutorial/sg_execution_times.html
index c2b7e4dd5d..c301f715ff 100644
--- a/docs/tutorial/sg_execution_times.html
+++ b/docs/tutorial/sg_execution_times.html
@@ -345,7 +345,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-tutorial-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>17:14.930</strong> total execution time for <strong>tutorial</strong> files:</p>
+<p><strong>18:29.900</strong> total execution time for <strong>tutorial</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -354,46 +354,46 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="autotvm_relay_x86.html#sphx-glr-tutorial-autotvm-relay-x86-py"><span class="std std-ref">Compiling and Optimizing a Model with the Python Interface (AutoTVM)</span></a> (<code class="docutils literal notranslate"><span class="pre">autotvm_relay_x86.py</span></code>)</p></td>
-<td><p>13:26.830</p></td>
+<td><p>14:29.607</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="auto_scheduler_matmul_x86.html#sphx-glr-tutorial-auto-scheduler-matmul-x86-py"><span class="std std-ref">Optimizing Operators with Auto-scheduling</span></a> (<code class="docutils literal notranslate"><span class="pre">auto_scheduler_matmul_x86.py</span></code>)</p></td>
-<td><p>01:40.584</p></td>
+<td><p>01:51.453</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tensor_expr_get_started.html#sphx-glr-tutorial-tensor-expr-get-started-py"><span class="std std-ref">Working with Operators Using Tensor Expression</span></a> (<code class="docutils literal notranslate"><span class="pre">tensor_expr_get_started.py</span></code>)</p></td>
-<td><p>00:58.807</p></td>
+<td><p>00:59.197</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="relay_quick_start.html#sphx-glr-tutorial-relay-quick-start-py"><span class="std std-ref">Quick Start Tutorial for Compiling Deep Learning Models</span></a> (<code class="docutils literal notranslate"><span class="pre">relay_quick_start.py</span></code>)</p></td>
-<td><p>00:40.103</p></td>
+<td><p>00:40.122</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="autotvm_matmul_x86.html#sphx-glr-tutorial-autotvm-matmul-x86-py"><span class="std std-ref">Optimizing Operators with Schedule Templates and AutoTVM</span></a> (<code class="docutils literal notranslate"><span class="pre">autotvm_matmul_x86.py</span></code>)</p></td>
-<td><p>00:26.583</p></td>
+<td><p>00:27.510</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="intro_topi.html#sphx-glr-tutorial-intro-topi-py"><span class="std std-ref">Introduction to TOPI</span></a> (<code class="docutils literal notranslate"><span class="pre">intro_topi.py</span></code>)</p></td>
-<td><p>00:00.969</p></td>
+<td><p>00:00.967</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tensor_ir_blitz_course.html#sphx-glr-tutorial-tensor-ir-blitz-course-py"><span class="std std-ref">Blitz Course to TensorIR</span></a> (<code class="docutils literal notranslate"><span class="pre">tensor_ir_blitz_course.py</span></code>)</p></td>
-<td><p>00:00.860</p></td>
+<td><p>00:00.853</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="cross_compilation_and_rpc.html#sphx-glr-tutorial-cross-compilation-and-rpc-py"><span class="std std-ref">Cross Compilation and RPC</span></a> (<code class="docutils literal notranslate"><span class="pre">cross_compilation_and_rpc.py</span></code>)</p></td>
-<td><p>00:00.193</p></td>
+<td><p>00:00.191</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="uma.html#sphx-glr-tutorial-uma-py"><span class="std std-ref">Making your Hardware Accelerator TVM-ready with UMA</span></a> (<code class="docutils literal notranslate"><span class="pre">uma.py</span></code>)</p></td>
 <td><p>00:00.000</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="tvmc_python.html#sphx-glr-tutorial-tvmc-python-py"><span class="std std-ref">Getting Starting using TVMC Python: a high-level API for TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">tvmc_python.py</span></code>)</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="tvmc_command_line_driver.html#sphx-glr-tutorial-tvmc-command-line-driver-py"><span class="std std-ref">Compiling and Optimizing a Model with TVMC</span></a> (<code class="docutils literal notranslate"><span class="pre">tvmc_command_line_driver.py</span></code>)</p></td>
 <td><p>00:00.000</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="tvmc_command_line_driver.html#sphx-glr-tutorial-tvmc-command-line-driver-py"><span class="std std-ref">Compiling and Optimizing a Model with TVMC</span></a> (<code class="docutils literal notranslate"><span class="pre">tvmc_command_line_driver.py</span></code>)</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="tvmc_python.html#sphx-glr-tutorial-tvmc-python-py"><span class="std std-ref">Getting Starting using TVMC Python: a high-level API for TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">tvmc_python.py</span></code>)</p></td>
 <td><p>00:00.000</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
diff --git a/docs/tutorial/tensor_expr_get_started.html b/docs/tutorial/tensor_expr_get_started.html
index 6b04126d85..5d1c216a69 100644
--- a/docs/tutorial/tensor_expr_get_started.html
+++ b/docs/tutorial/tensor_expr_get_started.html
@@ -555,7 +555,7 @@ helper function to run a profile of the TVM generated code.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.000007
-naive: 0.000007
+naive: 0.000008
 </pre></div>
 </div>
 </div>
@@ -686,10 +686,10 @@ class Module:
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Operator                  Timing             Performance
-   numpy    7.37280999601353e-06                     1.0
-   naive              6.6388e-06      0.9004436576542175
-parallel              6.9603e-06      0.9440498268317543
-  vector    3.9225500000000006e-05     5.320291723401144
+   numpy    7.104640026227571e-06                    1.0
+   naive              7.9066e-06      1.1128783401849922
+parallel    7.033200000000001e-06      0.989944595931132
+  vector    3.9490500000000004e-05     5.558409694821472
 </pre></div>
 </div>
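The numpy/naive/parallel/vector rows above compare different schedules of the same element-wise add. A sketch of how the parallel and vectorized variants can be expressed; the vector size (32768) and split factor (4) are assumptions, not necessarily the tutorial's exact values.

# Sketch: one compute definition, several schedules for a vector add.
import numpy as np
import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")

def build_and_time(schedule, label, size=32768):
    f = tvm.build(schedule, [A, B, C], target="llvm", name="myadd")
    dev = tvm.cpu(0)
    a = tvm.nd.array(np.random.uniform(size=size).astype(A.dtype), dev)
    b = tvm.nd.array(np.random.uniform(size=size).astype(A.dtype), dev)
    c = tvm.nd.array(np.zeros(size, dtype=C.dtype), dev)
    mean = f.time_evaluator("myadd", dev, number=10)(a, b, c).mean
    print("%s: %f" % (label, mean))

s_naive = te.create_schedule(C.op)
build_and_time(s_naive, "naive")

s_parallel = te.create_schedule(C.op)
s_parallel[C].parallel(C.op.axis[0])  # spread the single loop across threads
build_and_time(s_parallel, "parallel")

s_vector = te.create_schedule(C.op)
outer, inner = s_vector[C].split(C.op.axis[0], factor=4)
s_vector[C].parallel(outer)
s_vector[C].vectorize(inner)          # map the inner loop to SIMD lanes
build_and_time(s_vector, "vector")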
 <div class="admonition-code-specialization admonition">
@@ -1005,7 +1005,7 @@ matrix multiplication.</p>
 <span class="n">answer</span> <span class="o">=</span> <span class="n">numpy</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">a</span><span class="o">.</span><span class="n">numpy</span><span class="p">(),</span> <span class="n">b</span><span class="o">.</span><span class="n">numpy</span><span class="p">())</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.018423
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.018047
 </pre></div>
 </div>
 <p>Now we write a basic matrix multiplication using TVM TE and verify that it
@@ -1046,7 +1046,7 @@ optimizations.</p>
 <span class="n">evaluate_operation</span><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">s</span></a><span class="p">,</span> <span class="p">[</span><a href="../reference/api/python/te.html#tvm.te.Tensor" title="tvm.te.Tensor" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>none: 3.299926
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>none: 3.314150
 </pre></div>
 </div>
 <p>Let’s take a look at the intermediate representation of the operator and
@@ -1110,7 +1110,7 @@ schedule.</p>
 <span class="n">evaluate_operation</span><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">s</span></a><span class="p">,</span> <span class="p">[</span><a href="../reference/api/python/te.html#tvm.te.Tensor" title="tvm.te.Tensor" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>blocking: 0.304426
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>blocking: 0.306366
 </pre></div>
 </div>
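The drop from the "none" baseline (about 3.3 s) to "blocking" (about 0.31 s) above comes from tiling the output matrix and splitting the reduction so each block of C stays in cache. A sketch of that schedule; the block size bn = 32 and the reduction split factor of 4 are stated assumptions.

# Sketch: cache blocking for C = A @ B on a 1024^3 matmul.
import tvm
from tvm import te

M = K = N = 1024
bn = 32

A = te.placeholder((M, K), name="A")
B = te.placeholder((K, N), name="B")
k = te.reduce_axis((0, K), name="k")
C = te.compute((M, N), lambda m, n: te.sum(A[m, k] * B[k, n], axis=k), name="C")

s = te.create_schedule(C.op)
# Tile the output into bn x bn blocks and hoist the reduction above the block body.
mo, no, mi, ni = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
(kaxis,) = s[C].op.reduce_axis
ko, ki = s[C].split(kaxis, factor=4)
s[C].reorder(mo, no, ko, ki, mi, ni)

func = tvm.build(s, [A, B, C], target="llvm", name="mmult_blocked")
print(tvm.lower(s, [A, B, C], simple_mode=True))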
 <p>By reordering the computation to take advantage of caching, you should see a
@@ -1159,7 +1159,7 @@ already cache friendly from our previous optimizations.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vectorization: 0.283520
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vectorization: 0.296275
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1208,7 +1208,7 @@ more cache friendly.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>loop permutation: 0.114691
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>loop permutation: 0.118549
 # from tvm.script import ir as I
 # from tvm.script import tir as T
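The "vectorization" and "loop permutation" rows above are small changes on top of the blocked schedule: vectorize the innermost output axis and reorder the loops for better access locality. One possible version of those two steps follows; the exact loop order used by the tutorial may differ.

# Sketch: vectorization plus loop permutation on the blocked matmul.
import tvm
from tvm import te

M = K = N = 1024
bn = 32

A = te.placeholder((M, K), name="A")
B = te.placeholder((K, N), name="B")
k = te.reduce_axis((0, K), name="k")
C = te.compute((M, N), lambda m, n: te.sum(A[m, k] * B[k, n], axis=k), name="C")

s = te.create_schedule(C.op)
mo, no, mi, ni = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
(kaxis,) = s[C].op.reduce_axis
ko, ki = s[C].split(kaxis, factor=4)
s[C].reorder(mo, no, mi, ko, ki, ni)  # one possible permutation: row loop above the reduction
s[C].vectorize(ni)                    # SIMD over the innermost output axis

func = tvm.build(s, [A, B, C], target="llvm", name="mmult_permuted")
print(tvm.lower(s, [A, B, C], simple_mode=True))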
 
@@ -1278,7 +1278,7 @@ optimized schedule.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>array packing: 0.104664
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>array packing: 0.106734
 # from tvm.script import ir as I
 # from tvm.script import tir as T
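The "array packing" row above comes from re-laying-out B so the innermost loop reads it contiguously. A sketch of that transformation under the same assumed sizes and factors:

# Sketch: pack B into a [N/bn, K, bn] layout so the inner loop is sequential.
import tvm
from tvm import te

M = K = N = 1024
bn = 32

A = te.placeholder((M, K), name="A")
B = te.placeholder((K, N), name="B")
packedB = te.compute(
    (N // bn, K, bn), lambda bigN, kk, littleN: B[kk, bigN * bn + littleN], name="packedB"
)
k = te.reduce_axis((0, K), name="k")
C = te.compute(
    (M, N),
    lambda m, n: te.sum(A[m, k] * packedB[tvm.tir.indexdiv(n, bn), k, tvm.tir.indexmod(n, bn)], axis=k),
    name="C",
)

s = te.create_schedule(C.op)
mo, no, mi, ni = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
(kaxis,) = s[C].op.reduce_axis
ko, ki = s[C].split(kaxis, factor=4)
s[C].reorder(mo, no, ko, mi, ki, ni)
s[C].vectorize(ni)

# Generate the packed copy of B in parallel, vectorizing its inner axis.
bigN, _, littleN = s[packedB].op.axis
s[packedB].vectorize(littleN)
s[packedB].parallel(bigN)

func = tvm.build(s, [A, B, C], target="llvm", name="mmult_packed")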
 
@@ -1344,7 +1344,7 @@ to `C</cite> when all the block results are ready.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>block caching: 0.111098
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>block caching: 0.111422
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1401,7 +1401,7 @@ of thread-level parallelization.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallelization: 0.131157
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallelization: 0.132259
 # from tvm.script import ir as I
 # from tvm.script import tir as T
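The "block caching" and "parallelization" rows above add a write cache for each output block and parallelize the outer block loop. A sketch combining both on top of the packed layout, again under the same assumed sizes:

# Sketch: write-cache a block of C and parallelize the outer block loop.
import tvm
from tvm import te

M = K = N = 1024
bn = 32

A = te.placeholder((M, K), name="A")
B = te.placeholder((K, N), name="B")
packedB = te.compute(
    (N // bn, K, bn), lambda bigN, kk, littleN: B[kk, bigN * bn + littleN], name="packedB"
)
k = te.reduce_axis((0, K), name="k")
C = te.compute(
    (M, N),
    lambda m, n: te.sum(A[m, k] * packedB[tvm.tir.indexdiv(n, bn), k, tvm.tir.indexmod(n, bn)], axis=k),
    name="C",
)

s = te.create_schedule(C.op)
CC = s.cache_write(C, "global")          # accumulate each block of C in a local buffer

mo, no, mi, ni = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
s[CC].compute_at(s[C], no)               # write the cached block out once it is complete
mc, nc = s[CC].op.axis
(kaxis,) = s[CC].op.reduce_axis
ko, ki = s[CC].split(kaxis, factor=4)
s[CC].reorder(ko, mc, ki, nc)
s[CC].vectorize(nc)
s[CC].unroll(ki)

s[C].parallel(mo)                        # thread-level parallelism over output blocks

bigN, _, littleN = s[packedB].op.axis
s[packedB].vectorize(littleN)
s[packedB].parallel(bigN)

func = tvm.build(s, [A, B, C], target="llvm", name="mmult_parallel")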
 
@@ -1454,13 +1454,13 @@ working, we can compare the results.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>        Operator                  Timing             Performance
-            none            3.2999255782                     1.0
-        blocking             0.304425778     0.09225231623740258
-   vectorization            0.2835203943     0.08591720861009568
-loop permutation     0.11469136309999999     0.03475574232875892
-   array packing     0.10466361099999999     0.03171696104040338
-   block caching     0.11109770830000001     0.03366673146628965
- parallelization            0.1311569975     0.03974544103856482
+            none      3.3141503092999995                     1.0
+        blocking     0.30636633280000003     0.09244189436438367
+   vectorization            0.2962752545     0.08939704806647045
+loop permutation            0.1185491122    0.035770590086796464
+   array packing     0.10673433839999999     0.03220564200135628
+   block caching     0.11142193279999998     0.03362006016665371
+ parallelization     0.13225905990000003    0.039907381246065216
 </pre></div>
 </div>
 <p>Note that the outputs on the web page reflect the running times on a