Posted to commits@tvm.apache.org by tq...@apache.org on 2023/07/26 05:20:58 UTC

[tvm-site] branch asf-site updated: deploying docs (apache/tvm@304aa1e084be8cf294a9f851287a09deed102285)

This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/tvm-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 3318b7a794 deploying docs (apache/tvm@304aa1e084be8cf294a9f851287a09deed102285)
3318b7a794 is described below

commit 3318b7a79414db34028698d63e46898ae5a56fcc
Author: tvm-bot <95...@users.noreply.github.com>
AuthorDate: Wed Jul 26 05:20:52 2023 +0000

    deploying docs (apache/tvm@304aa1e084be8cf294a9f851287a09deed102285)
---
 .../how_to/compile_models/from_darknet.rst.txt     |   2 +-
 .../how_to/compile_models/from_mxnet.rst.txt       |   2 +-
 .../how_to/compile_models/from_oneflow.rst.txt     |   2 +-
 .../how_to/compile_models/from_paddle.rst.txt      |   2 +-
 .../how_to/compile_models/from_pytorch.rst.txt     |   2 +-
 .../how_to/compile_models/from_tensorflow.rst.txt  |   2 +-
 .../compile_models/sg_execution_times.rst.txt      |  22 +-
 .../deploy_models/deploy_model_on_adreno.rst.txt   |   4 +-
 .../deploy_model_on_adreno_tvmc.rst.txt            |   2 +-
 .../deploy_models/deploy_model_on_android.rst.txt  |   2 +-
 .../deploy_object_detection_pytorch.rst.txt        |   4 +-
 .../deploy_models/deploy_prequantized.rst.txt      |   6 +-
 .../deploy_prequantized_tflite.rst.txt             |   2 +-
 .../how_to/deploy_models/deploy_quantized.rst.txt  |   2 +-
 .../deploy_models/sg_execution_times.rst.txt       |  20 +-
 .../extend_tvm/bring_your_own_datatypes.rst.txt    |   2 +-
 .../how_to/extend_tvm/sg_execution_times.rst.txt   |  10 +-
 .../how_to/extend_tvm/use_pass_instrument.rst.txt  |  16 +-
 .../optimize_operators/opt_conv_cuda.rst.txt       |   2 +-
 .../optimize_operators/opt_conv_tensorcore.rst.txt |   2 +-
 .../how_to/optimize_operators/opt_gemm.rst.txt     |  16 +-
 .../optimize_operators/sg_execution_times.rst.txt  |   8 +-
 .../sg_execution_times.rst.txt                     |  14 +-
 .../tune_conv2d_layer_cuda.rst.txt                 |   2 +-
 .../tune_network_cuda.rst.txt                      |   4 +-
 .../tune_network_x86.rst.txt                       |   4 +-
 .../tune_with_autotvm/sg_execution_times.rst.txt   |  10 +-
 .../tune_with_autotvm/tune_conv2d_cuda.rst.txt     |   2 +-
 .../work_with_microtvm/micro_autotune.rst.txt      |  18 +-
 .../work_with_microtvm/micro_pytorch.rst.txt       |   4 +-
 .../how_to/work_with_microtvm/micro_train.rst.txt  |  16 +-
 .../work_with_microtvm/sg_execution_times.rst.txt  |  14 +-
 .../work_with_relay/sg_execution_times.rst.txt     |   8 +-
 .../how_to/work_with_schedules/intrin_math.rst.txt |   2 +-
 .../work_with_schedules/sg_execution_times.rst.txt |  18 +-
 .../tutorials/autotvm/sg_execution_times.rst.txt   |   4 +-
 .../frontend/deploy_classification.rst.txt         |   4 +-
 .../tutorials/frontend/deploy_detection.rst.txt    |   4 +-
 .../tutorials/frontend/sg_execution_times.rst.txt  |   6 +-
 .../tutorials/optimize/sg_execution_times.rst.txt  |   6 +-
 .../topic/vta/tutorials/sg_execution_times.rst.txt |   6 +-
 .../tutorial/auto_scheduler_matmul_x86.rst.txt     |  13 +-
 docs/_sources/tutorial/autotvm_matmul_x86.rst.txt  | 202 ++++++++++++++-
 docs/_sources/tutorial/autotvm_relay_x86.rst.txt   |  52 ++--
 .../tutorial/cross_compilation_and_rpc.rst.txt     |   2 +-
 docs/_sources/tutorial/intro_topi.rst.txt          |   2 +-
 docs/_sources/tutorial/sg_execution_times.rst.txt  |  18 +-
 .../tutorial/tensor_expr_get_started.rst.txt       |  44 ++--
 docs/api/rust/help.html                            |   2 +-
 docs/api/rust/settings.html                        |   2 +-
 docs/commit_hash                                   |   2 +-
 docs/how_to/compile_models/from_darknet.html       |   2 +-
 docs/how_to/compile_models/from_mxnet.html         |   2 +-
 docs/how_to/compile_models/from_oneflow.html       |  16 +-
 docs/how_to/compile_models/from_paddle.html        |   2 +-
 docs/how_to/compile_models/from_pytorch.html       |  17 +-
 docs/how_to/compile_models/from_tensorflow.html    |   2 +-
 docs/how_to/compile_models/sg_execution_times.html |  26 +-
 .../deploy_models/deploy_model_on_adreno.html      |   4 +-
 .../deploy_models/deploy_model_on_adreno_tvmc.html |  27 +-
 .../deploy_models/deploy_model_on_android.html     |   2 +-
 .../deploy_object_detection_pytorch.html           |  63 ++---
 docs/how_to/deploy_models/deploy_prequantized.html |  10 +-
 .../deploy_models/deploy_prequantized_tflite.html  |   2 +-
 docs/how_to/deploy_models/deploy_quantized.html    |   2 +-
 docs/how_to/deploy_models/sg_execution_times.html  |  24 +-
 .../extend_tvm/bring_your_own_datatypes.html       |   2 +-
 docs/how_to/extend_tvm/sg_execution_times.html     |  10 +-
 docs/how_to/extend_tvm/use_pass_instrument.html    |  16 +-
 docs/how_to/optimize_operators/opt_conv_cuda.html  |   2 +-
 .../optimize_operators/opt_conv_tensorcore.html    |   2 +-
 docs/how_to/optimize_operators/opt_gemm.html       |  16 +-
 .../optimize_operators/sg_execution_times.html     |   8 +-
 .../sg_execution_times.html                        |  14 +-
 .../tune_conv2d_layer_cuda.html                    |   2 +-
 .../tune_with_autoscheduler/tune_network_cuda.html |   4 +-
 .../tune_with_autoscheduler/tune_network_x86.html  |   4 +-
 .../tune_with_autotvm/sg_execution_times.html      |  10 +-
 .../how_to/tune_with_autotvm/tune_conv2d_cuda.html |   2 +-
 docs/how_to/work_with_microtvm/micro_autotune.html |  18 +-
 docs/how_to/work_with_microtvm/micro_pytorch.html  |   6 +-
 docs/how_to/work_with_microtvm/micro_train.html    |  16 +-
 .../work_with_microtvm/sg_execution_times.html     |  18 +-
 .../how_to/work_with_relay/sg_execution_times.html |   8 +-
 docs/how_to/work_with_schedules/intrin_math.html   |   2 +-
 .../work_with_schedules/sg_execution_times.html    |  18 +-
 docs/install/nnpack.html                           |  12 +-
 docs/reference/api/python/auto_scheduler.html      |   4 +-
 .../api/typedoc/classes/bytestreamreader.html      |  12 +-
 .../api/typedoc/classes/cachedcallstack.html       |  34 +--
 docs/reference/api/typedoc/classes/dldatatype.html |  12 +-
 docs/reference/api/typedoc/classes/dldevice.html   |  10 +-
 .../reference/api/typedoc/classes/environment.html |  12 +-
 docs/reference/api/typedoc/classes/ffilibrary.html |  20 +-
 docs/reference/api/typedoc/classes/instance.html   |  58 ++---
 docs/reference/api/typedoc/classes/memory.html     |  34 +--
 docs/reference/api/typedoc/classes/module.html     |  10 +-
 docs/reference/api/typedoc/classes/ndarray.html    |  22 +-
 .../api/typedoc/classes/packedfunccell.html        |   6 +-
 docs/reference/api/typedoc/classes/rpcserver.html  |  14 +-
 .../api/typedoc/classes/runtimecontext.html        |  22 +-
 docs/reference/api/typedoc/classes/scalar.html     |   6 +-
 docs/reference/api/typedoc/classes/tvmarray.html   |  16 +-
 docs/reference/api/typedoc/classes/tvmobject.html  |  12 +-
 .../api/typedoc/classes/webgpucontext.html         |  12 +-
 docs/reference/api/typedoc/enums/argtypecode.html  |  30 +--
 .../api/typedoc/enums/aynccallbackcode.html        |   4 +-
 .../api/typedoc/enums/dldatatypecode.html          |   8 +-
 .../api/typedoc/enums/rpcserverstate.html          |  12 +-
 docs/reference/api/typedoc/enums/sizeof.html       |  18 +-
 docs/reference/api/typedoc/index.html              | 124 +++++-----
 .../api/typedoc/interfaces/disposable.html         |   2 +-
 .../api/typedoc/interfaces/functioninfo.html       |   6 +-
 .../api/typedoc/interfaces/libraryprovider.html    |   4 +-
 docs/searchindex.js                                |   2 +-
 .../vta/tutorials/autotvm/sg_execution_times.html  |   4 +-
 .../tutorials/frontend/deploy_classification.html  |   4 +-
 .../vta/tutorials/frontend/deploy_detection.html   |   4 +-
 .../vta/tutorials/frontend/sg_execution_times.html |   6 +-
 .../vta/tutorials/optimize/sg_execution_times.html |   6 +-
 docs/topic/vta/tutorials/sg_execution_times.html   |   6 +-
 docs/tutorial/auto_scheduler_matmul_x86.html       |   8 +-
 docs/tutorial/autotvm_matmul_x86.html              | 202 ++++++++++++++-
 docs/tutorial/autotvm_relay_x86.html               | 274 ++++++++++-----------
 docs/tutorial/cross_compilation_and_rpc.html       |   2 +-
 docs/tutorial/intro_topi.html                      |   2 +-
 docs/tutorial/sg_execution_times.html              |  18 +-
 docs/tutorial/tensor_expr_get_started.html         |  44 ++--
 128 files changed, 1233 insertions(+), 881 deletions(-)

diff --git a/docs/_sources/how_to/compile_models/from_darknet.rst.txt b/docs/_sources/how_to/compile_models/from_darknet.rst.txt
index cb79d8d489..185a79c136 100644
--- a/docs/_sources/how_to/compile_models/from_darknet.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_darknet.rst.txt
@@ -318,7 +318,7 @@ The process is no different from other examples.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  36.871 seconds)
+   **Total running time of the script:** ( 1 minutes  36.924 seconds)
 
 
 .. _sphx_glr_download_how_to_compile_models_from_darknet.py:
diff --git a/docs/_sources/how_to/compile_models/from_mxnet.rst.txt b/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
index 600479e7f8..7844968c67 100644
--- a/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
@@ -116,7 +116,7 @@ In this section, we download a pretrained imagenet model and classify an image.
 
  .. code-block:: none
 
-    Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip578cabc6-c1fc-4405-8b72-98f7aacf70eb from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
+    Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip276663bb-29f7-4498-88ce-3368a555ad78 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
     x (1, 3, 224, 224)
 
 
diff --git a/docs/_sources/how_to/compile_models/from_oneflow.rst.txt b/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
index b064495210..e0a64a984c 100644
--- a/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
@@ -121,7 +121,7 @@ Load a pretrained OneFlow model and save model
  .. code-block:: none
 
     Downloading: "https://oneflow-public.oss-cn-beijing.aliyuncs.com/model_zoo/flowvision/classification/ResNet/resnet18.zip" to /workspace/.oneflow/flowvision_cache/resnet18.zip
-
      0%|          | 0.00/41.5M [00:00<?, ?B/s]
     19%|#9        | 7.99M/41.5M [00:00<00:00, 43.6MB/s]
     35%|###4      | 14.3M/41.5M [00:00<00:00, 40.9MB/s]
     44%|####3     | 18.2M/41.5M [00:00<00:00, 35.7MB/s]
     58%|#####7    | 24.0M/41.5M [00:00<00:00, 37.4MB/s]
     77%|#######7  | 32.0M/41.5M [00:00<00:00, 44.4MB/s]
     87%|########7 | 36.3M/41.5M [00:00<00:00, 39.8MB/s]
     97%|#########6| 40.1M/41.5M [00:01<00:00, 36.6MB/s]
    100%|##########| 41.5M/41.5M [00:01<00:00, 39.8MB/s]
+
      0%|          | 0.00/41.5M [00:00<?, ?B/s]
     15%|#5        | 6.33M/41.5M [00:00<00:00, 59.4MB/s]
     29%|##8       | 12.0M/41.5M [00:00<00:00, 55.4MB/s]
     42%|####1     | 17.3M/41.5M [00:00<00:00, 37.4MB/s]
     54%|#####3    | 22.3M/41.5M [00:00<00:00, 38.5MB/s]
     63%|######3   | 26.3M/41.5M [00:00<00:00, 38.6MB/s]
     77%|#######7  | 32.0M/41.5M [00:00<00:00, 37.4MB/s]
     96%|#########6| 40.0M/41.5M [00:00<00:00, 43.3MB/s]
    100%|##########| 41.5M/41.5M [00:01<00:00, 43.3MB/s]
 
 
 
diff --git a/docs/_sources/how_to/compile_models/from_paddle.rst.txt b/docs/_sources/how_to/compile_models/from_paddle.rst.txt
index c36bf71fa4..bbc2b6169f 100644
--- a/docs/_sources/how_to/compile_models/from_paddle.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_paddle.rst.txt
@@ -209,7 +209,7 @@ Look up prediction top 1 index in 1000 class synset.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  3.743 seconds)
+   **Total running time of the script:** ( 1 minutes  4.090 seconds)
 
 
 .. _sphx_glr_download_how_to_compile_models_from_paddle.py:
diff --git a/docs/_sources/how_to/compile_models/from_pytorch.rst.txt b/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
index edfccbe3f7..5704a97bbd 100644
--- a/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
@@ -101,7 +101,7 @@ Load a pretrained PyTorch model
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet18_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet18_Weights.DEFAULT` to get the most up-to-date weights.
       warnings.warn(msg)
     Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
-
      0%|          | 0.00/44.7M [00:00<?, ?B/s]
     14%|#4        | 6.30M/44.7M [00:00<00:00, 53.7MB/s]
     26%|##5       | 11.4M/44.7M [00:00<00:00, 42.1MB/s]
     36%|###5      | 16.0M/44.7M [00:00<00:00, 42.0MB/s]
     54%|#####3    | 24.0M/44.7M [00:00<00:00, 42.2MB/s]
     68%|######7   | 30.3M/44.7M [00:00<00:00, 44.1MB/s]
     81%|########  | 36.2M/44.7M [00:00<00:00, 48.5MB/s]
     92%|#########1| 41.0M/44.7M [00:00<00:00, 41.7MB/s]
    100%|##########| 44.7M/44.7M [00:01<00:00, 46.6MB/s]
+
      0%|          | 0.00/44.7M [00:00<?, ?B/s]
     14%|#4        | 6.30M/44.7M [00:00<00:00, 48.1MB/s]
     24%|##4       | 10.9M/44.7M [00:00<00:00, 35.5MB/s]
     32%|###2      | 14.4M/44.7M [00:00<00:00, 35.8MB/s]
     45%|####4     | 20.1M/44.7M [00:00<00:00, 43.8MB/s]
     55%|#####4    | 24.4M/44.7M [00:00<00:00, 36.0MB/s]
     72%|#######1  | 32.0M/44.7M [00:00<00:00, 37.1MB/s]
     86%|########5 | 38.3M/44.7M [00:01<00:00, 41.5MB/s]
     95%|#########5| 42.5M/44.7M [00:01<00:00, 41.3MB/s]
    100%|##########| 44.7M/44.7M [00:01<00:00, 40.1MB/s]
 
 
 
diff --git a/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt b/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
index d7961be725..cb748dd401 100644
--- a/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
@@ -430,7 +430,7 @@ Run the corresponding model on tensorflow
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  47.707 seconds)
+   **Total running time of the script:** ( 1 minutes  40.402 seconds)
 
 
 .. _sphx_glr_download_how_to_compile_models_from_tensorflow.py:
diff --git a/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt b/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
index 94d8c16c59..c3859d4660 100644
--- a/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
@@ -5,26 +5,26 @@
 
 Computation times
 =================
-**07:40.914** total execution time for **how_to_compile_models** files:
+**07:33.864** total execution time for **how_to_compile_models** files:
 
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_tensorflow.py` (``from_tensorflow.py``) | 01:47.707 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_tensorflow.py` (``from_tensorflow.py``) | 01:40.402 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_darknet.py` (``from_darknet.py``)       | 01:36.871 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_darknet.py` (``from_darknet.py``)       | 01:36.924 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_paddle.py` (``from_paddle.py``)         | 01:03.743 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_paddle.py` (``from_paddle.py``)         | 01:04.090 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_oneflow.py` (``from_oneflow.py``)       | 00:42.938 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_oneflow.py` (``from_oneflow.py``)       | 00:42.876 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_coreml.py` (``from_coreml.py``)         | 00:38.379 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_coreml.py` (``from_coreml.py``)         | 00:38.641 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_mxnet.py` (``from_mxnet.py``)           | 00:36.548 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_mxnet.py` (``from_mxnet.py``)           | 00:36.101 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_pytorch.py` (``from_pytorch.py``)       | 00:29.616 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_keras.py` (``from_keras.py``)           | 00:29.175 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_keras.py` (``from_keras.py``)           | 00:28.842 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_pytorch.py` (``from_pytorch.py``)       | 00:28.962 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_tflite.py` (``from_tflite.py``)         | 00:13.353 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_tflite.py` (``from_tflite.py``)         | 00:13.819 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_onnx.py` (``from_onnx.py``)             | 00:02.917 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_onnx.py` (``from_onnx.py``)             | 00:02.875 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
index bc636b9269..d35381b734 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
@@ -673,7 +673,7 @@ well as provides information about the model's performance
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-     4224.9461    4224.8727    4227.2275    4223.2891      1.1490                  
+     4223.1123    4223.3072    4224.3717    4221.5586      0.8522                  
 
 
 
@@ -681,7 +681,7 @@ well as provides information about the model's performance
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  21.495 seconds)
+   **Total running time of the script:** ( 1 minutes  21.201 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_model_on_adreno.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt
index fcce297e4a..41c67f8857 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt
@@ -127,7 +127,7 @@ Make a Keras Resnet50 Model
  .. code-block:: none
 
     Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels.h5
-
         8192/102967424 [..............................] - ETA: 0s
      8380416/102967424 [=>............................] - ETA: 0s
     16769024/102967424 [===>..........................] - ETA: 0s
     23412736/102967424 [=====>........................] - ETA: 0s
     24870912/102967424 [======>.......................] - ETA: 1s
     25157632/102967424 [======>.......................] - ETA: 1s
     31113216/102967424 [========>.....................] - ETA: 1s
     36765696/102967424 [=========>....................] - ETA: 0s
     40189952/102967424 [==========>...................] - ETA: 0s
     41934848/102967424 [===========>..................] - ETA: 0s
     50323456/102967424 [=============>................] - ETA: 0s
     54771712/102967424 [==============>...............] - ETA: 0s
     58712064/102967424 [================>.............] - ETA: 0s
     58851328/102967424 [================>.............] - ETA: 0s
     65355776/102967424 [==================>...........] - ETA: 0s
     69296128/102967424 [===================>..........] - ETA: 0s
     75489280/102967424 [====================>.........] - ETA: 0s
     83877888/102967424 [=======================>......] - ETA: 0s
     90521600/102967424 [=========================>....] - ETA: 0s
     93937664/102967424 [==========================>...] - ETA: 0s
    100646912/102967424 [============================>.] - ETA: 0s
    102967424/102967424 [==============================] - 2s 0us/step
+
         8192/102967424 [..............................] - ETA: 0s
      6635520/102967424 [>.............................] - ETA: 1s
      8380416/102967424 [=>............................] - ETA: 2s
     15024128/102967424 [===>..........................] - ETA: 1s
     16769024/102967424 [===>..........................] - ETA: 2s
     25157632/102967424 [======>.......................] - ETA: 1s
     33546240/102967424 [========>.....................] - ETA: 1s
     40189952/102967424 [==========>...................] - ETA: 1s
     41934848/102967424 [===========>..................] - ETA: 1s
     48578560/102967424 [=============>................] - ETA: 1s
     50323456/102967424 [=============>................] - ETA: 1s
     56967168/102967424 [===============>..............] - ETA: 0s
     58712064/102967424 [================>.............] - ETA: 0s
     65355776/102967424 [==================>...........] - ETA: 0s
     67100672/102967424 [==================>...........] - ETA: 0s
     69296128/102967424 [===================>..........] - ETA: 0s
     75489280/102967424 [====================>.........] - ETA: 0s
     82124800/102967424 [======================>.......] - ETA: 0s
     83877888/102967424 [=======================>......] - ETA: 0s
     90521600/102967424 [=========================>....] - ETA: 0s
     92266496/102967424 [=========================>....] - ETA: 0s
     98910208/102967424 [===========================>..] - ETA: 0s
    100646912/102967424 [============================>.] - ETA: 0s
    102850560/102967424 [============================>.] - ETA: 0s
    102967424/102967424 [==============================] - 2s 0us/step
 
 
 
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
index e9ef300272..653fc908a1 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
@@ -437,7 +437,7 @@ Execute on TVM
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      16.3026      16.2233      17.3629      15.5300       0.4568                  
+      15.0815      15.0722      15.2434      14.8671       0.1197                  
 
 
 
diff --git a/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt b/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
index fa1f2e80d8..13db4d94ea 100644
--- a/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
@@ -130,7 +130,7 @@ Load pre-trained maskrcnn from torchvision and do tracing
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=MaskRCNN_ResNet50_FPN_Weights.COCO_V1`. You can also use `weights=MaskRCNN_ResNet50_FPN_Weights.DEFAULT` to get the most up-to-date weights.
       warnings.warn(msg)
     Downloading: "https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth" to /workspace/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
-
      0%|          | 0.00/170M [00:00<?, ?B/s]
      4%|3         | 6.30M/170M [00:00<00:02, 66.1MB/s]
      7%|7         | 12.6M/170M [00:00<00:03, 45.0MB/s]
     10%|#         | 17.3M/170M [00:00<00:04, 32.2MB/s]
     14%|#4        | 24.0M/170M [00:00<00:04, 37.9MB/s]
     18%|#8        | 31.3M/170M [00:00<00:03, 47.6MB/s]
     21%|##1       | 36.4M/170M [00:00<00:03, 35.9MB/s]
     24%|##3       | 40.6M/170M [00:01<00:03, 34.5MB/s]
     28%|##8       | 48.0M/170M [00:01<00:03, 39.0MB/s]
     31%|###1      | 53.2M/170M [00:01<00:02, 41.5MB/s]
     34%|###3      | 57.4M/170M [00:01<00:03, 37.4MB/s]
     38%|###7      | 64.0M/170M [00:01<00:02, 38.8MB/s]
     42%|####2     | 72.0M/170M [00:01<00:02, 40.8MB/s]
     46%|####6     | 78.6M/170M [00:02<00:02, 46.7MB/s]
     49%|####9     | 83.4M/170M [00:02<00:01, 46.8MB/s]
     52%|#####1    | 88.0M/170M [00:02<00:02, 30.6MB/s]
     57%|#####6    | 96.0M/170M [00:02<00:02, 34.9MB/s]
     61%|######1   | 104M/170M [00:02<00:01, 38.0MB/s]
     66%|######5   | 112M/170M [00:02<00:01, 39.8MB/s]
     71%|#######   | 120M/170M [00:03<00:01, 45.6MB/s]
     74%|#######4  | 126M/170M [00:03<00:01, 44.6MB/s]
     77%|#######7  | 131M/170M [00:03<00:00, 45.8MB/s]
     80%|########  | 136M/170M [00:03<00:00, 39.1MB/s]
     85%|########4 | 144M/170M [00:03<00:00, 41.2MB/s]
     88%|########8 | 150M/170M [00:03<00:00, 43.5MB/s]
     91%|#########1| 155M/170M [00:04<00:00, 41.7MB/s]
     94%|#########4| 160M/170M [00:04<00:00, 34.7MB/s]
     98%|#########7| 166M/170M [00:04<00:00, 41.1MB/s]
    100%|##########| 170M/170M [00:04<00:00, 40.3MB/s]
+
      0%|          | 0.00/170M [00:00<?, ?B/s]
      4%|3         | 6.30M/170M [00:00<00:03, 48.7MB/s]
      6%|6         | 11.0M/170M [00:00<00:03, 45.6MB/s]
      9%|9         | 16.0M/170M [00:00<00:04, 33.2MB/s]
     13%|#3        | 22.3M/170M [00:00<00:04, 34.5MB/s]
     15%|#5        | 25.8M/170M [00:00<00:04, 31.5MB/s]
     18%|#7        | 30.3M/170M [00:00<00:04, 30.6MB/s]
     20%|#9        | 33.3M/170M [00:01<00:04, 30.4MB/s]
     24%|##3       | 40.0M/170M [00:01<00:03, 36.1MB/s]
     27%|##7       | 46.3M/170M [00:01<00:03, 34.2MB/s]
     29%|##9       | 49.6M/170M [00:01<00:03, 33.0MB/s]
     33%|###2      | 56.0M/170M [00:01<00:03, 35.8MB/s]
     38%|###7      | 64.0M/170M [00:01<00:02, 38.1MB/s]
     42%|####2     | 72.0M/170M [00:02<00:02, 41.7MB/s]
     46%|####6     | 78.3M/170M [00:02<00:02, 44.5MB/s]
     49%|####8     | 82.6M/170M [00:02<00:02, 43.2MB/s]
     51%|#####1    | 86.7M/170M [00:02<00:02, 35.3MB/s]
     53%|#####3    | 90.2M/170M [00:02<00:02, 34.1MB/s]
     57%|#####6    | 96.0M/170M [00:02<00:02, 32.3MB/s]
     61%|######1   | 104M/170M [00:02<00:01, 40.4MB/s] 
     66%|######5   | 112M/170M [00:03<00:01, 43.8MB/s]
     70%|######9   | 118M/170M [00:03<00:01, 46.6MB/s]
     72%|#######2  | 123M/170M [00:03<00:01, 46.5MB/s]
     75%|#######5  | 127M/170M [00:03<00:00, 44.8MB/s]
     78%|#######7  | 132M/170M [00:03<00:00, 42.1MB/s]
     80%|########  | 136M/170M [00:03<00:00, 42.6MB/s]
     82%|########2 | 140M/170M [00:03<00:00, 38.7MB/s]
     85%|########4 | 144M/170M [00:03<00:00, 39.0MB/s]
     88%|########8 | 150M/170M [00:04<00:00, 42.6MB/s]
     91%|######### | 154M/170M [00:04<00:00, 41.7MB/s]
     93%|#########3| 158M/170M [00:04<00:00, 33.6MB/s]
     95%|#########5| 162M/170M [00:04<00:00, 29.1MB/s]
     98%|#########7| 166M/170M [00:04<00:00, 33.1MB/s]
    100%|##########| 170M/170M [00:04<00:00, 38.0MB/s]
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/nn/functional.py:3912: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
       (torch.floor((input.size(i + 2).float() * torch.tensor(scale_factors[i], dtype=torch.float32)).float()))
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/ops/boxes.py:157: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
@@ -295,7 +295,7 @@ Get boxes with score larger than 0.9
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 3 minutes  39.426 seconds)
+   **Total running time of the script:** ( 3 minutes  40.022 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_object_detection_pytorch.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt b/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
index 4ef55d96e8..6f1da33fd7 100644
--- a/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
@@ -227,7 +227,7 @@ training. Other models require a full post training calibration.
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=MobileNet_V2_Weights.IMAGENET1K_V1`. You can also use `weights=MobileNet_V2_Weights.DEFAULT` to get the most up-to-date weights.
       warnings.warn(msg)
     Downloading: "https://download.pytorch.org/models/mobilenet_v2-b0353104.pth" to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth
-
      0%|          | 0.00/13.6M [00:00<?, ?B/s]
     59%|#####8    | 7.99M/13.6M [00:00<00:00, 50.9MB/s]
     95%|#########4| 12.9M/13.6M [00:00<00:00, 32.6MB/s]
    100%|##########| 13.6M/13.6M [00:00<00:00, 36.6MB/s]
+
      0%|          | 0.00/13.6M [00:00<?, ?B/s]
     47%|####6     | 6.30M/13.6M [00:00<00:00, 46.9MB/s]
     79%|#######9  | 10.8M/13.6M [00:00<00:00, 35.2MB/s]
    100%|##########| 13.6M/13.6M [00:00<00:00, 29.7MB/s]
 
 
 
@@ -409,7 +409,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      89.1665      89.1415      92.0607      88.7729       0.3653                  
+      88.9931      88.9394      89.8859      88.7118       0.2072                  
 
 
 
@@ -457,7 +457,7 @@ TODO
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  29.969 seconds)
+   **Total running time of the script:** ( 1 minutes  29.522 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_prequantized.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt b/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
index ead05d957d..7868cbacf1 100644
--- a/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
@@ -423,7 +423,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      111.0840     111.0420     113.8326     110.4673      0.3670                  
+      110.4352     110.3864     113.3146     109.5863      0.4803                  
 
 
 
diff --git a/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt b/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
index 5521963036..5be96a38f2 100644
--- a/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
@@ -257,7 +257,7 @@ We create a Relay VM to build and execute the model.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 2 minutes  28.603 seconds)
+   **Total running time of the script:** ( 2 minutes  29.162 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_quantized.py:
diff --git a/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt b/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
index f99b7e38dd..883b002738 100644
--- a/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
@@ -5,26 +5,26 @@
 
 Computation times
 =================
-**12:38.494** total execution time for **how_to_deploy_models** files:
+**12:41.137** total execution time for **how_to_deploy_models** files:
 
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``) | 03:39.426 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``) | 03:40.022 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_quantized.py` (``deploy_quantized.py``)                               | 02:28.603 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_quantized.py` (``deploy_quantized.py``)                               | 02:29.162 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized.py` (``deploy_prequantized.py``)                         | 01:29.969 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized.py` (``deploy_prequantized.py``)                         | 01:29.522 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno.py` (``deploy_model_on_adreno.py``)                   | 01:21.495 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno.py` (``deploy_model_on_adreno.py``)                   | 01:21.201 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)           | 00:55.690 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)           | 00:56.434 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_android.py` (``deploy_model_on_android.py``)                 | 00:51.531 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno_tvmc.py` (``deploy_model_on_adreno_tvmc.py``)         | 00:51.462 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno_tvmc.py` (``deploy_model_on_adreno_tvmc.py``)         | 00:49.526 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_android.py` (``deploy_model_on_android.py``)                 | 00:51.440 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_nano.py` (``deploy_model_on_nano.py``)                       | 00:31.402 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_nano.py` (``deploy_model_on_nano.py``)                       | 00:31.262 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)                       | 00:30.846 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)                       | 00:30.625 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_deploy_models_deploy_sparse.py` (``deploy_sparse.py``)                                     | 00:00.006 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt b/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
index 585341de58..de83857965 100644
--- a/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
@@ -463,7 +463,7 @@ First let us define two helper functions to get the mobilenet model and a cat im
 
  .. code-block:: none
 
-    Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip67c5a67a-b48a-4a9b-bc63-01fe0c94402b from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
+    Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip677c63cc-09ce-480e-bc10-c04ae7c19ab0 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
 
 
 
diff --git a/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt b/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
index 2fb0e86628..16af6cd60f 100644
--- a/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
@@ -5,14 +5,14 @@
 
 Computation times
 =================
-**01:00.749** total execution time for **how_to_extend_tvm** files:
+**00:59.510** total execution time for **how_to_extend_tvm** files:
 
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``) | 00:56.688 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``) | 00:55.538 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_use_pass_instrument.py` (``use_pass_instrument.py``)           | 00:02.827 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_use_pass_instrument.py` (``use_pass_instrument.py``)           | 00:02.754 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_use_pass_infra.py` (``use_pass_infra.py``)                     | 00:01.227 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_use_pass_infra.py` (``use_pass_infra.py``)                     | 00:01.211 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_low_level_custom_pass.py` (``low_level_custom_pass.py``)       | 00:00.008 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_low_level_custom_pass.py` (``low_level_custom_pass.py``)       | 00:00.007 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt b/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
index 2936d988a2..b5b2ce3417 100644
--- a/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
@@ -220,10 +220,10 @@ profile the execution time of each passes.
  .. code-block:: none
 
     Printing results of timing profile...
-    InferType: 23775us [23775us] (48.16%; 48.16%)
-    FoldScaleAxis: 25596us [10us] (51.84%; 51.84%)
-            FoldConstant: 25586us [1894us] (51.82%; 99.96%)
-                    InferType: 23691us [23691us] (47.99%; 92.60%)
+    InferType: 23807us [23807us] (48.39%; 48.39%)
+    FoldScaleAxis: 25395us [9us] (51.61%; 51.61%)
+            FoldConstant: 25386us [1857us] (51.60%; 99.97%)
+                    InferType: 23529us [23529us] (47.82%; 92.68%)
 
 
 
@@ -262,10 +262,10 @@ Refer to following sections and :py:func:`tvm.instrument.pass_instrument` for th
  .. code-block:: none
 
     Printing results of timing profile...
-    InferType: 23734us [23734us] (48.04%; 48.04%)
-    FoldScaleAxis: 25676us [8us] (51.96%; 51.96%)
-            FoldConstant: 25667us [1964us] (51.95%; 99.97%)
-                    InferType: 23703us [23703us] (47.97%; 92.35%)
+    InferType: 23673us [23673us] (47.75%; 47.75%)
+    FoldScaleAxis: 25907us [7us] (52.25%; 52.25%)
+            FoldConstant: 25900us [1875us] (52.24%; 99.97%)
+                    InferType: 24025us [24025us] (48.46%; 92.76%)
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt b/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
index 576c5aa4fb..1615329628 100644
--- a/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
@@ -331,7 +331,7 @@ latency of convolution.
 
  .. code-block:: none
 
-    Convolution: 36.925407 ms
+    Convolution: 53.573184 ms
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt b/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
index d41d1b46f5..67db91e858 100644
--- a/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
@@ -598,7 +598,7 @@ be able to run on our build server
 
  .. code-block:: none
 
-    conv2d with tensor core: 11.709709 ms
+    conv2d with tensor core: 12.276105 ms
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt b/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
index 054958f94f..92a9afc682 100644
--- a/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
@@ -134,8 +134,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
 
  .. code-block:: none
 
-    Numpy running time: 0.019280
-    Baseline: 3.352708
+    Numpy running time: 0.019140
+    Baseline: 3.501454
 
 
 
@@ -227,7 +227,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
 
  .. code-block:: none
 
-    Opt1: 0.320091
+    Opt1: 0.300509
 
 
 
@@ -318,7 +318,7 @@ In this tutorial, we chose to vectorize the inner loop row data since it is cach
 
  .. code-block:: none
 
-    Opt2: 0.314908
+    Opt2: 0.281805
 
 
 
@@ -406,7 +406,7 @@ the access pattern for A matrix is more cache friendly.
 
  .. code-block:: none
 
-    Opt3: 0.129901
+    Opt3: 0.118494
 
 
 
@@ -523,7 +523,7 @@ flattening.
 
  .. code-block:: none
 
-    Opt4: 0.108122
+    Opt4: 0.107271
 
 
 
@@ -635,7 +635,7 @@ write to C when all the block results are ready.
 
  .. code-block:: none
 
-    Opt5: 0.112490
+    Opt5: 0.111860
 
 
 
@@ -748,7 +748,7 @@ Furthermore, we can also utilize multi-core processors to do the thread-level pa
 
  .. code-block:: none
 
-    Opt6: 0.134238
+    Opt6: 0.133092
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt b/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
index f542d712d5..932033397c 100644
--- a/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
@@ -5,12 +5,12 @@
 
 Computation times
 =================
-**00:35.006** total execution time for **how_to_optimize_operators** files:
+**00:34.605** total execution time for **how_to_optimize_operators** files:
 
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_gemm.py` (``opt_gemm.py``)                       | 00:31.760 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_gemm.py` (``opt_gemm.py``)                       | 00:31.360 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``) | 00:02.017 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``) | 00:02.008 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_cuda.py` (``opt_conv_cuda.py``)             | 00:01.228 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_cuda.py` (``opt_conv_cuda.py``)             | 00:01.237 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
index e5c7fa9266..cff685c84b 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
@@ -5,18 +5,18 @@
 
 Computation times
 =================
-**03:55.379** total execution time for **how_to_tune_with_autoscheduler** files:
+**03:54.232** total execution time for **how_to_tune_with_autoscheduler** files:
 
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_x86.py` (``tune_network_x86.py``)             | 01:42.218 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_x86.py` (``tune_network_x86.py``)             | 01:42.650 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_cuda.py` (``tune_network_cuda.py``)           | 01:20.710 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_cuda.py` (``tune_network_cuda.py``)           | 01:20.138 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``) | 00:18.611 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``) | 00:17.672 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_mali.py` (``tune_network_mali.py``)           | 00:16.926 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_mali.py` (``tune_network_mali.py``)           | 00:16.974 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_arm.py` (``tune_network_arm.py``)             | 00:16.806 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_arm.py` (``tune_network_arm.py``)             | 00:16.695 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_sparse_x86.py` (``tune_sparse_x86.py``)               | 00:00.108 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_sparse_x86.py` (``tune_sparse_x86.py``)               | 00:00.103 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
index c3ce20eaeb..e3069ab65f 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
@@ -767,7 +767,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 0.359 ms
+    Execution time of this operator: 0.352 ms
 
 
 
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
index 84276ff2e5..8f574fa67e 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
@@ -647,7 +647,7 @@ so we can read the log file and load the best schedules.
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-       8.1234       8.1193       8.1373       8.1136       0.0101                  
+       8.1281       8.1309       8.1393       8.1140       0.0105                  
 
 
 
@@ -674,7 +674,7 @@ Other Tips
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  20.710 seconds)
+   **Total running time of the script:** ( 1 minutes  20.138 seconds)
 
 
 .. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_network_cuda.py:
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
index bb6312481a..55357fc30f 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
@@ -666,7 +666,7 @@ so we can read the log file and load the best schedules.
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      769.1704     770.5453     772.0932     764.8725      3.1040                  
+      766.4102     766.4307     766.9942     765.8056      0.4855                  
 
 
 
@@ -693,7 +693,7 @@ Other Tips
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  42.218 seconds)
+   **Total running time of the script:** ( 1 minutes  42.650 seconds)
 
 
 .. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_network_x86.py:
diff --git a/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt b/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
index ddaa830d1a..a91bebf2f5 100644
--- a/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
@@ -5,16 +5,16 @@
 
 Computation times
 =================
-**00:23.681** total execution time for **how_to_tune_with_autotvm** files:
+**00:23.436** total execution time for **how_to_tune_with_autotvm** files:
 
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)           | 00:23.638 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)           | 00:23.397 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``)               | 00:00.027 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``)               | 00:00.023 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_cuda.py` (``tune_relay_cuda.py``)             | 00:00.006 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_mobile_gpu.py` (``tune_relay_mobile_gpu.py``) | 00:00.005 | 0.0 MB |
-+--------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_arm.py` (``tune_relay_arm.py``)               | 00:00.005 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
+| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_mobile_gpu.py` (``tune_relay_mobile_gpu.py``) | 00:00.005 | 0.0 MB |
++--------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt b/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
index 4ed95b6053..ed4e58ab53 100644
--- a/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
@@ -326,7 +326,7 @@ and measure running time.
 
     Best config:
     ,None
-    Time cost of this operator: 0.037110
+    Time cost of this operator: 0.037163
 
 
 
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
index 2ddefc68a9..404980fc90 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
@@ -360,10 +360,10 @@ Timing the untuned program
     ########## Build without Autotuning ##########
     Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)  
     ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------  
-    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  305.2     98.734   (1, 2, 10, 10, 3)  2       1        [305.2]           
-    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.955     0.956    (1, 6, 10, 10)     1       1        [2.955]           
-    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.958     0.31     (1, 1, 10, 10, 3)  1       1        [0.958]           
-    Total_time                                    -                                             309.112   -        -                  -       -        -                 
+    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  329.2     98.837   (1, 2, 10, 10, 3)  2       1        [329.2]           
+    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.916     0.875    (1, 6, 10, 10)     1       1        [2.916]           
+    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.959     0.288    (1, 1, 10, 10, 3)  1       1        [0.959]           
+    Total_time                                    -                                             333.074   -        -                  -       -        -                 
 
 
 
@@ -428,10 +428,10 @@ Timing the tuned program
     ########## Build with Autotuning ##########
     Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)  
     ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------  
-    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  133.8     97.985   (1, 6, 10, 10, 1)  2       1        [133.8]           
-    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.772     1.297    (1, 6, 10, 10)     1       1        [1.772]           
-    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.98      0.718    (1, 1, 10, 10, 3)  1       1        [0.98]            
-    Total_time                                    -                                             136.552   -        -                  -       -        -                 
+    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  102.7     97.467   (1, 6, 10, 10, 1)  2       1        [102.7]           
+    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.81      1.718    (1, 6, 10, 10)     1       1        [1.81]            
+    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.859     0.816    (1, 3, 10, 10, 1)  1       1        [0.859]           
+    Total_time                                    -                                             105.369   -        -                  -       -        -                 
 
 
 
@@ -439,7 +439,7 @@ Timing the tuned program
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  32.513 seconds)
+   **Total running time of the script:** ( 1 minutes  28.305 seconds)
 
 
 .. _sphx_glr_download_how_to_work_with_microtvm_micro_autotune.py:
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
index 56aaa5545a..6e9b993507 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
@@ -118,7 +118,7 @@ download a cat image and preprocess it to use as the model input.
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/ao/quantization/utils.py:310: UserWarning: must run observer before calling calculate_qparams. Returning default values.
       warnings.warn(
     Downloading: "https://download.pytorch.org/models/quantized/mobilenet_v2_qnnpack_37f702c5.pth" to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2_qnnpack_37f702c5.pth
-
      0%|          | 0.00/3.42M [00:00<?, ?B/s]
     61%|######    | 2.09M/3.42M [00:00<00:00, 14.8MB/s]
    100%|##########| 3.42M/3.42M [00:00<00:00, 22.7MB/s]
+
      0%|          | 0.00/3.42M [00:00<?, ?B/s]
     61%|######    | 2.09M/3.42M [00:00<00:00, 16.2MB/s]
    100%|##########| 3.42M/3.42M [00:00<00:00, 25.5MB/s]
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/_utils.py:314: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
       device=storage.device,
     /workspace/python/tvm/relay/frontend/pytorch_utils.py:47: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
@@ -326,7 +326,7 @@ Look up prediction top 1 index in 1000 class synset.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  30.029 seconds)
+   **Total running time of the script:** ( 1 minutes  29.828 seconds)
 
 
 .. _sphx_glr_download_how_to_work_with_microtvm_micro_pytorch.py:
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
index ae90baf666..84cca967c3 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
@@ -217,7 +217,7 @@ take about **2 minutes** to download the Stanford Cars, while COCO 2017 validati
  .. code-block:: none
 
 
-    '/tmp/tmptmuid4lv/images/random'
+    '/tmp/tmp7eauq5ey/images/random'
 
 
 
@@ -317,8 +317,8 @@ objects to other stuff? We can display some examples from our datasets using ``m
 
  .. code-block:: none
 
-    /tmp/tmptmuid4lv/images/target contains 8144 images
-    /tmp/tmptmuid4lv/images/random contains 5000 images
+    /tmp/tmp7eauq5ey/images/target contains 8144 images
+    /tmp/tmp7eauq5ey/images/random contains 5000 images
 
 
 
@@ -493,13 +493,13 @@ the time on our validation set).
  .. code-block:: none
 
     Epoch 1/3
-    328/328 - 41s - loss: 0.2226 - accuracy: 0.9223 - val_loss: 0.1225 - val_accuracy: 0.9588 - 41s/epoch - 125ms/step
+    328/328 - 41s - loss: 0.2287 - accuracy: 0.9242 - val_loss: 0.1226 - val_accuracy: 0.9573 - 41s/epoch - 126ms/step
     Epoch 2/3
-    328/328 - 36s - loss: 0.1085 - accuracy: 0.9600 - val_loss: 0.1086 - val_accuracy: 0.9653 - 36s/epoch - 109ms/step
+    328/328 - 36s - loss: 0.0981 - accuracy: 0.9632 - val_loss: 0.1167 - val_accuracy: 0.9596 - 36s/epoch - 108ms/step
     Epoch 3/3
-    328/328 - 36s - loss: 0.0684 - accuracy: 0.9754 - val_loss: 0.1325 - val_accuracy: 0.9539 - 36s/epoch - 109ms/step
+    328/328 - 36s - loss: 0.0708 - accuracy: 0.9736 - val_loss: 0.0973 - val_accuracy: 0.9683 - 36s/epoch - 108ms/step
 
-    <keras.callbacks.History object at 0x7f6c32189eb0>
+    <keras.callbacks.History object at 0x7f4c4c770880>
 
 
 
@@ -860,7 +860,7 @@ Arduino tutorial for how to do that `on GitHub <https://github.com/guberti/tvm-a
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 5 minutes  1.624 seconds)
+   **Total running time of the script:** ( 4 minutes  39.709 seconds)
 
 
 .. _sphx_glr_download_how_to_work_with_microtvm_micro_train.py:
diff --git a/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
index df3bb695d1..92e0eac8fc 100644
--- a/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
@@ -5,20 +5,20 @@
 
 Computation times
 =================
-**08:35.615** total execution time for **how_to_work_with_microtvm** files:
+**08:08.545** total execution time for **how_to_work_with_microtvm** files:
 
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_train.py` (``micro_train.py``)           | 05:01.624 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_train.py` (``micro_train.py``)           | 04:39.709 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_autotune.py` (``micro_autotune.py``)     | 01:32.513 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_pytorch.py` (``micro_pytorch.py``)       | 01:29.828 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_pytorch.py` (``micro_pytorch.py``)       | 01:30.029 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_autotune.py` (``micro_autotune.py``)     | 01:28.305 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_aot.py` (``micro_aot.py``)               | 00:12.626 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_aot.py` (``micro_aot.py``)               | 00:12.630 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_tflite.py` (``micro_tflite.py``)         | 00:09.947 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_tflite.py` (``micro_tflite.py``)         | 00:09.242 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_custom_ide.py` (``micro_custom_ide.py``) | 00:08.876 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_custom_ide.py` (``micro_custom_ide.py``) | 00:08.829 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_work_with_microtvm_micro_ethosu.py` (``micro_ethosu.py``)         | 00:00.000 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
index 24a709a7b5..2f5817ef1e 100644
--- a/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
@@ -5,14 +5,14 @@
 
 Computation times
 =================
-**00:41.895** total execution time for **how_to_work_with_relay** files:
+**00:40.986** total execution time for **how_to_work_with_relay** files:
 
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_using_pipeline_executor.py` (``using_pipeline_executor.py``) | 00:36.294 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_using_pipeline_executor.py` (``using_pipeline_executor.py``) | 00:35.641 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_using_external_lib.py` (``using_external_lib.py``)           | 00:03.368 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_using_external_lib.py` (``using_external_lib.py``)           | 00:03.346 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_build_gcn.py` (``build_gcn.py``)                             | 00:02.226 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_build_gcn.py` (``build_gcn.py``)                             | 00:01.994 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_work_with_relay_using_relay_viz.py` (``using_relay_viz.py``)                 | 00:00.006 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt b/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
index 9536d75a5d..a98898b2d1 100644
--- a/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
@@ -278,7 +278,7 @@ The following example customizes CUDA lowering rule for :code:`exp`.
  .. code-block:: none
 
 
-    <function my_cuda_math_rule at 0x7f6831414940>
+    <function my_cuda_math_rule at 0x7f4ac83cb700>
 
 
 
diff --git a/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
index f7ce231748..9f599dd1da 100644
--- a/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
@@ -5,22 +5,22 @@
 
 Computation times
 =================
-**00:07.558** total execution time for **how_to_work_with_schedules** files:
+**00:08.984** total execution time for **how_to_work_with_schedules** files:
 
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_intrin_math.py` (``intrin_math.py``)                 | 00:03.902 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_intrin_math.py` (``intrin_math.py``)                 | 00:05.617 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tensorize.py` (``tensorize.py``)                     | 00:01.828 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tensorize.py` (``tensorize.py``)                     | 00:01.543 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_reduction.py` (``reduction.py``)                     | 00:00.767 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_reduction.py` (``reduction.py``)                     | 00:00.770 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_scan.py` (``scan.py``)                               | 00:00.763 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_scan.py` (``scan.py``)                               | 00:00.759 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_extern_op.py` (``extern_op.py``)                     | 00:00.118 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_extern_op.py` (``extern_op.py``)                     | 00:00.119 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tedd.py` (``tedd.py``)                               | 00:00.084 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tedd.py` (``tedd.py``)                               | 00:00.085 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_schedule_primitives.py` (``schedule_primitives.py``) | 00:00.062 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_schedule_primitives.py` (``schedule_primitives.py``) | 00:00.060 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tuple_inputs.py` (``tuple_inputs.py``)               | 00:00.033 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tuple_inputs.py` (``tuple_inputs.py``)               | 00:00.031 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
index c75efa49af..ba39889f54 100644
--- a/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:36.785** total execution time for **topic_vta_tutorials_autotvm** files:
+**00:36.375** total execution time for **topic_vta_tutorials_autotvm** files:
 
 +---------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``) | 00:36.778 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``) | 00:36.367 | 0.0 MB |
 +---------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_alu_vta.py` (``tune_alu_vta.py``)     | 00:00.008 | 0.0 MB |
 +---------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
index c17e921cce..be6a64e0a8 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
@@ -293,7 +293,7 @@ The compilation steps are:
       warnings.warn(
     /workspace/vta/tutorials/frontend/deploy_classification.py:212: DeprecationWarning: legacy graph executor behavior of producing json / lib / params will be removed in the next release. Please see documents of tvm.contrib.graph_executor.GraphModule for the  new recommended usage.
       graph, lib, params = relay.build(
-    resnet18_v1 inference graph built in 39.19s!
+    resnet18_v1 inference graph built in 38.68s!
 
 
 
@@ -416,7 +416,7 @@ and an input test image.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  6.796 seconds)
+   **Total running time of the script:** ( 1 minutes  6.044 seconds)
 
 
 .. _sphx_glr_download_topic_vta_tutorials_frontend_deploy_classification.py:
diff --git a/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
index 3a1b95b048..c1f4386bcf 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
@@ -337,7 +337,7 @@ The compilation steps are:
 
     /workspace/python/tvm/relay/build_module.py:345: DeprecationWarning: Please use input parameter mod (tvm.IRModule) instead of deprecated parameter mod (tvm.relay.function.Function)
       warnings.warn(
-    yolov3-tiny inference graph built in 26.98s!
+    yolov3-tiny inference graph built in 26.46s!
 
 
 
@@ -447,7 +447,7 @@ Download test image
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  10.961 seconds)
+   **Total running time of the script:** ( 1 minutes  9.828 seconds)
 
 
 .. _sphx_glr_download_topic_vta_tutorials_frontend_deploy_detection.py:
diff --git a/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
index c188d83a3f..fcb506d29c 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**02:17.757** total execution time for **topic_vta_tutorials_frontend** files:
+**02:15.872** total execution time for **topic_vta_tutorials_frontend** files:
 
 +------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_detection.py` (``deploy_detection.py``)           | 01:10.961 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_detection.py` (``deploy_detection.py``)           | 01:09.828 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``) | 01:06.796 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``) | 01:06.044 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
index 0e93280a0f..73411f6cba 100644
--- a/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:03.461** total execution time for **topic_vta_tutorials_optimize** files:
+**00:03.443** total execution time for **topic_vta_tutorials_optimize** files:
 
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)         | 00:02.905 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)         | 00:02.897 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``) | 00:00.556 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``) | 00:00.546 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
index 87261c65d6..2a6049b12a 100644
--- a/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:00.952** total execution time for **topic_vta_tutorials** files:
+**00:00.924** total execution time for **topic_vta_tutorials** files:
 
 +---------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``) | 00:00.494 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``) | 00:00.477 | 0.0 MB |
 +---------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_vta_get_started.py` (``vta_get_started.py``) | 00:00.458 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_vta_get_started.py` (``vta_get_started.py``) | 00:00.448 | 0.0 MB |
 +---------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt b/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
index 533f1c8b7e..0d0031e349 100644
--- a/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
+++ b/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
@@ -207,13 +207,6 @@ trials, we can load the best schedule from the log file and apply it.
 
 
 
-.. rst-class:: sphx-glr-script-out
-
- .. code-block:: none
-
-    *E
-
-
 
 
 
@@ -325,7 +318,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 96.087 ms
+    Execution time of this operator: 93.225 ms
 
 
 
@@ -423,7 +416,7 @@ resume the status and do more 5 trials.
  .. code-block:: none
 
     Resume search:
-    *E
+
 
 
 
@@ -441,7 +434,7 @@ operations.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  57.137 seconds)
+   **Total running time of the script:** ( 1 minutes  39.181 seconds)
 
 
 .. _sphx_glr_download_tutorial_auto_scheduler_matmul_x86.py:
diff --git a/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt b/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
index 12731edb88..145dfec96f 100644
--- a/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
+++ b/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
@@ -454,16 +454,198 @@ reduce variance, we take 5 measurements and average them.
     waiting for device...
     device available
     Get devices for measurement successfully!
-    No: 1   GFLOPS: 0.50/0.50       result: MeasureResult(costs=(0.5332579814,), error_no=MeasureErrorNo.NO_ERROR, all_cost=8.813384771347046, timestamp=1690328353.4487314)        [('tile_y', [-1, 64]), ('tile_x', [-1, 1])],None,6
-    No: 2   GFLOPS: 1.01/1.01       result: MeasureResult(costs=(0.26523001720000006,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.5062477588653564, timestamp=1690328357.9536412)        [('tile_y', [-1, 64]), ('tile_x', [-1, 2])],None,16
-    No: 3   GFLOPS: 7.82/7.82       result: MeasureResult(costs=(0.0343300822,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.8198776245117188, timestamp=1690328358.768012)        [('tile_y', [-1, 512]), ('tile_x', [-1, 16])],None,49
-    No: 4   GFLOPS: 2.03/7.82       result: MeasureResult(costs=(0.1321095892,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.377077579498291, timestamp=1690328361.144421) [('tile_y', [-1, 256]), ('tile_x', [-1, 4])],None,28
-    No: 5   GFLOPS: 14.33/14.33     result: MeasureResult(costs=(0.0187313796,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6215758323669434, timestamp=1690328361.8819606)       [('tile_y', [-1, 32]), ('tile_x', [-1, 64])],None,65
-    No: 6   GFLOPS: 11.09/14.33     result: MeasureResult(costs=(0.0242109118,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.659132719039917, timestamp=1690328362.5401225)        [('tile_y', [-1, 4]), ('tile_x', [-1, 512])],None,92
-    No: 7   GFLOPS: 2.03/14.33      result: MeasureResult(costs=(0.1323101692,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.3927040100097656, timestamp=1690328364.918409)        [('tile_y', [-1, 128]), ('tile_x', [-1, 4])],None,27
-    No: 8   GFLOPS: 0.47/14.33      result: MeasureResult(costs=(0.5676618674,), error_no=MeasureErrorNo.NO_ERROR, all_cost=9.340614080429077, timestamp=1690328374.27059)  [('tile_y', [-1, 512]), ('tile_x', [-1, 1])],None,9
-    No: 9   GFLOPS: 3.21/14.33      result: MeasureResult(costs=(0.08349734959999999,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.5565173625946045, timestamp=1690328375.9469516)        [('tile_y', [-1, 2]), ('tile_x', [-1, 2])],None,11
-    No: 10  GFLOPS: 13.91/14.33     result: MeasureResult(costs=(0.019292975799999997,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.5411303043365479, timestamp=1690328376.5198061)       [('tile_y', [-1, 16]), ('tile_x', [-1, 32])],None,54
+    No: 1   GFLOPS: 0.00/0.00       result: Traceback (most recent call last):
+      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 712, in __call__
+        yield remote, remote.load_module(os.path.split(build_result.filename)[1])
+      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 679, in run_through_rpc
+        costs = time_f(*args).results
+      File "/workspace/python/tvm/runtime/module.py", line 398, in evaluator
+        blob = feval(*args)
+      File "tvm/_ffi/_cython/./packed_func.pxi", line 331, in tvm._ffi._cy3.core.PackedFuncBase.__call__
+      File "tvm/_ffi/_cython/./packed_func.pxi", line 262, in tvm._ffi._cy3.core.FuncCall
+      File "tvm/_ffi/_cython/./packed_func.pxi", line 251, in tvm._ffi._cy3.core.FuncCall3
+      File "tvm/_ffi/_cython/./base.pxi", line 181, in tvm._ffi._cy3.core.CHECK_CALL
+    tvm.error.RPCSessionTimeoutError: Traceback (most recent call last):
+      41: 0xffffffffffffffff
+      40: _start
+      39: __libc_start_main
+      38: 0x00007fbfb434bd8f
+      37: Py_BytesMain
+      36: Py_RunMain
+      35: 0x00000000005f3021
+      34: PyObject_Call
+      33: _PyFunction_Vectorcall
+      32: _PyEval_EvalCodeWithName
+      31: _PyEval_EvalFrameDefault
+      30: _PyFunction_Vectorcall
+      29: _PyEval_EvalCodeWithName
+      28: _PyEval_EvalFrameDefault
+      27: 0x000000000051546f
+      26: 0x00000000005dabd0
+      25: PyEval_EvalCode
+      24: _PyEval_EvalCodeWithName
+      23: _PyEval_EvalFrameDefault
+      22: _PyFunction_Vectorcall
+      21: _PyEval_EvalCodeWithName
+      20: _PyEval_EvalFrameDefault
+      19: PyObject_Call
+      18: _PyFunction_Vectorcall
+      17: _PyEval_EvalCodeWithName
+      16: _PyEval_EvalFrameDefault
+      15: PyObject_Call
+      14: _PyFunction_Vectorcall
+      13: _PyEval_EvalCodeWithName
+      12: _PyEval_EvalFrameDefault
+      11: PyObject_Call
+      10: __pyx_pw_3tvm_4_ffi_4_cy3_4core_14PackedFuncBase_5__call__
+            at tvm/_ffi/_cython/core.cpp:8724
+      9: __pyx_pf_3tvm_4_ffi_4_cy3_4core_14PackedFuncBase_4__call__
+            at tvm/_ffi/_cython/core.cpp:8760
+      8: __pyx_f_3tvm_4_ffi_4_cy3_4core_FuncCall
+            at tvm/_ffi/_cython/core.cpp:7821
+      7: __pyx_f_3tvm_4_ffi_4_cy3_4core_FuncCall3
+            at tvm/_ffi/_cython/core.cpp:7699
+      6: tvm::runtime::RPCWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+            at /workspace/src/runtime/rpc/rpc_module.cc:129
+      5: tvm::runtime::RPCClientSession::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)> const&)
+            at /workspace/src/runtime/rpc/rpc_endpoint.cc:1016
+      4: tvm::runtime::RPCEndpoint::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)>)
+            at /workspace/src/runtime/rpc/rpc_endpoint.cc:807
+      3: tvm::runtime::RPCEndpoint::HandleUntilReturnEvent(bool, std::function<void (tvm::runtime::TVMArgs)>)
+            at /workspace/src/runtime/rpc/rpc_endpoint.cc:651
+      2: tvm::runtime::RPCEndpoint::EventHandler::HandleNextEvent(bool, bool, std::function<void (tvm::runtime::TVMArgs)>)
+            at /workspace/src/runtime/rpc/rpc_endpoint.cc:136
+      1: tvm::runtime::RPCEndpoint::EventHandler::HandleProcessPacket(std::function<void (tvm::runtime::TVMArgs)>)
+            at /workspace/src/runtime/rpc/rpc_endpoint.cc:316
+      0: tvm::runtime::RPCEndpoint::EventHandler::HandleReturn(tvm::runtime::RPCCode, std::function<void (tvm::runtime::TVMArgs)>)
+            at /workspace/src/runtime/rpc/rpc_endpoint.cc:380
+      File "/workspace/src/runtime/rpc/rpc_endpoint.cc", line 380
+    RPCSessionTimeoutError: Your 10.0s session has expired, try to increase the "session_timeout" value.
+
+    During handling of the above exception, another exception occurred:
+
+    Traceback (most recent call last):
+      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 679, in run_through_rpc
+        costs = time_f(*args).results
+      File "/usr/lib/python3.8/contextlib.py", line 131, in __exit__
+        self.gen.throw(type, value, traceback)
+      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 716, in __call__
+        remote.remove(build_result.filename)
+      File "/workspace/python/tvm/rpc/client.py", line 145, in remove
+        self._remote_funcs["remove"] = self.get_function("tvm.rpc.server.remove")
+      File "/workspace/python/tvm/rpc/client.py", line 73, in get_function
+        return self._sess.get_function(name)
+      File "/workspace/python/tvm/runtime/module.py", line 171, in get_function
+        check_call(
+      File "/workspace/python/tvm/_ffi/base.py", line 348, in check_call
+        raise get_last_ffi_error()
+    tvm.error.InternalError: Traceback (most recent call last):
+      50: 0xffffffffffffffff
+      49: _start
+      48: __libc_start_main
+      47: 0x00007fbfb434bd8f
+      46: Py_BytesMain
+      45: Py_RunMain
+      44: 0x00000000005f3021
+      43: PyObject_Call
+      42: _PyFunction_Vectorcall
+      41: _PyEval_EvalCodeWithName
+      40: _PyEval_EvalFrameDefault
+      39: _PyFunction_Vectorcall
+      38: _PyEval_EvalCodeWithName
+      37: _PyEval_EvalFrameDefault
+      36: 0x000000000051546f
+      35: 0x00000000005dabd0
+      34: PyEval_EvalCode
+      33: _PyEval_EvalCodeWithName
+      32: _PyEval_EvalFrameDefault
+      31: _PyFunction_Vectorcall
+      30: _PyEval_EvalCodeWithName
+      29: _PyEval_EvalFrameDefault
+      28: PyObject_Call
+      27: _PyFunction_Vectorcall
+      26: _PyEval_EvalCodeWithName
+      25: _PyEval_EvalFrameDefault
+      24: 0x0000000000521f82
+      23: _PyFunction_Vectorcall
+      22: _PyEval_EvalFrameDefault
+      21: 0x000000000052f0a9
+      20: 0x0000000000626e1c
+      19: 0x0000000000626f00
+      18: 0x00000000005291f0
+      17: _PyEval_EvalFrameDefault
+      16: _PyFunction_Vectorcall
+      15: _PyEval_EvalFrameDefault
+      14: _PyFunction_Vectorcall
+      13: _PyEval_EvalFrameDefault
+      12: _PyFunction_Vectorcall
+      11: _PyEval_EvalCodeWithName
+      10: _PyEval_EvalFrameDefault
+      9: _PyObject_MakeTpCall
+      8: 0x00007fbfb4167429
+      7: _ctypes_callproc
+      6: 0x00007fbfb3ec8492
+      5: 0x00007fbfb3ecbe2d
+      4: tvm::runtime::ModuleNode::GetFunction(tvm::runtime::String const&, bool)
+            at /workspace/src/runtime/module.cc:66
+      3: tvm::runtime::RPCModuleNode::GetFunction(tvm::runtime::String const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)
+            at /workspace/src/runtime/rpc/rpc_module.cc:187
+      2: tvm::runtime::RPCClientSession::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
+            at /workspace/src/runtime/rpc/rpc_endpoint.cc:1011
+      1: tvm::runtime::TVMRetValue tvm::runtime::RPCEndpoint::SysCallRemote<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>(tvm::runtime::RPCCode, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
+            at /workspace/src/runtime/rpc/rpc_endpoint.h:223
+      0: operator()
+            at /workspace/src/runtime/rpc/rpc_endpoint.cc:688
+      File "/workspace/src/runtime/rpc/rpc_endpoint.cc", line 688
+    InternalError: Check failed: (code == RPCCode::kReturn) is false: code=1
+
+    Traceback (most recent call last):
+      50: 0xffffffffffffffff
+      49: _start
+      48: __libc_start_main
+      47: 0x00007fbfb434bd8f
+      46: Py_BytesMain
+      45: Py_RunMain
+      44: 0x00000000005f3021
+      43: PyObject_Call
+      42: _PyFunction_Vectorcall
+      41: _PyEval_EvalCodeWithName
+      40: _PyEval_EvalFrameDefault
+      39: _PyFunction_Vectorcall
+      38: _PyEval_EvalCodeWithName
+      37: _PyEval_EvalFrameDefault
+      36: 0x000000000051546f
+      35: 0x00000000005dabd0
+      34: PyEval_EvalCode
+      33: _PyEval_EvalCodeWithName
+      32: _PyEval_EvalFrameDefault
+      31: _PyFunction_Vectorcall
+      30: _PyEval_EvalCodeWithName
+      29: _PyEval_EvalFrameDefault
+      28: PyObject_Call
+      27: _PyFunction_Vectorcall
+      26: _PyEval_EvalCodeWithName
+      25: _PyEval_EvalFrameDefault
+      24: 0x0000000000521f82
+      23: _PyFunction_Vectorcall
+      22: _PyEval_EvalFrameDefault
+      21: 0x000000000052f0a9
+      20: 0x0000000000626e1c
+      19: 0x0000000000626f00
+      18: 0x00000000005291f0
+      17: _PyEval_EvalFrameDefault
+      16: _PyFunction_Vectorcall
+      15: _PyEval_EvalFrameDefault
+      14: _PyFunction_Vectorcal     [('tile_y', [-1, 512]), ('tile_x', [-1, 1])],None,9
+    No: 2   GFLOPS: 9.71/9.71       result: MeasureResult(costs=(0.027655963999999998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7352867126464844, timestamp=1690344607.5445006)       [('tile_y', [-1, 8]), ('tile_x', [-1, 16])],None,43
+    No: 3   GFLOPS: 8.46/9.71       result: MeasureResult(costs=(0.0317474348,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.826432466506958, timestamp=1690344608.324583) [('tile_y', [-1, 1]), ('tile_x', [-1, 128])],None,70
+    No: 4   GFLOPS: 7.24/9.71       result: MeasureResult(costs=(0.0370749484,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.8846931457519531, timestamp=1690344609.1849132)       [('tile_y', [-1, 64]), ('tile_x', [-1, 16])],None,46
+    No: 5   GFLOPS: 11.42/11.42     result: MeasureResult(costs=(0.023515623,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6434590816497803, timestamp=1690344610.0269935)        [('tile_y', [-1, 128]), ('tile_x', [-1, 512])],None,97
+    No: 6   GFLOPS: 3.62/11.42      result: MeasureResult(costs=(0.0742203254,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.4683341979980469, timestamp=1690344611.4810464)       [('tile_y', [-1, 128]), ('tile_x', [-1, 8])],None,37
+    No: 7   GFLOPS: 11.59/11.59     result: MeasureResult(costs=(0.0231556352,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6698708534240723, timestamp=1690344612.1197443)       [('tile_y', [-1, 32]), ('tile_x', [-1, 32])],None,55
+    No: 8   GFLOPS: 11.78/11.78     result: MeasureResult(costs=(0.022784483,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.706279993057251, timestamp=1690344612.7481692) [('tile_y', [-1, 64]), ('tile_x', [-1, 32])],None,56
+    No: 9   GFLOPS: 3.60/11.78      result: MeasureResult(costs=(0.0745082484,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.4252007007598877, timestamp=1690344614.3315804)       [('tile_y', [-1, 256]), ('tile_x', [-1, 8])],None,38
+    No: 10  GFLOPS: 9.24/11.78      result: MeasureResult(costs=(0.0290505876,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7512252330780029, timestamp=1690344615.064585)        [('tile_y', [-1, 1]), ('tile_x', [-1, 64])],None,60
 
 
 
diff --git a/docs/_sources/tutorial/autotvm_relay_x86.rst.txt b/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
index 08750b497f..47c40a5367 100644
--- a/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
+++ b/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
@@ -311,7 +311,7 @@ standard deviation.
 
  .. code-block:: none
 
-    {'mean': 500.13943584000117, 'median': 500.3703938499939, 'std': 1.82944721104681}
+    {'mean': 498.1850786400082, 'median': 497.6376511000126, 'std': 3.84035680931187}
 
 
 
@@ -582,30 +582,30 @@ the tuning data to.
 
  .. code-block:: none
 
-
    [Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  1/25]  Current/Best:   12.94/  18.41 GFLOPS | Progress: (4/20) | 9.20 s
    [Task  1/25]  Current/Best:    6.38/  21.42 GFLOPS | Progress: (8/20) | 12.91 s
    [Task  1/25]  Current/Best:   12.76/  23.25 GFLOPS | Progress: (12/20) | 15.49 s
    [Task  1/25]  Current/Best:   10.29/  23.25 GFLOPS | Progress: (16/20) | 18.39 s
    [Task  1/25]  Current/Best:    7.22/  23.25 GFLOPS | Progress: (20/20) | 21.58 s Done.
-
    [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  2/25]  Current/Best:   19.00/  21.51 GFLOPS | Progress: (4/20) | 4.70 s
    [Task  2/25]  Current/Best:    9.87/  21.51 GFLOPS | Progress: (8/20) | 6.83 s
    [Task  2/25]  Current/Best:    7.41/  21.51 GFLOPS | Progress: (12/20) | 8.30 s
    [Task  2/25]  Current/Best:   17.40/  21.51 GFLOPS | Progress: (16/20) | 10.71 s
    [Task  2/25]  Current/Best:   14.39/  21.51 GFLOPS | Progress: (20/20) | 12.48 s Done.
-
    [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  3/25]  Current/Best:   10.52/  18.24 GFLOPS | Progress: (4/20) | 6.24 s
    [Task  3/25]  Current/Best:   20.92/  20.92 GFLOPS | Progress: (8/20) | 8.57 s
    [Task  3/25]  Current/Best:   16.64/  20.92 GFLOPS | Progress: (12/20) | 11.56 s
    [Task  3/25]  Current/Best:   18.79/  20.92 GFLOPS | Progress: (16/20) | 13.93 s
    [Task  3/25]  Current/Best:   10.54/  20.92 GFLOPS | Progress: (20/20) | 16.26 s Done.
-
    [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  4/25]  Current/Best:    6.42/  14.73 GFLOPS | Progress: (4/20) | 5.95 s
    [Task  4/25]  Current/Best:   14.46/  14.82 GFLOPS | Progress: (8/20) | 9.09 s
    [Task  4/25]  Current/Best:   15.54/  15.54 GFLOPS | Progress: (12/20) | 11.27 s
    [Task  4/25]  Current/Best:   15.27/  20.38 GFLOPS | Progress: (16/20) | 13.54 s
    [Task  4/25]  Current/Best:   19.47/  20.38 GFLOPS | Progress: (20/20) | 15.45 s Done.
-
    [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  5/25]  Current/Best:   20.40/  20.40 GFLOPS | Progress: (4/20) | 4.90 s
    [Task  5/25]  Current/Best:   11.05/  20.40 GFLOPS | Progress: (8/20) | 8.07 s
    [Task  5/25]  Current/Best:    6.40/  20.40 GFLOPS | Progress: (12/20) | 9.99 s
    [Task  5/25]  Current/Best:    6.64/  20.40 GFLOPS | Progress: (16/20) | 11.97 s
    [Task  5/25]  Current/Best:    5.58/  20.40 GFLOPS | Progress: (20/20) | 14.00 s Done.
-
    [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  6/25]  Current/Best:   15.24/  18.15 GFLOPS | Progress: (4/20) | 5.41 s
    [Task  6/25]  Current/Best:    7.83/  19.51 GFLOPS | Progress: (8/20) | 8.68 s
    [Task  6/25]  Current/Best:   12.62/  19.51 GFLOPS | Progress: (12/20) | 11.76 s
    [Task  6/25]  Current/Best:   22.76/  22.76 GFLOPS | Progress: (16/20) | 14.21 s
    [Task  6/25]  Current/Best:    4.72/  22.76 GFLOPS | Progress: (20/20) | 16.90 s Done.
-
    [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  7/25]  Current/Best:    7.68/  11.85 GFLOPS | Progress: (4/20) | 5.64 s
    [Task  7/25]  Current/Best:   11.53/  13.94 GFLOPS | Progress: (8/20) | 8.19 s
    [Task  7/25]  Current/Best:    6.30/  17.69 GFLOPS | Progress: (12/20) | 11.13 s
    [Task  7/25]  Current/Best:    6.97/  17.69 GFLOPS | Progress: (16/20) | 15.90 s
    [Task  7/25]  Current/Best:   10.44/  17.69 GFLOPS | Progress: (20/20) | 19.70 s Done.
-
    [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  8/25]  Current/Best:   14.71/  14.71 GFLOPS | Progress: (4/20) | 7.59 s
    [Task  8/25]  Current/Best:    1.70/  14.71 GFLOPS | Progress: (8/20) | 11.90 s
    [Task  8/25]  Current/Best:   10.20/  14.71 GFLOPS | Progress: (12/20) | 16.69 s
    [Task  8/25]  Current/Best:   12.28/  14.71 GFLOPS | Progress: (16/20) | 20.57 s
    [Task  8/25]  Current/Best:   13.32/  14.71 GFLOPS | Progress: (20/20) | 23.13 s Done.
-
    [Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  9/25]  Current/Best:   12.12/  15.99 GFLOPS | Progress: (4/20) | 4.94 s
    [Task  9/25]  Current/Best:   12.29/  15.99 GFLOPS | Progress: (8/20) | 13.06 s
    [Task  9/25]  Current/Best:   15.16/  17.37 GFLOPS | Progress: (12/20) | 24.12 s
    [Task  9/25]  Current/Best:   13.49/  22.24 GFLOPS | Progress: (16/20) | 27.27 s
    [Task  9/25]  Current/Best:   12.67/  22.24 GFLOPS | Progress: (20/20) | 29.25 s
    [Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
-
    [Task 10/25]  Current/Best:   13.36/  15.87 GFLOPS | Progress: (4/20) | 5.41 s
    [Task 10/25]  Current/Best:   15.17/  18.48 GFLOPS | Progress: (8/20) | 7.30 s
    [Task 10/25]  Current/Best:    5.43/  18.48 GFLOPS | Progress: (12/20) | 9.13 s
    [Task 10/25]  Current/Best:   13.33/  18.48 GFLOPS | Progress: (16/20) | 11.43 s
    [Task 10/25]  Current/Best:   18.86/  18.86 GFLOPS | Progress: (20/20) | 13.44 s Done.
-
    [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 11/25]  Current/Best:   15.30/  15.30 GFLOPS | Progress: (4/20) | 5.89 s
    [Task 11/25]  Current/Best:   12.31/  15.30 GFLOPS | Progress: (8/20) | 8.95 s
    [Task 11/25]  Current/Best:   18.58/  18.58 GFLOPS | Progress: (12/20) | 11.36 s
    [Task 11/25]  Current/Best:   16.26/  20.41 GFLOPS | Progress: (16/20) | 13.54 s
    [Task 11/25]  Current/Best:   12.32/  20.41 GFLOPS | Progress: (20/20) | 16.46 s Done.
-
    [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 12/25]  Current/Best:   12.20/  16.12 GFLOPS | Progress: (4/20) | 7.27 s
    [Task 12/25]  Current/Best:    5.44/  16.12 GFLOPS | Progress: (8/20) | 10.11 s
    [Task 12/25]  Current/Best:    5.76/  16.12 GFLOPS | Progress: (12/20) | 12.71 s
    [Task 12/25]  Current/Best:   15.31/  21.95 GFLOPS | Progress: (16/20) | 15.09 s
    [Task 12/25]  Current/Best:   11.70/  21.95 GFLOPS | Progress: (20/20) | 18.65 s Done.
-
    [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 13/25]  Current/Best:    9.09/  20.74 GFLOPS | Progress: (4/20) | 5.20 s
    [Task 13/25]  Current/Best:    5.21/  20.74 GFLOPS | Progress: (8/20) | 9.18 s
    [Task 13/25]  Current/Best:    6.22/  20.74 GFLOPS | Progress: (12/20) | 12.94 s
    [Task 13/25]  Current/Best:   12.11/  20.74 GFLOPS | Progress: (16/20) | 15.92 s
    [Task 13/25]  Current/Best:   18.31/  20.74 GFLOPS | Progress: (20/20) | 19.61 s Done.
-
    [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 14/25]  Current/Best:   10.55/  21.03 GFLOPS | Progress: (4/20) | 9.58 s
    [Task 14/25]  Current/Best:    1.60/  21.03 GFLOPS | Progress: (8/20) | 14.71 s
    [Task 14/25]  Current/Best:   14.98/  21.03 GFLOPS | Progress: (12/20) | 16.63 s
    [Task 14/25]  Current/Best:    5.62/  21.03 GFLOPS | Progress: (16/20) | 28.17 s
    [Task 14/25]  Current/Best:   11.53/  21.03 GFLOPS | Progress: (20/20) | 30.33 s Done.
-
    [Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 15/25]  Current/Best:    5.61/  17.99 GFLOPS | Progress: (4/20) | 8.67 s
    [Task 15/25]  Current/Best:   12.53/  17.99 GFLOPS | Progress: (8/20) | 14.96 s
    [Task 15/25]  Current/Best:   10.77/  23.56 GFLOPS | Progress: (12/20) | 17.67 s
    [Task 15/25]  Current/Best:   12.31/  23.56 GFLOPS | Progress: (16/20) | 29.07 s
    [Task 15/25]  Current/Best:   17.17/  23.56 GFLOPS | Progress: (20/20) | 32.13 s
    [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 16/25]  Current/Best:    4.05/  18.15 GFLOPS | Progress: (4/20) | 4.76 s
    [Task 16/25]  Current/Best:   13.18/  18.15 GFLOPS | Progress: (8/20) | 7.78 s
    [Task 16/25]  Current/Best:    5.76/  20.99 GFLOPS | Progress: (12/20) | 10.04 s
    [Task 16/25]  Current/Best:   14.92/  20.99 GFLOPS | Progress: (16/20) | 11.74 s
   [Task 16/25]  Current/Best:   19.27/  20.99 GFLOPS | Progress: (20/20) | 14.06 s Done.
-
    [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 17/25]  Current/Best:   23.07/  23.07 GFLOPS | Progress: (4/20) | 5.38 s
    [Task 17/25]  Current/Best:   14.13/  23.07 GFLOPS | Progress: (8/20) | 8.23 s
    [Task 17/25]  Current/Best:   18.11/  23.07 GFLOPS | Progress: (12/20) | 10.98 s
    [Task 17/25]  Current/Best:   10.36/  23.07 GFLOPS | Progress: (16/20) | 14.32 s
    [Task 17/25]  Current/Best:   17.40/  23.57 GFLOPS | Progress: (20/20) | 16.79 s Done.
-
    [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 18/25]  Current/Best:   10.29/  13.70 GFLOPS | Progress: (4/20) | 7.65 s
    [Task 18/25]  Current/Best:   14.26/  22.17 GFLOPS | Progress: (8/20) | 9.95 s
    [Task 18/25]  Current/Best:   15.51/  22.17 GFLOPS | Progress: (12/20) | 14.28 s
    [Task 18/25]  Current/Best:   10.81/  22.17 GFLOPS | Progress: (16/20) | 16.94 s
    [Task 18/25]  Current/Best:   16.43/  22.17 GFLOPS | Progress: (20/20) | 25.08 s Done.
-
    [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 19/25]  Current/Best:   18.11/  22.22 GFLOPS | Progress: (4/20) | 6.70 s
    [Task 19/25]  Current/Best:    5.36/  22.22 GFLOPS | Progress: (8/20) | 9.47 s
    [Task 19/25]  Current/Best:    5.24/  22.22 GFLOPS | Progress: (12/20) | 13.71 s
    [Task 19/25]  Current/Best:   10.08/  22.22 GFLOPS | Progress: (16/20) | 19.26 s
    [Task 19/25]  Current/Best:   21.61/  22.22 GFLOPS | Progress: (20/20) | 23.39 s Done.
-
    [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 20/25]  Current/Best:    4.31/  14.02 GFLOPS | Progress: (4/20) | 14.98 s
    [Task 20/25]  Current/Best:   13.13/  15.90 GFLOPS | Progress: (8/20) | 24.89 s
    [Task 20/25]  Current/Best:   18.56/  18.56 GFLOPS | Progress: (12/20) | 27.97 s
    [Task 20/25]  Current/Best:   12.99/  20.73 GFLOPS | Progress: (16/20) | 30.04 s
    [Task 20/25]  Current/Best:    8.18/  20.73 GFLOPS | Progress: (20/20) | 36.90 s
    [Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 21/25]  Current/Best:   10.97/  21.03 GFLOPS | Progress: (4/20) | 14.13 s
    [Task 21/25]  Current/Best:   18.28/  21.03 GFLOPS | Progress: (8/20) | 17.10 s
    [Task 21/25]  Current/Best:    9.90/  21.03 GFLOPS | Progress: (12/20) | 24.75 s
    [Task 21/25]  Current/Best:    9.45/  21.03 GFLOPS | Progress: (16/20) | 32.32 s
   [Task 21/25]  Current/Best:   17.92/  21.03 GFLOPS | Progress: (20/20) | 41.19 s
    [Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
+
    [Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  1/25]  Current/Best:   12.34/  21.72 GFLOPS | Progress: (4/20) | 10.46 s
    [Task  1/25]  Current/Best:   12.86/  21.72 GFLOPS | Progress: (8/20) | 14.05 s
    [Task  1/25]  Current/Best:   16.34/  21.72 GFLOPS | Progress: (12/20) | 16.24 s
    [Task  1/25]  Current/Best:   10.28/  21.72 GFLOPS | Progress: (16/20) | 20.20 s
    [Task  1/25]  Current/Best:   12.80/  22.48 GFLOPS | Progress: (20/20) | 22.58 s Done.
+
    [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  2/25]  Current/Best:   19.76/  19.76 GFLOPS | Progress: (4/20) | 4.68 s
    [Task  2/25]  Current/Best:    7.49/  19.76 GFLOPS | Progress: (8/20) | 6.30 s
    [Task  2/25]  Current/Best:   21.13/  21.13 GFLOPS | Progress: (12/20) | 7.97 s
    [Task  2/25]  Current/Best:   16.35/  21.13 GFLOPS | Progress: (16/20) | 9.54 s
    [Task  2/25]  Current/Best:    6.79/  21.13 GFLOPS | Progress: (20/20) | 11.10 s Done.
+
    [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  3/25]  Current/Best:    6.40/  21.67 GFLOPS | Progress: (4/20) | 6.07 s
    [Task  3/25]  Current/Best:   15.36/  21.67 GFLOPS | Progress: (8/20) | 8.53 s
    [Task  3/25]  Current/Best:   12.09/  21.67 GFLOPS | Progress: (12/20) | 11.13 s
    [Task  3/25]  Current/Best:    5.10/  22.51 GFLOPS | Progress: (16/20) | 13.80 s
    [Task  3/25]  Current/Best:    9.42/  22.51 GFLOPS | Progress: (20/20) | 16.09 s Done.
+
    [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  4/25]  Current/Best:    7.88/  13.82 GFLOPS | Progress: (4/20) | 5.17 s
    [Task  4/25]  Current/Best:    7.44/  18.66 GFLOPS | Progress: (8/20) | 7.04 s
    [Task  4/25]  Current/Best:   13.87/  18.66 GFLOPS | Progress: (12/20) | 9.38 s
    [Task  4/25]  Current/Best:   15.98/  18.66 GFLOPS | Progress: (16/20) | 11.31 s
    [Task  4/25]  Current/Best:    4.39/  20.93 GFLOPS | Progress: (20/20) | 13.38 s Done.
+
    [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  5/25]  Current/Best:    4.30/  12.90 GFLOPS | Progress: (4/20) | 5.08 s
    [Task  5/25]  Current/Best:   12.00/  15.64 GFLOPS | Progress: (8/20) | 7.32 s
    [Task  5/25]  Current/Best:   16.91/  18.92 GFLOPS | Progress: (12/20) | 9.10 s
    [Task  5/25]  Current/Best:   21.09/  21.09 GFLOPS | Progress: (16/20) | 11.15 s
    [Task  5/25]  Current/Best:   14.56/  21.09 GFLOPS | Progress: (20/20) | 13.61 s Done.
+
    [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  6/25]  Current/Best:   12.54/  14.21 GFLOPS | Progress: (4/20) | 5.36 s
    [Task  6/25]  Current/Best:    8.87/  18.43 GFLOPS | Progress: (8/20) | 8.52 s
    [Task  6/25]  Current/Best:   15.55/  18.43 GFLOPS | Progress: (12/20) | 11.68 s
    [Task  6/25]  Current/Best:   17.62/  18.43 GFLOPS | Progress: (16/20) | 13.99 s
    [Task  6/25]  Current/Best:   18.26/  18.43 GFLOPS | Progress: (20/20) | 19.17 s Done.
+
    [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  7/25]  Current/Best:    8.10/  13.84 GFLOPS | Progress: (4/20) | 5.71 s
    [Task  7/25]  Current/Best:   13.70/  13.84 GFLOPS | Progress: (8/20) | 9.01 s
    [Task  7/25]  Current/Best:    5.26/  18.23 GFLOPS | Progress: (12/20) | 11.61 s
    [Task  7/25]  Current/Best:   13.03/  18.23 GFLOPS | Progress: (16/20) | 14.93 s
    [Task  7/25]  Current/Best:   12.22/  20.49 GFLOPS | Progress: (20/20) | 17.76 s Done.
+
    [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  8/25]  Current/Best:   12.47/  18.04 GFLOPS | Progress: (4/20) | 5.15 s
    [Task  8/25]  Current/Best:    3.14/  18.04 GFLOPS | Progress: (8/20) | 8.30 s
    [Task  8/25]  Current/Best:   12.10/  18.04 GFLOPS | Progress: (12/20) | 13.17 s
    [Task  8/25]  Current/Best:    6.54/  18.04 GFLOPS | Progress: (16/20) | 16.91 s
    [Task  8/25]  Current/Best:   15.19/  18.04 GFLOPS | Progress: (20/20) | 20.01 s Done.
+
    [Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  9/25]  Current/Best:   15.04/  21.12 GFLOPS | Progress: (4/20) | 6.75 s
    [Task  9/25]  Current/Best:    6.40/  21.12 GFLOPS | Progress: (8/20) | 15.16 s
    [Task  9/25]  Current/Best:    8.53/  21.12 GFLOPS | Progress: (12/20) | 23.15 s
    [Task  9/25]  Current/Best:    5.06/  22.59 GFLOPS | Progress: (16/20) | 29.17 s
    [Task  9/25]  Current/Best:    5.43/  22.59 GFLOPS | Progress: (20/20) | 31.21 s Done.
+
    [Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 10/25]  Current/Best:   14.70/  14.70 GFLOPS | Progress: (4/20) | 4.82 s
    [Task 10/25]  Current/Best:    3.59/  14.70 GFLOPS | Progress: (8/20) | 7.34 s
    [Task 10/25]  Current/Best:   14.24/  15.07 GFLOPS | Progress: (12/20) | 9.83 s
    [Task 10/25]  Current/Best:   14.92/  17.63 GFLOPS | Progress: (16/20) | 11.51 s
    [Task 10/25]  Current/Best:   11.35/  18.18 GFLOPS | Progress: (20/20) | 13.50 s Done.
+
    [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 11/25]  Current/Best:   17.91/  23.08 GFLOPS | Progress: (4/20) | 4.73 s
    [Task 11/25]  Current/Best:   23.09/  23.09 GFLOPS | Progress: (8/20) | 6.97 s
    [Task 11/25]  Current/Best:    9.90/  23.09 GFLOPS | Progress: (12/20) | 10.00 s
    [Task 11/25]  Current/Best:   13.88/  23.09 GFLOPS | Progress: (16/20) | 12.92 s
    [Task 11/25]  Current/Best:   19.55/  23.09 GFLOPS | Progress: (20/20) | 14.89 s Done.
+
    [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 12/25]  Current/Best:   14.98/  14.98 GFLOPS | Progress: (4/20) | 8.14 s
    [Task 12/25]  Current/Best:    3.11/  20.30 GFLOPS | Progress: (8/20) | 10.92 s
    [Task 12/25]  Current/Best:   14.25/  20.30 GFLOPS | Progress: (12/20) | 13.02 s
    [Task 12/25]  Current/Best:   18.26/  20.30 GFLOPS | Progress: (16/20) | 16.91 s
    [Task 12/25]  Current/Best:   18.02/  20.30 GFLOPS | Progress: (20/20) | 19.46 s Done.
+
    [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 13/25]  Current/Best:    8.69/  21.52 GFLOPS | Progress: (4/20) | 5.40 s
    [Task 13/25]  Current/Best:   16.93/  22.84 GFLOPS | Progress: (8/20) | 7.99 s
    [Task 13/25]  Current/Best:   18.35/  22.84 GFLOPS | Progress: (12/20) | 10.47 s
    [Task 13/25]  Current/Best:    9.44/  22.84 GFLOPS | Progress: (16/20) | 13.70 s
    [Task 13/25]  Current/Best:   11.29/  22.84 GFLOPS | Progress: (20/20) | 18.03 s Done.
+
    [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 14/25]  Current/Best:   13.03/  13.03 GFLOPS | Progress: (4/20) | 5.68 s
    [Task 14/25]  Current/Best:   16.35/  17.54 GFLOPS | Progress: (8/20) | 9.89 s
    [Task 14/25]  Current/Best:   12.93/  17.54 GFLOPS | Progress: (12/20) | 14.05 s
    [Task 14/25]  Current/Best:   12.08/  17.54 GFLOPS | Progress: (16/20) | 25.61 s
    [Task 14/25]  Current/Best:   18.37/  18.37 GFLOPS | Progress: (20/20) | 34.74 s
    [Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
+
    [Task 15/25]  Current/Best:   12.98/  21.20 GFLOPS | Progress: (4/20) | 5.78 s
    [Task 15/25]  Current/Best:   18.37/  21.20 GFLOPS | Progress: (8/20) | 7.48 s
    [Task 15/25]  Current/Best:   10.39/  22.45 GFLOPS | Progress: (12/20) | 18.25 s
    [Task 15/25]  Current/Best:    5.78/  22.45 GFLOPS | Progress: (16/20) | 25.02 s
    [Task 15/25]  Current/Best:    8.44/  22.45 GFLOPS | Progress: (20/20) | 31.01 s
    [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 16/25]  Current/Best:   16.40/  19.02 GFLOPS | Progress: (4/20) | 4.59 s
    [Task 16/25]  Current/Best:    5.49/  19.02 GFLOPS | Progress: (8/20) | 7.08 s
    [Task 16/25]  Current/Best:   14.25/  19.02 GFLOPS | Progress: (12/20) | 9.47 s
    [Task 16/25]  Current/Best:   15.03/  19.02 GFLOPS | Progress: (16/20) | 11.98 s
    [Task 16/25]  Current/Best:   10.97/  19.02 GFLOPS | Progress: (20/20) | 14.15 s Done.
+
    [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 17/25]  Current/Best:   19.22/  19.22 GFLOPS | Progress: (4/20) | 5.60 s
    [Task 17/25]  Current/Best:   15.57/  19.22 GFLOPS | Progress: (8/20) | 8.83 s
    [Task 17/25]  Current/Best:   11.45/  19.88 GFLOPS | Progress: (12/20) | 12.49 s
    [Task 17/25]  Current/Best:   16.10/  19.88 GFLOPS | Progress: (16/20) | 16.84 s
    [Task 17/25]  Current/Best:   20.99/  23.05 GFLOPS | Progress: (20/20) | 19.55 s Done.
+
    [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 18/25]  Current/Best:    9.32/  17.93 GFLOPS | Progress: (4/20) | 6.58 s
    [Task 18/25]  Current/Best:    8.08/  17.93 GFLOPS | Progress: (8/20) | 10.95 s
    [Task 18/25]  Current/Best:   18.30/  18.30 GFLOPS | Progress: (12/20) | 13.54 s
    [Task 18/25]  Current/Best:   12.52/  18.30 GFLOPS | Progress: (16/20) | 17.01 s
    [Task 18/25]  Current/Best:    4.40/  18.30 GFLOPS | Progress: (20/20) | 20.01 s Done.
+
    [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 19/25]  Current/Best:   18.01/  18.01 GFLOPS | Progress: (4/20) | 6.82 s
    [Task 19/25]  Current/Best:   11.02/  18.01 GFLOPS | Progress: (8/20) | 12.20 s
    [Task 19/25]  Current/Best:   16.93/  18.01 GFLOPS | Progress: (12/20) | 16.08 s
    [Task 19/25]  Current/Best:   21.66/  21.66 GFLOPS | Progress: (16/20) | 19.16 s
    [Task 19/25]  Current/Best:   12.88/  21.66 GFLOPS | Progress: (20/20) | 25.81 s Done.
+
    [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 20/25]  Current/Best:   20.17/  20.26 GFLOPS | Progress: (4/20) | 9.67 s
    [Task 20/25]  Current/Best:    5.24/  20.26 GFLOPS | Progress: (8/20) | 21.59 s
    [Task 20/25]  Current/Best:    8.35/  20.26 GFLOPS | Progress: (12/20) | 27.74 s
    [Task 20/25]  Current/Best:   20.04/  20.26 GFLOPS | Progress: (16/20) | 30.77 s
    [Task 20/25]  Current/Best:   18.44/  20.26 GFLOPS | Progress: (20/20) | 37.65 s
    [Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 21/25]  Current/Best:   17.76/  17.76 GFLOPS | Progress: (4/20) | 5.75 s
    [Task 21/25]  Current/Best:   12.55/  21.68 GFLOPS | Progress: (8/20) | 7.72 s
    [Task 21/25]  Current/Best:   10.28/  21.68 GFLOPS | Progress: (12/20) | 10.32 s
    [Task 21/25]  Current/Best:   17.55/  21.68 GFLOPS | Progress: (16/20) | 19.88 s
    [Task 21/25]  Current/Best:   19.44/  21.68 GFLOPS | Progress: (20/20) | 31.08 s
    [Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 22/25]  Current/Best:   16.07/  21.24 GFLOPS | Progress: (4/20) | 5.87 s
    [Task 22/25]  Current/Best:    6.14/  21.24 GFLOPS | Progress: (8/20) | 7.94 s
    [Task 22/25]  Current/Best:   12.42/  21.24 GFLOPS | Progress: (12/20) | 10.11 s
    [Task 22/25]  Current/Best:    7.28/  21.24 GFLOPS | Progress: (16/20) | 13.79 s
    [Task 22/25]  Current/Best:    8.62/  21.24 GFLOPS | Progress: (20/20) | 16.63 s Done.
+
    [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 23/25]  Current/Best:   23.04/  23.04 GFLOPS | Progress: (4/20) | 6.56 s
    [Task 23/25]  Current/Best:   22.69/  23.04 GFLOPS | Progress: (8/20) | 11.41 s
    [Task 23/25]  Current/Best:   13.39/  23.04 GFLOPS | Progress: (12/20) | 14.34 s
    [Task 23/25]  Current/Best:   21.23/  23.04 GFLOPS | Progress: (16/20) | 18.28 s
    [Task 23/25]  Current/Best:    5.16/  23.04 GFLOPS | Progress: (20/20) | 21.70 s Done.
+
    [Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 24/25]  Current/Best:    7.81/   9.58 GFLOPS | Progress: (4/20) | 10.07 s
    [Task 24/25]  Current/Best:    3.24/   9.58 GFLOPS | Progress: (8/20) | 21.14 s Done.
      Done.
      Done.
-
    [Task 22/25]  Current/Best:   16.38/  18.19 GFLOPS | Progress: (4/20) | 6.03 s
    [Task 22/25]  Current/Best:   12.41/  18.19 GFLOPS | Progress: (8/20) | 9.28 s
    [Task 22/25]  Current/Best:    7.57/  18.19 GFLOPS | Progress: (12/20) | 14.23 s
    [Task 22/25]  Current/Best:   17.91/  18.19 GFLOPS | Progress: (16/20) | 16.85 s
    [Task 22/25]  Current/Best:    8.96/  18.19 GFLOPS | Progress: (20/20) | 19.84 s Done.
-
    [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 23/25]  Current/Best:   19.06/  19.06 GFLOPS | Progress: (4/20) | 5.78 s
    [Task 23/25]  Current/Best:   12.10/  22.21 GFLOPS | Progress: (8/20) | 8.25 s
    [Task 23/25]  Current/Best:   20.03/  22.21 GFLOPS | Progress: (12/20) | 11.44 s
    [Task 23/25]  Current/Best:   11.44/  22.21 GFLOPS | Progress: (16/20) | 14.84 s
    [Task 23/25]  Current/Best:   18.66/  22.82 GFLOPS | Progress: (20/20) | 18.02 s Done.
-
    [Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 24/25]  Current/Best:    8.95/   8.95 GFLOPS | Progress: (4/20) | 13.91 s
    [Task 24/25]  Current/Best:    5.80/   8.95 GFLOPS | Progress: (8/20) | 18.72 s
    [Task 24/25]  Current/Best:    1.27/   8.95 GFLOPS | Progress: (12/20) | 29.46 s
    [Task 24/25]  Current/Best:    3.31/   8.95 GFLOPS | Progress: (16/20) | 42.08 s
    [Task 24/25]  Current/Best:    2.97/   8.95 GFLOPS | Progress: (20/20) | 55.16 s
    [Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 25/25]  Current/Best:    3.03/   8.77 GFLOPS | Progress: (4/20) | 11.06 s
    [Task 25/25]  Current/Best:    1.54/   9.17 GFLOPS | Progress: (8/20) | 14.52 s
    [Task 25/25]  Current/Best:    1.55/   9.17 GFLOPS | Progress: (12/20) | 25.53 s
    [Task 25/25]  Current/Best:    5.58/   9.17 GFLOPS | Progress: (16/20) | 29.13 s
    [Task 25/25]  Current/Best:    6.64/   9.17 GFLOPS | Progress: (20/20) | 40.13 s
+
    [Task 24/25]  Current/Best:    9.42/   9.61 GFLOPS | Progress: (12/20) | 33.47 s
    [Task 24/25]  Current/Best:    0.98/   9.61 GFLOPS | Progress: (16/20) | 41.49 s
    [Task 24/25]  Current/Best:    8.31/   9.61 GFLOPS | Progress: (20/20) | 52.58 s
    [Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 25/25]  Current/Best:    5.61/   8.03 GFLOPS | Progress: (4/20) | 5.81 s
    [Task 25/25]  Current/Best:    1.55/   9.25 GFLOPS | Progress: (8/20) | 7.25 s
    [Task 25/25]  Current/Best:    2.88/   9.25 GFLOPS | Progress: (12/20) | 10.72 s
    [Task 25/25]  Current/Best:    8.06/   9.25 GFLOPS | Progress: (16/20) | 17.15 s
    [Task 25/25]  Current/Best:    8.86/   9.25 GFLOPS | Progress: (20/20) | 28.19 s
 
 
 
@@ -766,8 +766,8 @@ improvement in comparing the optimized model to the unoptimized model.
 
  .. code-block:: none
 
-    optimized: {'mean': 413.3076727600019, 'median': 413.26256724998984, 'std': 0.9493048239191954}
-    unoptimized: {'mean': 500.13943584000117, 'median': 500.3703938499939, 'std': 1.82944721104681}
+    optimized: {'mean': 408.6138473600022, 'median': 408.82695934997173, 'std': 0.9052774125969995}
+    unoptimized: {'mean': 498.1850786400082, 'median': 497.6376511000126, 'std': 3.84035680931187}
 
 
 
@@ -790,7 +790,7 @@ profiling/benchmarking.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 14 minutes  5.972 seconds)
+   **Total running time of the script:** ( 13 minutes  35.613 seconds)
 
 
 .. _sphx_glr_download_tutorial_autotvm_relay_x86.py:
diff --git a/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt b/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
index 384731f121..d937cd4312 100644
--- a/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
+++ b/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
@@ -274,7 +274,7 @@ device and returns the measured cost. Network overhead is excluded.
 
  .. code-block:: none
 
-    1.595e-07 secs/op
+    1.3e-07 secs/op
 
 
 
diff --git a/docs/_sources/tutorial/intro_topi.rst.txt b/docs/_sources/tutorial/intro_topi.rst.txt
index 7b85902673..6236701720 100644
--- a/docs/_sources/tutorial/intro_topi.rst.txt
+++ b/docs/_sources/tutorial/intro_topi.rst.txt
@@ -270,7 +270,7 @@ As you can see, scheduled stages of computation have been accumulated and we can
 
  .. code-block:: none
 
-    [stage(a, placeholder(a, 0xe4bc310)), stage(b, placeholder(b, 0x22373140)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T.Range(0, 10), "DataPar", ""), T.iter_var(ax2, T.Range(0, 10), "DataPar", "")], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[a[ax0, ax1, ax2] * b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T. [...]
+    [stage(a, placeholder(a, 0x24df5c70)), stage(b, placeholder(b, 0xa2fbd90)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T.Range(0, 10), "DataPar", ""), T.iter_var(ax2, T.Range(0, 10), "DataPar", "")], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[a[ax0, ax1, ax2] * b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T. [...]
 
 
 
diff --git a/docs/_sources/tutorial/sg_execution_times.rst.txt b/docs/_sources/tutorial/sg_execution_times.rst.txt
index 1b7e5c6b82..6f25ffa46f 100644
--- a/docs/_sources/tutorial/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorial/sg_execution_times.rst.txt
@@ -5,24 +5,24 @@
 
 Computation times
 =================
-**18:28.867** total execution time for **tutorial** files:
+**17:28.900** total execution time for **tutorial** files:
 
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_autotvm_relay_x86.py` (``autotvm_relay_x86.py``)                 | 14:05.972 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_autotvm_relay_x86.py` (``autotvm_relay_x86.py``)                 | 13:35.613 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_auto_scheduler_matmul_x86.py` (``auto_scheduler_matmul_x86.py``) | 01:57.137 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_auto_scheduler_matmul_x86.py` (``auto_scheduler_matmul_x86.py``) | 01:39.181 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)     | 01:00.483 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)     | 01:02.094 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_relay_quick_start.py` (``relay_quick_start.py``)                 | 00:44.372 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_relay_quick_start.py` (``relay_quick_start.py``)                 | 00:44.200 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_autotvm_matmul_x86.py` (``autotvm_matmul_x86.py``)               | 00:38.779 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_autotvm_matmul_x86.py` (``autotvm_matmul_x86.py``)               | 00:25.684 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_intro_topi.py` (``intro_topi.py``)                               | 00:01.024 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_intro_topi.py` (``intro_topi.py``)                               | 00:01.039 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_tensor_ir_blitz_course.py` (``tensor_ir_blitz_course.py``)       | 00:00.871 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_tensor_ir_blitz_course.py` (``tensor_ir_blitz_course.py``)       | 00:00.870 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``) | 00:00.229 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``) | 00:00.219 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_tutorial_uma.py` (``uma.py``)                                             | 00:00.000 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/tutorial/tensor_expr_get_started.rst.txt b/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
index ba9333b03d..26f1af7123 100644
--- a/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
+++ b/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
@@ -389,7 +389,7 @@ compile and run this new schedule with the parallel operation applied:
 
  .. code-block:: none
 
-    parallel: 0.000008
+    parallel: 0.000007
 
 
 
@@ -444,7 +444,7 @@ factor to be the number of threads on your CPU.
 
  .. code-block:: none
 
-    vector: 0.000039
+    vector: 0.000040
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -498,10 +498,10 @@ We can now compare the different schedules
  .. code-block:: none
 
                 Operator                  Timing             Performance
-                   numpy    7.516969999414869e-06                    1.0
-                   naive    6.7544999999999995e-06    0.8985668428270672
-                parallel              8.1922e-06      1.0898274172489304
-                  vector    3.9073900000000005e-05       5.1980917847273
+                   numpy    7.760499993310078e-06                    1.0
+                   naive              7.0947e-06      0.9142065596438337
+                parallel              7.3656e-06      0.9491141042908962
+                  vector             3.95142e-05      5.0917080128939025
 
 
 
@@ -922,7 +922,7 @@ matrix multiplication.
 
  .. code-block:: none
 
-    Numpy running time: 0.019325
+    Numpy running time: 0.018652
 
 
 
@@ -980,7 +980,7 @@ optimizations.
 
  .. code-block:: none
 
-    none: 3.359379
+    none: 3.521342
 
 
 
@@ -1080,7 +1080,7 @@ schedule.
 
  .. code-block:: none
 
-    blocking: 0.321201
+    blocking: 0.315924
 
 
 
@@ -1164,7 +1164,7 @@ already cache friendly from our previous optimizations.
 
  .. code-block:: none
 
-    vectorization: 0.319568
+    vectorization: 0.291435
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1230,7 +1230,7 @@ more cache friendly.
 
  .. code-block:: none
 
-    loop permutation: 0.121236
+    loop permutation: 0.116806
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1321,7 +1321,7 @@ optimized schedule.
 
  .. code-block:: none
 
-    array packing: 0.106509
+    array packing: 0.107784
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1404,7 +1404,7 @@ to `C` when all the block results are ready.
 
  .. code-block:: none
 
-    block caching: 0.113137
+    block caching: 0.111464
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1478,7 +1478,7 @@ of thread-level parallelization.
 
  .. code-block:: none
 
-    parallelization: 0.133806
+    parallelization: 0.132848
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1548,13 +1548,13 @@ working, we can compare the results.
  .. code-block:: none
 
                 Operator                  Timing             Performance
-                    none            3.3593791128                     1.0
-                blocking            0.3212007641     0.09561313365203465
-           vectorization     0.31956787359999994     0.09512706451688453
-        loop permutation            0.1212361367    0.036088852323354244
-           array packing              0.10650887     0.03170492713792764
-           block caching            0.1131365701      0.0336778214965152
-         parallelization            0.1338055825    0.039830450213305856
+                    none            3.5213420559                     1.0
+                blocking            0.3159241834      0.0897169824415864
+           vectorization     0.29143462470000003     0.08276237300256079
+        loop permutation            0.1168059489     0.03317086129258358
+           array packing            0.1077835381    0.030608653288711034
+           block caching            0.1114637584     0.03165377195130554
+         parallelization            0.1328480482     0.03772653894199612
 
 
 
@@ -1596,7 +1596,7 @@ the computation for specific platforms.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  0.483 seconds)
+   **Total running time of the script:** ( 1 minutes  2.094 seconds)
 
 
 .. _sphx_glr_download_tutorial_tensor_expr_get_started.py:
diff --git a/docs/api/rust/help.html b/docs/api/rust/help.html
index 131b87bfc3..b7e9afb4d7 100644
--- a/docs/api/rust/help.html
+++ b/docs/api/rust/help.html
@@ -1 +1 @@
-<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><meta name="viewport" content="width=device-width, initial-scale=1.0"><meta name="generator" content="rustdoc"><meta name="description" content="Documentation for Rustdoc"><title>Rustdoc help</title><link rel="preload" as="font" type="font/woff2" crossorigin href="./static.files/SourceSerif4-Regular-46f98efaafac5295.ttf.woff2"><link rel="preload" as="font" type="font/woff2" crossorigin href="./static.files/FiraSans-Regular-018c14 [...]
\ No newline at end of file
+<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><meta name="viewport" content="width=device-width, initial-scale=1.0"><meta name="generator" content="rustdoc"><meta name="description" content="Documentation for Rustdoc"><title>Rustdoc help</title><link rel="preload" as="font" type="font/woff2" crossorigin href="./static.files/SourceSerif4-Regular-46f98efaafac5295.ttf.woff2"><link rel="preload" as="font" type="font/woff2" crossorigin href="./static.files/FiraSans-Regular-018c14 [...]
\ No newline at end of file
diff --git a/docs/api/rust/settings.html b/docs/api/rust/settings.html
index 9987e08388..6212392595 100644
--- a/docs/api/rust/settings.html
+++ b/docs/api/rust/settings.html
@@ -1 +1 @@
-<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><meta name="viewport" content="width=device-width, initial-scale=1.0"><meta name="generator" content="rustdoc"><meta name="description" content="Settings of Rustdoc"><title>Rustdoc settings</title><link rel="preload" as="font" type="font/woff2" crossorigin href="./static.files/SourceSerif4-Regular-46f98efaafac5295.ttf.woff2"><link rel="preload" as="font" type="font/woff2" crossorigin href="./static.files/FiraSans-Regular-018c141b [...]
\ No newline at end of file
+<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><meta name="viewport" content="width=device-width, initial-scale=1.0"><meta name="generator" content="rustdoc"><meta name="description" content="Settings of Rustdoc"><title>Rustdoc settings</title><link rel="preload" as="font" type="font/woff2" crossorigin href="./static.files/SourceSerif4-Regular-46f98efaafac5295.ttf.woff2"><link rel="preload" as="font" type="font/woff2" crossorigin href="./static.files/FiraSans-Regular-018c141b [...]
\ No newline at end of file
diff --git a/docs/commit_hash b/docs/commit_hash
index 0e076fe3a2..214b9e527d 100644
--- a/docs/commit_hash
+++ b/docs/commit_hash
@@ -1 +1 @@
-9ff74fb13f5c7fdedb91d47755b98b1d77fa9132
+304aa1e084be8cf294a9f851287a09deed102285
diff --git a/docs/how_to/compile_models/from_darknet.html b/docs/how_to/compile_models/from_darknet.html
index 94f959430a..8e526b690e 100644
--- a/docs/how_to/compile_models/from_darknet.html
+++ b/docs/how_to/compile_models/from_darknet.html
@@ -595,7 +595,7 @@ class:[&#39;truck 0.9266&#39;] left:471 top:83 right:689 bottom:169
 class:[&#39;bicycle 0.9984&#39;] left:111 top:113 right:577 bottom:447
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  36.871 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  36.924 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-darknet-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7716f96385bd5abb6e822041e285be54/from_darknet.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_darknet.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/from_mxnet.html b/docs/how_to/compile_models/from_mxnet.html
index c6f97c91bc..856101f774 100644
--- a/docs/how_to/compile_models/from_mxnet.html
+++ b/docs/how_to/compile_models/from_mxnet.html
@@ -449,7 +449,7 @@
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;x&quot;</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.html#tuple" title="builtins.tuple" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">x</span><span class="o">.</span><span class="n">shape</span></a><span class="p">)</span>
 </pre></div>
 </div>
-<img src="../../_images/sphx_glr_from_mxnet_001.png" srcset="../../_images/sphx_glr_from_mxnet_001.png" alt="from mxnet" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip578cabc6-c1fc-4405-8b72-98f7aacf70eb from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
+<img src="../../_images/sphx_glr_from_mxnet_001.png" srcset="../../_images/sphx_glr_from_mxnet_001.png" alt="from mxnet" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip276663bb-29f7-4498-88ce-3368a555ad78 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
 x (1, 3, 224, 224)
 </pre></div>
 </div>
diff --git a/docs/how_to/compile_models/from_oneflow.html b/docs/how_to/compile_models/from_oneflow.html
index 4df9483572..b4244ddb1c 100644
--- a/docs/how_to/compile_models/from_oneflow.html
+++ b/docs/how_to/compile_models/from_oneflow.html
@@ -459,14 +459,14 @@ Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdo
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading: &quot;https://oneflow-public.oss-cn-beijing.aliyuncs.com/model_zoo/flowvision/classification/ResNet/resnet18.zip&quot; to /workspace/.oneflow/flowvision_cache/resnet18.zip
 
   0%|          | 0.00/41.5M [00:00&lt;?, ?B/s]
- 19%|#9        | 7.99M/41.5M [00:00&lt;00:00, 43.6MB/s]
- 35%|###4      | 14.3M/41.5M [00:00&lt;00:00, 40.9MB/s]
- 44%|####3     | 18.2M/41.5M [00:00&lt;00:00, 35.7MB/s]
- 58%|#####7    | 24.0M/41.5M [00:00&lt;00:00, 37.4MB/s]
- 77%|#######7  | 32.0M/41.5M [00:00&lt;00:00, 44.4MB/s]
- 87%|########7 | 36.3M/41.5M [00:00&lt;00:00, 39.8MB/s]
- 97%|#########6| 40.1M/41.5M [00:01&lt;00:00, 36.6MB/s]
-100%|##########| 41.5M/41.5M [00:01&lt;00:00, 39.8MB/s]
+ 15%|#5        | 6.33M/41.5M [00:00&lt;00:00, 59.4MB/s]
+ 29%|##8       | 12.0M/41.5M [00:00&lt;00:00, 55.4MB/s]
+ 42%|####1     | 17.3M/41.5M [00:00&lt;00:00, 37.4MB/s]
+ 54%|#####3    | 22.3M/41.5M [00:00&lt;00:00, 38.5MB/s]
+ 63%|######3   | 26.3M/41.5M [00:00&lt;00:00, 38.6MB/s]
+ 77%|#######7  | 32.0M/41.5M [00:00&lt;00:00, 37.4MB/s]
+ 96%|#########6| 40.0M/41.5M [00:00&lt;00:00, 43.3MB/s]
+100%|##########| 41.5M/41.5M [00:01&lt;00:00, 43.3MB/s]
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/compile_models/from_paddle.html b/docs/how_to/compile_models/from_paddle.html
index 6e3d274f9b..f2172ff208 100644
--- a/docs/how_to/compile_models/from_paddle.html
+++ b/docs/how_to/compile_models/from_paddle.html
@@ -494,7 +494,7 @@ To begin, we’ll install PaddlePaddle&gt;=2.1.3:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>TVM prediction top-1 id: 282, class name:  282: &#39;tiger cat&#39;,
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  3.743 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  4.090 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-paddle-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/16269b77359771348d507395692524cf/from_paddle.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_paddle.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/from_pytorch.html b/docs/how_to/compile_models/from_pytorch.html
index 5a2067c665..5802f23102 100644
--- a/docs/how_to/compile_models/from_pytorch.html
+++ b/docs/how_to/compile_models/from_pytorch.html
@@ -442,14 +442,15 @@ be unstable.</p>
 Downloading: &quot;https://download.pytorch.org/models/resnet18-f37072fd.pth&quot; to /workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
 
   0%|          | 0.00/44.7M [00:00&lt;?, ?B/s]
- 14%|#4        | 6.30M/44.7M [00:00&lt;00:00, 53.7MB/s]
- 26%|##5       | 11.4M/44.7M [00:00&lt;00:00, 42.1MB/s]
- 36%|###5      | 16.0M/44.7M [00:00&lt;00:00, 42.0MB/s]
- 54%|#####3    | 24.0M/44.7M [00:00&lt;00:00, 42.2MB/s]
- 68%|######7   | 30.3M/44.7M [00:00&lt;00:00, 44.1MB/s]
- 81%|########  | 36.2M/44.7M [00:00&lt;00:00, 48.5MB/s]
- 92%|#########1| 41.0M/44.7M [00:00&lt;00:00, 41.7MB/s]
-100%|##########| 44.7M/44.7M [00:01&lt;00:00, 46.6MB/s]
+ 14%|#4        | 6.30M/44.7M [00:00&lt;00:00, 48.1MB/s]
+ 24%|##4       | 10.9M/44.7M [00:00&lt;00:00, 35.5MB/s]
+ 32%|###2      | 14.4M/44.7M [00:00&lt;00:00, 35.8MB/s]
+ 45%|####4     | 20.1M/44.7M [00:00&lt;00:00, 43.8MB/s]
+ 55%|#####4    | 24.4M/44.7M [00:00&lt;00:00, 36.0MB/s]
+ 72%|#######1  | 32.0M/44.7M [00:00&lt;00:00, 37.1MB/s]
+ 86%|########5 | 38.3M/44.7M [00:01&lt;00:00, 41.5MB/s]
+ 95%|#########5| 42.5M/44.7M [00:01&lt;00:00, 41.3MB/s]
+100%|##########| 44.7M/44.7M [00:01&lt;00:00, 40.1MB/s]
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/compile_models/from_tensorflow.html b/docs/how_to/compile_models/from_tensorflow.html
index d690389fa8..080604e38a 100644
--- a/docs/how_to/compile_models/from_tensorflow.html
+++ b/docs/how_to/compile_models/from_tensorflow.html
@@ -662,7 +662,7 @@ banana (score = 0.00022)
 desk (score = 0.00019)
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  47.707 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  40.402 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-tensorflow-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7f1d3d1b878694c201c614c807cdebc8/from_tensorflow.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_tensorflow.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/sg_execution_times.html b/docs/how_to/compile_models/sg_execution_times.html
index 2af6687241..0eeff32d13 100644
--- a/docs/how_to/compile_models/sg_execution_times.html
+++ b/docs/how_to/compile_models/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-compile-models-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>07:40.914</strong> total execution time for <strong>how_to_compile_models</strong> files:</p>
+<p><strong>07:33.864</strong> total execution time for <strong>how_to_compile_models</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 81%" />
@@ -359,43 +359,43 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_tensorflow.html#sphx-glr-how-to-compile-models-from-tensorflow-py"><span class="std std-ref">Compile Tensorflow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tensorflow.py</span></code>)</p></td>
-<td><p>01:47.707</p></td>
+<td><p>01:40.402</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_darknet.html#sphx-glr-how-to-compile-models-from-darknet-py"><span class="std std-ref">Compile YOLO-V2 and YOLO-V3 in DarkNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_darknet.py</span></code>)</p></td>
-<td><p>01:36.871</p></td>
+<td><p>01:36.924</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_paddle.html#sphx-glr-how-to-compile-models-from-paddle-py"><span class="std std-ref">Compile PaddlePaddle Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_paddle.py</span></code>)</p></td>
-<td><p>01:03.743</p></td>
+<td><p>01:04.090</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_oneflow.html#sphx-glr-how-to-compile-models-from-oneflow-py"><span class="std std-ref">Compile OneFlow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_oneflow.py</span></code>)</p></td>
-<td><p>00:42.938</p></td>
+<td><p>00:42.876</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_coreml.html#sphx-glr-how-to-compile-models-from-coreml-py"><span class="std std-ref">Compile CoreML Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_coreml.py</span></code>)</p></td>
-<td><p>00:38.379</p></td>
+<td><p>00:38.641</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_mxnet.html#sphx-glr-how-to-compile-models-from-mxnet-py"><span class="std std-ref">Compile MXNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_mxnet.py</span></code>)</p></td>
-<td><p>00:36.548</p></td>
+<td><p>00:36.101</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="from_pytorch.html#sphx-glr-how-to-compile-models-from-pytorch-py"><span class="std std-ref">Compile PyTorch Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_pytorch.py</span></code>)</p></td>
-<td><p>00:29.616</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="from_keras.html#sphx-glr-how-to-compile-models-from-keras-py"><span class="std std-ref">Compile Keras Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_keras.py</span></code>)</p></td>
+<td><p>00:29.175</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="from_keras.html#sphx-glr-how-to-compile-models-from-keras-py"><span class="std std-ref">Compile Keras Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_keras.py</span></code>)</p></td>
-<td><p>00:28.842</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="from_pytorch.html#sphx-glr-how-to-compile-models-from-pytorch-py"><span class="std std-ref">Compile PyTorch Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_pytorch.py</span></code>)</p></td>
+<td><p>00:28.962</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_tflite.html#sphx-glr-how-to-compile-models-from-tflite-py"><span class="std std-ref">Compile TFLite Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tflite.py</span></code>)</p></td>
-<td><p>00:13.353</p></td>
+<td><p>00:13.819</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_onnx.html#sphx-glr-how-to-compile-models-from-onnx-py"><span class="std std-ref">Compile ONNX Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_onnx.py</span></code>)</p></td>
-<td><p>00:02.917</p></td>
+<td><p>00:02.875</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/deploy_models/deploy_model_on_adreno.html b/docs/how_to/deploy_models/deploy_model_on_adreno.html
index 73b9317e14..f50e559136 100644
--- a/docs/how_to/deploy_models/deploy_model_on_adreno.html
+++ b/docs/how_to/deploy_models/deploy_model_on_adreno.html
@@ -840,10 +840,10 @@ Top5 predictions:
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
- 4224.9461    4224.8727    4227.2275    4223.2891      1.1490
+ 4223.1123    4223.3072    4224.3717    4221.5586      0.8522
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  21.495 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  21.201 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-model-on-adreno-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/2387d8448da213eb625e6b3d916327d4/deploy_model_on_adreno.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_model_on_adreno.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html b/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html
index 5abb4d4863..0ffe8f3b38 100644
--- a/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html
+++ b/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html
@@ -448,26 +448,29 @@ to run this tutorial with a real device over rpc.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels.h5
 
      8192/102967424 [..............................] - ETA: 0s
-  8380416/102967424 [=&gt;............................] - ETA: 0s
- 16769024/102967424 [===&gt;..........................] - ETA: 0s
- 23412736/102967424 [=====&gt;........................] - ETA: 0s
- 24870912/102967424 [======&gt;.......................] - ETA: 1s
+  6635520/102967424 [&gt;.............................] - ETA: 1s
+  8380416/102967424 [=&gt;............................] - ETA: 2s
+ 15024128/102967424 [===&gt;..........................] - ETA: 1s
+ 16769024/102967424 [===&gt;..........................] - ETA: 2s
  25157632/102967424 [======&gt;.......................] - ETA: 1s
- 31113216/102967424 [========&gt;.....................] - ETA: 1s
- 36765696/102967424 [=========&gt;....................] - ETA: 0s
- 40189952/102967424 [==========&gt;...................] - ETA: 0s
- 41934848/102967424 [===========&gt;..................] - ETA: 0s
- 50323456/102967424 [=============&gt;................] - ETA: 0s
- 54771712/102967424 [==============&gt;...............] - ETA: 0s
+ 33546240/102967424 [========&gt;.....................] - ETA: 1s
+ 40189952/102967424 [==========&gt;...................] - ETA: 1s
+ 41934848/102967424 [===========&gt;..................] - ETA: 1s
+ 48578560/102967424 [=============&gt;................] - ETA: 1s
+ 50323456/102967424 [=============&gt;................] - ETA: 1s
+ 56967168/102967424 [===============&gt;..............] - ETA: 0s
  58712064/102967424 [================&gt;.............] - ETA: 0s
- 58851328/102967424 [================&gt;.............] - ETA: 0s
  65355776/102967424 [==================&gt;...........] - ETA: 0s
+ 67100672/102967424 [==================&gt;...........] - ETA: 0s
  69296128/102967424 [===================&gt;..........] - ETA: 0s
  75489280/102967424 [====================&gt;.........] - ETA: 0s
+ 82124800/102967424 [======================&gt;.......] - ETA: 0s
  83877888/102967424 [=======================&gt;......] - ETA: 0s
  90521600/102967424 [=========================&gt;....] - ETA: 0s
- 93937664/102967424 [==========================&gt;...] - ETA: 0s
+ 92266496/102967424 [=========================&gt;....] - ETA: 0s
+ 98910208/102967424 [===========================&gt;..] - ETA: 0s
 100646912/102967424 [============================&gt;.] - ETA: 0s
+102850560/102967424 [============================&gt;.] - ETA: 0s
 102967424/102967424 [==============================] - 2s 0us/step
 </pre></div>
 </div>
diff --git a/docs/how_to/deploy_models/deploy_model_on_android.html b/docs/how_to/deploy_models/deploy_model_on_android.html
index 6fd5b674d7..0f17a03f47 100644
--- a/docs/how_to/deploy_models/deploy_model_on_android.html
+++ b/docs/how_to/deploy_models/deploy_model_on_android.html
@@ -672,7 +672,7 @@ to the remote android device.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  16.3026      16.2233      17.3629      15.5300       0.4568
+  15.0815      15.0722      15.2434      14.8671       0.1197
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/deploy_models/deploy_object_detection_pytorch.html b/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
index ddd60bf7bc..4da067c372 100644
--- a/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
+++ b/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
@@ -464,34 +464,39 @@ be unstable.</p>
 Downloading: &quot;https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth&quot; to /workspace/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
 
   0%|          | 0.00/170M [00:00&lt;?, ?B/s]
-  4%|3         | 6.30M/170M [00:00&lt;00:02, 66.1MB/s]
-  7%|7         | 12.6M/170M [00:00&lt;00:03, 45.0MB/s]
- 10%|#         | 17.3M/170M [00:00&lt;00:04, 32.2MB/s]
- 14%|#4        | 24.0M/170M [00:00&lt;00:04, 37.9MB/s]
- 18%|#8        | 31.3M/170M [00:00&lt;00:03, 47.6MB/s]
- 21%|##1       | 36.4M/170M [00:00&lt;00:03, 35.9MB/s]
- 24%|##3       | 40.6M/170M [00:01&lt;00:03, 34.5MB/s]
- 28%|##8       | 48.0M/170M [00:01&lt;00:03, 39.0MB/s]
- 31%|###1      | 53.2M/170M [00:01&lt;00:02, 41.5MB/s]
- 34%|###3      | 57.4M/170M [00:01&lt;00:03, 37.4MB/s]
- 38%|###7      | 64.0M/170M [00:01&lt;00:02, 38.8MB/s]
- 42%|####2     | 72.0M/170M [00:01&lt;00:02, 40.8MB/s]
- 46%|####6     | 78.6M/170M [00:02&lt;00:02, 46.7MB/s]
- 49%|####9     | 83.4M/170M [00:02&lt;00:01, 46.8MB/s]
- 52%|#####1    | 88.0M/170M [00:02&lt;00:02, 30.6MB/s]
- 57%|#####6    | 96.0M/170M [00:02&lt;00:02, 34.9MB/s]
- 61%|######1   | 104M/170M [00:02&lt;00:01, 38.0MB/s]
- 66%|######5   | 112M/170M [00:02&lt;00:01, 39.8MB/s]
- 71%|#######   | 120M/170M [00:03&lt;00:01, 45.6MB/s]
- 74%|#######4  | 126M/170M [00:03&lt;00:01, 44.6MB/s]
- 77%|#######7  | 131M/170M [00:03&lt;00:00, 45.8MB/s]
- 80%|########  | 136M/170M [00:03&lt;00:00, 39.1MB/s]
- 85%|########4 | 144M/170M [00:03&lt;00:00, 41.2MB/s]
- 88%|########8 | 150M/170M [00:03&lt;00:00, 43.5MB/s]
- 91%|#########1| 155M/170M [00:04&lt;00:00, 41.7MB/s]
- 94%|#########4| 160M/170M [00:04&lt;00:00, 34.7MB/s]
- 98%|#########7| 166M/170M [00:04&lt;00:00, 41.1MB/s]
-100%|##########| 170M/170M [00:04&lt;00:00, 40.3MB/s]
+  4%|3         | 6.30M/170M [00:00&lt;00:03, 48.7MB/s]
+  6%|6         | 11.0M/170M [00:00&lt;00:03, 45.6MB/s]
+  9%|9         | 16.0M/170M [00:00&lt;00:04, 33.2MB/s]
+ 13%|#3        | 22.3M/170M [00:00&lt;00:04, 34.5MB/s]
+ 15%|#5        | 25.8M/170M [00:00&lt;00:04, 31.5MB/s]
+ 18%|#7        | 30.3M/170M [00:00&lt;00:04, 30.6MB/s]
+ 20%|#9        | 33.3M/170M [00:01&lt;00:04, 30.4MB/s]
+ 24%|##3       | 40.0M/170M [00:01&lt;00:03, 36.1MB/s]
+ 27%|##7       | 46.3M/170M [00:01&lt;00:03, 34.2MB/s]
+ 29%|##9       | 49.6M/170M [00:01&lt;00:03, 33.0MB/s]
+ 33%|###2      | 56.0M/170M [00:01&lt;00:03, 35.8MB/s]
+ 38%|###7      | 64.0M/170M [00:01&lt;00:02, 38.1MB/s]
+ 42%|####2     | 72.0M/170M [00:02&lt;00:02, 41.7MB/s]
+ 46%|####6     | 78.3M/170M [00:02&lt;00:02, 44.5MB/s]
+ 49%|####8     | 82.6M/170M [00:02&lt;00:02, 43.2MB/s]
+ 51%|#####1    | 86.7M/170M [00:02&lt;00:02, 35.3MB/s]
+ 53%|#####3    | 90.2M/170M [00:02&lt;00:02, 34.1MB/s]
+ 57%|#####6    | 96.0M/170M [00:02&lt;00:02, 32.3MB/s]
+ 61%|######1   | 104M/170M [00:02&lt;00:01, 40.4MB/s]
+ 66%|######5   | 112M/170M [00:03&lt;00:01, 43.8MB/s]
+ 70%|######9   | 118M/170M [00:03&lt;00:01, 46.6MB/s]
+ 72%|#######2  | 123M/170M [00:03&lt;00:01, 46.5MB/s]
+ 75%|#######5  | 127M/170M [00:03&lt;00:00, 44.8MB/s]
+ 78%|#######7  | 132M/170M [00:03&lt;00:00, 42.1MB/s]
+ 80%|########  | 136M/170M [00:03&lt;00:00, 42.6MB/s]
+ 82%|########2 | 140M/170M [00:03&lt;00:00, 38.7MB/s]
+ 85%|########4 | 144M/170M [00:03&lt;00:00, 39.0MB/s]
+ 88%|########8 | 150M/170M [00:04&lt;00:00, 42.6MB/s]
+ 91%|######### | 154M/170M [00:04&lt;00:00, 41.7MB/s]
+ 93%|#########3| 158M/170M [00:04&lt;00:00, 33.6MB/s]
+ 95%|#########5| 162M/170M [00:04&lt;00:00, 29.1MB/s]
+ 98%|#########7| 166M/170M [00:04&lt;00:00, 33.1MB/s]
+100%|##########| 170M/170M [00:04&lt;00:00, 38.0MB/s]
 /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/nn/functional.py:3912: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
   (torch.floor((input.size(i + 2).float() * torch.tensor(scale_factors[i], dtype=torch.float32)).float()))
 /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/ops/boxes.py:157: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
@@ -585,7 +590,7 @@ torchvision rcnn models.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Get 9 valid boxes
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  39.426 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  40.022 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-object-detection-pytorch-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7795da4b258c8feff986668b95ef57ad/deploy_object_detection_pytorch.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_object_detection_pytorch.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_prequantized.html b/docs/how_to/deploy_models/deploy_prequantized.html
index 599450324f..192b8cef92 100644
--- a/docs/how_to/deploy_models/deploy_prequantized.html
+++ b/docs/how_to/deploy_models/deploy_prequantized.html
@@ -505,9 +505,9 @@ training. Other models require a full post training calibration.</p>
 Downloading: &quot;https://download.pytorch.org/models/mobilenet_v2-b0353104.pth&quot; to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth
 
   0%|          | 0.00/13.6M [00:00&lt;?, ?B/s]
- 59%|#####8    | 7.99M/13.6M [00:00&lt;00:00, 50.9MB/s]
- 95%|#########4| 12.9M/13.6M [00:00&lt;00:00, 32.6MB/s]
-100%|##########| 13.6M/13.6M [00:00&lt;00:00, 36.6MB/s]
+ 47%|####6     | 6.30M/13.6M [00:00&lt;00:00, 46.9MB/s]
+ 79%|#######9  | 10.8M/13.6M [00:00&lt;00:00, 35.2MB/s]
+100%|##########| 13.6M/13.6M [00:00&lt;00:00, 29.7MB/s]
 </pre></div>
 </div>
 </div>
@@ -598,7 +598,7 @@ output values are identical out of 1000 outputs from mobilenet v2.</p>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  89.1665      89.1415      92.0607      88.7729       0.3653
+  88.9931      88.9394      89.8859      88.7118       0.2072
 </pre></div>
 </div>
 <div class="admonition note">
@@ -637,7 +637,7 @@ This includes support for the VNNI 8 bit dot product instruction (CascadeLake or
 <div class="section" id="deploy-a-quantized-tflite-model">
 <h2>Deploy a quantized TFLite Model<a class="headerlink" href="#deploy-a-quantized-tflite-model" title="Permalink to this headline">¶</a></h2>
 <p>TODO</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  29.969 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  29.522 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-prequantized-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/fb8217c13f4351224c6cf3aacf1a87fc/deploy_prequantized.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_prequantized.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_prequantized_tflite.html b/docs/how_to/deploy_models/deploy_prequantized_tflite.html
index b36e24b564..9ae2a6b04b 100644
--- a/docs/how_to/deploy_models/deploy_prequantized_tflite.html
+++ b/docs/how_to/deploy_models/deploy_prequantized_tflite.html
@@ -590,7 +590,7 @@ TFLite Top-5 labels: [387 102 386 341 349]
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  111.0840     111.0420     113.8326     110.4673      0.3670
+  110.4352     110.3864     113.3146     109.5863      0.4803
 </pre></div>
 </div>
 <div class="admonition note">
diff --git a/docs/how_to/deploy_models/deploy_quantized.html b/docs/how_to/deploy_models/deploy_quantized.html
index aae4fd585f..2d2a1ab1e1 100644
--- a/docs/how_to/deploy_models/deploy_quantized.html
+++ b/docs/how_to/deploy_models/deploy_quantized.html
@@ -531,7 +531,7 @@ for calibration. But the accuracy might be impacted.</p>
   warnings.warn(
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  28.603 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  29.162 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-quantized-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7810ecf51bfc05f7d5e8a400ac3e815d/deploy_quantized.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_quantized.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/sg_execution_times.html b/docs/how_to/deploy_models/sg_execution_times.html
index ce916699cc..54bf62f0ef 100644
--- a/docs/how_to/deploy_models/sg_execution_times.html
+++ b/docs/how_to/deploy_models/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-deploy-models-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>12:38.494</strong> total execution time for <strong>how_to_deploy_models</strong> files:</p>
+<p><strong>12:41.137</strong> total execution time for <strong>how_to_deploy_models</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 86%" />
@@ -359,39 +359,39 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_object_detection_pytorch.html#sphx-glr-how-to-deploy-models-deploy-object-detection-pytorch-py"><span class="std std-ref">Compile PyTorch Object Detection Models</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_object_detection_pytorch.py</span></code>)</p></td>
-<td><p>03:39.426</p></td>
+<td><p>03:40.022</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_quantized.html#sphx-glr-how-to-deploy-models-deploy-quantized-py"><span class="std std-ref">Deploy a Quantized Model on Cuda</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_quantized.py</span></code>)</p></td>
-<td><p>02:28.603</p></td>
+<td><p>02:29.162</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_prequantized.html#sphx-glr-how-to-deploy-models-deploy-prequantized-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized.py</span></code>)</p></td>
-<td><p>01:29.969</p></td>
+<td><p>01:29.522</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_adreno.html#sphx-glr-how-to-deploy-models-deploy-model-on-adreno-py"><span class="std std-ref">Deploy the Pretrained Model on Adreno™</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_adreno.py</span></code>)</p></td>
-<td><p>01:21.495</p></td>
+<td><p>01:21.201</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_prequantized_tflite.html#sphx-glr-how-to-deploy-models-deploy-prequantized-tflite-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM - Part 3 (TFLite)</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized_tflite.py</span></code>)</p></td>
-<td><p>00:55.690</p></td>
+<td><p>00:56.434</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_android.html#sphx-glr-how-to-deploy-models-deploy-model-on-android-py"><span class="std std-ref">Deploy the Pretrained Model on Android</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_android.py</span></code>)</p></td>
-<td><p>00:51.531</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_adreno_tvmc.html#sphx-glr-how-to-deploy-models-deploy-model-on-adreno-tvmc-py"><span class="std std-ref">Deploy the Pretrained Model on Adreno™ with tvmc Interface</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_adreno_tvmc.py</span></code>)</p></td>
+<td><p>00:51.462</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="deploy_model_on_adreno_tvmc.html#sphx-glr-how-to-deploy-models-deploy-model-on-adreno-tvmc-py"><span class="std std-ref">Deploy the Pretrained Model on Adreno™ with tvmc Interface</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_adreno_tvmc.py</span></code>)</p></td>
-<td><p>00:49.526</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="deploy_model_on_android.html#sphx-glr-how-to-deploy-models-deploy-model-on-android-py"><span class="std std-ref">Deploy the Pretrained Model on Android</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_android.py</span></code>)</p></td>
+<td><p>00:51.440</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_nano.html#sphx-glr-how-to-deploy-models-deploy-model-on-nano-py"><span class="std std-ref">Deploy the Pretrained Model on Jetson Nano</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_nano.py</span></code>)</p></td>
-<td><p>00:31.402</p></td>
+<td><p>00:31.262</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_model_on_rasp.html#sphx-glr-how-to-deploy-models-deploy-model-on-rasp-py"><span class="std std-ref">Deploy the Pretrained Model on Raspberry Pi</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_rasp.py</span></code>)</p></td>
-<td><p>00:30.846</p></td>
+<td><p>00:30.625</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_sparse.html#sphx-glr-how-to-deploy-models-deploy-sparse-py"><span class="std std-ref">Deploy a Hugging Face Pruned Model on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_sparse.py</span></code>)</p></td>
diff --git a/docs/how_to/extend_tvm/bring_your_own_datatypes.html b/docs/how_to/extend_tvm/bring_your_own_datatypes.html
index 6e18d74ac7..368a6f1aba 100644
--- a/docs/how_to/extend_tvm/bring_your_own_datatypes.html
+++ b/docs/how_to/extend_tvm/bring_your_own_datatypes.html
@@ -629,7 +629,7 @@ In this alpha state of the Bring Your Own Datatypes framework, we have not imple
 <span class="n">module</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">params</span></a> <span class="o">=</span> <span class="n">get_mobilenet</span><span class="p">()</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip67c5a67a-b48a-4a9b-bc63-01fe0c94402b from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip677c63cc-09ce-480e-bc10-c04ae7c19ab0 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
 </pre></div>
 </div>
 <p>It’s easy to execute MobileNet with native TVM:</p>
diff --git a/docs/how_to/extend_tvm/sg_execution_times.html b/docs/how_to/extend_tvm/sg_execution_times.html
index 728845f1c9..b830b57332 100644
--- a/docs/how_to/extend_tvm/sg_execution_times.html
+++ b/docs/how_to/extend_tvm/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-extend-tvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>01:00.749</strong> total execution time for <strong>how_to_extend_tvm</strong> files:</p>
+<p><strong>00:59.510</strong> total execution time for <strong>how_to_extend_tvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,19 +359,19 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="bring_your_own_datatypes.html#sphx-glr-how-to-extend-tvm-bring-your-own-datatypes-py"><span class="std std-ref">Bring Your Own Datatypes to TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">bring_your_own_datatypes.py</span></code>)</p></td>
-<td><p>00:56.688</p></td>
+<td><p>00:55.538</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="use_pass_instrument.html#sphx-glr-how-to-extend-tvm-use-pass-instrument-py"><span class="std std-ref">How to Use TVM Pass Instrument</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_instrument.py</span></code>)</p></td>
-<td><p>00:02.827</p></td>
+<td><p>00:02.754</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="use_pass_infra.html#sphx-glr-how-to-extend-tvm-use-pass-infra-py"><span class="std std-ref">How to Use TVM Pass Infra</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_infra.py</span></code>)</p></td>
-<td><p>00:01.227</p></td>
+<td><p>00:01.211</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="low_level_custom_pass.html#sphx-glr-how-to-extend-tvm-low-level-custom-pass-py"><span class="std std-ref">Writing a Customized Pass</span></a> (<code class="docutils literal notranslate"><span class="pre">low_level_custom_pass.py</span></code>)</p></td>
-<td><p>00:00.008</p></td>
+<td><p>00:00.007</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/extend_tvm/use_pass_instrument.html b/docs/how_to/extend_tvm/use_pass_instrument.html
index a25e194565..4ebcd39553 100644
--- a/docs/how_to/extend_tvm/use_pass_instrument.html
+++ b/docs/how_to/extend_tvm/use_pass_instrument.html
@@ -536,10 +536,10 @@ profile the execution time of each passes.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Printing results of timing profile...
-InferType: 23775us [23775us] (48.16%; 48.16%)
-FoldScaleAxis: 25596us [10us] (51.84%; 51.84%)
-        FoldConstant: 25586us [1894us] (51.82%; 99.96%)
-                InferType: 23691us [23691us] (47.99%; 92.60%)
+InferType: 23807us [23807us] (48.39%; 48.39%)
+FoldScaleAxis: 25395us [9us] (51.61%; 51.61%)
+        FoldConstant: 25386us [1857us] (51.60%; 99.97%)
+                InferType: 23529us [23529us] (47.82%; 92.68%)
 </pre></div>
 </div>
 </div>
@@ -561,10 +561,10 @@ Refer to following sections and <a class="reference internal" href="../../refere
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Printing results of timing profile...
-InferType: 23734us [23734us] (48.04%; 48.04%)
-FoldScaleAxis: 25676us [8us] (51.96%; 51.96%)
-        FoldConstant: 25667us [1964us] (51.95%; 99.97%)
-                InferType: 23703us [23703us] (47.97%; 92.35%)
+InferType: 23673us [23673us] (47.75%; 47.75%)
+FoldScaleAxis: 25907us [7us] (52.25%; 52.25%)
+        FoldConstant: 25900us [1875us] (52.24%; 99.97%)
+                InferType: 24025us [24025us] (48.46%; 92.76%)
 </pre></div>
 </div>
 <p>Register empty list to clear existing instruments.</p>
diff --git a/docs/how_to/optimize_operators/opt_conv_cuda.html b/docs/how_to/optimize_operators/opt_conv_cuda.html
index d60696de36..e167d8acd5 100644
--- a/docs/how_to/optimize_operators/opt_conv_cuda.html
+++ b/docs/how_to/optimize_operators/opt_conv_cuda.html
@@ -585,7 +585,7 @@ latency of convolution.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Convolution: </span><span class="si">%f</span><span class="s2"> ms&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">b</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span> <span class="o">*</span> <span cl [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Convolution: 36.925407 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Convolution: 53.573184 ms
 </pre></div>
 </div>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-optimize-operators-opt-conv-cuda-py">
diff --git a/docs/how_to/optimize_operators/opt_conv_tensorcore.html b/docs/how_to/optimize_operators/opt_conv_tensorcore.html
index 1344ff9bdb..b3c5e0f183 100644
--- a/docs/how_to/optimize_operators/opt_conv_tensorcore.html
+++ b/docs/how_to/optimize_operators/opt_conv_tensorcore.html
@@ -867,7 +867,7 @@ be able to run on our build server</p>
     <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;conv2d with tensor core: </span><span class="si">%f</span><span class="s2"> ms&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span> <span class="o">* [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>conv2d with tensor core: 11.709709 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>conv2d with tensor core: 12.276105 ms
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/optimize_operators/opt_gemm.html b/docs/how_to/optimize_operators/opt_gemm.html
index 82fb721bee..019365a7d6 100644
--- a/docs/how_to/optimize_operators/opt_gemm.html
+++ b/docs/how_to/optimize_operators/opt_gemm.html
@@ -482,8 +482,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Baseline: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.019280
-Baseline: 3.352708
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.019140
+Baseline: 3.501454
 </pre></div>
 </div>
 <p>In TVM, we can always inspect lower level IR to debug or optimize our schedule.
@@ -542,7 +542,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt1: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt1: 0.320091
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt1: 0.300509
 </pre></div>
 </div>
 <p>Here is the generated IR after blocking.</p>
@@ -599,7 +599,7 @@ vastly.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt2: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt2: 0.314908
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt2: 0.281805
 </pre></div>
 </div>
 <p>Here is the generated IR after vectorization.</p>
@@ -654,7 +654,7 @@ the access pattern for A matrix is more cache friendly.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt3: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt3: 0.129901
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt3: 0.118494
 </pre></div>
 </div>
 <p>Here is the generated IR after loop permutation.</p>
@@ -731,7 +731,7 @@ flattening.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt4: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt4: 0.108122
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt4: 0.107271
 </pre></div>
 </div>
 <p>Here is the generated IR after array packing.</p>
@@ -809,7 +809,7 @@ write to C when all the block results are ready.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt5: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt5: 0.112490
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt5: 0.111860
 </pre></div>
 </div>
 <p>Here is the generated IR after blocking.</p>
@@ -889,7 +889,7 @@ class Module:
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt6: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">opt6_time</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt6: 0.134238
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt6: 0.133092
 </pre></div>
 </div>
 <p>Here is the generated IR after parallelization.</p>
diff --git a/docs/how_to/optimize_operators/sg_execution_times.html b/docs/how_to/optimize_operators/sg_execution_times.html
index 1e8a94a1d6..a66f3bd233 100644
--- a/docs/how_to/optimize_operators/sg_execution_times.html
+++ b/docs/how_to/optimize_operators/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-optimize-operators-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:35.006</strong> total execution time for <strong>how_to_optimize_operators</strong> files:</p>
+<p><strong>00:34.605</strong> total execution time for <strong>how_to_optimize_operators</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -359,15 +359,15 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="opt_gemm.html#sphx-glr-how-to-optimize-operators-opt-gemm-py"><span class="std std-ref">How to optimize GEMM on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_gemm.py</span></code>)</p></td>
-<td><p>00:31.760</p></td>
+<td><p>00:31.360</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="opt_conv_tensorcore.html#sphx-glr-how-to-optimize-operators-opt-conv-tensorcore-py"><span class="std std-ref">How to optimize convolution using TensorCores</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_tensorcore.py</span></code>)</p></td>
-<td><p>00:02.017</p></td>
+<td><p>00:02.008</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="opt_conv_cuda.html#sphx-glr-how-to-optimize-operators-opt-conv-cuda-py"><span class="std std-ref">How to optimize convolution on GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_cuda.py</span></code>)</p></td>
-<td><p>00:01.228</p></td>
+<td><p>00:01.237</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/tune_with_autoscheduler/sg_execution_times.html b/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
index 106edf1b82..c447941da2 100644
--- a/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
+++ b/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-tune-with-autoscheduler-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>03:55.379</strong> total execution time for <strong>how_to_tune_with_autoscheduler</strong> files:</p>
+<p><strong>03:54.232</strong> total execution time for <strong>how_to_tune_with_autoscheduler</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 85%" />
@@ -359,27 +359,27 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_network_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-x86-py"><span class="std std-ref">Auto-scheduling a Neural Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_x86.py</span></code>)</p></td>
-<td><p>01:42.218</p></td>
+<td><p>01:42.650</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_network_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-cuda-py"><span class="std std-ref">Auto-scheduling a Neural Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_cuda.py</span></code>)</p></td>
-<td><p>01:20.710</p></td>
+<td><p>01:20.138</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_conv2d_layer_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-conv2d-layer-cuda-py"><span class="std std-ref">Auto-scheduling a Convolution Layer for GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_layer_cuda.py</span></code>)</p></td>
-<td><p>00:18.611</p></td>
+<td><p>00:17.672</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_network_mali.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-mali-py"><span class="std std-ref">Auto-scheduling a Neural Network for mali GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_mali.py</span></code>)</p></td>
-<td><p>00:16.926</p></td>
+<td><p>00:16.974</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_network_arm.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-arm-py"><span class="std std-ref">Auto-scheduling a Neural Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_arm.py</span></code>)</p></td>
-<td><p>00:16.806</p></td>
+<td><p>00:16.695</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_sparse_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-sparse-x86-py"><span class="std std-ref">Auto-scheduling Sparse Matrix Multiplication on CPU with Custom Sketch Rule</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_sparse_x86.py</span></code>)</p></td>
-<td><p>00:00.108</p></td>
+<td><p>00:00.103</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html b/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
index 0d867bcfc8..e661d200f8 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
@@ -1023,7 +1023,7 @@ class Module:
 <span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 0.359 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 0.352 ms
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html b/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
index 00fe0854d1..58cdfa2a61 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
@@ -926,7 +926,7 @@ so we can read the log file and load the best schedules.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-   8.1234       8.1193       8.1373       8.1136       0.0101
+   8.1281       8.1309       8.1393       8.1140       0.0105
 </pre></div>
 </div>
 </div>
@@ -948,7 +948,7 @@ to learn how to use the RPC Tracker and RPC Server.
 To use the RPC Tracker in auto-scheduler, replace the runner in <code class="code docutils literal notranslate"><span class="pre">TuningOptions</span></code>
 with <a class="reference internal" href="../../reference/api/python/auto_scheduler.html#tvm.auto_scheduler.RPCRunner" title="tvm.auto_scheduler.RPCRunner"><code class="xref any py py-class docutils literal notranslate"><span class="pre">auto_scheduler.RPCRunner</span></code></a>.</p></li>
 </ol>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  20.710 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  20.138 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-network-cuda-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/eafe360d52540634c9eea0fa89e804bd/tune_network_cuda.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_network_cuda.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_network_x86.html b/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
index 38d33b0d1d..5ca4a1117e 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
@@ -945,7 +945,7 @@ so we can read the log file and load the best schedules.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  769.1704     770.5453     772.0932     764.8725      3.1040
+  766.4102     766.4307     766.9942     765.8056      0.4855
 </pre></div>
 </div>
 </div>
@@ -967,7 +967,7 @@ to learn how to use the RPC Tracker and RPC Server.
 To use the RPC Tracker in auto-scheduler, replace the runner in <code class="code docutils literal notranslate"><span class="pre">TuningOptions</span></code>
 with <a class="reference internal" href="../../reference/api/python/auto_scheduler.html#tvm.auto_scheduler.RPCRunner" title="tvm.auto_scheduler.RPCRunner"><code class="xref any py py-class docutils literal notranslate"><span class="pre">auto_scheduler.RPCRunner</span></code></a>.</p></li>
 </ol>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  42.218 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  42.650 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-network-x86-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/e416b94ca1090b0897c0f6e0df95b911/tune_network_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_network_x86.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autotvm/sg_execution_times.html b/docs/how_to/tune_with_autotvm/sg_execution_times.html
index 9e39cc7bcd..fe536fffed 100644
--- a/docs/how_to/tune_with_autotvm/sg_execution_times.html
+++ b/docs/how_to/tune_with_autotvm/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-tune-with-autotvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:23.681</strong> total execution time for <strong>how_to_tune_with_autotvm</strong> files:</p>
+<p><strong>00:23.436</strong> total execution time for <strong>how_to_tune_with_autotvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,22 +359,22 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_conv2d_cuda.html#sphx-glr-how-to-tune-with-autotvm-tune-conv2d-cuda-py"><span class="std std-ref">Tuning High Performance Convolution on NVIDIA GPUs</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_cuda.py</span></code>)</p></td>
-<td><p>00:23.638</p></td>
+<td><p>00:23.397</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_relay_x86.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-x86-py"><span class="std std-ref">Auto-tuning a Convolutional Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_x86.py</span></code>)</p></td>
-<td><p>00:00.027</p></td>
+<td><p>00:00.023</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_relay_cuda.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-cuda-py"><span class="std std-ref">Auto-tuning a Convolutional Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_cuda.py</span></code>)</p></td>
 <td><p>00:00.006</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="tune_relay_mobile_gpu.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-mobile-gpu-py"><span class="std std-ref">Auto-tuning a Convolutional Network for Mobile GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_mobile_gpu.py</span></code>)</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="tune_relay_arm.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-arm-py"><span class="std std-ref">Auto-tuning a Convolutional Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_arm.py</span></code>)</p></td>
 <td><p>00:00.005</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="tune_relay_arm.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-arm-py"><span class="std std-ref">Auto-tuning a Convolutional Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_arm.py</span></code>)</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="tune_relay_mobile_gpu.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-mobile-gpu-py"><span class="std std-ref">Auto-tuning a Convolutional Network for Mobile GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_mobile_gpu.py</span></code>)</p></td>
 <td><p>00:00.005</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
diff --git a/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html b/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
index b3b9518043..d1b55882b3 100644
--- a/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
+++ b/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
@@ -620,7 +620,7 @@ and measure running time.</p>
 
 Best config:
 ,None
-Time cost of this operator: 0.037110
+Time cost of this operator: 0.037163
 </pre></div>
 </div>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autotvm-tune-conv2d-cuda-py">
diff --git a/docs/how_to/work_with_microtvm/micro_autotune.html b/docs/how_to/work_with_microtvm/micro_autotune.html
index c3f67751f2..b23ac6e101 100644
--- a/docs/how_to/work_with_microtvm/micro_autotune.html
+++ b/docs/how_to/work_with_microtvm/micro_autotune.html
@@ -654,10 +654,10 @@ the tuned operator.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>########## Build without Autotuning ##########
 Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)
 ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------
-tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  305.2     98.734   (1, 2, 10, 10, 3)  2       1        [305.2]
-tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.955     0.956    (1, 6, 10, 10)     1       1        [2.955]
-tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.958     0.31     (1, 1, 10, 10, 3)  1       1        [0.958]
-Total_time                                    -                                             309.112   -        -                  -       -        -
+tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  329.2     98.837   (1, 2, 10, 10, 3)  2       1        [329.2]
+tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.916     0.875    (1, 6, 10, 10)     1       1        [2.916]
+tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.959     0.288    (1, 1, 10, 10, 3)  1       1        [0.959]
+Total_time                                    -                                             333.074   -        -                  -       -        -
 </pre></div>
 </div>
 </div>
@@ -709,13 +709,13 @@ Total_time                                    -
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>########## Build with Autotuning ##########
 Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)
 ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------
-tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  133.8     97.985   (1, 6, 10, 10, 1)  2       1        [133.8]
-tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.772     1.297    (1, 6, 10, 10)     1       1        [1.772]
-tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.98      0.718    (1, 1, 10, 10, 3)  1       1        [0.98]
-Total_time                                    -                                             136.552   -        -                  -       -        -
+tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  102.7     97.467   (1, 6, 10, 10, 1)  2       1        [102.7]
+tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.81      1.718    (1, 6, 10, 10)     1       1        [1.81]
+tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.859     0.816    (1, 3, 10, 10, 1)  1       1        [0.859]
+Total_time                                    -                                             105.369   -        -                  -       -        -
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  32.513 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  28.305 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-autotune-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/9ccca8fd489a1486ac71b55a55c320c5/micro_autotune.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_autotune.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/micro_pytorch.html b/docs/how_to/work_with_microtvm/micro_pytorch.html
index 9d43c484a1..f762b1d428 100644
--- a/docs/how_to/work_with_microtvm/micro_pytorch.html
+++ b/docs/how_to/work_with_microtvm/micro_pytorch.html
@@ -465,8 +465,8 @@ download a cat image and preprocess it to use as the model input.</p>
 Downloading: &quot;https://download.pytorch.org/models/quantized/mobilenet_v2_qnnpack_37f702c5.pth&quot; to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2_qnnpack_37f702c5.pth
 
   0%|          | 0.00/3.42M [00:00&lt;?, ?B/s]
- 61%|######    | 2.09M/3.42M [00:00&lt;00:00, 14.8MB/s]
-100%|##########| 3.42M/3.42M [00:00&lt;00:00, 22.7MB/s]
+ 61%|######    | 2.09M/3.42M [00:00&lt;00:00, 16.2MB/s]
+100%|##########| 3.42M/3.42M [00:00&lt;00:00, 25.5MB/s]
 /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/_utils.py:314: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
   device=storage.device,
 /workspace/python/tvm/relay/frontend/pytorch_utils.py:47: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
@@ -594,7 +594,7 @@ via the host <cite>main.cc`</cite> or if a Zephyr emulated board is selected as
 Torch top-1 id: 282, class name: tiger cat
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  30.029 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  29.828 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-pytorch-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/12b9ecc04c41abaa12022061771821d1/micro_pytorch.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_pytorch.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/micro_train.html b/docs/how_to/work_with_microtvm/micro_train.html
index 7e294563df..0c86835da2 100644
--- a/docs/how_to/work_with_microtvm/micro_train.html
+++ b/docs/how_to/work_with_microtvm/micro_train.html
@@ -533,7 +533,7 @@ take about <strong>2 minutes</strong> to download the Stanford Cars, while COCO
 <a href="https://docs.python.org/3/library/shutil.html#shutil.move" title="shutil.move" class="sphx-glr-backref-module-shutil sphx-glr-backref-type-py-function"><span class="n">shutil</span><span class="o">.</span><span class="n">move</span></a><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;</span><span class="si">{</span><a href="https://docs.python.org/3/library/stdtypes.html#str" title="builtins.str" class="sphx-glr-backref-module-builtins sphx-glr-backref-typ [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&#39;/tmp/tmptmuid4lv/images/random&#39;
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&#39;/tmp/tmp7eauq5ey/images/random&#39;
 </pre></div>
 </div>
 </div>
@@ -593,8 +593,8 @@ objects to other stuff? We can display some examples from our datasets using <co
     <span class="n">plt</span><span class="o">.</span><span class="n">axis</span><span class="p">(</span><span class="s2">&quot;off&quot;</span><span class="p">)</span>
 </pre></div>
 </div>
-<img src="../../_images/sphx_glr_micro_train_001.png" srcset="../../_images/sphx_glr_micro_train_001.png" alt="[1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/tmp/tmptmuid4lv/images/target contains 8144 images
-/tmp/tmptmuid4lv/images/random contains 5000 images
+<img src="../../_images/sphx_glr_micro_train_001.png" srcset="../../_images/sphx_glr_micro_train_001.png" alt="[1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/tmp/tmp7eauq5ey/images/target contains 8144 images
+/tmp/tmp7eauq5ey/images/random contains 5000 images
 </pre></div>
 </div>
 </div>
@@ -706,13 +706,13 @@ the time on our validation set).</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Epoch 1/3
-328/328 - 41s - loss: 0.2226 - accuracy: 0.9223 - val_loss: 0.1225 - val_accuracy: 0.9588 - 41s/epoch - 125ms/step
+328/328 - 41s - loss: 0.2287 - accuracy: 0.9242 - val_loss: 0.1226 - val_accuracy: 0.9573 - 41s/epoch - 126ms/step
 Epoch 2/3
-328/328 - 36s - loss: 0.1085 - accuracy: 0.9600 - val_loss: 0.1086 - val_accuracy: 0.9653 - 36s/epoch - 109ms/step
+328/328 - 36s - loss: 0.0981 - accuracy: 0.9632 - val_loss: 0.1167 - val_accuracy: 0.9596 - 36s/epoch - 108ms/step
 Epoch 3/3
-328/328 - 36s - loss: 0.0684 - accuracy: 0.9754 - val_loss: 0.1325 - val_accuracy: 0.9539 - 36s/epoch - 109ms/step
+328/328 - 36s - loss: 0.0708 - accuracy: 0.9736 - val_loss: 0.0973 - val_accuracy: 0.9683 - 36s/epoch - 108ms/step
 
-&lt;keras.callbacks.History object at 0x7f6c32189eb0&gt;
+&lt;keras.callbacks.History object at 0x7f4c4c770880&gt;
 </pre></div>
 </div>
 </div>
@@ -976,7 +976,7 @@ as intended.</p>
 <p>From here, we could modify the model to read live images from the camera - we have another
 Arduino tutorial for how to do that <a class="reference external" href="https://github.com/guberti/tvm-arduino-demos/tree/master/examples/person_detection">on GitHub</a>. Alternatively, we could also
 <a class="reference external" href="https://tvm.apache.org/docs/how_to/work_with_microtvm/micro_autotune.html">use TVM’s autotuning capabilities</a> to dramatically improve the model’s performance.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 5 minutes  1.624 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 4 minutes  39.709 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-train-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/b52cec46baf4f78d6bcd94cbe269c8a6/micro_train.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_train.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/sg_execution_times.html b/docs/how_to/work_with_microtvm/sg_execution_times.html
index 1664ac392f..b32666b68a 100644
--- a/docs/how_to/work_with_microtvm/sg_execution_times.html
+++ b/docs/how_to/work_with_microtvm/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-microtvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>08:35.615</strong> total execution time for <strong>how_to_work_with_microtvm</strong> files:</p>
+<p><strong>08:08.545</strong> total execution time for <strong>how_to_work_with_microtvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 82%" />
@@ -359,27 +359,27 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_train.html#sphx-glr-how-to-work-with-microtvm-micro-train-py"><span class="std std-ref">5. Training Vision Models for microTVM on Arduino</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_train.py</span></code>)</p></td>
-<td><p>05:01.624</p></td>
+<td><p>04:39.709</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="micro_autotune.html#sphx-glr-how-to-work-with-microtvm-micro-autotune-py"><span class="std std-ref">6. Model Tuning with microTVM</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_autotune.py</span></code>)</p></td>
-<td><p>01:32.513</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="micro_pytorch.html#sphx-glr-how-to-work-with-microtvm-micro-pytorch-py"><span class="std std-ref">4. microTVM PyTorch Tutorial</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_pytorch.py</span></code>)</p></td>
+<td><p>01:29.828</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="micro_pytorch.html#sphx-glr-how-to-work-with-microtvm-micro-pytorch-py"><span class="std std-ref">4. microTVM PyTorch Tutorial</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_pytorch.py</span></code>)</p></td>
-<td><p>01:30.029</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="micro_autotune.html#sphx-glr-how-to-work-with-microtvm-micro-autotune-py"><span class="std std-ref">6. Model Tuning with microTVM</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_autotune.py</span></code>)</p></td>
+<td><p>01:28.305</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="micro_aot.html#sphx-glr-how-to-work-with-microtvm-micro-aot-py"><span class="std std-ref">3. microTVM Ahead-of-Time (AOT) Compilation</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_aot.py</span></code>)</p></td>
-<td><p>00:12.626</p></td>
+<td><p>00:12.630</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_tflite.html#sphx-glr-how-to-work-with-microtvm-micro-tflite-py"><span class="std std-ref">2. microTVM TFLite Tutorial</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_tflite.py</span></code>)</p></td>
-<td><p>00:09.947</p></td>
+<td><p>00:09.242</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="micro_custom_ide.html#sphx-glr-how-to-work-with-microtvm-micro-custom-ide-py"><span class="std std-ref">9. Bring microTVM to your own development environment</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_custom_ide.py</span></code>)</p></td>
-<td><p>00:08.876</p></td>
+<td><p>00:08.829</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_ethosu.html#sphx-glr-how-to-work-with-microtvm-micro-ethosu-py"><span class="std std-ref">7. Running TVM on bare metal Arm(R) Cortex(R)-M55 CPU and Ethos(TM)-U55 NPU with CMSIS-NN</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_ethosu.py</span></code>)</p></td>
diff --git a/docs/how_to/work_with_relay/sg_execution_times.html b/docs/how_to/work_with_relay/sg_execution_times.html
index ac069c8635..f76be583b2 100644
--- a/docs/how_to/work_with_relay/sg_execution_times.html
+++ b/docs/how_to/work_with_relay/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-relay-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:41.895</strong> total execution time for <strong>how_to_work_with_relay</strong> files:</p>
+<p><strong>00:40.986</strong> total execution time for <strong>how_to_work_with_relay</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,15 +359,15 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="using_pipeline_executor.html#sphx-glr-how-to-work-with-relay-using-pipeline-executor-py"><span class="std std-ref">Using Pipeline Executor in Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_pipeline_executor.py</span></code>)</p></td>
-<td><p>00:36.294</p></td>
+<td><p>00:35.641</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="using_external_lib.html#sphx-glr-how-to-work-with-relay-using-external-lib-py"><span class="std std-ref">Using External Libraries in Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_external_lib.py</span></code>)</p></td>
-<td><p>00:03.368</p></td>
+<td><p>00:03.346</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="build_gcn.html#sphx-glr-how-to-work-with-relay-build-gcn-py"><span class="std std-ref">Building a Graph Convolutional Network</span></a> (<code class="docutils literal notranslate"><span class="pre">build_gcn.py</span></code>)</p></td>
-<td><p>00:02.226</p></td>
+<td><p>00:01.994</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="using_relay_viz.html#sphx-glr-how-to-work-with-relay-using-relay-viz-py"><span class="std std-ref">Use Relay Visualizer to Visualize Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_relay_viz.py</span></code>)</p></td>
diff --git a/docs/how_to/work_with_schedules/intrin_math.html b/docs/how_to/work_with_schedules/intrin_math.html
index 400b99efa9..f70742f7e9 100644
--- a/docs/how_to/work_with_schedules/intrin_math.html
+++ b/docs/how_to/work_with_schedules/intrin_math.html
@@ -559,7 +559,7 @@ The following example customizes CUDA lowering rule for <code class="code docuti
 <a href="../../reference/api/python/ir.html#tvm.ir.register_intrin_lowering" title="tvm.ir.register_intrin_lowering" class="sphx-glr-backref-module-tvm-ir sphx-glr-backref-type-py-function"><span class="n">register_intrin_lowering</span></a><span class="p">(</span><span class="s2">&quot;tir.exp&quot;</span><span class="p">,</span> <span class="n">target</span><span class="o">=</span><span class="s2">&quot;cuda&quot;</span><span class="p">,</span> <span class="n">f</span><span class="o">= [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&lt;function my_cuda_math_rule at 0x7f6831414940&gt;
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&lt;function my_cuda_math_rule at 0x7f4ac83cb700&gt;
 </pre></div>
 </div>
 <p>Register the rule to TVM with override option to override existing rule.
diff --git a/docs/how_to/work_with_schedules/sg_execution_times.html b/docs/how_to/work_with_schedules/sg_execution_times.html
index 575397247a..637fe9f408 100644
--- a/docs/how_to/work_with_schedules/sg_execution_times.html
+++ b/docs/how_to/work_with_schedules/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-schedules-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:07.558</strong> total execution time for <strong>how_to_work_with_schedules</strong> files:</p>
+<p><strong>00:08.984</strong> total execution time for <strong>how_to_work_with_schedules</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -359,35 +359,35 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="intrin_math.html#sphx-glr-how-to-work-with-schedules-intrin-math-py"><span class="std std-ref">Intrinsics and Math Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">intrin_math.py</span></code>)</p></td>
-<td><p>00:03.902</p></td>
+<td><p>00:05.617</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tensorize.html#sphx-glr-how-to-work-with-schedules-tensorize-py"><span class="std std-ref">Use Tensorize to Leverage Hardware Intrinsics</span></a> (<code class="docutils literal notranslate"><span class="pre">tensorize.py</span></code>)</p></td>
-<td><p>00:01.828</p></td>
+<td><p>00:01.543</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="reduction.html#sphx-glr-how-to-work-with-schedules-reduction-py"><span class="std std-ref">Reduction</span></a> (<code class="docutils literal notranslate"><span class="pre">reduction.py</span></code>)</p></td>
-<td><p>00:00.767</p></td>
+<td><p>00:00.770</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="scan.html#sphx-glr-how-to-work-with-schedules-scan-py"><span class="std std-ref">Scan and Recurrent Kernel</span></a> (<code class="docutils literal notranslate"><span class="pre">scan.py</span></code>)</p></td>
-<td><p>00:00.763</p></td>
+<td><p>00:00.759</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="extern_op.html#sphx-glr-how-to-work-with-schedules-extern-op-py"><span class="std std-ref">External Tensor Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">extern_op.py</span></code>)</p></td>
-<td><p>00:00.118</p></td>
+<td><p>00:00.119</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tedd.html#sphx-glr-how-to-work-with-schedules-tedd-py"><span class="std std-ref">Use Tensor Expression Debug Display (TEDD) for Visualization</span></a> (<code class="docutils literal notranslate"><span class="pre">tedd.py</span></code>)</p></td>
-<td><p>00:00.084</p></td>
+<td><p>00:00.085</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="schedule_primitives.html#sphx-glr-how-to-work-with-schedules-schedule-primitives-py"><span class="std std-ref">Schedule Primitives in TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">schedule_primitives.py</span></code>)</p></td>
-<td><p>00:00.062</p></td>
+<td><p>00:00.060</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tuple_inputs.html#sphx-glr-how-to-work-with-schedules-tuple-inputs-py"><span class="std std-ref">Compute and Reduce with Tuple Inputs</span></a> (<code class="docutils literal notranslate"><span class="pre">tuple_inputs.py</span></code>)</p></td>
-<td><p>00:00.033</p></td>
+<td><p>00:00.031</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/install/nnpack.html b/docs/install/nnpack.html
index 2d646d6fd1..6f2eee3512 100644
--- a/docs/install/nnpack.html
+++ b/docs/install/nnpack.html
@@ -239,17 +239,7 @@
               <p class="caption" role="heading"><span class="caption-text">Getting Started</span></p>
 <ul class="current">
 <li class="toctree-l1 current"><a class="reference internal" href="index.html">Installing TVM</a><ul class="current">
-<li class="toctree-l2 current"><a class="reference internal" href="from_source.html">Install from Source</a><ul class="current">
-<li class="toctree-l3"><a class="reference internal" href="from_source.html#developers-get-source-from-github">Developers: Get Source from Github</a></li>
-<li class="toctree-l3"><a class="reference internal" href="from_source.html#build-the-shared-library">Build the Shared Library</a></li>
-<li class="toctree-l3"><a class="reference internal" href="from_source.html#python-package-installation">Python Package Installation</a></li>
-<li class="toctree-l3 current"><a class="reference internal" href="from_source.html#install-contrib-libraries">Install Contrib Libraries</a><ul class="current">
-<li class="toctree-l4 current"><a class="current reference internal" href="#">NNPACK Contrib Installation</a></li>
-</ul>
-</li>
-<li class="toctree-l3"><a class="reference internal" href="from_source.html#enable-c-tests">Enable C++ Tests</a></li>
-</ul>
-</li>
+<li class="toctree-l2"><a class="reference internal" href="from_source.html">Install from Source</a></li>
 <li class="toctree-l2"><a class="reference internal" href="docker.html">Docker Images</a></li>
 <li class="toctree-l2 current"><a class="current reference internal" href="#">NNPACK Contrib Installation</a><ul>
 <li class="toctree-l3"><a class="reference internal" href="#conditions">Conditions</a></li>
diff --git a/docs/reference/api/python/auto_scheduler.html b/docs/reference/api/python/auto_scheduler.html
index 69e8fec0a4..7d5a2fb125 100644
--- a/docs/reference/api/python/auto_scheduler.html
+++ b/docs/reference/api/python/auto_scheduler.html
@@ -1627,7 +1627,7 @@ history states as starting point to perform Evolutionary Search).</p></li>
 
 <dl class="py class">
 <dt class="sig sig-object py" id="tvm.auto_scheduler.SketchPolicy">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">SketchPolicy</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">program_cost_model</span></span><span class="o"><span class="pre">=</span></span><span class="defau [...]
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">SketchPolicy</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">program_cost_model</span></span><span class="o"><span class="pre">=</span></span><span class="defau [...]
 <dd><p>The search policy that searches in a hierarchical search space defined by sketches.
 The policy randomly samples programs from the space defined by sketches and use evolutionary
 search to fine-tune them.</p>
@@ -1911,7 +1911,7 @@ Candidates:
 
 <dl class="py function">
 <dt class="sig sig-object py" id="tvm.auto_scheduler.auto_schedule">
-<span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">auto_schedule</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">search_policy</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em clas [...]
+<span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">auto_schedule</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">search_policy</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em clas [...]
 <dd><p>THIS API IS DEPRECATED.</p>
 <p>Run auto scheduling search for a task.</p>
 <dl class="field-list simple">
diff --git a/docs/reference/api/typedoc/classes/bytestreamreader.html b/docs/reference/api/typedoc/classes/bytestreamreader.html
index 73d10544b9..5a3a010bf6 100644
--- a/docs/reference/api/typedoc/classes/bytestreamreader.html
+++ b/docs/reference/api/typedoc/classes/bytestreamreader.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -141,7 +141,7 @@
 					<div class="tsd-signature tsd-kind-icon">bytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Uint8Array</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -151,7 +151,7 @@
 					<div class="tsd-signature tsd-kind-icon">offset<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L44">rpc_server.ts:44</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L44">rpc_server.ts:44</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -168,7 +168,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L65">rpc_server.ts:65</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L65">rpc_server.ts:65</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">Uint8Array</span></h4>
@@ -185,7 +185,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L51">rpc_server.ts:51</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L51">rpc_server.ts:51</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -202,7 +202,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L59">rpc_server.ts:59</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L59">rpc_server.ts:59</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
diff --git a/docs/reference/api/typedoc/classes/cachedcallstack.html b/docs/reference/api/typedoc/classes/cachedcallstack.html
index bed3e9f46e..cafab38676 100644
--- a/docs/reference/api/typedoc/classes/cachedcallstack.html
+++ b/docs/reference/api/typedoc/classes/cachedcallstack.html
@@ -144,7 +144,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L223">memory.ts:223</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L223">memory.ts:223</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -172,7 +172,7 @@
 					<div class="tsd-signature tsd-kind-icon">temp<wbr>Args<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><a href="../interfaces/disposable.html" class="tsd-signature-type">Disposable</a><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = []</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L208">memory.ts:208</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L208">memory.ts:208</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -194,7 +194,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L312">memory.ts:312</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L312">memory.ts:312</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -226,7 +226,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L284">memory.ts:284</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L284">memory.ts:284</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -262,7 +262,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L388">memory.ts:388</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L388">memory.ts:388</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -300,7 +300,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L376">memory.ts:376</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L376">memory.ts:376</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -340,7 +340,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L267">memory.ts:267</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L267">memory.ts:267</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -373,7 +373,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L243">memory.ts:243</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L243">memory.ts:243</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -390,7 +390,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L321">memory.ts:321</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L321">memory.ts:321</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -422,7 +422,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L252">memory.ts:252</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L252">memory.ts:252</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -444,7 +444,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L359">memory.ts:359</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L359">memory.ts:359</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -470,7 +470,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L342">memory.ts:342</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L342">memory.ts:342</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -496,7 +496,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L350">memory.ts:350</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L350">memory.ts:350</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -522,7 +522,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L326">memory.ts:326</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L326">memory.ts:326</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -548,7 +548,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L363">memory.ts:363</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L363">memory.ts:363</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -574,7 +574,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L346">memory.ts:346</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L346">memory.ts:346</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -600,7 +600,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L334">memory.ts:334</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L334">memory.ts:334</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
diff --git a/docs/reference/api/typedoc/classes/dldatatype.html b/docs/reference/api/typedoc/classes/dldatatype.html
index 5d1e6f235f..42ee372816 100644
--- a/docs/reference/api/typedoc/classes/dldatatype.html
+++ b/docs/reference/api/typedoc/classes/dldatatype.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L359">runtime.ts:359</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L359">runtime.ts:359</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -147,7 +147,7 @@
 					<div class="tsd-signature tsd-kind-icon">bits<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L357">runtime.ts:357</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L357">runtime.ts:357</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -162,7 +162,7 @@
 					<div class="tsd-signature tsd-kind-icon">code<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L355">runtime.ts:355</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L355">runtime.ts:355</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -177,7 +177,7 @@
 					<div class="tsd-signature tsd-kind-icon">lanes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L359">runtime.ts:359</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L359">runtime.ts:359</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -199,7 +199,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L376">runtime.ts:376</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L376">runtime.ts:376</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -216,7 +216,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L367">runtime.ts:367</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L367">runtime.ts:367</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">string</span></h4>
diff --git a/docs/reference/api/typedoc/classes/dldevice.html b/docs/reference/api/typedoc/classes/dldevice.html
index 59e5b5fad3..feca2351d8 100644
--- a/docs/reference/api/typedoc/classes/dldevice.html
+++ b/docs/reference/api/typedoc/classes/dldevice.html
@@ -118,7 +118,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L299">runtime.ts:299</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L299">runtime.ts:299</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -146,7 +146,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<wbr>Id<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L297">runtime.ts:297</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L297">runtime.ts:297</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -161,7 +161,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L295">runtime.ts:295</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L295">runtime.ts:295</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -183,7 +183,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L320">runtime.ts:320</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L320">runtime.ts:320</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -205,7 +205,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L327">runtime.ts:327</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L327">runtime.ts:327</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">string</span></h4>
diff --git a/docs/reference/api/typedoc/classes/environment.html b/docs/reference/api/typedoc/classes/environment.html
index b92d5543d5..aa587334c0 100644
--- a/docs/reference/api/typedoc/classes/environment.html
+++ b/docs/reference/api/typedoc/classes/environment.html
@@ -125,7 +125,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/environment.ts#L86">environment.ts:86</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/environment.ts#L86">environment.ts:86</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -169,7 +169,7 @@
 					<aside class="tsd-sources">
 						<p>Implementation of <a href="../interfaces/libraryprovider.html">LibraryProvider</a>.<a href="../interfaces/libraryprovider.html#imports">imports</a></p>
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/environment.ts#L70">environment.ts:70</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/environment.ts#L70">environment.ts:70</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -179,7 +179,7 @@
 					<div class="tsd-signature tsd-kind-icon">logger<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>msg<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/environment.ts#L69">environment.ts:69</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/environment.ts#L69">environment.ts:69</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -210,7 +210,7 @@
 					<div class="tsd-signature tsd-kind-icon">packedCFunc<wbr>Table<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">ctypes.FTVMWasmPackedCFunc</span><span class="tsd-signature-symbol"> | </span><span class="tsd-signature-type">undefined</span><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = [undefined,]</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/environment.ts#L78">environment.ts:78</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/environment.ts#L78">environment.ts:78</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -228,7 +228,7 @@
 					<div class="tsd-signature tsd-kind-icon">packedCFunc<wbr>Table<wbr>Free<wbr>Id<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = []</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/environment.ts#L84">environment.ts:84</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/environment.ts#L84">environment.ts:84</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -250,7 +250,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/environment.ts#L105">environment.ts:105</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/environment.ts#L105">environment.ts:105</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/ffilibrary.html b/docs/reference/api/typedoc/classes/ffilibrary.html
index 39f10dbdee..c048f6d506 100644
--- a/docs/reference/api/typedoc/classes/ffilibrary.html
+++ b/docs/reference/api/typedoc/classes/ffilibrary.html
@@ -131,7 +131,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L50">runtime.ts:50</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L50">runtime.ts:50</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -156,7 +156,7 @@
 					<div class="tsd-signature tsd-kind-icon">exports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">Function</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L47">runtime.ts:47</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L47">runtime.ts:47</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -166,7 +166,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L46">runtime.ts:46</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L46">runtime.ts:46</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">wasm32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">boolean</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L45">runtime.ts:45</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L45">runtime.ts:45</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -186,7 +186,7 @@
 					<div class="tsd-signature tsd-kind-icon">webGPUContext<span class="tsd-signature-symbol">:</span> <a href="webgpucontext.html" class="tsd-signature-type">WebGPUContext</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L48">runtime.ts:48</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L48">runtime.ts:48</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -203,7 +203,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L77">runtime.ts:77</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L77">runtime.ts:77</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -226,7 +226,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L67">runtime.ts:67</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L67">runtime.ts:67</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -243,7 +243,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L85">runtime.ts:85</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L85">runtime.ts:85</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <a href="cachedcallstack.html" class="tsd-signature-type">CachedCallStack</a></h4>
@@ -260,7 +260,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L96">runtime.ts:96</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L96">runtime.ts:96</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -283,7 +283,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L73">runtime.ts:73</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L73">runtime.ts:73</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
diff --git a/docs/reference/api/typedoc/classes/instance.html b/docs/reference/api/typedoc/classes/instance.html
index 430d4477a3..75b19fe590 100644
--- a/docs/reference/api/typedoc/classes/instance.html
+++ b/docs/reference/api/typedoc/classes/instance.html
@@ -161,7 +161,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L844">runtime.ts:844</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L844">runtime.ts:844</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -224,7 +224,7 @@
 					<div class="tsd-signature tsd-kind-icon">exports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">Function</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L834">runtime.ts:834</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L834">runtime.ts:834</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -234,7 +234,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L833">runtime.ts:833</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L833">runtime.ts:833</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -251,7 +251,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L973">runtime.ts:973</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L973">runtime.ts:973</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -296,7 +296,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L932">runtime.ts:932</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L932">runtime.ts:932</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -318,7 +318,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L901">runtime.ts:901</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L901">runtime.ts:901</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -381,7 +381,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1215">runtime.ts:1215</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1215">runtime.ts:1215</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -412,7 +412,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1000">runtime.ts:1000</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1000">runtime.ts:1000</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -453,7 +453,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1207">runtime.ts:1207</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1207">runtime.ts:1207</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -491,7 +491,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L922">runtime.ts:922</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L922">runtime.ts:922</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -508,7 +508,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1235">runtime.ts:1235</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1235">runtime.ts:1235</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -552,7 +552,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L943">runtime.ts:943</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L943">runtime.ts:943</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -577,7 +577,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1088">runtime.ts:1088</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1088">runtime.ts:1088</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -609,7 +609,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1363">runtime.ts:1363</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1363">runtime.ts:1363</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -640,7 +640,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1123">runtime.ts:1123</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1123">runtime.ts:1123</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -672,7 +672,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1016">runtime.ts:1016</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1016">runtime.ts:1016</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -695,7 +695,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1281">runtime.ts:1281</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1281">runtime.ts:1281</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -729,7 +729,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L986">runtime.ts:986</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L986">runtime.ts:986</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -769,7 +769,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1341">runtime.ts:1341</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1341">runtime.ts:1341</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -817,7 +817,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1055">runtime.ts:1055</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1055">runtime.ts:1055</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -857,7 +857,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1320">runtime.ts:1320</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1320">runtime.ts:1320</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -900,7 +900,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1197">runtime.ts:1197</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1197">runtime.ts:1197</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -938,7 +938,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1491">runtime.ts:1491</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1491">runtime.ts:1491</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -990,7 +990,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1009">runtime.ts:1009</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1009">runtime.ts:1009</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1014,7 +1014,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1151">runtime.ts:1151</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1151">runtime.ts:1151</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1046,7 +1046,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1134">runtime.ts:1134</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1134">runtime.ts:1134</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1078,7 +1078,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1292">runtime.ts:1292</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1292">runtime.ts:1292</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1110,7 +1110,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1223">runtime.ts:1223</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1223">runtime.ts:1223</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1141,7 +1141,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L957">runtime.ts:957</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L957">runtime.ts:957</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/memory.html b/docs/reference/api/typedoc/classes/memory.html
index 482063956f..f870e5f31e 100644
--- a/docs/reference/api/typedoc/classes/memory.html
+++ b/docs/reference/api/typedoc/classes/memory.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L40">memory.ts:40</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L40">memory.ts:40</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -152,7 +152,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Memory</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L32">memory.ts:32</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L32">memory.ts:32</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -162,7 +162,7 @@
 					<div class="tsd-signature tsd-kind-icon">wasm32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">boolean</span><span class="tsd-signature-symbol"> = true</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L33">memory.ts:33</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L33">memory.ts:33</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -179,7 +179,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L154">memory.ts:154</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L154">memory.ts:154</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -210,7 +210,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L90">memory.ts:90</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L90">memory.ts:90</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -233,7 +233,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L97">memory.ts:97</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L97">memory.ts:97</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -256,7 +256,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L74">memory.ts:74</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L74">memory.ts:74</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -279,7 +279,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L81">memory.ts:81</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L81">memory.ts:81</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -302,7 +302,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L104">memory.ts:104</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L104">memory.ts:104</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -325,7 +325,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L132">memory.ts:132</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L132">memory.ts:132</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -362,7 +362,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L145">memory.ts:145</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L145">memory.ts:145</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -393,7 +393,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L60">memory.ts:60</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L60">memory.ts:60</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -416,7 +416,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L67">memory.ts:67</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L67">memory.ts:67</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -439,7 +439,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L53">memory.ts:53</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L53">memory.ts:53</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -462,7 +462,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L114">memory.ts:114</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L114">memory.ts:114</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -485,7 +485,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L124">memory.ts:124</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L124">memory.ts:124</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -502,7 +502,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/memory.ts#L175">memory.ts:175</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/memory.ts#L175">memory.ts:175</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/module.html b/docs/reference/api/typedoc/classes/module.html
index c644195364..39e7649c42 100644
--- a/docs/reference/api/typedoc/classes/module.html
+++ b/docs/reference/api/typedoc/classes/module.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L614">runtime.ts:614</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L614">runtime.ts:614</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -169,7 +169,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L626">runtime.ts:626</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L626">runtime.ts:626</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -186,7 +186,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L653">runtime.ts:653</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L653">runtime.ts:653</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -218,7 +218,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L641">runtime.ts:641</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L641">runtime.ts:641</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -250,7 +250,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L687">runtime.ts:687</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L687">runtime.ts:687</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/ndarray.html b/docs/reference/api/typedoc/classes/ndarray.html
index 39bc732aeb..4879a93474 100644
--- a/docs/reference/api/typedoc/classes/ndarray.html
+++ b/docs/reference/api/typedoc/classes/ndarray.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L401">runtime.ts:401</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L401">runtime.ts:401</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -158,7 +158,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<span class="tsd-signature-symbol">:</span> <a href="dldevice.html" class="tsd-signature-type">DLDevice</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L394">runtime.ts:394</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L394">runtime.ts:394</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -173,7 +173,7 @@
 					<div class="tsd-signature tsd-kind-icon">dtype<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L390">runtime.ts:390</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L390">runtime.ts:390</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -188,7 +188,7 @@
 					<div class="tsd-signature tsd-kind-icon">ndim<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L388">runtime.ts:388</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L388">runtime.ts:388</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -203,7 +203,7 @@
 					<div class="tsd-signature tsd-kind-icon">shape<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L392">runtime.ts:392</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L392">runtime.ts:392</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -225,7 +225,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L480">runtime.ts:480</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L480">runtime.ts:480</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -258,7 +258,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L524">runtime.ts:524</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L524">runtime.ts:524</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -290,7 +290,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L465">runtime.ts:465</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L465">runtime.ts:465</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -307,7 +307,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L458">runtime.ts:458</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L458">runtime.ts:458</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -339,7 +339,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L584">runtime.ts:584</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L584">runtime.ts:584</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -363,7 +363,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L553">runtime.ts:553</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L553">runtime.ts:553</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/packedfunccell.html b/docs/reference/api/typedoc/classes/packedfunccell.html
index 5ef620dce2..3d5a8ee034 100644
--- a/docs/reference/api/typedoc/classes/packedfunccell.html
+++ b/docs/reference/api/typedoc/classes/packedfunccell.html
@@ -117,7 +117,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L248">runtime.ts:248</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L248">runtime.ts:248</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -146,7 +146,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L255">runtime.ts:255</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L255">runtime.ts:255</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -163,7 +163,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L264">runtime.ts:264</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L264">runtime.ts:264</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
diff --git a/docs/reference/api/typedoc/classes/rpcserver.html b/docs/reference/api/typedoc/classes/rpcserver.html
index b820e9c2ff..ac17c7f080 100644
--- a/docs/reference/api/typedoc/classes/rpcserver.html
+++ b/docs/reference/api/typedoc/classes/rpcserver.html
@@ -115,7 +115,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L95">rpc_server.ts:95</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L95">rpc_server.ts:95</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">get<wbr>Imports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">unknown</span><span class="tsd-signat [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L84">rpc_server.ts:84</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L84">rpc_server.ts:84</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -201,7 +201,7 @@
 					<div class="tsd-signature tsd-kind-icon">key<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L80">rpc_server.ts:80</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L80">rpc_server.ts:80</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -211,7 +211,7 @@
 					<div class="tsd-signature tsd-kind-icon">logger<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>msg<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L83">rpc_server.ts:83</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L83">rpc_server.ts:83</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -242,7 +242,7 @@
 					<div class="tsd-signature tsd-kind-icon">socket<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">WebSocket</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L81">rpc_server.ts:81</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L81">rpc_server.ts:81</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -252,7 +252,7 @@
 					<div class="tsd-signature tsd-kind-icon">state<span class="tsd-signature-symbol">:</span> <a href="../enums/rpcserverstate.html" class="tsd-signature-type">RPCServerState</a><span class="tsd-signature-symbol"> = RPCServerState.InitHeader</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L82">rpc_server.ts:82</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L82">rpc_server.ts:82</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -262,7 +262,7 @@
 					<div class="tsd-signature tsd-kind-icon">url<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L79">rpc_server.ts:79</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L79">rpc_server.ts:79</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/classes/runtimecontext.html b/docs/reference/api/typedoc/classes/runtimecontext.html
index 241edbdcb8..b72b654ebd 100644
--- a/docs/reference/api/typedoc/classes/runtimecontext.html
+++ b/docs/reference/api/typedoc/classes/runtimecontext.html
@@ -132,7 +132,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L148">runtime.ts:148</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L148">runtime.ts:148</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -172,7 +172,7 @@
 					<div class="tsd-signature tsd-kind-icon">array<wbr>Get<wbr>Item<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L143">runtime.ts:143</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L143">runtime.ts:143</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -182,7 +182,7 @@
 					<div class="tsd-signature tsd-kind-icon">array<wbr>Get<wbr>Size<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L144">runtime.ts:144</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L144">runtime.ts:144</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -192,7 +192,7 @@
 					<div class="tsd-signature tsd-kind-icon">array<wbr>Make<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L145">runtime.ts:145</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L145">runtime.ts:145</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -202,7 +202,7 @@
 					<div class="tsd-signature tsd-kind-icon">get<wbr>Sys<wbr>Lib<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L146">runtime.ts:146</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L146">runtime.ts:146</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -219,7 +219,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L189">runtime.ts:189</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L189">runtime.ts:189</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -263,7 +263,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L163">runtime.ts:163</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L163">runtime.ts:163</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -280,7 +280,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L208">runtime.ts:208</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L208">runtime.ts:208</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-type-parameters-title">Type parameters</h4>
@@ -309,7 +309,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L157">runtime.ts:157</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L157">runtime.ts:157</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -326,7 +326,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L167">runtime.ts:167</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L167">runtime.ts:167</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -343,7 +343,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L198">runtime.ts:198</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L198">runtime.ts:198</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-type-parameters-title">Type parameters</h4>
diff --git a/docs/reference/api/typedoc/classes/scalar.html b/docs/reference/api/typedoc/classes/scalar.html
index 3ac499bf5f..c684e06f60 100644
--- a/docs/reference/api/typedoc/classes/scalar.html
+++ b/docs/reference/api/typedoc/classes/scalar.html
@@ -112,7 +112,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L235">runtime.ts:235</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L235">runtime.ts:235</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -137,7 +137,7 @@
 					<div class="tsd-signature tsd-kind-icon">dtype<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L235">runtime.ts:235</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L235">runtime.ts:235</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -152,7 +152,7 @@
 					<div class="tsd-signature tsd-kind-icon">value<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L233">runtime.ts:233</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L233">runtime.ts:233</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/tvmarray.html b/docs/reference/api/typedoc/classes/tvmarray.html
index eaeafa1fa9..4e5dd0548a 100644
--- a/docs/reference/api/typedoc/classes/tvmarray.html
+++ b/docs/reference/api/typedoc/classes/tvmarray.html
@@ -133,7 +133,7 @@
 							<aside class="tsd-sources">
 								<p>Overrides <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#constructor">constructor</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L784">runtime.ts:784</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L784">runtime.ts:784</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -162,7 +162,7 @@
 					<aside class="tsd-sources">
 						<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#ctx">ctx</a></p>
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L703">runtime.ts:703</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L703">runtime.ts:703</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -180,7 +180,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#dispose">dispose</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L715">runtime.ts:715</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L715">runtime.ts:715</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -197,7 +197,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L804">runtime.ts:804</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L804">runtime.ts:804</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -230,7 +230,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#gethandle">getHandle</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L730">runtime.ts:730</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L730">runtime.ts:730</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -262,7 +262,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L796">runtime.ts:796</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L796">runtime.ts:796</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -283,7 +283,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#typeindex">typeIndex</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L738">runtime.ts:738</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L738">runtime.ts:738</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -306,7 +306,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#typekey">typeKey</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L758">runtime.ts:758</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L758">runtime.ts:758</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/tvmobject.html b/docs/reference/api/typedoc/classes/tvmobject.html
index b54454b41e..b001c1d935 100644
--- a/docs/reference/api/typedoc/classes/tvmobject.html
+++ b/docs/reference/api/typedoc/classes/tvmobject.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L703">runtime.ts:703</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L703">runtime.ts:703</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -158,7 +158,7 @@
 					<div class="tsd-signature tsd-kind-icon">ctx<span class="tsd-signature-symbol">:</span> <a href="runtimecontext.html" class="tsd-signature-type">RuntimeContext</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L703">runtime.ts:703</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L703">runtime.ts:703</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -175,7 +175,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L715">runtime.ts:715</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L715">runtime.ts:715</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -192,7 +192,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L730">runtime.ts:730</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L730">runtime.ts:730</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -224,7 +224,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L738">runtime.ts:738</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L738">runtime.ts:738</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -246,7 +246,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L758">runtime.ts:758</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L758">runtime.ts:758</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/webgpucontext.html b/docs/reference/api/typedoc/classes/webgpucontext.html
index 9d5a311b4d..4bbbd2ff0d 100644
--- a/docs/reference/api/typedoc/classes/webgpucontext.html
+++ b/docs/reference/api/typedoc/classes/webgpucontext.html
@@ -120,7 +120,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/webgpu.ts#L57">webgpu.ts:57</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/webgpu.ts#L57">webgpu.ts:57</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -145,7 +145,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">GPUDevice</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/webgpu.ts#L50">webgpu.ts:50</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/webgpu.ts#L50">webgpu.ts:50</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -155,7 +155,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/webgpu.ts#L51">webgpu.ts:51</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/webgpu.ts#L51">webgpu.ts:51</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -172,7 +172,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/webgpu.ts#L84">webgpu.ts:84</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/webgpu.ts#L84">webgpu.ts:84</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -209,7 +209,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/webgpu.ts#L172">webgpu.ts:172</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/webgpu.ts#L172">webgpu.ts:172</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -238,7 +238,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/webgpu.ts#L67">webgpu.ts:67</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/webgpu.ts#L67">webgpu.ts:67</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/enums/argtypecode.html b/docs/reference/api/typedoc/enums/argtypecode.html
index a8d4563e8c..1ff2239aed 100644
--- a/docs/reference/api/typedoc/enums/argtypecode.html
+++ b/docs/reference/api/typedoc/enums/argtypecode.html
@@ -106,7 +106,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLDevice<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 6</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L242">ctypes.ts:242</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L242">ctypes.ts:242</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -116,7 +116,7 @@
 					<div class="tsd-signature tsd-kind-icon">Float<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L238">ctypes.ts:238</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L238">ctypes.ts:238</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -126,7 +126,7 @@
 					<div class="tsd-signature tsd-kind-icon">Int<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L236">ctypes.ts:236</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L236">ctypes.ts:236</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -136,7 +136,7 @@
 					<div class="tsd-signature tsd-kind-icon">Null<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L240">ctypes.ts:240</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L240">ctypes.ts:240</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -146,7 +146,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMBytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 12</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L248">ctypes.ts:248</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L248">ctypes.ts:248</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -156,7 +156,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMDLTensor<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 7</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L243">ctypes.ts:243</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L243">ctypes.ts:243</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -166,7 +166,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMData<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 5</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L241">ctypes.ts:241</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L241">ctypes.ts:241</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMModule<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 9</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L245">ctypes.ts:245</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L245">ctypes.ts:245</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -186,7 +186,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMNDArray<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 13</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L249">ctypes.ts:249</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L249">ctypes.ts:249</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -196,7 +196,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMObject<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L244">ctypes.ts:244</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L244">ctypes.ts:244</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -206,7 +206,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMObjectRValue<wbr>Ref<wbr>Arg<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 14</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L250">ctypes.ts:250</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L250">ctypes.ts:250</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -216,7 +216,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMOpaque<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 3</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L239">ctypes.ts:239</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L239">ctypes.ts:239</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -226,7 +226,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMPacked<wbr>Func<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 10</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L246">ctypes.ts:246</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L246">ctypes.ts:246</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -236,7 +236,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 11</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L247">ctypes.ts:247</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L247">ctypes.ts:247</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -246,7 +246,7 @@
 					<div class="tsd-signature tsd-kind-icon">UInt<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L237">ctypes.ts:237</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L237">ctypes.ts:237</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/aynccallbackcode.html b/docs/reference/api/typedoc/enums/aynccallbackcode.html
index 7ab77f9733..a35e96f536 100644
--- a/docs/reference/api/typedoc/enums/aynccallbackcode.html
+++ b/docs/reference/api/typedoc/enums/aynccallbackcode.html
@@ -93,7 +93,7 @@
 					<div class="tsd-signature tsd-kind-icon">k<wbr>Exception<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 5</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L812">runtime.ts:812</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L812">runtime.ts:812</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -103,7 +103,7 @@
 					<div class="tsd-signature tsd-kind-icon">k<wbr>Return<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L811">runtime.ts:811</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L811">runtime.ts:811</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/dldatatypecode.html b/docs/reference/api/typedoc/enums/dldatatypecode.html
index 227bb7cd42..7dbf6de4d8 100644
--- a/docs/reference/api/typedoc/enums/dldatatypecode.html
+++ b/docs/reference/api/typedoc/enums/dldatatypecode.html
@@ -95,7 +95,7 @@
 					<div class="tsd-signature tsd-kind-icon">Float<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L339">runtime.ts:339</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L339">runtime.ts:339</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -105,7 +105,7 @@
 					<div class="tsd-signature tsd-kind-icon">Int<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L337">runtime.ts:337</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L337">runtime.ts:337</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -115,7 +115,7 @@
 					<div class="tsd-signature tsd-kind-icon">Opaque<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 3</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L340">runtime.ts:340</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L340">runtime.ts:340</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -125,7 +125,7 @@
 					<div class="tsd-signature tsd-kind-icon">UInt<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L338">runtime.ts:338</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L338">runtime.ts:338</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/rpcserverstate.html b/docs/reference/api/typedoc/enums/rpcserverstate.html
index 1111a4464f..faca5aeac7 100644
--- a/docs/reference/api/typedoc/enums/rpcserverstate.html
+++ b/docs/reference/api/typedoc/enums/rpcserverstate.html
@@ -90,7 +90,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Header<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L29">rpc_server.ts:29</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L29">rpc_server.ts:29</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -100,7 +100,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Header<wbr>Key<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L30">rpc_server.ts:30</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L30">rpc_server.ts:30</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -110,7 +110,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Server<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L31">rpc_server.ts:31</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L31">rpc_server.ts:31</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -120,7 +120,7 @@
 					<div class="tsd-signature tsd-kind-icon">Receive<wbr>Packet<wbr>Body<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L34">rpc_server.ts:34</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L34">rpc_server.ts:34</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -130,7 +130,7 @@
 					<div class="tsd-signature tsd-kind-icon">Receive<wbr>Packet<wbr>Header<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L33">rpc_server.ts:33</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L33">rpc_server.ts:33</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -140,7 +140,7 @@
 					<div class="tsd-signature tsd-kind-icon">Wait<wbr>For<wbr>Callback<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L32">rpc_server.ts:32</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L32">rpc_server.ts:32</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/sizeof.html b/docs/reference/api/typedoc/enums/sizeof.html
index ce00afaa9a..859f377b63 100644
--- a/docs/reference/api/typedoc/enums/sizeof.html
+++ b/docs/reference/api/typedoc/enums/sizeof.html
@@ -100,7 +100,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLData<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = I32</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L228">ctypes.ts:228</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L228">ctypes.ts:228</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -110,7 +110,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLDevice<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = I32 + I32</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L229">ctypes.ts:229</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L229">ctypes.ts:229</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -120,7 +120,7 @@
 					<div class="tsd-signature tsd-kind-icon">F32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L225">ctypes.ts:225</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L225">ctypes.ts:225</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -130,7 +130,7 @@
 					<div class="tsd-signature tsd-kind-icon">F64<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L226">ctypes.ts:226</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L226">ctypes.ts:226</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -140,7 +140,7 @@
 					<div class="tsd-signature tsd-kind-icon">I32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L223">ctypes.ts:223</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L223">ctypes.ts:223</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -150,7 +150,7 @@
 					<div class="tsd-signature tsd-kind-icon">I64<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L224">ctypes.ts:224</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L224">ctypes.ts:224</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -160,7 +160,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMValue<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L227">ctypes.ts:227</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L227">ctypes.ts:227</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -170,7 +170,7 @@
 					<div class="tsd-signature tsd-kind-icon">U16<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L222">ctypes.ts:222</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L222">ctypes.ts:222</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -180,7 +180,7 @@
 					<div class="tsd-signature tsd-kind-icon">U8<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L221">ctypes.ts:221</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L221">ctypes.ts:221</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/index.html b/docs/reference/api/typedoc/index.html
index b859944049..97ad70297a 100644
--- a/docs/reference/api/typedoc/index.html
+++ b/docs/reference/api/typedoc/index.html
@@ -182,7 +182,7 @@
 					<div class="tsd-signature tsd-kind-icon">FObject<wbr>Constructor<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, lib<span class="tsd-signature-symbol">: </span><a href="classes/ffilibrary.html" class="tsd-signature-type">FFILibrary</a>, ctx<span class="tsd-signature-symbol">: </span><a href="classes/runtimecontext.html" class="t [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L778">runtime.ts:778</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L778">runtime.ts:778</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -224,7 +224,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Alloc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>shape<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, ndim<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, dtypeCode<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, dtypeBits<span class="tsd [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L113">ctypes.ts:113</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L113">ctypes.ts:113</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -288,7 +288,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>From<wbr>Bytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, data<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nbytes<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">num [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L129">ctypes.ts:129</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L129">ctypes.ts:129</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -332,7 +332,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>From<wbr>To<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>from<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, to<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, stream<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-sig [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L145">ctypes.ts:145</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L145">ctypes.ts:145</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -376,7 +376,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>ToBytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, data<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nbytes<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</sp [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L137">ctypes.ts:137</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L137">ctypes.ts:137</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -420,7 +420,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L122">ctypes.ts:122</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L122">ctypes.ts:122</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -456,7 +456,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMBackend<wbr>PackedCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>argValues<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, argCodes<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nargs<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number< [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L161">ctypes.ts:161</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L161">ctypes.ts:161</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -508,7 +508,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMCFunc<wbr>Set<wbr>Return<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>ret<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, value<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCode<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signa [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L78">ctypes.ts:78</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L78">ctypes.ts:78</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -556,7 +556,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMCb<wbr>Arg<wbr>ToReturn<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>value<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, code<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span c [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L84">ctypes.ts:84</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L84">ctypes.ts:84</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -595,7 +595,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Call<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>func<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, argValues<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCode<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-t [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L68">ctypes.ts:68</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L68">ctypes.ts:68</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -651,7 +651,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>func<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L58">ctypes.ts:58</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L58">ctypes.ts:58</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -687,7 +687,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Get<wbr>Global<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>name<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span cla [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L101">ctypes.ts:101</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L101">ctypes.ts:101</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -726,7 +726,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>List<wbr>Global<wbr>Names<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>outSize<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, outArray<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&g [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L89">ctypes.ts:89</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L89">ctypes.ts:89</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -765,7 +765,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Register<wbr>Global<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>name<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, f<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, override<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</spa [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L95">ctypes.ts:95</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L95">ctypes.ts:95</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -808,7 +808,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMGet<wbr>Last<wbr>Error<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L34">ctypes.ts:34</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L34">ctypes.ts:34</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -838,7 +838,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L53">ctypes.ts:53</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L53">ctypes.ts:53</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -874,7 +874,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Get<wbr>Function<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, funcName<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, queryImports<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">numbe [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L42">ctypes.ts:42</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L42">ctypes.ts:42</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -922,7 +922,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Import<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, dep<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-si [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L48">ctypes.ts:48</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L48">ctypes.ts:48</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -962,7 +962,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>obj<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L169">ctypes.ts:169</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L169">ctypes.ts:169</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -998,7 +998,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Get<wbr>Type<wbr>Index<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>obj<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out_tindex<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt;  [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L174">ctypes.ts:174</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L174">ctypes.ts:174</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1037,7 +1037,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Type<wbr>Index2<wbr>Key<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>type_index<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, out_type_key<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><spa [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L179">ctypes.ts:179</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L179">ctypes.ts:179</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1076,7 +1076,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Type<wbr>Key2<wbr>Index<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>type_key<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out_tindex<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol">  [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L184">ctypes.ts:184</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L184">ctypes.ts:184</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1115,7 +1115,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMSynchronize<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>deviceType<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, deviceId<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, stream<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signatur [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L151">ctypes.ts:151</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L151">ctypes.ts:151</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1157,7 +1157,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Alloc<wbr>Space<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>size<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L189">ctypes.ts:189</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L189">ctypes.ts:189</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1193,7 +1193,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Free<wbr>Space<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>ptr<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L192">ctypes.ts:192</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L192">ctypes.ts:192</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1229,7 +1229,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Func<wbr>Create<wbr>FromCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>resource<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&g [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L209">ctypes.ts:209</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L209">ctypes.ts:209</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1269,7 +1269,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>PackedCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>args<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCodes<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nargs<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L201">ctypes.ts:201</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L201">ctypes.ts:201</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1321,7 +1321,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>PackedCFunc<wbr>Finalizer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>resourceHandle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L215">ctypes.ts:215</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L215">ctypes.ts:215</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1357,7 +1357,7 @@
 					<div class="tsd-signature tsd-kind-icon">GPUPointer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/webgpu.ts#L25">webgpu.ts:25</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/webgpu.ts#L25">webgpu.ts:25</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1372,7 +1372,7 @@
 					<div class="tsd-signature tsd-kind-icon">Packed<wbr>Func<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">...</span>args<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol"> &amp; </span><a href="interfaces/disp [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L37">runtime.ts:37</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L37">runtime.ts:37</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1387,7 +1387,7 @@
 					<div class="tsd-signature tsd-kind-icon">Pointer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L25">ctypes.ts:25</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L25">ctypes.ts:25</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1402,7 +1402,7 @@
 					<div class="tsd-signature tsd-kind-icon">Ptr<wbr>Offset<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/ctypes.ts#L28">ctypes.ts:28</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/ctypes.ts#L28">ctypes.ts:28</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1417,7 +1417,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMObject<wbr>Base<span class="tsd-signature-symbol">:</span> <a href="classes/tvmobject.html" class="tsd-signature-type">TVMObject</a><span class="tsd-signature-symbol"> | </span><a href="classes/ndarray.html" class="tsd-signature-type">NDArray</a><span class="tsd-signature-symbol"> | </span><a href="classes/module.html" class="tsd-signature-type">Module</a><span class="tsd-signature-symbol"> | </span><a href="index.html#packedfunc" class="t [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L781">runtime.ts:781</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L781">runtime.ts:781</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1435,7 +1435,7 @@
 					<div class="tsd-signature tsd-kind-icon">RPC_<wbr>MAGIC<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">1045105</span><span class="tsd-signature-symbol"> = 1045105</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/rpc_server.ts#L38">rpc_server.ts:38</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/rpc_server.ts#L38">rpc_server.ts:38</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1457,7 +1457,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/support.ts#L25">support.ts:25</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/support.ts#L25">support.ts:25</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1489,7 +1489,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/support.ts#L39">support.ts:39</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/support.ts#L39">support.ts:39</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1518,7 +1518,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/support.ts#L52">support.ts:52</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/support.ts#L52">support.ts:52</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1555,7 +1555,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/compact.ts#L38">compact.ts:38</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/compact.ts#L38">compact.ts:38</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1586,7 +1586,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/webgpu.ts#L30">webgpu.ts:30</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/webgpu.ts#L30">webgpu.ts:30</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1608,7 +1608,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/environment.ts#L32">environment.ts:32</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/environment.ts#L32">environment.ts:32</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1639,7 +1639,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/compact.ts#L24">compact.ts:24</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/compact.ts#L24">compact.ts:24</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1661,7 +1661,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L1749">runtime.ts:1749</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L1749">runtime.ts:1749</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1726,7 +1726,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/support.ts#L62">support.ts:62</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/support.ts#L62">support.ts:62</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1748,7 +1748,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLData<wbr>Type<wbr>Code<wbr>ToStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L343">runtime.ts:343</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L343">runtime.ts:343</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1757,7 +1757,7 @@
 						<div class="tsd-signature tsd-kind-icon">0<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;int&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L344">runtime.ts:344</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L344">runtime.ts:344</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1767,7 +1767,7 @@
 						<div class="tsd-signature tsd-kind-icon">1<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;uint&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L345">runtime.ts:345</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L345">runtime.ts:345</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1777,7 +1777,7 @@
 						<div class="tsd-signature tsd-kind-icon">2<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;float&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L346">runtime.ts:346</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L346">runtime.ts:346</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1787,7 +1787,7 @@
 						<div class="tsd-signature tsd-kind-icon">3<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;handle&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L347">runtime.ts:347</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L347">runtime.ts:347</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1798,7 +1798,7 @@
 					<div class="tsd-signature tsd-kind-icon">Device<wbr>Enum<wbr>ToStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L272">runtime.ts:272</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L272">runtime.ts:272</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1807,7 +1807,7 @@
 						<div class="tsd-signature tsd-kind-icon">1<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;cpu&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L273">runtime.ts:273</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L273">runtime.ts:273</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1817,7 +1817,7 @@
 						<div class="tsd-signature tsd-kind-icon">15<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;webgpu&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L277">runtime.ts:277</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L277">runtime.ts:277</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1827,7 +1827,7 @@
 						<div class="tsd-signature tsd-kind-icon">2<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;cuda&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L274">runtime.ts:274</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L274">runtime.ts:274</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1837,7 +1837,7 @@
 						<div class="tsd-signature tsd-kind-icon">4<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;opencl&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L275">runtime.ts:275</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L275">runtime.ts:275</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1847,7 +1847,7 @@
 						<div class="tsd-signature tsd-kind-icon">8<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;metal&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L276">runtime.ts:276</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L276">runtime.ts:276</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1858,7 +1858,7 @@
 					<div class="tsd-signature tsd-kind-icon">Device<wbr>Str<wbr>ToEnum<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L280">runtime.ts:280</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L280">runtime.ts:280</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1867,7 +1867,7 @@
 						<div class="tsd-signature tsd-kind-icon">cl<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 4</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L283">runtime.ts:283</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L283">runtime.ts:283</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1877,7 +1877,7 @@
 						<div class="tsd-signature tsd-kind-icon">cpu<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 1</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L281">runtime.ts:281</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L281">runtime.ts:281</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1887,7 +1887,7 @@
 						<div class="tsd-signature tsd-kind-icon">cuda<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 2</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L282">runtime.ts:282</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L282">runtime.ts:282</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1897,7 +1897,7 @@
 						<div class="tsd-signature tsd-kind-icon">metal<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 8</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L286">runtime.ts:286</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L286">runtime.ts:286</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1907,7 +1907,7 @@
 						<div class="tsd-signature tsd-kind-icon">opencl<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 4</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L284">runtime.ts:284</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L284">runtime.ts:284</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1917,7 +1917,7 @@
 						<div class="tsd-signature tsd-kind-icon">vulkan<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 7</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L285">runtime.ts:285</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L285">runtime.ts:285</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1927,7 +1927,7 @@
 						<div class="tsd-signature tsd-kind-icon">webgpu<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 15</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/runtime.ts#L287">runtime.ts:287</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/runtime.ts#L287">runtime.ts:287</a></li>
 							</ul>
 						</aside>
 					</section>
diff --git a/docs/reference/api/typedoc/interfaces/disposable.html b/docs/reference/api/typedoc/interfaces/disposable.html
index c19c057446..52ef8e4979 100644
--- a/docs/reference/api/typedoc/interfaces/disposable.html
+++ b/docs/reference/api/typedoc/interfaces/disposable.html
@@ -115,7 +115,7 @@
 					<div class="tsd-signature tsd-kind-icon">dispose<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/types.ts#L52">types.ts:52</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/types.ts#L52">types.ts:52</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/interfaces/functioninfo.html b/docs/reference/api/typedoc/interfaces/functioninfo.html
index 524f38f126..3b49dea117 100644
--- a/docs/reference/api/typedoc/interfaces/functioninfo.html
+++ b/docs/reference/api/typedoc/interfaces/functioninfo.html
@@ -95,7 +95,7 @@
 					<div class="tsd-signature tsd-kind-icon">arg_<wbr>types<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/webgpu.ts#L41">webgpu.ts:41</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/webgpu.ts#L41">webgpu.ts:41</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -105,7 +105,7 @@
 					<div class="tsd-signature tsd-kind-icon">launch_<wbr>param_<wbr>tags<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/webgpu.ts#L42">webgpu.ts:42</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/webgpu.ts#L42">webgpu.ts:42</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -115,7 +115,7 @@
 					<div class="tsd-signature tsd-kind-icon">name<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/webgpu.ts#L40">webgpu.ts:40</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/webgpu.ts#L40">webgpu.ts:40</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/interfaces/libraryprovider.html b/docs/reference/api/typedoc/interfaces/libraryprovider.html
index 62f23aee5b..1b3a1dad55 100644
--- a/docs/reference/api/typedoc/interfaces/libraryprovider.html
+++ b/docs/reference/api/typedoc/interfaces/libraryprovider.html
@@ -112,7 +112,7 @@
 					<div class="tsd-signature tsd-kind-icon">imports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/types.ts#L34">types.ts:34</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/types.ts#L34">types.ts:34</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -127,7 +127,7 @@
 					<div class="tsd-signature tsd-kind-icon">start<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>inst<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">Instance</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/9ff74fb13/web/src/types.ts#L39">types.ts:39</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/304aa1e08/web/src/types.ts#L39">types.ts:39</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/searchindex.js b/docs/searchindex.js
index ede596c968..66b510d6ad 100644
--- a/docs/searchindex.js
+++ b/docs/searchindex.js
@@ -1 +1 @@
-Search.setIndex({docnames:["arch/benchmark","arch/convert_layout","arch/debugger","arch/device_target_interactions","arch/frontend/tensorflow","arch/hybrid_script","arch/index","arch/inferbound","arch/introduction_to_module_serialization","arch/microtvm_design","arch/microtvm_project_api","arch/model_library_format","arch/pass_infra","arch/relay_intro","arch/relay_op_strategy","arch/runtime","arch/runtimes/vulkan","arch/security","arch/virtual_machine","contribute/ci","contribute/code_gu [...]
\ No newline at end of file
+Search.setIndex({docnames:["arch/benchmark","arch/convert_layout","arch/debugger","arch/device_target_interactions","arch/frontend/tensorflow","arch/hybrid_script","arch/index","arch/inferbound","arch/introduction_to_module_serialization","arch/microtvm_design","arch/microtvm_project_api","arch/model_library_format","arch/pass_infra","arch/relay_intro","arch/relay_op_strategy","arch/runtime","arch/runtimes/vulkan","arch/security","arch/virtual_machine","contribute/ci","contribute/code_gu [...]
\ No newline at end of file
diff --git a/docs/topic/vta/tutorials/autotvm/sg_execution_times.html b/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
index 13e010ac70..f6fb67f4ad 100644
--- a/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-autotvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:36.785</strong> total execution time for <strong>topic_vta_tutorials_autotvm</strong> files:</p>
+<p><strong>00:36.375</strong> total execution time for <strong>topic_vta_tutorials_autotvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 82%" />
@@ -359,7 +359,7 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_relay_vta.html#sphx-glr-topic-vta-tutorials-autotvm-tune-relay-vta-py"><span class="std std-ref">Auto-tuning a convolutional network on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_vta.py</span></code>)</p></td>
-<td><p>00:36.778</p></td>
+<td><p>00:36.367</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_alu_vta.html#sphx-glr-topic-vta-tutorials-autotvm-tune-alu-vta-py"><span class="std std-ref">Auto-tuning a ALU fused op on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_alu_vta.py</span></code>)</p></td>
diff --git a/docs/topic/vta/tutorials/frontend/deploy_classification.html b/docs/topic/vta/tutorials/frontend/deploy_classification.html
index 41a2834d38..2e80aab522 100644
--- a/docs/topic/vta/tutorials/frontend/deploy_classification.html
+++ b/docs/topic/vta/tutorials/frontend/deploy_classification.html
@@ -593,7 +593,7 @@ and dense layer which will both be executed in fp32 on the CPU.</p></li>
   warnings.warn(
 /workspace/vta/tutorials/frontend/deploy_classification.py:212: DeprecationWarning: legacy graph executor behavior of producing json / lib / params will be removed in the next release. Please see documents of tvm.contrib.graph_executor.GraphModule for the  new recommended usage.
   graph, lib, params = relay.build(
-resnet18_v1 inference graph built in 39.19s!
+resnet18_v1 inference graph built in 38.68s!
 </pre></div>
 </div>
 </div>
@@ -690,7 +690,7 @@ resnet18_v1 prediction for sample 0
         #5: weasel
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  6.796 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  6.044 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-topic-vta-tutorials-frontend-deploy-classification-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../../../_downloads/9e8de33a5822b31748bfd76861009f92/deploy_classification.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_classification.py</span></code></a></p>
diff --git a/docs/topic/vta/tutorials/frontend/deploy_detection.html b/docs/topic/vta/tutorials/frontend/deploy_detection.html
index 5e2fdaec6b..e410cca6ea 100644
--- a/docs/topic/vta/tutorials/frontend/deploy_detection.html
+++ b/docs/topic/vta/tutorials/frontend/deploy_detection.html
@@ -611,7 +611,7 @@ and dense layer which will both be executed in fp32 on the CPU.</p></li>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/workspace/python/tvm/relay/build_module.py:345: DeprecationWarning: Please use input parameter mod (tvm.IRModule) instead of deprecated parameter mod (tvm.relay.function.Function)
   warnings.warn(
-yolov3-tiny inference graph built in 26.98s!
+yolov3-tiny inference graph built in 26.46s!
 </pre></div>
 </div>
 </div>
@@ -696,7 +696,7 @@ Download test image</p>
         alu_counter     :           849056
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  10.961 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  9.828 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-topic-vta-tutorials-frontend-deploy-detection-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../../../_downloads/65b9451c8de050d7cd9da2fe5a49acc6/deploy_detection.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_detection.py</span></code></a></p>
diff --git a/docs/topic/vta/tutorials/frontend/sg_execution_times.html b/docs/topic/vta/tutorials/frontend/sg_execution_times.html
index 88d0b4ddd3..ea6bd14e17 100644
--- a/docs/topic/vta/tutorials/frontend/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/frontend/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-frontend-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>02:17.757</strong> total execution time for <strong>topic_vta_tutorials_frontend</strong> files:</p>
+<p><strong>02:15.872</strong> total execution time for <strong>topic_vta_tutorials_frontend</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,11 +359,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_detection.html#sphx-glr-topic-vta-tutorials-frontend-deploy-detection-py"><span class="std std-ref">Deploy Pretrained Vision Detection Model from Darknet on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_detection.py</span></code>)</p></td>
-<td><p>01:10.961</p></td>
+<td><p>01:09.828</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_classification.html#sphx-glr-topic-vta-tutorials-frontend-deploy-classification-py"><span class="std std-ref">Deploy Pretrained Vision Model from MxNet on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_classification.py</span></code>)</p></td>
-<td><p>01:06.796</p></td>
+<td><p>01:06.044</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/topic/vta/tutorials/optimize/sg_execution_times.html b/docs/topic/vta/tutorials/optimize/sg_execution_times.html
index ea5891ded6..d099087564 100644
--- a/docs/topic/vta/tutorials/optimize/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/optimize/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-optimize-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:03.461</strong> total execution time for <strong>topic_vta_tutorials_optimize</strong> files:</p>
+<p><strong>00:03.443</strong> total execution time for <strong>topic_vta_tutorials_optimize</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,11 +359,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="convolution_opt.html#sphx-glr-topic-vta-tutorials-optimize-convolution-opt-py"><span class="std std-ref">2D Convolution Optimization</span></a> (<code class="docutils literal notranslate"><span class="pre">convolution_opt.py</span></code>)</p></td>
-<td><p>00:02.905</p></td>
+<td><p>00:02.897</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="matrix_multiply_opt.html#sphx-glr-topic-vta-tutorials-optimize-matrix-multiply-opt-py"><span class="std std-ref">Matrix Multiply Blocking</span></a> (<code class="docutils literal notranslate"><span class="pre">matrix_multiply_opt.py</span></code>)</p></td>
-<td><p>00:00.556</p></td>
+<td><p>00:00.546</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/topic/vta/tutorials/sg_execution_times.html b/docs/topic/vta/tutorials/sg_execution_times.html
index f00aed9559..bc838d1de8 100644
--- a/docs/topic/vta/tutorials/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:00.952</strong> total execution time for <strong>topic_vta_tutorials</strong> files:</p>
+<p><strong>00:00.924</strong> total execution time for <strong>topic_vta_tutorials</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 81%" />
@@ -359,11 +359,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="matrix_multiply.html#sphx-glr-topic-vta-tutorials-matrix-multiply-py"><span class="std std-ref">Simple Matrix Multiply</span></a> (<code class="docutils literal notranslate"><span class="pre">matrix_multiply.py</span></code>)</p></td>
-<td><p>00:00.494</p></td>
+<td><p>00:00.477</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="vta_get_started.html#sphx-glr-topic-vta-tutorials-vta-get-started-py"><span class="std std-ref">Get Started with VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">vta_get_started.py</span></code>)</p></td>
-<td><p>00:00.458</p></td>
+<td><p>00:00.448</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/tutorial/auto_scheduler_matmul_x86.html b/docs/tutorial/auto_scheduler_matmul_x86.html
index b50bcfb67f..54b1208617 100644
--- a/docs/tutorial/auto_scheduler_matmul_x86.html
+++ b/docs/tutorial/auto_scheduler_matmul_x86.html
@@ -502,9 +502,6 @@ trials, we can load the best schedule from the log file and apply it.</p>
 <a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">sch</span></a><span class="p">,</span> <a href="../reference/api/python/ir.html#tvm.ir.Array" title="tvm.ir.Array" class="sphx-glr-backref-module-tvm-ir sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">args</span></a> <span class="o">=</span> <a href="../reference/api/pyth [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>*E
-</pre></div>
-</div>
 </div>
 <div class="section" id="inspecting-the-optimized-schedule">
 <h2>Inspecting the Optimized Schedule<a class="headerlink" href="#inspecting-the-optimized-schedule" title="Permalink to this headline">¶</a></h2>
@@ -582,7 +579,7 @@ class Module:
 <span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 96.087 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 93.225 ms
 </pre></div>
 </div>
 </div>
@@ -644,7 +641,6 @@ resume the status and do more 5 trials.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Resume search:
-*E
 </pre></div>
 </div>
 </div>
@@ -655,7 +651,7 @@ automatically optimize a matrix multiplication, without the need to specify a
 search template.  It ends a series of examples that starts from the Tensor
 Expression (TE) language that demonstrates how TVM can optimize computational
 operations.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  57.137 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  39.181 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorial-auto-scheduler-matmul-x86-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../_downloads/eac4389b114db015e95cb3cdf8b86b83/auto_scheduler_matmul_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">auto_scheduler_matmul_x86.py</span></code></a></p>
diff --git a/docs/tutorial/autotvm_matmul_x86.html b/docs/tutorial/autotvm_matmul_x86.html
index e7e9bf121e..d2f57d3029 100644
--- a/docs/tutorial/autotvm_matmul_x86.html
+++ b/docs/tutorial/autotvm_matmul_x86.html
@@ -690,16 +690,198 @@ reduce variance, we take 5 measurements and average them.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>waiting for device...
 device available
 Get devices for measurement successfully!
-No: 1   GFLOPS: 0.50/0.50       result: MeasureResult(costs=(0.5332579814,), error_no=MeasureErrorNo.NO_ERROR, all_cost=8.813384771347046, timestamp=1690328353.4487314)        [(&#39;tile_y&#39;, [-1, 64]), (&#39;tile_x&#39;, [-1, 1])],None,6
-No: 2   GFLOPS: 1.01/1.01       result: MeasureResult(costs=(0.26523001720000006,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.5062477588653564, timestamp=1690328357.9536412)        [(&#39;tile_y&#39;, [-1, 64]), (&#39;tile_x&#39;, [-1, 2])],None,16
-No: 3   GFLOPS: 7.82/7.82       result: MeasureResult(costs=(0.0343300822,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.8198776245117188, timestamp=1690328358.768012)        [(&#39;tile_y&#39;, [-1, 512]), (&#39;tile_x&#39;, [-1, 16])],None,49
-No: 4   GFLOPS: 2.03/7.82       result: MeasureResult(costs=(0.1321095892,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.377077579498291, timestamp=1690328361.144421) [(&#39;tile_y&#39;, [-1, 256]), (&#39;tile_x&#39;, [-1, 4])],None,28
-No: 5   GFLOPS: 14.33/14.33     result: MeasureResult(costs=(0.0187313796,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6215758323669434, timestamp=1690328361.8819606)       [(&#39;tile_y&#39;, [-1, 32]), (&#39;tile_x&#39;, [-1, 64])],None,65
-No: 6   GFLOPS: 11.09/14.33     result: MeasureResult(costs=(0.0242109118,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.659132719039917, timestamp=1690328362.5401225)        [(&#39;tile_y&#39;, [-1, 4]), (&#39;tile_x&#39;, [-1, 512])],None,92
-No: 7   GFLOPS: 2.03/14.33      result: MeasureResult(costs=(0.1323101692,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.3927040100097656, timestamp=1690328364.918409)        [(&#39;tile_y&#39;, [-1, 128]), (&#39;tile_x&#39;, [-1, 4])],None,27
-No: 8   GFLOPS: 0.47/14.33      result: MeasureResult(costs=(0.5676618674,), error_no=MeasureErrorNo.NO_ERROR, all_cost=9.340614080429077, timestamp=1690328374.27059)  [(&#39;tile_y&#39;, [-1, 512]), (&#39;tile_x&#39;, [-1, 1])],None,9
-No: 9   GFLOPS: 3.21/14.33      result: MeasureResult(costs=(0.08349734959999999,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.5565173625946045, timestamp=1690328375.9469516)        [(&#39;tile_y&#39;, [-1, 2]), (&#39;tile_x&#39;, [-1, 2])],None,11
-No: 10  GFLOPS: 13.91/14.33     result: MeasureResult(costs=(0.019292975799999997,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.5411303043365479, timestamp=1690328376.5198061)       [(&#39;tile_y&#39;, [-1, 16]), (&#39;tile_x&#39;, [-1, 32])],None,54
+No: 1   GFLOPS: 0.00/0.00       result: Traceback (most recent call last):
+  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 712, in __call__
+    yield remote, remote.load_module(os.path.split(build_result.filename)[1])
+  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 679, in run_through_rpc
+    costs = time_f(*args).results
+  File &quot;/workspace/python/tvm/runtime/module.py&quot;, line 398, in evaluator
+    blob = feval(*args)
+  File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 331, in tvm._ffi._cy3.core.PackedFuncBase.__call__
+  File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 262, in tvm._ffi._cy3.core.FuncCall
+  File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 251, in tvm._ffi._cy3.core.FuncCall3
+  File &quot;tvm/_ffi/_cython/./base.pxi&quot;, line 181, in tvm._ffi._cy3.core.CHECK_CALL
+tvm.error.RPCSessionTimeoutError: Traceback (most recent call last):
+  41: 0xffffffffffffffff
+  40: _start
+  39: __libc_start_main
+  38: 0x00007fbfb434bd8f
+  37: Py_BytesMain
+  36: Py_RunMain
+  35: 0x00000000005f3021
+  34: PyObject_Call
+  33: _PyFunction_Vectorcall
+  32: _PyEval_EvalCodeWithName
+  31: _PyEval_EvalFrameDefault
+  30: _PyFunction_Vectorcall
+  29: _PyEval_EvalCodeWithName
+  28: _PyEval_EvalFrameDefault
+  27: 0x000000000051546f
+  26: 0x00000000005dabd0
+  25: PyEval_EvalCode
+  24: _PyEval_EvalCodeWithName
+  23: _PyEval_EvalFrameDefault
+  22: _PyFunction_Vectorcall
+  21: _PyEval_EvalCodeWithName
+  20: _PyEval_EvalFrameDefault
+  19: PyObject_Call
+  18: _PyFunction_Vectorcall
+  17: _PyEval_EvalCodeWithName
+  16: _PyEval_EvalFrameDefault
+  15: PyObject_Call
+  14: _PyFunction_Vectorcall
+  13: _PyEval_EvalCodeWithName
+  12: _PyEval_EvalFrameDefault
+  11: PyObject_Call
+  10: __pyx_pw_3tvm_4_ffi_4_cy3_4core_14PackedFuncBase_5__call__
+        at tvm/_ffi/_cython/core.cpp:8724
+  9: __pyx_pf_3tvm_4_ffi_4_cy3_4core_14PackedFuncBase_4__call__
+        at tvm/_ffi/_cython/core.cpp:8760
+  8: __pyx_f_3tvm_4_ffi_4_cy3_4core_FuncCall
+        at tvm/_ffi/_cython/core.cpp:7821
+  7: __pyx_f_3tvm_4_ffi_4_cy3_4core_FuncCall3
+        at tvm/_ffi/_cython/core.cpp:7699
+  6: tvm::runtime::RPCWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+        at /workspace/src/runtime/rpc/rpc_module.cc:129
+  5: tvm::runtime::RPCClientSession::CallFunc(void*, TVMValue const*, int const*, int, std::function&lt;void (tvm::runtime::TVMArgs)&gt; const&amp;)
+        at /workspace/src/runtime/rpc/rpc_endpoint.cc:1016
+  4: tvm::runtime::RPCEndpoint::CallFunc(void*, TVMValue const*, int const*, int, std::function&lt;void (tvm::runtime::TVMArgs)&gt;)
+        at /workspace/src/runtime/rpc/rpc_endpoint.cc:807
+  3: tvm::runtime::RPCEndpoint::HandleUntilReturnEvent(bool, std::function&lt;void (tvm::runtime::TVMArgs)&gt;)
+        at /workspace/src/runtime/rpc/rpc_endpoint.cc:651
+  2: tvm::runtime::RPCEndpoint::EventHandler::HandleNextEvent(bool, bool, std::function&lt;void (tvm::runtime::TVMArgs)&gt;)
+        at /workspace/src/runtime/rpc/rpc_endpoint.cc:136
+  1: tvm::runtime::RPCEndpoint::EventHandler::HandleProcessPacket(std::function&lt;void (tvm::runtime::TVMArgs)&gt;)
+        at /workspace/src/runtime/rpc/rpc_endpoint.cc:316
+  0: tvm::runtime::RPCEndpoint::EventHandler::HandleReturn(tvm::runtime::RPCCode, std::function&lt;void (tvm::runtime::TVMArgs)&gt;)
+        at /workspace/src/runtime/rpc/rpc_endpoint.cc:380
+  File &quot;/workspace/src/runtime/rpc/rpc_endpoint.cc&quot;, line 380
+RPCSessionTimeoutError: Your 10.0s session has expired, try to increase the &quot;session_timeout&quot; value.
+
+During handling of the above exception, another exception occurred:
+
+Traceback (most recent call last):
+  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 679, in run_through_rpc
+    costs = time_f(*args).results
+  File &quot;/usr/lib/python3.8/contextlib.py&quot;, line 131, in __exit__
+    self.gen.throw(type, value, traceback)
+  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 716, in __call__
+    remote.remove(build_result.filename)
+  File &quot;/workspace/python/tvm/rpc/client.py&quot;, line 145, in remove
+    self._remote_funcs[&quot;remove&quot;] = self.get_function(&quot;tvm.rpc.server.remove&quot;)
+  File &quot;/workspace/python/tvm/rpc/client.py&quot;, line 73, in get_function
+    return self._sess.get_function(name)
+  File &quot;/workspace/python/tvm/runtime/module.py&quot;, line 171, in get_function
+    check_call(
+  File &quot;/workspace/python/tvm/_ffi/base.py&quot;, line 348, in check_call
+    raise get_last_ffi_error()
+tvm.error.InternalError: Traceback (most recent call last):
+  50: 0xffffffffffffffff
+  49: _start
+  48: __libc_start_main
+  47: 0x00007fbfb434bd8f
+  46: Py_BytesMain
+  45: Py_RunMain
+  44: 0x00000000005f3021
+  43: PyObject_Call
+  42: _PyFunction_Vectorcall
+  41: _PyEval_EvalCodeWithName
+  40: _PyEval_EvalFrameDefault
+  39: _PyFunction_Vectorcall
+  38: _PyEval_EvalCodeWithName
+  37: _PyEval_EvalFrameDefault
+  36: 0x000000000051546f
+  35: 0x00000000005dabd0
+  34: PyEval_EvalCode
+  33: _PyEval_EvalCodeWithName
+  32: _PyEval_EvalFrameDefault
+  31: _PyFunction_Vectorcall
+  30: _PyEval_EvalCodeWithName
+  29: _PyEval_EvalFrameDefault
+  28: PyObject_Call
+  27: _PyFunction_Vectorcall
+  26: _PyEval_EvalCodeWithName
+  25: _PyEval_EvalFrameDefault
+  24: 0x0000000000521f82
+  23: _PyFunction_Vectorcall
+  22: _PyEval_EvalFrameDefault
+  21: 0x000000000052f0a9
+  20: 0x0000000000626e1c
+  19: 0x0000000000626f00
+  18: 0x00000000005291f0
+  17: _PyEval_EvalFrameDefault
+  16: _PyFunction_Vectorcall
+  15: _PyEval_EvalFrameDefault
+  14: _PyFunction_Vectorcall
+  13: _PyEval_EvalFrameDefault
+  12: _PyFunction_Vectorcall
+  11: _PyEval_EvalCodeWithName
+  10: _PyEval_EvalFrameDefault
+  9: _PyObject_MakeTpCall
+  8: 0x00007fbfb4167429
+  7: _ctypes_callproc
+  6: 0x00007fbfb3ec8492
+  5: 0x00007fbfb3ecbe2d
+  4: tvm::runtime::ModuleNode::GetFunction(tvm::runtime::String const&amp;, bool)
+        at /workspace/src/runtime/module.cc:66
+  3: tvm::runtime::RPCModuleNode::GetFunction(tvm::runtime::String const&amp;, tvm::runtime::ObjectPtr&lt;tvm::runtime::Object&gt; const&amp;)
+        at /workspace/src/runtime/rpc/rpc_module.cc:187
+  2: tvm::runtime::RPCClientSession::GetFunction(std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;)
+        at /workspace/src/runtime/rpc/rpc_endpoint.cc:1011
+  1: tvm::runtime::TVMRetValue tvm::runtime::RPCEndpoint::SysCallRemote&lt;std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;&gt;(tvm::runtime::RPCCode, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;)
+        at /workspace/src/runtime/rpc/rpc_endpoint.h:223
+  0: operator()
+        at /workspace/src/runtime/rpc/rpc_endpoint.cc:688
+  File &quot;/workspace/src/runtime/rpc/rpc_endpoint.cc&quot;, line 688
+InternalError: Check failed: (code == RPCCode::kReturn) is false: code=1
+
+Traceback (most recent call last):
+  50: 0xffffffffffffffff
+  49: _start
+  48: __libc_start_main
+  47: 0x00007fbfb434bd8f
+  46: Py_BytesMain
+  45: Py_RunMain
+  44: 0x00000000005f3021
+  43: PyObject_Call
+  42: _PyFunction_Vectorcall
+  41: _PyEval_EvalCodeWithName
+  40: _PyEval_EvalFrameDefault
+  39: _PyFunction_Vectorcall
+  38: _PyEval_EvalCodeWithName
+  37: _PyEval_EvalFrameDefault
+  36: 0x000000000051546f
+  35: 0x00000000005dabd0
+  34: PyEval_EvalCode
+  33: _PyEval_EvalCodeWithName
+  32: _PyEval_EvalFrameDefault
+  31: _PyFunction_Vectorcall
+  30: _PyEval_EvalCodeWithName
+  29: _PyEval_EvalFrameDefault
+  28: PyObject_Call
+  27: _PyFunction_Vectorcall
+  26: _PyEval_EvalCodeWithName
+  25: _PyEval_EvalFrameDefault
+  24: 0x0000000000521f82
+  23: _PyFunction_Vectorcall
+  22: _PyEval_EvalFrameDefault
+  21: 0x000000000052f0a9
+  20: 0x0000000000626e1c
+  19: 0x0000000000626f00
+  18: 0x00000000005291f0
+  17: _PyEval_EvalFrameDefault
+  16: _PyFunction_Vectorcall
+  15: _PyEval_EvalFrameDefault
+  14: _PyFunction_Vectorcal     [(&#39;tile_y&#39;, [-1, 512]), (&#39;tile_x&#39;, [-1, 1])],None,9
+No: 2   GFLOPS: 9.71/9.71       result: MeasureResult(costs=(0.027655963999999998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7352867126464844, timestamp=1690344607.5445006)       [(&#39;tile_y&#39;, [-1, 8]), (&#39;tile_x&#39;, [-1, 16])],None,43
+No: 3   GFLOPS: 8.46/9.71       result: MeasureResult(costs=(0.0317474348,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.826432466506958, timestamp=1690344608.324583) [(&#39;tile_y&#39;, [-1, 1]), (&#39;tile_x&#39;, [-1, 128])],None,70
+No: 4   GFLOPS: 7.24/9.71       result: MeasureResult(costs=(0.0370749484,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.8846931457519531, timestamp=1690344609.1849132)       [(&#39;tile_y&#39;, [-1, 64]), (&#39;tile_x&#39;, [-1, 16])],None,46
+No: 5   GFLOPS: 11.42/11.42     result: MeasureResult(costs=(0.023515623,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6434590816497803, timestamp=1690344610.0269935)        [(&#39;tile_y&#39;, [-1, 128]), (&#39;tile_x&#39;, [-1, 512])],None,97
+No: 6   GFLOPS: 3.62/11.42      result: MeasureResult(costs=(0.0742203254,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.4683341979980469, timestamp=1690344611.4810464)       [(&#39;tile_y&#39;, [-1, 128]), (&#39;tile_x&#39;, [-1, 8])],None,37
+No: 7   GFLOPS: 11.59/11.59     result: MeasureResult(costs=(0.0231556352,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6698708534240723, timestamp=1690344612.1197443)       [(&#39;tile_y&#39;, [-1, 32]), (&#39;tile_x&#39;, [-1, 32])],None,55
+No: 8   GFLOPS: 11.78/11.78     result: MeasureResult(costs=(0.022784483,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.706279993057251, timestamp=1690344612.7481692) [(&#39;tile_y&#39;, [-1, 64]), (&#39;tile_x&#39;, [-1, 32])],None,56
+No: 9   GFLOPS: 3.60/11.78      result: MeasureResult(costs=(0.0745082484,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.4252007007598877, timestamp=1690344614.3315804)       [(&#39;tile_y&#39;, [-1, 256]), (&#39;tile_x&#39;, [-1, 8])],None,38
+No: 10  GFLOPS: 9.24/11.78      result: MeasureResult(costs=(0.0290505876,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7512252330780029, timestamp=1690344615.064585)        [(&#39;tile_y&#39;, [-1, 1]), (&#39;tile_x&#39;, [-1, 64])],None,60
 </pre></div>
 </div>
 <p>With tuning completed, we can choose the configuration from the log file that
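
As a minimal sketch of that step (not shown in this diff), the best record in such a log can be applied while building; the file name matmul.log and the matmul(N, L, M, dtype) template function are assumptions carried over from the tutorial:

import tvm
from tvm import autotvm

# Pick the best measured configuration from the tuning log and build with it.
# "matmul.log" and the matmul template are assumed names, not exact tutorial code.
with autotvm.apply_history_best("matmul.log"):
    with tvm.target.Target("llvm"):
        s, arg_bufs = matmul(N, L, M, "float32")
        func = tvm.build(s, arg_bufs)
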
diff --git a/docs/tutorial/autotvm_relay_x86.html b/docs/tutorial/autotvm_relay_x86.html
index e6cb0e0db5..70bfdc70a9 100644
--- a/docs/tutorial/autotvm_relay_x86.html
+++ b/docs/tutorial/autotvm_relay_x86.html
@@ -568,7 +568,7 @@ standard deviation.</p>
 <span class="nb">print</span><span class="p">(</span><a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">unoptimized</span></a><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>{&#39;mean&#39;: 500.13943584000117, &#39;median&#39;: 500.3703938499939, &#39;std&#39;: 1.82944721104681}
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>{&#39;mean&#39;: 498.1850786400082, &#39;median&#39;: 497.6376511000126, &#39;std&#39;: 3.84035680931187}
 </pre></div>
 </div>
 </div>
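
For reference, a {mean, median, std} summary like the one above can be collected from repeated timings. This is only a hypothetical sketch; module.run and the repeat counts are assumptions:

import timeit
import numpy as np

# Time repeated runs of the compiled module and summarize in milliseconds.
timer = timeit.Timer(lambda: module.run())          # `module` is an assumed GraphModule
timings_ms = np.array(timer.repeat(repeat=10, number=10)) * 1000 / 10
print({"mean": np.mean(timings_ms),
       "median": np.median(timings_ms),
       "std": np.std(timings_ms)})
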
@@ -757,178 +757,178 @@ depending on the specifics of the model and the target platform.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  1/25]  Current/Best:   12.94/  18.41 GFLOPS | Progress: (4/20) | 9.20 s
-[Task  1/25]  Current/Best:    6.38/  21.42 GFLOPS | Progress: (8/20) | 12.91 s
-[Task  1/25]  Current/Best:   12.76/  23.25 GFLOPS | Progress: (12/20) | 15.49 s
-[Task  1/25]  Current/Best:   10.29/  23.25 GFLOPS | Progress: (16/20) | 18.39 s
-[Task  1/25]  Current/Best:    7.22/  23.25 GFLOPS | Progress: (20/20) | 21.58 s Done.
+[Task  1/25]  Current/Best:   12.34/  21.72 GFLOPS | Progress: (4/20) | 10.46 s
+[Task  1/25]  Current/Best:   12.86/  21.72 GFLOPS | Progress: (8/20) | 14.05 s
+[Task  1/25]  Current/Best:   16.34/  21.72 GFLOPS | Progress: (12/20) | 16.24 s
+[Task  1/25]  Current/Best:   10.28/  21.72 GFLOPS | Progress: (16/20) | 20.20 s
+[Task  1/25]  Current/Best:   12.80/  22.48 GFLOPS | Progress: (20/20) | 22.58 s Done.
 
 [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  2/25]  Current/Best:   19.00/  21.51 GFLOPS | Progress: (4/20) | 4.70 s
-[Task  2/25]  Current/Best:    9.87/  21.51 GFLOPS | Progress: (8/20) | 6.83 s
-[Task  2/25]  Current/Best:    7.41/  21.51 GFLOPS | Progress: (12/20) | 8.30 s
-[Task  2/25]  Current/Best:   17.40/  21.51 GFLOPS | Progress: (16/20) | 10.71 s
-[Task  2/25]  Current/Best:   14.39/  21.51 GFLOPS | Progress: (20/20) | 12.48 s Done.
+[Task  2/25]  Current/Best:   19.76/  19.76 GFLOPS | Progress: (4/20) | 4.68 s
+[Task  2/25]  Current/Best:    7.49/  19.76 GFLOPS | Progress: (8/20) | 6.30 s
+[Task  2/25]  Current/Best:   21.13/  21.13 GFLOPS | Progress: (12/20) | 7.97 s
+[Task  2/25]  Current/Best:   16.35/  21.13 GFLOPS | Progress: (16/20) | 9.54 s
+[Task  2/25]  Current/Best:    6.79/  21.13 GFLOPS | Progress: (20/20) | 11.10 s Done.
 
 [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  3/25]  Current/Best:   10.52/  18.24 GFLOPS | Progress: (4/20) | 6.24 s
-[Task  3/25]  Current/Best:   20.92/  20.92 GFLOPS | Progress: (8/20) | 8.57 s
-[Task  3/25]  Current/Best:   16.64/  20.92 GFLOPS | Progress: (12/20) | 11.56 s
-[Task  3/25]  Current/Best:   18.79/  20.92 GFLOPS | Progress: (16/20) | 13.93 s
-[Task  3/25]  Current/Best:   10.54/  20.92 GFLOPS | Progress: (20/20) | 16.26 s Done.
+[Task  3/25]  Current/Best:    6.40/  21.67 GFLOPS | Progress: (4/20) | 6.07 s
+[Task  3/25]  Current/Best:   15.36/  21.67 GFLOPS | Progress: (8/20) | 8.53 s
+[Task  3/25]  Current/Best:   12.09/  21.67 GFLOPS | Progress: (12/20) | 11.13 s
+[Task  3/25]  Current/Best:    5.10/  22.51 GFLOPS | Progress: (16/20) | 13.80 s
+[Task  3/25]  Current/Best:    9.42/  22.51 GFLOPS | Progress: (20/20) | 16.09 s Done.
 
 [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  4/25]  Current/Best:    6.42/  14.73 GFLOPS | Progress: (4/20) | 5.95 s
-[Task  4/25]  Current/Best:   14.46/  14.82 GFLOPS | Progress: (8/20) | 9.09 s
-[Task  4/25]  Current/Best:   15.54/  15.54 GFLOPS | Progress: (12/20) | 11.27 s
-[Task  4/25]  Current/Best:   15.27/  20.38 GFLOPS | Progress: (16/20) | 13.54 s
-[Task  4/25]  Current/Best:   19.47/  20.38 GFLOPS | Progress: (20/20) | 15.45 s Done.
+[Task  4/25]  Current/Best:    7.88/  13.82 GFLOPS | Progress: (4/20) | 5.17 s
+[Task  4/25]  Current/Best:    7.44/  18.66 GFLOPS | Progress: (8/20) | 7.04 s
+[Task  4/25]  Current/Best:   13.87/  18.66 GFLOPS | Progress: (12/20) | 9.38 s
+[Task  4/25]  Current/Best:   15.98/  18.66 GFLOPS | Progress: (16/20) | 11.31 s
+[Task  4/25]  Current/Best:    4.39/  20.93 GFLOPS | Progress: (20/20) | 13.38 s Done.
 
 [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  5/25]  Current/Best:   20.40/  20.40 GFLOPS | Progress: (4/20) | 4.90 s
-[Task  5/25]  Current/Best:   11.05/  20.40 GFLOPS | Progress: (8/20) | 8.07 s
-[Task  5/25]  Current/Best:    6.40/  20.40 GFLOPS | Progress: (12/20) | 9.99 s
-[Task  5/25]  Current/Best:    6.64/  20.40 GFLOPS | Progress: (16/20) | 11.97 s
-[Task  5/25]  Current/Best:    5.58/  20.40 GFLOPS | Progress: (20/20) | 14.00 s Done.
+[Task  5/25]  Current/Best:    4.30/  12.90 GFLOPS | Progress: (4/20) | 5.08 s
+[Task  5/25]  Current/Best:   12.00/  15.64 GFLOPS | Progress: (8/20) | 7.32 s
+[Task  5/25]  Current/Best:   16.91/  18.92 GFLOPS | Progress: (12/20) | 9.10 s
+[Task  5/25]  Current/Best:   21.09/  21.09 GFLOPS | Progress: (16/20) | 11.15 s
+[Task  5/25]  Current/Best:   14.56/  21.09 GFLOPS | Progress: (20/20) | 13.61 s Done.
 
 [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  6/25]  Current/Best:   15.24/  18.15 GFLOPS | Progress: (4/20) | 5.41 s
-[Task  6/25]  Current/Best:    7.83/  19.51 GFLOPS | Progress: (8/20) | 8.68 s
-[Task  6/25]  Current/Best:   12.62/  19.51 GFLOPS | Progress: (12/20) | 11.76 s
-[Task  6/25]  Current/Best:   22.76/  22.76 GFLOPS | Progress: (16/20) | 14.21 s
-[Task  6/25]  Current/Best:    4.72/  22.76 GFLOPS | Progress: (20/20) | 16.90 s Done.
+[Task  6/25]  Current/Best:   12.54/  14.21 GFLOPS | Progress: (4/20) | 5.36 s
+[Task  6/25]  Current/Best:    8.87/  18.43 GFLOPS | Progress: (8/20) | 8.52 s
+[Task  6/25]  Current/Best:   15.55/  18.43 GFLOPS | Progress: (12/20) | 11.68 s
+[Task  6/25]  Current/Best:   17.62/  18.43 GFLOPS | Progress: (16/20) | 13.99 s
+[Task  6/25]  Current/Best:   18.26/  18.43 GFLOPS | Progress: (20/20) | 19.17 s Done.
 
 [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  7/25]  Current/Best:    7.68/  11.85 GFLOPS | Progress: (4/20) | 5.64 s
-[Task  7/25]  Current/Best:   11.53/  13.94 GFLOPS | Progress: (8/20) | 8.19 s
-[Task  7/25]  Current/Best:    6.30/  17.69 GFLOPS | Progress: (12/20) | 11.13 s
-[Task  7/25]  Current/Best:    6.97/  17.69 GFLOPS | Progress: (16/20) | 15.90 s
-[Task  7/25]  Current/Best:   10.44/  17.69 GFLOPS | Progress: (20/20) | 19.70 s Done.
+[Task  7/25]  Current/Best:    8.10/  13.84 GFLOPS | Progress: (4/20) | 5.71 s
+[Task  7/25]  Current/Best:   13.70/  13.84 GFLOPS | Progress: (8/20) | 9.01 s
+[Task  7/25]  Current/Best:    5.26/  18.23 GFLOPS | Progress: (12/20) | 11.61 s
+[Task  7/25]  Current/Best:   13.03/  18.23 GFLOPS | Progress: (16/20) | 14.93 s
+[Task  7/25]  Current/Best:   12.22/  20.49 GFLOPS | Progress: (20/20) | 17.76 s Done.
 
 [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  8/25]  Current/Best:   14.71/  14.71 GFLOPS | Progress: (4/20) | 7.59 s
-[Task  8/25]  Current/Best:    1.70/  14.71 GFLOPS | Progress: (8/20) | 11.90 s
-[Task  8/25]  Current/Best:   10.20/  14.71 GFLOPS | Progress: (12/20) | 16.69 s
-[Task  8/25]  Current/Best:   12.28/  14.71 GFLOPS | Progress: (16/20) | 20.57 s
-[Task  8/25]  Current/Best:   13.32/  14.71 GFLOPS | Progress: (20/20) | 23.13 s Done.
+[Task  8/25]  Current/Best:   12.47/  18.04 GFLOPS | Progress: (4/20) | 5.15 s
+[Task  8/25]  Current/Best:    3.14/  18.04 GFLOPS | Progress: (8/20) | 8.30 s
+[Task  8/25]  Current/Best:   12.10/  18.04 GFLOPS | Progress: (12/20) | 13.17 s
+[Task  8/25]  Current/Best:    6.54/  18.04 GFLOPS | Progress: (16/20) | 16.91 s
+[Task  8/25]  Current/Best:   15.19/  18.04 GFLOPS | Progress: (20/20) | 20.01 s Done.
 
 [Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  9/25]  Current/Best:   12.12/  15.99 GFLOPS | Progress: (4/20) | 4.94 s
-[Task  9/25]  Current/Best:   12.29/  15.99 GFLOPS | Progress: (8/20) | 13.06 s
-[Task  9/25]  Current/Best:   15.16/  17.37 GFLOPS | Progress: (12/20) | 24.12 s
-[Task  9/25]  Current/Best:   13.49/  22.24 GFLOPS | Progress: (16/20) | 27.27 s
-[Task  9/25]  Current/Best:   12.67/  22.24 GFLOPS | Progress: (20/20) | 29.25 s
-[Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
-
-[Task 10/25]  Current/Best:   13.36/  15.87 GFLOPS | Progress: (4/20) | 5.41 s
-[Task 10/25]  Current/Best:   15.17/  18.48 GFLOPS | Progress: (8/20) | 7.30 s
-[Task 10/25]  Current/Best:    5.43/  18.48 GFLOPS | Progress: (12/20) | 9.13 s
-[Task 10/25]  Current/Best:   13.33/  18.48 GFLOPS | Progress: (16/20) | 11.43 s
-[Task 10/25]  Current/Best:   18.86/  18.86 GFLOPS | Progress: (20/20) | 13.44 s Done.
+[Task  9/25]  Current/Best:   15.04/  21.12 GFLOPS | Progress: (4/20) | 6.75 s
+[Task  9/25]  Current/Best:    6.40/  21.12 GFLOPS | Progress: (8/20) | 15.16 s
+[Task  9/25]  Current/Best:    8.53/  21.12 GFLOPS | Progress: (12/20) | 23.15 s
+[Task  9/25]  Current/Best:    5.06/  22.59 GFLOPS | Progress: (16/20) | 29.17 s
+[Task  9/25]  Current/Best:    5.43/  22.59 GFLOPS | Progress: (20/20) | 31.21 s Done.
+
+[Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
+[Task 10/25]  Current/Best:   14.70/  14.70 GFLOPS | Progress: (4/20) | 4.82 s
+[Task 10/25]  Current/Best:    3.59/  14.70 GFLOPS | Progress: (8/20) | 7.34 s
+[Task 10/25]  Current/Best:   14.24/  15.07 GFLOPS | Progress: (12/20) | 9.83 s
+[Task 10/25]  Current/Best:   14.92/  17.63 GFLOPS | Progress: (16/20) | 11.51 s
+[Task 10/25]  Current/Best:   11.35/  18.18 GFLOPS | Progress: (20/20) | 13.50 s Done.
 
 [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 11/25]  Current/Best:   15.30/  15.30 GFLOPS | Progress: (4/20) | 5.89 s
-[Task 11/25]  Current/Best:   12.31/  15.30 GFLOPS | Progress: (8/20) | 8.95 s
-[Task 11/25]  Current/Best:   18.58/  18.58 GFLOPS | Progress: (12/20) | 11.36 s
-[Task 11/25]  Current/Best:   16.26/  20.41 GFLOPS | Progress: (16/20) | 13.54 s
-[Task 11/25]  Current/Best:   12.32/  20.41 GFLOPS | Progress: (20/20) | 16.46 s Done.
+[Task 11/25]  Current/Best:   17.91/  23.08 GFLOPS | Progress: (4/20) | 4.73 s
+[Task 11/25]  Current/Best:   23.09/  23.09 GFLOPS | Progress: (8/20) | 6.97 s
+[Task 11/25]  Current/Best:    9.90/  23.09 GFLOPS | Progress: (12/20) | 10.00 s
+[Task 11/25]  Current/Best:   13.88/  23.09 GFLOPS | Progress: (16/20) | 12.92 s
+[Task 11/25]  Current/Best:   19.55/  23.09 GFLOPS | Progress: (20/20) | 14.89 s Done.
 
 [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 12/25]  Current/Best:   12.20/  16.12 GFLOPS | Progress: (4/20) | 7.27 s
-[Task 12/25]  Current/Best:    5.44/  16.12 GFLOPS | Progress: (8/20) | 10.11 s
-[Task 12/25]  Current/Best:    5.76/  16.12 GFLOPS | Progress: (12/20) | 12.71 s
-[Task 12/25]  Current/Best:   15.31/  21.95 GFLOPS | Progress: (16/20) | 15.09 s
-[Task 12/25]  Current/Best:   11.70/  21.95 GFLOPS | Progress: (20/20) | 18.65 s Done.
+[Task 12/25]  Current/Best:   14.98/  14.98 GFLOPS | Progress: (4/20) | 8.14 s
+[Task 12/25]  Current/Best:    3.11/  20.30 GFLOPS | Progress: (8/20) | 10.92 s
+[Task 12/25]  Current/Best:   14.25/  20.30 GFLOPS | Progress: (12/20) | 13.02 s
+[Task 12/25]  Current/Best:   18.26/  20.30 GFLOPS | Progress: (16/20) | 16.91 s
+[Task 12/25]  Current/Best:   18.02/  20.30 GFLOPS | Progress: (20/20) | 19.46 s Done.
 
 [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 13/25]  Current/Best:    9.09/  20.74 GFLOPS | Progress: (4/20) | 5.20 s
-[Task 13/25]  Current/Best:    5.21/  20.74 GFLOPS | Progress: (8/20) | 9.18 s
-[Task 13/25]  Current/Best:    6.22/  20.74 GFLOPS | Progress: (12/20) | 12.94 s
-[Task 13/25]  Current/Best:   12.11/  20.74 GFLOPS | Progress: (16/20) | 15.92 s
-[Task 13/25]  Current/Best:   18.31/  20.74 GFLOPS | Progress: (20/20) | 19.61 s Done.
+[Task 13/25]  Current/Best:    8.69/  21.52 GFLOPS | Progress: (4/20) | 5.40 s
+[Task 13/25]  Current/Best:   16.93/  22.84 GFLOPS | Progress: (8/20) | 7.99 s
+[Task 13/25]  Current/Best:   18.35/  22.84 GFLOPS | Progress: (12/20) | 10.47 s
+[Task 13/25]  Current/Best:    9.44/  22.84 GFLOPS | Progress: (16/20) | 13.70 s
+[Task 13/25]  Current/Best:   11.29/  22.84 GFLOPS | Progress: (20/20) | 18.03 s Done.
 
 [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 14/25]  Current/Best:   10.55/  21.03 GFLOPS | Progress: (4/20) | 9.58 s
-[Task 14/25]  Current/Best:    1.60/  21.03 GFLOPS | Progress: (8/20) | 14.71 s
-[Task 14/25]  Current/Best:   14.98/  21.03 GFLOPS | Progress: (12/20) | 16.63 s
-[Task 14/25]  Current/Best:    5.62/  21.03 GFLOPS | Progress: (16/20) | 28.17 s
-[Task 14/25]  Current/Best:   11.53/  21.03 GFLOPS | Progress: (20/20) | 30.33 s Done.
-
-[Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 15/25]  Current/Best:    5.61/  17.99 GFLOPS | Progress: (4/20) | 8.67 s
-[Task 15/25]  Current/Best:   12.53/  17.99 GFLOPS | Progress: (8/20) | 14.96 s
-[Task 15/25]  Current/Best:   10.77/  23.56 GFLOPS | Progress: (12/20) | 17.67 s
-[Task 15/25]  Current/Best:   12.31/  23.56 GFLOPS | Progress: (16/20) | 29.07 s
-[Task 15/25]  Current/Best:   17.17/  23.56 GFLOPS | Progress: (20/20) | 32.13 s
+[Task 14/25]  Current/Best:   13.03/  13.03 GFLOPS | Progress: (4/20) | 5.68 s
+[Task 14/25]  Current/Best:   16.35/  17.54 GFLOPS | Progress: (8/20) | 9.89 s
+[Task 14/25]  Current/Best:   12.93/  17.54 GFLOPS | Progress: (12/20) | 14.05 s
+[Task 14/25]  Current/Best:   12.08/  17.54 GFLOPS | Progress: (16/20) | 25.61 s
+[Task 14/25]  Current/Best:   18.37/  18.37 GFLOPS | Progress: (20/20) | 34.74 s
+[Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
+
+[Task 15/25]  Current/Best:   12.98/  21.20 GFLOPS | Progress: (4/20) | 5.78 s
+[Task 15/25]  Current/Best:   18.37/  21.20 GFLOPS | Progress: (8/20) | 7.48 s
+[Task 15/25]  Current/Best:   10.39/  22.45 GFLOPS | Progress: (12/20) | 18.25 s
+[Task 15/25]  Current/Best:    5.78/  22.45 GFLOPS | Progress: (16/20) | 25.02 s
+[Task 15/25]  Current/Best:    8.44/  22.45 GFLOPS | Progress: (20/20) | 31.01 s
 [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 16/25]  Current/Best:    4.05/  18.15 GFLOPS | Progress: (4/20) | 4.76 s
-[Task 16/25]  Current/Best:   13.18/  18.15 GFLOPS | Progress: (8/20) | 7.78 s
-[Task 16/25]  Current/Best:    5.76/  20.99 GFLOPS | Progress: (12/20) | 10.04 s
-[Task 16/25]  Current/Best:   14.92/  20.99 GFLOPS | Progress: (16/20) | 11.74 s
-[Task 16/25]  Current/Best:   19.27/  20.99 GFLOPS | Progress: (20/20) | 14.06 s Done.
+[Task 16/25]  Current/Best:   16.40/  19.02 GFLOPS | Progress: (4/20) | 4.59 s
+[Task 16/25]  Current/Best:    5.49/  19.02 GFLOPS | Progress: (8/20) | 7.08 s
+[Task 16/25]  Current/Best:   14.25/  19.02 GFLOPS | Progress: (12/20) | 9.47 s
+[Task 16/25]  Current/Best:   15.03/  19.02 GFLOPS | Progress: (16/20) | 11.98 s
+[Task 16/25]  Current/Best:   10.97/  19.02 GFLOPS | Progress: (20/20) | 14.15 s Done.
 
 [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 17/25]  Current/Best:   23.07/  23.07 GFLOPS | Progress: (4/20) | 5.38 s
-[Task 17/25]  Current/Best:   14.13/  23.07 GFLOPS | Progress: (8/20) | 8.23 s
-[Task 17/25]  Current/Best:   18.11/  23.07 GFLOPS | Progress: (12/20) | 10.98 s
-[Task 17/25]  Current/Best:   10.36/  23.07 GFLOPS | Progress: (16/20) | 14.32 s
-[Task 17/25]  Current/Best:   17.40/  23.57 GFLOPS | Progress: (20/20) | 16.79 s Done.
+[Task 17/25]  Current/Best:   19.22/  19.22 GFLOPS | Progress: (4/20) | 5.60 s
+[Task 17/25]  Current/Best:   15.57/  19.22 GFLOPS | Progress: (8/20) | 8.83 s
+[Task 17/25]  Current/Best:   11.45/  19.88 GFLOPS | Progress: (12/20) | 12.49 s
+[Task 17/25]  Current/Best:   16.10/  19.88 GFLOPS | Progress: (16/20) | 16.84 s
+[Task 17/25]  Current/Best:   20.99/  23.05 GFLOPS | Progress: (20/20) | 19.55 s Done.
 
 [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 18/25]  Current/Best:   10.29/  13.70 GFLOPS | Progress: (4/20) | 7.65 s
-[Task 18/25]  Current/Best:   14.26/  22.17 GFLOPS | Progress: (8/20) | 9.95 s
-[Task 18/25]  Current/Best:   15.51/  22.17 GFLOPS | Progress: (12/20) | 14.28 s
-[Task 18/25]  Current/Best:   10.81/  22.17 GFLOPS | Progress: (16/20) | 16.94 s
-[Task 18/25]  Current/Best:   16.43/  22.17 GFLOPS | Progress: (20/20) | 25.08 s Done.
+[Task 18/25]  Current/Best:    9.32/  17.93 GFLOPS | Progress: (4/20) | 6.58 s
+[Task 18/25]  Current/Best:    8.08/  17.93 GFLOPS | Progress: (8/20) | 10.95 s
+[Task 18/25]  Current/Best:   18.30/  18.30 GFLOPS | Progress: (12/20) | 13.54 s
+[Task 18/25]  Current/Best:   12.52/  18.30 GFLOPS | Progress: (16/20) | 17.01 s
+[Task 18/25]  Current/Best:    4.40/  18.30 GFLOPS | Progress: (20/20) | 20.01 s Done.
 
 [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 19/25]  Current/Best:   18.11/  22.22 GFLOPS | Progress: (4/20) | 6.70 s
-[Task 19/25]  Current/Best:    5.36/  22.22 GFLOPS | Progress: (8/20) | 9.47 s
-[Task 19/25]  Current/Best:    5.24/  22.22 GFLOPS | Progress: (12/20) | 13.71 s
-[Task 19/25]  Current/Best:   10.08/  22.22 GFLOPS | Progress: (16/20) | 19.26 s
-[Task 19/25]  Current/Best:   21.61/  22.22 GFLOPS | Progress: (20/20) | 23.39 s Done.
+[Task 19/25]  Current/Best:   18.01/  18.01 GFLOPS | Progress: (4/20) | 6.82 s
+[Task 19/25]  Current/Best:   11.02/  18.01 GFLOPS | Progress: (8/20) | 12.20 s
+[Task 19/25]  Current/Best:   16.93/  18.01 GFLOPS | Progress: (12/20) | 16.08 s
+[Task 19/25]  Current/Best:   21.66/  21.66 GFLOPS | Progress: (16/20) | 19.16 s
+[Task 19/25]  Current/Best:   12.88/  21.66 GFLOPS | Progress: (20/20) | 25.81 s Done.
 
 [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 20/25]  Current/Best:    4.31/  14.02 GFLOPS | Progress: (4/20) | 14.98 s
-[Task 20/25]  Current/Best:   13.13/  15.90 GFLOPS | Progress: (8/20) | 24.89 s
-[Task 20/25]  Current/Best:   18.56/  18.56 GFLOPS | Progress: (12/20) | 27.97 s
-[Task 20/25]  Current/Best:   12.99/  20.73 GFLOPS | Progress: (16/20) | 30.04 s
-[Task 20/25]  Current/Best:    8.18/  20.73 GFLOPS | Progress: (20/20) | 36.90 s
+[Task 20/25]  Current/Best:   20.17/  20.26 GFLOPS | Progress: (4/20) | 9.67 s
+[Task 20/25]  Current/Best:    5.24/  20.26 GFLOPS | Progress: (8/20) | 21.59 s
+[Task 20/25]  Current/Best:    8.35/  20.26 GFLOPS | Progress: (12/20) | 27.74 s
+[Task 20/25]  Current/Best:   20.04/  20.26 GFLOPS | Progress: (16/20) | 30.77 s
+[Task 20/25]  Current/Best:   18.44/  20.26 GFLOPS | Progress: (20/20) | 37.65 s
 [Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 21/25]  Current/Best:   10.97/  21.03 GFLOPS | Progress: (4/20) | 14.13 s
-[Task 21/25]  Current/Best:   18.28/  21.03 GFLOPS | Progress: (8/20) | 17.10 s
-[Task 21/25]  Current/Best:    9.90/  21.03 GFLOPS | Progress: (12/20) | 24.75 s
-[Task 21/25]  Current/Best:    9.45/  21.03 GFLOPS | Progress: (16/20) | 32.32 s
-[Task 21/25]  Current/Best:   17.92/  21.03 GFLOPS | Progress: (20/20) | 41.19 s
-[Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
- Done.
- Done.
-
-[Task 22/25]  Current/Best:   16.38/  18.19 GFLOPS | Progress: (4/20) | 6.03 s
-[Task 22/25]  Current/Best:   12.41/  18.19 GFLOPS | Progress: (8/20) | 9.28 s
-[Task 22/25]  Current/Best:    7.57/  18.19 GFLOPS | Progress: (12/20) | 14.23 s
-[Task 22/25]  Current/Best:   17.91/  18.19 GFLOPS | Progress: (16/20) | 16.85 s
-[Task 22/25]  Current/Best:    8.96/  18.19 GFLOPS | Progress: (20/20) | 19.84 s Done.
+[Task 21/25]  Current/Best:   17.76/  17.76 GFLOPS | Progress: (4/20) | 5.75 s
+[Task 21/25]  Current/Best:   12.55/  21.68 GFLOPS | Progress: (8/20) | 7.72 s
+[Task 21/25]  Current/Best:   10.28/  21.68 GFLOPS | Progress: (12/20) | 10.32 s
+[Task 21/25]  Current/Best:   17.55/  21.68 GFLOPS | Progress: (16/20) | 19.88 s
+[Task 21/25]  Current/Best:   19.44/  21.68 GFLOPS | Progress: (20/20) | 31.08 s
+[Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
+[Task 22/25]  Current/Best:   16.07/  21.24 GFLOPS | Progress: (4/20) | 5.87 s
+[Task 22/25]  Current/Best:    6.14/  21.24 GFLOPS | Progress: (8/20) | 7.94 s
+[Task 22/25]  Current/Best:   12.42/  21.24 GFLOPS | Progress: (12/20) | 10.11 s
+[Task 22/25]  Current/Best:    7.28/  21.24 GFLOPS | Progress: (16/20) | 13.79 s
+[Task 22/25]  Current/Best:    8.62/  21.24 GFLOPS | Progress: (20/20) | 16.63 s Done.
 
 [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 23/25]  Current/Best:   19.06/  19.06 GFLOPS | Progress: (4/20) | 5.78 s
-[Task 23/25]  Current/Best:   12.10/  22.21 GFLOPS | Progress: (8/20) | 8.25 s
-[Task 23/25]  Current/Best:   20.03/  22.21 GFLOPS | Progress: (12/20) | 11.44 s
-[Task 23/25]  Current/Best:   11.44/  22.21 GFLOPS | Progress: (16/20) | 14.84 s
-[Task 23/25]  Current/Best:   18.66/  22.82 GFLOPS | Progress: (20/20) | 18.02 s Done.
+[Task 23/25]  Current/Best:   23.04/  23.04 GFLOPS | Progress: (4/20) | 6.56 s
+[Task 23/25]  Current/Best:   22.69/  23.04 GFLOPS | Progress: (8/20) | 11.41 s
+[Task 23/25]  Current/Best:   13.39/  23.04 GFLOPS | Progress: (12/20) | 14.34 s
+[Task 23/25]  Current/Best:   21.23/  23.04 GFLOPS | Progress: (16/20) | 18.28 s
+[Task 23/25]  Current/Best:    5.16/  23.04 GFLOPS | Progress: (20/20) | 21.70 s Done.
 
 [Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 24/25]  Current/Best:    8.95/   8.95 GFLOPS | Progress: (4/20) | 13.91 s
-[Task 24/25]  Current/Best:    5.80/   8.95 GFLOPS | Progress: (8/20) | 18.72 s
-[Task 24/25]  Current/Best:    1.27/   8.95 GFLOPS | Progress: (12/20) | 29.46 s
-[Task 24/25]  Current/Best:    3.31/   8.95 GFLOPS | Progress: (16/20) | 42.08 s
-[Task 24/25]  Current/Best:    2.97/   8.95 GFLOPS | Progress: (20/20) | 55.16 s
+[Task 24/25]  Current/Best:    7.81/   9.58 GFLOPS | Progress: (4/20) | 10.07 s
+[Task 24/25]  Current/Best:    3.24/   9.58 GFLOPS | Progress: (8/20) | 21.14 s Done.
+ Done.
+ Done.
+
+[Task 24/25]  Current/Best:    9.42/   9.61 GFLOPS | Progress: (12/20) | 33.47 s
+[Task 24/25]  Current/Best:    0.98/   9.61 GFLOPS | Progress: (16/20) | 41.49 s
+[Task 24/25]  Current/Best:    8.31/   9.61 GFLOPS | Progress: (20/20) | 52.58 s
 [Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 25/25]  Current/Best:    3.03/   8.77 GFLOPS | Progress: (4/20) | 11.06 s
-[Task 25/25]  Current/Best:    1.54/   9.17 GFLOPS | Progress: (8/20) | 14.52 s
-[Task 25/25]  Current/Best:    1.55/   9.17 GFLOPS | Progress: (12/20) | 25.53 s
-[Task 25/25]  Current/Best:    5.58/   9.17 GFLOPS | Progress: (16/20) | 29.13 s
-[Task 25/25]  Current/Best:    6.64/   9.17 GFLOPS | Progress: (20/20) | 40.13 s
+[Task 25/25]  Current/Best:    5.61/   8.03 GFLOPS | Progress: (4/20) | 5.81 s
+[Task 25/25]  Current/Best:    1.55/   9.25 GFLOPS | Progress: (8/20) | 7.25 s
+[Task 25/25]  Current/Best:    2.88/   9.25 GFLOPS | Progress: (12/20) | 10.72 s
+[Task 25/25]  Current/Best:    8.06/   9.25 GFLOPS | Progress: (16/20) | 17.15 s
+[Task 25/25]  Current/Best:    8.86/   9.25 GFLOPS | Progress: (20/20) | 28.19 s
 </pre></div>
 </div>
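
The per-task progress lines above come from AutoTVM's task loop. A rough sketch of such a loop follows; `tasks`, `measure_option`, the trial count, and the log file name are assumptions rather than the tutorial's exact values:

from tvm import autotvm
from tvm.autotvm.tuner import XGBTuner

# Tune each extracted task in turn, logging results and showing a progress bar.
for i, task in enumerate(tasks):
    prefix = "[Task %2d/%2d] " % (i + 1, len(tasks))
    tuner = XGBTuner(task, loss_type="rank")
    tuner.tune(
        n_trial=20,
        measure_option=measure_option,
        callbacks=[
            autotvm.callback.progress_bar(20, prefix=prefix),
            autotvm.callback.log_to_file("autotuning.json"),
        ],
    )
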
 <p>The output from this tuning process will look something like this:</p>
@@ -1031,8 +1031,8 @@ improvement in comparing the optimized model to the unoptimized model.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;unoptimized: </span><span class="si">%s</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="p">(</span><a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">unoptimized</span></a><span class="p">))</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>optimized: {&#39;mean&#39;: 413.3076727600019, &#39;median&#39;: 413.26256724998984, &#39;std&#39;: 0.9493048239191954}
-unoptimized: {&#39;mean&#39;: 500.13943584000117, &#39;median&#39;: 500.3703938499939, &#39;std&#39;: 1.82944721104681}
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>optimized: {&#39;mean&#39;: 408.6138473600022, &#39;median&#39;: 408.82695934997173, &#39;std&#39;: 0.9052774125969995}
+unoptimized: {&#39;mean&#39;: 498.1850786400082, &#39;median&#39;: 497.6376511000126, &#39;std&#39;: 3.84035680931187}
 </pre></div>
 </div>
 </div>
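
From the two dictionaries above, the improvement can be quantified directly; a small sketch using the numbers from this particular run (milliseconds, rounded):

optimized = {"mean": 408.61, "median": 408.83, "std": 0.91}
unoptimized = {"mean": 498.19, "median": 497.64, "std": 3.84}
print("speedup: %.2fx" % (unoptimized["mean"] / optimized["mean"]))  # about 1.22x
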
@@ -1046,7 +1046,7 @@ models.</p>
 <p>Here we presented a simple example using ResNet-50 v2 locally. However, TVM
 supports many more features including cross-compilation, remote execution and
 profiling/benchmarking.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 14 minutes  5.972 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 13 minutes  35.613 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorial-autotvm-relay-x86-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../_downloads/57a45d9bef1af358191e7d50043e652c/autotvm_relay_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">autotvm_relay_x86.py</span></code></a></p>
diff --git a/docs/tutorial/cross_compilation_and_rpc.html b/docs/tutorial/cross_compilation_and_rpc.html
index c97f572e0b..9b180c8a1e 100644
--- a/docs/tutorial/cross_compilation_and_rpc.html
+++ b/docs/tutorial/cross_compilation_and_rpc.html
@@ -548,7 +548,7 @@ device and returns the measured cost. Network overhead is excluded.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;</span><span class="si">%g</span><span class="s2"> secs/op&quot;</span> <span class="o">%</span> <span class="n">cost</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>1.595e-07 secs/op
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>1.3e-07 secs/op
 </pre></div>
 </div>
 </div>
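
A minimal sketch of how a per-op cost like this is typically measured over the RPC session; `remote`, `func`, and the argument arrays are assumptions:

# Time the remote kernel; time_evaluator excludes network overhead from the cost.
dev = remote.cpu(0)                                   # assumed RPC session
time_f = func.time_evaluator(func.entry_name, dev, number=10)
cost = time_f(a, b).mean
print("%g secs/op" % cost)
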
diff --git a/docs/tutorial/intro_topi.html b/docs/tutorial/intro_topi.html
index cf6b6a9451..571bcd533d 100644
--- a/docs/tutorial/intro_topi.html
+++ b/docs/tutorial/intro_topi.html
@@ -518,7 +518,7 @@ class Module:
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/ir.html#tvm.ir.Array" title="tvm.ir.Array" class="sphx-glr-backref-module-tvm-ir sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">sg</span><span class="o">.</span><span class="n">stages</span></a><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[stage(a, placeholder(a, 0xe4bc310)), stage(b, placeholder(b, 0x22373140)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax1, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax2, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;)], reduce_axis=[], tag=broadcast, attrs [...]
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[stage(a, placeholder(a, 0x24df5c70)), stage(b, placeholder(b, 0xa2fbd90)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax1, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax2, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;)], reduce_axis=[], tag=broadcast, attrs [...]
 </pre></div>
 </div>
 <p>We can test the correctness by comparing with the <code class="code docutils literal notranslate"><span class="pre">numpy</span></code> result as follows</p>
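
A minimal sketch of that check, assuming numpy arrays a_np and b_np matching the placeholder shapes printed above and building the schedule `sg` with assumed tensors a, b, c:

import numpy as np
import tvm
import tvm.testing

func = tvm.build(sg, [a, b, c], "llvm")               # a, b, c are assumed tensors
dev = tvm.cpu(0)
a_nd = tvm.nd.array(a_np, dev)
b_nd = tvm.nd.array(b_np, dev)
c_nd = tvm.nd.array(np.zeros((100, 10, 10), dtype="float32"), dev)
func(a_nd, b_nd, c_nd)
tvm.testing.assert_allclose(c_nd.numpy(), a_np + b_np, rtol=1e-5)
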
diff --git a/docs/tutorial/sg_execution_times.html b/docs/tutorial/sg_execution_times.html
index 08284e283e..c900e99e13 100644
--- a/docs/tutorial/sg_execution_times.html
+++ b/docs/tutorial/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-tutorial-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>18:28.867</strong> total execution time for <strong>tutorial</strong> files:</p>
+<p><strong>17:28.900</strong> total execution time for <strong>tutorial</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -359,35 +359,35 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="autotvm_relay_x86.html#sphx-glr-tutorial-autotvm-relay-x86-py"><span class="std std-ref">Compiling and Optimizing a Model with the Python Interface (AutoTVM)</span></a> (<code class="docutils literal notranslate"><span class="pre">autotvm_relay_x86.py</span></code>)</p></td>
-<td><p>14:05.972</p></td>
+<td><p>13:35.613</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="auto_scheduler_matmul_x86.html#sphx-glr-tutorial-auto-scheduler-matmul-x86-py"><span class="std std-ref">Optimizing Operators with Auto-scheduling</span></a> (<code class="docutils literal notranslate"><span class="pre">auto_scheduler_matmul_x86.py</span></code>)</p></td>
-<td><p>01:57.137</p></td>
+<td><p>01:39.181</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tensor_expr_get_started.html#sphx-glr-tutorial-tensor-expr-get-started-py"><span class="std std-ref">Working with Operators Using Tensor Expression</span></a> (<code class="docutils literal notranslate"><span class="pre">tensor_expr_get_started.py</span></code>)</p></td>
-<td><p>01:00.483</p></td>
+<td><p>01:02.094</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="relay_quick_start.html#sphx-glr-tutorial-relay-quick-start-py"><span class="std std-ref">Quick Start Tutorial for Compiling Deep Learning Models</span></a> (<code class="docutils literal notranslate"><span class="pre">relay_quick_start.py</span></code>)</p></td>
-<td><p>00:44.372</p></td>
+<td><p>00:44.200</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="autotvm_matmul_x86.html#sphx-glr-tutorial-autotvm-matmul-x86-py"><span class="std std-ref">Optimizing Operators with Schedule Templates and AutoTVM</span></a> (<code class="docutils literal notranslate"><span class="pre">autotvm_matmul_x86.py</span></code>)</p></td>
-<td><p>00:38.779</p></td>
+<td><p>00:25.684</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="intro_topi.html#sphx-glr-tutorial-intro-topi-py"><span class="std std-ref">Introduction to TOPI</span></a> (<code class="docutils literal notranslate"><span class="pre">intro_topi.py</span></code>)</p></td>
-<td><p>00:01.024</p></td>
+<td><p>00:01.039</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tensor_ir_blitz_course.html#sphx-glr-tutorial-tensor-ir-blitz-course-py"><span class="std std-ref">Blitz Course to TensorIR</span></a> (<code class="docutils literal notranslate"><span class="pre">tensor_ir_blitz_course.py</span></code>)</p></td>
-<td><p>00:00.871</p></td>
+<td><p>00:00.870</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="cross_compilation_and_rpc.html#sphx-glr-tutorial-cross-compilation-and-rpc-py"><span class="std std-ref">Cross Compilation and RPC</span></a> (<code class="docutils literal notranslate"><span class="pre">cross_compilation_and_rpc.py</span></code>)</p></td>
-<td><p>00:00.229</p></td>
+<td><p>00:00.219</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="uma.html#sphx-glr-tutorial-uma-py"><span class="std std-ref">Making your Hardware Accelerator TVM-ready with UMA</span></a> (<code class="docutils literal notranslate"><span class="pre">uma.py</span></code>)</p></td>
diff --git a/docs/tutorial/tensor_expr_get_started.html b/docs/tutorial/tensor_expr_get_started.html
index 9005bf49fd..05cf23e3e6 100644
--- a/docs/tutorial/tensor_expr_get_started.html
+++ b/docs/tutorial/tensor_expr_get_started.html
@@ -615,7 +615,7 @@ compile and run this new schedule with the parallel operation applied:</p>
 <span class="n">evaluate_addition</span><span class="p">(</span><span class="n">fadd_parallel</span><span class="p">,</span> <a href="../reference/api/python/target.html#tvm.target.Target" title="tvm.target.Target" class="sphx-glr-backref-module-tvm-target sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">tgt</span></a><span class="p">,</span> <span class="s2">&quot;parallel&quot;</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.h [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallel: 0.000008
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallel: 0.000007
 </pre></div>
 </div>
 </div>
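
A minimal sketch of the schedule measured here, assuming the element-wise add producing C from A and B and the target `tgt` defined earlier in the tutorial:

import tvm
from tvm import te

# Parallelize the outer axis of the element-wise add across CPU threads.
s = te.create_schedule(C.op)
s[C].parallel(C.op.axis[0])
fadd_parallel = tvm.build(s, [A, B, C], tgt, name="myadd_parallel")
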
@@ -654,7 +654,7 @@ factor to be the number of threads on your CPU.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vector: 0.000039
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vector: 0.000040
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -691,10 +691,10 @@ class Module:
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Operator                  Timing             Performance
-   numpy    7.516969999414869e-06                    1.0
-   naive    6.7544999999999995e-06    0.8985668428270672
-parallel              8.1922e-06      1.0898274172489304
-  vector    3.9073900000000005e-05       5.1980917847273
+   numpy    7.760499993310078e-06                    1.0
+   naive              7.0947e-06      0.9142065596438337
+parallel              7.3656e-06      0.9491141042908962
+  vector             3.95142e-05      5.0917080128939025
 </pre></div>
 </div>
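
The `vector` row in the comparison above corresponds to a split-and-vectorize schedule. A rough sketch follows, where the split factor of 4 is an assumption (the tutorial suggests choosing it to suit your CPU):

# Split the loop, run the outer chunks in parallel, and vectorize the inner chunk.
factor = 4                                   # assumed; tune to your CPU
s = te.create_schedule(C.op)
outer, inner = s[C].split(C.op.axis[0], factor=factor)
s[C].parallel(outer)
s[C].vectorize(inner)
fadd_vector = tvm.build(s, [A, B, C], tgt, name="myadd_vector")
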
 <div class="admonition-code-specialization admonition">
@@ -1010,7 +1010,7 @@ matrix multiplication.</p>
 <span class="n">answer</span> <span class="o">=</span> <span class="n">numpy</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">a</span><span class="o">.</span><span class="n">numpy</span><span class="p">(),</span> <span class="n">b</span><span class="o">.</span><span class="n">numpy</span><span class="p">())</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.019325
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.018652
 </pre></div>
 </div>
 <p>Now we write a basic matrix multiplication using TVM TE and verify that it
@@ -1051,7 +1051,7 @@ optimizations.</p>
 <span class="n">evaluate_operation</span><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">s</span></a><span class="p">,</span> <span class="p">[</span><a href="../reference/api/python/te.html#tvm.te.Tensor" title="tvm.te.Tensor" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>none: 3.359379
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>none: 3.521342
 </pre></div>
 </div>
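
The `none` timing above is the default schedule of a plain TE matrix multiplication. A minimal sketch of that definition, where the sizes, dtype, and target are assumptions:

import tvm
from tvm import te

M = K = N = 1024                              # assumed sizes
k = te.reduce_axis((0, K), "k")
A = te.placeholder((M, K), name="A")
B = te.placeholder((K, N), name="B")
C = te.compute((M, N), lambda m, n: te.sum(A[m, k] * B[k, n], axis=k), name="C")
s = te.create_schedule(C.op)
func = tvm.build(s, [A, B, C], target="llvm", name="mmult")
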
 <p>Let’s take a look at the intermediate representation of the operator and
@@ -1115,7 +1115,7 @@ schedule.</p>
 <span class="n">evaluate_operation</span><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">s</span></a><span class="p">,</span> <span class="p">[</span><a href="../reference/api/python/te.html#tvm.te.Tensor" title="tvm.te.Tensor" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>blocking: 0.321201
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>blocking: 0.315924
 </pre></div>
 </div>
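
A sketch of the blocked schedule behind the `blocking` timing, continuing from the matmul definition above; the block size of 32 and the reduction split factor of 4 are assumptions:

# Tile the output into bn x bn blocks and split the reduction axis for locality.
bn = 32
s = te.create_schedule(C.op)
xo, yo, xi, yi = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
(k,) = s[C].op.reduce_axis
ko, ki = s[C].split(k, factor=4)
s[C].reorder(xo, yo, ko, ki, xi, yi)
func = tvm.build(s, [A, B, C], target="llvm", name="mmult_blocked")
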
 <p>By reordering the computation to take advantage of caching, you should see a
@@ -1164,7 +1164,7 @@ already cache friendly from our previous optimizations.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vectorization: 0.319568
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vectorization: 0.291435
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1213,7 +1213,7 @@ more cache friendly.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>loop permutation: 0.121236
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>loop permutation: 0.116806
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1283,7 +1283,7 @@ optimized schedule.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>array packing: 0.106509
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>array packing: 0.107784
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1349,7 +1349,7 @@ to `C</cite> when all the block results are ready.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>block caching: 0.113137
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>block caching: 0.111464
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1406,7 +1406,7 @@ of thread-level parallelization.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallelization: 0.133806
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallelization: 0.132848
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1459,13 +1459,13 @@ working, we can compare the results.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>        Operator                  Timing             Performance
-            none            3.3593791128                     1.0
-        blocking            0.3212007641     0.09561313365203465
-   vectorization     0.31956787359999994     0.09512706451688453
-loop permutation            0.1212361367    0.036088852323354244
-   array packing              0.10650887     0.03170492713792764
-   block caching            0.1131365701      0.0336778214965152
- parallelization            0.1338055825    0.039830450213305856
+            none            3.5213420559                     1.0
+        blocking            0.3159241834      0.0897169824415864
+   vectorization     0.29143462470000003     0.08276237300256079
+loop permutation            0.1168059489     0.03317086129258358
+   array packing            0.1077835381    0.030608653288711034
+   block caching            0.1114637584     0.03165377195130554
+ parallelization            0.1328480482     0.03772653894199612
 </pre></div>
 </div>
 <p>Note that the outputs on the web page reflect the running times on a
@@ -1497,7 +1497,7 @@ is</p>
 you can build generic templates of the matrix multiplication and other
 operations with tunable parameters that allow you to automatically optimize
 the computation for specific platforms.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  0.483 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  2.094 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorial-tensor-expr-get-started-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../_downloads/40a01cffb015a67aaec0fad7e27cf80d/tensor_expr_get_started.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tensor_expr_get_started.py</span></code></a></p>