Posted to commits@tvm.apache.org by tq...@apache.org on 2023/06/19 15:37:28 UTC

[tvm-site] branch asf-site updated: deploying docs (apache/tvm@e280e01fc1d2f79cc4f1cda2257a51bd20605bcd)

This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/tvm-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 328cdf3160 deploying docs (apache/tvm@e280e01fc1d2f79cc4f1cda2257a51bd20605bcd)
328cdf3160 is described below

commit 328cdf3160fec0e687d530b272d1022d63b25605
Author: tvm-bot <95...@users.noreply.github.com>
AuthorDate: Mon Jun 19 15:37:21 2023 +0000

    deploying docs (apache/tvm@e280e01fc1d2f79cc4f1cda2257a51bd20605bcd)
---
 .../how_to/compile_models/from_darknet.rst.txt     |   2 +-
 .../how_to/compile_models/from_mxnet.rst.txt       |   2 +-
 .../how_to/compile_models/from_oneflow.rst.txt     |   2 +-
 .../how_to/compile_models/from_paddle.rst.txt      |   2 +-
 .../how_to/compile_models/from_pytorch.rst.txt     |   2 +-
 .../how_to/compile_models/from_tensorflow.rst.txt  |   2 +-
 .../compile_models/sg_execution_times.rst.txt      |  22 +-
 .../deploy_models/deploy_model_on_adreno.rst.txt   |   4 +-
 .../deploy_model_on_adreno_tvmc.rst.txt            |   2 +-
 .../deploy_models/deploy_model_on_android.rst.txt  |   2 +-
 .../deploy_object_detection_pytorch.rst.txt        |   4 +-
 .../deploy_models/deploy_prequantized.rst.txt      |   6 +-
 .../deploy_prequantized_tflite.rst.txt             |   2 +-
 .../how_to/deploy_models/deploy_quantized.rst.txt  |   2 +-
 .../deploy_models/sg_execution_times.rst.txt       |  20 +-
 .../extend_tvm/bring_your_own_datatypes.rst.txt    |   2 +-
 .../how_to/extend_tvm/sg_execution_times.rst.txt   |   8 +-
 .../how_to/extend_tvm/use_pass_instrument.rst.txt  |  16 +-
 .../optimize_operators/opt_conv_cuda.rst.txt       |   2 +-
 .../optimize_operators/opt_conv_tensorcore.rst.txt |   2 +-
 .../how_to/optimize_operators/opt_gemm.rst.txt     |  16 +-
 .../optimize_operators/sg_execution_times.rst.txt  |   8 +-
 .../sg_execution_times.rst.txt                     |  14 +-
 .../tune_conv2d_layer_cuda.rst.txt                 |   2 +-
 .../tune_network_cuda.rst.txt                      |   4 +-
 .../tune_network_x86.rst.txt                       |   4 +-
 .../tune_with_autotvm/sg_execution_times.rst.txt   |   6 +-
 .../tune_with_autotvm/tune_conv2d_cuda.rst.txt     |   2 +-
 .../work_with_microtvm/micro_autotune.rst.txt      |  18 +-
 .../work_with_microtvm/micro_pytorch.rst.txt       |   4 +-
 .../how_to/work_with_microtvm/micro_train.rst.txt  |  16 +-
 .../work_with_microtvm/sg_execution_times.rst.txt  |  14 +-
 .../work_with_relay/sg_execution_times.rst.txt     |   8 +-
 .../how_to/work_with_schedules/intrin_math.rst.txt |   2 +-
 .../work_with_schedules/sg_execution_times.rst.txt |  18 +-
 .../tutorials/autotvm/sg_execution_times.rst.txt   |   4 +-
 .../frontend/deploy_classification.rst.txt         |   2 +-
 .../tutorials/frontend/deploy_detection.rst.txt    |   7 +-
 .../tutorials/frontend/sg_execution_times.rst.txt  |   6 +-
 .../tutorials/optimize/sg_execution_times.rst.txt  |   6 +-
 .../topic/vta/tutorials/sg_execution_times.rst.txt |   6 +-
 .../tutorial/auto_scheduler_matmul_x86.rst.txt     |  13 +-
 docs/_sources/tutorial/autotvm_matmul_x86.rst.txt  |  20 +-
 docs/_sources/tutorial/autotvm_relay_x86.rst.txt   |  58 ++---
 .../tutorial/cross_compilation_and_rpc.rst.txt     |   2 +-
 docs/_sources/tutorial/intro_topi.rst.txt          |   2 +-
 docs/_sources/tutorial/sg_execution_times.rst.txt  |  22 +-
 .../tutorial/tensor_expr_get_started.rst.txt       |  44 ++--
 docs/commit_hash                                   |   2 +-
 docs/how_to/compile_models/from_darknet.html       |   2 +-
 docs/how_to/compile_models/from_mxnet.html         |   2 +-
 docs/how_to/compile_models/from_oneflow.html       |  11 +-
 docs/how_to/compile_models/from_paddle.html        |   2 +-
 docs/how_to/compile_models/from_pytorch.html       |  13 +-
 docs/how_to/compile_models/from_tensorflow.html    |   2 +-
 docs/how_to/compile_models/sg_execution_times.html |  26 +-
 .../deploy_models/deploy_model_on_adreno.html      |   4 +-
 .../deploy_models/deploy_model_on_adreno_tvmc.html |  36 ++-
 .../deploy_models/deploy_model_on_android.html     |   2 +-
 .../deploy_object_detection_pytorch.html           |  59 +++--
 docs/how_to/deploy_models/deploy_prequantized.html |   9 +-
 .../deploy_models/deploy_prequantized_tflite.html  |   2 +-
 docs/how_to/deploy_models/deploy_quantized.html    |   2 +-
 docs/how_to/deploy_models/sg_execution_times.html  |  20 +-
 .../extend_tvm/bring_your_own_datatypes.html       |   2 +-
 docs/how_to/extend_tvm/sg_execution_times.html     |   8 +-
 docs/how_to/extend_tvm/use_pass_instrument.html    |  16 +-
 docs/how_to/optimize_operators/opt_conv_cuda.html  |   2 +-
 .../optimize_operators/opt_conv_tensorcore.html    |   2 +-
 docs/how_to/optimize_operators/opt_gemm.html       |  16 +-
 .../optimize_operators/sg_execution_times.html     |   8 +-
 .../sg_execution_times.html                        |  14 +-
 .../tune_conv2d_layer_cuda.html                    |   2 +-
 .../tune_with_autoscheduler/tune_network_cuda.html |   4 +-
 .../tune_with_autoscheduler/tune_network_x86.html  |   4 +-
 .../tune_with_autotvm/sg_execution_times.html      |   6 +-
 .../how_to/tune_with_autotvm/tune_conv2d_cuda.html |   2 +-
 docs/how_to/work_with_microtvm/micro_autotune.html |  18 +-
 docs/how_to/work_with_microtvm/micro_pytorch.html  |   5 +-
 docs/how_to/work_with_microtvm/micro_train.html    |  16 +-
 .../work_with_microtvm/sg_execution_times.html     |  18 +-
 .../how_to/work_with_relay/sg_execution_times.html |   8 +-
 docs/how_to/work_with_schedules/intrin_math.html   |   2 +-
 .../work_with_schedules/sg_execution_times.html    |  18 +-
 docs/reference/api/python/auto_scheduler.html      |   4 +-
 .../api/typedoc/classes/bytestreamreader.html      |  12 +-
 .../api/typedoc/classes/cachedcallstack.html       |  34 +--
 docs/reference/api/typedoc/classes/dldatatype.html |  12 +-
 docs/reference/api/typedoc/classes/dldevice.html   |  10 +-
 .../reference/api/typedoc/classes/environment.html |  12 +-
 docs/reference/api/typedoc/classes/ffilibrary.html |  20 +-
 docs/reference/api/typedoc/classes/instance.html   |  58 ++---
 docs/reference/api/typedoc/classes/memory.html     |  34 +--
 docs/reference/api/typedoc/classes/module.html     |  10 +-
 docs/reference/api/typedoc/classes/ndarray.html    |  22 +-
 .../api/typedoc/classes/packedfunccell.html        |   6 +-
 docs/reference/api/typedoc/classes/rpcserver.html  |  14 +-
 .../api/typedoc/classes/runtimecontext.html        |  22 +-
 docs/reference/api/typedoc/classes/scalar.html     |   6 +-
 docs/reference/api/typedoc/classes/tvmarray.html   |  16 +-
 docs/reference/api/typedoc/classes/tvmobject.html  |  12 +-
 .../api/typedoc/classes/webgpucontext.html         |  12 +-
 docs/reference/api/typedoc/enums/argtypecode.html  |  30 +--
 .../api/typedoc/enums/aynccallbackcode.html        |   4 +-
 .../api/typedoc/enums/dldatatypecode.html          |   8 +-
 .../api/typedoc/enums/rpcserverstate.html          |  12 +-
 docs/reference/api/typedoc/enums/sizeof.html       |  18 +-
 docs/reference/api/typedoc/index.html              | 124 +++++-----
 .../api/typedoc/interfaces/disposable.html         |   2 +-
 .../api/typedoc/interfaces/functioninfo.html       |   6 +-
 .../api/typedoc/interfaces/libraryprovider.html    |   4 +-
 docs/searchindex.js                                |   2 +-
 .../vta/tutorials/autotvm/sg_execution_times.html  |   4 +-
 .../tutorials/frontend/deploy_classification.html  |   2 +-
 .../vta/tutorials/frontend/deploy_detection.html   |   3 +-
 .../vta/tutorials/frontend/sg_execution_times.html |   6 +-
 .../vta/tutorials/optimize/sg_execution_times.html |   6 +-
 docs/topic/vta/tutorials/sg_execution_times.html   |   6 +-
 docs/tutorial/auto_scheduler_matmul_x86.html       |   8 +-
 docs/tutorial/autotvm_matmul_x86.html              |  20 +-
 docs/tutorial/autotvm_relay_x86.html               | 271 +++++++++++----------
 docs/tutorial/cross_compilation_and_rpc.html       |   2 +-
 docs/tutorial/intro_topi.html                      |   2 +-
 docs/tutorial/sg_execution_times.html              |  22 +-
 docs/tutorial/tensor_expr_get_started.html         |  44 ++--
 125 files changed, 854 insertions(+), 867 deletions(-)

diff --git a/docs/_sources/how_to/compile_models/from_darknet.rst.txt b/docs/_sources/how_to/compile_models/from_darknet.rst.txt
index 297c04ed73..6b5bdbba21 100644
--- a/docs/_sources/how_to/compile_models/from_darknet.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_darknet.rst.txt
@@ -318,7 +318,7 @@ The process is no different from other examples.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  31.073 seconds)
+   **Total running time of the script:** ( 1 minutes  33.484 seconds)
 
 
 .. _sphx_glr_download_how_to_compile_models_from_darknet.py:
diff --git a/docs/_sources/how_to/compile_models/from_mxnet.rst.txt b/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
index 6d74765bdb..08aaea72b9 100644
--- a/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
@@ -116,7 +116,7 @@ In this section, we download a pretrained imagenet model and classify an image.
 
  .. code-block:: none
 
-    Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zipce8aa542-1a2b-4473-9be6-28cc3b10b648 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
+    Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip513d3b55-72e3-4612-800d-59d41fd413e9 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
     x (1, 3, 224, 224)
 
 
diff --git a/docs/_sources/how_to/compile_models/from_oneflow.rst.txt b/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
index 3eb38bdef2..5806f17ccc 100644
--- a/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
@@ -121,7 +121,7 @@ Load a pretrained OneFlow model and save model
  .. code-block:: none
 
     Downloading: "https://oneflow-public.oss-cn-beijing.aliyuncs.com/model_zoo/flowvision/classification/ResNet/resnet18.zip" to /workspace/.oneflow/flowvision_cache/resnet18.zip
-
      0%|          | 0.00/41.5M [00:00<?, ?B/s]
     21%|##1       | 8.89M/41.5M [00:00<00:00, 93.2MB/s]
     55%|#####4    | 22.8M/41.5M [00:00<00:00, 124MB/s] 
     83%|########3 | 34.6M/41.5M [00:00<00:00, 50.8MB/s]
    100%|##########| 41.5M/41.5M [00:00<00:00, 59.0MB/s]
+
      0%|          | 0.00/41.5M [00:00<?, ?B/s]
     18%|#8        | 7.56M/41.5M [00:00<00:00, 79.3MB/s]
     36%|###6      | 15.1M/41.5M [00:00<00:00, 38.5MB/s]
     48%|####7     | 19.8M/41.5M [00:00<00:00, 39.6MB/s]
     58%|#####8    | 24.2M/41.5M [00:00<00:00, 27.0MB/s]
     77%|#######7  | 32.0M/41.5M [00:00<00:00, 33.5MB/s]
     92%|#########2| 38.3M/41.5M [00:01<00:00, 34.5MB/s]
    100%|##########| 41.5M/41.5M [00:01<00:00, 35.0MB/s]
 
 
 
diff --git a/docs/_sources/how_to/compile_models/from_paddle.rst.txt b/docs/_sources/how_to/compile_models/from_paddle.rst.txt
index d621d32092..0c150dc3ee 100644
--- a/docs/_sources/how_to/compile_models/from_paddle.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_paddle.rst.txt
@@ -209,7 +209,7 @@ Look up prediction top 1 index in 1000 class synset.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  1.199 seconds)
+   **Total running time of the script:** ( 1 minutes  2.566 seconds)
 
 
 .. _sphx_glr_download_how_to_compile_models_from_paddle.py:
diff --git a/docs/_sources/how_to/compile_models/from_pytorch.rst.txt b/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
index bdf773b98c..10fcbdb045 100644
--- a/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
@@ -101,7 +101,7 @@ Load a pretrained PyTorch model
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet18_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet18_Weights.DEFAULT` to get the most up-to-date weights.
       warnings.warn(msg)
     Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
-
      0%|          | 0.00/44.7M [00:00<?, ?B/s]
     14%|#4        | 6.30M/44.7M [00:00<00:00, 47.0MB/s]
     24%|##4       | 10.8M/44.7M [00:00<00:00, 42.9MB/s]
     48%|####8     | 21.5M/44.7M [00:00<00:00, 70.5MB/s]
     64%|######4   | 28.6M/44.7M [00:00<00:00, 71.9MB/s]
     80%|########  | 35.8M/44.7M [00:00<00:00, 45.6MB/s]
     92%|#########2| 41.2M/44.7M [00:00<00:00, 42.7MB/s]
    100%|##########| 44.7M/44.7M [00:00<00:00, 51.5MB/s]
+
      0%|          | 0.00/44.7M [00:00<?, ?B/s]
     18%|#7        | 7.99M/44.7M [00:00<00:00, 58.3MB/s]
     36%|###5      | 16.0M/44.7M [00:00<00:00, 59.9MB/s]
     54%|#####3    | 24.0M/44.7M [00:00<00:00, 60.5MB/s]
     72%|#######1  | 32.0M/44.7M [00:00<00:00, 49.3MB/s]
     90%|########9 | 40.0M/44.7M [00:00<00:00, 51.5MB/s]
    100%|##########| 44.7M/44.7M [00:00<00:00, 56.3MB/s]
 
 
 
diff --git a/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt b/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
index 625ec3add5..2ed071de9f 100644
--- a/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
@@ -430,7 +430,7 @@ Run the corresponding model on tensorflow
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  34.776 seconds)
+   **Total running time of the script:** ( 1 minutes  32.171 seconds)
 
 
 .. _sphx_glr_download_how_to_compile_models_from_tensorflow.py:
diff --git a/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt b/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
index 543e094ed4..670549847d 100644
--- a/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
@@ -5,26 +5,26 @@
 
 Computation times
 =================
-**07:06.216** total execution time for **how_to_compile_models** files:
+**07:06.702** total execution time for **how_to_compile_models** files:
 
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_tensorflow.py` (``from_tensorflow.py``) | 01:34.776 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_darknet.py` (``from_darknet.py``)       | 01:33.484 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_darknet.py` (``from_darknet.py``)       | 01:31.073 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_tensorflow.py` (``from_tensorflow.py``) | 01:32.171 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_paddle.py` (``from_paddle.py``)         | 01:01.199 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_paddle.py` (``from_paddle.py``)         | 01:02.566 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_oneflow.py` (``from_oneflow.py``)       | 00:40.053 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_oneflow.py` (``from_oneflow.py``)       | 00:41.185 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_coreml.py` (``from_coreml.py``)         | 00:36.932 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_coreml.py` (``from_coreml.py``)         | 00:37.515 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_mxnet.py` (``from_mxnet.py``)           | 00:33.323 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_mxnet.py` (``from_mxnet.py``)           | 00:32.795 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_pytorch.py` (``from_pytorch.py``)       | 00:27.939 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_pytorch.py` (``from_pytorch.py``)       | 00:28.034 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_keras.py` (``from_keras.py``)           | 00:26.791 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_keras.py` (``from_keras.py``)           | 00:24.826 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_tflite.py` (``from_tflite.py``)         | 00:11.202 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_tflite.py` (``from_tflite.py``)         | 00:11.297 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_onnx.py` (``from_onnx.py``)             | 00:02.926 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_onnx.py` (``from_onnx.py``)             | 00:02.828 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
index 738dd7d820..787dc125bf 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
@@ -673,7 +673,7 @@ well as provides information about the model's performance
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-     4225.1644    4223.8821    4235.2511    4221.9718      3.6720                  
+     4227.0592    4221.3038    4278.4657    4220.0554     17.1578                  
 
 
 
@@ -681,7 +681,7 @@ well as provides information about the model's performance
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  20.344 seconds)
+   **Total running time of the script:** ( 1 minutes  20.171 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_model_on_adreno.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt
index 7431c773bf..36bf52e2df 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno_tvmc.rst.txt
@@ -127,7 +127,7 @@ Make a Keras Resnet50 Model
  .. code-block:: none
 
     Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels.h5
-
         8192/102967424 [..............................] - ETA: 0s
      6635520/102967424 [>.............................] - ETA: 1s
      8380416/102967424 [=>............................] - ETA: 2s
     15024128/102967424 [===>..........................] - ETA: 1s
     16769024/102967424 [===>..........................] - ETA: 2s
     24764416/102967424 [======>.......................] - ETA: 1s
     25157632/102967424 [======>.......................] - ETA: 1s
     33546240/102967424 [========>.....................] - ETA: 1s
     40189952/102967424 [==========>...................] - ETA: 1s
     41934848/102967424 [===========>..................] - ETA: 1s
     47079424/102967424 [============>.................] - ETA: 1s
     50323456/102967424 [=============>................] - ETA: 1s
     56967168/102967424 [===============>..............] - ETA: 0s
     58712064/102967424 [================>.............] - ETA: 0s
     65355776/102967424 [==================>...........] - ETA: 0s
     67100672/102967424 [==================>...........] - ETA: 0s
     69296128/102967424 [===================>..........] - ETA: 0s
     75489280/102967424 [====================>.........] - ETA: 0s
     82124800/102967424 [======================>.......] - ETA: 0s
     83877888/102967424 [=======================>......] - ETA: 0s
     90521600/102967424 [=========================>....] - ETA: 0s
     92266496/102967424 [=========================>....] - ETA: 0s
    100646912/102967424 [============================>.] - ETA: 0s
    102850560/102967424 [============================>.] - ETA: 0s
    102967424/102967424 [==============================] - 2s 0us/step
+
         8192/102967424 [..............................] - ETA: 0s
      5079040/102967424 [>.............................] - ETA: 0s
     10575872/102967424 [==>...........................] - ETA: 0s
     16957440/102967424 [===>..........................] - ETA: 0s
     24002560/102967424 [=====>........................] - ETA: 0s
     29106176/102967424 [=======>......................] - ETA: 0s
     33546240/102967424 [========>.....................] - ETA: 0s
     41213952/102967424 [===========>..................] - ETA: 0s
     44130304/102967424 [===========>..................] - ETA: 0s
     50323456/102967424 [=============>................] - ETA: 0s
     52518912/102967424 [==============>...............] - ETA: 0s
     56967168/102967424 [===============>..............] - ETA: 0s
     66043904/102967424 [==================>...........] - ETA: 0s
     69296128/102967424 [===================>..........] - ETA: 0s
     75489280/102967424 [====================>.........] - ETA: 0s
     88023040/102967424 [========================>.....] - ETA: 0s
     97812480/102967424 [===========================>..] - ETA: 0s
    102817792/102967424 [============================>.] - ETA: 0s
    102967424/102967424 [==============================] - 1s 0us/step
 
 
 
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
index 266d347d63..6d24244f21 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
@@ -437,7 +437,7 @@ Execute on TVM
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      15.8892      15.6744      16.7882      15.3324       0.4693                  
+      15.8511      15.7483      16.8541      15.3328       0.4226                  
 
 
 
diff --git a/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt b/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
index 0469244b2c..c51b272c9d 100644
--- a/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
@@ -130,7 +130,7 @@ Load pre-trained maskrcnn from torchvision and do tracing
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=MaskRCNN_ResNet50_FPN_Weights.COCO_V1`. You can also use `weights=MaskRCNN_ResNet50_FPN_Weights.DEFAULT` to get the most up-to-date weights.
       warnings.warn(msg)
     Downloading: "https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth" to /workspace/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
-
      0%|          | 0.00/170M [00:00<?, ?B/s]
      4%|3         | 6.30M/170M [00:00<00:02, 59.2MB/s]
      7%|7         | 12.0M/170M [00:00<00:03, 45.0MB/s]
     10%|9         | 16.4M/170M [00:00<00:05, 29.4MB/s]
     14%|#4        | 24.0M/170M [00:00<00:04, 35.5MB/s]
     18%|#7        | 30.3M/170M [00:00<00:03, 40.7MB/s]
     24%|##3       | 40.0M/170M [00:01<00:03, 43.8MB/s]
     28%|##8       | 48.0M/170M [00:01<00:02, 45.7MB/s]
     34%|###3      | 57.4M/170M [00:01<00:02, 56.9MB/s]
     38%|###7      | 64.0M/170M [00:01<00:01, 57.0MB/s]
     42%|####2     | 72.0M/170M [00:01<00:01, 62.7MB/s]
     46%|####6     | 78.4M/170M [00:01<00:02, 44.2MB/s]
     49%|####9     | 83.6M/170M [00:01<00:01, 45.3MB/s]
     52%|#####2    | 88.6M/170M [00:02<00:02, 35.4MB/s]
     56%|#####5    | 94.3M/170M [00:02<00:02, 37.5MB/s]
     60%|######    | 102M/170M [00:02<00:01, 44.3MB/s] 
     66%|######5   | 112M/170M [00:02<00:01, 46.5MB/s]
     70%|######9   | 118M/170M [00:02<00:01, 43.6MB/s]
     72%|#######2  | 123M/170M [00:02<00:01, 42.4MB/s]
     75%|#######4  | 127M/170M [00:03<00:01, 41.1MB/s]
     77%|#######7  | 131M/170M [00:03<00:01, 39.0MB/s]
     80%|########  | 136M/170M [00:03<00:00, 36.2MB/s]
     85%|########4 | 144M/170M [00:03<00:00, 42.1MB/s]
     89%|########9 | 152M/170M [00:03<00:00, 47.1MB/s]
     94%|#########4| 160M/170M [00:03<00:00, 50.8MB/s]
     99%|#########8| 168M/170M [00:03<00:00, 51.8MB/s]
    100%|##########| 170M/170M [00:03<00:00, 45.4MB/s]
+
      0%|          | 0.00/170M [00:00<?, ?B/s]
      4%|3         | 6.30M/170M [00:00<00:03, 46.3MB/s]
      6%|6         | 10.7M/170M [00:00<00:05, 30.5MB/s]
      9%|9         | 16.0M/170M [00:00<00:04, 35.4MB/s]
     14%|#4        | 24.0M/170M [00:00<00:03, 38.4MB/s]
     18%|#7        | 30.3M/170M [00:00<00:04, 35.9MB/s]
     20%|#9        | 33.8M/170M [00:01<00:04, 30.8MB/s]
     24%|##3       | 40.0M/170M [00:01<00:03, 37.3MB/s]
     27%|##7       | 46.3M/170M [00:01<00:03, 40.8MB/s]
     30%|##9       | 50.4M/170M [00:01<00:03, 35.2MB/s]
     33%|###2      | 56.0M/170M [00:01<00:03, 38.7MB/s]
     37%|###6      | 62.3M/170M [00:01<00:02, 44.2MB/s]
     39%|###9      | 66.8M/170M [00:01<00:02, 39.2MB/s]
     42%|####2     | 72.0M/170M [00:02<00:02, 38.5MB/s]
     46%|####6     | 78.3M/170M [00:02<00:02, 41.7MB/s]
     49%|####8     | 82.4M/170M [00:02<00:02, 40.8MB/s]
     52%|#####1    | 88.0M/170M [00:02<00:02, 36.9MB/s]
     56%|#####5    | 94.3M/170M [00:02<00:01, 41.6MB/s]
     58%|#####7    | 98.4M/170M [00:02<00:01, 41.4MB/s]
     60%|######    | 103M/170M [00:02<00:01, 40.1MB/s] 
     63%|######2   | 106M/170M [00:03<00:02, 30.7MB/s]
     67%|######7   | 114M/170M [00:03<00:01, 41.9MB/s]
     71%|#######   | 120M/170M [00:03<00:01, 39.3MB/s]
     75%|#######4  | 127M/170M [00:03<00:00, 46.6MB/s]
     78%|#######7  | 132M/170M [00:03<00:01, 38.0MB/s]
     80%|########  | 136M/170M [00:03<00:01, 31.1MB/s]
     85%|########4 | 144M/170M [00:04<00:00, 34.6MB/s]
     89%|########9 | 152M/170M [00:04<00:00, 37.1MB/s]
     93%|#########3| 158M/170M [00:04<00:00, 41.7MB/s]
     96%|#########5| 163M/170M [00:04<00:00, 40.2MB/s]
     98%|#########8| 167M/170M [00:04<00:00, 41.9MB/s]
    100%|##########| 170M/170M [00:04<00:00, 38.6MB/s]
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/nn/functional.py:3912: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
       (torch.floor((input.size(i + 2).float() * torch.tensor(scale_factors[i], dtype=torch.float32)).float()))
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/ops/boxes.py:157: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
@@ -295,7 +295,7 @@ Get boxes with score larger than 0.9
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 3 minutes  38.808 seconds)
+   **Total running time of the script:** ( 3 minutes  39.616 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_object_detection_pytorch.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt b/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
index ff308550c7..0a3960e6fe 100644
--- a/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
@@ -227,7 +227,7 @@ training. Other models require a full post training calibration.
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=MobileNet_V2_Weights.IMAGENET1K_V1`. You can also use `weights=MobileNet_V2_Weights.DEFAULT` to get the most up-to-date weights.
       warnings.warn(msg)
     Downloading: "https://download.pytorch.org/models/mobilenet_v2-b0353104.pth" to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth
-
      0%|          | 0.00/13.6M [00:00<?, ?B/s]
     59%|#####8    | 7.99M/13.6M [00:00<00:00, 55.1MB/s]
    100%|##########| 13.6M/13.6M [00:00<00:00, 65.7MB/s]
+
      0%|          | 0.00/13.6M [00:00<?, ?B/s]
     47%|####6     | 6.30M/13.6M [00:00<00:00, 63.7MB/s]
     91%|#########1| 12.4M/13.6M [00:00<00:00, 26.2MB/s]
    100%|##########| 13.6M/13.6M [00:00<00:00, 31.2MB/s]
 
 
 
@@ -409,7 +409,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      88.5574      88.4660      92.2653      88.2301       0.4377                  
+      89.2362      89.1545      92.8293      88.7173       0.4802                  
 
 
 
@@ -457,7 +457,7 @@ TODO
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  24.738 seconds)
+   **Total running time of the script:** ( 1 minutes  27.215 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_prequantized.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt b/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
index 9e5fb3ba2a..f85754e8ef 100644
--- a/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
@@ -423,7 +423,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      109.5889     110.0667     111.0692     106.5090      1.1982                  
+      110.5219     110.0997     145.9361     109.4201      3.5949                  
 
 
 
diff --git a/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt b/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
index 925d259fe3..cda9014477 100644
--- a/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
@@ -257,7 +257,7 @@ We create a Relay VM to build and execute the model.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 2 minutes  11.937 seconds)
+   **Total running time of the script:** ( 2 minutes  4.781 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_quantized.py:
diff --git a/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt b/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
index c7f5b31b80..880f5f828a 100644
--- a/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
@@ -5,26 +5,26 @@
 
 Computation times
 =================
-**12:03.581** total execution time for **how_to_deploy_models** files:
+**12:00.164** total execution time for **how_to_deploy_models** files:
 
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``) | 03:38.808 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``) | 03:39.616 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_quantized.py` (``deploy_quantized.py``)                               | 02:11.937 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_quantized.py` (``deploy_quantized.py``)                               | 02:04.781 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized.py` (``deploy_prequantized.py``)                         | 01:24.738 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized.py` (``deploy_prequantized.py``)                         | 01:27.215 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno.py` (``deploy_model_on_adreno.py``)                   | 01:20.344 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno.py` (``deploy_model_on_adreno.py``)                   | 01:20.171 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)           | 00:51.753 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)           | 00:52.262 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_android.py` (``deploy_model_on_android.py``)                 | 00:49.056 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_android.py` (``deploy_model_on_android.py``)                 | 00:50.206 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno_tvmc.py` (``deploy_model_on_adreno_tvmc.py``)         | 00:47.284 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno_tvmc.py` (``deploy_model_on_adreno_tvmc.py``)         | 00:44.473 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_nano.py` (``deploy_model_on_nano.py``)                       | 00:29.860 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_nano.py` (``deploy_model_on_nano.py``)                       | 00:31.003 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)                       | 00:29.793 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)                       | 00:30.431 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_deploy_models_deploy_sparse.py` (``deploy_sparse.py``)                                     | 00:00.007 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt b/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
index 2c2c7080a4..b271f3e740 100644
--- a/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
@@ -463,7 +463,7 @@ First let us define two helper functions to get the mobilenet model and a cat im
 
  .. code-block:: none
 
-    Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zipc8b6e29a-184d-47ab-948b-86074a1767b1 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
+    Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zipfa0ccf25-9a17-4416-bf1e-95c592342c62 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
 
 
 
diff --git a/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt b/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
index 2c306a6440..bafc020014 100644
--- a/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
@@ -5,14 +5,14 @@
 
 Computation times
 =================
-**00:56.482** total execution time for **how_to_extend_tvm** files:
+**00:57.589** total execution time for **how_to_extend_tvm** files:
 
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``) | 00:52.575 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``) | 00:53.696 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_use_pass_instrument.py` (``use_pass_instrument.py``)           | 00:02.741 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_use_pass_instrument.py` (``use_pass_instrument.py``)           | 00:02.737 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_use_pass_infra.py` (``use_pass_infra.py``)                     | 00:01.159 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_use_pass_infra.py` (``use_pass_infra.py``)                     | 00:01.150 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_extend_tvm_low_level_custom_pass.py` (``low_level_custom_pass.py``)       | 00:00.007 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt b/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
index f700380384..41cccc0060 100644
--- a/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
@@ -220,10 +220,10 @@ profile the execution time of each passes.
  .. code-block:: none
 
     Printing results of timing profile...
-    InferType: 24349us [24349us] (48.79%; 48.79%)
-    FoldScaleAxis: 25557us [8us] (51.21%; 51.21%)
-            FoldConstant: 25549us [1783us] (51.19%; 99.97%)
-                    InferType: 23766us [23766us] (47.62%; 93.02%)
+    InferType: 24207us [24207us] (48.38%; 48.38%)
+    FoldScaleAxis: 25825us [9us] (51.62%; 51.62%)
+            FoldConstant: 25816us [1846us] (51.60%; 99.97%)
+                    InferType: 23970us [23970us] (47.91%; 92.85%)
 
 
 
@@ -262,10 +262,10 @@ Refer to following sections and :py:func:`tvm.instrument.pass_instrument` for th
  .. code-block:: none
 
     Printing results of timing profile...
-    InferType: 23729us [23729us] (48.21%; 48.21%)
-    FoldScaleAxis: 25490us [7us] (51.79%; 51.79%)
-            FoldConstant: 25484us [1808us] (51.78%; 99.97%)
-                    InferType: 23676us [23676us] (48.10%; 92.91%)
+    InferType: 24087us [24087us] (48.17%; 48.17%)
+    FoldScaleAxis: 25913us [9us] (51.83%; 51.83%)
+            FoldConstant: 25904us [1905us] (51.81%; 99.96%)
+                    InferType: 23999us [23999us] (48.00%; 92.65%)
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt b/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
index 2e09935233..ede413d725 100644
--- a/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
@@ -331,7 +331,7 @@ latency of convolution.
 
  .. code-block:: none
 
-    Convolution: 50.913600 ms
+    Convolution: 53.561344 ms
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt b/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
index 02f6e7f7ba..29e1012d7c 100644
--- a/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
@@ -598,7 +598,7 @@ be able to run on our build server
 
  .. code-block:: none
 
-    conv2d with tensor core: 12.271203 ms
+    conv2d with tensor core: 12.260438 ms
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt b/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
index 7e530c4201..faf2c177ac 100644
--- a/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
@@ -134,8 +134,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
 
  .. code-block:: none
 
-    Numpy running time: 0.017501
-    Baseline: 3.293998
+    Numpy running time: 0.018971
+    Baseline: 3.400320
 
 
 
@@ -227,7 +227,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
 
  .. code-block:: none
 
-    Opt1: 0.290424
+    Opt1: 0.295508
 
 
 
@@ -318,7 +318,7 @@ In this tutorial, we chose to vectorize the inner loop row data since it is cach
 
  .. code-block:: none
 
-    Opt2: 0.265512
+    Opt2: 0.282423
 
 
 
@@ -406,7 +406,7 @@ the access pattern for A matrix is more cache friendly.
 
  .. code-block:: none
 
-    Opt3: 0.113756
+    Opt3: 0.119760
 
 
 
@@ -523,7 +523,7 @@ flattening.
 
  .. code-block:: none
 
-    Opt4: 0.106791
+    Opt4: 0.107420
 
 
 
@@ -635,7 +635,7 @@ write to C when all the block results are ready.
 
  .. code-block:: none
 
-    Opt5: 0.111192
+    Opt5: 0.111840
 
 
 
@@ -748,7 +748,7 @@ Furthermore, we can also utilize multi-core processors to do the thread-level pa
 
  .. code-block:: none
 
-    Opt6: 0.131794
+    Opt6: 0.133748
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt b/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
index 1691ab6509..763b3fead7 100644
--- a/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
@@ -5,12 +5,12 @@
 
 Computation times
 =================
-**00:33.366** total execution time for **how_to_optimize_operators** files:
+**00:34.510** total execution time for **how_to_optimize_operators** files:
 
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_gemm.py` (``opt_gemm.py``)                       | 00:30.072 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_gemm.py` (``opt_gemm.py``)                       | 00:31.044 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``) | 00:02.020 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``) | 00:02.068 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_cuda.py` (``opt_conv_cuda.py``)             | 00:01.273 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_cuda.py` (``opt_conv_cuda.py``)             | 00:01.398 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
index e325aec613..0f8ce240df 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
@@ -5,18 +5,18 @@
 
 Computation times
 =================
-**03:35.911** total execution time for **how_to_tune_with_autoscheduler** files:
+**03:35.555** total execution time for **how_to_tune_with_autoscheduler** files:
 
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_x86.py` (``tune_network_x86.py``)             | 01:28.334 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_x86.py` (``tune_network_x86.py``)             | 01:30.986 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_cuda.py` (``tune_network_cuda.py``)           | 01:17.141 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_cuda.py` (``tune_network_cuda.py``)           | 01:15.639 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``) | 00:17.648 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``) | 00:17.164 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_arm.py` (``tune_network_arm.py``)             | 00:16.607 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_arm.py` (``tune_network_arm.py``)             | 00:15.996 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_mali.py` (``tune_network_mali.py``)           | 00:16.077 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_mali.py` (``tune_network_mali.py``)           | 00:15.665 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_sparse_x86.py` (``tune_sparse_x86.py``)               | 00:00.104 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_sparse_x86.py` (``tune_sparse_x86.py``)               | 00:00.105 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
index 458b519b9e..9871a88939 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
@@ -767,7 +767,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 0.355 ms
+    Execution time of this operator: 0.341 ms
 
 
 
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
index fbd33d9ab9..5128e05f7c 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
@@ -647,7 +647,7 @@ so we can read the log file and load the best schedules.
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-       8.1275       8.1284       8.1326       8.1215       0.0046                  
+       8.0884       8.0900       8.0932       8.0820       0.0047                  
 
 
 
@@ -674,7 +674,7 @@ Other Tips
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  17.141 seconds)
+   **Total running time of the script:** ( 1 minutes  15.639 seconds)
 
 
 .. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_network_cuda.py:
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
index 3feea12469..a4de7c5776 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
@@ -666,7 +666,7 @@ so we can read the log file and load the best schedules.
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      766.5506     766.8004     767.7174     765.1341      1.0693                  
+      769.4255     766.6812     779.7061     761.8892      7.5281                  
 
 
 
@@ -693,7 +693,7 @@ Other Tips
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  28.334 seconds)
+   **Total running time of the script:** ( 1 minutes  30.986 seconds)
 
 
 .. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_network_x86.py:
diff --git a/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt b/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
index 368bd7fbd5..e1a89b0efd 100644
--- a/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
@@ -5,12 +5,12 @@
 
 Computation times
 =================
-**00:23.621** total execution time for **how_to_tune_with_autotvm** files:
+**00:23.496** total execution time for **how_to_tune_with_autotvm** files:
 
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)           | 00:23.582 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)           | 00:23.455 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``)               | 00:00.023 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``)               | 00:00.025 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_cuda.py` (``tune_relay_cuda.py``)             | 00:00.006 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt b/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
index f3d0170245..d598b3ca59 100644
--- a/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
@@ -326,7 +326,7 @@ and measure running time.
 
     Best config:
     ,None
-    Time cost of this operator: 0.037181
+    Time cost of this operator: 0.037022
 
 
 
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
index d1c73e311b..22b9bf0760 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
@@ -360,10 +360,10 @@ Timing the untuned program
     ########## Build without Autotuning ##########
     Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)  
     ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------  
-    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  305.4     98.737   (1, 2, 10, 10, 3)  2       1        [305.4]           
-    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.936     0.949    (1, 6, 10, 10)     1       1        [2.936]           
-    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.969     0.313    (1, 1, 10, 10, 3)  1       1        [0.969]           
-    Total_time                                    -                                             309.306   -        -                  -       -        -                 
+    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  303.7     98.751   (1, 2, 10, 10, 3)  2       1        [303.7]           
+    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.888     0.939    (1, 6, 10, 10)     1       1        [2.888]           
+    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.953     0.31     (1, 1, 10, 10, 3)  1       1        [0.953]           
+    Total_time                                    -                                             307.541   -        -                  -       -        -                 
 
 
 
@@ -428,10 +428,10 @@ Timing the tuned program
     ########## Build with Autotuning ##########
     Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)  
     ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------  
-    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  135.9     98.031   (1, 6, 10, 10, 1)  2       1        [135.9]           
-    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.772     1.278    (1, 6, 10, 10)     1       1        [1.772]           
-    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.958     0.691    (1, 1, 10, 10, 3)  1       1        [0.958]           
-    Total_time                                    -                                             138.629   -        -                  -       -        -                 
+    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  102.3     97.514   (1, 6, 10, 10, 1)  2       1        [102.3]           
+    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.77      1.687    (1, 6, 10, 10)     1       1        [1.77]            
+    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.839     0.799    (1, 3, 10, 10, 1)  1       1        [0.839]           
+    Total_time                                    -                                             104.909   -        -                  -       -        -                 
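    The per-operator rows above are profiler output. A minimal sketch of how a similar Node Name / Time(us) breakdown can be collected with the debug graph executor on a host target is shown below; ``lib`` and ``dev`` are assumed to come from an earlier ``relay.build`` step and are not part of this diff.
 
     .. code-block:: python
 
        # Minimal sketch (assumed names): profile a built Relay module per
        # operator with the debug graph executor on the host. Running it
        # prints a Node Name / Time(us) / Time(%) table like the one above.
        import tvm
        from tvm.contrib.debugger import debug_executor
 
        dev = tvm.cpu(0)
        m = debug_executor.create(lib.get_graph_json(), lib.get_lib(), dev)  # lib: relay.build() output (assumed)
        m.set_input(**lib.get_params())
        m.run()  # runs the graph and prints the per-node timing report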
 
 
 
@@ -439,7 +439,7 @@ Timing the tuned program
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  24.708 seconds)
+   **Total running time of the script:** ( 1 minutes  25.761 seconds)
 
 
 .. _sphx_glr_download_how_to_work_with_microtvm_micro_autotune.py:
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
index f380be43b0..4bb8ebd338 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
@@ -118,7 +118,7 @@ download a cat image and preprocess it to use as the model input.
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/ao/quantization/utils.py:310: UserWarning: must run observer before calling calculate_qparams. Returning default values.
       warnings.warn(
     Downloading: "https://download.pytorch.org/models/quantized/mobilenet_v2_qnnpack_37f702c5.pth" to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2_qnnpack_37f702c5.pth
-
      0%|          | 0.00/3.42M [00:00<?, ?B/s]
    100%|##########| 3.42M/3.42M [00:00<00:00, 54.6MB/s]
+
      0%|          | 0.00/3.42M [00:00<?, ?B/s]
     61%|######    | 2.09M/3.42M [00:00<00:00, 10.2MB/s]
    100%|##########| 3.42M/3.42M [00:00<00:00, 16.3MB/s]
     /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/_utils.py:314: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
       device=storage.device,
     /workspace/python/tvm/relay/frontend/pytorch_utils.py:47: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
@@ -326,7 +326,7 @@ Look up prediction top 1 index in 1000 class synset.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  29.227 seconds)
+   **Total running time of the script:** ( 1 minutes  27.750 seconds)
 
 
 .. _sphx_glr_download_how_to_work_with_microtvm_micro_pytorch.py:
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
index e776ae28bf..c7bcf01df4 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
@@ -217,7 +217,7 @@ take about **2 minutes** to download the Stanford Cars, while COCO 2017 validati
  .. code-block:: none
 
 
-    '/tmp/tmprmdg7ez0/images/random'
+    '/tmp/tmp6sy5e1ny/images/random'
 
 
 
@@ -317,8 +317,8 @@ objects to other stuff? We can display some examples from our datasets using ``m
 
  .. code-block:: none
 
-    /tmp/tmprmdg7ez0/images/target contains 8144 images
-    /tmp/tmprmdg7ez0/images/random contains 5000 images
+    /tmp/tmp6sy5e1ny/images/target contains 8144 images
+    /tmp/tmp6sy5e1ny/images/random contains 5000 images
 
 
 
@@ -493,13 +493,13 @@ the time on our validation set).
  .. code-block:: none
 
     Epoch 1/3
-    328/328 - 40s - loss: 0.2249 - accuracy: 0.9241 - val_loss: 0.1093 - val_accuracy: 0.9615 - 40s/epoch - 122ms/step
+    328/328 - 41s - loss: 0.2471 - accuracy: 0.9174 - val_loss: 0.1166 - val_accuracy: 0.9585 - 41s/epoch - 126ms/step
     Epoch 2/3
-    328/328 - 36s - loss: 0.1029 - accuracy: 0.9606 - val_loss: 0.1046 - val_accuracy: 0.9634 - 36s/epoch - 108ms/step
+    328/328 - 36s - loss: 0.1057 - accuracy: 0.9601 - val_loss: 0.1530 - val_accuracy: 0.9452 - 36s/epoch - 108ms/step
     Epoch 3/3
-    328/328 - 35s - loss: 0.0654 - accuracy: 0.9753 - val_loss: 0.1115 - val_accuracy: 0.9637 - 35s/epoch - 108ms/step
+    328/328 - 35s - loss: 0.0701 - accuracy: 0.9758 - val_loss: 0.1449 - val_accuracy: 0.9524 - 35s/epoch - 108ms/step
 
-    <keras.callbacks.History object at 0x7ff52a956220>
+    <keras.callbacks.History object at 0x7fb1c88a37f0>
 
 
 
@@ -860,7 +860,7 @@ Arduino tutorial for how to do that `on GitHub <https://github.com/guberti/tvm-a
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 4 minutes  40.205 seconds)
+   **Total running time of the script:** ( 4 minutes  43.349 seconds)
 
 
 .. _sphx_glr_download_how_to_work_with_microtvm_micro_train.py:
diff --git a/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
index b7abc45cfe..ef322fde8d 100644
--- a/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
@@ -5,20 +5,20 @@
 
 Computation times
 =================
-**08:03.311** total execution time for **how_to_work_with_microtvm** files:
+**08:06.900** total execution time for **how_to_work_with_microtvm** files:
 
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_train.py` (``micro_train.py``)           | 04:40.205 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_train.py` (``micro_train.py``)           | 04:43.349 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_pytorch.py` (``micro_pytorch.py``)       | 01:29.227 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_pytorch.py` (``micro_pytorch.py``)       | 01:27.750 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_autotune.py` (``micro_autotune.py``)     | 01:24.708 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_autotune.py` (``micro_autotune.py``)     | 01:25.761 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_aot.py` (``micro_aot.py``)               | 00:11.618 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_aot.py` (``micro_aot.py``)               | 00:11.944 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_tflite.py` (``micro_tflite.py``)         | 00:08.903 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_custom_ide.py` (``micro_custom_ide.py``) | 00:09.070 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_custom_ide.py` (``micro_custom_ide.py``) | 00:08.651 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_tflite.py` (``micro_tflite.py``)         | 00:09.027 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_work_with_microtvm_micro_ethosu.py` (``micro_ethosu.py``)         | 00:00.000 | 0.0 MB |
 +-----------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
index 1a7967a0b0..19eeccf181 100644
--- a/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
@@ -5,14 +5,14 @@
 
 Computation times
 =================
-**00:39.257** total execution time for **how_to_work_with_relay** files:
+**00:40.357** total execution time for **how_to_work_with_relay** files:
 
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_using_pipeline_executor.py` (``using_pipeline_executor.py``) | 00:34.288 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_using_pipeline_executor.py` (``using_pipeline_executor.py``) | 00:35.188 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_using_external_lib.py` (``using_external_lib.py``)           | 00:03.154 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_using_external_lib.py` (``using_external_lib.py``)           | 00:03.243 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_build_gcn.py` (``build_gcn.py``)                             | 00:01.809 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_build_gcn.py` (``build_gcn.py``)                             | 00:01.919 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_work_with_relay_using_relay_viz.py` (``using_relay_viz.py``)                 | 00:00.006 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt b/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
index 555dc83c3a..68d8f62876 100644
--- a/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
@@ -278,7 +278,7 @@ The following example customizes CUDA lowering rule for :code:`exp`.
  .. code-block:: none
 
 
-    <function my_cuda_math_rule at 0x7ff3b86458b0>
+    <function my_cuda_math_rule at 0x7fb04c15d670>
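    The object above is the custom lowering rule registered for ``exp`` on CUDA. A minimal sketch of such a rule and its registration, assuming TVM's ``register_intrin_lowering`` hook, follows.
 
     .. code-block:: python
 
        # Minimal sketch (assumed to mirror the tutorial's approach): lower
        # tir.exp to the extern CUDA function expf/exp depending on dtype.
        import tvm
 
        def my_cuda_math_rule(op):
            assert isinstance(op, tvm.tir.Call)
            name = op.op.name              # e.g. "tir.exp"
            dispatch_name = name[4:]
            if op.dtype == "float32":
                return tvm.tir.call_pure_extern("float32", "%sf" % dispatch_name, op.args[0])
            return tvm.tir.call_pure_extern(op.dtype, dispatch_name, op.args[0])
 
        tvm.target.register_intrin_lowering("tir.exp", target="cuda", f=my_cuda_math_rule, override=True)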
 
 
 
diff --git a/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
index 851e8dac33..b121f5dc23 100644
--- a/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
@@ -5,22 +5,22 @@
 
 Computation times
 =================
-**00:06.264** total execution time for **how_to_work_with_schedules** files:
+**00:06.192** total execution time for **how_to_work_with_schedules** files:
 
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_intrin_math.py` (``intrin_math.py``)                 | 00:03.326 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_intrin_math.py` (``intrin_math.py``)                 | 00:03.164 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tensorize.py` (``tensorize.py``)                     | 00:01.195 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tensorize.py` (``tensorize.py``)                     | 00:01.251 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_reduction.py` (``reduction.py``)                     | 00:00.754 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_reduction.py` (``reduction.py``)                     | 00:00.761 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_scan.py` (``scan.py``)                               | 00:00.737 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_scan.py` (``scan.py``)                               | 00:00.758 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_extern_op.py` (``extern_op.py``)                     | 00:00.113 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_extern_op.py` (``extern_op.py``)                     | 00:00.117 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tedd.py` (``tedd.py``)                               | 00:00.059 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tedd.py` (``tedd.py``)                               | 00:00.060 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_schedule_primitives.py` (``schedule_primitives.py``) | 00:00.053 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_schedule_primitives.py` (``schedule_primitives.py``) | 00:00.054 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tuple_inputs.py` (``tuple_inputs.py``)               | 00:00.027 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tuple_inputs.py` (``tuple_inputs.py``)               | 00:00.028 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
index f0a53343ed..df42ed40a6 100644
--- a/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:35.641** total execution time for **topic_vta_tutorials_autotvm** files:
+**00:34.822** total execution time for **topic_vta_tutorials_autotvm** files:
 
 +---------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``) | 00:35.633 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``) | 00:34.814 | 0.0 MB |
 +---------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_alu_vta.py` (``tune_alu_vta.py``)     | 00:00.008 | 0.0 MB |
 +---------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
index c8e9967067..193e1ffe24 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
@@ -293,7 +293,7 @@ The compilation steps are:
       warnings.warn(
     /workspace/vta/tutorials/frontend/deploy_classification.py:212: DeprecationWarning: legacy graph executor behavior of producing json / lib / params will be removed in the next release. Please see documents of tvm.contrib.graph_executor.GraphModule for the  new recommended usage.
       graph, lib, params = relay.build(
-    resnet18_v1 inference graph built in 37.80s!
+    resnet18_v1 inference graph built in 37.63s!
 
 
 
diff --git a/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
index 1d7fed8157..3f5e4e8606 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
@@ -337,7 +337,7 @@ The compilation steps are:
 
     /workspace/python/tvm/relay/build_module.py:345: DeprecationWarning: Please use input parameter mod (tvm.IRModule) instead of deprecated parameter mod (tvm.relay.function.Function)
       warnings.warn(
-    yolov3-tiny inference graph built in 26.79s!
+    yolov3-tiny inference graph built in 25.56s!
 
 
 
@@ -445,11 +445,6 @@ Download test image
 
 
 
-.. rst-class:: sphx-glr-timing
-
-   **Total running time of the script:** ( 1 minutes  0.628 seconds)
-
-
 .. _sphx_glr_download_topic_vta_tutorials_frontend_deploy_detection.py:
 
 .. only:: html
diff --git a/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
index fff48e5dfd..7702cc0bf5 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**01:59.550** total execution time for **topic_vta_tutorials_frontend** files:
+**01:57.864** total execution time for **topic_vta_tutorials_frontend** files:
 
 +------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_detection.py` (``deploy_detection.py``)           | 01:00.628 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_detection.py` (``deploy_detection.py``)           | 00:59.107 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``) | 00:58.922 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``) | 00:58.757 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
index 9e48bc3337..fa098c2a7a 100644
--- a/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:03.376** total execution time for **topic_vta_tutorials_optimize** files:
+**00:03.368** total execution time for **topic_vta_tutorials_optimize** files:
 
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)         | 00:02.831 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)         | 00:02.827 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``) | 00:00.545 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``) | 00:00.542 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
index aafafaceb1..fc375ae5e1 100644
--- a/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:00.931** total execution time for **topic_vta_tutorials** files:
+**00:00.935** total execution time for **topic_vta_tutorials** files:
 
 +---------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``) | 00:00.478 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``) | 00:00.480 | 0.0 MB |
 +---------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_vta_get_started.py` (``vta_get_started.py``) | 00:00.454 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_vta_get_started.py` (``vta_get_started.py``) | 00:00.455 | 0.0 MB |
 +---------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt b/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
index 76c3ef0dbf..d167099459 100644
--- a/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
+++ b/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
@@ -207,13 +207,6 @@ trials, we can load the best schedule from the log file and apply it.
 
 
 
-.. rst-class:: sphx-glr-script-out
-
- .. code-block:: none
-
-    *E
-
-
 
 
 
@@ -325,7 +318,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 97.216 ms
+    Execution time of this operator: 95.529 ms
 
 
 
@@ -423,7 +416,7 @@ resume the status and do more 5 trials.
  .. code-block:: none
 
     Resume search:
-    *E
+
 
 
 
@@ -441,7 +434,7 @@ operations.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  50.538 seconds)
+   **Total running time of the script:** ( 1 minutes  25.982 seconds)
 
 
 .. _sphx_glr_download_tutorial_auto_scheduler_matmul_x86.py:
diff --git a/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt b/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
index d9b26e860f..1ca56cd0be 100644
--- a/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
+++ b/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
@@ -454,16 +454,16 @@ reduce variance, we take 5 measurements and average them.
     waiting for device...
     device available
     Get devices for measurement successfully!
-    No: 1   GFLOPS: 11.38/11.38     result: MeasureResult(costs=(0.0235916778,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6312711238861084, timestamp=1687174328.710246)        [('tile_y', [-1, 32]), ('tile_x', [-1, 512])],None,95
-    No: 2   GFLOPS: 9.36/11.38      result: MeasureResult(costs=(0.0286716888,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.725764274597168, timestamp=1687174329.4380782)        [('tile_y', [-1, 512]), ('tile_x', [-1, 32])],None,59
-    No: 3   GFLOPS: 11.44/11.44     result: MeasureResult(costs=(0.0234637456,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6387925148010254, timestamp=1687174330.0867321)       [('tile_y', [-1, 4]), ('tile_x', [-1, 256])],None,82
-    No: 4   GFLOPS: 10.26/11.44     result: MeasureResult(costs=(0.026157574,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.692347526550293, timestamp=1687174330.770482)  [('tile_y', [-1, 2]), ('tile_x', [-1, 8])],None,31
-    No: 5   GFLOPS: 0.52/11.44      result: MeasureResult(costs=(0.5141622518,), error_no=MeasureErrorNo.NO_ERROR, all_cost=8.494819641113281, timestamp=1687174339.400494) [('tile_y', [-1, 32]), ('tile_x', [-1, 1])],None,5
-    No: 6   GFLOPS: 3.78/11.44      result: MeasureResult(costs=(0.0709655928,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.4131886959075928, timestamp=1687174340.8004124)       [('tile_y', [-1, 32]), ('tile_x', [-1, 8])],None,35
-    No: 7   GFLOPS: 10.80/11.44     result: MeasureResult(costs=(0.0248468302,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6811978816986084, timestamp=1687174341.471038)        [('tile_y', [-1, 256]), ('tile_x', [-1, 256])],None,88
-    No: 8   GFLOPS: 10.58/11.44     result: MeasureResult(costs=(0.02536371,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6931350231170654, timestamp=1687174342.1444263) [('tile_y', [-1, 8]), ('tile_x', [-1, 16])],None,43
-    No: 9   GFLOPS: 3.86/11.44      result: MeasureResult(costs=(0.0696048634,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.347289800643921, timestamp=1687174343.600975) [('tile_y', [-1, 8]), ('tile_x', [-1, 2])],None,13
-    No: 10  GFLOPS: 1.02/11.44      result: MeasureResult(costs=(0.2639063404,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.449601411819458, timestamp=1687174348.0901854)        [('tile_y', [-1, 64]), ('tile_x', [-1, 2])],None,16
+    No: 1   GFLOPS: 10.47/10.47     result: MeasureResult(costs=(0.025650282199999996,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6601579189300537, timestamp=1687186325.3884518)       [('tile_y', [-1, 2]), ('tile_x', [-1, 64])],None,61
+    No: 2   GFLOPS: 15.91/15.91     result: MeasureResult(costs=(0.0168689998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.5638737678527832, timestamp=1687186325.9174454)       [('tile_y', [-1, 16]), ('tile_x', [-1, 64])],None,64
+    No: 3   GFLOPS: 12.53/15.91     result: MeasureResult(costs=(0.021419787,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7103521823883057, timestamp=1687186326.529788) [('tile_y', [-1, 64]), ('tile_x', [-1, 128])],None,76
+    No: 4   GFLOPS: 11.85/15.91     result: MeasureResult(costs=(0.0226513548,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6428844928741455, timestamp=1687186327.1516368)       [('tile_y', [-1, 16]), ('tile_x', [-1, 16])],None,44
+    No: 5   GFLOPS: 12.52/15.91     result: MeasureResult(costs=(0.0214371522,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6466293334960938, timestamp=1687186327.909414)        [('tile_y', [-1, 128]), ('tile_x', [-1, 128])],None,77
+    No: 6   GFLOPS: 0.50/15.91      result: MeasureResult(costs=(0.5324666002,), error_no=MeasureErrorNo.NO_ERROR, all_cost=8.800585269927979, timestamp=1687186336.687465) [('tile_y', [-1, 64]), ('tile_x', [-1, 1])],None,6
+    No: 7   GFLOPS: 11.58/15.91     result: MeasureResult(costs=(0.023183954399999998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6200051307678223, timestamp=1687186337.321072)        [('tile_y', [-1, 32]), ('tile_x', [-1, 512])],None,95
+    No: 8   GFLOPS: 8.51/15.91      result: MeasureResult(costs=(0.031551859599999996,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7654030323028564, timestamp=1687186338.0872898)       [('tile_y', [-1, 512]), ('tile_x', [-1, 256])],None,89
+    No: 9   GFLOPS: 11.08/15.91     result: MeasureResult(costs=(0.024220844600000003,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6247663497924805, timestamp=1687186338.8868592)       [('tile_y', [-1, 8]), ('tile_x', [-1, 512])],None,93
+    No: 10  GFLOPS: 11.82/15.91     result: MeasureResult(costs=(0.022712121,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6573696136474609, timestamp=1687186339.5125413)        [('tile_y', [-1, 64]), ('tile_x', [-1, 32])],None,56
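 
    Each record above pairs a measured throughput with the knob assignment that produced it (for example ``[('tile_y', [-1, 2]), ('tile_x', [-1, 64])]``). A minimal sketch of the kind of template and tuning loop that emits such records follows; the template name, matrix sizes, and log file name are assumptions rather than values taken from this run.
 
     .. code-block:: python
 
        # Minimal sketch (assumed names): a matmul template with tile_y/tile_x
        # knobs, tuned for 10 trials with results logged to a file.
        import tvm
        from tvm import te, autotvm
 
        @autotvm.template("tutorial/matmul")
        def matmul(N, L, M, dtype):
            A = te.placeholder((N, L), name="A", dtype=dtype)
            B = te.placeholder((L, M), name="B", dtype=dtype)
            k = te.reduce_axis((0, L), name="k")
            C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
            s = te.create_schedule(C.op)
            y, x = s[C].op.axis
            cfg = autotvm.get_config()
            cfg.define_split("tile_y", y, num_outputs=2)   # knobs reported in the log above
            cfg.define_split("tile_x", x, num_outputs=2)
            yo, yi = cfg["tile_y"].apply(s, C, y)
            xo, xi = cfg["tile_x"].apply(s, C, x)
            s[C].reorder(yo, xo, k, yi, xi)
            return s, [A, B, C]
 
        task = autotvm.task.create("tutorial/matmul", args=(1024, 1024, 1024, "float32"), target="llvm")
        tuner = autotvm.tuner.RandomTuner(task)
        tuner.tune(
            n_trial=10,
            measure_option=autotvm.measure_option(builder="local", runner=autotvm.LocalRunner(number=5)),
            callbacks=[autotvm.callback.log_to_file("matmul.log")],
        )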
 
 
 
diff --git a/docs/_sources/tutorial/autotvm_relay_x86.rst.txt b/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
index 924eacfb85..7fde36d46c 100644
--- a/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
+++ b/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
@@ -311,7 +311,7 @@ standard deviation.
 
  .. code-block:: none
 
-    {'mean': 487.7951962596853, 'median': 488.0434418999357, 'std': 1.7243702626508126}
+    {'mean': 493.4093647796544, 'median': 492.87042814830784, 'std': 1.429450833251269}
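 
    The dictionary above reports timings in milliseconds over repeated runs of the unoptimized module. A minimal sketch of gathering such statistics with ``time_evaluator`` is shown below; ``module`` and ``dev`` are assumed to be the graph module and device created earlier in the tutorial.
 
     .. code-block:: python
 
        # Minimal sketch (assumed names): time the "run" function several
        # times and summarize the results in milliseconds.
        import numpy as np
 
        timing_number = 10
        timing_repeat = 10
        timer = module.module.time_evaluator("run", dev, number=timing_number, repeat=timing_repeat)
        results = np.array(timer().results) * 1000  # seconds -> milliseconds
        print({"mean": np.mean(results), "median": np.median(results), "std": np.std(results)})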
 
 
 
@@ -582,31 +582,30 @@ the tuning data to.
 
  .. code-block:: none
 
-
    [Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  1/25]  Current/Best:   18.87/  19.00 GFLOPS | Progress: (4/20) | 9.25 s
    [Task  1/25]  Current/Best:    7.20/  22.99 GFLOPS | Progress: (8/20) | 12.00 s
    [Task  1/25]  Current/Best:   10.02/  22.99 GFLOPS | Progress: (12/20) | 15.75 s
    [Task  1/25]  Current/Best:    3.29/  22.99 GFLOPS | Progress: (16/20) | 18.35 s
    [Task  1/25]  Current/Best:   19.49/  22.99 GFLOPS | Progress: (20/20) | 22.17 s Done.
-
    [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  2/25]  Current/Best:   16.41/  16.41 GFLOPS | Progress: (4/20) | 4.82 s
    [Task  2/25]  Current/Best:   20.34/  20.34 GFLOPS | Progress: (8/20) | 6.78 s
    [Task  2/25]  Current/Best:   16.53/  20.34 GFLOPS | Progress: (12/20) | 9.60 s
    [Task  2/25]  Current/Best:   12.03/  20.34 GFLOPS | Progress: (16/20) | 11.18 s
    [Task  2/25]  Current/Best:   15.92/  20.34 GFLOPS | Progress: (20/20) | 12.86 s Done.
-
    [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  3/25]  Current/Best:   22.76/  22.76 GFLOPS | Progress: (4/20) | 5.17 s
    [Task  3/25]  Current/Best:   12.27/  22.76 GFLOPS | Progress: (8/20) | 7.55 s
    [Task  3/25]  Current/Best:   15.52/  23.72 GFLOPS | Progress: (12/20) | 10.19 s
    [Task  3/25]  Current/Best:   20.25/  23.72 GFLOPS | Progress: (16/20) | 12.60 s
    [Task  3/25]  Current/Best:   10.26/  23.72 GFLOPS | Progress: (20/20) | 15.86 s Done.
-
    [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  4/25]  Current/Best:    5.82/  21.45 GFLOPS | Progress: (4/20) | 5.96 s
    [Task  4/25]  Current/Best:   18.36/  21.45 GFLOPS | Progress: (8/20) | 7.71 s
    [Task  4/25]  Current/Best:   15.49/  21.45 GFLOPS | Progress: (12/20) | 9.98 s
    [Task  4/25]  Current/Best:   20.17/  21.45 GFLOPS | Progress: (16/20) | 11.99 s
    [Task  4/25]  Current/Best:   14.37/  21.45 GFLOPS | Progress: (20/20) | 14.56 s Done.
-
    [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  5/25]  Current/Best:   17.90/  17.95 GFLOPS | Progress: (4/20) | 5.46 s
    [Task  5/25]  Current/Best:    6.29/  19.68 GFLOPS | Progress: (8/20) | 7.42 s
    [Task  5/25]  Current/Best:   15.23/  19.68 GFLOPS | Progress: (12/20) | 10.63 s
    [Task  5/25]  Current/Best:   18.31/  19.68 GFLOPS | Progress: (16/20) | 13.68 s
    [Task  5/25]  Current/Best:    8.84/  19.68 GFLOPS | Progress: (20/20) | 15.75 s Done.
-
    [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  6/25]  Current/Best:   15.43/  19.72 GFLOPS | Progress: (4/20) | 5.13 s
    [Task  6/25]  Current/Best:   16.23/  19.72 GFLOPS | Progress: (8/20) | 7.84 s
    [Task  6/25]  Current/Best:   17.39/  19.72 GFLOPS | Progress: (12/20) | 10.90 s
    [Task  6/25]  Current/Best:   12.26/  19.99 GFLOPS | Progress: (16/20) | 13.16 s
    [Task  6/25]  Current/Best:   17.54/  19.99 GFLOPS | Progress: (20/20) | 15.48 s Done.
-
    [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  7/25]  Current/Best:   13.87/  13.87 GFLOPS | Progress: (4/20) | 5.74 s
    [Task  7/25]  Current/Best:   19.00/  19.00 GFLOPS | Progress: (8/20) | 8.09 s
    [Task  7/25]  Current/Best:    5.12/  19.00 GFLOPS | Progress: (12/20) | 10.95 s
    [Task  7/25]  Current/Best:   13.03/  19.57 GFLOPS | Progress: (16/20) | 13.77 s
    [Task  7/25]  Current/Best:   21.52/  21.52 GFLOPS | Progress: (20/20) | 16.67 s Done.
-
    [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  8/25]  Current/Best:   13.79/  13.79 GFLOPS | Progress: (4/20) | 11.81 s
    [Task  8/25]  Current/Best:   13.16/  13.79 GFLOPS | Progress: (8/20) | 15.03 s
    [Task  8/25]  Current/Best:   12.07/  13.79 GFLOPS | Progress: (12/20) | 18.43 s
    [Task  8/25]  Current/Best:   17.04/  17.04 GFLOPS | Progress: (16/20) | 25.31 s
    [Task  8/25]  Current/Best:   16.02/  17.04 GFLOPS | Progress: (20/20) | 27.69 s Done.
-
    [Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  9/25]  Current/Best:    9.45/  23.19 GFLOPS | Progress: (4/20) | 7.32 s
    [Task  9/25]  Current/Best:   20.35/  23.19 GFLOPS | Progress: (8/20) | 18.43 s
    [Task  9/25]  Current/Best:   19.30/  23.19 GFLOPS | Progress: (12/20) | 29.50 s
    [Task  9/25]  Current/Best:   18.90/  23.19 GFLOPS | Progress: (16/20) | 32.36 s
    [Task  9/25]  Current/Best:   12.40/  23.19 GFLOPS | Progress: (20/20) | 34.82 s Done.
-
    [Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 10/25]  Current/Best:   11.09/  16.65 GFLOPS | Progress: (4/20) | 5.63 s
    [Task 10/25]  Current/Best:   13.74/  16.65 GFLOPS | Progress: (8/20) | 8.58 s
    [Task 10/25]  Current/Best:    7.18/  16.65 GFLOPS | Progress: (12/20) | 11.96 s
    [Task 10/25]  Current/Best:   12.65/  16.65 GFLOPS | Progress: (16/20) | 14.01 s
    [Task 10/25]  Current/Best:   11.77/  20.15 GFLOPS | Progress: (20/20) | 15.74 s Done.
-
    [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 11/25]  Current/Best:   10.09/  23.71 GFLOPS | Progress: (4/20) | 5.54 s
    [Task 11/25]  Current/Best:   18.87/  23.71 GFLOPS | Progress: (8/20) | 7.85 s
    [Task 11/25]  Current/Best:   12.35/  23.71 GFLOPS | Progress: (12/20) | 11.12 s
    [Task 11/25]  Current/Best:    7.13/  23.71 GFLOPS | Progress: (16/20) | 13.65 s
    [Task 11/25]  Current/Best:   17.85/  23.71 GFLOPS | Progress: (20/20) | 16.45 s Done.
-
    [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 12/25]  Current/Best:   13.55/  13.70 GFLOPS | Progress: (4/20) | 5.51 s
    [Task 12/25]  Current/Best:   12.41/  21.47 GFLOPS | Progress: (8/20) | 8.02 s
    [Task 12/25]  Current/Best:   11.13/  21.47 GFLOPS | Progress: (12/20) | 10.20 s
    [Task 12/25]  Current/Best:    4.64/  21.47 GFLOPS | Progress: (16/20) | 15.02 s
    [Task 12/25]  Current/Best:   11.57/  21.47 GFLOPS | Progress: (20/20) | 18.05 s Done.
-
    [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 13/25]  Current/Best:   20.56/  20.56 GFLOPS | Progress: (4/20) | 6.30 s
    [Task 13/25]  Current/Best:    3.10/  20.56 GFLOPS | Progress: (8/20) | 9.67 s
    [Task 13/25]  Current/Best:   12.04/  21.47 GFLOPS | Progress: (12/20) | 13.66 s
    [Task 13/25]  Current/Best:    6.74/  22.13 GFLOPS | Progress: (16/20) | 17.64 s
    [Task 13/25]  Current/Best:   10.00/  22.13 GFLOPS | Progress: (20/20) | 20.99 s Done.
-
    [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 14/25]  Current/Best:   11.47/  11.47 GFLOPS | Progress: (4/20) | 4.96 s
    [Task 14/25]  Current/Best:    5.38/  16.64 GFLOPS | Progress: (8/20) | 16.02 s
    [Task 14/25]  Current/Best:   15.10/  16.64 GFLOPS | Progress: (12/20) | 22.00 s
    [Task 14/25]  Current/Best:   15.92/  16.64 GFLOPS | Progress: (16/20) | 26.46 s
    [Task 14/25]  Current/Best:   13.02/  18.82 GFLOPS | Progress: (20/20) | 28.31 s Done.
-
    [Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 15/25]  Current/Best:    8.81/  10.63 GFLOPS | Progress: (4/20) | 10.92 s
    [Task 15/25]  Current/Best:   15.71/  15.71 GFLOPS | Progress: (8/20) | 18.15 s
    [Task 15/25]  Current/Best:    6.82/  15.71 GFLOPS | Progress: (12/20) | 26.03 s
    [Task 15/25]  Current/Best:    7.17/  18.02 GFLOPS | Progress: (16/20) | 27.78 s
    [Task 15/25]  Current/Best:   15.84/  19.83 GFLOPS | Progress: (20/20) | 31.33 s Done.
-
    [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 16/25]  Current/Best:   19.41/  19.41 GFLOPS | Progress: (4/20) | 4.83 s
    [Task 16/25]  Current/Best:    7.81/  19.41 GFLOPS | Progress: (8/20) | 6.67 s
    [Task 16/25]  Current/Best:    3.05/  19.41 GFLOPS | Progress: (12/20) | 8.57 s
    [Task 16/25]  Current/Best:   20.51/  20.51 GFLOPS | Progress: (16/20) | 11.75 s
    [Task 16/25]  Current/Best:   18.18/  20.51 GFLOPS | Progress: (20/20) | 14.26 s Done.
-
    [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 17/25]  Current/Best:    5.52/  23.59 GFLOPS | Progress: (4/20) | 5.07 s
    [Task 17/25]  Current/Best:   22.78/  23.59 GFLOPS | Progress: (8/20) | 6.91 s
    [Task 17/25]  Current/Best:   11.79/  23.59 GFLOPS | Progress: (12/20) | 10.22 s
    [Task 17/25]  Current/Best:    7.29/  23.59 GFLOPS | Progress: (16/20) | 12.45 s
    [Task 17/25]  Current/Best:   21.90/  23.59 GFLOPS | Progress: (20/20) | 15.23 s Done.
-
    [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 18/25]  Current/Best:    5.42/  15.83 GFLOPS | Progress: (4/20) | 8.93 s
    [Task 18/25]  Current/Best:   15.36/  15.83 GFLOPS | Progress: (8/20) | 11.33 s
    [Task 18/25]  Current/Best:    6.81/  15.83 GFLOPS | Progress: (12/20) | 18.08 s
    [Task 18/25]  Current/Best:   18.66/  18.66 GFLOPS | Progress: (16/20) | 21.02 s
    [Task 18/25]  Current/Best:    9.97/  23.94 GFLOPS | Progress: (20/20) | 23.32 s Done.
-
    [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 19/25]  Current/Best:   21.17/  21.17 GFLOPS | Progress: (4/20) | 7.42 s
    [Task 19/25]  Current/Best:   20.07/  21.17 GFLOPS | Progress: (8/20) | 10.77 s
    [Task 19/25]  Current/Best:   20.17/  21.17 GFLOPS | Progress: (12/20) | 13.94 s
    [Task 19/25]  Current/Best:    9.32/  21.17 GFLOPS | Progress: (16/20) | 18.92 s
    [Task 19/25]  Current/Best:   10.42/  21.17 GFLOPS | Progress: (20/20) | 23.86 s Done.
-
    [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 20/25]  Current/Best:    7.33/  12.34 GFLOPS | Progress: (4/20) | 15.05 s
    [Task 20/25]  Current/Best:   16.52/  16.52 GFLOPS | Progress: (8/20) | 18.60 s
    [Task 20/25]  Current/Best:    1.58/  16.97 GFLOPS | Progress: (12/20) | 23.99 s
    [Task 20/25]  Current/Best:   18.44/  18.44 GFLOPS | Progress: (16/20) | 26.72 s
    [Task 20/25]  Current/Best:   20.21/  20.21 GFLOPS | Progress: (20/20) | 30.75 s
    [Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 21/25]  Current/Best:   21.15/  21.15 GFLOPS | Progress: (4/20) | 7.30 s
    [Task 21/25]  Current/Best:    9.86/  21.15 GFLOPS | Progress: (8/20) | 18.56 s
    [Task 21/25]  Current/Best:   22.11/  22.11 GFLOPS | Progress: (12/20) | 21.89 s
    [Task 21/25]  Current/Best:   12.72/  22.11 GFLOPS | Progress: (16/20) | 29.54 s
   [Task 21/25]  Current/Best:    8.10/  22.11 GFLOPS | Progress: (20/20) | 40.76 s
    [Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 22/25]  Current/Best:    9.29/  16.51 GFLOPS | Progress: (4/20) | 5.87 s
    [Task 22/25]  Current/Best:   10.39/  16.51 GFLOPS | Progress: (8/20) | 7.69 s
    [Task 22/25]  Current/Best:    8.54/  19.41 GFLOPS | Progress: (12/20) | 10.01 s
    [Task 22/25]  Current/Best:   11.30/  21.49 GFLOPS | Progress: (16/20) | 11.83 s
    [Task 22/25]  Current/Best:   16.61/  21.49 GFLOPS | Progress: (20/20) | 13.45 s Done.
-
    [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 23/25]  Current/Best:   18.11/  22.44 GFLOPS | Progress: (4/20) | 6.84 s
    [Task 23/25]  Current/Best:   10.10/  22.44 GFLOPS | Progress: (8/20) | 10.24 s
    [Task 23/25]  Current/Best:   19.19/  22.44 GFLOPS | Progress: (12/20) | 13.97 s
    [Task 23/25]  Current/Best:   10.75/  22.44 GFLOPS | Progress: (16/20) | 17.30 s
    [Task 23/25]  Current/Best:   11.97/  22.44 GFLOPS | Progress: (20/20) | 20.09 s Done.
-
    [Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 24/25]  Current/Best:    3.65/   3.71 GFLOPS | Progress: (4/20) | 13.48 s Done.
+
    [Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  1/25]  Current/Best:   10.92/  20.88 GFLOPS | Progress: (4/20) | 10.06 s
    [Task  1/25]  Current/Best:   15.74/  23.70 GFLOPS | Progress: (8/20) | 12.15 s
    [Task  1/25]  Current/Best:    9.39/  23.70 GFLOPS | Progress: (12/20) | 19.96 s
    [Task  1/25]  Current/Best:   18.80/  23.70 GFLOPS | Progress: (16/20) | 22.70 s
    [Task  1/25]  Current/Best:   18.80/  23.70 GFLOPS | Progress: (20/20) | 25.17 s Done.
+
    [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  2/25]  Current/Best:   15.87/  15.87 GFLOPS | Progress: (4/20) | 4.46 s
    [Task  2/25]  Current/Best:   11.47/  16.42 GFLOPS | Progress: (8/20) | 6.19 s
    [Task  2/25]  Current/Best:    6.22/  18.57 GFLOPS | Progress: (12/20) | 8.02 s
    [Task  2/25]  Current/Best:   14.75/  18.57 GFLOPS | Progress: (16/20) | 9.65 s
    [Task  2/25]  Current/Best:   13.58/  18.57 GFLOPS | Progress: (20/20) | 11.58 s Done.
+
    [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  3/25]  Current/Best:   21.31/  21.31 GFLOPS | Progress: (4/20) | 4.88 s
    [Task  3/25]  Current/Best:    3.16/  24.23 GFLOPS | Progress: (8/20) | 7.45 s
    [Task  3/25]  Current/Best:   20.78/  24.23 GFLOPS | Progress: (12/20) | 9.67 s
    [Task  3/25]  Current/Best:    9.67/  24.23 GFLOPS | Progress: (16/20) | 11.89 s
    [Task  3/25]  Current/Best:   20.72/  24.23 GFLOPS | Progress: (20/20) | 14.42 s Done.
+
    [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  4/25]  Current/Best:   16.16/  16.16 GFLOPS | Progress: (4/20) | 6.05 s
    [Task  4/25]  Current/Best:   15.43/  16.16 GFLOPS | Progress: (8/20) | 8.10 s
    [Task  4/25]  Current/Best:   21.58/  21.58 GFLOPS | Progress: (12/20) | 9.92 s
    [Task  4/25]  Current/Best:   11.58/  21.58 GFLOPS | Progress: (16/20) | 12.69 s
    [Task  4/25]  Current/Best:   12.51/  21.58 GFLOPS | Progress: (20/20) | 15.00 s Done.
+
    [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  5/25]  Current/Best:   15.79/  16.11 GFLOPS | Progress: (4/20) | 4.93 s
    [Task  5/25]  Current/Best:    7.12/  16.11 GFLOPS | Progress: (8/20) | 7.22 s
    [Task  5/25]  Current/Best:   11.63/  16.11 GFLOPS | Progress: (12/20) | 9.91 s
    [Task  5/25]  Current/Best:   13.32/  16.11 GFLOPS | Progress: (16/20) | 12.24 s
    [Task  5/25]  Current/Best:   13.98/  16.11 GFLOPS | Progress: (20/20) | 14.62 s Done.
+
    [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  6/25]  Current/Best:   11.41/  22.82 GFLOPS | Progress: (4/20) | 6.18 s
    [Task  6/25]  Current/Best:   17.13/  22.82 GFLOPS | Progress: (8/20) | 9.56 s
    [Task  6/25]  Current/Best:   13.68/  22.82 GFLOPS | Progress: (12/20) | 11.79 s
    [Task  6/25]  Current/Best:   23.06/  23.06 GFLOPS | Progress: (16/20) | 14.11 s
    [Task  6/25]  Current/Best:    5.70/  23.06 GFLOPS | Progress: (20/20) | 17.75 s Done.
+
    [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  7/25]  Current/Best:   18.70/  18.70 GFLOPS | Progress: (4/20) | 5.47 s
    [Task  7/25]  Current/Best:   21.14/  21.14 GFLOPS | Progress: (8/20) | 7.94 s
    [Task  7/25]  Current/Best:   17.40/  21.14 GFLOPS | Progress: (12/20) | 11.06 s
    [Task  7/25]  Current/Best:    8.46/  21.14 GFLOPS | Progress: (16/20) | 14.84 s
    [Task  7/25]  Current/Best:   11.32/  21.14 GFLOPS | Progress: (20/20) | 17.98 s Done.
+
    [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  8/25]  Current/Best:    7.64/  14.08 GFLOPS | Progress: (4/20) | 8.89 s
    [Task  8/25]  Current/Best:    7.45/  14.08 GFLOPS | Progress: (8/20) | 20.83 s
    [Task  8/25]  Current/Best:   17.30/  17.30 GFLOPS | Progress: (12/20) | 24.79 s
    [Task  8/25]  Current/Best:    2.58/  17.30 GFLOPS | Progress: (16/20) | 28.41 s
    [Task  8/25]  Current/Best:    8.58/  17.30 GFLOPS | Progress: (20/20) | 37.36 s
    [Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
+
    [Task  9/25]  Current/Best:   16.25/  16.25 GFLOPS | Progress: (4/20) | 7.26 s
    [Task  9/25]  Current/Best:   17.85/  18.54 GFLOPS | Progress: (8/20) | 9.18 s
    [Task  9/25]  Current/Best:   16.27/  18.54 GFLOPS | Progress: (12/20) | 13.16 s
    [Task  9/25]  Current/Best:   12.88/  18.54 GFLOPS | Progress: (16/20) | 15.34 s
    [Task  9/25]  Current/Best:   17.06/  22.84 GFLOPS | Progress: (20/20) | 19.34 s Done.
+
    [Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 10/25]  Current/Best:   13.06/  21.21 GFLOPS | Progress: (4/20) | 4.58 s
    [Task 10/25]  Current/Best:   16.01/  21.21 GFLOPS | Progress: (8/20) | 7.02 s
    [Task 10/25]  Current/Best:   16.07/  21.21 GFLOPS | Progress: (12/20) | 8.77 s
    [Task 10/25]  Current/Best:   13.31/  21.21 GFLOPS | Progress: (16/20) | 11.19 s
    [Task 10/25]  Current/Best:   10.65/  21.21 GFLOPS | Progress: (20/20) | 14.89 s Done.
+
    [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 11/25]  Current/Best:    6.16/   9.97 GFLOPS | Progress: (4/20) | 5.55 s
    [Task 11/25]  Current/Best:   20.67/  21.76 GFLOPS | Progress: (8/20) | 8.37 s
    [Task 11/25]  Current/Best:   19.65/  21.76 GFLOPS | Progress: (12/20) | 10.49 s
    [Task 11/25]  Current/Best:   14.86/  21.76 GFLOPS | Progress: (16/20) | 13.32 s
    [Task 11/25]  Current/Best:   13.09/  22.98 GFLOPS | Progress: (20/20) | 15.29 s Done.
+
    [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 12/25]  Current/Best:   12.18/  19.28 GFLOPS | Progress: (4/20) | 5.19 s
    [Task 12/25]  Current/Best:    6.18/  19.28 GFLOPS | Progress: (8/20) | 7.94 s
    [Task 12/25]  Current/Best:   15.95/  19.28 GFLOPS | Progress: (12/20) | 11.18 s
    [Task 12/25]  Current/Best:   14.99/  19.28 GFLOPS | Progress: (16/20) | 13.69 s
    [Task 12/25]  Current/Best:   12.16/  19.28 GFLOPS | Progress: (20/20) | 16.28 s Done.
+
    [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 13/25]  Current/Best:   18.94/  19.05 GFLOPS | Progress: (4/20) | 6.37 s
    [Task 13/25]  Current/Best:   15.68/  19.05 GFLOPS | Progress: (8/20) | 8.78 s
    [Task 13/25]  Current/Best:   12.75/  19.05 GFLOPS | Progress: (12/20) | 12.15 s
    [Task 13/25]  Current/Best:   18.31/  19.84 GFLOPS | Progress: (16/20) | 14.59 s
    [Task 13/25]  Current/Best:   21.66/  21.66 GFLOPS | Progress: (20/20) | 18.28 s Done.
+
    [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 14/25]  Current/Best:   10.29/  18.34 GFLOPS | Progress: (4/20) | 4.86 s
    [Task 14/25]  Current/Best:   14.65/  18.34 GFLOPS | Progress: (8/20) | 7.19 s
    [Task 14/25]  Current/Best:   12.62/  18.34 GFLOPS | Progress: (12/20) | 12.95 s
    [Task 14/25]  Current/Best:   11.14/  18.34 GFLOPS | Progress: (16/20) | 15.48 s
    [Task 14/25]  Current/Best:   12.09/  20.44 GFLOPS | Progress: (20/20) | 20.85 s Done.
+
    [Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 15/25]  Current/Best:   16.65/  19.00 GFLOPS | Progress: (4/20) | 6.72 s
    [Task 15/25]  Current/Best:   14.01/  20.94 GFLOPS | Progress: (8/20) | 8.50 s
    [Task 15/25]  Current/Best:   15.36/  20.96 GFLOPS | Progress: (12/20) | 10.86 s
    [Task 15/25]  Current/Best:   16.47/  20.96 GFLOPS | Progress: (16/20) | 13.44 s
    [Task 15/25]  Current/Best:   15.44/  20.96 GFLOPS | Progress: (20/20) | 24.49 s
    [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 16/25]  Current/Best:   17.29/  17.29 GFLOPS | Progress: (4/20) | 5.77 s
    [Task 16/25]  Current/Best:   18.35/  18.35 GFLOPS | Progress: (8/20) | 7.91 s
    [Task 16/25]  Current/Best:    6.14/  18.35 GFLOPS | Progress: (12/20) | 10.09 s
    [Task 16/25]  Current/Best:   13.27/  18.85 GFLOPS | Progress: (16/20) | 12.35 s
   [Task 16/25]  Current/Best:   14.53/  18.85 GFLOPS | Progress: (20/20) | 14.85 s Done.
+
    [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 17/25]  Current/Best:   20.55/  20.55 GFLOPS | Progress: (4/20) | 6.07 s
    [Task 17/25]  Current/Best:    5.37/  20.55 GFLOPS | Progress: (8/20) | 9.19 s
    [Task 17/25]  Current/Best:   17.99/  20.78 GFLOPS | Progress: (12/20) | 12.12 s
    [Task 17/25]  Current/Best:    8.19/  20.78 GFLOPS | Progress: (16/20) | 15.25 s
    [Task 17/25]  Current/Best:    6.21/  20.78 GFLOPS | Progress: (20/20) | 18.26 s Done.
+
    [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 18/25]  Current/Best:   16.11/  16.11 GFLOPS | Progress: (4/20) | 7.68 s
    [Task 18/25]  Current/Best:   10.38/  18.73 GFLOPS | Progress: (8/20) | 10.31 s
    [Task 18/25]  Current/Best:   11.96/  23.57 GFLOPS | Progress: (12/20) | 13.04 s
    [Task 18/25]  Current/Best:    2.96/  23.57 GFLOPS | Progress: (16/20) | 20.68 s
    [Task 18/25]  Current/Best:    7.83/  23.57 GFLOPS | Progress: (20/20) | 26.39 s Done.
+
    [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 19/25]  Current/Best:   12.67/  18.47 GFLOPS | Progress: (4/20) | 6.62 s
    [Task 19/25]  Current/Best:   13.49/  18.47 GFLOPS | Progress: (8/20) | 9.64 s
    [Task 19/25]  Current/Best:    6.16/  18.47 GFLOPS | Progress: (12/20) | 12.81 s
    [Task 19/25]  Current/Best:   13.04/  18.90 GFLOPS | Progress: (16/20) | 15.64 s
    [Task 19/25]  Current/Best:    7.84/  18.90 GFLOPS | Progress: (20/20) | 20.68 s Done.
+
    [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 20/25]  Current/Best:   10.86/  18.12 GFLOPS | Progress: (4/20) | 5.37 s
    [Task 20/25]  Current/Best:   20.11/  20.11 GFLOPS | Progress: (8/20) | 8.14 s
    [Task 20/25]  Current/Best:   18.96/  20.11 GFLOPS | Progress: (12/20) | 11.87 s
    [Task 20/25]  Current/Best:   12.31/  20.11 GFLOPS | Progress: (16/20) | 14.92 s
    [Task 20/25]  Current/Best:   14.34/  20.11 GFLOPS | Progress: (20/20) | 26.50 s
    [Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 21/25]  Current/Best:    5.25/  18.58 GFLOPS | Progress: (4/20) | 6.16 s
    [Task 21/25]  Current/Best:   18.34/  23.22 GFLOPS | Progress: (8/20) | 8.04 s
    [Task 21/25]  Current/Best:    9.99/  23.22 GFLOPS | Progress: (12/20) | 13.93 s
    [Task 21/25]  Current/Best:   11.44/  23.22 GFLOPS | Progress: (16/20) | 16.18 s
   [Task 21/25]  Current/Best:   18.63/  23.22 GFLOPS | Progress: (20/20) | 27.46 s
    [Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 22/25]  Current/Best:    4.44/  18.05 GFLOPS | Progress: (4/20) | 6.16 s
    [Task 22/25]  Current/Best:    6.79/  18.05 GFLOPS | Progress: (8/20) | 11.46 s
    [Task 22/25]  Current/Best:   20.23/  20.72 GFLOPS | Progress: (12/20) | 14.41 s
    [Task 22/25]  Current/Best:    9.89/  20.72 GFLOPS | Progress: (16/20) | 17.31 s
    [Task 22/25]  Current/Best:   15.06/  20.72 GFLOPS | Progress: (20/20) | 19.25 s Done.
+
    [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 23/25]  Current/Best:   19.17/  20.68 GFLOPS | Progress: (4/20) | 5.53 s
    [Task 23/25]  Current/Best:   10.41/  20.68 GFLOPS | Progress: (8/20) | 12.05 s
    [Task 23/25]  Current/Best:   10.58/  22.34 GFLOPS | Progress: (12/20) | 14.66 s
    [Task 23/25]  Current/Best:   18.73/  22.54 GFLOPS | Progress: (16/20) | 17.79 s
    [Task 23/25]  Current/Best:    6.34/  22.54 GFLOPS | Progress: (20/20) | 20.56 s Done.
+
    [Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
      Done.
-
    [Task 24/25]  Current/Best:    2.11/   3.71 GFLOPS | Progress: (8/20) | 23.81 s
    [Task 24/25]  Current/Best:    4.18/   4.18 GFLOPS | Progress: (12/20) | 34.84 s
    [Task 24/25]  Current/Best:    8.26/  10.17 GFLOPS | Progress: (16/20) | 45.83 s
    [Task 24/25]  Current/Best:    4.08/  10.17 GFLOPS | Progress: (20/20) | 50.80 s
    [Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 25/25]  Current/Best:    5.72/   9.56 GFLOPS | Progress: (4/20) | 4.21 s
    [Task 25/25]  Current/Best:    2.89/   9.71 GFLOPS | Progress: (8/20) | 9.47 s
    [Task 25/25]  Current/Best:    5.74/   9.71 GFLOPS | Progress: (12/20) | 11.13 s
    [Task 25/25]  Current/Best:    1.44/   9.71 GFLOPS | Progress: (16/20) | 17.17 s
    [Task 25/25]  Current/Best:    6.92/   9.71 GFLOPS | Progress: (20/20) | 24.19 s Done.
-
+     Done.
+
    [Task 24/25]  Current/Best:    5.48/   5.48 GFLOPS | Progress: (4/20) | 13.60 s
    [Task 24/25]  Current/Best:    8.57/   8.57 GFLOPS | Progress: (8/20) | 16.98 s
    [Task 24/25]  Current/Best:    7.24/   8.57 GFLOPS | Progress: (12/20) | 20.04 s
    [Task 24/25]  Current/Best:    3.26/   8.57 GFLOPS | Progress: (16/20) | 30.73 s
    [Task 24/25]  Current/Best:    5.95/   8.57 GFLOPS | Progress: (20/20) | 33.87 s
    [Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 25/25]  Current/Best:    1.55/   5.15 GFLOPS | Progress: (4/20) | 9.71 s
    [Task 25/25]  Current/Best:    4.78/   5.80 GFLOPS | Progress: (8/20) | 20.70 s
    [Task 25/25]  Current/Best:    2.98/   8.12 GFLOPS | Progress: (12/20) | 31.69 s
    [Task 25/25]  Current/Best:    6.64/   8.12 GFLOPS | Progress: (16/20) | 34.47 s
    [Task 25/25]  Current/Best:    7.61/   8.12 GFLOPS | Progress: (20/20) | 45.43 s
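 
    The progress lines above are emitted task by task while tuning the operators extracted from the model. A minimal sketch of that loop, assuming ``mod``, ``params``, ``target``, and a log file name, looks like this:
 
     .. code-block:: python
 
        # Minimal sketch (assumed names): extract tuning tasks from the Relay
        # model and tune each one, printing a GFLOPS progress bar per task.
        from tvm import autotvm
        from tvm.autotvm.tuner import XGBTuner
 
        tasks = autotvm.task.extract_from_program(mod["main"], target=target, params=params)
        measure_option = autotvm.measure_option(
            builder=autotvm.LocalBuilder(build_func="default"),
            runner=autotvm.LocalRunner(number=10, repeat=1, timeout=10, min_repeat_ms=0),
        )
        for i, task in enumerate(tasks):
            prefix = "[Task %2d/%2d] " % (i + 1, len(tasks))
            tuner = XGBTuner(task, loss_type="rank")
            tuner.tune(
                n_trial=min(20, len(task.config_space)),
                early_stopping=100,
                measure_option=measure_option,
                callbacks=[
                    autotvm.callback.progress_bar(20, prefix=prefix),   # prints the progress lines
                    autotvm.callback.log_to_file("resnet-50.json"),      # assumed log file name
                ],
            )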
 
 
 
@@ -675,6 +674,7 @@ model using optimized operators to speed up our computations.
  .. code-block:: none
 
      Done.
+     Done.
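 
    Once tuning records exist, the model is rebuilt with the best configurations applied. A minimal sketch, assuming ``tuning_records`` points at the log file written during tuning:
 
     .. code-block:: python
 
        # Minimal sketch (assumed names): apply the best tuning records found
        # above when compiling the Relay module.
        import tvm
        from tvm import autotvm, relay
 
        with autotvm.apply_history_best(tuning_records):
            with tvm.transform.PassContext(opt_level=3):
                lib = relay.build(mod, target=target, params=params)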
 
 
 
@@ -709,7 +709,7 @@ Verify that the optimized model runs and produces the same results:
  .. code-block:: none
 
     class='n02123045 tabby, tabby cat' with probability=0.621104
-    class='n02123159 tiger cat' with probability=0.356378
+    class='n02123159 tiger cat' with probability=0.356377
     class='n02124075 Egyptian cat' with probability=0.019712
     class='n02129604 tiger, Panthera tigris' with probability=0.001215
     class='n04040759 radiator' with probability=0.000262
@@ -766,8 +766,8 @@ improvement in comparing the optimized model to the unoptimized model.
 
  .. code-block:: none
 
-    optimized: {'mean': 402.6041530506336, 'median': 401.1850734008476, 'std': 3.8736724392891415}
-    unoptimized: {'mean': 487.7951962596853, 'median': 488.0434418999357, 'std': 1.7243702626508126}
+    optimized: {'mean': 408.9835010492243, 'median': 408.2894904000568, 'std': 2.957360840474384}
+    unoptimized: {'mean': 493.4093647796544, 'median': 492.87042814830784, 'std': 1.429450833251269}
 
 
 
@@ -790,7 +790,7 @@ profiling/benchmarking.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 13 minutes  27.114 seconds)
+   **Total running time of the script:** ( 13 minutes  3.721 seconds)
 
 
 .. _sphx_glr_download_tutorial_autotvm_relay_x86.py:
diff --git a/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt b/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
index 873d85568b..20c0bc7698 100644
--- a/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
+++ b/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
@@ -274,7 +274,7 @@ device and returns the measured cost. Network overhead is excluded.
 
  .. code-block:: none
 
-    1.193e-07 secs/op
+    1.159e-07 secs/op
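 
    The per-op cost above is measured on the remote device through ``time_evaluator``. A minimal sketch of that measurement, with the RPC address, module name, and input arrays as assumptions:
 
     .. code-block:: python
 
        # Minimal sketch (assumed names): upload a cross-compiled kernel over
        # RPC and report its average per-op cost on the remote device.
        from tvm import rpc
 
        remote = rpc.connect("127.0.0.1", 9090)   # assumed RPC server address
        remote.upload("lib.tar")                  # assumed cross-compiled library
        func = remote.load_module("lib.tar")
        dev = remote.cpu()
        time_f = func.time_evaluator(func.entry_name, dev, number=10)
        print("%g secs/op" % time_f(a, b).mean)   # a, b: device arrays prepared earlier (assumed)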
 
 
 
diff --git a/docs/_sources/tutorial/intro_topi.rst.txt b/docs/_sources/tutorial/intro_topi.rst.txt
index 10b7e46d51..fcb98e72c5 100644
--- a/docs/_sources/tutorial/intro_topi.rst.txt
+++ b/docs/_sources/tutorial/intro_topi.rst.txt
@@ -270,7 +270,7 @@ As you can see, scheduled stages of computation have been accumulated and we can
 
  .. code-block:: none
 
-    [stage(a, placeholder(a, 0x1af8a720)), stage(b, placeholder(b, 0x1d098870)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T.Range(0, 10), "DataPar", ""), T.iter_var(ax2, T.Range(0, 10), "DataPar", "")], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[a[ax0, ax1, ax2] * b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T [...]
+    [stage(a, placeholder(a, 0xae08760)), stage(b, placeholder(b, 0x1ada0170)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T.Range(0, 10), "DataPar", ""), T.iter_var(ax2, T.Range(0, 10), "DataPar", "")], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[a[ax0, ax1, ax2] * b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T. [...]
 
 
 
diff --git a/docs/_sources/tutorial/sg_execution_times.rst.txt b/docs/_sources/tutorial/sg_execution_times.rst.txt
index 6a40cda127..45790f2ac9 100644
--- a/docs/_sources/tutorial/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorial/sg_execution_times.rst.txt
@@ -5,31 +5,31 @@
 
 Computation times
 =================
-**17:28.044** total execution time for **tutorial** files:
+**16:34.063** total execution time for **tutorial** files:
 
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_autotvm_relay_x86.py` (``autotvm_relay_x86.py``)                 | 13:27.114 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_autotvm_relay_x86.py` (``autotvm_relay_x86.py``)                 | 13:03.721 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_auto_scheduler_matmul_x86.py` (``auto_scheduler_matmul_x86.py``) | 01:50.538 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_auto_scheduler_matmul_x86.py` (``auto_scheduler_matmul_x86.py``) | 01:25.982 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)     | 01:00.236 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)     | 01:00.225 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_relay_quick_start.py` (``relay_quick_start.py``)                 | 00:41.336 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_relay_quick_start.py` (``relay_quick_start.py``)                 | 00:40.535 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_autotvm_matmul_x86.py` (``autotvm_matmul_x86.py``)               | 00:26.732 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_autotvm_matmul_x86.py` (``autotvm_matmul_x86.py``)               | 00:21.563 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_intro_topi.py` (``intro_topi.py``)                               | 00:00.996 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_intro_topi.py` (``intro_topi.py``)                               | 00:00.982 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_tensor_ir_blitz_course.py` (``tensor_ir_blitz_course.py``)       | 00:00.871 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_tensor_ir_blitz_course.py` (``tensor_ir_blitz_course.py``)       | 00:00.851 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``) | 00:00.221 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``) | 00:00.204 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_tutorial_uma.py` (``uma.py``)                                             | 00:00.000 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_tvmc_python.py` (``tvmc_python.py``)                             | 00:00.000 | 0.0 MB |
-+------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_tutorial_tvmc_command_line_driver.py` (``tvmc_command_line_driver.py``)   | 00:00.000 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
+| :ref:`sphx_glr_tutorial_tvmc_python.py` (``tvmc_python.py``)                             | 00:00.000 | 0.0 MB |
++------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_tutorial_introduction.py` (``introduction.py``)                           | 00:00.000 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_tutorial_install.py` (``install.py``)                                     | 00:00.000 | 0.0 MB |
diff --git a/docs/_sources/tutorial/tensor_expr_get_started.rst.txt b/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
index 60fb894eb2..adbaa48014 100644
--- a/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
+++ b/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
@@ -286,7 +286,7 @@ helper function to run a profile of the TVM generated code.
  .. code-block:: none
 
     Numpy running time: 0.000007
-    naive: 0.000007
+    naive: 0.000008
 
 
 
@@ -389,7 +389,7 @@ compile and run this new schedule with the parallel operation applied:
 
  .. code-block:: none
 
-    parallel: 0.000007
+    parallel: 0.000008
 
 
 
@@ -498,10 +498,10 @@ We can now compare the different schedules
  .. code-block:: none
 
                 Operator                  Timing             Performance
-                   numpy    7.179829990491271e-06                    1.0
-                   naive              6.6888e-06      0.9316098025800646
-                parallel              6.9881e-06      0.9732960264038019
-                  vector             3.92135e-05       5.461619572041826
+                   numpy    7.042039942461997e-06                    1.0
+                   naive              7.8677e-06       1.117247284066006
+                parallel              8.1401e-06      1.1559292572194793
+                  vector             3.92583e-05       5.574847674930219
 
 
 
@@ -922,7 +922,7 @@ matrix multiplication.
 
  .. code-block:: none
 
-    Numpy running time: 0.017677
+    Numpy running time: 0.018719
 
 
 
@@ -980,7 +980,7 @@ optimizations.
 
  .. code-block:: none
 
-    none: 3.427272
+    none: 3.416357
 
 
 
@@ -1080,7 +1080,7 @@ schedule.
 
  .. code-block:: none
 
-    blocking: 0.298237
+    blocking: 0.297736
 
 
 
@@ -1164,7 +1164,7 @@ already cache friendly from our previous optimizations.
 
  .. code-block:: none
 
-    vectorization: 0.278766
+    vectorization: 0.283784
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1230,7 +1230,7 @@ more cache friendly.
 
  .. code-block:: none
 
-    loop permutation: 0.116643
+    loop permutation: 0.117506
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1321,7 +1321,7 @@ optimized schedule.
 
  .. code-block:: none
 
-    array packing: 0.106191
+    array packing: 0.106528
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1404,7 +1404,7 @@ to `C` when all the block results are ready.
 
  .. code-block:: none
 
-    block caching: 0.111985
+    block caching: 0.111828
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1478,7 +1478,7 @@ of thread-level parallelization.
 
  .. code-block:: none
 
-    parallelization: 0.132418
+    parallelization: 0.132464
     # from tvm.script import ir as I
     # from tvm.script import tir as T
 
@@ -1548,13 +1548,13 @@ working, we can compare the results.
  .. code-block:: none
 
                 Operator                  Timing             Performance
-                    none            3.4272720055                     1.0
-                blocking            0.2982374362     0.08701889891476254
-           vectorization            0.2787663073     0.08133766647428126
-        loop permutation     0.11664279019999998     0.03403371253078675
-           array packing     0.10619090550000002     0.03098409035804206
-           block caching     0.11198485169999998     0.03267463204562974
-         parallelization     0.13241790800000003     0.03863653301736749
+                    none            3.4163570219                     1.0
+                blocking            0.2977356527     0.08715004046456917
+           vectorization            0.2837844282     0.08306638515261905
+        loop permutation             0.117506043     0.03439512973812358
+           array packing            0.1065281419    0.031181794296415363
+           block caching            0.1118275401     0.03273297825231607
+         parallelization             0.132463913    0.038773439705177666
 
 
 
@@ -1596,7 +1596,7 @@ the computation for specific platforms.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  0.236 seconds)
+   **Total running time of the script:** ( 1 minutes  0.225 seconds)
 
 
 .. _sphx_glr_download_tutorial_tensor_expr_get_started.py:
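The blocking/vectorization/loop permutation/parallelization timings in the table above are all variants of one ``te`` schedule over the same matmul. A minimal sketch of the blocked-and-vectorized variant (split factors here are illustrative, not the tuned values behind the numbers above):

.. code-block:: python

    import numpy as np
    import tvm
    from tvm import te

    M = K = N = 1024
    bn = 32

    A = te.placeholder((M, K), name="A")
    B = te.placeholder((K, N), name="B")
    k = te.reduce_axis((0, K), name="k")
    C = te.compute((M, N), lambda x, y: te.sum(A[x, k] * B[k, y], axis=k), name="C")

    s = te.create_schedule(C.op)
    xo, yo, xi, yi = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)  # blocking
    (kaxis,) = s[C].op.reduce_axis
    ko, ki = s[C].split(kaxis, factor=4)
    s[C].reorder(xo, yo, ko, ki, xi, yi)                            # loop permutation
    s[C].vectorize(yi)                                              # vectorization
    s[C].parallel(xo)                                               # parallelization

    func = tvm.build(s, [A, B, C], target="llvm")
    dev = tvm.cpu(0)
    a = tvm.nd.array(np.random.rand(M, K).astype("float32"), dev)
    b = tvm.nd.array(np.random.rand(K, N).astype("float32"), dev)
    c = tvm.nd.array(np.zeros((M, N), dtype="float32"), dev)
    print("blocked: %f" % func.time_evaluator(func.entry_name, dev, number=10)(a, b, c).mean)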
diff --git a/docs/commit_hash b/docs/commit_hash
index 2bcb921ea8..ccf1f36ae0 100644
--- a/docs/commit_hash
+++ b/docs/commit_hash
@@ -1 +1 @@
-fa223b077f3fb375e42f1e0d4563330daa730f18
+e280e01fc1d2f79cc4f1cda2257a51bd20605bcd
diff --git a/docs/how_to/compile_models/from_darknet.html b/docs/how_to/compile_models/from_darknet.html
index 8957458ffb..28c5c2eb8c 100644
--- a/docs/how_to/compile_models/from_darknet.html
+++ b/docs/how_to/compile_models/from_darknet.html
@@ -595,7 +595,7 @@ class:[&#39;truck 0.9266&#39;] left:471 top:83 right:689 bottom:169
 class:[&#39;bicycle 0.9984&#39;] left:111 top:113 right:577 bottom:447
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  31.073 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  33.484 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-darknet-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7716f96385bd5abb6e822041e285be54/from_darknet.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_darknet.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/from_mxnet.html b/docs/how_to/compile_models/from_mxnet.html
index 1e57f54823..3e9d00dec5 100644
--- a/docs/how_to/compile_models/from_mxnet.html
+++ b/docs/how_to/compile_models/from_mxnet.html
@@ -449,7 +449,7 @@
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;x&quot;</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.html#tuple" title="builtins.tuple" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">x</span><span class="o">.</span><span class="n">shape</span></a><span class="p">)</span>
 </pre></div>
 </div>
-<img src="../../_images/sphx_glr_from_mxnet_001.png" srcset="../../_images/sphx_glr_from_mxnet_001.png" alt="from mxnet" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zipce8aa542-1a2b-4473-9be6-28cc3b10b648 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
+<img src="../../_images/sphx_glr_from_mxnet_001.png" srcset="../../_images/sphx_glr_from_mxnet_001.png" alt="from mxnet" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip513d3b55-72e3-4612-800d-59d41fd413e9 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
 x (1, 3, 224, 224)
 </pre></div>
 </div>
diff --git a/docs/how_to/compile_models/from_oneflow.html b/docs/how_to/compile_models/from_oneflow.html
index 79d31115be..a5ab6bd31f 100644
--- a/docs/how_to/compile_models/from_oneflow.html
+++ b/docs/how_to/compile_models/from_oneflow.html
@@ -459,10 +459,13 @@ Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdo
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading: &quot;https://oneflow-public.oss-cn-beijing.aliyuncs.com/model_zoo/flowvision/classification/ResNet/resnet18.zip&quot; to /workspace/.oneflow/flowvision_cache/resnet18.zip
 
   0%|          | 0.00/41.5M [00:00&lt;?, ?B/s]
- 21%|##1       | 8.89M/41.5M [00:00&lt;00:00, 93.2MB/s]
- 55%|#####4    | 22.8M/41.5M [00:00&lt;00:00, 124MB/s]
- 83%|########3 | 34.6M/41.5M [00:00&lt;00:00, 50.8MB/s]
-100%|##########| 41.5M/41.5M [00:00&lt;00:00, 59.0MB/s]
+ 18%|#8        | 7.56M/41.5M [00:00&lt;00:00, 79.3MB/s]
+ 36%|###6      | 15.1M/41.5M [00:00&lt;00:00, 38.5MB/s]
+ 48%|####7     | 19.8M/41.5M [00:00&lt;00:00, 39.6MB/s]
+ 58%|#####8    | 24.2M/41.5M [00:00&lt;00:00, 27.0MB/s]
+ 77%|#######7  | 32.0M/41.5M [00:00&lt;00:00, 33.5MB/s]
+ 92%|#########2| 38.3M/41.5M [00:01&lt;00:00, 34.5MB/s]
+100%|##########| 41.5M/41.5M [00:01&lt;00:00, 35.0MB/s]
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/compile_models/from_paddle.html b/docs/how_to/compile_models/from_paddle.html
index 2b214b209c..cd754b585e 100644
--- a/docs/how_to/compile_models/from_paddle.html
+++ b/docs/how_to/compile_models/from_paddle.html
@@ -494,7 +494,7 @@ To begin, we’ll install PaddlePaddle&gt;=2.1.3:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>TVM prediction top-1 id: 282, class name:  282: &#39;tiger cat&#39;,
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  1.199 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  2.566 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-paddle-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/16269b77359771348d507395692524cf/from_paddle.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_paddle.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/from_pytorch.html b/docs/how_to/compile_models/from_pytorch.html
index d9a8ccd9dc..13524f3ad5 100644
--- a/docs/how_to/compile_models/from_pytorch.html
+++ b/docs/how_to/compile_models/from_pytorch.html
@@ -442,13 +442,12 @@ be unstable.</p>
 Downloading: &quot;https://download.pytorch.org/models/resnet18-f37072fd.pth&quot; to /workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
 
   0%|          | 0.00/44.7M [00:00&lt;?, ?B/s]
- 14%|#4        | 6.30M/44.7M [00:00&lt;00:00, 47.0MB/s]
- 24%|##4       | 10.8M/44.7M [00:00&lt;00:00, 42.9MB/s]
- 48%|####8     | 21.5M/44.7M [00:00&lt;00:00, 70.5MB/s]
- 64%|######4   | 28.6M/44.7M [00:00&lt;00:00, 71.9MB/s]
- 80%|########  | 35.8M/44.7M [00:00&lt;00:00, 45.6MB/s]
- 92%|#########2| 41.2M/44.7M [00:00&lt;00:00, 42.7MB/s]
-100%|##########| 44.7M/44.7M [00:00&lt;00:00, 51.5MB/s]
+ 18%|#7        | 7.99M/44.7M [00:00&lt;00:00, 58.3MB/s]
+ 36%|###5      | 16.0M/44.7M [00:00&lt;00:00, 59.9MB/s]
+ 54%|#####3    | 24.0M/44.7M [00:00&lt;00:00, 60.5MB/s]
+ 72%|#######1  | 32.0M/44.7M [00:00&lt;00:00, 49.3MB/s]
+ 90%|########9 | 40.0M/44.7M [00:00&lt;00:00, 51.5MB/s]
+100%|##########| 44.7M/44.7M [00:00&lt;00:00, 56.3MB/s]
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/compile_models/from_tensorflow.html b/docs/how_to/compile_models/from_tensorflow.html
index fb64269851..8b4a917ae1 100644
--- a/docs/how_to/compile_models/from_tensorflow.html
+++ b/docs/how_to/compile_models/from_tensorflow.html
@@ -662,7 +662,7 @@ banana (score = 0.00022)
 desk (score = 0.00019)
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  34.776 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  32.171 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-tensorflow-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7f1d3d1b878694c201c614c807cdebc8/from_tensorflow.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_tensorflow.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/sg_execution_times.html b/docs/how_to/compile_models/sg_execution_times.html
index bbb1b8c0d3..b7017d74eb 100644
--- a/docs/how_to/compile_models/sg_execution_times.html
+++ b/docs/how_to/compile_models/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-compile-models-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>07:06.216</strong> total execution time for <strong>how_to_compile_models</strong> files:</p>
+<p><strong>07:06.702</strong> total execution time for <strong>how_to_compile_models</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 81%" />
@@ -358,44 +358,44 @@
 <col style="width: 8%" />
 </colgroup>
 <tbody>
-<tr class="row-odd"><td><p><a class="reference internal" href="from_tensorflow.html#sphx-glr-how-to-compile-models-from-tensorflow-py"><span class="std std-ref">Compile Tensorflow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tensorflow.py</span></code>)</p></td>
-<td><p>01:34.776</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="from_darknet.html#sphx-glr-how-to-compile-models-from-darknet-py"><span class="std std-ref">Compile YOLO-V2 and YOLO-V3 in DarkNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_darknet.py</span></code>)</p></td>
+<td><p>01:33.484</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="from_darknet.html#sphx-glr-how-to-compile-models-from-darknet-py"><span class="std std-ref">Compile YOLO-V2 and YOLO-V3 in DarkNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_darknet.py</span></code>)</p></td>
-<td><p>01:31.073</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="from_tensorflow.html#sphx-glr-how-to-compile-models-from-tensorflow-py"><span class="std std-ref">Compile Tensorflow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tensorflow.py</span></code>)</p></td>
+<td><p>01:32.171</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_paddle.html#sphx-glr-how-to-compile-models-from-paddle-py"><span class="std std-ref">Compile PaddlePaddle Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_paddle.py</span></code>)</p></td>
-<td><p>01:01.199</p></td>
+<td><p>01:02.566</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_oneflow.html#sphx-glr-how-to-compile-models-from-oneflow-py"><span class="std std-ref">Compile OneFlow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_oneflow.py</span></code>)</p></td>
-<td><p>00:40.053</p></td>
+<td><p>00:41.185</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_coreml.html#sphx-glr-how-to-compile-models-from-coreml-py"><span class="std std-ref">Compile CoreML Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_coreml.py</span></code>)</p></td>
-<td><p>00:36.932</p></td>
+<td><p>00:37.515</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_mxnet.html#sphx-glr-how-to-compile-models-from-mxnet-py"><span class="std std-ref">Compile MXNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_mxnet.py</span></code>)</p></td>
-<td><p>00:33.323</p></td>
+<td><p>00:32.795</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_pytorch.html#sphx-glr-how-to-compile-models-from-pytorch-py"><span class="std std-ref">Compile PyTorch Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_pytorch.py</span></code>)</p></td>
-<td><p>00:27.939</p></td>
+<td><p>00:28.034</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_keras.html#sphx-glr-how-to-compile-models-from-keras-py"><span class="std std-ref">Compile Keras Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_keras.py</span></code>)</p></td>
-<td><p>00:26.791</p></td>
+<td><p>00:24.826</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_tflite.html#sphx-glr-how-to-compile-models-from-tflite-py"><span class="std std-ref">Compile TFLite Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tflite.py</span></code>)</p></td>
-<td><p>00:11.202</p></td>
+<td><p>00:11.297</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_onnx.html#sphx-glr-how-to-compile-models-from-onnx-py"><span class="std std-ref">Compile ONNX Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_onnx.py</span></code>)</p></td>
-<td><p>00:02.926</p></td>
+<td><p>00:02.828</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/deploy_models/deploy_model_on_adreno.html b/docs/how_to/deploy_models/deploy_model_on_adreno.html
index a8a6dd5999..794410d28d 100644
--- a/docs/how_to/deploy_models/deploy_model_on_adreno.html
+++ b/docs/how_to/deploy_models/deploy_model_on_adreno.html
@@ -840,10 +840,10 @@ Top5 predictions:
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
- 4225.1644    4223.8821    4235.2511    4221.9718      3.6720
+ 4227.0592    4221.3038    4278.4657    4220.0554     17.1578
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  20.344 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  20.171 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-model-on-adreno-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/2387d8448da213eb625e6b3d916327d4/deploy_model_on_adreno.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_model_on_adreno.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html b/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html
index d233cb5fe5..dd1dd128e6 100644
--- a/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html
+++ b/docs/how_to/deploy_models/deploy_model_on_adreno_tvmc.html
@@ -448,30 +448,24 @@ to run this tutorial with a real device over rpc.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels.h5
 
      8192/102967424 [..............................] - ETA: 0s
-  6635520/102967424 [&gt;.............................] - ETA: 1s
-  8380416/102967424 [=&gt;............................] - ETA: 2s
- 15024128/102967424 [===&gt;..........................] - ETA: 1s
- 16769024/102967424 [===&gt;..........................] - ETA: 2s
- 24764416/102967424 [======&gt;.......................] - ETA: 1s
- 25157632/102967424 [======&gt;.......................] - ETA: 1s
- 33546240/102967424 [========&gt;.....................] - ETA: 1s
- 40189952/102967424 [==========&gt;...................] - ETA: 1s
- 41934848/102967424 [===========&gt;..................] - ETA: 1s
- 47079424/102967424 [============&gt;.................] - ETA: 1s
- 50323456/102967424 [=============&gt;................] - ETA: 1s
+  5079040/102967424 [&gt;.............................] - ETA: 0s
+ 10575872/102967424 [==&gt;...........................] - ETA: 0s
+ 16957440/102967424 [===&gt;..........................] - ETA: 0s
+ 24002560/102967424 [=====&gt;........................] - ETA: 0s
+ 29106176/102967424 [=======&gt;......................] - ETA: 0s
+ 33546240/102967424 [========&gt;.....................] - ETA: 0s
+ 41213952/102967424 [===========&gt;..................] - ETA: 0s
+ 44130304/102967424 [===========&gt;..................] - ETA: 0s
+ 50323456/102967424 [=============&gt;................] - ETA: 0s
+ 52518912/102967424 [==============&gt;...............] - ETA: 0s
  56967168/102967424 [===============&gt;..............] - ETA: 0s
- 58712064/102967424 [================&gt;.............] - ETA: 0s
- 65355776/102967424 [==================&gt;...........] - ETA: 0s
- 67100672/102967424 [==================&gt;...........] - ETA: 0s
+ 66043904/102967424 [==================&gt;...........] - ETA: 0s
  69296128/102967424 [===================&gt;..........] - ETA: 0s
  75489280/102967424 [====================&gt;.........] - ETA: 0s
- 82124800/102967424 [======================&gt;.......] - ETA: 0s
- 83877888/102967424 [=======================&gt;......] - ETA: 0s
- 90521600/102967424 [=========================&gt;....] - ETA: 0s
- 92266496/102967424 [=========================&gt;....] - ETA: 0s
-100646912/102967424 [============================&gt;.] - ETA: 0s
-102850560/102967424 [============================&gt;.] - ETA: 0s
-102967424/102967424 [==============================] - 2s 0us/step
+ 88023040/102967424 [========================&gt;.....] - ETA: 0s
+ 97812480/102967424 [===========================&gt;..] - ETA: 0s
+102817792/102967424 [============================&gt;.] - ETA: 0s
+102967424/102967424 [==============================] - 1s 0us/step
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/deploy_models/deploy_model_on_android.html b/docs/how_to/deploy_models/deploy_model_on_android.html
index a04e7f6620..5768e33d8e 100644
--- a/docs/how_to/deploy_models/deploy_model_on_android.html
+++ b/docs/how_to/deploy_models/deploy_model_on_android.html
@@ -672,7 +672,7 @@ to the remote android device.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  15.8892      15.6744      16.7882      15.3324       0.4693
+  15.8511      15.7483      16.8541      15.3328       0.4226
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/deploy_models/deploy_object_detection_pytorch.html b/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
index 543f6fa64f..d67a1f4b38 100644
--- a/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
+++ b/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
@@ -464,32 +464,37 @@ be unstable.</p>
 Downloading: &quot;https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth&quot; to /workspace/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
 
   0%|          | 0.00/170M [00:00&lt;?, ?B/s]
-  4%|3         | 6.30M/170M [00:00&lt;00:02, 59.2MB/s]
-  7%|7         | 12.0M/170M [00:00&lt;00:03, 45.0MB/s]
- 10%|9         | 16.4M/170M [00:00&lt;00:05, 29.4MB/s]
- 14%|#4        | 24.0M/170M [00:00&lt;00:04, 35.5MB/s]
- 18%|#7        | 30.3M/170M [00:00&lt;00:03, 40.7MB/s]
- 24%|##3       | 40.0M/170M [00:01&lt;00:03, 43.8MB/s]
- 28%|##8       | 48.0M/170M [00:01&lt;00:02, 45.7MB/s]
- 34%|###3      | 57.4M/170M [00:01&lt;00:02, 56.9MB/s]
- 38%|###7      | 64.0M/170M [00:01&lt;00:01, 57.0MB/s]
- 42%|####2     | 72.0M/170M [00:01&lt;00:01, 62.7MB/s]
- 46%|####6     | 78.4M/170M [00:01&lt;00:02, 44.2MB/s]
- 49%|####9     | 83.6M/170M [00:01&lt;00:01, 45.3MB/s]
- 52%|#####2    | 88.6M/170M [00:02&lt;00:02, 35.4MB/s]
- 56%|#####5    | 94.3M/170M [00:02&lt;00:02, 37.5MB/s]
- 60%|######    | 102M/170M [00:02&lt;00:01, 44.3MB/s]
- 66%|######5   | 112M/170M [00:02&lt;00:01, 46.5MB/s]
- 70%|######9   | 118M/170M [00:02&lt;00:01, 43.6MB/s]
- 72%|#######2  | 123M/170M [00:02&lt;00:01, 42.4MB/s]
- 75%|#######4  | 127M/170M [00:03&lt;00:01, 41.1MB/s]
- 77%|#######7  | 131M/170M [00:03&lt;00:01, 39.0MB/s]
- 80%|########  | 136M/170M [00:03&lt;00:00, 36.2MB/s]
- 85%|########4 | 144M/170M [00:03&lt;00:00, 42.1MB/s]
- 89%|########9 | 152M/170M [00:03&lt;00:00, 47.1MB/s]
- 94%|#########4| 160M/170M [00:03&lt;00:00, 50.8MB/s]
- 99%|#########8| 168M/170M [00:03&lt;00:00, 51.8MB/s]
-100%|##########| 170M/170M [00:03&lt;00:00, 45.4MB/s]
+  4%|3         | 6.30M/170M [00:00&lt;00:03, 46.3MB/s]
+  6%|6         | 10.7M/170M [00:00&lt;00:05, 30.5MB/s]
+  9%|9         | 16.0M/170M [00:00&lt;00:04, 35.4MB/s]
+ 14%|#4        | 24.0M/170M [00:00&lt;00:03, 38.4MB/s]
+ 18%|#7        | 30.3M/170M [00:00&lt;00:04, 35.9MB/s]
+ 20%|#9        | 33.8M/170M [00:01&lt;00:04, 30.8MB/s]
+ 24%|##3       | 40.0M/170M [00:01&lt;00:03, 37.3MB/s]
+ 27%|##7       | 46.3M/170M [00:01&lt;00:03, 40.8MB/s]
+ 30%|##9       | 50.4M/170M [00:01&lt;00:03, 35.2MB/s]
+ 33%|###2      | 56.0M/170M [00:01&lt;00:03, 38.7MB/s]
+ 37%|###6      | 62.3M/170M [00:01&lt;00:02, 44.2MB/s]
+ 39%|###9      | 66.8M/170M [00:01&lt;00:02, 39.2MB/s]
+ 42%|####2     | 72.0M/170M [00:02&lt;00:02, 38.5MB/s]
+ 46%|####6     | 78.3M/170M [00:02&lt;00:02, 41.7MB/s]
+ 49%|####8     | 82.4M/170M [00:02&lt;00:02, 40.8MB/s]
+ 52%|#####1    | 88.0M/170M [00:02&lt;00:02, 36.9MB/s]
+ 56%|#####5    | 94.3M/170M [00:02&lt;00:01, 41.6MB/s]
+ 58%|#####7    | 98.4M/170M [00:02&lt;00:01, 41.4MB/s]
+ 60%|######    | 103M/170M [00:02&lt;00:01, 40.1MB/s]
+ 63%|######2   | 106M/170M [00:03&lt;00:02, 30.7MB/s]
+ 67%|######7   | 114M/170M [00:03&lt;00:01, 41.9MB/s]
+ 71%|#######   | 120M/170M [00:03&lt;00:01, 39.3MB/s]
+ 75%|#######4  | 127M/170M [00:03&lt;00:00, 46.6MB/s]
+ 78%|#######7  | 132M/170M [00:03&lt;00:01, 38.0MB/s]
+ 80%|########  | 136M/170M [00:03&lt;00:01, 31.1MB/s]
+ 85%|########4 | 144M/170M [00:04&lt;00:00, 34.6MB/s]
+ 89%|########9 | 152M/170M [00:04&lt;00:00, 37.1MB/s]
+ 93%|#########3| 158M/170M [00:04&lt;00:00, 41.7MB/s]
+ 96%|#########5| 163M/170M [00:04&lt;00:00, 40.2MB/s]
+ 98%|#########8| 167M/170M [00:04&lt;00:00, 41.9MB/s]
+100%|##########| 170M/170M [00:04&lt;00:00, 38.6MB/s]
 /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/nn/functional.py:3912: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
   (torch.floor((input.size(i + 2).float() * torch.tensor(scale_factors[i], dtype=torch.float32)).float()))
 /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torchvision/ops/boxes.py:157: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
@@ -583,7 +588,7 @@ torchvision rcnn models.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Get 9 valid boxes
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  38.808 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  39.616 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-object-detection-pytorch-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7795da4b258c8feff986668b95ef57ad/deploy_object_detection_pytorch.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_object_detection_pytorch.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_prequantized.html b/docs/how_to/deploy_models/deploy_prequantized.html
index fbc2ea3e32..e36dd301c7 100644
--- a/docs/how_to/deploy_models/deploy_prequantized.html
+++ b/docs/how_to/deploy_models/deploy_prequantized.html
@@ -505,8 +505,9 @@ training. Other models require a full post training calibration.</p>
 Downloading: &quot;https://download.pytorch.org/models/mobilenet_v2-b0353104.pth&quot; to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth
 
   0%|          | 0.00/13.6M [00:00&lt;?, ?B/s]
- 59%|#####8    | 7.99M/13.6M [00:00&lt;00:00, 55.1MB/s]
-100%|##########| 13.6M/13.6M [00:00&lt;00:00, 65.7MB/s]
+ 47%|####6     | 6.30M/13.6M [00:00&lt;00:00, 63.7MB/s]
+ 91%|#########1| 12.4M/13.6M [00:00&lt;00:00, 26.2MB/s]
+100%|##########| 13.6M/13.6M [00:00&lt;00:00, 31.2MB/s]
 </pre></div>
 </div>
 </div>
@@ -597,7 +598,7 @@ output values are identical out of 1000 outputs from mobilenet v2.</p>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  88.5574      88.4660      92.2653      88.2301       0.4377
+  89.2362      89.1545      92.8293      88.7173       0.4802
 </pre></div>
 </div>
 <div class="admonition note">
@@ -636,7 +637,7 @@ This includes support for the VNNI 8 bit dot product instruction (CascadeLake or
 <div class="section" id="deploy-a-quantized-tflite-model">
 <h2>Deploy a quantized TFLite Model<a class="headerlink" href="#deploy-a-quantized-tflite-model" title="Permalink to this headline">¶</a></h2>
 <p>TODO</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  24.738 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  27.215 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-prequantized-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/fb8217c13f4351224c6cf3aacf1a87fc/deploy_prequantized.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_prequantized.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_prequantized_tflite.html b/docs/how_to/deploy_models/deploy_prequantized_tflite.html
index 9c59fcb4b3..dcfb5fe8b6 100644
--- a/docs/how_to/deploy_models/deploy_prequantized_tflite.html
+++ b/docs/how_to/deploy_models/deploy_prequantized_tflite.html
@@ -590,7 +590,7 @@ TFLite Top-5 labels: [387 102 386 341 349]
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  109.5889     110.0667     111.0692     106.5090      1.1982
+  110.5219     110.0997     145.9361     109.4201      3.5949
 </pre></div>
 </div>
 <div class="admonition note">
diff --git a/docs/how_to/deploy_models/deploy_quantized.html b/docs/how_to/deploy_models/deploy_quantized.html
index 31b6cc3de7..ae069838a6 100644
--- a/docs/how_to/deploy_models/deploy_quantized.html
+++ b/docs/how_to/deploy_models/deploy_quantized.html
@@ -531,7 +531,7 @@ for calibration. But the accuracy might be impacted.</p>
   warnings.warn(
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  11.937 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  4.781 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-quantized-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7810ecf51bfc05f7d5e8a400ac3e815d/deploy_quantized.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_quantized.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/sg_execution_times.html b/docs/how_to/deploy_models/sg_execution_times.html
index 09ea777dd7..88fcf9af17 100644
--- a/docs/how_to/deploy_models/sg_execution_times.html
+++ b/docs/how_to/deploy_models/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-deploy-models-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>12:03.581</strong> total execution time for <strong>how_to_deploy_models</strong> files:</p>
+<p><strong>12:00.164</strong> total execution time for <strong>how_to_deploy_models</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 86%" />
@@ -359,39 +359,39 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_object_detection_pytorch.html#sphx-glr-how-to-deploy-models-deploy-object-detection-pytorch-py"><span class="std std-ref">Compile PyTorch Object Detection Models</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_object_detection_pytorch.py</span></code>)</p></td>
-<td><p>03:38.808</p></td>
+<td><p>03:39.616</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_quantized.html#sphx-glr-how-to-deploy-models-deploy-quantized-py"><span class="std std-ref">Deploy a Quantized Model on Cuda</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_quantized.py</span></code>)</p></td>
-<td><p>02:11.937</p></td>
+<td><p>02:04.781</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_prequantized.html#sphx-glr-how-to-deploy-models-deploy-prequantized-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized.py</span></code>)</p></td>
-<td><p>01:24.738</p></td>
+<td><p>01:27.215</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_adreno.html#sphx-glr-how-to-deploy-models-deploy-model-on-adreno-py"><span class="std std-ref">Deploy the Pretrained Model on Adreno™</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_adreno.py</span></code>)</p></td>
-<td><p>01:20.344</p></td>
+<td><p>01:20.171</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_prequantized_tflite.html#sphx-glr-how-to-deploy-models-deploy-prequantized-tflite-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM - Part 3 (TFLite)</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized_tflite.py</span></code>)</p></td>
-<td><p>00:51.753</p></td>
+<td><p>00:52.262</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_android.html#sphx-glr-how-to-deploy-models-deploy-model-on-android-py"><span class="std std-ref">Deploy the Pretrained Model on Android</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_android.py</span></code>)</p></td>
-<td><p>00:49.056</p></td>
+<td><p>00:50.206</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_model_on_adreno_tvmc.html#sphx-glr-how-to-deploy-models-deploy-model-on-adreno-tvmc-py"><span class="std std-ref">Deploy the Pretrained Model on Adreno™ with tvmc Interface</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_adreno_tvmc.py</span></code>)</p></td>
-<td><p>00:47.284</p></td>
+<td><p>00:44.473</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_nano.html#sphx-glr-how-to-deploy-models-deploy-model-on-nano-py"><span class="std std-ref">Deploy the Pretrained Model on Jetson Nano</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_nano.py</span></code>)</p></td>
-<td><p>00:29.860</p></td>
+<td><p>00:31.003</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_model_on_rasp.html#sphx-glr-how-to-deploy-models-deploy-model-on-rasp-py"><span class="std std-ref">Deploy the Pretrained Model on Raspberry Pi</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_rasp.py</span></code>)</p></td>
-<td><p>00:29.793</p></td>
+<td><p>00:30.431</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_sparse.html#sphx-glr-how-to-deploy-models-deploy-sparse-py"><span class="std std-ref">Deploy a Hugging Face Pruned Model on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_sparse.py</span></code>)</p></td>
diff --git a/docs/how_to/extend_tvm/bring_your_own_datatypes.html b/docs/how_to/extend_tvm/bring_your_own_datatypes.html
index 74319d88aa..ec2a954f40 100644
--- a/docs/how_to/extend_tvm/bring_your_own_datatypes.html
+++ b/docs/how_to/extend_tvm/bring_your_own_datatypes.html
@@ -629,7 +629,7 @@ In this alpha state of the Bring Your Own Datatypes framework, we have not imple
 <span class="n">module</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">params</span></a> <span class="o">=</span> <span class="n">get_mobilenet</span><span class="p">()</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zipc8b6e29a-184d-47ab-948b-86074a1767b1 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zipfa0ccf25-9a17-4416-bf1e-95c592342c62 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
 </pre></div>
 </div>
 <p>It’s easy to execute MobileNet with native TVM:</p>
diff --git a/docs/how_to/extend_tvm/sg_execution_times.html b/docs/how_to/extend_tvm/sg_execution_times.html
index e59e3f65b5..ae186754b1 100644
--- a/docs/how_to/extend_tvm/sg_execution_times.html
+++ b/docs/how_to/extend_tvm/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-extend-tvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:56.482</strong> total execution time for <strong>how_to_extend_tvm</strong> files:</p>
+<p><strong>00:57.589</strong> total execution time for <strong>how_to_extend_tvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,15 +359,15 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="bring_your_own_datatypes.html#sphx-glr-how-to-extend-tvm-bring-your-own-datatypes-py"><span class="std std-ref">Bring Your Own Datatypes to TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">bring_your_own_datatypes.py</span></code>)</p></td>
-<td><p>00:52.575</p></td>
+<td><p>00:53.696</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="use_pass_instrument.html#sphx-glr-how-to-extend-tvm-use-pass-instrument-py"><span class="std std-ref">How to Use TVM Pass Instrument</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_instrument.py</span></code>)</p></td>
-<td><p>00:02.741</p></td>
+<td><p>00:02.737</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="use_pass_infra.html#sphx-glr-how-to-extend-tvm-use-pass-infra-py"><span class="std std-ref">How to Use TVM Pass Infra</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_infra.py</span></code>)</p></td>
-<td><p>00:01.159</p></td>
+<td><p>00:01.150</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="low_level_custom_pass.html#sphx-glr-how-to-extend-tvm-low-level-custom-pass-py"><span class="std std-ref">Writing a Customized Pass</span></a> (<code class="docutils literal notranslate"><span class="pre">low_level_custom_pass.py</span></code>)</p></td>
diff --git a/docs/how_to/extend_tvm/use_pass_instrument.html b/docs/how_to/extend_tvm/use_pass_instrument.html
index d7e5d1fecf..9b78543da5 100644
--- a/docs/how_to/extend_tvm/use_pass_instrument.html
+++ b/docs/how_to/extend_tvm/use_pass_instrument.html
@@ -536,10 +536,10 @@ profile the execution time of each passes.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Printing results of timing profile...
-InferType: 24349us [24349us] (48.79%; 48.79%)
-FoldScaleAxis: 25557us [8us] (51.21%; 51.21%)
-        FoldConstant: 25549us [1783us] (51.19%; 99.97%)
-                InferType: 23766us [23766us] (47.62%; 93.02%)
+InferType: 24207us [24207us] (48.38%; 48.38%)
+FoldScaleAxis: 25825us [9us] (51.62%; 51.62%)
+        FoldConstant: 25816us [1846us] (51.60%; 99.97%)
+                InferType: 23970us [23970us] (47.91%; 92.85%)
 </pre></div>
 </div>
 </div>
@@ -561,10 +561,10 @@ Refer to following sections and <a class="reference internal" href="../../refere
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Printing results of timing profile...
-InferType: 23729us [23729us] (48.21%; 48.21%)
-FoldScaleAxis: 25490us [7us] (51.79%; 51.79%)
-        FoldConstant: 25484us [1808us] (51.78%; 99.97%)
-                InferType: 23676us [23676us] (48.10%; 92.91%)
+InferType: 24087us [24087us] (48.17%; 48.17%)
+FoldScaleAxis: 25913us [9us] (51.83%; 51.83%)
+        FoldConstant: 25904us [1905us] (51.81%; 99.96%)
+                InferType: 23999us [23999us] (48.00%; 92.65%)
 </pre></div>
 </div>
 <p>Register empty list to clear existing instruments.</p>
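The timing profiles above (InferType / FoldScaleAxis / FoldConstant) come from the pass-timing instrument. A minimal sketch of that mechanism on a stand-in workload (a single conv2d rather than the model used in the tutorial):

.. code-block:: python

    import tvm
    from tvm import relay
    from tvm.ir.instrument import PassTimingInstrument

    # A tiny Relay module to run passes on.
    x = relay.var("x", shape=(1, 3, 224, 224), dtype="float32")
    w = relay.var("w", shape=(16, 3, 3, 3), dtype="float32")
    mod = tvm.IRModule.from_expr(relay.Function([x, w], relay.nn.conv2d(x, w, padding=(1, 1))))

    timing = PassTimingInstrument()
    with tvm.transform.PassContext(opt_level=3, instruments=[timing]):
        mod = relay.transform.InferType()(mod)
        mod = relay.transform.FoldScaleAxis()(mod)
        # Render inside the context, before per-pass timings are cleared on exit.
        profiles = timing.render()

    print("Printing results of timing profile...")
    print(profiles)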
diff --git a/docs/how_to/optimize_operators/opt_conv_cuda.html b/docs/how_to/optimize_operators/opt_conv_cuda.html
index 96a8e1ba7d..3812ebb9dc 100644
--- a/docs/how_to/optimize_operators/opt_conv_cuda.html
+++ b/docs/how_to/optimize_operators/opt_conv_cuda.html
@@ -585,7 +585,7 @@ latency of convolution.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Convolution: </span><span class="si">%f</span><span class="s2"> ms&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">b</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span> <span class="o">*</span> <span cl [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Convolution: 50.913600 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Convolution: 53.561344 ms
 </pre></div>
 </div>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-optimize-operators-opt-conv-cuda-py">
diff --git a/docs/how_to/optimize_operators/opt_conv_tensorcore.html b/docs/how_to/optimize_operators/opt_conv_tensorcore.html
index 957a4b2f93..edb0756572 100644
--- a/docs/how_to/optimize_operators/opt_conv_tensorcore.html
+++ b/docs/how_to/optimize_operators/opt_conv_tensorcore.html
@@ -867,7 +867,7 @@ be able to run on our build server</p>
     <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;conv2d with tensor core: </span><span class="si">%f</span><span class="s2"> ms&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span> <span class="o">* [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>conv2d with tensor core: 12.271203 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>conv2d with tensor core: 12.260438 ms
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/optimize_operators/opt_gemm.html b/docs/how_to/optimize_operators/opt_gemm.html
index b267fbc4d2..a1f6f7053b 100644
--- a/docs/how_to/optimize_operators/opt_gemm.html
+++ b/docs/how_to/optimize_operators/opt_gemm.html
@@ -482,8 +482,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Baseline: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.017501
-Baseline: 3.293998
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.018971
+Baseline: 3.400320
 </pre></div>
 </div>
 <p>In TVM, we can always inspect lower level IR to debug or optimize our schedule.
@@ -542,7 +542,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt1: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt1: 0.290424
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt1: 0.295508
 </pre></div>
 </div>
 <p>Here is the generated IR after blocking.</p>
@@ -599,7 +599,7 @@ vastly.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt2: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt2: 0.265512
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt2: 0.282423
 </pre></div>
 </div>
 <p>Here is the generated IR after vectorization.</p>
@@ -654,7 +654,7 @@ the access pattern for A matrix is more cache friendly.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt3: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt3: 0.113756
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt3: 0.119760
 </pre></div>
 </div>
 <p>Here is the generated IR after loop permutation.</p>
@@ -731,7 +731,7 @@ flattening.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt4: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt4: 0.106791
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt4: 0.107420
 </pre></div>
 </div>
 <p>Here is the generated IR after array packing.</p>
@@ -809,7 +809,7 @@ write to C when all the block results are ready.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt5: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt5: 0.111192
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt5: 0.111840
 </pre></div>
 </div>
 <p>Here is the generated IR after blocking.</p>
@@ -889,7 +889,7 @@ class Module:
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt6: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">opt6_time</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt6: 0.131794
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt6: 0.133748
 </pre></div>
 </div>
 <p>Here is the generated IR after parallelization.</p>
diff --git a/docs/how_to/optimize_operators/sg_execution_times.html b/docs/how_to/optimize_operators/sg_execution_times.html
index 7dd9ebc5f5..55f7782a97 100644
--- a/docs/how_to/optimize_operators/sg_execution_times.html
+++ b/docs/how_to/optimize_operators/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-optimize-operators-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:33.366</strong> total execution time for <strong>how_to_optimize_operators</strong> files:</p>
+<p><strong>00:34.510</strong> total execution time for <strong>how_to_optimize_operators</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -359,15 +359,15 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="opt_gemm.html#sphx-glr-how-to-optimize-operators-opt-gemm-py"><span class="std std-ref">How to optimize GEMM on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_gemm.py</span></code>)</p></td>
-<td><p>00:30.072</p></td>
+<td><p>00:31.044</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="opt_conv_tensorcore.html#sphx-glr-how-to-optimize-operators-opt-conv-tensorcore-py"><span class="std std-ref">How to optimize convolution using TensorCores</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_tensorcore.py</span></code>)</p></td>
-<td><p>00:02.020</p></td>
+<td><p>00:02.068</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="opt_conv_cuda.html#sphx-glr-how-to-optimize-operators-opt-conv-cuda-py"><span class="std std-ref">How to optimize convolution on GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_cuda.py</span></code>)</p></td>
-<td><p>00:01.273</p></td>
+<td><p>00:01.398</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/tune_with_autoscheduler/sg_execution_times.html b/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
index 0525efcbc7..62e5ffb871 100644
--- a/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
+++ b/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-tune-with-autoscheduler-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>03:35.911</strong> total execution time for <strong>how_to_tune_with_autoscheduler</strong> files:</p>
+<p><strong>03:35.555</strong> total execution time for <strong>how_to_tune_with_autoscheduler</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 85%" />
@@ -359,27 +359,27 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_network_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-x86-py"><span class="std std-ref">Auto-scheduling a Neural Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_x86.py</span></code>)</p></td>
-<td><p>01:28.334</p></td>
+<td><p>01:30.986</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_network_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-cuda-py"><span class="std std-ref">Auto-scheduling a Neural Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_cuda.py</span></code>)</p></td>
-<td><p>01:17.141</p></td>
+<td><p>01:15.639</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_conv2d_layer_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-conv2d-layer-cuda-py"><span class="std std-ref">Auto-scheduling a Convolution Layer for GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_layer_cuda.py</span></code>)</p></td>
-<td><p>00:17.648</p></td>
+<td><p>00:17.164</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_network_arm.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-arm-py"><span class="std std-ref">Auto-scheduling a Neural Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_arm.py</span></code>)</p></td>
-<td><p>00:16.607</p></td>
+<td><p>00:15.996</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_network_mali.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-mali-py"><span class="std std-ref">Auto-scheduling a Neural Network for mali GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_mali.py</span></code>)</p></td>
-<td><p>00:16.077</p></td>
+<td><p>00:15.665</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_sparse_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-sparse-x86-py"><span class="std std-ref">Auto-scheduling Sparse Matrix Multiplication on CPU with Custom Sketch Rule</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_sparse_x86.py</span></code>)</p></td>
-<td><p>00:00.104</p></td>
+<td><p>00:00.105</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html b/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
index 507b820581..38c4ca7781 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
@@ -1023,7 +1023,7 @@ class Module:
 <span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 0.355 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 0.341 ms
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html b/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
index 4ca8ceb813..22d1f54848 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
@@ -926,7 +926,7 @@ so we can read the log file and load the best schedules.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-   8.1275       8.1284       8.1326       8.1215       0.0046
+   8.0884       8.0900       8.0932       8.0820       0.0047
 </pre></div>
 </div>
 </div>
@@ -948,7 +948,7 @@ to learn how to use the RPC Tracker and RPC Server.
 To use the RPC Tracker in auto-scheduler, replace the runner in <code class="code docutils literal notranslate"><span class="pre">TuningOptions</span></code>
 with <a class="reference internal" href="../../reference/api/python/auto_scheduler.html#tvm.auto_scheduler.RPCRunner" title="tvm.auto_scheduler.RPCRunner"><code class="xref any py py-class docutils literal notranslate"><span class="pre">auto_scheduler.RPCRunner</span></code></a>.</p></li>
 </ol>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  17.141 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  15.639 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-network-cuda-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/eafe360d52540634c9eea0fa89e804bd/tune_network_cuda.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_network_cuda.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_network_x86.html b/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
index ba06d1cef2..8b235aef2e 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
@@ -945,7 +945,7 @@ so we can read the log file and load the best schedules.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  766.5506     766.8004     767.7174     765.1341      1.0693
+  769.4255     766.6812     779.7061     761.8892      7.5281
 </pre></div>
 </div>
 </div>
@@ -967,7 +967,7 @@ to learn how to use the RPC Tracker and RPC Server.
 To use the RPC Tracker in auto-scheduler, replace the runner in <code class="code docutils literal notranslate"><span class="pre">TuningOptions</span></code>
 with <a class="reference internal" href="../../reference/api/python/auto_scheduler.html#tvm.auto_scheduler.RPCRunner" title="tvm.auto_scheduler.RPCRunner"><code class="xref any py py-class docutils literal notranslate"><span class="pre">auto_scheduler.RPCRunner</span></code></a>.</p></li>
 </ol>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  28.334 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  30.986 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-network-x86-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/e416b94ca1090b0897c0f6e0df95b911/tune_network_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_network_x86.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autotvm/sg_execution_times.html b/docs/how_to/tune_with_autotvm/sg_execution_times.html
index cf12077cab..b8ab5f3154 100644
--- a/docs/how_to/tune_with_autotvm/sg_execution_times.html
+++ b/docs/how_to/tune_with_autotvm/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-tune-with-autotvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:23.621</strong> total execution time for <strong>how_to_tune_with_autotvm</strong> files:</p>
+<p><strong>00:23.496</strong> total execution time for <strong>how_to_tune_with_autotvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,11 +359,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_conv2d_cuda.html#sphx-glr-how-to-tune-with-autotvm-tune-conv2d-cuda-py"><span class="std std-ref">Tuning High Performance Convolution on NVIDIA GPUs</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_cuda.py</span></code>)</p></td>
-<td><p>00:23.582</p></td>
+<td><p>00:23.455</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_relay_x86.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-x86-py"><span class="std std-ref">Auto-tuning a Convolutional Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_x86.py</span></code>)</p></td>
-<td><p>00:00.023</p></td>
+<td><p>00:00.025</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_relay_cuda.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-cuda-py"><span class="std std-ref">Auto-tuning a Convolutional Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_cuda.py</span></code>)</p></td>
diff --git a/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html b/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
index a06211ac64..5eaf00c360 100644
--- a/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
+++ b/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
@@ -620,7 +620,7 @@ and measure running time.</p>
 
 Best config:
 ,None
-Time cost of this operator: 0.037181
+Time cost of this operator: 0.037022
 </pre></div>
 </div>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autotvm-tune-conv2d-cuda-py">
diff --git a/docs/how_to/work_with_microtvm/micro_autotune.html b/docs/how_to/work_with_microtvm/micro_autotune.html
index 5f337f72a8..fb2976c22a 100644
--- a/docs/how_to/work_with_microtvm/micro_autotune.html
+++ b/docs/how_to/work_with_microtvm/micro_autotune.html
@@ -654,10 +654,10 @@ the tuned operator.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>########## Build without Autotuning ##########
 Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)
 ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------
-tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  305.4     98.737   (1, 2, 10, 10, 3)  2       1        [305.4]
-tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.936     0.949    (1, 6, 10, 10)     1       1        [2.936]
-tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.969     0.313    (1, 1, 10, 10, 3)  1       1        [0.969]
-Total_time                                    -                                             309.306   -        -                  -       -        -
+tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  303.7     98.751   (1, 2, 10, 10, 3)  2       1        [303.7]
+tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       2.888     0.939    (1, 6, 10, 10)     1       1        [2.888]
+tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.953     0.31     (1, 1, 10, 10, 3)  1       1        [0.953]
+Total_time                                    -                                             307.541   -        -                  -       -        -
 </pre></div>
 </div>
 </div>
@@ -709,13 +709,13 @@ Total_time                                    -
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>########## Build with Autotuning ##########
 Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)
 ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------
-tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  135.9     98.031   (1, 6, 10, 10, 1)  2       1        [135.9]
-tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.772     1.278    (1, 6, 10, 10)     1       1        [1.772]
-tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.958     0.691    (1, 1, 10, 10, 3)  1       1        [0.958]
-Total_time                                    -                                             138.629   -        -                  -       -        -
+tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  102.3     97.514   (1, 6, 10, 10, 1)  2       1        [102.3]
+tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.77      1.687    (1, 6, 10, 10)     1       1        [1.77]
+tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.839     0.799    (1, 3, 10, 10, 1)  1       1        [0.839]
+Total_time                                    -                                             104.909   -        -                  -       -        -
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  24.708 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  25.761 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-autotune-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/9ccca8fd489a1486ac71b55a55c320c5/micro_autotune.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_autotune.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/micro_pytorch.html b/docs/how_to/work_with_microtvm/micro_pytorch.html
index d2fcafc65e..c3e7776d11 100644
--- a/docs/how_to/work_with_microtvm/micro_pytorch.html
+++ b/docs/how_to/work_with_microtvm/micro_pytorch.html
@@ -465,7 +465,8 @@ download a cat image and preprocess it to use as the model input.</p>
 Downloading: &quot;https://download.pytorch.org/models/quantized/mobilenet_v2_qnnpack_37f702c5.pth&quot; to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2_qnnpack_37f702c5.pth
 
   0%|          | 0.00/3.42M [00:00&lt;?, ?B/s]
-100%|##########| 3.42M/3.42M [00:00&lt;00:00, 54.6MB/s]
+ 61%|######    | 2.09M/3.42M [00:00&lt;00:00, 10.2MB/s]
+100%|##########| 3.42M/3.42M [00:00&lt;00:00, 16.3MB/s]
 /venv/apache-tvm-py3.8/lib/python3.8/site-packages/torch/_utils.py:314: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
   device=storage.device,
 /workspace/python/tvm/relay/frontend/pytorch_utils.py:47: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
@@ -593,7 +594,7 @@ via the host <cite>main.cc`</cite> or if a Zephyr emulated board is selected as
 Torch top-1 id: 282, class name: tiger cat
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  29.227 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  27.750 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-pytorch-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/12b9ecc04c41abaa12022061771821d1/micro_pytorch.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_pytorch.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/micro_train.html b/docs/how_to/work_with_microtvm/micro_train.html
index 2f5dbfb02a..db89d6cd8b 100644
--- a/docs/how_to/work_with_microtvm/micro_train.html
+++ b/docs/how_to/work_with_microtvm/micro_train.html
@@ -533,7 +533,7 @@ take about <strong>2 minutes</strong> to download the Stanford Cars, while COCO
 <a href="https://docs.python.org/3/library/shutil.html#shutil.move" title="shutil.move" class="sphx-glr-backref-module-shutil sphx-glr-backref-type-py-function"><span class="n">shutil</span><span class="o">.</span><span class="n">move</span></a><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;</span><span class="si">{</span><a href="https://docs.python.org/3/library/stdtypes.html#str" title="builtins.str" class="sphx-glr-backref-module-builtins sphx-glr-backref-typ [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&#39;/tmp/tmprmdg7ez0/images/random&#39;
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&#39;/tmp/tmp6sy5e1ny/images/random&#39;
 </pre></div>
 </div>
 </div>
@@ -593,8 +593,8 @@ objects to other stuff? We can display some examples from our datasets using <co
     <span class="n">plt</span><span class="o">.</span><span class="n">axis</span><span class="p">(</span><span class="s2">&quot;off&quot;</span><span class="p">)</span>
 </pre></div>
 </div>
-<img src="../../_images/sphx_glr_micro_train_001.png" srcset="../../_images/sphx_glr_micro_train_001.png" alt="[1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/tmp/tmprmdg7ez0/images/target contains 8144 images
-/tmp/tmprmdg7ez0/images/random contains 5000 images
+<img src="../../_images/sphx_glr_micro_train_001.png" srcset="../../_images/sphx_glr_micro_train_001.png" alt="[1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/tmp/tmp6sy5e1ny/images/target contains 8144 images
+/tmp/tmp6sy5e1ny/images/random contains 5000 images
 </pre></div>
 </div>
 </div>
@@ -706,13 +706,13 @@ the time on our validation set).</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Epoch 1/3
-328/328 - 40s - loss: 0.2249 - accuracy: 0.9241 - val_loss: 0.1093 - val_accuracy: 0.9615 - 40s/epoch - 122ms/step
+328/328 - 41s - loss: 0.2471 - accuracy: 0.9174 - val_loss: 0.1166 - val_accuracy: 0.9585 - 41s/epoch - 126ms/step
 Epoch 2/3
-328/328 - 36s - loss: 0.1029 - accuracy: 0.9606 - val_loss: 0.1046 - val_accuracy: 0.9634 - 36s/epoch - 108ms/step
+328/328 - 36s - loss: 0.1057 - accuracy: 0.9601 - val_loss: 0.1530 - val_accuracy: 0.9452 - 36s/epoch - 108ms/step
 Epoch 3/3
-328/328 - 35s - loss: 0.0654 - accuracy: 0.9753 - val_loss: 0.1115 - val_accuracy: 0.9637 - 35s/epoch - 108ms/step
+328/328 - 35s - loss: 0.0701 - accuracy: 0.9758 - val_loss: 0.1449 - val_accuracy: 0.9524 - 35s/epoch - 108ms/step
 
-&lt;keras.callbacks.History object at 0x7ff52a956220&gt;
+&lt;keras.callbacks.History object at 0x7fb1c88a37f0&gt;
 </pre></div>
 </div>
 </div>
@@ -976,7 +976,7 @@ as intended.</p>
 <p>From here, we could modify the model to read live images from the camera - we have another
 Arduino tutorial for how to do that <a class="reference external" href="https://github.com/guberti/tvm-arduino-demos/tree/master/examples/person_detection">on GitHub</a>. Alternatively, we could also
 <a class="reference external" href="https://tvm.apache.org/docs/how_to/work_with_microtvm/micro_autotune.html">use TVM’s autotuning capabilities</a> to dramatically improve the model’s performance.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 4 minutes  40.205 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 4 minutes  43.349 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-train-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/b52cec46baf4f78d6bcd94cbe269c8a6/micro_train.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_train.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/sg_execution_times.html b/docs/how_to/work_with_microtvm/sg_execution_times.html
index f964469096..3928f5dfb4 100644
--- a/docs/how_to/work_with_microtvm/sg_execution_times.html
+++ b/docs/how_to/work_with_microtvm/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-microtvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>08:03.311</strong> total execution time for <strong>how_to_work_with_microtvm</strong> files:</p>
+<p><strong>08:06.900</strong> total execution time for <strong>how_to_work_with_microtvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 82%" />
@@ -359,27 +359,27 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_train.html#sphx-glr-how-to-work-with-microtvm-micro-train-py"><span class="std std-ref">5. Training Vision Models for microTVM on Arduino</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_train.py</span></code>)</p></td>
-<td><p>04:40.205</p></td>
+<td><p>04:43.349</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="micro_pytorch.html#sphx-glr-how-to-work-with-microtvm-micro-pytorch-py"><span class="std std-ref">4. microTVM PyTorch Tutorial</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_pytorch.py</span></code>)</p></td>
-<td><p>01:29.227</p></td>
+<td><p>01:27.750</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_autotune.html#sphx-glr-how-to-work-with-microtvm-micro-autotune-py"><span class="std std-ref">6. Model Tuning with microTVM</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_autotune.py</span></code>)</p></td>
-<td><p>01:24.708</p></td>
+<td><p>01:25.761</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="micro_aot.html#sphx-glr-how-to-work-with-microtvm-micro-aot-py"><span class="std std-ref">3. microTVM Ahead-of-Time (AOT) Compilation</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_aot.py</span></code>)</p></td>
-<td><p>00:11.618</p></td>
+<td><p>00:11.944</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="micro_tflite.html#sphx-glr-how-to-work-with-microtvm-micro-tflite-py"><span class="std std-ref">2. microTVM TFLite Tutorial</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_tflite.py</span></code>)</p></td>
-<td><p>00:08.903</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="micro_custom_ide.html#sphx-glr-how-to-work-with-microtvm-micro-custom-ide-py"><span class="std std-ref">9. Bring microTVM to your own development environment</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_custom_ide.py</span></code>)</p></td>
+<td><p>00:09.070</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="micro_custom_ide.html#sphx-glr-how-to-work-with-microtvm-micro-custom-ide-py"><span class="std std-ref">9. Bring microTVM to your own development environment</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_custom_ide.py</span></code>)</p></td>
-<td><p>00:08.651</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="micro_tflite.html#sphx-glr-how-to-work-with-microtvm-micro-tflite-py"><span class="std std-ref">2. microTVM TFLite Tutorial</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_tflite.py</span></code>)</p></td>
+<td><p>00:09.027</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_ethosu.html#sphx-glr-how-to-work-with-microtvm-micro-ethosu-py"><span class="std std-ref">7. Running TVM on bare metal Arm(R) Cortex(R)-M55 CPU and Ethos(TM)-U55 NPU with CMSIS-NN</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_ethosu.py</span></code>)</p></td>
diff --git a/docs/how_to/work_with_relay/sg_execution_times.html b/docs/how_to/work_with_relay/sg_execution_times.html
index a4babc6a25..b0337538a3 100644
--- a/docs/how_to/work_with_relay/sg_execution_times.html
+++ b/docs/how_to/work_with_relay/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-relay-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:39.257</strong> total execution time for <strong>how_to_work_with_relay</strong> files:</p>
+<p><strong>00:40.357</strong> total execution time for <strong>how_to_work_with_relay</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,15 +359,15 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="using_pipeline_executor.html#sphx-glr-how-to-work-with-relay-using-pipeline-executor-py"><span class="std std-ref">Using Pipeline Executor in Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_pipeline_executor.py</span></code>)</p></td>
-<td><p>00:34.288</p></td>
+<td><p>00:35.188</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="using_external_lib.html#sphx-glr-how-to-work-with-relay-using-external-lib-py"><span class="std std-ref">Using External Libraries in Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_external_lib.py</span></code>)</p></td>
-<td><p>00:03.154</p></td>
+<td><p>00:03.243</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="build_gcn.html#sphx-glr-how-to-work-with-relay-build-gcn-py"><span class="std std-ref">Building a Graph Convolutional Network</span></a> (<code class="docutils literal notranslate"><span class="pre">build_gcn.py</span></code>)</p></td>
-<td><p>00:01.809</p></td>
+<td><p>00:01.919</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="using_relay_viz.html#sphx-glr-how-to-work-with-relay-using-relay-viz-py"><span class="std std-ref">Use Relay Visualizer to Visualize Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_relay_viz.py</span></code>)</p></td>
diff --git a/docs/how_to/work_with_schedules/intrin_math.html b/docs/how_to/work_with_schedules/intrin_math.html
index 84cd2c5af4..0584f25294 100644
--- a/docs/how_to/work_with_schedules/intrin_math.html
+++ b/docs/how_to/work_with_schedules/intrin_math.html
@@ -559,7 +559,7 @@ The following example customizes CUDA lowering rule for <code class="code docuti
 <a href="../../reference/api/python/ir.html#tvm.ir.register_intrin_lowering" title="tvm.ir.register_intrin_lowering" class="sphx-glr-backref-module-tvm-ir sphx-glr-backref-type-py-function"><span class="n">register_intrin_lowering</span></a><span class="p">(</span><span class="s2">&quot;tir.exp&quot;</span><span class="p">,</span> <span class="n">target</span><span class="o">=</span><span class="s2">&quot;cuda&quot;</span><span class="p">,</span> <span class="n">f</span><span class="o">= [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&lt;function my_cuda_math_rule at 0x7ff3b86458b0&gt;
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&lt;function my_cuda_math_rule at 0x7fb04c15d670&gt;
 </pre></div>
 </div>
 <p>Register the rule to TVM with override option to override existing rule.
diff --git a/docs/how_to/work_with_schedules/sg_execution_times.html b/docs/how_to/work_with_schedules/sg_execution_times.html
index cc2cbb28cb..1c9262f9a7 100644
--- a/docs/how_to/work_with_schedules/sg_execution_times.html
+++ b/docs/how_to/work_with_schedules/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-schedules-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:06.264</strong> total execution time for <strong>how_to_work_with_schedules</strong> files:</p>
+<p><strong>00:06.192</strong> total execution time for <strong>how_to_work_with_schedules</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -359,35 +359,35 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="intrin_math.html#sphx-glr-how-to-work-with-schedules-intrin-math-py"><span class="std std-ref">Intrinsics and Math Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">intrin_math.py</span></code>)</p></td>
-<td><p>00:03.326</p></td>
+<td><p>00:03.164</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tensorize.html#sphx-glr-how-to-work-with-schedules-tensorize-py"><span class="std std-ref">Use Tensorize to Leverage Hardware Intrinsics</span></a> (<code class="docutils literal notranslate"><span class="pre">tensorize.py</span></code>)</p></td>
-<td><p>00:01.195</p></td>
+<td><p>00:01.251</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="reduction.html#sphx-glr-how-to-work-with-schedules-reduction-py"><span class="std std-ref">Reduction</span></a> (<code class="docutils literal notranslate"><span class="pre">reduction.py</span></code>)</p></td>
-<td><p>00:00.754</p></td>
+<td><p>00:00.761</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="scan.html#sphx-glr-how-to-work-with-schedules-scan-py"><span class="std std-ref">Scan and Recurrent Kernel</span></a> (<code class="docutils literal notranslate"><span class="pre">scan.py</span></code>)</p></td>
-<td><p>00:00.737</p></td>
+<td><p>00:00.758</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="extern_op.html#sphx-glr-how-to-work-with-schedules-extern-op-py"><span class="std std-ref">External Tensor Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">extern_op.py</span></code>)</p></td>
-<td><p>00:00.113</p></td>
+<td><p>00:00.117</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tedd.html#sphx-glr-how-to-work-with-schedules-tedd-py"><span class="std std-ref">Use Tensor Expression Debug Display (TEDD) for Visualization</span></a> (<code class="docutils literal notranslate"><span class="pre">tedd.py</span></code>)</p></td>
-<td><p>00:00.059</p></td>
+<td><p>00:00.060</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="schedule_primitives.html#sphx-glr-how-to-work-with-schedules-schedule-primitives-py"><span class="std std-ref">Schedule Primitives in TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">schedule_primitives.py</span></code>)</p></td>
-<td><p>00:00.053</p></td>
+<td><p>00:00.054</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tuple_inputs.html#sphx-glr-how-to-work-with-schedules-tuple-inputs-py"><span class="std std-ref">Compute and Reduce with Tuple Inputs</span></a> (<code class="docutils literal notranslate"><span class="pre">tuple_inputs.py</span></code>)</p></td>
-<td><p>00:00.027</p></td>
+<td><p>00:00.028</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/reference/api/python/auto_scheduler.html b/docs/reference/api/python/auto_scheduler.html
index b72f52a4cf..5a841ec107 100644
--- a/docs/reference/api/python/auto_scheduler.html
+++ b/docs/reference/api/python/auto_scheduler.html
@@ -1627,7 +1627,7 @@ history states as starting point to perform Evolutionary Search).</p></li>
 
 <dl class="py class">
 <dt class="sig sig-object py" id="tvm.auto_scheduler.SketchPolicy">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">SketchPolicy</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">program_cost_model</span></span><span class="o"><span class="pre">=</span></span><span class="defau [...]
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">SketchPolicy</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">program_cost_model</span></span><span class="o"><span class="pre">=</span></span><span class="defau [...]
 <dd><p>The search policy that searches in a hierarchical search space defined by sketches.
 The policy randomly samples programs from the space defined by sketches and use evolutionary
 search to fine-tune them.</p>
@@ -1911,7 +1911,7 @@ Candidates:
 
 <dl class="py function">
 <dt class="sig sig-object py" id="tvm.auto_scheduler.auto_schedule">
-<span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">auto_schedule</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">search_policy</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em clas [...]
+<span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">auto_schedule</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">search_policy</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em clas [...]
 <dd><p>THIS API IS DEPRECATED.</p>
 <p>Run auto scheduling search for a task.</p>
 <dl class="field-list simple">
diff --git a/docs/reference/api/typedoc/classes/bytestreamreader.html b/docs/reference/api/typedoc/classes/bytestreamreader.html
index 099e71c3ca..0af967ef18 100644
--- a/docs/reference/api/typedoc/classes/bytestreamreader.html
+++ b/docs/reference/api/typedoc/classes/bytestreamreader.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -141,7 +141,7 @@
 					<div class="tsd-signature tsd-kind-icon">bytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Uint8Array</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L45">rpc_server.ts:45</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -151,7 +151,7 @@
 					<div class="tsd-signature tsd-kind-icon">offset<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L44">rpc_server.ts:44</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L44">rpc_server.ts:44</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -168,7 +168,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L65">rpc_server.ts:65</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L65">rpc_server.ts:65</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">Uint8Array</span></h4>
@@ -185,7 +185,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L51">rpc_server.ts:51</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L51">rpc_server.ts:51</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -202,7 +202,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L59">rpc_server.ts:59</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L59">rpc_server.ts:59</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
diff --git a/docs/reference/api/typedoc/classes/cachedcallstack.html b/docs/reference/api/typedoc/classes/cachedcallstack.html
index 20df64ee74..6b77133b84 100644
--- a/docs/reference/api/typedoc/classes/cachedcallstack.html
+++ b/docs/reference/api/typedoc/classes/cachedcallstack.html
@@ -144,7 +144,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L223">memory.ts:223</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L223">memory.ts:223</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -172,7 +172,7 @@
 					<div class="tsd-signature tsd-kind-icon">temp<wbr>Args<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><a href="../interfaces/disposable.html" class="tsd-signature-type">Disposable</a><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = []</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L208">memory.ts:208</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L208">memory.ts:208</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -194,7 +194,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L312">memory.ts:312</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L312">memory.ts:312</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -226,7 +226,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L284">memory.ts:284</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L284">memory.ts:284</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -262,7 +262,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L388">memory.ts:388</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L388">memory.ts:388</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -300,7 +300,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L376">memory.ts:376</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L376">memory.ts:376</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -340,7 +340,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L267">memory.ts:267</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L267">memory.ts:267</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -373,7 +373,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L243">memory.ts:243</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L243">memory.ts:243</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -390,7 +390,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L321">memory.ts:321</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L321">memory.ts:321</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -422,7 +422,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L252">memory.ts:252</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L252">memory.ts:252</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -444,7 +444,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L359">memory.ts:359</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L359">memory.ts:359</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -470,7 +470,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L342">memory.ts:342</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L342">memory.ts:342</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -496,7 +496,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L350">memory.ts:350</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L350">memory.ts:350</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -522,7 +522,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L326">memory.ts:326</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L326">memory.ts:326</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -548,7 +548,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L363">memory.ts:363</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L363">memory.ts:363</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -574,7 +574,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L346">memory.ts:346</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L346">memory.ts:346</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -600,7 +600,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L334">memory.ts:334</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L334">memory.ts:334</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
diff --git a/docs/reference/api/typedoc/classes/dldatatype.html b/docs/reference/api/typedoc/classes/dldatatype.html
index 132dcb0919..0e8ea3566a 100644
--- a/docs/reference/api/typedoc/classes/dldatatype.html
+++ b/docs/reference/api/typedoc/classes/dldatatype.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L359">runtime.ts:359</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L359">runtime.ts:359</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -147,7 +147,7 @@
 					<div class="tsd-signature tsd-kind-icon">bits<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L357">runtime.ts:357</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L357">runtime.ts:357</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -162,7 +162,7 @@
 					<div class="tsd-signature tsd-kind-icon">code<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L355">runtime.ts:355</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L355">runtime.ts:355</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -177,7 +177,7 @@
 					<div class="tsd-signature tsd-kind-icon">lanes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L359">runtime.ts:359</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L359">runtime.ts:359</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -199,7 +199,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L376">runtime.ts:376</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L376">runtime.ts:376</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -216,7 +216,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L367">runtime.ts:367</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L367">runtime.ts:367</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">string</span></h4>
diff --git a/docs/reference/api/typedoc/classes/dldevice.html b/docs/reference/api/typedoc/classes/dldevice.html
index a9aab23555..7aa6b6646a 100644
--- a/docs/reference/api/typedoc/classes/dldevice.html
+++ b/docs/reference/api/typedoc/classes/dldevice.html
@@ -118,7 +118,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L299">runtime.ts:299</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L299">runtime.ts:299</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -146,7 +146,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<wbr>Id<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L297">runtime.ts:297</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L297">runtime.ts:297</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -161,7 +161,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L295">runtime.ts:295</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L295">runtime.ts:295</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -183,7 +183,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L320">runtime.ts:320</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L320">runtime.ts:320</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -205,7 +205,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L327">runtime.ts:327</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L327">runtime.ts:327</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">string</span></h4>
diff --git a/docs/reference/api/typedoc/classes/environment.html b/docs/reference/api/typedoc/classes/environment.html
index 0ee4d243cb..c4833f9188 100644
--- a/docs/reference/api/typedoc/classes/environment.html
+++ b/docs/reference/api/typedoc/classes/environment.html
@@ -125,7 +125,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/environment.ts#L86">environment.ts:86</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/environment.ts#L86">environment.ts:86</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -169,7 +169,7 @@
 					<aside class="tsd-sources">
 						<p>Implementation of <a href="../interfaces/libraryprovider.html">LibraryProvider</a>.<a href="../interfaces/libraryprovider.html#imports">imports</a></p>
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/environment.ts#L70">environment.ts:70</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/environment.ts#L70">environment.ts:70</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -179,7 +179,7 @@
 					<div class="tsd-signature tsd-kind-icon">logger<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>msg<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/environment.ts#L69">environment.ts:69</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/environment.ts#L69">environment.ts:69</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -210,7 +210,7 @@
 					<div class="tsd-signature tsd-kind-icon">packedCFunc<wbr>Table<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">ctypes.FTVMWasmPackedCFunc</span><span class="tsd-signature-symbol"> | </span><span class="tsd-signature-type">undefined</span><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = [undefined,]</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/environment.ts#L78">environment.ts:78</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/environment.ts#L78">environment.ts:78</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -228,7 +228,7 @@
 					<div class="tsd-signature tsd-kind-icon">packedCFunc<wbr>Table<wbr>Free<wbr>Id<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = []</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/environment.ts#L84">environment.ts:84</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/environment.ts#L84">environment.ts:84</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -250,7 +250,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/environment.ts#L105">environment.ts:105</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/environment.ts#L105">environment.ts:105</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/ffilibrary.html b/docs/reference/api/typedoc/classes/ffilibrary.html
index 44f4b99594..143c1c7209 100644
--- a/docs/reference/api/typedoc/classes/ffilibrary.html
+++ b/docs/reference/api/typedoc/classes/ffilibrary.html
@@ -131,7 +131,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L50">runtime.ts:50</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L50">runtime.ts:50</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -156,7 +156,7 @@
 					<div class="tsd-signature tsd-kind-icon">exports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">Function</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L47">runtime.ts:47</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L47">runtime.ts:47</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -166,7 +166,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L46">runtime.ts:46</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L46">runtime.ts:46</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">wasm32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">boolean</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L45">runtime.ts:45</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L45">runtime.ts:45</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -186,7 +186,7 @@
 					<div class="tsd-signature tsd-kind-icon">webGPUContext<span class="tsd-signature-symbol">:</span> <a href="webgpucontext.html" class="tsd-signature-type">WebGPUContext</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L48">runtime.ts:48</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L48">runtime.ts:48</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -203,7 +203,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L77">runtime.ts:77</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L77">runtime.ts:77</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -226,7 +226,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L67">runtime.ts:67</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L67">runtime.ts:67</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -243,7 +243,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L85">runtime.ts:85</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L85">runtime.ts:85</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <a href="cachedcallstack.html" class="tsd-signature-type">CachedCallStack</a></h4>
@@ -260,7 +260,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L96">runtime.ts:96</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L96">runtime.ts:96</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -283,7 +283,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L73">runtime.ts:73</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L73">runtime.ts:73</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
diff --git a/docs/reference/api/typedoc/classes/instance.html b/docs/reference/api/typedoc/classes/instance.html
index 2dae0231ec..16e8170940 100644
--- a/docs/reference/api/typedoc/classes/instance.html
+++ b/docs/reference/api/typedoc/classes/instance.html
@@ -161,7 +161,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L844">runtime.ts:844</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L844">runtime.ts:844</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -224,7 +224,7 @@
 					<div class="tsd-signature tsd-kind-icon">exports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">Function</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L834">runtime.ts:834</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L834">runtime.ts:834</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -234,7 +234,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L833">runtime.ts:833</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L833">runtime.ts:833</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -251,7 +251,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L973">runtime.ts:973</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L973">runtime.ts:973</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -296,7 +296,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L932">runtime.ts:932</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L932">runtime.ts:932</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -318,7 +318,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L901">runtime.ts:901</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L901">runtime.ts:901</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -381,7 +381,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1215">runtime.ts:1215</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1215">runtime.ts:1215</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -412,7 +412,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1000">runtime.ts:1000</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1000">runtime.ts:1000</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -453,7 +453,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1207">runtime.ts:1207</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1207">runtime.ts:1207</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -491,7 +491,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L922">runtime.ts:922</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L922">runtime.ts:922</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -508,7 +508,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1235">runtime.ts:1235</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1235">runtime.ts:1235</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -552,7 +552,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L943">runtime.ts:943</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L943">runtime.ts:943</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -577,7 +577,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1088">runtime.ts:1088</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1088">runtime.ts:1088</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -609,7 +609,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1363">runtime.ts:1363</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1363">runtime.ts:1363</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -640,7 +640,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1123">runtime.ts:1123</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1123">runtime.ts:1123</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -672,7 +672,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1016">runtime.ts:1016</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1016">runtime.ts:1016</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -695,7 +695,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1281">runtime.ts:1281</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1281">runtime.ts:1281</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -729,7 +729,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L986">runtime.ts:986</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L986">runtime.ts:986</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -769,7 +769,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1341">runtime.ts:1341</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1341">runtime.ts:1341</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -817,7 +817,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1055">runtime.ts:1055</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1055">runtime.ts:1055</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -857,7 +857,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1320">runtime.ts:1320</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1320">runtime.ts:1320</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -900,7 +900,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1197">runtime.ts:1197</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1197">runtime.ts:1197</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -938,7 +938,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1491">runtime.ts:1491</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1491">runtime.ts:1491</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -990,7 +990,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1009">runtime.ts:1009</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1009">runtime.ts:1009</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1014,7 +1014,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1151">runtime.ts:1151</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1151">runtime.ts:1151</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1046,7 +1046,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1134">runtime.ts:1134</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1134">runtime.ts:1134</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1078,7 +1078,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1292">runtime.ts:1292</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1292">runtime.ts:1292</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1110,7 +1110,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1223">runtime.ts:1223</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1223">runtime.ts:1223</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1141,7 +1141,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L957">runtime.ts:957</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L957">runtime.ts:957</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/memory.html b/docs/reference/api/typedoc/classes/memory.html
index 88febf40b8..9fd48a8b77 100644
--- a/docs/reference/api/typedoc/classes/memory.html
+++ b/docs/reference/api/typedoc/classes/memory.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L40">memory.ts:40</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L40">memory.ts:40</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -152,7 +152,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Memory</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L32">memory.ts:32</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L32">memory.ts:32</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -162,7 +162,7 @@
 					<div class="tsd-signature tsd-kind-icon">wasm32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">boolean</span><span class="tsd-signature-symbol"> = true</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L33">memory.ts:33</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L33">memory.ts:33</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -179,7 +179,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L154">memory.ts:154</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L154">memory.ts:154</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -210,7 +210,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L90">memory.ts:90</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L90">memory.ts:90</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -233,7 +233,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L97">memory.ts:97</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L97">memory.ts:97</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -256,7 +256,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L74">memory.ts:74</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L74">memory.ts:74</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -279,7 +279,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L81">memory.ts:81</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L81">memory.ts:81</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -302,7 +302,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L104">memory.ts:104</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L104">memory.ts:104</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -325,7 +325,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L132">memory.ts:132</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L132">memory.ts:132</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -362,7 +362,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L145">memory.ts:145</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L145">memory.ts:145</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -393,7 +393,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L60">memory.ts:60</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L60">memory.ts:60</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -416,7 +416,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L67">memory.ts:67</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L67">memory.ts:67</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -439,7 +439,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L53">memory.ts:53</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L53">memory.ts:53</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -462,7 +462,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L114">memory.ts:114</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L114">memory.ts:114</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -485,7 +485,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L124">memory.ts:124</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L124">memory.ts:124</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -502,7 +502,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/memory.ts#L175">memory.ts:175</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/memory.ts#L175">memory.ts:175</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/module.html b/docs/reference/api/typedoc/classes/module.html
index 40e0001da3..4927e70a27 100644
--- a/docs/reference/api/typedoc/classes/module.html
+++ b/docs/reference/api/typedoc/classes/module.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L614">runtime.ts:614</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L614">runtime.ts:614</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -169,7 +169,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L626">runtime.ts:626</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L626">runtime.ts:626</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -186,7 +186,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L653">runtime.ts:653</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L653">runtime.ts:653</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -218,7 +218,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L641">runtime.ts:641</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L641">runtime.ts:641</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -250,7 +250,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L687">runtime.ts:687</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L687">runtime.ts:687</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/ndarray.html b/docs/reference/api/typedoc/classes/ndarray.html
index 9b2d9e6401..19d6d353f0 100644
--- a/docs/reference/api/typedoc/classes/ndarray.html
+++ b/docs/reference/api/typedoc/classes/ndarray.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L401">runtime.ts:401</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L401">runtime.ts:401</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -158,7 +158,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<span class="tsd-signature-symbol">:</span> <a href="dldevice.html" class="tsd-signature-type">DLDevice</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L394">runtime.ts:394</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L394">runtime.ts:394</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -173,7 +173,7 @@
 					<div class="tsd-signature tsd-kind-icon">dtype<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L390">runtime.ts:390</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L390">runtime.ts:390</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -188,7 +188,7 @@
 					<div class="tsd-signature tsd-kind-icon">ndim<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L388">runtime.ts:388</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L388">runtime.ts:388</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -203,7 +203,7 @@
 					<div class="tsd-signature tsd-kind-icon">shape<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L392">runtime.ts:392</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L392">runtime.ts:392</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -225,7 +225,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L480">runtime.ts:480</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L480">runtime.ts:480</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -258,7 +258,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L524">runtime.ts:524</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L524">runtime.ts:524</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -290,7 +290,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L465">runtime.ts:465</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L465">runtime.ts:465</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -307,7 +307,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L458">runtime.ts:458</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L458">runtime.ts:458</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -339,7 +339,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L584">runtime.ts:584</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L584">runtime.ts:584</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -363,7 +363,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L553">runtime.ts:553</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L553">runtime.ts:553</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/packedfunccell.html b/docs/reference/api/typedoc/classes/packedfunccell.html
index 9ce0b7cb69..75c5b13c73 100644
--- a/docs/reference/api/typedoc/classes/packedfunccell.html
+++ b/docs/reference/api/typedoc/classes/packedfunccell.html
@@ -117,7 +117,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L248">runtime.ts:248</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L248">runtime.ts:248</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -146,7 +146,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L255">runtime.ts:255</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L255">runtime.ts:255</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -163,7 +163,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L264">runtime.ts:264</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L264">runtime.ts:264</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
diff --git a/docs/reference/api/typedoc/classes/rpcserver.html b/docs/reference/api/typedoc/classes/rpcserver.html
index c86b316390..62e7e7e103 100644
--- a/docs/reference/api/typedoc/classes/rpcserver.html
+++ b/docs/reference/api/typedoc/classes/rpcserver.html
@@ -115,7 +115,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L95">rpc_server.ts:95</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L95">rpc_server.ts:95</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">get<wbr>Imports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">unknown</span><span class="tsd-signat [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L84">rpc_server.ts:84</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L84">rpc_server.ts:84</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -201,7 +201,7 @@
 					<div class="tsd-signature tsd-kind-icon">key<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L80">rpc_server.ts:80</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L80">rpc_server.ts:80</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -211,7 +211,7 @@
 					<div class="tsd-signature tsd-kind-icon">logger<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>msg<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L83">rpc_server.ts:83</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L83">rpc_server.ts:83</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -242,7 +242,7 @@
 					<div class="tsd-signature tsd-kind-icon">socket<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">WebSocket</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L81">rpc_server.ts:81</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L81">rpc_server.ts:81</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -252,7 +252,7 @@
 					<div class="tsd-signature tsd-kind-icon">state<span class="tsd-signature-symbol">:</span> <a href="../enums/rpcserverstate.html" class="tsd-signature-type">RPCServerState</a><span class="tsd-signature-symbol"> = RPCServerState.InitHeader</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L82">rpc_server.ts:82</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L82">rpc_server.ts:82</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -262,7 +262,7 @@
 					<div class="tsd-signature tsd-kind-icon">url<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L79">rpc_server.ts:79</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L79">rpc_server.ts:79</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/classes/runtimecontext.html b/docs/reference/api/typedoc/classes/runtimecontext.html
index db7e7948e1..80cc4a3adb 100644
--- a/docs/reference/api/typedoc/classes/runtimecontext.html
+++ b/docs/reference/api/typedoc/classes/runtimecontext.html
@@ -132,7 +132,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L148">runtime.ts:148</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L148">runtime.ts:148</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -172,7 +172,7 @@
 					<div class="tsd-signature tsd-kind-icon">array<wbr>Get<wbr>Item<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L143">runtime.ts:143</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L143">runtime.ts:143</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -182,7 +182,7 @@
 					<div class="tsd-signature tsd-kind-icon">array<wbr>Get<wbr>Size<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L144">runtime.ts:144</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L144">runtime.ts:144</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -192,7 +192,7 @@
 					<div class="tsd-signature tsd-kind-icon">array<wbr>Make<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L145">runtime.ts:145</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L145">runtime.ts:145</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -202,7 +202,7 @@
 					<div class="tsd-signature tsd-kind-icon">get<wbr>Sys<wbr>Lib<span class="tsd-signature-symbol">:</span> <a href="../index.html#packedfunc" class="tsd-signature-type">PackedFunc</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L146">runtime.ts:146</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L146">runtime.ts:146</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -219,7 +219,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L189">runtime.ts:189</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L189">runtime.ts:189</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -263,7 +263,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L163">runtime.ts:163</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L163">runtime.ts:163</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -280,7 +280,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L208">runtime.ts:208</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L208">runtime.ts:208</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-type-parameters-title">Type parameters</h4>
@@ -309,7 +309,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L157">runtime.ts:157</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L157">runtime.ts:157</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -326,7 +326,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L167">runtime.ts:167</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L167">runtime.ts:167</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -343,7 +343,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L198">runtime.ts:198</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L198">runtime.ts:198</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-type-parameters-title">Type parameters</h4>
diff --git a/docs/reference/api/typedoc/classes/scalar.html b/docs/reference/api/typedoc/classes/scalar.html
index d9b8dca351..3b712808fb 100644
--- a/docs/reference/api/typedoc/classes/scalar.html
+++ b/docs/reference/api/typedoc/classes/scalar.html
@@ -112,7 +112,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L235">runtime.ts:235</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L235">runtime.ts:235</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -137,7 +137,7 @@
 					<div class="tsd-signature tsd-kind-icon">dtype<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L235">runtime.ts:235</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L235">runtime.ts:235</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -152,7 +152,7 @@
 					<div class="tsd-signature tsd-kind-icon">value<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L233">runtime.ts:233</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L233">runtime.ts:233</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/tvmarray.html b/docs/reference/api/typedoc/classes/tvmarray.html
index bf74cbd906..b032771e57 100644
--- a/docs/reference/api/typedoc/classes/tvmarray.html
+++ b/docs/reference/api/typedoc/classes/tvmarray.html
@@ -133,7 +133,7 @@
 							<aside class="tsd-sources">
 								<p>Overrides <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#constructor">constructor</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L784">runtime.ts:784</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L784">runtime.ts:784</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -162,7 +162,7 @@
 					<aside class="tsd-sources">
 						<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#ctx">ctx</a></p>
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L703">runtime.ts:703</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L703">runtime.ts:703</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -180,7 +180,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#dispose">dispose</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L715">runtime.ts:715</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L715">runtime.ts:715</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -197,7 +197,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L804">runtime.ts:804</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L804">runtime.ts:804</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -230,7 +230,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#gethandle">getHandle</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L730">runtime.ts:730</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L730">runtime.ts:730</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -262,7 +262,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L796">runtime.ts:796</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L796">runtime.ts:796</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -283,7 +283,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#typeindex">typeIndex</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L738">runtime.ts:738</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L738">runtime.ts:738</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -306,7 +306,7 @@
 							<aside class="tsd-sources">
 								<p>Inherited from <a href="tvmobject.html">TVMObject</a>.<a href="tvmobject.html#typekey">typeKey</a></p>
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L758">runtime.ts:758</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L758">runtime.ts:758</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/tvmobject.html b/docs/reference/api/typedoc/classes/tvmobject.html
index 1357675ec0..0767baebf1 100644
--- a/docs/reference/api/typedoc/classes/tvmobject.html
+++ b/docs/reference/api/typedoc/classes/tvmobject.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L703">runtime.ts:703</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L703">runtime.ts:703</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -158,7 +158,7 @@
 					<div class="tsd-signature tsd-kind-icon">ctx<span class="tsd-signature-symbol">:</span> <a href="runtimecontext.html" class="tsd-signature-type">RuntimeContext</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L703">runtime.ts:703</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L703">runtime.ts:703</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -175,7 +175,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L715">runtime.ts:715</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L715">runtime.ts:715</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -192,7 +192,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L730">runtime.ts:730</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L730">runtime.ts:730</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -224,7 +224,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L738">runtime.ts:738</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L738">runtime.ts:738</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -246,7 +246,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L758">runtime.ts:758</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L758">runtime.ts:758</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/webgpucontext.html b/docs/reference/api/typedoc/classes/webgpucontext.html
index d615ee92e4..353179d6f2 100644
--- a/docs/reference/api/typedoc/classes/webgpucontext.html
+++ b/docs/reference/api/typedoc/classes/webgpucontext.html
@@ -120,7 +120,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/webgpu.ts#L57">webgpu.ts:57</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/webgpu.ts#L57">webgpu.ts:57</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -145,7 +145,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">GPUDevice</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/webgpu.ts#L50">webgpu.ts:50</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/webgpu.ts#L50">webgpu.ts:50</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -155,7 +155,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/webgpu.ts#L51">webgpu.ts:51</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/webgpu.ts#L51">webgpu.ts:51</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -172,7 +172,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/webgpu.ts#L84">webgpu.ts:84</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/webgpu.ts#L84">webgpu.ts:84</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -209,7 +209,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/webgpu.ts#L172">webgpu.ts:172</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/webgpu.ts#L172">webgpu.ts:172</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -238,7 +238,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/webgpu.ts#L67">webgpu.ts:67</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/webgpu.ts#L67">webgpu.ts:67</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/enums/argtypecode.html b/docs/reference/api/typedoc/enums/argtypecode.html
index ca9a02ca93..f87fbfab8d 100644
--- a/docs/reference/api/typedoc/enums/argtypecode.html
+++ b/docs/reference/api/typedoc/enums/argtypecode.html
@@ -106,7 +106,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLDevice<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 6</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L242">ctypes.ts:242</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L242">ctypes.ts:242</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -116,7 +116,7 @@
 					<div class="tsd-signature tsd-kind-icon">Float<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L238">ctypes.ts:238</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L238">ctypes.ts:238</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -126,7 +126,7 @@
 					<div class="tsd-signature tsd-kind-icon">Int<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L236">ctypes.ts:236</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L236">ctypes.ts:236</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -136,7 +136,7 @@
 					<div class="tsd-signature tsd-kind-icon">Null<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L240">ctypes.ts:240</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L240">ctypes.ts:240</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -146,7 +146,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMBytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 12</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L248">ctypes.ts:248</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L248">ctypes.ts:248</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -156,7 +156,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMDLTensor<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 7</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L243">ctypes.ts:243</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L243">ctypes.ts:243</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -166,7 +166,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMData<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 5</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L241">ctypes.ts:241</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L241">ctypes.ts:241</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMModule<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 9</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L245">ctypes.ts:245</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L245">ctypes.ts:245</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -186,7 +186,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMNDArray<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 13</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L249">ctypes.ts:249</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L249">ctypes.ts:249</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -196,7 +196,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMObject<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L244">ctypes.ts:244</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L244">ctypes.ts:244</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -206,7 +206,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMObjectRValue<wbr>Ref<wbr>Arg<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 14</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L250">ctypes.ts:250</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L250">ctypes.ts:250</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -216,7 +216,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMOpaque<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 3</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L239">ctypes.ts:239</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L239">ctypes.ts:239</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -226,7 +226,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMPacked<wbr>Func<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 10</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L246">ctypes.ts:246</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L246">ctypes.ts:246</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -236,7 +236,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 11</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L247">ctypes.ts:247</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L247">ctypes.ts:247</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -246,7 +246,7 @@
 					<div class="tsd-signature tsd-kind-icon">UInt<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L237">ctypes.ts:237</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L237">ctypes.ts:237</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/aynccallbackcode.html b/docs/reference/api/typedoc/enums/aynccallbackcode.html
index 89815e465d..06c6c68bd0 100644
--- a/docs/reference/api/typedoc/enums/aynccallbackcode.html
+++ b/docs/reference/api/typedoc/enums/aynccallbackcode.html
@@ -93,7 +93,7 @@
 					<div class="tsd-signature tsd-kind-icon">k<wbr>Exception<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 5</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L812">runtime.ts:812</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L812">runtime.ts:812</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -103,7 +103,7 @@
 					<div class="tsd-signature tsd-kind-icon">k<wbr>Return<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L811">runtime.ts:811</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L811">runtime.ts:811</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/dldatatypecode.html b/docs/reference/api/typedoc/enums/dldatatypecode.html
index e1d960fdd7..341180b57b 100644
--- a/docs/reference/api/typedoc/enums/dldatatypecode.html
+++ b/docs/reference/api/typedoc/enums/dldatatypecode.html
@@ -95,7 +95,7 @@
 					<div class="tsd-signature tsd-kind-icon">Float<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L339">runtime.ts:339</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L339">runtime.ts:339</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -105,7 +105,7 @@
 					<div class="tsd-signature tsd-kind-icon">Int<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L337">runtime.ts:337</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L337">runtime.ts:337</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -115,7 +115,7 @@
 					<div class="tsd-signature tsd-kind-icon">Opaque<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 3</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L340">runtime.ts:340</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L340">runtime.ts:340</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -125,7 +125,7 @@
 					<div class="tsd-signature tsd-kind-icon">UInt<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L338">runtime.ts:338</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L338">runtime.ts:338</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/rpcserverstate.html b/docs/reference/api/typedoc/enums/rpcserverstate.html
index 26aaa20008..a09106897c 100644
--- a/docs/reference/api/typedoc/enums/rpcserverstate.html
+++ b/docs/reference/api/typedoc/enums/rpcserverstate.html
@@ -90,7 +90,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Header<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L29">rpc_server.ts:29</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L29">rpc_server.ts:29</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -100,7 +100,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Header<wbr>Key<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L30">rpc_server.ts:30</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L30">rpc_server.ts:30</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -110,7 +110,7 @@
 					<div class="tsd-signature tsd-kind-icon">Init<wbr>Server<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L31">rpc_server.ts:31</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L31">rpc_server.ts:31</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -120,7 +120,7 @@
 					<div class="tsd-signature tsd-kind-icon">Receive<wbr>Packet<wbr>Body<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L34">rpc_server.ts:34</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L34">rpc_server.ts:34</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -130,7 +130,7 @@
 					<div class="tsd-signature tsd-kind-icon">Receive<wbr>Packet<wbr>Header<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L33">rpc_server.ts:33</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L33">rpc_server.ts:33</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -140,7 +140,7 @@
 					<div class="tsd-signature tsd-kind-icon">Wait<wbr>For<wbr>Callback<span class="tsd-signature-symbol">:</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L32">rpc_server.ts:32</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L32">rpc_server.ts:32</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/enums/sizeof.html b/docs/reference/api/typedoc/enums/sizeof.html
index 55040572e2..420283dcc1 100644
--- a/docs/reference/api/typedoc/enums/sizeof.html
+++ b/docs/reference/api/typedoc/enums/sizeof.html
@@ -100,7 +100,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLData<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = I32</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L228">ctypes.ts:228</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L228">ctypes.ts:228</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -110,7 +110,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLDevice<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = I32 + I32</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L229">ctypes.ts:229</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L229">ctypes.ts:229</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -120,7 +120,7 @@
 					<div class="tsd-signature tsd-kind-icon">F32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L225">ctypes.ts:225</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L225">ctypes.ts:225</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -130,7 +130,7 @@
 					<div class="tsd-signature tsd-kind-icon">F64<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L226">ctypes.ts:226</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L226">ctypes.ts:226</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -140,7 +140,7 @@
 					<div class="tsd-signature tsd-kind-icon">I32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L223">ctypes.ts:223</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L223">ctypes.ts:223</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -150,7 +150,7 @@
 					<div class="tsd-signature tsd-kind-icon">I64<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L224">ctypes.ts:224</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L224">ctypes.ts:224</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -160,7 +160,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMValue<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L227">ctypes.ts:227</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L227">ctypes.ts:227</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -170,7 +170,7 @@
 					<div class="tsd-signature tsd-kind-icon">U16<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L222">ctypes.ts:222</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L222">ctypes.ts:222</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -180,7 +180,7 @@
 					<div class="tsd-signature tsd-kind-icon">U8<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L221">ctypes.ts:221</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L221">ctypes.ts:221</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/index.html b/docs/reference/api/typedoc/index.html
index b9657ae3b9..67abb22bd9 100644
--- a/docs/reference/api/typedoc/index.html
+++ b/docs/reference/api/typedoc/index.html
@@ -182,7 +182,7 @@
 					<div class="tsd-signature tsd-kind-icon">FObject<wbr>Constructor<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, lib<span class="tsd-signature-symbol">: </span><a href="classes/ffilibrary.html" class="tsd-signature-type">FFILibrary</a>, ctx<span class="tsd-signature-symbol">: </span><a href="classes/runtimecontext.html" class="t [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L778">runtime.ts:778</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L778">runtime.ts:778</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -224,7 +224,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Alloc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>shape<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, ndim<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, dtypeCode<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, dtypeBits<span class="tsd [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L113">ctypes.ts:113</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L113">ctypes.ts:113</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -288,7 +288,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>From<wbr>Bytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, data<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nbytes<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">num [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L129">ctypes.ts:129</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L129">ctypes.ts:129</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -332,7 +332,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>From<wbr>To<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>from<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, to<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, stream<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-sig [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L145">ctypes.ts:145</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L145">ctypes.ts:145</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -376,7 +376,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>ToBytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, data<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nbytes<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</sp [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L137">ctypes.ts:137</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L137">ctypes.ts:137</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -420,7 +420,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L122">ctypes.ts:122</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L122">ctypes.ts:122</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -456,7 +456,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMBackend<wbr>PackedCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>argValues<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, argCodes<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nargs<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number< [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L161">ctypes.ts:161</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L161">ctypes.ts:161</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -508,7 +508,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMCFunc<wbr>Set<wbr>Return<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>ret<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, value<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCode<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signa [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L78">ctypes.ts:78</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L78">ctypes.ts:78</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -556,7 +556,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMCb<wbr>Arg<wbr>ToReturn<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>value<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, code<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span c [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L84">ctypes.ts:84</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L84">ctypes.ts:84</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -595,7 +595,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Call<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>func<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, argValues<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCode<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-t [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L68">ctypes.ts:68</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L68">ctypes.ts:68</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -651,7 +651,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>func<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L58">ctypes.ts:58</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L58">ctypes.ts:58</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -687,7 +687,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Get<wbr>Global<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>name<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span cla [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L101">ctypes.ts:101</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L101">ctypes.ts:101</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -726,7 +726,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>List<wbr>Global<wbr>Names<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>outSize<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, outArray<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&g [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L89">ctypes.ts:89</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L89">ctypes.ts:89</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -765,7 +765,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Register<wbr>Global<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>name<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, f<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, override<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</spa [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L95">ctypes.ts:95</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L95">ctypes.ts:95</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -808,7 +808,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMGet<wbr>Last<wbr>Error<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L34">ctypes.ts:34</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L34">ctypes.ts:34</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -838,7 +838,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L53">ctypes.ts:53</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L53">ctypes.ts:53</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -874,7 +874,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Get<wbr>Function<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, funcName<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, queryImports<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">numbe [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L42">ctypes.ts:42</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L42">ctypes.ts:42</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -922,7 +922,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Import<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, dep<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-si [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L48">ctypes.ts:48</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L48">ctypes.ts:48</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -962,7 +962,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>obj<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L169">ctypes.ts:169</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L169">ctypes.ts:169</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -998,7 +998,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Get<wbr>Type<wbr>Index<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>obj<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out_tindex<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt;  [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L174">ctypes.ts:174</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L174">ctypes.ts:174</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1037,7 +1037,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Type<wbr>Index2<wbr>Key<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>type_index<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, out_type_key<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><spa [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L179">ctypes.ts:179</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L179">ctypes.ts:179</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1076,7 +1076,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMObject<wbr>Type<wbr>Key2<wbr>Index<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>type_key<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out_tindex<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol">  [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L184">ctypes.ts:184</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L184">ctypes.ts:184</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1115,7 +1115,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMSynchronize<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>deviceType<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, deviceId<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, stream<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signatur [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L151">ctypes.ts:151</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L151">ctypes.ts:151</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1157,7 +1157,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Alloc<wbr>Space<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>size<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L189">ctypes.ts:189</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L189">ctypes.ts:189</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1193,7 +1193,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Free<wbr>Space<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>ptr<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L192">ctypes.ts:192</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L192">ctypes.ts:192</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1229,7 +1229,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Func<wbr>Create<wbr>FromCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>resource<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&g [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L209">ctypes.ts:209</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L209">ctypes.ts:209</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1269,7 +1269,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>PackedCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>args<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCodes<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nargs<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L201">ctypes.ts:201</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L201">ctypes.ts:201</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1321,7 +1321,7 @@
 					<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>PackedCFunc<wbr>Finalizer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>resourceHandle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L215">ctypes.ts:215</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L215">ctypes.ts:215</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1357,7 +1357,7 @@
 					<div class="tsd-signature tsd-kind-icon">GPUPointer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/webgpu.ts#L25">webgpu.ts:25</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/webgpu.ts#L25">webgpu.ts:25</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1372,7 +1372,7 @@
 					<div class="tsd-signature tsd-kind-icon">Packed<wbr>Func<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">...</span>args<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol"> &amp; </span><a href="interfaces/disp [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L37">runtime.ts:37</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L37">runtime.ts:37</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1387,7 +1387,7 @@
 					<div class="tsd-signature tsd-kind-icon">Pointer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L25">ctypes.ts:25</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L25">ctypes.ts:25</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1402,7 +1402,7 @@
 					<div class="tsd-signature tsd-kind-icon">Ptr<wbr>Offset<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/ctypes.ts#L28">ctypes.ts:28</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/ctypes.ts#L28">ctypes.ts:28</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1417,7 +1417,7 @@
 					<div class="tsd-signature tsd-kind-icon">TVMObject<wbr>Base<span class="tsd-signature-symbol">:</span> <a href="classes/tvmobject.html" class="tsd-signature-type">TVMObject</a><span class="tsd-signature-symbol"> | </span><a href="classes/ndarray.html" class="tsd-signature-type">NDArray</a><span class="tsd-signature-symbol"> | </span><a href="classes/module.html" class="tsd-signature-type">Module</a><span class="tsd-signature-symbol"> | </span><a href="index.html#packedfunc" class="t [...]
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L781">runtime.ts:781</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L781">runtime.ts:781</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1435,7 +1435,7 @@
 					<div class="tsd-signature tsd-kind-icon">RPC_<wbr>MAGIC<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">1045105</span><span class="tsd-signature-symbol"> = 1045105</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/rpc_server.ts#L38">rpc_server.ts:38</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/rpc_server.ts#L38">rpc_server.ts:38</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -1457,7 +1457,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/support.ts#L25">support.ts:25</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/support.ts#L25">support.ts:25</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1489,7 +1489,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/support.ts#L39">support.ts:39</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/support.ts#L39">support.ts:39</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1518,7 +1518,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/support.ts#L52">support.ts:52</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/support.ts#L52">support.ts:52</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1555,7 +1555,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/compact.ts#L38">compact.ts:38</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/compact.ts#L38">compact.ts:38</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1586,7 +1586,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/webgpu.ts#L30">webgpu.ts:30</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/webgpu.ts#L30">webgpu.ts:30</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1608,7 +1608,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/environment.ts#L32">environment.ts:32</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/environment.ts#L32">environment.ts:32</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1639,7 +1639,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/compact.ts#L24">compact.ts:24</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/compact.ts#L24">compact.ts:24</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1661,7 +1661,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L1749">runtime.ts:1749</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L1749">runtime.ts:1749</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1726,7 +1726,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/support.ts#L62">support.ts:62</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/support.ts#L62">support.ts:62</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -1748,7 +1748,7 @@
 					<div class="tsd-signature tsd-kind-icon">DLData<wbr>Type<wbr>Code<wbr>ToStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L343">runtime.ts:343</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L343">runtime.ts:343</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1757,7 +1757,7 @@
 						<div class="tsd-signature tsd-kind-icon">0<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;int&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L344">runtime.ts:344</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L344">runtime.ts:344</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1767,7 +1767,7 @@
 						<div class="tsd-signature tsd-kind-icon">1<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;uint&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L345">runtime.ts:345</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L345">runtime.ts:345</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1777,7 +1777,7 @@
 						<div class="tsd-signature tsd-kind-icon">2<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;float&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L346">runtime.ts:346</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L346">runtime.ts:346</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1787,7 +1787,7 @@
 						<div class="tsd-signature tsd-kind-icon">3<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;handle&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L347">runtime.ts:347</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L347">runtime.ts:347</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1798,7 +1798,7 @@
 					<div class="tsd-signature tsd-kind-icon">Device<wbr>Enum<wbr>ToStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L272">runtime.ts:272</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L272">runtime.ts:272</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1807,7 +1807,7 @@
 						<div class="tsd-signature tsd-kind-icon">1<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;cpu&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L273">runtime.ts:273</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L273">runtime.ts:273</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1817,7 +1817,7 @@
 						<div class="tsd-signature tsd-kind-icon">15<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;webgpu&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L277">runtime.ts:277</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L277">runtime.ts:277</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1827,7 +1827,7 @@
 						<div class="tsd-signature tsd-kind-icon">2<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;cuda&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L274">runtime.ts:274</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L274">runtime.ts:274</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1837,7 +1837,7 @@
 						<div class="tsd-signature tsd-kind-icon">4<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;opencl&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L275">runtime.ts:275</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L275">runtime.ts:275</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1847,7 +1847,7 @@
 						<div class="tsd-signature tsd-kind-icon">8<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = &quot;metal&quot;</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L276">runtime.ts:276</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L276">runtime.ts:276</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1858,7 +1858,7 @@
 					<div class="tsd-signature tsd-kind-icon">Device<wbr>Str<wbr>ToEnum<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L280">runtime.ts:280</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L280">runtime.ts:280</a></li>
 						</ul>
 					</aside>
 					<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1867,7 +1867,7 @@
 						<div class="tsd-signature tsd-kind-icon">cl<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 4</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L283">runtime.ts:283</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L283">runtime.ts:283</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1877,7 +1877,7 @@
 						<div class="tsd-signature tsd-kind-icon">cpu<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 1</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L281">runtime.ts:281</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L281">runtime.ts:281</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1887,7 +1887,7 @@
 						<div class="tsd-signature tsd-kind-icon">cuda<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 2</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L282">runtime.ts:282</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L282">runtime.ts:282</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1897,7 +1897,7 @@
 						<div class="tsd-signature tsd-kind-icon">metal<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 8</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L286">runtime.ts:286</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L286">runtime.ts:286</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1907,7 +1907,7 @@
 						<div class="tsd-signature tsd-kind-icon">opencl<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 4</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L284">runtime.ts:284</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L284">runtime.ts:284</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1917,7 +1917,7 @@
 						<div class="tsd-signature tsd-kind-icon">vulkan<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 7</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L285">runtime.ts:285</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L285">runtime.ts:285</a></li>
 							</ul>
 						</aside>
 					</section>
@@ -1927,7 +1927,7 @@
 						<div class="tsd-signature tsd-kind-icon">webgpu<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 15</span></div>
 						<aside class="tsd-sources">
 							<ul>
-								<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/runtime.ts#L287">runtime.ts:287</a></li>
+								<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/runtime.ts#L287">runtime.ts:287</a></li>
 							</ul>
 						</aside>
 					</section>
diff --git a/docs/reference/api/typedoc/interfaces/disposable.html b/docs/reference/api/typedoc/interfaces/disposable.html
index 314ff2ce07..9c4eb7e47a 100644
--- a/docs/reference/api/typedoc/interfaces/disposable.html
+++ b/docs/reference/api/typedoc/interfaces/disposable.html
@@ -115,7 +115,7 @@
 					<div class="tsd-signature tsd-kind-icon">dispose<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/types.ts#L52">types.ts:52</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/types.ts#L52">types.ts:52</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/interfaces/functioninfo.html b/docs/reference/api/typedoc/interfaces/functioninfo.html
index 665e692800..d1c7870118 100644
--- a/docs/reference/api/typedoc/interfaces/functioninfo.html
+++ b/docs/reference/api/typedoc/interfaces/functioninfo.html
@@ -95,7 +95,7 @@
 					<div class="tsd-signature tsd-kind-icon">arg_<wbr>types<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/webgpu.ts#L41">webgpu.ts:41</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/webgpu.ts#L41">webgpu.ts:41</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -105,7 +105,7 @@
 					<div class="tsd-signature tsd-kind-icon">launch_<wbr>param_<wbr>tags<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/webgpu.ts#L42">webgpu.ts:42</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/webgpu.ts#L42">webgpu.ts:42</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -115,7 +115,7 @@
 					<div class="tsd-signature tsd-kind-icon">name<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/webgpu.ts#L40">webgpu.ts:40</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/webgpu.ts#L40">webgpu.ts:40</a></li>
 						</ul>
 					</aside>
 				</section>
diff --git a/docs/reference/api/typedoc/interfaces/libraryprovider.html b/docs/reference/api/typedoc/interfaces/libraryprovider.html
index 7fde26412f..e7557f6a75 100644
--- a/docs/reference/api/typedoc/interfaces/libraryprovider.html
+++ b/docs/reference/api/typedoc/interfaces/libraryprovider.html
@@ -112,7 +112,7 @@
 					<div class="tsd-signature tsd-kind-icon">imports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/types.ts#L34">types.ts:34</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/types.ts#L34">types.ts:34</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -127,7 +127,7 @@
 					<div class="tsd-signature tsd-kind-icon">start<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>inst<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">Instance</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/fa223b077/web/src/types.ts#L39">types.ts:39</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/e280e01fc/web/src/types.ts#L39">types.ts:39</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
diff --git a/docs/searchindex.js b/docs/searchindex.js
index bb379734ab..e85551182c 100644
--- a/docs/searchindex.js
+++ b/docs/searchindex.js
@@ -1 +1 @@
-Search.setIndex({docnames:["arch/benchmark","arch/convert_layout","arch/debugger","arch/device_target_interactions","arch/frontend/tensorflow","arch/hybrid_script","arch/index","arch/inferbound","arch/introduction_to_module_serialization","arch/microtvm_design","arch/microtvm_project_api","arch/model_library_format","arch/pass_infra","arch/relay_intro","arch/relay_op_strategy","arch/runtime","arch/runtimes/vulkan","arch/security","arch/virtual_machine","contribute/ci","contribute/code_gu [...]
\ No newline at end of file
+Search.setIndex({docnames:["arch/benchmark","arch/convert_layout","arch/debugger","arch/device_target_interactions","arch/frontend/tensorflow","arch/hybrid_script","arch/index","arch/inferbound","arch/introduction_to_module_serialization","arch/microtvm_design","arch/microtvm_project_api","arch/model_library_format","arch/pass_infra","arch/relay_intro","arch/relay_op_strategy","arch/runtime","arch/runtimes/vulkan","arch/security","arch/virtual_machine","contribute/ci","contribute/code_gu [...]
\ No newline at end of file
diff --git a/docs/topic/vta/tutorials/autotvm/sg_execution_times.html b/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
index f8e1ffd099..63221928fc 100644
--- a/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-autotvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:35.641</strong> total execution time for <strong>topic_vta_tutorials_autotvm</strong> files:</p>
+<p><strong>00:34.822</strong> total execution time for <strong>topic_vta_tutorials_autotvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 82%" />
@@ -359,7 +359,7 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_relay_vta.html#sphx-glr-topic-vta-tutorials-autotvm-tune-relay-vta-py"><span class="std std-ref">Auto-tuning a convolutional network on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_vta.py</span></code>)</p></td>
-<td><p>00:35.633</p></td>
+<td><p>00:34.814</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_alu_vta.html#sphx-glr-topic-vta-tutorials-autotvm-tune-alu-vta-py"><span class="std std-ref">Auto-tuning a ALU fused op on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_alu_vta.py</span></code>)</p></td>
diff --git a/docs/topic/vta/tutorials/frontend/deploy_classification.html b/docs/topic/vta/tutorials/frontend/deploy_classification.html
index 28d7217139..7d7398b74b 100644
--- a/docs/topic/vta/tutorials/frontend/deploy_classification.html
+++ b/docs/topic/vta/tutorials/frontend/deploy_classification.html
@@ -593,7 +593,7 @@ and dense layer which will both be executed in fp32 on the CPU.</p></li>
   warnings.warn(
 /workspace/vta/tutorials/frontend/deploy_classification.py:212: DeprecationWarning: legacy graph executor behavior of producing json / lib / params will be removed in the next release. Please see documents of tvm.contrib.graph_executor.GraphModule for the  new recommended usage.
   graph, lib, params = relay.build(
-resnet18_v1 inference graph built in 37.80s!
+resnet18_v1 inference graph built in 37.63s!
 </pre></div>
 </div>
 </div>
diff --git a/docs/topic/vta/tutorials/frontend/deploy_detection.html b/docs/topic/vta/tutorials/frontend/deploy_detection.html
index e1c270093d..866e299da6 100644
--- a/docs/topic/vta/tutorials/frontend/deploy_detection.html
+++ b/docs/topic/vta/tutorials/frontend/deploy_detection.html
@@ -611,7 +611,7 @@ and dense layer which will both be executed in fp32 on the CPU.</p></li>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/workspace/python/tvm/relay/build_module.py:345: DeprecationWarning: Please use input parameter mod (tvm.IRModule) instead of deprecated parameter mod (tvm.relay.function.Function)
   warnings.warn(
-yolov3-tiny inference graph built in 26.79s!
+yolov3-tiny inference graph built in 25.56s!
 </pre></div>
 </div>
 </div>
@@ -696,7 +696,6 @@ Download test image</p>
         alu_counter     :           849056
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  0.628 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-topic-vta-tutorials-frontend-deploy-detection-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../../../_downloads/65b9451c8de050d7cd9da2fe5a49acc6/deploy_detection.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_detection.py</span></code></a></p>
diff --git a/docs/topic/vta/tutorials/frontend/sg_execution_times.html b/docs/topic/vta/tutorials/frontend/sg_execution_times.html
index ae51e4d681..a5314f3122 100644
--- a/docs/topic/vta/tutorials/frontend/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/frontend/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-frontend-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>01:59.550</strong> total execution time for <strong>topic_vta_tutorials_frontend</strong> files:</p>
+<p><strong>01:57.864</strong> total execution time for <strong>topic_vta_tutorials_frontend</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,11 +359,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_detection.html#sphx-glr-topic-vta-tutorials-frontend-deploy-detection-py"><span class="std std-ref">Deploy Pretrained Vision Detection Model from Darknet on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_detection.py</span></code>)</p></td>
-<td><p>01:00.628</p></td>
+<td><p>00:59.107</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_classification.html#sphx-glr-topic-vta-tutorials-frontend-deploy-classification-py"><span class="std std-ref">Deploy Pretrained Vision Model from MxNet on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_classification.py</span></code>)</p></td>
-<td><p>00:58.922</p></td>
+<td><p>00:58.757</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/topic/vta/tutorials/optimize/sg_execution_times.html b/docs/topic/vta/tutorials/optimize/sg_execution_times.html
index ffb0c20417..285ba19f09 100644
--- a/docs/topic/vta/tutorials/optimize/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/optimize/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-optimize-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:03.376</strong> total execution time for <strong>topic_vta_tutorials_optimize</strong> files:</p>
+<p><strong>00:03.368</strong> total execution time for <strong>topic_vta_tutorials_optimize</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -359,11 +359,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="convolution_opt.html#sphx-glr-topic-vta-tutorials-optimize-convolution-opt-py"><span class="std std-ref">2D Convolution Optimization</span></a> (<code class="docutils literal notranslate"><span class="pre">convolution_opt.py</span></code>)</p></td>
-<td><p>00:02.831</p></td>
+<td><p>00:02.827</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="matrix_multiply_opt.html#sphx-glr-topic-vta-tutorials-optimize-matrix-multiply-opt-py"><span class="std std-ref">Matrix Multiply Blocking</span></a> (<code class="docutils literal notranslate"><span class="pre">matrix_multiply_opt.py</span></code>)</p></td>
-<td><p>00:00.545</p></td>
+<td><p>00:00.542</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/topic/vta/tutorials/sg_execution_times.html b/docs/topic/vta/tutorials/sg_execution_times.html
index 0dfefa1622..ae98acf8ac 100644
--- a/docs/topic/vta/tutorials/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-topic-vta-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:00.931</strong> total execution time for <strong>topic_vta_tutorials</strong> files:</p>
+<p><strong>00:00.935</strong> total execution time for <strong>topic_vta_tutorials</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 81%" />
@@ -359,11 +359,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="matrix_multiply.html#sphx-glr-topic-vta-tutorials-matrix-multiply-py"><span class="std std-ref">Simple Matrix Multiply</span></a> (<code class="docutils literal notranslate"><span class="pre">matrix_multiply.py</span></code>)</p></td>
-<td><p>00:00.478</p></td>
+<td><p>00:00.480</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="vta_get_started.html#sphx-glr-topic-vta-tutorials-vta-get-started-py"><span class="std std-ref">Get Started with VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">vta_get_started.py</span></code>)</p></td>
-<td><p>00:00.454</p></td>
+<td><p>00:00.455</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/tutorial/auto_scheduler_matmul_x86.html b/docs/tutorial/auto_scheduler_matmul_x86.html
index b4ff169520..202fa95e3f 100644
--- a/docs/tutorial/auto_scheduler_matmul_x86.html
+++ b/docs/tutorial/auto_scheduler_matmul_x86.html
@@ -502,9 +502,6 @@ trials, we can load the best schedule from the log file and apply it.</p>
 <a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">sch</span></a><span class="p">,</span> <a href="../reference/api/python/ir.html#tvm.ir.Array" title="tvm.ir.Array" class="sphx-glr-backref-module-tvm-ir sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">args</span></a> <span class="o">=</span> <a href="../reference/api/pyth [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>*E
-</pre></div>
-</div>
 </div>
 <div class="section" id="inspecting-the-optimized-schedule">
 <h2>Inspecting the Optimized Schedule<a class="headerlink" href="#inspecting-the-optimized-schedule" title="Permalink to this headline">¶</a></h2>
@@ -582,7 +579,7 @@ class Module:
 <span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 97.216 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 95.529 ms
 </pre></div>
 </div>
 </div>
@@ -644,7 +641,6 @@ resume the status and do more 5 trials.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Resume search:
-*E
 </pre></div>
 </div>
 </div>
@@ -655,7 +651,7 @@ automatically optimize a matrix multiplication, without the need to specify a
 search template.  It ends a series of examples that starts from the Tensor
 Expression (TE) language that demonstrates how TVM can optimize computational
 operations.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  50.538 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  25.982 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorial-auto-scheduler-matmul-x86-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../_downloads/eac4389b114db015e95cb3cdf8b86b83/auto_scheduler_matmul_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">auto_scheduler_matmul_x86.py</span></code></a></p>
diff --git a/docs/tutorial/autotvm_matmul_x86.html b/docs/tutorial/autotvm_matmul_x86.html
index 7c3dbbca0d..14b8e29187 100644
--- a/docs/tutorial/autotvm_matmul_x86.html
+++ b/docs/tutorial/autotvm_matmul_x86.html
@@ -690,16 +690,16 @@ reduce variance, we take 5 measurements and average them.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>waiting for device...
 device available
 Get devices for measurement successfully!
-No: 1   GFLOPS: 11.38/11.38     result: MeasureResult(costs=(0.0235916778,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6312711238861084, timestamp=1687174328.710246)        [(&#39;tile_y&#39;, [-1, 32]), (&#39;tile_x&#39;, [-1, 512])],None,95
-No: 2   GFLOPS: 9.36/11.38      result: MeasureResult(costs=(0.0286716888,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.725764274597168, timestamp=1687174329.4380782)        [(&#39;tile_y&#39;, [-1, 512]), (&#39;tile_x&#39;, [-1, 32])],None,59
-No: 3   GFLOPS: 11.44/11.44     result: MeasureResult(costs=(0.0234637456,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6387925148010254, timestamp=1687174330.0867321)       [(&#39;tile_y&#39;, [-1, 4]), (&#39;tile_x&#39;, [-1, 256])],None,82
-No: 4   GFLOPS: 10.26/11.44     result: MeasureResult(costs=(0.026157574,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.692347526550293, timestamp=1687174330.770482)  [(&#39;tile_y&#39;, [-1, 2]), (&#39;tile_x&#39;, [-1, 8])],None,31
-No: 5   GFLOPS: 0.52/11.44      result: MeasureResult(costs=(0.5141622518,), error_no=MeasureErrorNo.NO_ERROR, all_cost=8.494819641113281, timestamp=1687174339.400494) [(&#39;tile_y&#39;, [-1, 32]), (&#39;tile_x&#39;, [-1, 1])],None,5
-No: 6   GFLOPS: 3.78/11.44      result: MeasureResult(costs=(0.0709655928,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.4131886959075928, timestamp=1687174340.8004124)       [(&#39;tile_y&#39;, [-1, 32]), (&#39;tile_x&#39;, [-1, 8])],None,35
-No: 7   GFLOPS: 10.80/11.44     result: MeasureResult(costs=(0.0248468302,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6811978816986084, timestamp=1687174341.471038)        [(&#39;tile_y&#39;, [-1, 256]), (&#39;tile_x&#39;, [-1, 256])],None,88
-No: 8   GFLOPS: 10.58/11.44     result: MeasureResult(costs=(0.02536371,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6931350231170654, timestamp=1687174342.1444263) [(&#39;tile_y&#39;, [-1, 8]), (&#39;tile_x&#39;, [-1, 16])],None,43
-No: 9   GFLOPS: 3.86/11.44      result: MeasureResult(costs=(0.0696048634,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.347289800643921, timestamp=1687174343.600975) [(&#39;tile_y&#39;, [-1, 8]), (&#39;tile_x&#39;, [-1, 2])],None,13
-No: 10  GFLOPS: 1.02/11.44      result: MeasureResult(costs=(0.2639063404,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.449601411819458, timestamp=1687174348.0901854)        [(&#39;tile_y&#39;, [-1, 64]), (&#39;tile_x&#39;, [-1, 2])],None,16
+No: 1   GFLOPS: 10.47/10.47     result: MeasureResult(costs=(0.025650282199999996,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6601579189300537, timestamp=1687186325.3884518)       [(&#39;tile_y&#39;, [-1, 2]), (&#39;tile_x&#39;, [-1, 64])],None,61
+No: 2   GFLOPS: 15.91/15.91     result: MeasureResult(costs=(0.0168689998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.5638737678527832, timestamp=1687186325.9174454)       [(&#39;tile_y&#39;, [-1, 16]), (&#39;tile_x&#39;, [-1, 64])],None,64
+No: 3   GFLOPS: 12.53/15.91     result: MeasureResult(costs=(0.021419787,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7103521823883057, timestamp=1687186326.529788) [(&#39;tile_y&#39;, [-1, 64]), (&#39;tile_x&#39;, [-1, 128])],None,76
+No: 4   GFLOPS: 11.85/15.91     result: MeasureResult(costs=(0.0226513548,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6428844928741455, timestamp=1687186327.1516368)       [(&#39;tile_y&#39;, [-1, 16]), (&#39;tile_x&#39;, [-1, 16])],None,44
+No: 5   GFLOPS: 12.52/15.91     result: MeasureResult(costs=(0.0214371522,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6466293334960938, timestamp=1687186327.909414)        [(&#39;tile_y&#39;, [-1, 128]), (&#39;tile_x&#39;, [-1, 128])],None,77
+No: 6   GFLOPS: 0.50/15.91      result: MeasureResult(costs=(0.5324666002,), error_no=MeasureErrorNo.NO_ERROR, all_cost=8.800585269927979, timestamp=1687186336.687465) [(&#39;tile_y&#39;, [-1, 64]), (&#39;tile_x&#39;, [-1, 1])],None,6
+No: 7   GFLOPS: 11.58/15.91     result: MeasureResult(costs=(0.023183954399999998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6200051307678223, timestamp=1687186337.321072)        [(&#39;tile_y&#39;, [-1, 32]), (&#39;tile_x&#39;, [-1, 512])],None,95
+No: 8   GFLOPS: 8.51/15.91      result: MeasureResult(costs=(0.031551859599999996,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.7654030323028564, timestamp=1687186338.0872898)       [(&#39;tile_y&#39;, [-1, 512]), (&#39;tile_x&#39;, [-1, 256])],None,89
+No: 9   GFLOPS: 11.08/15.91     result: MeasureResult(costs=(0.024220844600000003,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6247663497924805, timestamp=1687186338.8868592)       [(&#39;tile_y&#39;, [-1, 8]), (&#39;tile_x&#39;, [-1, 512])],None,93
+No: 10  GFLOPS: 11.82/15.91     result: MeasureResult(costs=(0.022712121,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6573696136474609, timestamp=1687186339.5125413)        [(&#39;tile_y&#39;, [-1, 64]), (&#39;tile_x&#39;, [-1, 32])],None,56
 </pre></div>
 </div>
 <p>With tuning completed, we can choose the configuration from the log file that
diff --git a/docs/tutorial/autotvm_relay_x86.html b/docs/tutorial/autotvm_relay_x86.html
index c79c201f1c..417189eead 100644
--- a/docs/tutorial/autotvm_relay_x86.html
+++ b/docs/tutorial/autotvm_relay_x86.html
@@ -568,7 +568,7 @@ standard deviation.</p>
 <span class="nb">print</span><span class="p">(</span><a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">unoptimized</span></a><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>{&#39;mean&#39;: 487.7951962596853, &#39;median&#39;: 488.0434418999357, &#39;std&#39;: 1.7243702626508126}
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>{&#39;mean&#39;: 493.4093647796544, &#39;median&#39;: 492.87042814830784, &#39;std&#39;: 1.429450833251269}
 </pre></div>
 </div>
 </div>
@@ -757,178 +757,178 @@ depending on the specifics of the model and the target platform.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  1/25]  Current/Best:   18.87/  19.00 GFLOPS | Progress: (4/20) | 9.25 s
-[Task  1/25]  Current/Best:    7.20/  22.99 GFLOPS | Progress: (8/20) | 12.00 s
-[Task  1/25]  Current/Best:   10.02/  22.99 GFLOPS | Progress: (12/20) | 15.75 s
-[Task  1/25]  Current/Best:    3.29/  22.99 GFLOPS | Progress: (16/20) | 18.35 s
-[Task  1/25]  Current/Best:   19.49/  22.99 GFLOPS | Progress: (20/20) | 22.17 s Done.
+[Task  1/25]  Current/Best:   10.92/  20.88 GFLOPS | Progress: (4/20) | 10.06 s
+[Task  1/25]  Current/Best:   15.74/  23.70 GFLOPS | Progress: (8/20) | 12.15 s
+[Task  1/25]  Current/Best:    9.39/  23.70 GFLOPS | Progress: (12/20) | 19.96 s
+[Task  1/25]  Current/Best:   18.80/  23.70 GFLOPS | Progress: (16/20) | 22.70 s
+[Task  1/25]  Current/Best:   18.80/  23.70 GFLOPS | Progress: (20/20) | 25.17 s Done.
 
 [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  2/25]  Current/Best:   16.41/  16.41 GFLOPS | Progress: (4/20) | 4.82 s
-[Task  2/25]  Current/Best:   20.34/  20.34 GFLOPS | Progress: (8/20) | 6.78 s
-[Task  2/25]  Current/Best:   16.53/  20.34 GFLOPS | Progress: (12/20) | 9.60 s
-[Task  2/25]  Current/Best:   12.03/  20.34 GFLOPS | Progress: (16/20) | 11.18 s
-[Task  2/25]  Current/Best:   15.92/  20.34 GFLOPS | Progress: (20/20) | 12.86 s Done.
+[Task  2/25]  Current/Best:   15.87/  15.87 GFLOPS | Progress: (4/20) | 4.46 s
+[Task  2/25]  Current/Best:   11.47/  16.42 GFLOPS | Progress: (8/20) | 6.19 s
+[Task  2/25]  Current/Best:    6.22/  18.57 GFLOPS | Progress: (12/20) | 8.02 s
+[Task  2/25]  Current/Best:   14.75/  18.57 GFLOPS | Progress: (16/20) | 9.65 s
+[Task  2/25]  Current/Best:   13.58/  18.57 GFLOPS | Progress: (20/20) | 11.58 s Done.
 
 [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  3/25]  Current/Best:   22.76/  22.76 GFLOPS | Progress: (4/20) | 5.17 s
-[Task  3/25]  Current/Best:   12.27/  22.76 GFLOPS | Progress: (8/20) | 7.55 s
-[Task  3/25]  Current/Best:   15.52/  23.72 GFLOPS | Progress: (12/20) | 10.19 s
-[Task  3/25]  Current/Best:   20.25/  23.72 GFLOPS | Progress: (16/20) | 12.60 s
-[Task  3/25]  Current/Best:   10.26/  23.72 GFLOPS | Progress: (20/20) | 15.86 s Done.
+[Task  3/25]  Current/Best:   21.31/  21.31 GFLOPS | Progress: (4/20) | 4.88 s
+[Task  3/25]  Current/Best:    3.16/  24.23 GFLOPS | Progress: (8/20) | 7.45 s
+[Task  3/25]  Current/Best:   20.78/  24.23 GFLOPS | Progress: (12/20) | 9.67 s
+[Task  3/25]  Current/Best:    9.67/  24.23 GFLOPS | Progress: (16/20) | 11.89 s
+[Task  3/25]  Current/Best:   20.72/  24.23 GFLOPS | Progress: (20/20) | 14.42 s Done.
 
 [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  4/25]  Current/Best:    5.82/  21.45 GFLOPS | Progress: (4/20) | 5.96 s
-[Task  4/25]  Current/Best:   18.36/  21.45 GFLOPS | Progress: (8/20) | 7.71 s
-[Task  4/25]  Current/Best:   15.49/  21.45 GFLOPS | Progress: (12/20) | 9.98 s
-[Task  4/25]  Current/Best:   20.17/  21.45 GFLOPS | Progress: (16/20) | 11.99 s
-[Task  4/25]  Current/Best:   14.37/  21.45 GFLOPS | Progress: (20/20) | 14.56 s Done.
+[Task  4/25]  Current/Best:   16.16/  16.16 GFLOPS | Progress: (4/20) | 6.05 s
+[Task  4/25]  Current/Best:   15.43/  16.16 GFLOPS | Progress: (8/20) | 8.10 s
+[Task  4/25]  Current/Best:   21.58/  21.58 GFLOPS | Progress: (12/20) | 9.92 s
+[Task  4/25]  Current/Best:   11.58/  21.58 GFLOPS | Progress: (16/20) | 12.69 s
+[Task  4/25]  Current/Best:   12.51/  21.58 GFLOPS | Progress: (20/20) | 15.00 s Done.
 
 [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  5/25]  Current/Best:   17.90/  17.95 GFLOPS | Progress: (4/20) | 5.46 s
-[Task  5/25]  Current/Best:    6.29/  19.68 GFLOPS | Progress: (8/20) | 7.42 s
-[Task  5/25]  Current/Best:   15.23/  19.68 GFLOPS | Progress: (12/20) | 10.63 s
-[Task  5/25]  Current/Best:   18.31/  19.68 GFLOPS | Progress: (16/20) | 13.68 s
-[Task  5/25]  Current/Best:    8.84/  19.68 GFLOPS | Progress: (20/20) | 15.75 s Done.
+[Task  5/25]  Current/Best:   15.79/  16.11 GFLOPS | Progress: (4/20) | 4.93 s
+[Task  5/25]  Current/Best:    7.12/  16.11 GFLOPS | Progress: (8/20) | 7.22 s
+[Task  5/25]  Current/Best:   11.63/  16.11 GFLOPS | Progress: (12/20) | 9.91 s
+[Task  5/25]  Current/Best:   13.32/  16.11 GFLOPS | Progress: (16/20) | 12.24 s
+[Task  5/25]  Current/Best:   13.98/  16.11 GFLOPS | Progress: (20/20) | 14.62 s Done.
 
 [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  6/25]  Current/Best:   15.43/  19.72 GFLOPS | Progress: (4/20) | 5.13 s
-[Task  6/25]  Current/Best:   16.23/  19.72 GFLOPS | Progress: (8/20) | 7.84 s
-[Task  6/25]  Current/Best:   17.39/  19.72 GFLOPS | Progress: (12/20) | 10.90 s
-[Task  6/25]  Current/Best:   12.26/  19.99 GFLOPS | Progress: (16/20) | 13.16 s
-[Task  6/25]  Current/Best:   17.54/  19.99 GFLOPS | Progress: (20/20) | 15.48 s Done.
+[Task  6/25]  Current/Best:   11.41/  22.82 GFLOPS | Progress: (4/20) | 6.18 s
+[Task  6/25]  Current/Best:   17.13/  22.82 GFLOPS | Progress: (8/20) | 9.56 s
+[Task  6/25]  Current/Best:   13.68/  22.82 GFLOPS | Progress: (12/20) | 11.79 s
+[Task  6/25]  Current/Best:   23.06/  23.06 GFLOPS | Progress: (16/20) | 14.11 s
+[Task  6/25]  Current/Best:    5.70/  23.06 GFLOPS | Progress: (20/20) | 17.75 s Done.
 
 [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  7/25]  Current/Best:   13.87/  13.87 GFLOPS | Progress: (4/20) | 5.74 s
-[Task  7/25]  Current/Best:   19.00/  19.00 GFLOPS | Progress: (8/20) | 8.09 s
-[Task  7/25]  Current/Best:    5.12/  19.00 GFLOPS | Progress: (12/20) | 10.95 s
-[Task  7/25]  Current/Best:   13.03/  19.57 GFLOPS | Progress: (16/20) | 13.77 s
-[Task  7/25]  Current/Best:   21.52/  21.52 GFLOPS | Progress: (20/20) | 16.67 s Done.
+[Task  7/25]  Current/Best:   18.70/  18.70 GFLOPS | Progress: (4/20) | 5.47 s
+[Task  7/25]  Current/Best:   21.14/  21.14 GFLOPS | Progress: (8/20) | 7.94 s
+[Task  7/25]  Current/Best:   17.40/  21.14 GFLOPS | Progress: (12/20) | 11.06 s
+[Task  7/25]  Current/Best:    8.46/  21.14 GFLOPS | Progress: (16/20) | 14.84 s
+[Task  7/25]  Current/Best:   11.32/  21.14 GFLOPS | Progress: (20/20) | 17.98 s Done.
 
 [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  8/25]  Current/Best:   13.79/  13.79 GFLOPS | Progress: (4/20) | 11.81 s
-[Task  8/25]  Current/Best:   13.16/  13.79 GFLOPS | Progress: (8/20) | 15.03 s
-[Task  8/25]  Current/Best:   12.07/  13.79 GFLOPS | Progress: (12/20) | 18.43 s
-[Task  8/25]  Current/Best:   17.04/  17.04 GFLOPS | Progress: (16/20) | 25.31 s
-[Task  8/25]  Current/Best:   16.02/  17.04 GFLOPS | Progress: (20/20) | 27.69 s Done.
-
-[Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task  9/25]  Current/Best:    9.45/  23.19 GFLOPS | Progress: (4/20) | 7.32 s
-[Task  9/25]  Current/Best:   20.35/  23.19 GFLOPS | Progress: (8/20) | 18.43 s
-[Task  9/25]  Current/Best:   19.30/  23.19 GFLOPS | Progress: (12/20) | 29.50 s
-[Task  9/25]  Current/Best:   18.90/  23.19 GFLOPS | Progress: (16/20) | 32.36 s
-[Task  9/25]  Current/Best:   12.40/  23.19 GFLOPS | Progress: (20/20) | 34.82 s Done.
+[Task  8/25]  Current/Best:    7.64/  14.08 GFLOPS | Progress: (4/20) | 8.89 s
+[Task  8/25]  Current/Best:    7.45/  14.08 GFLOPS | Progress: (8/20) | 20.83 s
+[Task  8/25]  Current/Best:   17.30/  17.30 GFLOPS | Progress: (12/20) | 24.79 s
+[Task  8/25]  Current/Best:    2.58/  17.30 GFLOPS | Progress: (16/20) | 28.41 s
+[Task  8/25]  Current/Best:    8.58/  17.30 GFLOPS | Progress: (20/20) | 37.36 s
+[Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
+
+[Task  9/25]  Current/Best:   16.25/  16.25 GFLOPS | Progress: (4/20) | 7.26 s
+[Task  9/25]  Current/Best:   17.85/  18.54 GFLOPS | Progress: (8/20) | 9.18 s
+[Task  9/25]  Current/Best:   16.27/  18.54 GFLOPS | Progress: (12/20) | 13.16 s
+[Task  9/25]  Current/Best:   12.88/  18.54 GFLOPS | Progress: (16/20) | 15.34 s
+[Task  9/25]  Current/Best:   17.06/  22.84 GFLOPS | Progress: (20/20) | 19.34 s Done.
 
 [Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 10/25]  Current/Best:   11.09/  16.65 GFLOPS | Progress: (4/20) | 5.63 s
-[Task 10/25]  Current/Best:   13.74/  16.65 GFLOPS | Progress: (8/20) | 8.58 s
-[Task 10/25]  Current/Best:    7.18/  16.65 GFLOPS | Progress: (12/20) | 11.96 s
-[Task 10/25]  Current/Best:   12.65/  16.65 GFLOPS | Progress: (16/20) | 14.01 s
-[Task 10/25]  Current/Best:   11.77/  20.15 GFLOPS | Progress: (20/20) | 15.74 s Done.
+[Task 10/25]  Current/Best:   13.06/  21.21 GFLOPS | Progress: (4/20) | 4.58 s
+[Task 10/25]  Current/Best:   16.01/  21.21 GFLOPS | Progress: (8/20) | 7.02 s
+[Task 10/25]  Current/Best:   16.07/  21.21 GFLOPS | Progress: (12/20) | 8.77 s
+[Task 10/25]  Current/Best:   13.31/  21.21 GFLOPS | Progress: (16/20) | 11.19 s
+[Task 10/25]  Current/Best:   10.65/  21.21 GFLOPS | Progress: (20/20) | 14.89 s Done.
 
 [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 11/25]  Current/Best:   10.09/  23.71 GFLOPS | Progress: (4/20) | 5.54 s
-[Task 11/25]  Current/Best:   18.87/  23.71 GFLOPS | Progress: (8/20) | 7.85 s
-[Task 11/25]  Current/Best:   12.35/  23.71 GFLOPS | Progress: (12/20) | 11.12 s
-[Task 11/25]  Current/Best:    7.13/  23.71 GFLOPS | Progress: (16/20) | 13.65 s
-[Task 11/25]  Current/Best:   17.85/  23.71 GFLOPS | Progress: (20/20) | 16.45 s Done.
+[Task 11/25]  Current/Best:    6.16/   9.97 GFLOPS | Progress: (4/20) | 5.55 s
+[Task 11/25]  Current/Best:   20.67/  21.76 GFLOPS | Progress: (8/20) | 8.37 s
+[Task 11/25]  Current/Best:   19.65/  21.76 GFLOPS | Progress: (12/20) | 10.49 s
+[Task 11/25]  Current/Best:   14.86/  21.76 GFLOPS | Progress: (16/20) | 13.32 s
+[Task 11/25]  Current/Best:   13.09/  22.98 GFLOPS | Progress: (20/20) | 15.29 s Done.
 
 [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 12/25]  Current/Best:   13.55/  13.70 GFLOPS | Progress: (4/20) | 5.51 s
-[Task 12/25]  Current/Best:   12.41/  21.47 GFLOPS | Progress: (8/20) | 8.02 s
-[Task 12/25]  Current/Best:   11.13/  21.47 GFLOPS | Progress: (12/20) | 10.20 s
-[Task 12/25]  Current/Best:    4.64/  21.47 GFLOPS | Progress: (16/20) | 15.02 s
-[Task 12/25]  Current/Best:   11.57/  21.47 GFLOPS | Progress: (20/20) | 18.05 s Done.
+[Task 12/25]  Current/Best:   12.18/  19.28 GFLOPS | Progress: (4/20) | 5.19 s
+[Task 12/25]  Current/Best:    6.18/  19.28 GFLOPS | Progress: (8/20) | 7.94 s
+[Task 12/25]  Current/Best:   15.95/  19.28 GFLOPS | Progress: (12/20) | 11.18 s
+[Task 12/25]  Current/Best:   14.99/  19.28 GFLOPS | Progress: (16/20) | 13.69 s
+[Task 12/25]  Current/Best:   12.16/  19.28 GFLOPS | Progress: (20/20) | 16.28 s Done.
 
 [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 13/25]  Current/Best:   20.56/  20.56 GFLOPS | Progress: (4/20) | 6.30 s
-[Task 13/25]  Current/Best:    3.10/  20.56 GFLOPS | Progress: (8/20) | 9.67 s
-[Task 13/25]  Current/Best:   12.04/  21.47 GFLOPS | Progress: (12/20) | 13.66 s
-[Task 13/25]  Current/Best:    6.74/  22.13 GFLOPS | Progress: (16/20) | 17.64 s
-[Task 13/25]  Current/Best:   10.00/  22.13 GFLOPS | Progress: (20/20) | 20.99 s Done.
+[Task 13/25]  Current/Best:   18.94/  19.05 GFLOPS | Progress: (4/20) | 6.37 s
+[Task 13/25]  Current/Best:   15.68/  19.05 GFLOPS | Progress: (8/20) | 8.78 s
+[Task 13/25]  Current/Best:   12.75/  19.05 GFLOPS | Progress: (12/20) | 12.15 s
+[Task 13/25]  Current/Best:   18.31/  19.84 GFLOPS | Progress: (16/20) | 14.59 s
+[Task 13/25]  Current/Best:   21.66/  21.66 GFLOPS | Progress: (20/20) | 18.28 s Done.
 
 [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 14/25]  Current/Best:   11.47/  11.47 GFLOPS | Progress: (4/20) | 4.96 s
-[Task 14/25]  Current/Best:    5.38/  16.64 GFLOPS | Progress: (8/20) | 16.02 s
-[Task 14/25]  Current/Best:   15.10/  16.64 GFLOPS | Progress: (12/20) | 22.00 s
-[Task 14/25]  Current/Best:   15.92/  16.64 GFLOPS | Progress: (16/20) | 26.46 s
-[Task 14/25]  Current/Best:   13.02/  18.82 GFLOPS | Progress: (20/20) | 28.31 s Done.
+[Task 14/25]  Current/Best:   10.29/  18.34 GFLOPS | Progress: (4/20) | 4.86 s
+[Task 14/25]  Current/Best:   14.65/  18.34 GFLOPS | Progress: (8/20) | 7.19 s
+[Task 14/25]  Current/Best:   12.62/  18.34 GFLOPS | Progress: (12/20) | 12.95 s
+[Task 14/25]  Current/Best:   11.14/  18.34 GFLOPS | Progress: (16/20) | 15.48 s
+[Task 14/25]  Current/Best:   12.09/  20.44 GFLOPS | Progress: (20/20) | 20.85 s Done.
 
 [Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 15/25]  Current/Best:    8.81/  10.63 GFLOPS | Progress: (4/20) | 10.92 s
-[Task 15/25]  Current/Best:   15.71/  15.71 GFLOPS | Progress: (8/20) | 18.15 s
-[Task 15/25]  Current/Best:    6.82/  15.71 GFLOPS | Progress: (12/20) | 26.03 s
-[Task 15/25]  Current/Best:    7.17/  18.02 GFLOPS | Progress: (16/20) | 27.78 s
-[Task 15/25]  Current/Best:   15.84/  19.83 GFLOPS | Progress: (20/20) | 31.33 s Done.
-
+[Task 15/25]  Current/Best:   16.65/  19.00 GFLOPS | Progress: (4/20) | 6.72 s
+[Task 15/25]  Current/Best:   14.01/  20.94 GFLOPS | Progress: (8/20) | 8.50 s
+[Task 15/25]  Current/Best:   15.36/  20.96 GFLOPS | Progress: (12/20) | 10.86 s
+[Task 15/25]  Current/Best:   16.47/  20.96 GFLOPS | Progress: (16/20) | 13.44 s
+[Task 15/25]  Current/Best:   15.44/  20.96 GFLOPS | Progress: (20/20) | 24.49 s
 [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 16/25]  Current/Best:   19.41/  19.41 GFLOPS | Progress: (4/20) | 4.83 s
-[Task 16/25]  Current/Best:    7.81/  19.41 GFLOPS | Progress: (8/20) | 6.67 s
-[Task 16/25]  Current/Best:    3.05/  19.41 GFLOPS | Progress: (12/20) | 8.57 s
-[Task 16/25]  Current/Best:   20.51/  20.51 GFLOPS | Progress: (16/20) | 11.75 s
-[Task 16/25]  Current/Best:   18.18/  20.51 GFLOPS | Progress: (20/20) | 14.26 s Done.
+[Task 16/25]  Current/Best:   17.29/  17.29 GFLOPS | Progress: (4/20) | 5.77 s
+[Task 16/25]  Current/Best:   18.35/  18.35 GFLOPS | Progress: (8/20) | 7.91 s
+[Task 16/25]  Current/Best:    6.14/  18.35 GFLOPS | Progress: (12/20) | 10.09 s
+[Task 16/25]  Current/Best:   13.27/  18.85 GFLOPS | Progress: (16/20) | 12.35 s
+[Task 16/25]  Current/Best:   14.53/  18.85 GFLOPS | Progress: (20/20) | 14.85 s Done.
 
 [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 17/25]  Current/Best:    5.52/  23.59 GFLOPS | Progress: (4/20) | 5.07 s
-[Task 17/25]  Current/Best:   22.78/  23.59 GFLOPS | Progress: (8/20) | 6.91 s
-[Task 17/25]  Current/Best:   11.79/  23.59 GFLOPS | Progress: (12/20) | 10.22 s
-[Task 17/25]  Current/Best:    7.29/  23.59 GFLOPS | Progress: (16/20) | 12.45 s
-[Task 17/25]  Current/Best:   21.90/  23.59 GFLOPS | Progress: (20/20) | 15.23 s Done.
+[Task 17/25]  Current/Best:   20.55/  20.55 GFLOPS | Progress: (4/20) | 6.07 s
+[Task 17/25]  Current/Best:    5.37/  20.55 GFLOPS | Progress: (8/20) | 9.19 s
+[Task 17/25]  Current/Best:   17.99/  20.78 GFLOPS | Progress: (12/20) | 12.12 s
+[Task 17/25]  Current/Best:    8.19/  20.78 GFLOPS | Progress: (16/20) | 15.25 s
+[Task 17/25]  Current/Best:    6.21/  20.78 GFLOPS | Progress: (20/20) | 18.26 s Done.
 
 [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 18/25]  Current/Best:    5.42/  15.83 GFLOPS | Progress: (4/20) | 8.93 s
-[Task 18/25]  Current/Best:   15.36/  15.83 GFLOPS | Progress: (8/20) | 11.33 s
-[Task 18/25]  Current/Best:    6.81/  15.83 GFLOPS | Progress: (12/20) | 18.08 s
-[Task 18/25]  Current/Best:   18.66/  18.66 GFLOPS | Progress: (16/20) | 21.02 s
-[Task 18/25]  Current/Best:    9.97/  23.94 GFLOPS | Progress: (20/20) | 23.32 s Done.
+[Task 18/25]  Current/Best:   16.11/  16.11 GFLOPS | Progress: (4/20) | 7.68 s
+[Task 18/25]  Current/Best:   10.38/  18.73 GFLOPS | Progress: (8/20) | 10.31 s
+[Task 18/25]  Current/Best:   11.96/  23.57 GFLOPS | Progress: (12/20) | 13.04 s
+[Task 18/25]  Current/Best:    2.96/  23.57 GFLOPS | Progress: (16/20) | 20.68 s
+[Task 18/25]  Current/Best:    7.83/  23.57 GFLOPS | Progress: (20/20) | 26.39 s Done.
 
 [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 19/25]  Current/Best:   21.17/  21.17 GFLOPS | Progress: (4/20) | 7.42 s
-[Task 19/25]  Current/Best:   20.07/  21.17 GFLOPS | Progress: (8/20) | 10.77 s
-[Task 19/25]  Current/Best:   20.17/  21.17 GFLOPS | Progress: (12/20) | 13.94 s
-[Task 19/25]  Current/Best:    9.32/  21.17 GFLOPS | Progress: (16/20) | 18.92 s
-[Task 19/25]  Current/Best:   10.42/  21.17 GFLOPS | Progress: (20/20) | 23.86 s Done.
+[Task 19/25]  Current/Best:   12.67/  18.47 GFLOPS | Progress: (4/20) | 6.62 s
+[Task 19/25]  Current/Best:   13.49/  18.47 GFLOPS | Progress: (8/20) | 9.64 s
+[Task 19/25]  Current/Best:    6.16/  18.47 GFLOPS | Progress: (12/20) | 12.81 s
+[Task 19/25]  Current/Best:   13.04/  18.90 GFLOPS | Progress: (16/20) | 15.64 s
+[Task 19/25]  Current/Best:    7.84/  18.90 GFLOPS | Progress: (20/20) | 20.68 s Done.
 
 [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 20/25]  Current/Best:    7.33/  12.34 GFLOPS | Progress: (4/20) | 15.05 s
-[Task 20/25]  Current/Best:   16.52/  16.52 GFLOPS | Progress: (8/20) | 18.60 s
-[Task 20/25]  Current/Best:    1.58/  16.97 GFLOPS | Progress: (12/20) | 23.99 s
-[Task 20/25]  Current/Best:   18.44/  18.44 GFLOPS | Progress: (16/20) | 26.72 s
-[Task 20/25]  Current/Best:   20.21/  20.21 GFLOPS | Progress: (20/20) | 30.75 s
+[Task 20/25]  Current/Best:   10.86/  18.12 GFLOPS | Progress: (4/20) | 5.37 s
+[Task 20/25]  Current/Best:   20.11/  20.11 GFLOPS | Progress: (8/20) | 8.14 s
+[Task 20/25]  Current/Best:   18.96/  20.11 GFLOPS | Progress: (12/20) | 11.87 s
+[Task 20/25]  Current/Best:   12.31/  20.11 GFLOPS | Progress: (16/20) | 14.92 s
+[Task 20/25]  Current/Best:   14.34/  20.11 GFLOPS | Progress: (20/20) | 26.50 s
 [Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 21/25]  Current/Best:   21.15/  21.15 GFLOPS | Progress: (4/20) | 7.30 s
-[Task 21/25]  Current/Best:    9.86/  21.15 GFLOPS | Progress: (8/20) | 18.56 s
-[Task 21/25]  Current/Best:   22.11/  22.11 GFLOPS | Progress: (12/20) | 21.89 s
-[Task 21/25]  Current/Best:   12.72/  22.11 GFLOPS | Progress: (16/20) | 29.54 s
-[Task 21/25]  Current/Best:    8.10/  22.11 GFLOPS | Progress: (20/20) | 40.76 s
+[Task 21/25]  Current/Best:    5.25/  18.58 GFLOPS | Progress: (4/20) | 6.16 s
+[Task 21/25]  Current/Best:   18.34/  23.22 GFLOPS | Progress: (8/20) | 8.04 s
+[Task 21/25]  Current/Best:    9.99/  23.22 GFLOPS | Progress: (12/20) | 13.93 s
+[Task 21/25]  Current/Best:   11.44/  23.22 GFLOPS | Progress: (16/20) | 16.18 s
+[Task 21/25]  Current/Best:   18.63/  23.22 GFLOPS | Progress: (20/20) | 27.46 s
 [Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 22/25]  Current/Best:    9.29/  16.51 GFLOPS | Progress: (4/20) | 5.87 s
-[Task 22/25]  Current/Best:   10.39/  16.51 GFLOPS | Progress: (8/20) | 7.69 s
-[Task 22/25]  Current/Best:    8.54/  19.41 GFLOPS | Progress: (12/20) | 10.01 s
-[Task 22/25]  Current/Best:   11.30/  21.49 GFLOPS | Progress: (16/20) | 11.83 s
-[Task 22/25]  Current/Best:   16.61/  21.49 GFLOPS | Progress: (20/20) | 13.45 s Done.
+[Task 22/25]  Current/Best:    4.44/  18.05 GFLOPS | Progress: (4/20) | 6.16 s
+[Task 22/25]  Current/Best:    6.79/  18.05 GFLOPS | Progress: (8/20) | 11.46 s
+[Task 22/25]  Current/Best:   20.23/  20.72 GFLOPS | Progress: (12/20) | 14.41 s
+[Task 22/25]  Current/Best:    9.89/  20.72 GFLOPS | Progress: (16/20) | 17.31 s
+[Task 22/25]  Current/Best:   15.06/  20.72 GFLOPS | Progress: (20/20) | 19.25 s Done.
 
 [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 23/25]  Current/Best:   18.11/  22.44 GFLOPS | Progress: (4/20) | 6.84 s
-[Task 23/25]  Current/Best:   10.10/  22.44 GFLOPS | Progress: (8/20) | 10.24 s
-[Task 23/25]  Current/Best:   19.19/  22.44 GFLOPS | Progress: (12/20) | 13.97 s
-[Task 23/25]  Current/Best:   10.75/  22.44 GFLOPS | Progress: (16/20) | 17.30 s
-[Task 23/25]  Current/Best:   11.97/  22.44 GFLOPS | Progress: (20/20) | 20.09 s Done.
-
-[Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 24/25]  Current/Best:    3.65/   3.71 GFLOPS | Progress: (4/20) | 13.48 s Done.
+[Task 23/25]  Current/Best:   19.17/  20.68 GFLOPS | Progress: (4/20) | 5.53 s
+[Task 23/25]  Current/Best:   10.41/  20.68 GFLOPS | Progress: (8/20) | 12.05 s
+[Task 23/25]  Current/Best:   10.58/  22.34 GFLOPS | Progress: (12/20) | 14.66 s
+[Task 23/25]  Current/Best:   18.73/  22.54 GFLOPS | Progress: (16/20) | 17.79 s
+[Task 23/25]  Current/Best:    6.34/  22.54 GFLOPS | Progress: (20/20) | 20.56 s Done.
+
+[Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
+ Done.
  Done.
 
-[Task 24/25]  Current/Best:    2.11/   3.71 GFLOPS | Progress: (8/20) | 23.81 s
-[Task 24/25]  Current/Best:    4.18/   4.18 GFLOPS | Progress: (12/20) | 34.84 s
-[Task 24/25]  Current/Best:    8.26/  10.17 GFLOPS | Progress: (16/20) | 45.83 s
-[Task 24/25]  Current/Best:    4.08/  10.17 GFLOPS | Progress: (20/20) | 50.80 s
+[Task 24/25]  Current/Best:    5.48/   5.48 GFLOPS | Progress: (4/20) | 13.60 s
+[Task 24/25]  Current/Best:    8.57/   8.57 GFLOPS | Progress: (8/20) | 16.98 s
+[Task 24/25]  Current/Best:    7.24/   8.57 GFLOPS | Progress: (12/20) | 20.04 s
+[Task 24/25]  Current/Best:    3.26/   8.57 GFLOPS | Progress: (16/20) | 30.73 s
+[Task 24/25]  Current/Best:    5.95/   8.57 GFLOPS | Progress: (20/20) | 33.87 s
 [Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 25/25]  Current/Best:    5.72/   9.56 GFLOPS | Progress: (4/20) | 4.21 s
-[Task 25/25]  Current/Best:    2.89/   9.71 GFLOPS | Progress: (8/20) | 9.47 s
-[Task 25/25]  Current/Best:    5.74/   9.71 GFLOPS | Progress: (12/20) | 11.13 s
-[Task 25/25]  Current/Best:    1.44/   9.71 GFLOPS | Progress: (16/20) | 17.17 s
-[Task 25/25]  Current/Best:    6.92/   9.71 GFLOPS | Progress: (20/20) | 24.19 s Done.
+[Task 25/25]  Current/Best:    1.55/   5.15 GFLOPS | Progress: (4/20) | 9.71 s
+[Task 25/25]  Current/Best:    4.78/   5.80 GFLOPS | Progress: (8/20) | 20.70 s
+[Task 25/25]  Current/Best:    2.98/   8.12 GFLOPS | Progress: (12/20) | 31.69 s
+[Task 25/25]  Current/Best:    6.64/   8.12 GFLOPS | Progress: (16/20) | 34.47 s
+[Task 25/25]  Current/Best:    7.61/   8.12 GFLOPS | Progress: (20/20) | 45.43 s
 </pre></div>
 </div>
 <p>The output from this tuning process will look something like this:</p>
@@ -976,6 +976,7 @@ model using optimized operators to speed up our computations.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Done.
+Done.
 </pre></div>
 </div>
 <p>Verify that the optimized model runs and produces the same results:</p>
@@ -993,7 +994,7 @@ model using optimized operators to speed up our computations.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>class=&#39;n02123045 tabby, tabby cat&#39; with probability=0.621104
-class=&#39;n02123159 tiger cat&#39; with probability=0.356378
+class=&#39;n02123159 tiger cat&#39; with probability=0.356377
 class=&#39;n02124075 Egyptian cat&#39; with probability=0.019712
 class=&#39;n02129604 tiger, Panthera tigris&#39; with probability=0.001215
 class=&#39;n04040759 radiator&#39; with probability=0.000262
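
A note for readers following the diff rather than the rendered page: the class/probability lines above are produced by running the compiled module on the preprocessed image and post-processing with a softmax. A minimal sketch of that verification step follows; it assumes the GraphModule `module`, the preprocessed `img_data`, the input name "data" and the ImageNet `labels` list from earlier in autotvm_relay_x86.py.

    import numpy as np
    from scipy.special import softmax

    # Run the compiled model on the preprocessed image and report the top-5
    # classes with their probabilities.
    module.set_input("data", img_data)
    module.run()
    scores = softmax(module.get_output(0).numpy()[0])
    for rank in np.argsort(scores)[::-1][:5]:
        print("class='%s' with probability=%f" % (labels[rank], scores[rank]))
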
@@ -1030,8 +1031,8 @@ improvement in comparing the optimized model to the unoptimized model.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;unoptimized: </span><span class="si">%s</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="p">(</span><a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">unoptimized</span></a><span class="p">))</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>optimized: {&#39;mean&#39;: 402.6041530506336, &#39;median&#39;: 401.1850734008476, &#39;std&#39;: 3.8736724392891415}
-unoptimized: {&#39;mean&#39;: 487.7951962596853, &#39;median&#39;: 488.0434418999357, &#39;std&#39;: 1.7243702626508126}
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>optimized: {&#39;mean&#39;: 408.9835010492243, &#39;median&#39;: 408.2894904000568, &#39;std&#39;: 2.957360840474384}
+unoptimized: {&#39;mean&#39;: 493.4093647796544, &#39;median&#39;: 492.87042814830784, &#39;std&#39;: 1.429450833251269}
 </pre></div>
 </div>
 </div>
@@ -1045,7 +1046,7 @@ models.</p>
 <p>Here we presented a simple example using ResNet-50 v2 locally. However, TVM
 supports many more features including cross-compilation, remote execution and
 profiling/benchmarking.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 13 minutes  27.114 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 13 minutes  3.721 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorial-autotvm-relay-x86-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../_downloads/57a45d9bef1af358191e7d50043e652c/autotvm_relay_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">autotvm_relay_x86.py</span></code></a></p>
diff --git a/docs/tutorial/cross_compilation_and_rpc.html b/docs/tutorial/cross_compilation_and_rpc.html
index 59a74a3f39..e7ff1d38d8 100644
--- a/docs/tutorial/cross_compilation_and_rpc.html
+++ b/docs/tutorial/cross_compilation_and_rpc.html
@@ -548,7 +548,7 @@ device and returns the measured cost. Network overhead is excluded.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;</span><span class="si">%g</span><span class="s2"> secs/op&quot;</span> <span class="o">%</span> <span class="n">cost</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>1.193e-07 secs/op
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>1.159e-07 secs/op
 </pre></div>
 </div>
 </div>
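
The secs/op figure above is reported by a time_evaluator call made through an RPC session. For orientation, a self-contained sketch of that flow follows; the host/port, library file name and plain llvm target are placeholders (a real board would use its own target string and RPC address).

    import numpy as np
    import tvm
    from tvm import rpc, te

    # Build a trivial add-one kernel; for a real device this would be
    # cross-compiled with the board's target string instead of "llvm".
    n = te.var("n")
    A = te.placeholder((n,), name="A")
    B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
    s = te.create_schedule(B.op)
    func = tvm.build(s, [A, B], target="llvm", name="add_one")
    func.export_library("add_one.so")

    # Connect to an RPC server, push the library over, and time the kernel
    # on the remote device; network overhead is not included in the result.
    remote = rpc.connect("127.0.0.1", 9090)
    remote.upload("add_one.so")
    rfunc = remote.load_module("add_one.so")

    dev = remote.cpu()
    a = tvm.nd.array(np.random.uniform(size=1024).astype("float32"), dev)
    b = tvm.nd.array(np.zeros(1024, dtype="float32"), dev)
    timer = rfunc.time_evaluator(rfunc.entry_name, dev, number=10)
    print("%g secs/op" % timer(a, b).mean)
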
diff --git a/docs/tutorial/intro_topi.html b/docs/tutorial/intro_topi.html
index b8fa97cc87..f74bc75525 100644
--- a/docs/tutorial/intro_topi.html
+++ b/docs/tutorial/intro_topi.html
@@ -518,7 +518,7 @@ class Module:
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/ir.html#tvm.ir.Array" title="tvm.ir.Array" class="sphx-glr-backref-module-tvm-ir sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">sg</span><span class="o">.</span><span class="n">stages</span></a><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[stage(a, placeholder(a, 0x1af8a720)), stage(b, placeholder(b, 0x1d098870)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax1, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax2, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;)], reduce_axis=[], tag=broadcast, attr [...]
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[stage(a, placeholder(a, 0xae08760)), stage(b, placeholder(b, 0x1ada0170)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax1, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;), T.iter_var(ax2, T.Range(0, 10), &quot;DataPar&quot;, &quot;&quot;)], reduce_axis=[], tag=broadcast, attrs [...]
 </pre></div>
 </div>
 <p>We can test the correctness by comparing with <code class="code docutils literal notranslate"><span class="pre">numpy</span></code> result as follows</p>
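
The stages listed above describe a broadcast addition of a (100, 10, 10) tensor and a (10, 10) tensor. A small sketch of the numpy comparison the sentence refers to could look like the following; it uses a plain te.create_schedule on CPU rather than whichever schedule the tutorial itself selects.

    import numpy as np
    import tvm
    import tvm.testing
    from tvm import te, topi

    a = te.placeholder((100, 10, 10), name="a")
    b = te.placeholder((10, 10), name="b")
    c = topi.add(a, b)  # broadcast add, same operation as the overloaded a + b
    s = te.create_schedule(c.op)
    func = tvm.build(s, [a, b, c], target="llvm")

    dev = tvm.cpu(0)
    a_np = np.random.uniform(size=(100, 10, 10)).astype("float32")
    b_np = np.random.uniform(size=(10, 10)).astype("float32")
    a_nd = tvm.nd.array(a_np, dev)
    b_nd = tvm.nd.array(b_np, dev)
    c_nd = tvm.nd.array(np.zeros((100, 10, 10), dtype="float32"), dev)
    func(a_nd, b_nd, c_nd)
    tvm.testing.assert_allclose(c_nd.numpy(), a_np + b_np, rtol=1e-5)
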
diff --git a/docs/tutorial/sg_execution_times.html b/docs/tutorial/sg_execution_times.html
index 99cf80d761..22bf3acd31 100644
--- a/docs/tutorial/sg_execution_times.html
+++ b/docs/tutorial/sg_execution_times.html
@@ -350,7 +350,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-tutorial-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>17:28.044</strong> total execution time for <strong>tutorial</strong> files:</p>
+<p><strong>16:34.063</strong> total execution time for <strong>tutorial</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -359,46 +359,46 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="autotvm_relay_x86.html#sphx-glr-tutorial-autotvm-relay-x86-py"><span class="std std-ref">Compiling and Optimizing a Model with the Python Interface (AutoTVM)</span></a> (<code class="docutils literal notranslate"><span class="pre">autotvm_relay_x86.py</span></code>)</p></td>
-<td><p>13:27.114</p></td>
+<td><p>13:03.721</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="auto_scheduler_matmul_x86.html#sphx-glr-tutorial-auto-scheduler-matmul-x86-py"><span class="std std-ref">Optimizing Operators with Auto-scheduling</span></a> (<code class="docutils literal notranslate"><span class="pre">auto_scheduler_matmul_x86.py</span></code>)</p></td>
-<td><p>01:50.538</p></td>
+<td><p>01:25.982</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tensor_expr_get_started.html#sphx-glr-tutorial-tensor-expr-get-started-py"><span class="std std-ref">Working with Operators Using Tensor Expression</span></a> (<code class="docutils literal notranslate"><span class="pre">tensor_expr_get_started.py</span></code>)</p></td>
-<td><p>01:00.236</p></td>
+<td><p>01:00.225</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="relay_quick_start.html#sphx-glr-tutorial-relay-quick-start-py"><span class="std std-ref">Quick Start Tutorial for Compiling Deep Learning Models</span></a> (<code class="docutils literal notranslate"><span class="pre">relay_quick_start.py</span></code>)</p></td>
-<td><p>00:41.336</p></td>
+<td><p>00:40.535</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="autotvm_matmul_x86.html#sphx-glr-tutorial-autotvm-matmul-x86-py"><span class="std std-ref">Optimizing Operators with Schedule Templates and AutoTVM</span></a> (<code class="docutils literal notranslate"><span class="pre">autotvm_matmul_x86.py</span></code>)</p></td>
-<td><p>00:26.732</p></td>
+<td><p>00:21.563</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="intro_topi.html#sphx-glr-tutorial-intro-topi-py"><span class="std std-ref">Introduction to TOPI</span></a> (<code class="docutils literal notranslate"><span class="pre">intro_topi.py</span></code>)</p></td>
-<td><p>00:00.996</p></td>
+<td><p>00:00.982</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tensor_ir_blitz_course.html#sphx-glr-tutorial-tensor-ir-blitz-course-py"><span class="std std-ref">Blitz Course to TensorIR</span></a> (<code class="docutils literal notranslate"><span class="pre">tensor_ir_blitz_course.py</span></code>)</p></td>
-<td><p>00:00.871</p></td>
+<td><p>00:00.851</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="cross_compilation_and_rpc.html#sphx-glr-tutorial-cross-compilation-and-rpc-py"><span class="std std-ref">Cross Compilation and RPC</span></a> (<code class="docutils literal notranslate"><span class="pre">cross_compilation_and_rpc.py</span></code>)</p></td>
-<td><p>00:00.221</p></td>
+<td><p>00:00.204</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="uma.html#sphx-glr-tutorial-uma-py"><span class="std std-ref">Making your Hardware Accelerator TVM-ready with UMA</span></a> (<code class="docutils literal notranslate"><span class="pre">uma.py</span></code>)</p></td>
 <td><p>00:00.000</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="tvmc_python.html#sphx-glr-tutorial-tvmc-python-py"><span class="std std-ref">Getting Starting using TVMC Python: a high-level API for TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">tvmc_python.py</span></code>)</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="tvmc_command_line_driver.html#sphx-glr-tutorial-tvmc-command-line-driver-py"><span class="std std-ref">Compiling and Optimizing a Model with TVMC</span></a> (<code class="docutils literal notranslate"><span class="pre">tvmc_command_line_driver.py</span></code>)</p></td>
 <td><p>00:00.000</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="tvmc_command_line_driver.html#sphx-glr-tutorial-tvmc-command-line-driver-py"><span class="std std-ref">Compiling and Optimizing a Model with TVMC</span></a> (<code class="docutils literal notranslate"><span class="pre">tvmc_command_line_driver.py</span></code>)</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="tvmc_python.html#sphx-glr-tutorial-tvmc-python-py"><span class="std std-ref">Getting Starting using TVMC Python: a high-level API for TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">tvmc_python.py</span></code>)</p></td>
 <td><p>00:00.000</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
diff --git a/docs/tutorial/tensor_expr_get_started.html b/docs/tutorial/tensor_expr_get_started.html
index 6e8c7d3d9e..d7b82e091e 100644
--- a/docs/tutorial/tensor_expr_get_started.html
+++ b/docs/tutorial/tensor_expr_get_started.html
@@ -560,7 +560,7 @@ helper function to run a profile of the TVM generated code.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.000007
-naive: 0.000007
+naive: 0.000008
 </pre></div>
 </div>
 </div>
@@ -615,7 +615,7 @@ compile and run this new schedule with the parallel operation applied:</p>
 <span class="n">evaluate_addition</span><span class="p">(</span><span class="n">fadd_parallel</span><span class="p">,</span> <a href="../reference/api/python/target.html#tvm.target.Target" title="tvm.target.Target" class="sphx-glr-backref-module-tvm-target sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">tgt</span></a><span class="p">,</span> <span class="s2">&quot;parallel&quot;</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.h [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallel: 0.000007
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallel: 0.000008
 </pre></div>
 </div>
 </div>
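
The "parallel" measurement above comes from the same element-wise addition scheduled with an outer parallel loop. The sketch below reproduces that idea in isolation; the helper evaluate_addition and the tgt target belong to the tutorial, so a plain llvm target and a direct time_evaluator call stand in for them here.

    import numpy as np
    import tvm
    from tvm import te

    n = te.var("n")
    A = te.placeholder((n,), name="A")
    B = te.placeholder((n,), name="B")
    C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")

    s = te.create_schedule(C.op)
    s[C].parallel(C.op.axis[0])  # spread the single loop across CPU threads
    fadd_parallel = tvm.build(s, [A, B, C], target="llvm", name="myadd_parallel")

    dev = tvm.device("llvm", 0)
    a = tvm.nd.array(np.random.uniform(size=32768).astype("float32"), dev)
    b = tvm.nd.array(np.random.uniform(size=32768).astype("float32"), dev)
    c = tvm.nd.array(np.zeros(32768, dtype="float32"), dev)
    timer = fadd_parallel.time_evaluator(fadd_parallel.entry_name, dev, number=10)
    print("parallel: %f" % timer(a, b, c).mean)
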
@@ -691,10 +691,10 @@ class Module:
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Operator                  Timing             Performance
-   numpy    7.179829990491271e-06                    1.0
-   naive              6.6888e-06      0.9316098025800646
-parallel              6.9881e-06      0.9732960264038019
-  vector             3.92135e-05       5.461619572041826
+   numpy    7.042039942461997e-06                    1.0
+   naive              7.8677e-06       1.117247284066006
+parallel              8.1401e-06      1.1559292572194793
+  vector             3.92583e-05       5.574847674930219
 </pre></div>
 </div>
 <div class="admonition-code-specialization admonition">
@@ -1010,7 +1010,7 @@ matrix multiplication.</p>
 <span class="n">answer</span> <span class="o">=</span> <span class="n">numpy</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">a</span><span class="o">.</span><span class="n">numpy</span><span class="p">(),</span> <span class="n">b</span><span class="o">.</span><span class="n">numpy</span><span class="p">())</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.017677
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.018719
 </pre></div>
 </div>
 <p>Now we write a basic matrix multiplication using TVM TE and verify that it
@@ -1051,7 +1051,7 @@ optimizations.</p>
 <span class="n">evaluate_operation</span><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">s</span></a><span class="p">,</span> <span class="p">[</span><a href="../reference/api/python/te.html#tvm.te.Tensor" title="tvm.te.Tensor" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>none: 3.427272
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>none: 3.416357
 </pre></div>
 </div>
 <p>Let’s take a look at the intermediate representation of the operator and
@@ -1115,7 +1115,7 @@ schedule.</p>
 <span class="n">evaluate_operation</span><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">s</span></a><span class="p">,</span> <span class="p">[</span><a href="../reference/api/python/te.html#tvm.te.Tensor" title="tvm.te.Tensor" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>blocking: 0.298237
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>blocking: 0.297736
 </pre></div>
 </div>
 <p>By reordering the computation to take advantage of caching, you should see a
@@ -1164,7 +1164,7 @@ already cache friendly from our previous optimizations.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vectorization: 0.278766
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vectorization: 0.283784
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1213,7 +1213,7 @@ more cache friendly.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>loop permutation: 0.116643
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>loop permutation: 0.117506
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
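
The blocking, vectorization and loop permutation timings above correspond to successive rewrites of the same matmul schedule. A compact sketch of those three steps, written against the usual te.Schedule API, is given below; the 1024x1024 sizes, the block size bn and the split factor are taken as reasonable defaults rather than the exact values of this run.

    import tvm
    from tvm import te

    M = K = N = 1024
    bn = 32        # block (tile) size
    kfactor = 4    # split factor for the reduction axis

    k = te.reduce_axis((0, K), "k")
    A = te.placeholder((M, K), name="A")
    B = te.placeholder((K, N), name="B")
    C = te.compute((M, N), lambda m, n: te.sum(A[m, k] * B[k, n], axis=k), name="C")

    s = te.create_schedule(C.op)
    # blocking: tile the output into bn x bn blocks for cache reuse
    mo, no, mi, ni = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
    (kaxis,) = s[C].op.reduce_axis
    ko, ki = s[C].split(kaxis, factor=kfactor)
    # loop permutation: keep the innermost axis contiguous in memory
    s[C].reorder(mo, no, ko, mi, ki, ni)
    # vectorization: map the innermost loop onto SIMD instructions
    s[C].vectorize(ni)
    print(tvm.lower(s, [A, B, C], simple_mode=True))
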
@@ -1283,7 +1283,7 @@ optimized schedule.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>array packing: 0.106191
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>array packing: 0.106528
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1349,7 +1349,7 @@ to `C</cite> when all the block results are ready.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>block caching: 0.111985
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>block caching: 0.111828
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1406,7 +1406,7 @@ of thread-level parallelization.</p>
 <span class="nb">print</span><span class="p">(</span><a href="../reference/api/python/driver.html#tvm.lower" title="tvm.lower" class="sphx-glr-backref-module-tvm sphx-glr-backref-type-py-function"><span class="n">tvm</span><span class="o">.</span><span class="n">lower</span></a><span class="p">(</span><a href="../reference/api/python/te.html#tvm.te.Schedule" title="tvm.te.Schedule" class="sphx-glr-backref-module-tvm-te sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallelization: 0.132418
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>parallelization: 0.132464
 # from tvm.script import ir as I
 # from tvm.script import tir as T
 
@@ -1459,13 +1459,13 @@ working, we can compare the results.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>        Operator                  Timing             Performance
-            none            3.4272720055                     1.0
-        blocking            0.2982374362     0.08701889891476254
-   vectorization            0.2787663073     0.08133766647428126
-loop permutation     0.11664279019999998     0.03403371253078675
-   array packing     0.10619090550000002     0.03098409035804206
-   block caching     0.11198485169999998     0.03267463204562974
- parallelization     0.13241790800000003     0.03863653301736749
+            none            3.4163570219                     1.0
+        blocking            0.2977356527     0.08715004046456917
+   vectorization            0.2837844282     0.08306638515261905
+loop permutation             0.117506043     0.03439512973812358
+   array packing            0.1065281419    0.031181794296415363
+   block caching            0.1118275401     0.03273297825231607
+ parallelization             0.132463913    0.038773439705177666
 </pre></div>
 </div>
 <p>Note that the outputs on the web page reflect the running times on a
@@ -1497,7 +1497,7 @@ is</p>
 you can build generic templates of the matrix multiplication and other
 operations with tunable parameters that allows you to automatically optimize
 the computation for specific platforms.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  0.236 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  0.225 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorial-tensor-expr-get-started-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../_downloads/40a01cffb015a67aaec0fad7e27cf80d/tensor_expr_get_started.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tensor_expr_get_started.py</span></code></a></p>