Posted to commits@tvm.apache.org by tq...@apache.org on 2022/09/30 13:48:52 UTC

[tvm-site] branch asf-site updated: deploying docs (apache/tvm@d4bf9ecf5524d265916ac7b860b0027f5eee5c49)

This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/tvm-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 7033f7b865 deploying docs (apache/tvm@d4bf9ecf5524d265916ac7b860b0027f5eee5c49)
7033f7b865 is described below

commit 7033f7b86595261b69955ee0faa66abe32eb5012
Author: tvm-bot <95...@users.noreply.github.com>
AuthorDate: Fri Sep 30 13:48:43 2022 +0000

    deploying docs (apache/tvm@d4bf9ecf5524d265916ac7b860b0027f5eee5c49)
---
 docs/_images/sphx_glr_micro_train_001.png          |  Bin 335230 -> 327199 bytes
 docs/_images/sphx_glr_micro_train_thumb.png        |  Bin 23974 -> 22934 bytes
 .../how_to/compile_models/from_darknet.rst.txt     |    2 +-
 .../how_to/compile_models/from_keras.rst.txt       |    2 +-
 .../how_to/compile_models/from_mxnet.rst.txt       |    2 +-
 .../how_to/compile_models/from_oneflow.rst.txt     |    2 +-
 .../how_to/compile_models/from_pytorch.rst.txt     |    2 +-
 .../how_to/compile_models/from_tensorflow.rst.txt  |    2 +-
 .../compile_models/sg_execution_times.rst.txt      |   22 +-
 .../deploy_models/deploy_model_on_android.rst.txt  |    2 +-
 .../deploy_object_detection_pytorch.rst.txt        |    4 +-
 .../deploy_models/deploy_prequantized.rst.txt      |    6 +-
 .../deploy_prequantized_tflite.rst.txt             |    4 +-
 .../how_to/deploy_models/deploy_quantized.rst.txt  |    2 +-
 .../deploy_models/deploy_ssd_gluoncv.rst.txt       |    4 +-
 .../deploy_models/sg_execution_times.rst.txt       |   20 +-
 .../extend_tvm/bring_your_own_datatypes.rst.txt    |    2 +-
 .../how_to/extend_tvm/sg_execution_times.rst.txt   |   10 +-
 .../how_to/extend_tvm/use_pass_instrument.rst.txt  |   16 +-
 .../optimize_operators/opt_conv_cuda.rst.txt       |    2 +-
 .../optimize_operators/opt_conv_tensorcore.rst.txt |    2 +-
 .../how_to/optimize_operators/opt_gemm.rst.txt     |   16 +-
 .../optimize_operators/sg_execution_times.rst.txt  |    8 +-
 .../sg_execution_times.rst.txt                     |   14 +-
 .../tune_conv2d_layer_cuda.rst.txt                 | 1620 ++++++--------------
 .../tune_network_cuda.rst.txt                      |    2 +-
 .../tune_network_x86.rst.txt                       |    4 +-
 .../tune_sparse_x86.rst.txt                        |   28 +-
 .../tune_with_autotvm/sg_execution_times.rst.txt   |    6 +-
 .../tune_with_autotvm/tune_conv2d_cuda.rst.txt     |  314 ++--
 .../tune_with_autotvm/tune_relay_cuda.rst.txt      |    2 +-
 .../work_with_microtvm/micro_autotune.rst.txt      |   16 +-
 .../how_to/work_with_microtvm/micro_train.rst.txt  |   18 +-
 .../work_with_microtvm/sg_execution_times.rst.txt  |   10 +-
 .../work_with_relay/sg_execution_times.rst.txt     |    8 +-
 .../how_to/work_with_schedules/intrin_math.rst.txt |    2 +-
 .../work_with_schedules/sg_execution_times.rst.txt |   14 +-
 .../how_to/work_with_schedules/tensorize.rst.txt   |    2 +-
 .../tutorials/autotvm/sg_execution_times.rst.txt   |    6 +-
 .../vta/tutorials/autotvm/tune_relay_vta.rst.txt   |    2 +-
 .../frontend/deploy_classification.rst.txt         |    2 +-
 .../tutorials/frontend/deploy_detection.rst.txt    |    2 +-
 .../tutorials/frontend/sg_execution_times.rst.txt  |    6 +-
 .../tutorials/optimize/sg_execution_times.rst.txt  |    6 +-
 .../topic/vta/tutorials/sg_execution_times.rst.txt |    6 +-
 .../tutorial/auto_scheduler_matmul_x86.rst.txt     |    6 +-
 docs/_sources/tutorial/autotvm_matmul_x86.rst.txt  |   20 +-
 docs/_sources/tutorial/autotvm_relay_x86.rst.txt   |   56 +-
 .../tutorial/cross_compilation_and_rpc.rst.txt     |    2 +-
 docs/_sources/tutorial/intro_topi.rst.txt          |    4 +-
 docs/_sources/tutorial/relay_quick_start.rst.txt   |    2 +-
 docs/_sources/tutorial/sg_execution_times.rst.txt  |   26 +-
 .../tutorial/tensor_expr_get_started.rst.txt       |   43 +-
 docs/commit_hash                                   |    2 +-
 docs/genindex.html                                 |    2 +
 docs/how_to/compile_models/from_darknet.html       |    2 +-
 docs/how_to/compile_models/from_keras.html         |    2 +-
 docs/how_to/compile_models/from_mxnet.html         |    2 +-
 docs/how_to/compile_models/from_oneflow.html       |   15 +-
 docs/how_to/compile_models/from_pytorch.html       |    7 +-
 docs/how_to/compile_models/from_tensorflow.html    |    2 +-
 docs/how_to/compile_models/sg_execution_times.html |   30 +-
 .../deploy_models/deploy_model_on_android.html     |    2 +-
 .../deploy_object_detection_pytorch.html           |   18 +-
 docs/how_to/deploy_models/deploy_prequantized.html |    6 +-
 .../deploy_models/deploy_prequantized_tflite.html  |    4 +-
 docs/how_to/deploy_models/deploy_quantized.html    |    2 +-
 docs/how_to/deploy_models/deploy_ssd_gluoncv.html  |   36 +-
 docs/how_to/deploy_models/sg_execution_times.html  |   20 +-
 .../extend_tvm/bring_your_own_datatypes.html       |    2 +-
 docs/how_to/extend_tvm/sg_execution_times.html     |   10 +-
 docs/how_to/extend_tvm/use_pass_instrument.html    |   16 +-
 docs/how_to/optimize_operators/opt_conv_cuda.html  |    2 +-
 .../optimize_operators/opt_conv_tensorcore.html    |    2 +-
 docs/how_to/optimize_operators/opt_gemm.html       |   16 +-
 .../optimize_operators/sg_execution_times.html     |    8 +-
 .../sg_execution_times.html                        |   14 +-
 .../tune_conv2d_layer_cuda.html                    | 1620 ++++++--------------
 .../tune_with_autoscheduler/tune_network_cuda.html |    2 +-
 .../tune_with_autoscheduler/tune_network_x86.html  |    4 +-
 .../tune_with_autoscheduler/tune_sparse_x86.html   |   28 +-
 .../tune_with_autotvm/sg_execution_times.html      |    6 +-
 .../how_to/tune_with_autotvm/tune_conv2d_cuda.html |  314 ++--
 docs/how_to/tune_with_autotvm/tune_relay_cuda.html |    2 +-
 docs/how_to/work_with_microtvm/micro_autotune.html |   16 +-
 docs/how_to/work_with_microtvm/micro_train.html    |   16 +-
 .../work_with_microtvm/sg_execution_times.html     |   10 +-
 .../how_to/work_with_relay/sg_execution_times.html |    8 +-
 docs/how_to/work_with_schedules/intrin_math.html   |    2 +-
 .../work_with_schedules/sg_execution_times.html    |   14 +-
 docs/how_to/work_with_schedules/tensorize.html     |    2 +-
 docs/objects.inv                                   |  Bin 23545 -> 23554 bytes
 docs/reference/api/doxygen/builder_8h_source.html  |    2 +-
 .../doxygen/classtvm_1_1CompilationConfigNode.html |    2 +-
 docs/reference/api/doxygen/classtvm_1_1Target.html |    4 +-
 .../classtvm_1_1TargetKindNode-members.html        |    8 +-
 .../api/doxygen/classtvm_1_1TargetKindNode.html    |   22 +-
 .../classtvm_1_1TargetKindNode__coll__graph.svg    |    2 +-
 .../classtvm_1_1TargetKindNode__inherit__graph.svg |    2 +-
 .../classtvm_1_1TargetKindRegEntry-members.html    |    4 +-
 .../doxygen/classtvm_1_1TargetKindRegEntry.html    |   32 +-
 ...classtvm_1_1TargetKindRegEntry__coll__graph.svg |   18 +-
 .../doxygen/classtvm_1_1TargetNode-members.html    |   59 +-
 .../api/doxygen/classtvm_1_1TargetNode.html        |   24 +-
 .../classtvm_1_1TargetNode__coll__graph.svg        |  333 ++--
 .../classtvm_1_1TargetNode__inherit__graph.svg     |  117 +-
 .../api/doxygen/classtvm_1_1VirtualDevice.html     |    2 +-
 .../api/doxygen/classtvm_1_1VirtualDeviceNode.html |    2 +-
 docs/reference/api/doxygen/codegen_8h_source.html  |    2 +-
 .../api/doxygen/compilation__config_8h_source.html |    2 +-
 .../api/doxygen/cuda_2dense_8h_source.html         |    2 +-
 .../api/doxygen/cuda_2injective_8h_source.html     |    2 +-
 .../api/doxygen/cuda_2pooling_8h_source.html       |    2 +-
 .../api/doxygen/cuda_2reduction_8h_source.html     |    2 +-
 .../api/doxygen/cuda_2softmax_8h_source.html       |    2 +-
 docs/reference/api/doxygen/database_8h_source.html |    2 +-
 .../api/doxygen/extracted__task_8h_source.html     |    2 +-
 docs/reference/api/doxygen/functions_d.html        |    4 +-
 docs/reference/api/doxygen/functions_func_g.html   |    7 +-
 docs/reference/api/doxygen/functions_func_s.html   |   10 +-
 docs/reference/api/doxygen/functions_func_t.html   |    6 +-
 docs/reference/api/doxygen/functions_func_u.html   |    2 +-
 docs/reference/api/doxygen/functions_g.html        |    5 +-
 docs/reference/api/doxygen/functions_k.html        |    4 +-
 docs/reference/api/doxygen/functions_s.html        |   10 +-
 docs/reference/api/doxygen/functions_t.html        |    8 +-
 docs/reference/api/doxygen/functions_vars_d.html   |    4 +-
 .../api/doxygen/generic_2default_8h_source.html    |    2 +-
 .../api/doxygen/generic_2extern_8h_source.html     |    2 +-
 .../api/doxygen/generic_2injective_8h_source.html  |    2 +-
 .../api/doxygen/interpreter_8h_source.html         |    2 +-
 .../api/doxygen/op__strategy_8h_source.html        |    2 +-
 .../doxygen/relay_2op__attr__types_8h_source.html  |    2 +-
 .../api/doxygen/rocm_2dense_8h_source.html         |    2 +-
 .../api/doxygen/rocm_2injective_8h_source.html     |    2 +-
 .../api/doxygen/rocm_2pooling_8h_source.html       |    2 +-
 .../api/doxygen/rocm_2reduction_8h_source.html     |    2 +-
 .../api/doxygen/rocm_2softmax_8h_source.html       |    2 +-
 docs/reference/api/doxygen/search/all_10.js        |    2 +-
 docs/reference/api/doxygen/search/all_13.js        |    4 +-
 docs/reference/api/doxygen/search/all_14.js        |   16 +-
 docs/reference/api/doxygen/search/all_15.js        |    8 +-
 docs/reference/api/doxygen/search/all_16.js        |    2 +-
 docs/reference/api/doxygen/search/all_18.js        |    2 +-
 docs/reference/api/doxygen/search/all_5.js         |    3 +-
 docs/reference/api/doxygen/search/all_8.js         |    1 +
 docs/reference/api/doxygen/search/all_c.js         |    2 +-
 docs/reference/api/doxygen/search/all_e.js         |    2 +-
 docs/reference/api/doxygen/search/all_f.js         |    2 +-
 docs/reference/api/doxygen/search/functions_12.js  |    2 +-
 docs/reference/api/doxygen/search/functions_13.js  |    8 +-
 docs/reference/api/doxygen/search/functions_14.js  |    4 +-
 docs/reference/api/doxygen/search/functions_15.js  |    2 +-
 docs/reference/api/doxygen/search/functions_7.js   |    1 +
 docs/reference/api/doxygen/search/functions_d.js   |    2 +-
 docs/reference/api/doxygen/search/functions_e.js   |    2 +-
 docs/reference/api/doxygen/search/variables_4.js   |    3 +-
 .../api/doxygen/search__task_8h_source.html        |    2 +-
 docs/reference/api/doxygen/tag_8h_source.html      |    2 +-
 docs/reference/api/doxygen/target_8h_source.html   |   23 +-
 docs/reference/api/doxygen/target__kind_8h.html    |    4 +-
 .../api/doxygen/target__kind_8h_source.html        |    4 +-
 .../api/doxygen/virtual__device_8h_source.html     |    4 +-
 docs/reference/api/doxygen/x86_2bnn_8h_source.html |    2 +-
 .../api/doxygen/x86_2default_8h_source.html        |    2 +-
 .../api/doxygen/x86_2injective_8h_source.html      |    2 +-
 docs/reference/api/python/auto_scheduler.html      |    4 +-
 docs/reference/api/python/target.html              |   23 +-
 .../api/typedoc/classes/bytestreamreader.html      |   12 +-
 .../api/typedoc/classes/cachedcallstack.html       |   34 +-
 docs/reference/api/typedoc/classes/dldatatype.html |   12 +-
 docs/reference/api/typedoc/classes/dldevice.html   |   10 +-
 .../reference/api/typedoc/classes/environment.html |   12 +-
 docs/reference/api/typedoc/classes/ffilibrary.html |   20 +-
 .../api/typedoc/classes/graphexecutor.html         |   16 +-
 docs/reference/api/typedoc/classes/instance.html   |   40 +-
 docs/reference/api/typedoc/classes/memory.html     |   34 +-
 docs/reference/api/typedoc/classes/module.html     |   10 +-
 docs/reference/api/typedoc/classes/ndarray.html    |   22 +-
 .../api/typedoc/classes/packedfunccell.html        |    6 +-
 docs/reference/api/typedoc/classes/rpcserver.html  |   14 +-
 docs/reference/api/typedoc/classes/scalar.html     |    6 +-
 .../api/typedoc/classes/webgpucontext.html         |   12 +-
 docs/reference/api/typedoc/enums/argtypecode.html  |   30 +-
 .../api/typedoc/enums/aynccallbackcode.html        |    4 +-
 .../api/typedoc/enums/dldatatypecode.html          |    8 +-
 .../api/typedoc/enums/rpcserverstate.html          |   12 +-
 docs/reference/api/typedoc/enums/sizeof.html       |   18 +-
 docs/reference/api/typedoc/index.html              |  112 +-
 .../api/typedoc/interfaces/disposable.html         |    2 +-
 .../api/typedoc/interfaces/functioninfo.html       |    6 +-
 .../api/typedoc/interfaces/libraryprovider.html    |    4 +-
 docs/searchindex.js                                |    2 +-
 .../vta/tutorials/autotvm/sg_execution_times.html  |    6 +-
 .../vta/tutorials/autotvm/tune_relay_vta.html      |    2 +-
 .../tutorials/frontend/deploy_classification.html  |    2 +-
 .../vta/tutorials/frontend/deploy_detection.html   |    2 +-
 .../vta/tutorials/frontend/sg_execution_times.html |    6 +-
 .../vta/tutorials/optimize/sg_execution_times.html |    6 +-
 docs/topic/vta/tutorials/sg_execution_times.html   |    6 +-
 docs/tutorial/auto_scheduler_matmul_x86.html       |    6 +-
 docs/tutorial/autotvm_matmul_x86.html              |   20 +-
 docs/tutorial/autotvm_relay_x86.html               |  266 ++--
 docs/tutorial/cross_compilation_and_rpc.html       |    2 +-
 docs/tutorial/intro_topi.html                      |    4 +-
 docs/tutorial/relay_quick_start.html               |    2 +-
 docs/tutorial/sg_execution_times.html              |   30 +-
 docs/tutorial/tensor_expr_get_started.html         |   39 +-
 208 files changed, 2627 insertions(+), 3750 deletions(-)

diff --git a/docs/_images/sphx_glr_micro_train_001.png b/docs/_images/sphx_glr_micro_train_001.png
index 4730ebaecb..9acba7fd3b 100644
Binary files a/docs/_images/sphx_glr_micro_train_001.png and b/docs/_images/sphx_glr_micro_train_001.png differ
diff --git a/docs/_images/sphx_glr_micro_train_thumb.png b/docs/_images/sphx_glr_micro_train_thumb.png
index 4f63c99e35..fb0f49ab60 100644
Binary files a/docs/_images/sphx_glr_micro_train_thumb.png and b/docs/_images/sphx_glr_micro_train_thumb.png differ
diff --git a/docs/_sources/how_to/compile_models/from_darknet.rst.txt b/docs/_sources/how_to/compile_models/from_darknet.rst.txt
index 1d949c775f..26d5ee7cfb 100644
--- a/docs/_sources/how_to/compile_models/from_darknet.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_darknet.rst.txt
@@ -315,7 +315,7 @@ The process is no different from other examples.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  9.719 seconds)
+   **Total running time of the script:** ( 1 minutes  8.901 seconds)
 
 
 .. _sphx_glr_download_how_to_compile_models_from_darknet.py:
diff --git a/docs/_sources/how_to/compile_models/from_keras.rst.txt b/docs/_sources/how_to/compile_models/from_keras.rst.txt
index 3b5236b8f0..96566a3e76 100644
--- a/docs/_sources/how_to/compile_models/from_keras.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_keras.rst.txt
@@ -228,7 +228,7 @@ Look up prediction top 1 index in 1000 class synset.
  .. code-block:: none
 
     Relay top-1 id: 285, class name: Egyptian cat
-
    1/1 [==============================] - ETA: 0s
    1/1 [==============================] - 1s 990ms/step
+
    1/1 [==============================] - ETA: 0s
    1/1 [==============================] - 1s 946ms/step
     Keras top-1 id: 285, class name: Egyptian cat
 
 
diff --git a/docs/_sources/how_to/compile_models/from_mxnet.rst.txt b/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
index ee399f9825..869b4a1851 100644
--- a/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
@@ -115,7 +115,7 @@ In this section, we download a pretrained imagenet model and classify an image.
 
  .. code-block:: none
 
-    Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip54f42959-1d21-4eb1-ae6d-40e9d77ca8b4 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
+    Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zipa8b93490-dbe1-4848-9780-b3f14fdec2b8 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
     x (1, 3, 224, 224)
 
 
diff --git a/docs/_sources/how_to/compile_models/from_oneflow.rst.txt b/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
index 732e6af660..3f1dd6a000 100644
--- a/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
@@ -116,7 +116,7 @@ Load a pretrained OneFlow model and save model
  .. code-block:: none
 
     Downloading: "https://oneflow-public.oss-cn-beijing.aliyuncs.com/model_zoo/flowvision/classification/ResNet/resnet18.zip" to /workspace/.oneflow/flowvision_cache/resnet18.zip
-
      0%|          | 0.00/41.5M [00:00<?, ?B/s]
     19%|#9        | 7.99M/41.5M [00:00<00:00, 66.2MB/s]
     39%|###8      | 16.0M/41.5M [00:00<00:00, 60.6MB/s]
     58%|#####7    | 24.0M/41.5M [00:00<00:00, 59.7MB/s]
     77%|#######7  | 32.1M/41.5M [00:00<00:00, 67.3MB/s]
     96%|#########6| 40.0M/41.5M [00:00<00:00, 69.0MB/s]
    100%|##########| 41.5M/41.5M [00:00<00:00, 68.1MB/s]
+
      0%|          | 0.00/41.5M [00:00<?, ?B/s]
     15%|#5        | 6.33M/41.5M [00:00<00:00, 42.0MB/s]
     25%|##4       | 10.3M/41.5M [00:00<00:00, 38.7MB/s]
     35%|###5      | 14.7M/41.5M [00:00<00:00, 41.5MB/s]
     45%|####5     | 18.7M/41.5M [00:00<00:00, 37.9MB/s]
     58%|#####7    | 24.0M/41.5M [00:00<00:00, 35.2MB/s]
     71%|#######1  | 29.5M/41.5M [00:00<00:00, 41.1MB/s]
     81%|########  | 33.6M/41.5M [00:00<00:00, 37.3MB/s]
     92%|#########2| 38.3M/41.5M [00:01<00:00, 31.0MB/s]
    100%|##########| 41.5M/41.5M [00:01<00:00, 36.8MB/s]
 
 
 
diff --git a/docs/_sources/how_to/compile_models/from_pytorch.rst.txt b/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
index f1b916bec2..d54a4287c7 100644
--- a/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
@@ -94,7 +94,7 @@ Load a pretrained PyTorch model
  .. code-block:: none
 
     Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
-
      0%|          | 0.00/44.7M [00:00<?, ?B/s]
      6%|6         | 2.87M/44.7M [00:00<00:01, 30.0MB/s]
     13%|#2        | 5.73M/44.7M [00:00<00:01, 28.9MB/s]
     62%|######1   | 27.6M/44.7M [00:00<00:00, 119MB/s] 
    100%|##########| 44.7M/44.7M [00:00<00:00, 124MB/s]
+
      0%|          | 0.00/44.7M [00:00<?, ?B/s]
     41%|####1     | 18.5M/44.7M [00:00<00:00, 194MB/s]
     95%|#########4| 42.4M/44.7M [00:00<00:00, 227MB/s]
    100%|##########| 44.7M/44.7M [00:00<00:00, 225MB/s]
 
 
 
diff --git a/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt b/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
index 378cd45a0c..7245042394 100644
--- a/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
@@ -416,7 +416,7 @@ Run the corresponding model on tensorflow
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  10.002 seconds)
+   **Total running time of the script:** ( 1 minutes  7.680 seconds)
 
 
 .. _sphx_glr_download_how_to_compile_models_from_tensorflow.py:
diff --git a/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt b/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
index 5797b5a72b..78492e048e 100644
--- a/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
@@ -5,26 +5,26 @@
 
 Computation times
 =================
-**05:34.019** total execution time for **how_to_compile_models** files:
+**05:30.009** total execution time for **how_to_compile_models** files:
 
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_tensorflow.py` (``from_tensorflow.py``) | 01:10.002 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_darknet.py` (``from_darknet.py``)       | 01:08.901 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_darknet.py` (``from_darknet.py``)       | 01:09.719 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_tensorflow.py` (``from_tensorflow.py``) | 01:07.680 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_paddle.py` (``from_paddle.py``)         | 00:45.432 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_paddle.py` (``from_paddle.py``)         | 00:45.725 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_oneflow.py` (``from_oneflow.py``)       | 00:30.015 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_oneflow.py` (``from_oneflow.py``)       | 00:31.234 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_mxnet.py` (``from_mxnet.py``)           | 00:28.326 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_mxnet.py` (``from_mxnet.py``)           | 00:26.967 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_coreml.py` (``from_coreml.py``)         | 00:25.986 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_tflite.py` (``from_tflite.py``)         | 00:25.706 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_tflite.py` (``from_tflite.py``)         | 00:24.868 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_coreml.py` (``from_coreml.py``)         | 00:24.825 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_pytorch.py` (``from_pytorch.py``)       | 00:21.952 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_pytorch.py` (``from_pytorch.py``)       | 00:21.475 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_keras.py` (``from_keras.py``)           | 00:15.222 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_keras.py` (``from_keras.py``)           | 00:15.045 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_onnx.py` (``from_onnx.py``)             | 00:02.495 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_onnx.py` (``from_onnx.py``)             | 00:02.450 | 0.0 MB |
 +-----------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
index f4003b40a5..c43b816b7c 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
@@ -434,7 +434,7 @@ Execute on TVM
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      15.9375      15.9108      16.0574      15.8540       0.0662   
+      15.6265      15.5624      15.9529      15.4771       0.1493   
                
 
 
diff --git a/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt b/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
index ccfbe45823..ddc2496e34 100644
--- a/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
@@ -123,7 +123,7 @@ Load pre-trained maskrcnn from torchvision and do tracing
  .. code-block:: none
 
     Downloading: "https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth" to /workspace/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
-
      0%|          | 0.00/170M [00:00<?, ?B/s]
      8%|8         | 14.3M/170M [00:00<00:01, 146MB/s]
     17%|#6        | 28.2M/170M [00:00<00:01, 110MB/s]
     31%|###       | 51.8M/170M [00:00<00:00, 163MB/s]
     40%|####      | 68.5M/170M [00:00<00:00, 165MB/s]
     54%|#####3    | 91.6M/170M [00:00<00:00, 191MB/s]
     68%|######7   | 115M/170M [00:00<00:00, 207MB/s] 
     81%|########  | 137M/170M [00:00<00:00, 215MB/s]
     96%|#########6| 164M/170M [00:00<00:00, 235MB/s]
    100%|##########| 170M/170M [00:00<00:00, 201MB/s]
+
      0%|          | 0.00/170M [00:00<?, ?B/s]
     12%|#1        | 20.1M/170M [00:00<00:00, 210MB/s]
     27%|##7       | 46.5M/170M [00:00<00:00, 250MB/s]
     43%|####3     | 73.2M/170M [00:00<00:00, 264MB/s]
     59%|#####8    | 99.6M/170M [00:00<00:00, 269MB/s]
     74%|#######3  | 126M/170M [00:00<00:00, 270MB/s] 
     90%|########9 | 152M/170M [00:00<00:00, 273MB/s]
    100%|##########| 170M/170M [00:00<00:00, 267MB/s]
     /venv/apache-tvm-py3.7/lib/python3.7/site-packages/torch/nn/functional.py:3878: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
       for i in range(dim)
     /venv/apache-tvm-py3.7/lib/python3.7/site-packages/torchvision/models/detection/anchor_utils.py:127: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
@@ -292,7 +292,7 @@ Get boxes with score larger than 0.9
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 3 minutes  12.305 seconds)
+   **Total running time of the script:** ( 3 minutes  4.179 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_object_detection_pytorch.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt b/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
index 78811858d8..086615cc2a 100644
--- a/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
@@ -232,7 +232,7 @@ training. Other models require a full post training calibration.
  .. code-block:: none
 
     Downloading: "https://download.pytorch.org/models/mobilenet_v2-b0353104.pth" to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth
-
      0%|          | 0.00/13.6M [00:00<?, ?B/s]
    100%|##########| 13.6M/13.6M [00:00<00:00, 163MB/s]
+
      0%|          | 0.00/13.6M [00:00<?, ?B/s]
    100%|##########| 13.6M/13.6M [00:00<00:00, 172MB/s]
 
 
 
@@ -414,7 +414,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      90.3444      90.2771      92.3565      90.1384       0.2529   
+      90.1937      90.0900      94.7577      89.9182       0.5024   
                
 
 
@@ -463,7 +463,7 @@ TODO
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  19.794 seconds)
+   **Total running time of the script:** ( 1 minutes  17.022 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_prequantized.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt b/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
index 74c9a05bc4..58ce610241 100644
--- a/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
@@ -432,7 +432,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      120.8931     120.8245     127.0069     119.9038      0.7323   
+      119.7762     119.8061     129.1444     118.2173      1.0593   
                
 
 
@@ -469,7 +469,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 2 minutes  3.025 seconds)
+   **Total running time of the script:** ( 2 minutes  0.227 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_prequantized_tflite.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt b/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
index f9c5032431..20d2e94a22 100644
--- a/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
@@ -253,7 +253,7 @@ We create a Relay VM to build and execute the model.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  23.944 seconds)
+   **Total running time of the script:** ( 1 minutes  28.095 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_quantized.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_ssd_gluoncv.rst.txt b/docs/_sources/how_to/deploy_models/deploy_ssd_gluoncv.rst.txt
index 178817beb6..966b3654f3 100644
--- a/docs/_sources/how_to/deploy_models/deploy_ssd_gluoncv.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_ssd_gluoncv.rst.txt
@@ -166,7 +166,7 @@ Convert and compile model for CPU.
             data: None
       input_sym_arg_type = in_param.infer_type()[0]
     Downloading /workspace/.mxnet/models/ssd_512_resnet50_v1_voc-9c8b225a.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/ssd_512_resnet50_v1_voc-9c8b225a.zip...
-
      0%|          | 0/132723 [00:00<?, ?KB/s]
      4%|4         | 5329/132723 [00:00<00:02, 53286.75KB/s]
     10%|9         | 12826/132723 [00:00<00:01, 66037.15KB/s]
     15%|#5        | 20113/132723 [00:00<00:01, 69154.19KB/s]
     21%|##1       | 28057/132723 [00:00<00:01, 73213.35KB/s]
     27%|##6       | 35698/132723 [00:00<00:01, 74363.37KB/s]
     33%|###2      | 43453/132723 [00:00<00:01, 75443.86KB/s]
     39%|###8      | 51222/132723 [00:00<00:01, 76176.73KB/s]
     44%|####4     | 58912/132723 [00:00<00:00, 76405.54KB/s]
     50%|#####     | 66651/132723 [00:00<00:00, 76711.53KB/s]
     56%|#####5    | 74323/132723 [00:01<00:00, 76576.71KB/s]
     62%|######1   | 82086/132723 [00:01<00:00, 76896.55KB/s]
     68%|######7   | 89776/132723 [00:01<00:00, 76762.06KB/s]
     73%|#######3  | 97483/132723 [00:01<00:00, 76840.30KB/s]
     79%|#######9  | 105223/132723 [00:01<00:00, 77007.19KB/s]
     85%|########5 | 112924/132723 [00:01<00:00, 76825.72KB/s]
     91%|######### | 120713/132723 [00:01<00:00, 77142.62KB/s]
     97%|#########6| 128428/132723 [00:01<00:00, 77043.95KB/s]
    100%|##########| 132723/132723 [00:01<00:00, 75548.17KB/s]
+
      0%|          | 0/132723 [00:00<?, ?KB/s]
      5%|5         | 6645/132723 [00:00<00:01, 66438.50KB/s]
     12%|#1        | 15327/132723 [00:00<00:01, 78414.71KB/s]
     18%|#8        | 24055/132723 [00:00<00:01, 82459.28KB/s]
     25%|##4       | 32752/132723 [00:00<00:01, 84238.26KB/s]
     31%|###1      | 41367/132723 [00:00<00:01, 84925.29KB/s]
     38%|###7      | 50050/132723 [00:00<00:00, 85571.07KB/s]
     44%|####4     | 58731/132723 [00:00<00:00, 85973.23KB/s]
     51%|#####     | 67442/132723 [00:00<00:00, 86332.44KB/s]
     57%|#####7    | 76110/132723 [00:00<00:00, 86438.98KB/s]
     64%|######3   | 84830/132723 [00:01<00:00, 86672.21KB/s]
     70%|#######   | 93565/132723 [00:01<00:00, 86878.59KB/s]
     77%|#######7  | 102312/132723 [00:01<00:00, 87055.80KB/s]
     84%|########3 | 111047/132723 [00:01<00:00, 87141.67KB/s]
     90%|######### | 119762/132723 [00:01<00:00, 87043.86KB/s]
     97%|#########6| 128467/132723 [00:01<00:00, 86989.52KB/s]
    100%|##########| 132723/132723 [00:01<00:00, 85544.87KB/s]
 
 
 
@@ -242,7 +242,7 @@ Display result
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 2 minutes  54.556 seconds)
+   **Total running time of the script:** ( 2 minutes  45.483 seconds)
 
 
 .. _sphx_glr_download_how_to_deploy_models_deploy_ssd_gluoncv.py:
diff --git a/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt b/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
index 43635584e1..6818bc1721 100644
--- a/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
@@ -5,24 +5,24 @@
 
 Computation times
 =================
-**12:19.331** total execution time for **how_to_deploy_models** files:
+**11:58.541** total execution time for **how_to_deploy_models** files:
 
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``) | 03:12.305 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``) | 03:04.179 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_ssd_gluoncv.py` (``deploy_ssd_gluoncv.py``)                           | 02:54.556 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_ssd_gluoncv.py` (``deploy_ssd_gluoncv.py``)                           | 02:45.483 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)           | 02:03.025 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)           | 02:00.227 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_quantized.py` (``deploy_quantized.py``)                               | 01:23.944 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_quantized.py` (``deploy_quantized.py``)                               | 01:28.095 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized.py` (``deploy_prequantized.py``)                         | 01:19.794 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized.py` (``deploy_prequantized.py``)                         | 01:17.022 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_android.py` (``deploy_model_on_android.py``)                 | 00:35.579 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_android.py` (``deploy_model_on_android.py``)                 | 00:34.460 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_nano.py` (``deploy_model_on_nano.py``)                       | 00:25.277 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_nano.py` (``deploy_model_on_nano.py``)                       | 00:24.746 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)                       | 00:24.844 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)                       | 00:24.323 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_sparse.py` (``deploy_sparse.py``)                                     | 00:00.007 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_sparse.py` (``deploy_sparse.py``)                                     | 00:00.006 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt b/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
index 08423710e1..5f6c58f007 100644
--- a/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
@@ -472,7 +472,7 @@ First let us define two helper functions to get the mobilenet model and a cat im
 
  .. code-block:: none
 
-    Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zipfdcaa7e9-2907-4fc5-bfcc-97b116dae2b5 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
+    Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zipe3eb3a30-6b5e-486b-9aa9-716a2133a859 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
 
 
 
diff --git a/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt b/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
index e2a480c98c..b38189c5f0 100644
--- a/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
@@ -5,14 +5,14 @@
 
 Computation times
 =================
-**00:45.430** total execution time for **how_to_extend_tvm** files:
+**00:43.893** total execution time for **how_to_extend_tvm** files:
 
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``) | 00:42.016 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``) | 00:40.614 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_use_pass_instrument.py` (``use_pass_instrument.py``)           | 00:02.395 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_use_pass_instrument.py` (``use_pass_instrument.py``)           | 00:02.299 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_use_pass_infra.py` (``use_pass_infra.py``)                     | 00:01.011 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_use_pass_infra.py` (``use_pass_infra.py``)                     | 00:00.973 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_low_level_custom_pass.py` (``low_level_custom_pass.py``)       | 00:00.008 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_low_level_custom_pass.py` (``low_level_custom_pass.py``)       | 00:00.007 | 0.0 MB |
 +-------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt b/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
index 41472eb567..4933610fd7 100644
--- a/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
@@ -216,10 +216,10 @@ profile the execution time of each passes.
  .. code-block:: none
 
     Printing results of timing profile...
-    InferType: 6849us [6849us] (46.60%; 46.60%)
-    FoldScaleAxis: 7847us [5us] (53.40%; 53.40%)
-            FoldConstant: 7841us [1594us] (53.36%; 99.93%)
-                    InferType: 6247us [6247us] (42.51%; 79.67%)
+    InferType: 6744us [6744us] (46.47%; 46.47%)
+    FoldScaleAxis: 7768us [5us] (53.53%; 53.53%)
+            FoldConstant: 7763us [1585us] (53.49%; 99.93%)
+                    InferType: 6178us [6178us] (42.57%; 79.58%)
 
 
 
@@ -258,10 +258,10 @@ Refer to following sections and :py:func:`tvm.instrument.pass_instrument` for th
  .. code-block:: none
 
     Printing results of timing profile...
-    InferType: 6321us [6321us] (44.57%; 44.57%)
-    FoldScaleAxis: 7862us [5us] (55.43%; 55.43%)
-            FoldConstant: 7857us [1612us] (55.40%; 99.94%)
-                    InferType: 6246us [6246us] (44.04%; 79.49%)
+    InferType: 6206us [6206us] (44.64%; 44.64%)
+    FoldScaleAxis: 7696us [4us] (55.36%; 55.36%)
+            FoldConstant: 7692us [1600us] (55.33%; 99.94%)
+                    InferType: 6092us [6092us] (43.82%; 79.20%)
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt b/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
index 234c59ca47..d9d658cc54 100644
--- a/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
@@ -340,7 +340,7 @@ latency of convolution.
 
  .. code-block:: none
 
-    Convolution: 54.347423 ms
+    Convolution: 54.205726 ms
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt b/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
index bf03c87748..58680af053 100644
--- a/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
@@ -671,7 +671,7 @@ be able to run on our build server
 
  .. code-block:: none
 
-    conv2d with tensor core: 6.862215 ms
+    conv2d with tensor core: 6.835648 ms
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt b/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
index 09c2fb71f5..6a2007856b 100644
--- a/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
@@ -143,8 +143,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
 
  .. code-block:: none
 
-    Numpy running time: 0.018952
-    Baseline: 3.437101
+    Numpy running time: 0.017697
+    Baseline: 3.190508
 
 
 
@@ -239,7 +239,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
 
  .. code-block:: none
 
-    Opt1: 0.312217
+    Opt1: 0.296542
 
 
 
@@ -342,7 +342,7 @@ In this tutorial, we chose to vectorize the inner loop row data since it is cach
 
  .. code-block:: none
 
-    Opt2: 0.344336
+    Opt2: 0.331487
 
 
 
@@ -438,7 +438,7 @@ the access pattern for A matrix is more cache friendly.
 
  .. code-block:: none
 
-    Opt3: 0.117854
+    Opt3: 0.113773
 
 
 
@@ -563,7 +563,7 @@ flattening.
 
  .. code-block:: none
 
-    Opt4: 0.109428
+    Opt4: 0.109522
 
 
 
@@ -685,7 +685,7 @@ write to C when all the block results are ready.
 
  .. code-block:: none
 
-    Opt5: 0.111098
+    Opt5: 0.111170
 
 
 
@@ -810,7 +810,7 @@ Furthermore, we can also utilize multi-core processors to do the thread-level pa
 
  .. code-block:: none
 
-    Opt6: 0.147946
+    Opt6: 0.146536
 
 
 
diff --git a/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt b/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
index 3798d2cbe3..d71912a991 100644
--- a/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
@@ -5,12 +5,12 @@
 
 Computation times
 =================
-**00:34.980** total execution time for **how_to_optimize_operators** files:
+**00:33.725** total execution time for **how_to_optimize_operators** files:
 
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_gemm.py` (``opt_gemm.py``)                       | 00:32.720 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_gemm.py` (``opt_gemm.py``)                       | 00:31.358 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``) | 00:01.227 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``) | 00:01.284 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_cuda.py` (``opt_conv_cuda.py``)             | 00:01.032 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_cuda.py` (``opt_conv_cuda.py``)             | 00:01.082 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
index 7ef0bed4a9..57a21cdee4 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
@@ -5,18 +5,18 @@
 
 Computation times
 =================
-**06:42.821** total execution time for **how_to_tune_with_autoscheduler** files:
+**06:43.662** total execution time for **how_to_tune_with_autoscheduler** files:
 
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``) | 03:28.877 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``) | 03:25.791 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_x86.py` (``tune_network_x86.py``)             | 01:31.029 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_x86.py` (``tune_network_x86.py``)             | 01:29.019 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_cuda.py` (``tune_network_cuda.py``)           | 00:59.813 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_cuda.py` (``tune_network_cuda.py``)           | 00:58.633 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_sparse_x86.py` (``tune_sparse_x86.py``)               | 00:21.762 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_sparse_x86.py` (``tune_sparse_x86.py``)               | 00:29.554 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_mali.py` (``tune_network_mali.py``)           | 00:10.804 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_mali.py` (``tune_network_mali.py``)           | 00:10.527 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_arm.py` (``tune_network_arm.py``)             | 00:10.536 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_arm.py` (``tune_network_arm.py``)             | 00:10.138 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
index 5074da922f..e57d3fa47e 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
@@ -240,571 +240,266 @@ cooperative fetching, unrolling and operator fusion.
                  compute: Buffer(compute_2: Pointer(float32), float32, [25088], [])}
       buffer_map = {data_1: data, kernel_1: kernel, bias_1: bias, compute_1: compute}
       preflattened_buffer_map = {data_1: data_3: Buffer(data_2, float32, [1, 512, 7, 7], []), kernel_1: kernel_3: Buffer(kernel_2, float32, [512, 512, 3, 3], []), bias_1: bias_3: Buffer(bias_2, float32, [1, 512, 1, 1], []), compute_1: compute_3: Buffer(compute_2, float32, [1, 512, 7, 7], [])} {
-      attr [IterVar(blockIdx.x: int32, (nullptr), "ThreadIndex", "blockIdx.x")] "thread_extent" = 8;
-      allocate(conv2d_nchw: Pointer(local float32), float32, [14]), storage_scope = local;
-      allocate(pad_temp.shared: Pointer(shared float32), float32, [324]), storage_scope = shared;
-      allocate(kernel.shared: Pointer(shared float32), float32, [2304]), storage_scope = shared;
-      attr [IterVar(threadIdx.x: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 224 {
-        conv2d_nchw_1: Buffer(conv2d_nchw, float32, [14], [], scope="local", align=32)[0] = 0f32
-        conv2d_nchw_1[1] = 0f32
+      attr [IterVar(blockIdx.x: int32, (nullptr), "ThreadIndex", "blockIdx.x")] "thread_extent" = 32;
+      allocate(conv2d_nchw: Pointer(local float32), float32, [16]), storage_scope = local;
+      allocate(pad_temp.shared: Pointer(shared float32), float32, [252]), storage_scope = shared;
+      allocate(kernel.shared: Pointer(shared float32), float32, [192]), storage_scope = shared;
+      attr [IterVar(threadIdx.x: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 49 {
+        conv2d_nchw_1: Buffer(conv2d_nchw, float32, [4], [], scope="local", align=8)[0] = 0f32
         conv2d_nchw_1[2] = 0f32
-        conv2d_nchw_1[3] = 0f32
         conv2d_nchw_1[4] = 0f32
-        conv2d_nchw_1[5] = 0f32
         conv2d_nchw_1[6] = 0f32
-        conv2d_nchw_1[7] = 0f32
         conv2d_nchw_1[8] = 0f32
-        conv2d_nchw_1[9] = 0f32
         conv2d_nchw_1[10] = 0f32
-        conv2d_nchw_1[11] = 0f32
         conv2d_nchw_1[12] = 0f32
+        conv2d_nchw_1[14] = 0f32
+        conv2d_nchw_1[1] = 0f32
+        conv2d_nchw_1[3] = 0f32
+        conv2d_nchw_1[5] = 0f32
+        conv2d_nchw_1[7] = 0f32
+        conv2d_nchw_1[9] = 0f32
+        conv2d_nchw_1[11] = 0f32
         conv2d_nchw_1[13] = 0f32
+        conv2d_nchw_1[15] = 0f32
         for (rc.outer.outer: int32, 0, 128) {
-          attr [IterVar(threadIdx.x_1: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 224 {
-            if @tir.likely((threadIdx.x_1 < 162), dtype=bool) {
-              pad_temp.shared_1: Buffer(pad_temp.shared, float32, [324], [], scope="shared")[(threadIdx.x_1*2)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1*2), 81)) && (floormod((threadIdx.x_1*2), 81) < 72)) && (1 <= floormod((threadIdx.x_1*2), 9))) && (floormod((threadIdx.x_1*2), 9) < 8)), data[(((((rc.outer.outer*196) + (floordiv((threadIdx.x_1*2), 81)*49)) + (floordiv(floormod((threadIdx.x_1*2), 81), 9)*7)) + floormod((threadIdx.x_1*2), 9)) - 8)], 0f32, dtype=float32)
-            }
-            if @tir.likely((threadIdx.x_1 < 162), dtype=bool) {
-              pad_temp.shared_1[((threadIdx.x_1*2) + 1)] = @tir.if_then_else(((((9 <= floormod(((threadIdx.x_1*2) + 1), 81)) && (floormod(((threadIdx.x_1*2) + 1), 81) < 72)) && (1 <= floormod(((threadIdx.x_1*2) + 1), 9))) && (floormod(((threadIdx.x_1*2) + 1), 9) < 8)), data[(((((rc.outer.outer*196) + (floordiv(((threadIdx.x_1*2) + 1), 81)*49)) + (floordiv(floormod(((threadIdx.x_1*2) + 1), 81), 9)*7)) + floormod(((threadIdx.x_1*2) + 1), 9)) - 8)], 0f32, dtype=float32)
+          for (ry.outer.outer: int32, 0, 3) {
+            let cse_var_2: int32 = (rc.outer.outer*36)
+            let cse_var_1: int32 = (ry.outer.outer*3)
+             {
+              attr [IterVar(threadIdx.x_1: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 49 {
+                if @tir.likely((threadIdx.x_1 < 42), dtype=bool) {
+                  pad_temp.shared_1: Buffer(pad_temp.shared, float32, [252], [], scope="shared")[(threadIdx.x_1*6)] = @tir.if_then_else(((((1 <= (floordiv(floormod((threadIdx.x_1*2), 21), 3) + ry.outer.outer)) && ((floordiv(floormod((threadIdx.x_1*2), 21), 3) + ry.outer.outer) < 8)) && (1 <= floormod((threadIdx.x_1*6), 9))) && (floormod((threadIdx.x_1*6), 9) < 8)), data[(((((rc.outer.outer*196) + (floordiv((threadIdx.x_1*2), 3)*7)) + (ry.outer.outer*7)) + floormod((threadIdx.x_1*6), 9))  [...]
+                }
+                if @tir.likely((threadIdx.x_1 < 42), dtype=bool) {
+                  pad_temp.shared_1[((threadIdx.x_1*6) + 1)] = @tir.if_then_else(((((1 <= (floordiv(floormod((threadIdx.x_1*2), 21), 3) + ry.outer.outer)) && ((floordiv(floormod((threadIdx.x_1*2), 21), 3) + ry.outer.outer) < 8)) && (1 <= floormod(((threadIdx.x_1*6) + 1), 9))) && (floormod(((threadIdx.x_1*6) + 1), 9) < 8)), data[(((((rc.outer.outer*196) + (floordiv((threadIdx.x_1*2), 3)*7)) + (ry.outer.outer*7)) + floormod(((threadIdx.x_1*6) + 1), 9)) - 8)], 0f32, dtype=float32)
+                }
+                if @tir.likely((threadIdx.x_1 < 42), dtype=bool) {
+                  pad_temp.shared_1[((threadIdx.x_1*6) + 2)] = @tir.if_then_else(((((1 <= (floordiv(floormod((threadIdx.x_1*2), 21), 3) + ry.outer.outer)) && ((floordiv(floormod((threadIdx.x_1*2), 21), 3) + ry.outer.outer) < 8)) && (1 <= floormod(((threadIdx.x_1*6) + 2), 9))) && (floormod(((threadIdx.x_1*6) + 2), 9) < 8)), data[(((((rc.outer.outer*196) + (floordiv((threadIdx.x_1*2), 3)*7)) + (ry.outer.outer*7)) + floormod(((threadIdx.x_1*6) + 2), 9)) - 8)], 0f32, dtype=float32)
+                }
+                if @tir.likely((threadIdx.x_1 < 42), dtype=bool) {
+                  pad_temp.shared_1[((threadIdx.x_1*6) + 3)] = @tir.if_then_else(((((1 <= (floordiv(floormod(((threadIdx.x_1*2) + 1), 21), 3) + ry.outer.outer)) && ((floordiv(floormod(((threadIdx.x_1*2) + 1), 21), 3) + ry.outer.outer) < 8)) && (1 <= floormod(((threadIdx.x_1*6) + 3), 9))) && (floormod(((threadIdx.x_1*6) + 3), 9) < 8)), data[(((((rc.outer.outer*196) + (floordiv(((threadIdx.x_1*2) + 1), 3)*7)) + (ry.outer.outer*7)) + floormod(((threadIdx.x_1*6) + 3), 9)) - 8)], 0f32, dtype= [...]
+                }
+                if @tir.likely((threadIdx.x_1 < 42), dtype=bool) {
+                  pad_temp.shared_1[((threadIdx.x_1*6) + 4)] = @tir.if_then_else(((((1 <= (floordiv(floormod(((threadIdx.x_1*2) + 1), 21), 3) + ry.outer.outer)) && ((floordiv(floormod(((threadIdx.x_1*2) + 1), 21), 3) + ry.outer.outer) < 8)) && (1 <= floormod(((threadIdx.x_1*6) + 4), 9))) && (floormod(((threadIdx.x_1*6) + 4), 9) < 8)), data[(((((rc.outer.outer*196) + (floordiv(((threadIdx.x_1*2) + 1), 3)*7)) + (ry.outer.outer*7)) + floormod(((threadIdx.x_1*6) + 4), 9)) - 8)], 0f32, dtype= [...]
+                }
+                if @tir.likely((threadIdx.x_1 < 42), dtype=bool) {
+                  pad_temp.shared_1[((threadIdx.x_1*6) + 5)] = @tir.if_then_else(((((1 <= (floordiv(floormod(((threadIdx.x_1*2) + 1), 21), 3) + ry.outer.outer)) && ((floordiv(floormod(((threadIdx.x_1*2) + 1), 21), 3) + ry.outer.outer) < 8)) && (1 <= floormod(((threadIdx.x_1*6) + 5), 9))) && (floormod(((threadIdx.x_1*6) + 5), 9) < 8)), data[(((((rc.outer.outer*196) + (floordiv(((threadIdx.x_1*2) + 1), 3)*7)) + (ry.outer.outer*7)) + floormod(((threadIdx.x_1*6) + 5), 9)) - 8)], 0f32, dtype= [...]
+                }
+              }
+              attr [IterVar(threadIdx.x_2: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 49;
+              kernel.shared_1: Buffer(kernel.shared, float32, [192], [], scope="shared")[threadIdx.x_2] = kernel[((((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 12)*4608)) + cse_var_2) + (floordiv(floormod(threadIdx.x_2, 12), 3)*9)) + cse_var_1) + floormod(threadIdx.x_2, 3))]
+              attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 49;
+              kernel.shared_1[(threadIdx.x_2 + 49)] = kernel[((((((blockIdx.x*73728) + (floordiv((threadIdx.x_2 + 49), 12)*4608)) + cse_var_2) + (floordiv(floormod((threadIdx.x_2 + 1), 12), 3)*9)) + cse_var_1) + floormod((threadIdx.x_2 + 1), 3))]
+              attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 49;
+              kernel.shared_1[(threadIdx.x_2 + 98)] = kernel[((((((blockIdx.x*73728) + (floordiv((threadIdx.x_2 + 98), 12)*4608)) + cse_var_2) + (floordiv(floormod((threadIdx.x_2 + 2), 12), 3)*9)) + cse_var_1) + floormod((threadIdx.x_2 + 2), 3))]
+              attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 49;
+              if @tir.likely((threadIdx.x_2 < 45), dtype=bool) {
+                kernel.shared_1[(threadIdx.x_2 + 147)] = kernel[((((((blockIdx.x*73728) + (floordiv((threadIdx.x_2 + 147), 12)*4608)) + cse_var_2) + (floormod((floordiv(threadIdx.x_2, 3) + 1), 4)*9)) + cse_var_1) + floormod(threadIdx.x_2, 3))]
+              }
+              conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[0]))
+              conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[24]))
+              conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[48]))
+              conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[72]))
+              conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[96]))
+              conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[120]))
+              conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[144]))
+              conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[168]))
+              conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[12]))
+              conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[36]))
+              conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[60]))
+              conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[84]))
+              conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[108]))
+              conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[132]))
+              conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[156]))
+              conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[180]))
+              conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[3]))
+              conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[27]))
+              conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[51]))
+              conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[75]))
+              conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[99]))
+              conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[123]))
+              conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[147]))
+              conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[171]))
+              conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[15]))
+              conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[39]))
+              conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[63]))
+              conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[87]))
+              conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[111]))
+              conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[135]))
+              conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[159]))
+              conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[183]))
+              conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[6]))
+              conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[30]))
+              conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[54]))
+              conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[78]))
+              conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[102]))
+              conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[126]))
+              conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[150]))
+              conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[174]))
+              conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[18]))
+              conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[42]))
+              conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[66]))
+              conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[90]))
+              conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[114]))
+              conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[138]))
+              conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[162]))
+              conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[186]))
+              conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[9]))
+              conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[33]))
+              conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[57]))
+              conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[81]))
+              conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[105]))
+              conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[129]))
+              conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[153]))
+              conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[177]))
+              conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[21]))
+              conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[45]))
+              conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[69]))
+              conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[93]))
+              conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[117]))
+              conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[141]))
+              conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[165]))
+              conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[189]))
+              conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[1]))
+              conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[25]))
+              conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[49]))
+              conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[73]))
+              conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[97]))
+              conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[121]))
+              conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[145]))
+              conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[169]))
+              conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[13]))
+              conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[37]))
+              conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[61]))
+              conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[85]))
+              conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[109]))
+              conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[133]))
+              conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[157]))
+              conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[181]))
+              conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[4]))
+              conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[28]))
+              conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[52]))
+              conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[76]))
+              conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[100]))
+              conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[124]))
+              conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[148]))
+              conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[172]))
+              conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[16]))
+              conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[40]))
+              conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[64]))
+              conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[88]))
+              conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[112]))
+              conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[136]))
+              conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[160]))
+              conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[184]))
+              conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[7]))
+              conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[31]))
+              conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[55]))
+              conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[79]))
+              conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[103]))
+              conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[127]))
+              conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[151]))
+              conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[175]))
+              conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[19]))
+              conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[43]))
+              conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[67]))
+              conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[91]))
+              conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[115]))
+              conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[139]))
+              conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[163]))
+              conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[187]))
+              conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[10]))
+              conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[34]))
+              conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[58]))
+              conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[82]))
+              conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[106]))
+              conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[130]))
+              conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[154]))
+              conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[178]))
+              conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[22]))
+              conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[46]))
+              conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[70]))
+              conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[94]))
+              conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[118]))
+              conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[142]))
+              conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[166]))
+              conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[190]))
+              conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[2]))
+              conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[26]))
+              conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[50]))
+              conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[74]))
+              conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[98]))
+              conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[122]))
+              conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[146]))
+              conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[170]))
+              conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[14]))
+              conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[38]))
+              conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[62]))
+              conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[86]))
+              conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[110]))
+              conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[134]))
+              conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[158]))
+              conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[182]))
+              conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[5]))
+              conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[29]))
+              conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[53]))
+              conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[77]))
+              conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[101]))
+              conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[125]))
+              conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[149]))
+              conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[173]))
+              conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[17]))
+              conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[41]))
+              conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[65]))
+              conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[89]))
+              conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[113]))
+              conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[137]))
+              conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[161]))
+              conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[185]))
+              conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[8]))
+              conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[32]))
+              conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[56]))
+              conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[80]))
+              conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[104]))
+              conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[128]))
+              conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[152]))
+              conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[176]))
+              conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[20]))
+              conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[44]))
+              conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[68]))
+              conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[92]))
+              conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[116]))
+              conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[140]))
+              conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[164]))
+              conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[188]))
+              conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[11]))
+              conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[35]))
+              conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[59]))
+              conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[83]))
+              conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[107]))
+              conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[131]))
+              conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[155]))
+              conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[179]))
+              conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[23]))
+              conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[47]))
+              conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[71]))
+              conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[95]))
+              conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[119]))
+              conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[143]))
+              conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[167]))
+              conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[191]))
             }
           }
-          attr [IterVar(threadIdx.x_2: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 224 {
-            kernel.shared_1: Buffer(kernel.shared, float32, [2304], [], scope="shared")[(threadIdx.x_2*6)] = kernel[((((blockIdx.x*294912) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*36)) + (floormod(threadIdx.x_2, 6)*6))]
-            kernel.shared_1[((threadIdx.x_2*6) + 1)] = kernel[(((((blockIdx.x*294912) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*36)) + (floormod(threadIdx.x_2, 6)*6)) + 1)]
-            kernel.shared_1[((threadIdx.x_2*6) + 2)] = kernel[(((((blockIdx.x*294912) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*36)) + (floormod(threadIdx.x_2, 6)*6)) + 2)]
-            kernel.shared_1[((threadIdx.x_2*6) + 3)] = kernel[(((((blockIdx.x*294912) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*36)) + (floormod(threadIdx.x_2, 6)*6)) + 3)]
-            kernel.shared_1[((threadIdx.x_2*6) + 4)] = kernel[(((((blockIdx.x*294912) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*36)) + (floormod(threadIdx.x_2, 6)*6)) + 4)]
-            kernel.shared_1[((threadIdx.x_2*6) + 5)] = kernel[(((((blockIdx.x*294912) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*36)) + (floormod(threadIdx.x_2, 6)*6)) + 5)]
-          }
-          attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 224 {
-            if @tir.likely((threadIdx.x_2 < 160), dtype=bool) {
-              kernel.shared_1[((threadIdx.x_2*6) + 1344)] = kernel[((((blockIdx.x*294912) + (floordiv((threadIdx.x_2 + 224), 6)*4608)) + (rc.outer.outer*36)) + (floormod(((threadIdx.x_2*2) + 4), 12)*3))]
-            }
-            if @tir.likely((threadIdx.x_2 < 160), dtype=bool) {
-              kernel.shared_1[((threadIdx.x_2*6) + 1345)] = kernel[(((((blockIdx.x*294912) + (floordiv((threadIdx.x_2 + 224), 6)*4608)) + (rc.outer.outer*36)) + (floormod(((threadIdx.x_2*2) + 4), 12)*3)) + 1)]
-            }
-            if @tir.likely((threadIdx.x_2 < 160), dtype=bool) {
-              kernel.shared_1[((threadIdx.x_2*6) + 1346)] = kernel[(((((blockIdx.x*294912) + (floordiv((threadIdx.x_2 + 224), 6)*4608)) + (rc.outer.outer*36)) + (floormod(((threadIdx.x_2*2) + 4), 12)*3)) + 2)]
-            }
-            if @tir.likely((threadIdx.x_2 < 160), dtype=bool) {
-              kernel.shared_1[((threadIdx.x_2*6) + 1347)] = kernel[((((blockIdx.x*294912) + (floordiv((threadIdx.x_2 + 224), 6)*4608)) + (rc.outer.outer*36)) + (floormod(((threadIdx.x_2*2) + 5), 12)*3))]
-            }
-            if @tir.likely((threadIdx.x_2 < 160), dtype=bool) {
-              kernel.shared_1[((threadIdx.x_2*6) + 1348)] = kernel[(((((blockIdx.x*294912) + (floordiv((threadIdx.x_2 + 224), 6)*4608)) + (rc.outer.outer*36)) + (floormod(((threadIdx.x_2*2) + 5), 12)*3)) + 1)]
-            }
-            if @tir.likely((threadIdx.x_2 < 160), dtype=bool) {
-              kernel.shared_1[((threadIdx.x_2*6) + 1349)] = kernel[(((((blockIdx.x*294912) + (floordiv((threadIdx.x_2 + 224), 6)*4608)) + (rc.outer.outer*36)) + (floormod(((threadIdx.x_2*2) + 5), 12)*3)) + 2)]
-            }
-          }
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[floormod(threadIdx.x, 7)]*kernel.shared_1[(floordiv(threadIdx.x, 7)*72)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 9)]*kernel.shared_1[(floordiv(threadIdx.x, 7)*72)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 18)]*kernel.shared_1[(floordiv(threadIdx.x, 7)*72)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 27)]*kernel.shared_1[(floordiv(threadIdx.x, 7)*72)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 36)]*kernel.shared_1[(floordiv(threadIdx.x, 7)*72)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 45)]*kernel.shared_1[(floordiv(threadIdx.x, 7)*72)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 54)]*kernel.shared_1[(floordiv(threadIdx.x, 7)*72)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[floormod(threadIdx.x, 7)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 36)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 9)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 36)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 18)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 36)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 27)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 36)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 36)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 36)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 45)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 36)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 54)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 36)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 9)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 3)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 18)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 3)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 27)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 3)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 36)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 3)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 45)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 3)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 54)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 3)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 63)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 3)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 9)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 39)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 18)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 39)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 27)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 39)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 36)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 39)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 45)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 39)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 54)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 39)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 63)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 39)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 18)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 6)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 27)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 6)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 36)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 6)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 45)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 6)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 54)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 6)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 63)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 6)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 72)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 6)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 18)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 42)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 27)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 42)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 36)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 42)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 45)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 42)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 54)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 42)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 63)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 42)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 72)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 42)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 81)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 9)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 90)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 9)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 99)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 9)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 108)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 9)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 117)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 9)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 9)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 135)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 9)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 81)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 45)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 90)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 45)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 99)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 45)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 108)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 45)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 117)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 45)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 45)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 135)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 45)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 90)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 12)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 99)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 12)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 108)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 12)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 117)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 12)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 12)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 135)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 12)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 144)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 12)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 90)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 48)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 99)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 48)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 108)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 48)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 117)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 48)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 48)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 135)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 48)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 144)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 48)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 99)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 15)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 108)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 15)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 117)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 15)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 15)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 135)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 15)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 144)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 15)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 153)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 15)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 99)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 51)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 108)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 51)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 117)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 51)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 51)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 135)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 51)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 144)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 51)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 153)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 51)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 162)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 18)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 171)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 18)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 180)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 18)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 18)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 198)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 18)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 207)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 18)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 216)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 18)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 162)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 54)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 171)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 54)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 180)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 54)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 54)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 198)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 54)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 207)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 54)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 216)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 54)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 171)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 21)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 180)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 21)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 21)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 198)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 21)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 207)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 21)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 216)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 21)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 225)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 21)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 171)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 57)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 180)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 57)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 57)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 198)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 57)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 207)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 57)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 216)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 57)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 225)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 57)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 180)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 24)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 24)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 198)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 24)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 207)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 24)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 216)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 24)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 225)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 24)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 234)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 24)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 180)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 60)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 60)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 198)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 60)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 207)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 60)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 216)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 60)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 225)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 60)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 234)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 60)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 243)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 27)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 252)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 27)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 261)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 27)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 270)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 27)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 279)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 27)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 288)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 27)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 297)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 27)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 243)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 63)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 252)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 63)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 261)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 63)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 270)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 63)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 279)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 63)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 288)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 63)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 297)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 63)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 252)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 30)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 261)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 30)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 270)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 30)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 279)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 30)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 288)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 30)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 297)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 30)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 306)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 30)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 252)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 66)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 261)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 66)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 270)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 66)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 279)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 66)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 288)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 66)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 297)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 66)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 306)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 66)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 261)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 33)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 270)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 33)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 279)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 33)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 288)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 33)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 297)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 33)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 306)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 33)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 315)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 33)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 261)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 69)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 270)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 69)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 279)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 69)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 288)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 69)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 297)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 69)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 306)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 69)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 315)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 69)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 1)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 1)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 10)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 1)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 19)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 1)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 28)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 1)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 37)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 1)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 46)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 1)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 55)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 1)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 1)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 37)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 10)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 37)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 19)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 37)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 28)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 37)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 37)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 37)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 46)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 37)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 55)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 37)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 10)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 4)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 19)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 4)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 28)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 4)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 37)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 4)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 46)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 4)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 55)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 4)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 64)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 4)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 10)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 40)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 19)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 40)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 28)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 40)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 37)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 40)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 46)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 40)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 55)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 40)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 64)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 40)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 19)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 7)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 28)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 7)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 37)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 7)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 46)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 7)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 55)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 7)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 64)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 7)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 73)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 7)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 19)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 43)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 28)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 43)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 37)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 43)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 46)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 43)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 55)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 43)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 64)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 43)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 73)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 43)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 82)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 10)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 91)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 10)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 100)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 10)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 109)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 10)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 118)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 10)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 127)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 10)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 136)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 10)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 82)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 46)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 91)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 46)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 100)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 46)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 109)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 46)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 118)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 46)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 127)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 46)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 136)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 46)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 91)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 13)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 100)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 13)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 109)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 13)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 118)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 13)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 127)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 13)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 136)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 13)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 145)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 13)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 91)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 49)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 100)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 49)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 109)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 49)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 118)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 49)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 127)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 49)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 136)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 49)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 145)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 49)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 100)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 16)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 109)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 16)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 118)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 16)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 127)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 16)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 136)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 16)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 145)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 16)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 154)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 16)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 100)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 52)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 109)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 52)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 118)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 52)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 127)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 52)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 136)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 52)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 145)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 52)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 154)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 52)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 163)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 19)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 172)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 19)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 181)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 19)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 190)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 19)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 199)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 19)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 208)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 19)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 217)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 19)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 163)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 55)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 172)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 55)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 181)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 55)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 190)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 55)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 199)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 55)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 208)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 55)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 217)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 55)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 172)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 22)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 181)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 22)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 190)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 22)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 199)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 22)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 208)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 22)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 217)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 22)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 226)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 22)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 172)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 58)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 181)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 58)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 190)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 58)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 199)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 58)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 208)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 58)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 217)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 58)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 226)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 58)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 181)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 25)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 190)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 25)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 199)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 25)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 208)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 25)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 217)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 25)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 226)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 25)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 235)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 25)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 181)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 61)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 190)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 61)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 199)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 61)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 208)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 61)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 217)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 61)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 226)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 61)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 235)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 61)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 244)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 28)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 253)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 28)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 262)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 28)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 271)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 28)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 280)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 28)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 289)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 28)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 298)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 28)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 244)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 64)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 253)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 64)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 262)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 64)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 271)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 64)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 280)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 64)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 289)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 64)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 298)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 64)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 253)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 31)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 262)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 31)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 271)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 31)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 280)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 31)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 289)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 31)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 298)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 31)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 307)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 31)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 253)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 67)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 262)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 67)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 271)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 67)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 280)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 67)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 289)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 67)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 298)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 67)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 307)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 67)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 262)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 34)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 271)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 34)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 280)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 34)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 289)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 34)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 298)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 34)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 307)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 34)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 316)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 34)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 262)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 70)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 271)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 70)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 280)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 70)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 289)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 70)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 298)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 70)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 307)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 70)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 316)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 70)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 2)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 2)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 11)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 2)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 20)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 2)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 29)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 2)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 38)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 2)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 47)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 2)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 56)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 2)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 2)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 38)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 11)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 38)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 20)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 38)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 29)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 38)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 38)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 38)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 47)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 38)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 56)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 38)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 11)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 5)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 20)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 5)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 29)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 5)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 38)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 5)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 47)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 5)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 56)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 5)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 65)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 5)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 11)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 41)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 20)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 41)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 29)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 41)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 38)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 41)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 47)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 41)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 56)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 41)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 65)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 41)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 20)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 8)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 29)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 8)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 38)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 8)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 47)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 8)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 56)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 8)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 65)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 8)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 74)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 8)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 20)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 44)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 29)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 44)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 38)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 44)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 47)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 44)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 56)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 44)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 65)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 44)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 74)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 44)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 83)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 11)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 92)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 11)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 101)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 11)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 110)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 11)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 119)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 11)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 128)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 11)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 137)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 11)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 83)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 47)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 92)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 47)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 101)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 47)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 110)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 47)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 119)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 47)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 128)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 47)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 137)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 47)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 92)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 14)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 101)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 14)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 110)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 14)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 119)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 14)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 128)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 14)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 137)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 14)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 146)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 14)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 92)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 50)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 101)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 50)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 110)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 50)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 119)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 50)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 128)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 50)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 137)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 50)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 146)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 50)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 101)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 17)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 110)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 17)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 119)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 17)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 128)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 17)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 137)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 17)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 146)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 17)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 155)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 17)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 101)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 53)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 110)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 53)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 119)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 53)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 128)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 53)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 137)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 53)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 146)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 53)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 155)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 53)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 164)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 20)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 173)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 20)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 182)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 20)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 191)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 20)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 200)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 20)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 209)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 20)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 218)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 20)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 164)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 56)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 173)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 56)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 182)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 56)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 191)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 56)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 200)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 56)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 209)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 56)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 218)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 56)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 173)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 23)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 182)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 23)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 191)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 23)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 200)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 23)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 209)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 23)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 218)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 23)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 227)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 23)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 173)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 59)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 182)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 59)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 191)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 59)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 200)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 59)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 209)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 59)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 218)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 59)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 227)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 59)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 182)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 26)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 191)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 26)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 200)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 26)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 209)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 26)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 218)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 26)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 227)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 26)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 236)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 26)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 182)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 62)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 191)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 62)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 200)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 62)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 209)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 62)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 218)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 62)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 227)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 62)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 236)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 62)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 245)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 29)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 254)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 29)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 263)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 29)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 272)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 29)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 281)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 29)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 290)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 29)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 299)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 29)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 245)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 65)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 254)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 65)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 263)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 65)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 272)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 65)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 281)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 65)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 290)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 65)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 299)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 65)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 254)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 32)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 263)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 32)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 272)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 32)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 281)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 32)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 290)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 32)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 299)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 32)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 308)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 32)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 254)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 68)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 263)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 68)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 272)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 68)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 281)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 68)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 290)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 68)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 299)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 68)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 308)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 68)]))
-          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 263)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 35)]))
-          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 272)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 35)]))
-          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 281)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 35)]))
-          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 290)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 35)]))
-          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 299)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 35)]))
-          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 308)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 35)]))
-          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 317)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 35)]))
-          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 263)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 71)]))
-          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 272)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 71)]))
-          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 281)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 71)]))
-          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 290)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 71)]))
-          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 299)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 71)]))
-          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 308)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 71)]))
-          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 317)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 71)]))
         }
         for (i1.inner: int32, 0, 2) {
-          for (i2.inner: int32, 0, 7) {
-            compute[(((((blockIdx.x*3136) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (i2.inner*7)) + floormod(threadIdx.x, 7))] = max((conv2d_nchw_1[((i1.inner*7) + i2.inner)] + bias[(((blockIdx.x*64) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
-          }
+          compute[(((blockIdx.x*784) + (i1.inner*49)) + threadIdx.x)] = max((conv2d_nchw_1[i1.inner] + bias[((blockIdx.x*16) + i1.inner)]), 0f32)
+          compute[((((blockIdx.x*784) + (i1.inner*49)) + threadIdx.x) + 98)] = max((conv2d_nchw_1[(i1.inner + 2)] + bias[(((blockIdx.x*16) + i1.inner) + 2)]), 0f32)
+          compute[((((blockIdx.x*784) + (i1.inner*49)) + threadIdx.x) + 196)] = max((conv2d_nchw_1[(i1.inner + 4)] + bias[(((blockIdx.x*16) + i1.inner) + 4)]), 0f32)
+          compute[((((blockIdx.x*784) + (i1.inner*49)) + threadIdx.x) + 294)] = max((conv2d_nchw_1[(i1.inner + 6)] + bias[(((blockIdx.x*16) + i1.inner) + 6)]), 0f32)
+          compute[((((blockIdx.x*784) + (i1.inner*49)) + threadIdx.x) + 392)] = max((conv2d_nchw_1[(i1.inner + 8)] + bias[(((blockIdx.x*16) + i1.inner) + 8)]), 0f32)
+          compute[((((blockIdx.x*784) + (i1.inner*49)) + threadIdx.x) + 490)] = max((conv2d_nchw_1[(i1.inner + 10)] + bias[(((blockIdx.x*16) + i1.inner) + 10)]), 0f32)
+          compute[((((blockIdx.x*784) + (i1.inner*49)) + threadIdx.x) + 588)] = max((conv2d_nchw_1[(i1.inner + 12)] + bias[(((blockIdx.x*16) + i1.inner) + 12)]), 0f32)
+          compute[((((blockIdx.x*784) + (i1.inner*49)) + threadIdx.x) + 686)] = max((conv2d_nchw_1[(i1.inner + 14)] + bias[(((blockIdx.x*16) + i1.inner) + 14)]), 0f32)
         }
       }
     }
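
   (Editorial note: the store in the hunk above fuses the bias add and the ReLU into a
   single write-back. As a minimal NumPy sketch of that epilogue, illustrative only,
   with hypothetical shapes matching this layer, N=1, C=512, H=W=7:

 .. code-block:: python

     import numpy as np

     conv = np.random.randn(1, 512, 7, 7).astype("float32")   # conv2d output
     bias = np.random.randn(1, 512, 1, 1).astype("float32")

     # Equivalent of compute[...] = max(conv2d_nchw[...] + bias[...], 0f32) above
     out = np.maximum(conv + bias, 0.0)

   The generated code merely unrolls this elementwise epilogue across the
   per-thread output registers.)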
@@ -859,7 +554,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 0.323 ms
+    Execution time of this operator: 0.363 ms
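
   (Editorial note: this latency is measured on-device from the compiled operator.
   As a minimal sketch of how such a number is typically obtained with TVM's time
   evaluator, where ``func``, ``dev`` and ``args`` are illustrative placeholders
   for the built module, the target device and the operator buffers:

 .. code-block:: python

     import numpy as np

     # func: built tvm.runtime.Module; dev: e.g. tvm.cuda(0); args: tvm.nd.NDArrays
     evaluator = func.time_evaluator(func.entry_name, dev, min_repeat_ms=500)
     print("Execution time of this operator: %.3f ms"
           % (np.median(evaluator(*args).results) * 1000))

   Using the median over repeated runs makes the report robust to outliers.)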
 
 
 
@@ -909,11 +604,11 @@ They can be used for debugging and learning the behavior of the auto-scheduler.
     conv2d_nchw_nn_o_o_o_o, conv2d_nchw_nn_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_nn_o_o_o_i, factor=1)
     conv2d_nchw_ff_o_i, conv2d_nchw_ff_i = s[conv2d_nchw].split(conv2d_nchw_ff, factor=2)
     conv2d_nchw_ff_o_o_i, conv2d_nchw_ff_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_i, factor=1)
-    conv2d_nchw_ff_o_o_o_i, conv2d_nchw_ff_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_i, factor=32)
-    conv2d_nchw_ff_o_o_o_o, conv2d_nchw_ff_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_o_i, factor=1)
-    conv2d_nchw_yy_o_i, conv2d_nchw_yy_i = s[conv2d_nchw].split(conv2d_nchw_yy, factor=7)
+    conv2d_nchw_ff_o_o_o_i, conv2d_nchw_ff_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_i, factor=1)
+    conv2d_nchw_ff_o_o_o_o, conv2d_nchw_ff_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_o_i, factor=8)
+    conv2d_nchw_yy_o_i, conv2d_nchw_yy_i = s[conv2d_nchw].split(conv2d_nchw_yy, factor=1)
     conv2d_nchw_yy_o_o_i, conv2d_nchw_yy_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_i, factor=1)
-    conv2d_nchw_yy_o_o_o_i, conv2d_nchw_yy_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_i, factor=1)
+    conv2d_nchw_yy_o_o_o_i, conv2d_nchw_yy_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_i, factor=7)
     conv2d_nchw_yy_o_o_o_o, conv2d_nchw_yy_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_o_i, factor=1)
     conv2d_nchw_xx_o_i, conv2d_nchw_xx_i = s[conv2d_nchw].split(conv2d_nchw_xx, factor=1)
     conv2d_nchw_xx_o_o_i, conv2d_nchw_xx_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_i, factor=1)
@@ -921,7 +616,7 @@ They can be used for debugging and learning the behavior of the auto-scheduler.
     conv2d_nchw_xx_o_o_o_o, conv2d_nchw_xx_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_o_o_i, factor=1)
     conv2d_nchw_rc_o_i, conv2d_nchw_rc_i = s[conv2d_nchw].split(conv2d_nchw_rc, factor=4)
     conv2d_nchw_rc_o_o, conv2d_nchw_rc_o_i = s[conv2d_nchw].split(conv2d_nchw_rc_o_i, factor=1)
-    conv2d_nchw_ry_o_i, conv2d_nchw_ry_i = s[conv2d_nchw].split(conv2d_nchw_ry, factor=3)
+    conv2d_nchw_ry_o_i, conv2d_nchw_ry_i = s[conv2d_nchw].split(conv2d_nchw_ry, factor=1)
     conv2d_nchw_ry_o_o, conv2d_nchw_ry_o_i = s[conv2d_nchw].split(conv2d_nchw_ry_o_i, factor=1)
     conv2d_nchw_rx_o_i, conv2d_nchw_rx_i = s[conv2d_nchw].split(conv2d_nchw_rx, factor=1)
     conv2d_nchw_rx_o_o, conv2d_nchw_rx_o_i = s[conv2d_nchw].split(conv2d_nchw_rx_o_i, factor=3)
@@ -930,10 +625,10 @@ They can be used for debugging and learning the behavior of the auto-scheduler.
     compute_i0_o_o_i, compute_i0_o_i = s[compute].split(compute_i0_o_i, factor=1)
     compute_i0_o_o_o, compute_i0_o_o_i = s[compute].split(compute_i0_o_o_i, factor=1)
     compute_i1_o_i, compute_i1_i = s[compute].split(compute_i1, factor=2)
-    compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=32)
-    compute_i1_o_o_o, compute_i1_o_o_i = s[compute].split(compute_i1_o_o_i, factor=1)
-    compute_i2_o_i, compute_i2_i = s[compute].split(compute_i2, factor=7)
-    compute_i2_o_o_i, compute_i2_o_i = s[compute].split(compute_i2_o_i, factor=1)
+    compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=1)
+    compute_i1_o_o_o, compute_i1_o_o_i = s[compute].split(compute_i1_o_o_i, factor=8)
+    compute_i2_o_i, compute_i2_i = s[compute].split(compute_i2, factor=1)
+    compute_i2_o_o_i, compute_i2_o_i = s[compute].split(compute_i2_o_i, factor=7)
     compute_i2_o_o_o, compute_i2_o_o_i = s[compute].split(compute_i2_o_o_i, factor=1)
     compute_i3_o_i, compute_i3_i = s[compute].split(compute_i3, factor=1)
     compute_i3_o_o_i, compute_i3_o_i = s[compute].split(compute_i3_o_i, factor=7)
@@ -954,16 +649,16 @@ They can be used for debugging and learning the behavior of the auto-scheduler.
     compute_i0_o_i_i1_o_i_fused_i2_o_i_fused_i3_o_i_fused = s[compute].fuse(compute_i0_o_i, compute_i1_o_i, compute_i2_o_i, compute_i3_o_i)
     s[compute].bind(compute_i0_o_i_i1_o_i_fused_i2_o_i_fused_i3_o_i_fused, te.thread_axis("threadIdx.x"))
     kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[kernel_shared].fuse(kernel_shared_ax0, kernel_shared_ax1, kernel_shared_ax2, kernel_shared_ax3)
-    kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=6)
+    kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
     s[kernel_shared].vectorize(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
-    kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=224)
+    kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=49)
     s[kernel_shared].bind(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis("threadIdx.x"))
     pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[pad_temp_shared].fuse(pad_temp_shared_ax0, pad_temp_shared_ax1, pad_temp_shared_ax2, pad_temp_shared_ax3)
-    pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=2)
+    pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=6)
     s[pad_temp_shared].vectorize(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
-    pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=224)
+    pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=49)
     s[pad_temp_shared].bind(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis("threadIdx.x"))
-    s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, "auto_unroll_max_step", 1024)
+    s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, "auto_unroll_max_step", 512)
     s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, "unroll_explicit", True)
 
     CUDA source code:
@@ -981,566 +676,257 @@ They can be used for debugging and learning the behavior of the auto-scheduler.
       #define int64_t long long
       #define uint64_t unsigned long long
     #endif
-    extern "C" __global__ void __launch_bounds__(224) default_function_kernel0(float* __restrict__ data, float* __restrict__ kernel, float* __restrict__ compute, float* __restrict__ bias) {
-      float conv2d_nchw[14];
-      __shared__ float pad_temp_shared[324];
-      __shared__ float kernel_shared[2304];
+    extern "C" __global__ void __launch_bounds__(49) default_function_kernel0(float* __restrict__ data, float* __restrict__ kernel, float* __restrict__ compute, float* __restrict__ bias) {
+      float conv2d_nchw[16];
+      __shared__ float pad_temp_shared[252];
+      __shared__ float kernel_shared[192];
       conv2d_nchw[0] = 0.000000e+00f;
-      conv2d_nchw[1] = 0.000000e+00f;
       conv2d_nchw[2] = 0.000000e+00f;
-      conv2d_nchw[3] = 0.000000e+00f;
       conv2d_nchw[4] = 0.000000e+00f;
-      conv2d_nchw[5] = 0.000000e+00f;
       conv2d_nchw[6] = 0.000000e+00f;
-      conv2d_nchw[7] = 0.000000e+00f;
       conv2d_nchw[8] = 0.000000e+00f;
-      conv2d_nchw[9] = 0.000000e+00f;
       conv2d_nchw[10] = 0.000000e+00f;
-      conv2d_nchw[11] = 0.000000e+00f;
       conv2d_nchw[12] = 0.000000e+00f;
+      conv2d_nchw[14] = 0.000000e+00f;
+      conv2d_nchw[1] = 0.000000e+00f;
+      conv2d_nchw[3] = 0.000000e+00f;
+      conv2d_nchw[5] = 0.000000e+00f;
+      conv2d_nchw[7] = 0.000000e+00f;
+      conv2d_nchw[9] = 0.000000e+00f;
+      conv2d_nchw[11] = 0.000000e+00f;
       conv2d_nchw[13] = 0.000000e+00f;
+      conv2d_nchw[15] = 0.000000e+00f;
       for (int rc_outer_outer = 0; rc_outer_outer < 128; ++rc_outer_outer) {
-        __syncthreads();
-        if (((int)threadIdx.x) < 162) {
-          pad_temp_shared[(((int)threadIdx.x) * 2)] = (((((9 <= ((((int)threadIdx.x) * 2) % 81)) && (((((int)threadIdx.x) * 2) % 81) < 72)) && (1 <= ((((int)threadIdx.x) * 2) % 9))) && (((((int)threadIdx.x) * 2) % 9) < 8)) ? data[(((((rc_outer_outer * 196) + (((((int)threadIdx.x) * 2) / 81) * 49)) + ((((((int)threadIdx.x) * 2) % 81) / 9) * 7)) + ((((int)threadIdx.x) * 2) % 9)) - 8)] : 0.000000e+00f);
-        }
-        if (((int)threadIdx.x) < 162) {
-          pad_temp_shared[((((int)threadIdx.x) * 2) + 1)] = (((((9 <= (((((int)threadIdx.x) * 2) + 1) % 81)) && ((((((int)threadIdx.x) * 2) + 1) % 81) < 72)) && (1 <= (((((int)threadIdx.x) * 2) + 1) % 9))) && ((((((int)threadIdx.x) * 2) + 1) % 9) < 8)) ? data[(((((rc_outer_outer * 196) + ((((((int)threadIdx.x) * 2) + 1) / 81) * 49)) + (((((((int)threadIdx.x) * 2) + 1) % 81) / 9) * 7)) + (((((int)threadIdx.x) * 2) + 1) % 9)) - 8)] : 0.000000e+00f);
-        }
-        kernel_shared[(((int)threadIdx.x) * 6)] = kernel[((((((int)blockIdx.x) * 294912) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((int)threadIdx.x) % 6) * 6))];
-        kernel_shared[((((int)threadIdx.x) * 6) + 1)] = kernel[(((((((int)blockIdx.x) * 294912) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((int)threadIdx.x) % 6) * 6)) + 1)];
-        kernel_shared[((((int)threadIdx.x) * 6) + 2)] = kernel[(((((((int)blockIdx.x) * 294912) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((int)threadIdx.x) % 6) * 6)) + 2)];
-        kernel_shared[((((int)threadIdx.x) * 6) + 3)] = kernel[(((((((int)blockIdx.x) * 294912) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((int)threadIdx.x) % 6) * 6)) + 3)];
-        kernel_shared[((((int)threadIdx.x) * 6) + 4)] = kernel[(((((((int)blockIdx.x) * 294912) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((int)threadIdx.x) % 6) * 6)) + 4)];
-        kernel_shared[((((int)threadIdx.x) * 6) + 5)] = kernel[(((((((int)blockIdx.x) * 294912) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((int)threadIdx.x) % 6) * 6)) + 5)];
-        if (((int)threadIdx.x) < 160) {
-          kernel_shared[((((int)threadIdx.x) * 6) + 1344)] = kernel[((((((int)blockIdx.x) * 294912) + (((((int)threadIdx.x) + 224) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((((int)threadIdx.x) * 2) + 4) % 12) * 3))];
-        }
-        if (((int)threadIdx.x) < 160) {
-          kernel_shared[((((int)threadIdx.x) * 6) + 1345)] = kernel[(((((((int)blockIdx.x) * 294912) + (((((int)threadIdx.x) + 224) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((((int)threadIdx.x) * 2) + 4) % 12) * 3)) + 1)];
-        }
-        if (((int)threadIdx.x) < 160) {
-          kernel_shared[((((int)threadIdx.x) * 6) + 1346)] = kernel[(((((((int)blockIdx.x) * 294912) + (((((int)threadIdx.x) + 224) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((((int)threadIdx.x) * 2) + 4) % 12) * 3)) + 2)];
-        }
-        if (((int)threadIdx.x) < 160) {
-          kernel_shared[((((int)threadIdx.x) * 6) + 1347)] = kernel[((((((int)blockIdx.x) * 294912) + (((((int)threadIdx.x) + 224) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((((int)threadIdx.x) * 2) + 5) % 12) * 3))];
-        }
-        if (((int)threadIdx.x) < 160) {
-          kernel_shared[((((int)threadIdx.x) * 6) + 1348)] = kernel[(((((((int)blockIdx.x) * 294912) + (((((int)threadIdx.x) + 224) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((((int)threadIdx.x) * 2) + 5) % 12) * 3)) + 1)];
-        }
-        if (((int)threadIdx.x) < 160) {
-          kernel_shared[((((int)threadIdx.x) * 6) + 1349)] = kernel[(((((((int)blockIdx.x) * 294912) + (((((int)threadIdx.x) + 224) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((((int)threadIdx.x) * 2) + 5) % 12) * 3)) + 2)];
+        for (int ry_outer_outer = 0; ry_outer_outer < 3; ++ry_outer_outer) {
+          __syncthreads();
+          if (((int)threadIdx.x) < 42) {
+            pad_temp_shared[(((int)threadIdx.x) * 6)] = (((((1 <= ((((((int)threadIdx.x) * 2) % 21) / 3) + ry_outer_outer)) && (((((((int)threadIdx.x) * 2) % 21) / 3) + ry_outer_outer) < 8)) && (1 <= ((((int)threadIdx.x) * 6) % 9))) && (((((int)threadIdx.x) * 6) % 9) < 8)) ? data[(((((rc_outer_outer * 196) + (((((int)threadIdx.x) * 2) / 3) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) * 6) % 9)) - 8)] : 0.000000e+00f);
+          }
+          if (((int)threadIdx.x) < 42) {
+            pad_temp_shared[((((int)threadIdx.x) * 6) + 1)] = (((((1 <= ((((((int)threadIdx.x) * 2) % 21) / 3) + ry_outer_outer)) && (((((((int)threadIdx.x) * 2) % 21) / 3) + ry_outer_outer) < 8)) && (1 <= (((((int)threadIdx.x) * 6) + 1) % 9))) && ((((((int)threadIdx.x) * 6) + 1) % 9) < 8)) ? data[(((((rc_outer_outer * 196) + (((((int)threadIdx.x) * 2) / 3) * 7)) + (ry_outer_outer * 7)) + (((((int)threadIdx.x) * 6) + 1) % 9)) - 8)] : 0.000000e+00f);
+          }
+          if (((int)threadIdx.x) < 42) {
+            pad_temp_shared[((((int)threadIdx.x) * 6) + 2)] = (((((1 <= ((((((int)threadIdx.x) * 2) % 21) / 3) + ry_outer_outer)) && (((((((int)threadIdx.x) * 2) % 21) / 3) + ry_outer_outer) < 8)) && (1 <= (((((int)threadIdx.x) * 6) + 2) % 9))) && ((((((int)threadIdx.x) * 6) + 2) % 9) < 8)) ? data[(((((rc_outer_outer * 196) + (((((int)threadIdx.x) * 2) / 3) * 7)) + (ry_outer_outer * 7)) + (((((int)threadIdx.x) * 6) + 2) % 9)) - 8)] : 0.000000e+00f);
+          }
+          if (((int)threadIdx.x) < 42) {
+            pad_temp_shared[((((int)threadIdx.x) * 6) + 3)] = (((((1 <= (((((((int)threadIdx.x) * 2) + 1) % 21) / 3) + ry_outer_outer)) && ((((((((int)threadIdx.x) * 2) + 1) % 21) / 3) + ry_outer_outer) < 8)) && (1 <= (((((int)threadIdx.x) * 6) + 3) % 9))) && ((((((int)threadIdx.x) * 6) + 3) % 9) < 8)) ? data[(((((rc_outer_outer * 196) + ((((((int)threadIdx.x) * 2) + 1) / 3) * 7)) + (ry_outer_outer * 7)) + (((((int)threadIdx.x) * 6) + 3) % 9)) - 8)] : 0.000000e+00f);
+          }
+          if (((int)threadIdx.x) < 42) {
+            pad_temp_shared[((((int)threadIdx.x) * 6) + 4)] = (((((1 <= (((((((int)threadIdx.x) * 2) + 1) % 21) / 3) + ry_outer_outer)) && ((((((((int)threadIdx.x) * 2) + 1) % 21) / 3) + ry_outer_outer) < 8)) && (1 <= (((((int)threadIdx.x) * 6) + 4) % 9))) && ((((((int)threadIdx.x) * 6) + 4) % 9) < 8)) ? data[(((((rc_outer_outer * 196) + ((((((int)threadIdx.x) * 2) + 1) / 3) * 7)) + (ry_outer_outer * 7)) + (((((int)threadIdx.x) * 6) + 4) % 9)) - 8)] : 0.000000e+00f);
+          }
+          if (((int)threadIdx.x) < 42) {
+            pad_temp_shared[((((int)threadIdx.x) * 6) + 5)] = (((((1 <= (((((((int)threadIdx.x) * 2) + 1) % 21) / 3) + ry_outer_outer)) && ((((((((int)threadIdx.x) * 2) + 1) % 21) / 3) + ry_outer_outer) < 8)) && (1 <= (((((int)threadIdx.x) * 6) + 5) % 9))) && ((((((int)threadIdx.x) * 6) + 5) % 9) < 8)) ? data[(((((rc_outer_outer * 196) + ((((((int)threadIdx.x) * 2) + 1) / 3) * 7)) + (ry_outer_outer * 7)) + (((((int)threadIdx.x) * 6) + 5) % 9)) - 8)] : 0.000000e+00f);
+          }
+          kernel_shared[((int)threadIdx.x)] = kernel[((((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 12) * 4608)) + (rc_outer_outer * 36)) + (((((int)threadIdx.x) % 12) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3))];
+          kernel_shared[(((int)threadIdx.x) + 49)] = kernel[((((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 49) / 12) * 4608)) + (rc_outer_outer * 36)) + ((((((int)threadIdx.x) + 1) % 12) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+          kernel_shared[(((int)threadIdx.x) + 98)] = kernel[((((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 98) / 12) * 4608)) + (rc_outer_outer * 36)) + ((((((int)threadIdx.x) + 2) % 12) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+          if (((int)threadIdx.x) < 45) {
+            kernel_shared[(((int)threadIdx.x) + 147)] = kernel[((((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 147) / 12) * 4608)) + (rc_outer_outer * 36)) + ((((((int)threadIdx.x) / 3) + 1) & 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3))];
+          }
+          __syncthreads();
+          conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[0]));
+          conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[24]));
+          conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[48]));
+          conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[72]));
+          conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[96]));
+          conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[120]));
+          conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[144]));
+          conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[168]));
+          conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[12]));
+          conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[36]));
+          conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[60]));
+          conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[84]));
+          conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[108]));
+          conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[132]));
+          conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[156]));
+          conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[180]));
+          conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[3]));
+          conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[27]));
+          conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[51]));
+          conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[75]));
+          conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[99]));
+          conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[123]));
+          conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[147]));
+          conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[171]));
+          conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[15]));
+          conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[39]));
+          conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[63]));
+          conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[87]));
+          conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[111]));
+          conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[135]));
+          conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[159]));
+          conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[183]));
+          conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[6]));
+          conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[30]));
+          conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[54]));
+          conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[78]));
+          conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[102]));
+          conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[126]));
+          conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[150]));
+          conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[174]));
+          conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[18]));
+          conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[42]));
+          conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[66]));
+          conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[90]));
+          conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[114]));
+          conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[138]));
+          conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[162]));
+          conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[186]));
+          conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[9]));
+          conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[33]));
+          conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[57]));
+          conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[81]));
+          conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[105]));
+          conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[129]));
+          conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[153]));
+          conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[177]));
+          conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[21]));
+          conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[45]));
+          conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[69]));
+          conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[93]));
+          conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[117]));
+          conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[141]));
+          conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[165]));
+          conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[189]));
+          conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[1]));
+          conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[25]));
+          conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[49]));
+          conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[73]));
+          conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[97]));
+          conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[121]));
+          conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[145]));
+          conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[169]));
+          conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[13]));
+          conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[37]));
+          conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[61]));
+          conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[85]));
+          conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[109]));
+          conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[133]));
+          conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[157]));
+          conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[181]));
+          conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[4]));
+          conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[28]));
+          conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[52]));
+          conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[76]));
+          conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[100]));
+          conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[124]));
+          conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[148]));
+          conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[172]));
+          conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[16]));
+          conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[40]));
+          conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[64]));
+          conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[88]));
+          conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[112]));
+          conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[136]));
+          conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[160]));
+          conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[184]));
+          conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[7]));
+          conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[31]));
+          conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[55]));
+          conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[79]));
+          conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[103]));
+          conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[127]));
+          conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[151]));
+          conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[175]));
+          conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[19]));
+          conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[43]));
+          conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[67]));
+          conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[91]));
+          conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[115]));
+          conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[139]));
+          conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[163]));
+          conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[187]));
+          conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[10]));
+          conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[34]));
+          conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[58]));
+          conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[82]));
+          conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[106]));
+          conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[130]));
+          conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[154]));
+          conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[178]));
+          conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[22]));
+          conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[46]));
+          conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[70]));
+          conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[94]));
+          conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[118]));
+          conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[142]));
+          conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[166]));
+          conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[190]));
+          conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[2]));
+          conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[26]));
+          conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[50]));
+          conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[74]));
+          conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[98]));
+          conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[122]));
+          conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[146]));
+          conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[170]));
+          conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[14]));
+          conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[38]));
+          conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[62]));
+          conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[86]));
+          conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[110]));
+          conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[134]));
+          conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[158]));
+          conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[182]));
+          conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[5]));
+          conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[29]));
+          conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[53]));
+          conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[77]));
+          conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[101]));
+          conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[125]));
+          conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[149]));
+          conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[173]));
+          conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[17]));
+          conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[41]));
+          conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[65]));
+          conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[89]));
+          conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[113]));
+          conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[137]));
+          conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[161]));
+          conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[185]));
+          conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[8]));
+          conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[32]));
+          conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[56]));
+          conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[80]));
+          conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[104]));
+          conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[128]));
+          conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[152]));
+          conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[176]));
+          conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[20]));
+          conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[44]));
+          conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[68]));
+          conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[92]));
+          conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[116]));
+          conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[140]));
+          conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[164]));
+          conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[188]));
+          conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[11]));
+          conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[35]));
+          conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[59]));
+          conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[83]));
+          conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[107]));
+          conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[131]));
+          conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[155]));
+          conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[179]));
+          conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[23]));
+          conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[47]));
+          conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[71]));
+          conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[95]));
+          conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[119]));
+          conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[143]));
+          conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[167]));
+          conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[191]));
         }
-        __syncthreads();
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[(((int)threadIdx.x) % 7)] * kernel_shared[((((int)threadIdx.x) / 7) * 72)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 9)] * kernel_shared[((((int)threadIdx.x) / 7) * 72)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 18)] * kernel_shared[((((int)threadIdx.x) / 7) * 72)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 27)] * kernel_shared[((((int)threadIdx.x) / 7) * 72)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 36)] * kernel_shared[((((int)threadIdx.x) / 7) * 72)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 45)] * kernel_shared[((((int)threadIdx.x) / 7) * 72)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 54)] * kernel_shared[((((int)threadIdx.x) / 7) * 72)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[(((int)threadIdx.x) % 7)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 36)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 9)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 36)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 18)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 36)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 27)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 36)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 36)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 36)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 45)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 36)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 54)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 36)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 9)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 3)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 18)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 3)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 27)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 3)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 36)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 3)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 45)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 3)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 54)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 3)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 63)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 3)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 9)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 39)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 18)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 39)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 27)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 39)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 36)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 39)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 45)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 39)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 54)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 39)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 63)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 39)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 18)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 6)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 27)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 6)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 36)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 6)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 45)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 6)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 54)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 6)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 63)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 6)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 72)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 6)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 18)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 42)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 27)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 42)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 36)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 42)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 45)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 42)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 54)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 42)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 63)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 42)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 72)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 42)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 81)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 9)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 90)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 9)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 99)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 9)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 108)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 9)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 117)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 9)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 126)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 9)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 135)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 9)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 81)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 45)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 90)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 45)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 99)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 45)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 108)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 45)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 117)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 45)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 126)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 45)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 135)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 45)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 90)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 12)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 99)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 12)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 108)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 12)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 117)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 12)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 126)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 12)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 135)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 12)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 144)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 12)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 90)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 48)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 99)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 48)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 108)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 48)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 117)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 48)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 126)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 48)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 135)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 48)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 144)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 48)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 99)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 15)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 108)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 15)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 117)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 15)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 126)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 15)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 135)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 15)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 144)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 15)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 153)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 15)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 99)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 51)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 108)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 51)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 117)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 51)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 126)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 51)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 135)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 51)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 144)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 51)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 153)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 51)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 162)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 18)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 171)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 18)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 180)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 18)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 189)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 18)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 198)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 18)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 207)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 18)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 216)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 18)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 162)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 54)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 171)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 54)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 180)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 54)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 189)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 54)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 198)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 54)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 207)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 54)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 216)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 54)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 171)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 21)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 180)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 21)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 189)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 21)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 198)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 21)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 207)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 21)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 216)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 21)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 225)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 21)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 171)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 57)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 180)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 57)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 189)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 57)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 198)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 57)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 207)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 57)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 216)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 57)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 225)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 57)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 180)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 24)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 189)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 24)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 198)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 24)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 207)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 24)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 216)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 24)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 225)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 24)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 234)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 24)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 180)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 60)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 189)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 60)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 198)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 60)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 207)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 60)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 216)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 60)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 225)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 60)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 234)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 60)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 243)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 27)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 252)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 27)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 261)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 27)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 270)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 27)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 279)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 27)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 288)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 27)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 297)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 27)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 243)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 63)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 252)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 63)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 261)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 63)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 270)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 63)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 279)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 63)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 288)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 63)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 297)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 63)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 252)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 30)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 261)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 30)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 270)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 30)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 279)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 30)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 288)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 30)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 297)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 30)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 306)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 30)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 252)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 66)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 261)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 66)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 270)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 66)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 279)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 66)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 288)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 66)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 297)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 66)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 306)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 66)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 261)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 33)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 270)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 33)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 279)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 33)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 288)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 33)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 297)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 33)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 306)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 33)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 315)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 33)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 261)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 69)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 270)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 69)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 279)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 69)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 288)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 69)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 297)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 69)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 306)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 69)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 315)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 69)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 1)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 1)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 10)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 1)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 19)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 1)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 28)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 1)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 37)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 1)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 46)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 1)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 55)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 1)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 1)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 37)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 10)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 37)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 19)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 37)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 28)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 37)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 37)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 37)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 46)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 37)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 55)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 37)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 10)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 4)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 19)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 4)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 28)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 4)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 37)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 4)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 46)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 4)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 55)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 4)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 64)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 4)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 10)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 40)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 19)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 40)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 28)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 40)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 37)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 40)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 46)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 40)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 55)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 40)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 64)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 40)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 19)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 7)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 28)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 7)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 37)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 7)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 46)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 7)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 55)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 7)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 64)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 7)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 73)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 7)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 19)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 43)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 28)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 43)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 37)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 43)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 46)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 43)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 55)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 43)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 64)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 43)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 73)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 43)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 82)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 10)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 91)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 10)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 100)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 10)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 109)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 10)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 118)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 10)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 127)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 10)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 136)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 10)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 82)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 46)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 91)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 46)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 100)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 46)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 109)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 46)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 118)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 46)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 127)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 46)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 136)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 46)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 91)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 13)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 100)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 13)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 109)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 13)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 118)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 13)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 127)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 13)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 136)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 13)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 145)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 13)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 91)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 49)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 100)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 49)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 109)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 49)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 118)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 49)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 127)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 49)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 136)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 49)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 145)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 49)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 100)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 16)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 109)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 16)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 118)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 16)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 127)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 16)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 136)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 16)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 145)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 16)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 154)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 16)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 100)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 52)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 109)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 52)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 118)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 52)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 127)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 52)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 136)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 52)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 145)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 52)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 154)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 52)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 163)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 19)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 172)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 19)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 181)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 19)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 190)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 19)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 199)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 19)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 208)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 19)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 217)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 19)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 163)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 55)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 172)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 55)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 181)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 55)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 190)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 55)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 199)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 55)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 208)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 55)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 217)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 55)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 172)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 22)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 181)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 22)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 190)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 22)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 199)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 22)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 208)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 22)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 217)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 22)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 226)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 22)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 172)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 58)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 181)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 58)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 190)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 58)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 199)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 58)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 208)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 58)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 217)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 58)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 226)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 58)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 181)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 25)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 190)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 25)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 199)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 25)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 208)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 25)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 217)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 25)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 226)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 25)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 235)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 25)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 181)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 61)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 190)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 61)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 199)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 61)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 208)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 61)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 217)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 61)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 226)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 61)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 235)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 61)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 244)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 28)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 253)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 28)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 262)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 28)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 271)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 28)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 280)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 28)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 289)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 28)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 298)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 28)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 244)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 64)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 253)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 64)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 262)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 64)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 271)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 64)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 280)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 64)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 289)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 64)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 298)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 64)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 253)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 31)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 262)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 31)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 271)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 31)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 280)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 31)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 289)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 31)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 298)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 31)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 307)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 31)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 253)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 67)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 262)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 67)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 271)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 67)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 280)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 67)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 289)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 67)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 298)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 67)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 307)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 67)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 262)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 34)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 271)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 34)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 280)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 34)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 289)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 34)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 298)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 34)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 307)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 34)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 316)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 34)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 262)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 70)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 271)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 70)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 280)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 70)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 289)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 70)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 298)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 70)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 307)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 70)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 316)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 70)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 2)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 2)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 11)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 2)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 20)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 2)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 29)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 2)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 38)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 2)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 47)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 2)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 56)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 2)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 2)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 38)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 11)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 38)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 20)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 38)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 29)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 38)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 38)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 38)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 47)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 38)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 56)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 38)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 11)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 5)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 20)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 5)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 29)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 5)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 38)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 5)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 47)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 5)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 56)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 5)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 65)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 5)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 11)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 41)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 20)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 41)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 29)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 41)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 38)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 41)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 47)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 41)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 56)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 41)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 65)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 41)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 20)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 8)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 29)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 8)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 38)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 8)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 47)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 8)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 56)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 8)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 65)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 8)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 74)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 8)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 20)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 44)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 29)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 44)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 38)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 44)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 47)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 44)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 56)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 44)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 65)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 44)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 74)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 44)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 83)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 11)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 92)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 11)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 101)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 11)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 110)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 11)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 119)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 11)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 128)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 11)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 137)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 11)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 83)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 47)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 92)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 47)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 101)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 47)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 110)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 47)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 119)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 47)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 128)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 47)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 137)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 47)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 92)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 14)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 101)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 14)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 110)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 14)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 119)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 14)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 128)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 14)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 137)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 14)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 146)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 14)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 92)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 50)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 101)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 50)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 110)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 50)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 119)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 50)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 128)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 50)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 137)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 50)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 146)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 50)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 101)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 17)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 110)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 17)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 119)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 17)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 128)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 17)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 137)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 17)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 146)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 17)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 155)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 17)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 101)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 53)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 110)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 53)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 119)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 53)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 128)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 53)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 137)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 53)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 146)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 53)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 155)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 53)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 164)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 20)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 173)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 20)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 182)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 20)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 191)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 20)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 200)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 20)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 209)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 20)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 218)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 20)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 164)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 56)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 173)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 56)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 182)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 56)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 191)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 56)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 200)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 56)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 209)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 56)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 218)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 56)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 173)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 23)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 182)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 23)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 191)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 23)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 200)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 23)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 209)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 23)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 218)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 23)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 227)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 23)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 173)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 59)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 182)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 59)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 191)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 59)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 200)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 59)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 209)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 59)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 218)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 59)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 227)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 59)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 182)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 26)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 191)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 26)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 200)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 26)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 209)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 26)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 218)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 26)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 227)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 26)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 236)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 26)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 182)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 62)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 191)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 62)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 200)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 62)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 209)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 62)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 218)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 62)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 227)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 62)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 236)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 62)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 245)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 29)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 254)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 29)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 263)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 29)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 272)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 29)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 281)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 29)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 290)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 29)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 299)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 29)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 245)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 65)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 254)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 65)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 263)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 65)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 272)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 65)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 281)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 65)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 290)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 65)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 299)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 65)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 254)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 32)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 263)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 32)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 272)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 32)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 281)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 32)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 290)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 32)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 299)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 32)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 308)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 32)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 254)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 68)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 263)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 68)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 272)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 68)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 281)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 68)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 290)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 68)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 299)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 68)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 308)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 68)]));
-        conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 263)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 35)]));
-        conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 272)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 35)]));
-        conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 281)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 35)]));
-        conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 290)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 35)]));
-        conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 299)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 35)]));
-        conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 308)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 35)]));
-        conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 317)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 35)]));
-        conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 263)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 71)]));
-        conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 272)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 71)]));
-        conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 281)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 71)]));
-        conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 290)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 71)]));
-        conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 299)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 71)]));
-        conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 308)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 71)]));
-        conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 317)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 71)]));
       }
       for (int i1_inner = 0; i1_inner < 2; ++i1_inner) {
-        for (int i2_inner = 0; i2_inner < 7; ++i2_inner) {
-          compute[(((((((int)blockIdx.x) * 3136) + ((((int)threadIdx.x) / 7) * 98)) + (i1_inner * 49)) + (i2_inner * 7)) + (((int)threadIdx.x) % 7))] = max((conv2d_nchw[((i1_inner * 7) + i2_inner)] + bias[(((((int)blockIdx.x) * 64) + ((((int)threadIdx.x) / 7) * 2)) + i1_inner)]), 0.000000e+00f);
-        }
+        compute[(((((int)blockIdx.x) * 784) + (i1_inner * 49)) + ((int)threadIdx.x))] = max((conv2d_nchw[i1_inner] + bias[((((int)blockIdx.x) * 16) + i1_inner)]), 0.000000e+00f);
+        compute[((((((int)blockIdx.x) * 784) + (i1_inner * 49)) + ((int)threadIdx.x)) + 98)] = max((conv2d_nchw[(i1_inner + 2)] + bias[(((((int)blockIdx.x) * 16) + i1_inner) + 2)]), 0.000000e+00f);
+        compute[((((((int)blockIdx.x) * 784) + (i1_inner * 49)) + ((int)threadIdx.x)) + 196)] = max((conv2d_nchw[(i1_inner + 4)] + bias[(((((int)blockIdx.x) * 16) + i1_inner) + 4)]), 0.000000e+00f);
+        compute[((((((int)blockIdx.x) * 784) + (i1_inner * 49)) + ((int)threadIdx.x)) + 294)] = max((conv2d_nchw[(i1_inner + 6)] + bias[(((((int)blockIdx.x) * 16) + i1_inner) + 6)]), 0.000000e+00f);
+        compute[((((((int)blockIdx.x) * 784) + (i1_inner * 49)) + ((int)threadIdx.x)) + 392)] = max((conv2d_nchw[(i1_inner + 8)] + bias[(((((int)blockIdx.x) * 16) + i1_inner) + 8)]), 0.000000e+00f);
+        compute[((((((int)blockIdx.x) * 784) + (i1_inner * 49)) + ((int)threadIdx.x)) + 490)] = max((conv2d_nchw[(i1_inner + 10)] + bias[(((((int)blockIdx.x) * 16) + i1_inner) + 10)]), 0.000000e+00f);
+        compute[((((((int)blockIdx.x) * 784) + (i1_inner * 49)) + ((int)threadIdx.x)) + 588)] = max((conv2d_nchw[(i1_inner + 12)] + bias[(((((int)blockIdx.x) * 16) + i1_inner) + 12)]), 0.000000e+00f);
+        compute[((((((int)blockIdx.x) * 784) + (i1_inner * 49)) + ((int)threadIdx.x)) + 686)] = max((conv2d_nchw[(i1_inner + 14)] + bias[(((((int)blockIdx.x) * 16) + i1_inner) + 14)]), 0.000000e+00f);
       }
     }
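
The rewritten write-back above stores 16 output channels per thread block (784 = 16 * 49 output elements), with each of the apparently 49 threads handling one position of the 7x7 output plane. A rough NumPy-style sketch of the index arithmetic, under those assumptions (all names below are illustrative, not part of the generated kernel):

.. code-block:: python

    def writeback_indices(block_idx, thread_idx):
        # Mirrors: compute[blockIdx.x*784 + i1_inner*49 + threadIdx.x + 98*k]
        #        = relu(conv2d_nchw[i1_inner + 2*k]
        #               + bias[blockIdx.x*16 + i1_inner + 2*k])
        triples = []
        for k in range(8):             # eight unrolled channel groups
            for i1_inner in range(2):  # two channels per group
                out_idx = block_idx * 784 + i1_inner * 49 + thread_idx + 98 * k
                acc_idx = i1_inner + 2 * k
                bias_idx = block_idx * 16 + i1_inner + 2 * k
                triples.append((out_idx, acc_idx, bias_idx))
        return triples

    # Each (block, thread) pair touches 16 distinct output slots, one per channel.
    assert len({t[0] for t in writeback_indices(0, 0)}) == 16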
 
 @@ -1602,7 +988,7 @@ In the example below we resume the status and do 5 more trials.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 3 minutes  28.877 seconds)
+   **Total running time of the script:** ( 3 minutes  25.791 seconds)
 
 
 .. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py:
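
For reference, the resume step that this timing covers typically looks like the following sketch, where ``task`` and ``log_file`` are assumed to come from the earlier steps of the same tutorial:

.. code-block:: python

    from tvm import auto_scheduler

    # Rebuild the cost model from prior measurements, preload them into the
    # search policy, then run a handful of additional trials.
    cost_model = auto_scheduler.XGBModel()
    cost_model.update_from_file(log_file)
    search_policy = auto_scheduler.SketchPolicy(
        task,
        cost_model,
        init_search_callbacks=[auto_scheduler.PreloadMeasuredStates(log_file)],
    )
    tune_option = auto_scheduler.TuningOptions(
        num_measure_trials=5,
        measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
    )
    task.tune(tune_option, search_policy=search_policy)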
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
index fe32da94ee..823fb5a52f 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
@@ -643,7 +643,7 @@ so we can read the log file and load the best schedules.
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-       8.2600       8.2629       8.2651       8.2518       0.0058   
+       8.1702       8.1746       8.1787       8.1573       0.0093   
                
 
 
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
index 1488be6a3b..fb1345b792 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
@@ -662,7 +662,7 @@ so we can read the log file and load the best schedules.
     Evaluate inference time cost...
     Execution time summary:
      mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)  
-      756.5330     756.6696     756.9334     755.9959      0.3947   
+      744.2726     744.0238     745.1500     743.6439      0.6395   
                
 
 
@@ -690,7 +690,7 @@ Other Tips
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  31.029 seconds)
+   **Total running time of the script:** ( 1 minutes  29.019 seconds)
 
 
 .. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_network_x86.py:
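
The "Execution time summary" tables in the two network-tuning pages above come from benchmarking the compiled module; a minimal sketch, assuming ``module`` (a graph executor ``GraphModule``) and ``dev`` from the tutorial's build step:

.. code-block:: python

    # Run the compiled module repeatedly; printing the returned
    # BenchmarkResult yields the mean/median/max/min/std table shown above.
    print("Evaluate inference time cost...")
    print(module.benchmark(dev, repeat=3, min_repeat_ms=500))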
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_sparse_x86.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_sparse_x86.rst.txt
index c7888db6f0..a81ba78b9b 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_sparse_x86.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_sparse_x86.rst.txt
@@ -397,21 +397,23 @@ layout transformation, parallelization, vectorization, unrolling, and operator f
                  placeholder_4: Buffer(placeholder_14: Pointer(float32), float32, [65536], []),
                  compute: Buffer(compute_2: Pointer(float32), float32, [65536], [])}
       buffer_map = {placeholder_5: placeholder, placeholder_6: placeholder_1, placeholder_7: placeholder_2, placeholder_8: placeholder_3, placeholder_9: placeholder_4, compute_1: compute}
-      preflattened_buffer_map = {placeholder_9: placeholder_15: Buffer(placeholder_14, float32, [128, 512], []), compute_1: compute_3: Buffer(compute_2, float32, [128, 512], []), placeholder_5: placeholder_16: Buffer(placeholder_10, float32, [128, 256], []), placeholder_6: placeholder_17: Buffer(placeholder_11, float32, [4916, 16, 1], []), placeholder_8: placeholder_18: Buffer(placeholder_13, int32, [33], []), placeholder_7: placeholder_19: Buffer(placeholder_12, int32, [4916], [])} {
+      preflattened_buffer_map = {placeholder_8: placeholder_15: Buffer(placeholder_13, int32, [33], []), placeholder_9: placeholder_16: Buffer(placeholder_14, float32, [128, 512], []), placeholder_6: placeholder_17: Buffer(placeholder_11, float32, [4916, 16, 1], []), placeholder_5: placeholder_18: Buffer(placeholder_10, float32, [128, 256], []), compute_1: compute_3: Buffer(compute_2, float32, [128, 512], []), placeholder_7: placeholder_19: Buffer(placeholder_12, int32, [4916], [])} {
       for (i0.outer.i1.outer.fused: int32, 0, 64) "parallel" {
         allocate(compute_4: Pointer(global float32), float32, [1024]), storage_scope = global {
-          for (nb_j.inner: int32, 0, 2) {
-            for (i.inner.init: int32, 0, 32) {
-              for (j.init: int32, 0, 16) {
-                compute_5: Buffer(compute_4, float32, [1024], [])[(((i.inner.init*32) + (nb_j.inner*16)) + j.init)] = 0f32
+          for (i.outer.inner: int32, 0, 4) {
+            for (nb_j.inner: int32, 0, 2) {
+              for (i.inner.init: int32, 0, 8) {
+                for (j.init: int32, 0, 16) {
+                  compute_5: Buffer(compute_4, float32, [1024], [])[((((i.outer.inner*256) + (i.inner.init*32)) + (nb_j.inner*16)) + j.init)] = 0f32
+                }
               }
-            }
-            for (elem_idx: int32, 0, let cse_var_1: int32 = ((floormod(i0.outer.i1.outer.fused, 16)*2) + nb_j.inner) in (placeholder_3[(cse_var_1 + 1)] - placeholder_3[cse_var_1])) {
-              for (i.inner: int32, 0, 32) {
-                for (j: int32, 0, 16) {
-                  let cse_var_3: int32 = ((floormod(i0.outer.i1.outer.fused, 16)*2) + nb_j.inner)
-                  let cse_var_2: int32 = (((i.inner*32) + (nb_j.inner*16)) + j)
-                  compute_5[cse_var_2] = (compute_5[cse_var_2] + (placeholder_1[(((placeholder_3[cse_var_3]*16) + (elem_idx*16)) + j)]*max(placeholder[(((floordiv(i0.outer.i1.outer.fused, 16)*8192) + (i.inner*256)) + placeholder_2[(placeholder_3[cse_var_3] + elem_idx)])], 0f32)))
+              for (elem_idx: int32, 0, let cse_var_1: int32 = ((floormod(i0.outer.i1.outer.fused, 16)*2) + nb_j.inner) in (placeholder_3[(cse_var_1 + 1)] - placeholder_3[cse_var_1])) {
+                for (i.inner: int32, 0, 8) {
+                  for (j: int32, 0, 16) {
+                    let cse_var_3: int32 = ((floormod(i0.outer.i1.outer.fused, 16)*2) + nb_j.inner)
+                    let cse_var_2: int32 = ((((i.outer.inner*256) + (i.inner*32)) + (nb_j.inner*16)) + j)
+                    compute_5[cse_var_2] = (compute_5[cse_var_2] + (placeholder_1[(((placeholder_3[cse_var_3]*16) + (elem_idx*16)) + j)]*max(placeholder[((((floordiv(i0.outer.i1.outer.fused, 16)*8192) + (i.outer.inner*2048)) + (i.inner*256)) + placeholder_2[(placeholder_3[cse_var_3] + elem_idx)])], 0f32)))
+                  }
                 }
               }
             }
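
The change above splits the flat ``i.inner`` loop of extent 32 into a 4 x 8 tiling (``i.outer.inner`` x ``i.inner``), which is why the flattened buffer index gains an ``i.outer.inner*256`` term. A quick standalone check that the old and new loop nests cover the same 1024 slots of ``compute_5``:

.. code-block:: python

    # Flat indexing from the old schedule: i in [0, 32).
    flat = [i * 32 + nb * 16 + j
            for nb in range(2) for i in range(32) for j in range(16)]
    # Tiled indexing from the new schedule: io in [0, 4), i in [0, 8).
    tiled = [io * 256 + i * 32 + nb * 16 + j
             for io in range(4) for nb in range(2)
             for i in range(8) for j in range(16)]
    assert sorted(flat) == sorted(tiled) == list(range(1024))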
@@ -474,7 +476,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 1.620 ms
+    Execution time of this operator: 1.567 ms
 
 
 
diff --git a/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt b/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
index 80c0a1ba04..47d7db8b8b 100644
--- a/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
@@ -5,12 +5,12 @@
 
 Computation times
 =================
-**00:38.569** total execution time for **how_to_tune_with_autotvm** files:
+**00:52.134** total execution time for **how_to_tune_with_autotvm** files:
 
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)           | 00:38.533 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)           | 00:52.099 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``)               | 00:00.021 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``)               | 00:00.020 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_cuda.py` (``tune_relay_cuda.py``)             | 00:00.005 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt b/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
index 2ae390434f..90f4330254 100644
--- a/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
@@ -277,9 +277,7 @@ for this template
     waiting for device...
     device available
     Get devices for measurement successfully!
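
    The per-trial records that follow are produced by an AutoTVM tuning loop along these lines (a sketch based on the same tutorial; ``task`` is assumed to be the registered conv2d tuning task):

.. code-block:: python

    from tvm import autotvm

    # Build each candidate locally, run it on the GPU, and append every
    # measurement to a log file -- one "No: N  GFLOPS: ..." line per trial.
    measure_option = autotvm.measure_option(
        builder=autotvm.LocalBuilder(),
        runner=autotvm.LocalRunner(repeat=3, min_repeat_ms=100, timeout=4),
    )
    tuner = autotvm.tuner.XGBTuner(task)
    tuner.tune(
        n_trial=20,
        measure_option=measure_option,
        callbacks=[autotvm.callback.log_to_file("conv2d.log")],
    )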
-    No: 1   GFLOPS: 41.90/41.90     result: MeasureResult(costs=(0.005524936,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.104245662689209, timestamp=1664521849.1920621) [('tile_f', [-1, 4, 8, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 1, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4160669
-    No: 2   GFLOPS: 209.95/209.95   result: MeasureResult(costs=(0.0011026573379310344,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.8389084339141846, timestamp=1664521850.0930169)      [('tile_f', [-1, 1, 16, 8]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 16, 4]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7052038
-    No: 3   GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+    No: 1   GFLOPS: 0.00/0.00       result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 588, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 540, in _build_func_common
@@ -401,8 +399,9 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 871, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 2, 8]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 4, 128]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,5409063
-    No: 4   GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 32, 16]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 32, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4179304
+    No: 2   GFLOPS: 8.88/8.88       result: MeasureResult(costs=(0.02607909575,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.145456790924072, timestamp=1664540728.8273628)       [('tile_f', [-1, 32, 2, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 2, 2]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,8752769
+    No: 3   GFLOPS: 0.00/8.88       result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 588, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 540, in _build_func_common
@@ -524,8 +523,8 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 871, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 2, 8, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 32, 16]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,8271420
-    No: 5   GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 128, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 8, 2]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4885809
+    No: 4   GFLOPS: 0.00/8.88       result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 588, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 540, in _build_func_common
@@ -647,8 +646,9 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 871, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 8, 1, 16]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 32, 4]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2217327
-    No: 6   GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 4, 1, 4]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 16, 8]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,499282
+    No: 5   GFLOPS: 2.37/8.88       result: MeasureResult(costs=(0.09773433675,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.024892807006836, timestamp=1664540733.2792308)       [('tile_f', [-1, 8, 4, 16]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 1, 2]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,1778658
+    No: 6   GFLOPS: 0.00/8.88       result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 588, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 540, in _build_func_common
@@ -770,9 +770,10 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 871, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 128, 2, 2]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 1, 32]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3627871
-    No: 7   GFLOPS: 1.32/209.95     result: MeasureResult(costs=(0.1760204925,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.549057245254517, timestamp=1664521857.5830307)        [('tile_f', [-1, 8, 4, 4]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 2]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6620578
-    No: 8   GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 4, 1, 64]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 256, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1190402
+    No: 7   GFLOPS: 3.47/8.88       result: MeasureResult(costs=(0.06671308875,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.086146831512451, timestamp=1664540735.3611345)       [('tile_f', [-1, 1, 4, 64]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 2, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6782807
+    No: 8   GFLOPS: 21.12/21.12     result: MeasureResult(costs=(0.0109592353,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.3600950241088867, timestamp=1664540736.1205916)       [('tile_f', [-1, 32, 4, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 1, 2]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4876984
+    No: 9   GFLOPS: 0.00/21.12      result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 588, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 540, in _build_func_common
@@ -894,8 +895,8 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 871, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 128, 4]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 2, 128]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,3277035
-    No: 9   GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 2, 4, 64]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 32, 16]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,8269568
+    No: 10  GFLOPS: 0.00/21.12      result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 588, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 540, in _build_func_common
@@ -1017,132 +1018,162 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 871, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 32, 2, 2]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 64, 4]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2606629
-    No: 10  GFLOPS: 4.00/209.95     result: MeasureResult(costs=(0.0578777045,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.9959490299224854, timestamp=1664521859.7966175)       [('tile_f', [-1, 2, 8, 16]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 32, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,1761720
-    No: 11  GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
-      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 588, in __call__
-        func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
-      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 540, in _build_func_common
-        func = build(s, args, target_host=task.target_host, runtime=runtime)
-      File "/workspace/python/tvm/driver/build_module.py", line 227, in build
-        input_mod = lower(inputs, args, name=name, binds=binds)
-      File "/workspace/python/tvm/driver/build_module.py", line 134, in lower
-        return ffi.lower_schedule(inp, args, name, binds, simple_mode)
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 4, 2, 64]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 64, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6218946
+    No: 11  GFLOPS: 0.00/21.12      result: Traceback (most recent call last):
+      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 738, in __call__
+        yield remote, remote.load_module(os.path.split(build_result.filename)[1])
+      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 702, in run_through_rpc
+        costs = time_f(*args).results
+      File "/workspace/python/tvm/runtime/module.py", line 357, in evaluator
+        blob = feval(*args)
       File "tvm/_ffi/_cython/./packed_func.pxi", line 331, in tvm._ffi._cy3.core.PackedFuncBase.__call__
-      File "tvm/_ffi/_cython/./packed_func.pxi", line 276, in tvm._ffi._cy3.core.FuncCall
+      File "tvm/_ffi/_cython/./packed_func.pxi", line 262, in tvm._ffi._cy3.core.FuncCall
+      File "tvm/_ffi/_cython/./packed_func.pxi", line 251, in tvm._ffi._cy3.core.FuncCall3
       File "tvm/_ffi/_cython/./base.pxi", line 181, in tvm._ffi._cy3.core.CHECK_CALL
     tvm._ffi.base.TVMError: Traceback (most recent call last):
-      24: TVMFuncCall
+      4: TVMFuncCall
             at ../src/runtime/c_runtime_api.cc:477
-      23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
-            at ../include/tvm/runtime/packed_func.h:1217
-      22: Call
-            at ../include/tvm/runtime/packed_func.h:1213
-      21: operator()
-            at ../include/tvm/runtime/packed_func.h:1731
-      20: unpack_call<tvm::IRModule, 5, tvm::<lambda(tvm::te::Schedule, const tvm::runtime::Array<tvm::runtime::ObjectRef>&, const tvm::runtime::String&, const tvm::runtime::Map<tvm::te::Tensor, tvm::tir::Buffer>&, bool)> >
-            at ../include/tvm/runtime/packed_func.h:1671
-      19: run<>
-            at ../include/tvm/runtime/packed_func.h:1631
-      18: run<tvm::runtime::TVMMovableArgValueWithContext_>
-            at ../include/tvm/runtime/packed_func.h:1631
-      17: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
-            at ../include/tvm/runtime/packed_func.h:1631
-      16: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
-            at ../include/tvm/runtime/packed_func.h:1631
-      15: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
-            at ../include/tvm/runtime/packed_func.h:1631
-      14: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
-            at ../include/tvm/runtime/packed_func.h:1646
-      13: operator()
-            at ../src/driver/driver_api.cc:379
-      12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array<tvm::runtime::ObjectRef, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<tvm::te::Tensor, tvm::tir::Buffer, std::hash<tvm::te::Tensor>, std::equal_to<tvm::te::Tensor>, std::allocator<std::pair<tvm::te::Tensor const, tvm::tir::Buffer> > > const&, tvm::GlobalVarSupply, bool)
-            at ../src/driver/driver_api.cc:365
-      11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array<tvm::transform::Pass, void>)
-            at ../src/driver/driver_api.cc:260
-      10: tvm::transform::Pass::operator()(tvm::IRModule) const
-            at ../src/ir/transform.cc:258
-      9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
-            at ../src/ir/transform.cc:274
-      8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
-            at ../src/ir/transform.cc:453
-      7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
-            at ../src/ir/transform.cc:274
-      6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
-            at ../src/tir/ir/transform.cc:100
-      5: tvm::runtime::TypedPackedFunc<tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)>::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
-            at ../include/tvm/runtime/packed_func.h:1750
-      4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher<tvm::tir::PrimFunc>::run<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::runtime::PackedFunc const&, tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&)
-            at ../include/tvm/runtime/packed_func.h:1694
-      3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&) const
-            at ../include/tvm/runtime/packed_func.h:1618
-      2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+      3: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
             at ../include/tvm/runtime/packed_func.h:1217
-      1: Call
-            at ../include/tvm/runtime/packed_func.h:1213
-      0: operator()
-            at ../src/runtime/c_runtime_api.cc:534
-      File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
-      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 871, in verify_pass
-        raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel
+      2: tvm::runtime::RPCWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+            at ../src/runtime/rpc/rpc_module.cc:129
+      1: tvm::runtime::RPCClientSession::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)> const&)
+            at ../src/runtime/rpc/rpc_endpoint.cc:1009
+      0: tvm::runtime::RPCEndpoint::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)>)
+            at ../src/runtime/rpc/rpc_endpoint.cc:801
+      File "../src/runtime/rpc/rpc_endpoint.cc", line 801
+    TVMError: 
+    ---------------------------------------------------------------
+    An error occurred during the execution of TVM.
+    For more information, please see: https://tvm.apache.org/docs/errors.html
+    ---------------------------------------------------------------
+      Check failed: (code == RPCCode::kReturn) is false: code=kShutdown
+
+    During handling of the above exception, another exception occurred:
 
     Traceback (most recent call last):
-      24: TVMFuncCall
-            at ../src/runtime/c_runtime_api.cc:477
-      23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
-            at ../include/tvm/runtime/packed_func.h:1217
-      22: Call
-            at ../include/tvm/runtime/packed_func.h:1213
-      21: operator()
-            at ../include/tvm/runtime/packed_func.h:1731
-      20: unpack_call<tvm::IRModule, 5, tvm::<lambda(tvm::te::Schedule, const tvm::runtime::Array<tvm::runtime::ObjectRef>&, const tvm::runtime::String&, const tvm::runtime::Map<tvm::te::Tensor, tvm::tir::Buffer>&, bool)> >
-            at ../include/tvm/runtime/packed_func.h:1671
-      19: run<>
-            at ../include/tvm/runtime/packed_func.h:1631
-      18: run<tvm::runtime::TVMMovableArgValueWithContext_>
-            at ../include/tvm/runtime/packed_func.h:1631
-      17: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
-            at ../include/tvm/runtime/packed_func.h:1631
-      16: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
-            at ../include/tvm/runtime/packed_func.h:1631
-      15: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
-            at ../include/tvm/runtime/packed_func.h:1631
-      14: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
-            at ../include/tvm/runtime/packed_func.h:1646
-      13: operator()
-            at ../src/driver/driver_api.cc:379
-      12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array<tvm::runtime::ObjectRef, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<tvm::te::Tensor, tvm::tir::Buffer, std::hash<tvm::te::Tensor>, std::equal_to<tvm::te::Tensor>, std::allocator<std::pair<tvm::te::Tensor const, tvm::tir::Buffer> > > const&, tvm::GlobalVarSupply, bool)
-            at ../src/driver/driver_api.cc:365
-      11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array<tvm::transform::Pass, void>)
-            at ../src/driver/driver_api.cc:260
-      10: tvm::transform::Pass::operator()(tvm::IRModule) const
-            at ../src/ir/transform.cc:258
-      9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
-            at ../src/ir/transform.cc:274
-      8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
-            at ../src/ir/transform.cc:453
-      7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
-            at ../src/ir/transform.cc:274
-      6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
-            at ../src/tir/ir/transform.cc:100
-      5: tvm::runtime::TypedPackedFunc<tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)>::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
-            at ../include/tvm/runtime/packed_func.h:1750
-      4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher<tvm::tir::PrimFunc>::run<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::runtime::PackedFunc const&, tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&)
-            at ../include/tvm/runtime/packed_func.h:1694
-      3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&) const
+      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 702, in run_through_rpc
+        costs = time_f(*args).results
+      File "/usr/lib/python3.7/contextlib.py", line 130, in __exit__
+        self.gen.throw(type, value, traceback)
+      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 742, in __call__
+        remote.remove(build_result.filename)
+      File "/workspace/python/tvm/rpc/client.py", line 143, in remove
+        self._remote_funcs["remove"] = self.get_function("tvm.rpc.server.remove")
+      File "/workspace/python/tvm/rpc/client.py", line 71, in get_function
+        return self._sess.get_function(name)
+      File "/workspace/python/tvm/runtime/module.py", line 171, in get_function
+        self.handle, c_str(name), ctypes.c_int(query_imports), ctypes.byref(ret_handle)
+      File "/workspace/python/tvm/_ffi/base.py", line 348, in check_call
+        raise get_last_ffi_error()
+    tvm._ffi.base.TVMError: Traceback (most recent call last):
+      52: 0xffffffffffffffff
+      51: _start
+      50: __libc_start_main
+      49: _Py_UnixMain
+      48: 0x0000000000650da0
+      47: 0x0000000000650afa
+      46: _PyFunction_FastCallDict
+      45: _PyEval_EvalCodeWithName
+      44: _PyEval_EvalFrameDefault
+      43: _PyFunction_FastCallKeywords
+      42: _PyEval_EvalCodeWithName
+      41: _PyEval_EvalFrameDefault
+      40: _PyMethodDef_RawFastCallKeywords
+      39: 0x0000000000546369
+      38: _PyEval_EvalCodeWithName
+      37: _PyEval_EvalFrameDefault
+      36: _PyFunction_FastCallKeywords
+      35: _PyEval_EvalCodeWithName
+      34: _PyEval_EvalFrameDefault
+      33: _PyFunction_FastCallDict
+      32: _PyEval_EvalCodeWithName
+      31: _PyEval_EvalFrameDefault
+      30: _PyObject_FastCallDict
+      29: 0x00000000004c06e1
+      28: _PyFunction_FastCallDict
+      27: _PyEval_EvalFrameDefault
+      26: _PyMethodDescr_FastCallKeywords
+      25: 0x00000000005dcb58
+      24: 0x00000000005dc83f
+      23: 0x00000000004ba127
+      22: _PyEval_EvalFrameDefault
+      21: _PyFunction_FastCallKeywords
+      20: _PyEval_EvalFrameDefault
+      19: _PyFunction_FastCallKeywords
+      18: _PyEval_EvalFrameDefault
+      17: _PyFunction_FastCallKeywords
+      16: _PyEval_EvalCodeWithName
+      15: _PyEval_EvalFrameDefault
+      14: 0x0000000000537c30
+      13: _PyObject_FastCallKeywords
+      12: 0x00007f0d086adfa2
+      11: _ctypes_callproc
+      10: ffi_call
+      9: ffi_call_unix64
+      8: TVMModGetFunction
+            at ../src/runtime/c_runtime_api.cc:408
+      7: tvm::runtime::ModuleNode::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool)
+            at ../src/runtime/module.cc:66
+      6: tvm::runtime::RPCModuleNode::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)
+            at ../src/runtime/rpc/rpc_module.cc:181
+      5: tvm::runtime::RPCClientSession::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
+            at ../src/runtime/rpc/rpc_endpoint.cc:1004
+      4: tvm::runtime::TVMRetValue tvm::runtime::RPCEndpoint::SysCallRemote<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>(tvm::runtime::RPCCode, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
+            at ../src/runtime/rpc/rpc_endpoint.h:211
+      3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>(int&&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
             at ../include/tvm/runtime/packed_func.h:1618
       2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
             at ../include/tvm/runtime/packed_func.h:1217
       1: Call
             at ../include/tvm/runtime/packed_func.h:1213
       0: operator()
-            at ../src/runtime/c_runtime_api.cc:534
-      File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
-      File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 871, in verify_pass
-        raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 1, 128]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 4, 16]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,5743310
-    No: 12  GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+            at ../src/runtime/rpc/rpc_endpoint.cc:681
+      File "../src/runtime/rpc/rpc_endpoint.cc", line 681
+    TVMError: 
+    ---------------------------------------------------------------
+    An error occurred during the execution of TVM.
+    For more information, please see: https://tvm.apache.org/docs/errors.html
+    ---------------------------------------------------------------
+      Check failed: (code == RPCCode::kReturn) is false: code=1
+
+    Traceback (most recent call last):
+      52: 0xffffffffffffffff
+      51: _start
+      50: __libc_start_main
+      49: _Py_UnixMain
+      48: 0x0000000000650da0
+      47: 0x0000000000650afa
+      46: _PyFunction_FastCallDict
+      45: _PyEval_EvalCodeWithName
+      44: _PyEval_EvalFrameDefault
+      43: _PyFunction_FastCallKeywords
+      42: _PyEval_EvalCodeWithName
+      41: _PyEval_EvalFrameDefault
+      40: _PyMethodDef_RawFastCallKeywords
+      39: 0x0000000000546369
+      38: _PyEval_EvalCodeWithName
+      37: _PyEval_EvalFrameDefault
+      36: _PyFunction_FastCallKeywords
+      35: _PyEval_EvalCodeWithName
+      34: _PyEval_EvalFrameDefault
+      33: _PyFunction_FastCallDict
+      32: _PyEval_EvalCodeWithName
+      31: _PyEval_EvalFrameDefault
+      30: _PyObject_FastCallDict
+      29: 0x00000000004c06e1
+      28: _PyFunction_FastCallDict
+      27: _PyEval_EvalFrameDefault
+      26: _PyMethodDescr_FastCallKeywords
+      25: 0x00000000005dcb58
+      24: 0x00000000005dc83f
+      23: 0x00000000004ba127
+      22: _PyEval_EvalFrameDefault
+      21: _PyFunction_FastCallKeywords
+      20: _PyEval_EvalFrameDefault
+      19: _PyFunction_FastCall      [('tile_f', [-1, 1, 1, 64]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 16]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1091180
+    No: 12  GFLOPS: 10.22/21.12     result: MeasureResult(costs=(0.02264849083333333,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.971567392349243, timestamp=1664540744.7623453) [('tile_f', [-1, 2, 4, 4]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 2, 4]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9364416
+    No: 13  GFLOPS: 0.00/21.12      result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 588, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 540, in _build_func_common
@@ -1264,10 +1295,10 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 871, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 2, 16, 4]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 2, 32]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,921707
-    No: 13  GFLOPS: 5.01/209.95     result: MeasureResult(costs=(0.046250613999999995,), error_no=MeasureErrorNo.NO_ERROR, all_cost=5.1030497550964355, timestamp=1664521865.0797503)       [('tile_f', [-1, 2, 1, 8]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9102637
-    No: 14  GFLOPS: 71.47/209.95    result: MeasureResult(costs=(0.0032390244838709672,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.8225221633911133, timestamp=1664521865.7329972)      [('tile_f', [-1, 4, 8, 1]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 8]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7454729
-    No: 15  GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 8, 4, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 128, 2]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,254615
+    No: 14  GFLOPS: 1.22/21.12      result: MeasureResult(costs=(0.1892443235,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.4586076736450195, timestamp=1664540749.4042463)       [('tile_f', [-1, 8, 1, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 2, 32]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,5568038
+    No: 15  GFLOPS: 54.18/54.18     result: MeasureResult(costs=(0.004272896125000001,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.2073407173156738, timestamp=1664540750.0702045)       [('tile_f', [-1, 2, 1, 16]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 32, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,19085
+    No: 16  GFLOPS: 0.00/54.18      result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 588, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 540, in _build_func_common
@@ -1389,8 +1420,9 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 871, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 32, 4, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 256, 2]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,257644
-    No: 16  GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 1, 256]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 32, 16]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6914156
+    No: 17  GFLOPS: 7.45/54.18      result: MeasureResult(costs=(0.031059663749999997,), error_no=MeasureErrorNo.NO_ERROR, all_cost=10.050727844238281, timestamp=1664540760.2964559)       [('tile_f', [-1, 4, 1, 2]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 1, 64]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,8871117
+    No: 18  GFLOPS: 0.00/54.18      result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 588, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 540, in _build_func_common
@@ -1512,8 +1544,8 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 871, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 32, 1, 16]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 4, 128]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2503549
-    No: 17  GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 128, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 4, 128]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,955117
+    No: 19  GFLOPS: 0.00/54.18      result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 588, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 540, in _build_func_common
@@ -1635,9 +1667,8 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 871, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 128, 2]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 1, 4]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,647777
-    No: 18  GFLOPS: 265.53/265.53   result: MeasureResult(costs=(0.0008718405652173913,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.2890162467956543, timestamp=1664521867.6697655)      [('tile_f', [-1, 1, 1, 8]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 1, 8]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3775556
-    No: 19  GFLOPS: 0.00/265.53     result: Traceback (most recent call last):
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 2, 128, 1]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 64, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,8152590
+    No: 20  GFLOPS: 0.00/54.18      result: Traceback (most recent call last):
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 588, in __call__
         func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 540, in _build_func_common
@@ -1759,8 +1790,7 @@ for this template
       File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
       File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 871, in verify_pass
         raise InstantiationError("Skipped because of invalid gpu kernel")
-    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 1, 2, 128]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 2, 8]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4553553
-    No: 20  GFLOPS: 1.00/265.53     result: MeasureResult(costs=(0.23153633099999998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.421393156051636, timestamp=1664521871.0406532) [('tile_f', [-1, 4, 4, 32]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 1, 1]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1356936
+    tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [('tile_f', [-1, 4, 32, 1]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 4, 8]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,5136162
 
 
 
@@ -1815,9 +1845,9 @@ and measure running time.
     Finish loading 20 records
 
     Best config:
-    [('tile_f', [-1, 1, 1, 8]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 1, 8]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3775556
+    [('tile_f', [-1, 2, 1, 16]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 32, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,19085
     Finish loading 20 records
-    Time cost of this operator: 0.001233
+    Time cost of this operator: 0.004598
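
A minimal sketch of the replay step that produces the "Best config" output above, assuming the ``conv2d.log`` file, the ``task`` object, and the ``conv2d_no_batching`` template defined earlier in this tutorial:

.. code-block:: python

    import tvm
    from tvm import autotvm

    # Query and print the best record found in the tuning log.
    dispatch_context = autotvm.apply_history_best("conv2d.log")
    best_config = dispatch_context.query(task.target, task.workload)
    print("Best config:", best_config)

    # Compile with that record applied; names besides apply_history_best
    # come from earlier tutorial steps and are assumed here.
    with autotvm.apply_history_best("conv2d.log"):
        with tvm.target.Target("cuda"):
            s, arg_bufs = conv2d_no_batching(N, H, W, CO, CI, KH, KW, strides, padding)
            func = tvm.build(s, arg_bufs)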
 
 
 
diff --git a/docs/_sources/how_to/tune_with_autotvm/tune_relay_cuda.rst.txt b/docs/_sources/how_to/tune_with_autotvm/tune_relay_cuda.rst.txt
index 8ec1cf4fbe..522e367a2d 100644
--- a/docs/_sources/how_to/tune_with_autotvm/tune_relay_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autotvm/tune_relay_cuda.rst.txt
@@ -197,7 +197,7 @@ Before tuning, we apply some configurations.
 
  .. code-block:: none
 
-    /workspace/python/tvm/target/target.py:389: UserWarning: Try specifying cuda arch by adding 'arch=sm_xx' to your target.
+    /workspace/python/tvm/target/target.py:393: UserWarning: Try specifying cuda arch by adding 'arch=sm_xx' to your target.
       warnings.warn("Try specifying cuda arch by adding 'arch=sm_xx' to your target.")
 
 
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
index 546da8484a..41ae563d87 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
@@ -327,10 +327,10 @@ Timing the untuned program
     ########## Build without Autotuning ##########
     Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)  
     ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------  
-    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  310.9     98.719   (1, 2, 10, 10, 3)  2       1        [310.9]           
-    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       3.063     0.973    (1, 6, 10, 10)     1       1        [3.063]           
-    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.97      0.308    (1, 1, 10, 10, 3)  1       1        [0.97]            
-    Total_time                                    -                                             314.933   -        -                  -       -        -                 
+    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  311.0     98.724   (1, 2, 10, 10, 3)  2       1        [311.0]           
+    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       3.03      0.962    (1, 6, 10, 10)     1       1        [3.03]            
+    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.99      0.314    (1, 1, 10, 10, 3)  1       1        [0.99]            
+    Total_time                                    -                                             315.021   -        -                  -       -        -                 
 
 
 
@@ -394,10 +394,10 @@ Timing the tuned program
     ########## Build with Autotuning ##########
     Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)  
     ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------  
-    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  102.7     97.456   (1, 6, 10, 10, 1)  2       1        [102.7]           
-    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.829     1.736    (1, 6, 10, 10)     1       1        [1.829]           
-    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.852     0.809    (1, 3, 10, 10, 1)  1       1        [0.852]           
-    Total_time                                    -                                             105.381   -        -                  -       -        -                 
+    tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  102.2     97.408   (1, 6, 10, 10, 1)  2       1        [102.2]           
+    tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.767     1.684    (1, 6, 10, 10)     1       1        [1.767]           
+    tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.952     0.908    (1, 1, 10, 10, 3)  1       1        [0.952]           
+    Total_time                                    -                                             104.919   -        -                  -       -        -                 
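
The per-node tables above come from a debug-mode graph executor that times each fused operator separately. A rough sketch of the mechanism, shown on a plain host target rather than the microTVM session this tutorial actually uses, with ``graph_json``, ``lib``, ``params``, and ``dev`` assumed from a prior build:

.. code-block:: python

    from tvm.contrib.debugger import debug_executor

    # run() on the debug executor times every node and prints a
    # Node Name / Time(us) / Time(%) table like the ones above.
    m = debug_executor.create(graph_json, lib, dev)
    m.set_input(**params)
    m.run()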
 
 
 
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
index b504052a47..3a700e654f 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
@@ -225,7 +225,7 @@ take about **2 minutes** to download the Stanford Cars, while COCO 2017 validati
  .. code-block:: none
 
 
-    '/tmp/tmp_yfqyeec/images/random'
+    '/tmp/tmp34kfzoxj/images/random'
 
 
 
@@ -316,7 +316,7 @@ objects to other stuff? We can display some examples from our datasets using ``m
 
 
 .. image-sg:: /how_to/work_with_microtvm/images/sphx_glr_micro_train_001.png
-   :alt: [1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [1.0, 0.0]
+   :alt: [0.0, 1.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0]
    :srcset: /how_to/work_with_microtvm/images/sphx_glr_micro_train_001.png
    :class: sphx-glr-single-img
 
@@ -325,8 +325,8 @@ objects to other stuff? We can display some examples from our datasets using ``m
 
  .. code-block:: none
 
-    /tmp/tmp_yfqyeec/images/target contains 8144 images
-    /tmp/tmp_yfqyeec/images/random contains 5000 images
+    /tmp/tmp34kfzoxj/images/target contains 8144 images
+    /tmp/tmp34kfzoxj/images/random contains 5000 images
 
 
 
@@ -501,13 +501,13 @@ the time on our validation set).
  .. code-block:: none
 
     Epoch 1/3
-    328/328 - 47s - loss: 0.2058 - accuracy: 0.9242 - val_loss: 0.1113 - val_accuracy: 0.9592 - 47s/epoch - 143ms/step
+    328/328 - 46s - loss: 0.2255 - accuracy: 0.9233 - val_loss: 0.1192 - val_accuracy: 0.9592 - 46s/epoch - 141ms/step
     Epoch 2/3
-    328/328 - 43s - loss: 0.0924 - accuracy: 0.9652 - val_loss: 0.1031 - val_accuracy: 0.9660 - 43s/epoch - 132ms/step
+    328/328 - 43s - loss: 0.1065 - accuracy: 0.9597 - val_loss: 0.0886 - val_accuracy: 0.9705 - 43s/epoch - 130ms/step
     Epoch 3/3
-    328/328 - 43s - loss: 0.0545 - accuracy: 0.9791 - val_loss: 0.1109 - val_accuracy: 0.9637 - 43s/epoch - 131ms/step
+    328/328 - 43s - loss: 0.0617 - accuracy: 0.9773 - val_loss: 0.0985 - val_accuracy: 0.9694 - 43s/epoch - 130ms/step
 
-    <keras.callbacks.History object at 0x7f023043f410>
+    <keras.callbacks.History object at 0x7f4b83feb150>
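
The epoch lines above are standard Keras progress output. A minimal sketch of the call that emits them, assuming the ``model``, ``train_dataset``, and ``validation_dataset`` objects built in earlier steps:

.. code-block:: python

    # verbose=2 prints one line per epoch, as in the log above; the
    # returned History object is what the final repr line shows.
    history = model.fit(
        train_dataset,
        validation_data=validation_dataset,
        epochs=3,
        verbose=2,
    )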
 
 
 
@@ -864,7 +864,7 @@ Arduino tutorial for how to do that `on GitHub <https://github.com/guberti/tvm-a
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 4 minutes  11.116 seconds)
+   **Total running time of the script:** ( 4 minutes  28.534 seconds)
 
 
 .. _sphx_glr_download_how_to_work_with_microtvm_micro_train.py:
diff --git a/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
index 4851d3cf6e..7892d6393e 100644
--- a/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
@@ -5,16 +5,16 @@
 
 Computation times
 =================
-**05:13.055** total execution time for **how_to_work_with_microtvm** files:
+**05:29.707** total execution time for **how_to_work_with_microtvm** files:
 
 +---------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_train.py` (``micro_train.py``)               | 04:11.116 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_train.py` (``micro_train.py``)               | 04:28.534 | 0.0 MB |
 +---------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_autotune.py` (``micro_autotune.py``)         | 00:49.489 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_autotune.py` (``micro_autotune.py``)         | 00:48.247 | 0.0 MB |
 +---------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_aot.py` (``micro_aot.py``)                   | 00:08.678 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_aot.py` (``micro_aot.py``)                   | 00:09.282 | 0.0 MB |
 +---------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_tflite.py` (``micro_tflite.py``)             | 00:03.770 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_tflite.py` (``micro_tflite.py``)             | 00:03.642 | 0.0 MB |
 +---------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_work_with_microtvm_micro_ethosu.py` (``micro_ethosu.py``)             | 00:00.001 | 0.0 MB |
 +---------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
index 37b8944a34..26ca84dbd0 100644
--- a/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
@@ -5,14 +5,14 @@
 
 Computation times
 =================
-**00:43.734** total execution time for **how_to_work_with_relay** files:
+**00:42.975** total execution time for **how_to_work_with_relay** files:
 
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_using_pipeline_executor.py` (``using_pipeline_executor.py``) | 00:31.847 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_using_pipeline_executor.py` (``using_pipeline_executor.py``) | 00:31.430 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_using_external_lib.py` (``using_external_lib.py``)           | 00:10.357 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_using_external_lib.py` (``using_external_lib.py``)           | 00:10.049 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_build_gcn.py` (``build_gcn.py``)                             | 00:01.523 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_build_gcn.py` (``build_gcn.py``)                             | 00:01.489 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_work_with_relay_using_relay_viz.py` (``using_relay_viz.py``)                 | 00:00.007 | 0.0 MB |
 +----------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt b/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
index aa3e01d10b..53c417aeeb 100644
--- a/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
@@ -261,7 +261,7 @@ The following example customizes CUDA lowering rule for :code:`exp`.
  .. code-block:: none
 
 
-    <function my_cuda_math_rule at 0x7f01d0e8ae60>
+    <function my_cuda_math_rule at 0x7f4b1f4ae8c0>
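
A minimal sketch of how a rule with this name is registered, assuming the ``register_intrin_lowering`` helper (imported here from ``tvm.ir``; treat the exact import path as an assumption for this TVM version):

.. code-block:: python

    import tvm
    from tvm.ir import register_intrin_lowering

    def my_cuda_math_rule(op):
        # Dispatch float32 tir.exp to the CUDA expf intrinsic and fall
        # back to the default lowering for every other dtype.
        if op.dtype == "float32":
            return tvm.tir.call_pure_extern("float32", "expf", op.args[0])
        return op

    # level=99 lets this rule override the built-in CUDA lowering.
    register_intrin_lowering("tir.exp", target="cuda", f=my_cuda_math_rule, level=99)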
 
 
 
diff --git a/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
index a483e52ecb..8ebae30f47 100644
--- a/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
@@ -5,20 +5,20 @@
 
 Computation times
 =================
-**00:06.758** total execution time for **how_to_work_with_schedules** files:
+**00:07.591** total execution time for **how_to_work_with_schedules** files:
 
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_intrin_math.py` (``intrin_math.py``)                 | 00:04.456 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_intrin_math.py` (``intrin_math.py``)                 | 00:05.309 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tensorize.py` (``tensorize.py``)                     | 00:01.021 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tensorize.py` (``tensorize.py``)                     | 00:00.986 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_reduction.py` (``reduction.py``)                     | 00:00.550 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_reduction.py` (``reduction.py``)                     | 00:00.567 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_scan.py` (``scan.py``)                               | 00:00.532 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_scan.py` (``scan.py``)                               | 00:00.537 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_extern_op.py` (``extern_op.py``)                     | 00:00.117 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_extern_op.py` (``extern_op.py``)                     | 00:00.112 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_schedule_primitives.py` (``schedule_primitives.py``) | 00:00.040 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_schedule_primitives.py` (``schedule_primitives.py``) | 00:00.039 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_how_to_work_with_schedules_tedd.py` (``tedd.py``)                               | 00:00.027 | 0.0 MB |
 +------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/work_with_schedules/tensorize.rst.txt b/docs/_sources/how_to/work_with_schedules/tensorize.rst.txt
index bb3cd0019f..edef34fb2d 100644
--- a/docs/_sources/how_to/work_with_schedules/tensorize.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/tensorize.rst.txt
@@ -347,7 +347,7 @@ The import needs to happen before the tensorized GEMV is executed.
                  C: Buffer(C_2: Pointer(float32), float32, [524288], [])}
       buffer_map = {A_1: A, B_1: B, C_1: C}
       preflattened_buffer_map = {A_1: A_3: Buffer(A_2, float32, [1024, 64], []), B_1: B_3: Buffer(B_2, float32, [512, 64], []), C_1: C_3: Buffer(C_2, float32, [1024, 512], [])} {
-      attr [IterVar(i: int32, (nullptr), "DataPar", "")] "pragma_import_llvm" = "; ModuleID = '/tmp/tmpesn7feqn/input0.cc'\nsource_filename = \"/tmp/tmpesn7feqn/input0.cc\"\ntarget datalayout = \"e-m:e-i64:64-f80:128-n8:16:32:64-S128\"\ntarget triple = \"x86_64-pc-linux-gnu\"\n\n; Function Attrs: noinline nounwind optnone uwtable\ndefine dso_local i32 @gemv_update(float*, float*, float*, i32, i32, i32) #0 {\n  %7 = alloca float*, align 8\n  %8 = alloca float*, align 8\n  %9 = alloca floa [...]
+      attr [IterVar(i: int32, (nullptr), "DataPar", "")] "pragma_import_llvm" = "; ModuleID = '/tmp/tmpmwsga47u/input0.cc'\nsource_filename = \"/tmp/tmpmwsga47u/input0.cc\"\ntarget datalayout = \"e-m:e-i64:64-f80:128-n8:16:32:64-S128\"\ntarget triple = \"x86_64-pc-linux-gnu\"\n\n; Function Attrs: noinline nounwind optnone uwtable\ndefine dso_local i32 @gemv_update(float*, float*, float*, i32, i32, i32) #0 {\n  %7 = alloca float*, align 8\n  %8 = alloca float*, align 8\n  %9 = alloca floa [...]
       for (i, 0, 1024) {
         for (j.outer: int32, 0, 32) {
           @tir.call_extern("gemv_update", @tir.tvm_access_ptr(@tir.type_annotation(, dtype=float32), C_2, ((i*512) + (j.outer*16)), 16, 2, dtype=handle), @tir.tvm_access_ptr(@tir.type_annotation(, dtype=float32), A_2, (i*64), 64, 1, dtype=handle), @tir.tvm_access_ptr(@tir.type_annotation(, dtype=float32), B_2, (j.outer*1024), 1024, 1, dtype=handle), 16, 64, 64, dtype=int32)
diff --git a/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
index 683e010819..c6f8d5f938 100644
--- a/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:25.806** total execution time for **topic_vta_tutorials_autotvm** files:
+**00:24.406** total execution time for **topic_vta_tutorials_autotvm** files:
 
 +---------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``) | 00:25.799 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``) | 00:24.400 | 0.0 MB |
 +---------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_alu_vta.py` (``tune_alu_vta.py``)     | 00:00.007 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_alu_vta.py` (``tune_alu_vta.py``)     | 00:00.006 | 0.0 MB |
 +---------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/autotvm/tune_relay_vta.rst.txt b/docs/_sources/topic/vta/tutorials/autotvm/tune_relay_vta.rst.txt
index a82d08b5d6..ef00d71070 100644
--- a/docs/_sources/topic/vta/tutorials/autotvm/tune_relay_vta.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/autotvm/tune_relay_vta.rst.txt
@@ -536,7 +536,7 @@ Finally, we launch tuning jobs and evaluate the end-to-end performance.
  .. code-block:: none
 
     Extract tasks...
-    /workspace/python/tvm/target/target.py:273: UserWarning: target_host parameter is going to be deprecated. Please pass in tvm.target.Target(target, host=target_host) instead.
+    /workspace/python/tvm/target/target.py:277: UserWarning: target_host parameter is going to be deprecated. Please pass in tvm.target.Target(target, host=target_host) instead.
       "target_host parameter is going to be deprecated. "
     Extracted 10 conv2d tasks:
     (1, 56, 56, 64, 64, 3, 3, 1, 1, 1, 1)
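
The deprecation warning above also names its replacement; a one-line sketch:

.. code-block:: python

    # Fold the host into the target object instead of passing
    # target_host separately, as the warning suggests.
    target = tvm.target.Target(target, host=target_host)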
diff --git a/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
index 3533254f5c..82b1c4d548 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
@@ -289,7 +289,7 @@ The compilation steps are:
       DeprecationWarning,
     /workspace/vta/tutorials/frontend/deploy_classification.py:213: DeprecationWarning: legacy graph executor behavior of producing json / lib / params will be removed in the next release. Please see documents of tvm.contrib.graph_executor.GraphModule for the  new recommended usage.
       relay_prog, target=tvm.target.Target(target, host=env.target_host), params=params
-    resnet18_v1 inference graph built in 27.34s!
+    resnet18_v1 inference graph built in 25.91s!
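
The build call being timed here is visible in the warning context above; a minimal sketch, with ``relay_prog``, ``target``, ``env``, and ``params`` assumed from earlier tutorial steps:

.. code-block:: python

    import tvm
    from tvm import relay

    # Compile the quantized ResNet-18 for the VTA target; this is the
    # step the "inference graph built in ...s!" line measures.
    graph, lib, params = relay.build(
        relay_prog, target=tvm.target.Target(target, host=env.target_host), params=params
    )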
 
 
 
diff --git a/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
index 9477bcbeac..3f60ee19a8 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
@@ -333,7 +333,7 @@ The compilation steps are:
 
     /workspace/python/tvm/relay/build_module.py:348: DeprecationWarning: Please use input parameter mod (tvm.IRModule) instead of deprecated parameter mod (tvm.relay.function.Function)
       DeprecationWarning,
-    yolov3-tiny inference graph built in 19.17s!
+    yolov3-tiny inference graph built in 18.32s!
 
 
 
diff --git a/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
index 93263a8a02..2a1b714df5 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**01:38.886** total execution time for **topic_vta_tutorials_frontend** files:
+**01:36.969** total execution time for **topic_vta_tutorials_frontend** files:
 
 +------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_detection.py` (``deploy_detection.py``)           | 00:51.332 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_detection.py` (``deploy_detection.py``)           | 00:50.844 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``) | 00:47.554 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``) | 00:46.125 | 0.0 MB |
 +------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
index 6fa5e6c76f..70e8303b51 100644
--- a/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:03.042** total execution time for **topic_vta_tutorials_optimize** files:
+**00:03.024** total execution time for **topic_vta_tutorials_optimize** files:
 
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)         | 00:02.637 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)         | 00:02.614 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``) | 00:00.405 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``) | 00:00.410 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
index 158c1481e4..e67e8194d7 100644
--- a/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
 
 Computation times
 =================
-**00:00.741** total execution time for **topic_vta_tutorials** files:
+**00:00.755** total execution time for **topic_vta_tutorials** files:
 
 +---------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``) | 00:00.396 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``) | 00:00.397 | 0.0 MB |
 +---------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_vta_get_started.py` (``vta_get_started.py``) | 00:00.346 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_vta_get_started.py` (``vta_get_started.py``) | 00:00.358 | 0.0 MB |
 +---------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt b/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
index 8c65748c37..55ba9024fb 100644
--- a/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
+++ b/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
@@ -207,8 +207,8 @@ trials, we can load the best schedule from the log file and apply it.
 
  .. code-block:: none
 
-    *E
 
+    .T
 
 
 
@@ -333,7 +333,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 94.372 ms
+    Execution time of this operator: 92.678 ms
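
A minimal sketch of the load-and-measure step that prints this number, assuming ``task``, ``log_file``, ``target``, and the input arrays ``a_tvm``/``b_tvm``/``c_tvm`` from earlier in the tutorial:

.. code-block:: python

    import numpy as np
    import tvm

    # Replay the best schedule found during the search, compile it,
    # and time the resulting function.
    sch, args = task.apply_best(log_file)
    func = tvm.build(sch, args, target)

    dev = tvm.cpu()
    evaluator = func.time_evaluator(func.entry_name, dev, min_repeat_ms=500)
    print(
        "Execution time of this operator: %.3f ms"
        % (np.median(evaluator(a_tvm, b_tvm, c_tvm).results) * 1000)
    )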
 
 
 
@@ -451,7 +451,7 @@ operations.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  3.881 seconds)
+   **Total running time of the script:** ( 1 minutes  4.962 seconds)
 
 
 .. _sphx_glr_download_tutorial_auto_scheduler_matmul_x86.py:
diff --git a/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt b/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
index fb47df79cf..5796702837 100644
--- a/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
+++ b/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
@@ -462,16 +462,16 @@ reduce variance, we take 5 measurements and average them.
     waiting for device...
     device available
     Get devices for measurement successfully!
-    No: 1   GFLOPS: 2.10/2.10       result: MeasureResult(costs=(0.127643009,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.1997056007385254, timestamp=1664520546.7910216)        [('tile_y', [-1, 128]), ('tile_x', [-1, 4])],None,27
-    No: 2   GFLOPS: 12.39/12.39     result: MeasureResult(costs=(0.0216655158,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.5449838638305664, timestamp=1664520547.3485987)       [('tile_y', [-1, 128]), ('tile_x', [-1, 256])],None,87
-    No: 3   GFLOPS: 12.43/12.43     result: MeasureResult(costs=(0.021592132,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.489682674407959, timestamp=1664520548.6116724) [('tile_y', [-1, 2]), ('tile_x', [-1, 512])],None,91
-    No: 4   GFLOPS: 3.21/12.43      result: MeasureResult(costs=(0.08370996,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.466071367263794, timestamp=1664520550.1154666)  [('tile_y', [-1, 512]), ('tile_x', [-1, 16])],None,49
-    No: 5   GFLOPS: 2.82/12.43      result: MeasureResult(costs=(0.0951237492,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.6726715564727783, timestamp=1664520551.9072914)       [('tile_y', [-1, 1]), ('tile_x', [-1, 16])],None,40
-    No: 6   GFLOPS: 2.40/12.43      result: MeasureResult(costs=(0.1120731246,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.924961805343628, timestamp=1664520554.5994174)        [('tile_y', [-1, 512]), ('tile_x', [-1, 8])],None,39
-    No: 7   GFLOPS: 3.47/12.43      result: MeasureResult(costs=(0.07731000699999999,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.3852460384368896, timestamp=1664520556.7447188)        [('tile_y', [-1, 8]), ('tile_x', [-1, 8])],None,33
-    No: 8   GFLOPS: 9.81/12.43      result: MeasureResult(costs=(0.0273725446,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6162815093994141, timestamp=1664520557.387177)        [('tile_y', [-1, 2]), ('tile_x', [-1, 32])],None,51
-    No: 9   GFLOPS: 1.31/12.43      result: MeasureResult(costs=(0.20548261980000002,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.3982110023498535, timestamp=1664520560.9079657)        [('tile_y', [-1, 2]), ('tile_x', [-1, 1])],None,1
-    No: 10  GFLOPS: 1.18/12.43      result: MeasureResult(costs=(0.2274523124,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.754526138305664, timestamp=1664520564.7125807)        [('tile_y', [-1, 16]), ('tile_x', [-1, 1])],None,4
+    No: 1   GFLOPS: 1.77/1.77       result: MeasureResult(costs=(0.15186273819999999,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.569329261779785, timestamp=1664539450.3607802) [('tile_y', [-1, 4]), ('tile_x', [-1, 1])],None,2
+    No: 2   GFLOPS: 10.05/10.05     result: MeasureResult(costs=(0.026703480800000002,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.654710054397583, timestamp=1664539450.9974444)        [('tile_y', [-1, 512]), ('tile_x', [-1, 128])],None,79
+    No: 3   GFLOPS: 3.14/10.05      result: MeasureResult(costs=(0.085575174,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.5224878787994385, timestamp=1664539453.254927) [('tile_y', [-1, 256]), ('tile_x', [-1, 8])],None,38
+    No: 4   GFLOPS: 0.88/10.05      result: MeasureResult(costs=(0.30670460320000004,), error_no=MeasureErrorNo.NO_ERROR, all_cost=5.034608602523804, timestamp=1664539459.0377066) [('tile_y', [-1, 512]), ('tile_x', [-1, 2])],None,19
+    No: 5   GFLOPS: 3.28/10.05      result: MeasureResult(costs=(0.0818603304,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.466125249862671, timestamp=1664539460.6972318)        [('tile_y', [-1, 32]), ('tile_x', [-1, 8])],None,35
+    No: 6   GFLOPS: 11.24/11.24     result: MeasureResult(costs=(0.0238920254,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.609149694442749, timestamp=1664539461.9563024)        [('tile_y', [-1, 1]), ('tile_x', [-1, 64])],None,60
+    No: 7   GFLOPS: 2.79/11.24      result: MeasureResult(costs=(0.0962204566,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.6881623268127441, timestamp=1664539463.6596882)       [('tile_y', [-1, 4]), ('tile_x', [-1, 4])],None,22
+    No: 8   GFLOPS: 10.19/11.24     result: MeasureResult(costs=(0.026351381400000002,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6101758480072021, timestamp=1664539464.2869136)       [('tile_y', [-1, 2]), ('tile_x', [-1, 64])],None,61
+    No: 9   GFLOPS: 1.79/11.24      result: MeasureResult(costs=(0.15036336060000002,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.5112602710723877, timestamp=1664539466.9145868)        [('tile_y', [-1, 2]), ('tile_x', [-1, 2])],None,11
+    No: 10  GFLOPS: 2.22/11.24      result: MeasureResult(costs=(0.1208848052,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.0459158420562744, timestamp=1664539469.014316)        [('tile_y', [-1, 512]), ('tile_x', [-1, 8])],None,39
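
A minimal sketch of the measurement setup behind this log; "5 measurements" maps to ``number=5`` on the runner, and ``task`` is the matmul task defined earlier:

.. code-block:: python

    from tvm import autotvm

    # Build locally and run each candidate 5 times, averaging the runs;
    # every "No: N" line above is one measured candidate.
    measure_option = autotvm.measure_option(
        builder="local",
        runner=autotvm.LocalRunner(number=5),
    )
    tuner = autotvm.tuner.RandomTuner(task)
    tuner.tune(
        n_trial=10,
        measure_option=measure_option,
        callbacks=[autotvm.callback.log_to_file("matmul.log")],
    )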
 
 
 
diff --git a/docs/_sources/tutorial/autotvm_relay_x86.rst.txt b/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
index bf58abb813..ba20f174ba 100644
--- a/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
+++ b/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
@@ -320,7 +320,7 @@ standard deviation.
 
  .. code-block:: none
 
-    {'mean': 510.8923493299993, 'median': 510.9241215499992, 'std': 1.064284793019263}
+    {'mean': 509.19756300999325, 'median': 509.0108063000116, 'std': 1.0381112545995472}
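
A rough sketch of how a dictionary like this is computed, assuming the compiled ``module`` from the preceding build step:

.. code-block:: python

    import timeit

    import numpy as np

    # Time repeated batches of module.run() and report per-run
    # statistics in milliseconds, matching the dict printed above.
    timing_number = 10
    timing_repeat = 10
    timings = (
        np.array(
            timeit.Timer(lambda: module.run()).repeat(
                repeat=timing_repeat, number=timing_number
            )
        )
        * 1000
        / timing_number
    )
    print({"mean": np.mean(timings), "median": np.median(timings), "std": np.std(timings)})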
 
 
 
@@ -554,30 +554,30 @@ the tuning data to.
 
  .. code-block:: none
 
-
    [Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  1/25]  Current/Best:    9.64/  23.47 GFLOPS | Progress: (4/20) | 6.79 s
    [Task  1/25]  Current/Best:   22.48/  23.47 GFLOPS | Progress: (8/20) | 12.54 s
    [Task  1/25]  Current/Best:   14.84/  23.47 GFLOPS | Progress: (12/20) | 15.64 s
    [Task  1/25]  Current/Best:   15.98/  23.47 GFLOPS | Progress: (16/20) | 17.63 s
    [Task  1/25]  Current/Best:   14.93/  23.49 GFLOPS | Progress: (20/20) | 19.55 s Done.
-
    [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  2/25]  Current/Best:    3.31/  16.84 GFLOPS | Progress: (4/20) | 2.85 s
    [Task  2/25]  Current/Best:    6.63/  16.84 GFLOPS | Progress: (8/20) | 4.05 s
    [Task  2/25]  Current/Best:   10.61/  16.84 GFLOPS | Progress: (12/20) | 5.26 s
    [Task  2/25]  Current/Best:   17.18/  19.74 GFLOPS | Progress: (16/20) | 6.33 s
    [Task  2/25]  Current/Best:   21.97/  21.97 GFLOPS | Progress: (20/20) | 8.13 s Done.
-
    [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  3/25]  Current/Best:    6.64/  23.41 GFLOPS | Progress: (4/20) | 3.88 s
    [Task  3/25]  Current/Best:   23.68/  23.68 GFLOPS | Progress: (8/20) | 6.96 s
    [Task  3/25]  Current/Best:   14.12/  23.68 GFLOPS | Progress: (12/20) | 9.51 s
    [Task  3/25]  Current/Best:    3.16/  23.68 GFLOPS | Progress: (16/20) | 11.93 s
    [Task  3/25]  Current/Best:   20.80/  24.12 GFLOPS | Progress: (20/20) | 13.36 s Done.
-
    [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  4/25]  Current/Best:    8.40/  15.60 GFLOPS | Progress: (4/20) | 4.57 s
    [Task  4/25]  Current/Best:   12.28/  15.60 GFLOPS | Progress: (8/20) | 10.41 s
    [Task  4/25]  Current/Best:   12.26/  16.65 GFLOPS | Progress: (12/20) | 13.00 s
    [Task  4/25]  Current/Best:   16.92/  19.76 GFLOPS | Progress: (16/20) | 16.11 s
    [Task  4/25]  Current/Best:   17.94/  19.76 GFLOPS | Progress: (20/20) | 17.99 s Done.
-
    [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  5/25]  Current/Best:    4.42/  16.27 GFLOPS | Progress: (4/20) | 3.28 s
    [Task  5/25]  Current/Best:    6.85/  19.31 GFLOPS | Progress: (8/20) | 4.80 s
    [Task  5/25]  Current/Best:   13.70/  19.31 GFLOPS | Progress: (12/20) | 6.90 s
    [Task  5/25]  Current/Best:   13.09/  19.31 GFLOPS | Progress: (16/20) | 8.43 s
    [Task  5/25]  Current/Best:    1.33/  19.31 GFLOPS | Progress: (20/20) | 10.73 s Done.
-
    [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  6/25]  Current/Best:    8.40/  19.85 GFLOPS | Progress: (4/20) | 3.58 s
    [Task  6/25]  Current/Best:   15.90/  19.98 GFLOPS | Progress: (8/20) | 6.02 s
    [Task  6/25]  Current/Best:   20.14/  20.14 GFLOPS | Progress: (12/20) | 7.75 s
    [Task  6/25]  Current/Best:   13.88/  20.14 GFLOPS | Progress: (16/20) | 10.06 s
    [Task  6/25]  Current/Best:    6.46/  20.14 GFLOPS | Progress: (20/20) | 16.91 s Done.
-
    [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  7/25]  Current/Best:    7.02/  21.66 GFLOPS | Progress: (4/20) | 3.35 s
    [Task  7/25]  Current/Best:   15.14/  21.66 GFLOPS | Progress: (8/20) | 7.07 s
    [Task  7/25]  Current/Best:    6.22/  21.66 GFLOPS | Progress: (12/20) | 10.06 s
    [Task  7/25]  Current/Best:   20.82/  21.66 GFLOPS | Progress: (16/20) | 11.71 s
    [Task  7/25]  Current/Best:   16.39/  21.66 GFLOPS | Progress: (20/20) | 13.52 s Done.
-
    [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  8/25]  Current/Best:    8.90/  18.79 GFLOPS | Progress: (4/20) | 8.48 s
    [Task  8/25]  Current/Best:   14.92/  18.79 GFLOPS | Progress: (8/20) | 11.12 s
    [Task  8/25]  Current/Best:   14.01/  18.79 GFLOPS | Progress: (12/20) | 13.03 s
    [Task  8/25]  Current/Best:   12.84/  18.79 GFLOPS | Progress: (16/20) | 15.92 s
    [Task  8/25]  Current/Best:   14.04/  18.79 GFLOPS | Progress: (20/20) | 18.95 s Done.
-
    [Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  9/25]  Current/Best:    6.62/  19.06 GFLOPS | Progress: (4/20) | 3.24 s
    [Task  9/25]  Current/Best:   10.73/  19.06 GFLOPS | Progress: (8/20) | 9.11 s
    [Task  9/25]  Current/Best:    4.54/  19.41 GFLOPS | Progress: (12/20) | 17.71 s
    [Task  9/25]  Current/Best:   13.08/  19.41 GFLOPS | Progress: (16/20) | 20.28 s
    [Task  9/25]  Current/Best:    6.23/  19.41 GFLOPS | Progress: (20/20) | 22.18 s Done.
-
    [Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 10/25]  Current/Best:   18.04/  18.04 GFLOPS | Progress: (4/20) | 3.26 s
    [Task 10/25]  Current/Best:    5.14/  18.04 GFLOPS | Progress: (8/20) | 5.03 s
    [Task 10/25]  Current/Best:    9.50/  18.04 GFLOPS | Progress: (12/20) | 7.47 s
    [Task 10/25]  Current/Best:   18.15/  18.15 GFLOPS | Progress: (16/20) | 9.16 s
    [Task 10/25]  Current/Best:    5.07/  18.15 GFLOPS | Progress: (20/20) | 11.00 s Done.
-
    [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 11/25]  Current/Best:    6.85/  19.23 GFLOPS | Progress: (4/20) | 3.40 s
    [Task 11/25]  Current/Best:    8.96/  19.24 GFLOPS | Progress: (8/20) | 5.68 s
    [Task 11/25]  Current/Best:   10.88/  19.24 GFLOPS | Progress: (12/20) | 9.36 s
    [Task 11/25]  Current/Best:   11.49/  19.24 GFLOPS | Progress: (16/20) | 11.64 s
    [Task 11/25]  Current/Best:    9.06/  19.24 GFLOPS | Progress: (20/20) | 14.52 s Done.
-
    [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 12/25]  Current/Best:   19.05/  19.05 GFLOPS | Progress: (4/20) | 3.53 s
    [Task 12/25]  Current/Best:   18.07/  19.05 GFLOPS | Progress: (8/20) | 9.00 s
    [Task 12/25]  Current/Best:   14.44/  19.05 GFLOPS | Progress: (12/20) | 11.26 s
    [Task 12/25]  Current/Best:   14.28/  19.05 GFLOPS | Progress: (16/20) | 13.29 s
    [Task 12/25]  Current/Best:   14.23/  19.05 GFLOPS | Progress: (20/20) | 15.76 s Done.
-
    [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 13/25]  Current/Best:   11.94/  21.60 GFLOPS | Progress: (4/20) | 4.26 s
    [Task 13/25]  Current/Best:    6.21/  21.60 GFLOPS | Progress: (8/20) | 7.37 s
    [Task 13/25]  Current/Best:    8.52/  21.91 GFLOPS | Progress: (12/20) | 9.52 s
    [Task 13/25]  Current/Best:   18.91/  21.91 GFLOPS | Progress: (16/20) | 11.89 s
    [Task 13/25]  Current/Best:   20.56/  22.33 GFLOPS | Progress: (20/20) | 13.72 s Done.
-
    [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 14/25]  Current/Best:   11.63/  16.73 GFLOPS | Progress: (4/20) | 3.96 s
    [Task 14/25]  Current/Best:   18.15/  18.15 GFLOPS | Progress: (8/20) | 5.07 s
    [Task 14/25]  Current/Best:    3.84/  18.15 GFLOPS | Progress: (12/20) | 12.85 s
    [Task 14/25]  Current/Best:    5.07/  18.15 GFLOPS | Progress: (16/20) | 18.83 s
    [Task 14/25]  Current/Best:   14.52/  18.15 GFLOPS | Progress: (20/20) | 22.77 s
    [Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 15/25]  Current/Best:   10.41/  12.35 GFLOPS | Progress: (4/20) | 3.51 s
    [Task 15/25]  Current/Best:   14.65/  18.72 GFLOPS | Progress: (8/20) | 4.57 s Done.
-
    [Task 15/25]  Current/Best:   12.12/  19.06 GFLOPS | Progress: (12/20) | 6.58 s
    [Task 15/25]  Current/Best:    5.14/  19.06 GFLOPS | Progress: (16/20) | 13.64 s
    [Task 15/25]  Current/Best:   10.15/  19.06 GFLOPS | Progress: (20/20) | 15.85 s
    [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 16/25]  Current/Best:   15.74/  15.74 GFLOPS | Progress: (4/20) | 3.49 s
    [Task 16/25]  Current/Best:   16.98/  18.60 GFLOPS | Progress: (8/20) | 5.25 s
    [Task 16/25]  Current/Best:   12.18/  18.79 GFLOPS | Progress: (12/20) | 6.44 s
    [Task 16/25]  Current/Best:    6.25/  18.79 GFLOPS | Progress: (16/20) | 7.91 s
    [Task 16/25]  Current/Best:   15.26/  18.79 GFLOPS | Progress: (20/20) | 9.30 s Done.
-
    [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 17/25]  Current/Best:   11.78/  18.82 GFLOPS | Progress: (4/20) | 5.76 s
    [Task 17/25]  Current/Best:   15.23/  21.54 GFLOPS | Progress: (8/20) | 7.87 s
    [Task 17/25]  Current/Best:   11.51/  21.54 GFLOPS | Progress: (12/20) | 10.62 s
    [Task 17/25]  Current/Best:   10.27/  21.54 GFLOPS | Progress: (16/20) | 12.94 s
    [Task 17/25]  Current/Best:   11.28/  23.24 GFLOPS | Progress: (20/20) | 15.90 s Done.
-
    [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 18/25]  Current/Best:   12.72/  18.29 GFLOPS | Progress: (4/20) | 3.32 s
    [Task 18/25]  Current/Best:   15.80/  18.29 GFLOPS | Progress: (8/20) | 4.93 s
    [Task 18/25]  Current/Best:   10.31/  18.29 GFLOPS | Progress: (12/20) | 10.75 s
    [Task 18/25]  Current/Best:   11.54/  18.29 GFLOPS | Progress: (16/20) | 14.83 s
    [Task 18/25]  Current/Best:   15.18/  18.78 GFLOPS | Progress: (20/20) | 18.07 s Done.
-
    [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 19/25]  Current/Best:    9.59/  14.22 GFLOPS | Progress: (4/20) | 4.15 s
    [Task 19/25]  Current/Best:    2.56/  21.45 GFLOPS | Progress: (8/20) | 8.29 s
    [Task 19/25]  Current/Best:    8.06/  21.45 GFLOPS | Progress: (12/20) | 12.01 s
    [Task 19/25]  Current/Best:   20.14/  21.45 GFLOPS | Progress: (16/20) | 14.18 s
    [Task 19/25]  Current/Best:    5.24/  21.45 GFLOPS | Progress: (20/20) | 19.13 s Done.
-
    [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 20/25]  Current/Best:   16.75/  16.75 GFLOPS | Progress: (4/20) | 3.49 s
    [Task 20/25]  Current/Best:   15.15/  16.75 GFLOPS | Progress: (8/20) | 6.62 s
    [Task 20/25]  Current/Best:    4.05/  16.89 GFLOPS | Progress: (12/20) | 10.11 s
    [Task 20/25]  Current/Best:    7.85/  16.89 GFLOPS | Progress: (16/20) | 10.91 s
    [Task 20/25]  Current/Best:    8.32/  16.89 GFLOPS | Progress: (20/20) | 19.57 s
    [Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 21/25]  Current/Best:    2.78/  10.69 GFLOPS | Progress: (4/20) | 6.75 s Done.
+
    [Task  1/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  1/25]  Current/Best:   16.85/  16.85 GFLOPS | Progress: (4/20) | 8.09 s
    [Task  1/25]  Current/Best:    6.98/  16.85 GFLOPS | Progress: (8/20) | 11.65 s
    [Task  1/25]  Current/Best:    9.51/  16.85 GFLOPS | Progress: (12/20) | 14.47 s
    [Task  1/25]  Current/Best:    5.27/  16.85 GFLOPS | Progress: (16/20) | 17.57 s
    [Task  1/25]  Current/Best:   23.02/  23.02 GFLOPS | Progress: (20/20) | 19.40 s Done.
+
    [Task  2/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  2/25]  Current/Best:   14.00/  21.66 GFLOPS | Progress: (4/20) | 2.55 s
    [Task  2/25]  Current/Best:    9.25/  21.66 GFLOPS | Progress: (8/20) | 3.67 s
    [Task  2/25]  Current/Best:   11.56/  21.66 GFLOPS | Progress: (12/20) | 5.67 s
    [Task  2/25]  Current/Best:   14.83/  21.66 GFLOPS | Progress: (16/20) | 7.49 s
    [Task  2/25]  Current/Best:   12.09/  21.66 GFLOPS | Progress: (20/20) | 9.17 s Done.
+
    [Task  3/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  3/25]  Current/Best:   10.27/  22.78 GFLOPS | Progress: (4/20) | 3.33 s
    [Task  3/25]  Current/Best:   12.86/  23.45 GFLOPS | Progress: (8/20) | 5.29 s
    [Task  3/25]  Current/Best:   23.57/  23.57 GFLOPS | Progress: (12/20) | 7.73 s
    [Task  3/25]  Current/Best:   12.45/  23.57 GFLOPS | Progress: (16/20) | 9.62 s
    [Task  3/25]  Current/Best:   24.21/  24.21 GFLOPS | Progress: (20/20) | 11.41 s Done.
+
    [Task  4/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  4/25]  Current/Best:    9.82/  19.77 GFLOPS | Progress: (4/20) | 6.96 s
    [Task  4/25]  Current/Best:    8.03/  20.72 GFLOPS | Progress: (8/20) | 8.46 s
    [Task  4/25]  Current/Best:   21.05/  21.05 GFLOPS | Progress: (12/20) | 10.50 s
    [Task  4/25]  Current/Best:   20.61/  21.05 GFLOPS | Progress: (16/20) | 11.82 s
    [Task  4/25]  Current/Best:   11.89/  21.05 GFLOPS | Progress: (20/20) | 13.98 s Done.
+
    [Task  5/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  5/25]  Current/Best:    5.90/  16.51 GFLOPS | Progress: (4/20) | 3.17 s
    [Task  5/25]  Current/Best:   22.24/  22.24 GFLOPS | Progress: (8/20) | 4.92 s
    [Task  5/25]  Current/Best:   11.78/  22.24 GFLOPS | Progress: (12/20) | 6.68 s
    [Task  5/25]  Current/Best:   13.06/  22.24 GFLOPS | Progress: (16/20) | 8.49 s
    [Task  5/25]  Current/Best:    3.38/  22.24 GFLOPS | Progress: (20/20) | 10.95 s Done.
+
    [Task  6/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  6/25]  Current/Best:   11.46/  17.87 GFLOPS | Progress: (4/20) | 3.95 s
    [Task  6/25]  Current/Best:    5.97/  19.95 GFLOPS | Progress: (8/20) | 6.07 s
    [Task  6/25]  Current/Best:   12.10/  19.95 GFLOPS | Progress: (12/20) | 8.88 s
    [Task  6/25]  Current/Best:   10.42/  19.95 GFLOPS | Progress: (16/20) | 16.14 s
    [Task  6/25]  Current/Best:   10.69/  21.19 GFLOPS | Progress: (20/20) | 18.69 s Done.
+
    [Task  7/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  7/25]  Current/Best:   11.86/  18.12 GFLOPS | Progress: (4/20) | 3.78 s
    [Task  7/25]  Current/Best:   16.62/  18.12 GFLOPS | Progress: (8/20) | 5.75 s
    [Task  7/25]  Current/Best:    8.50/  19.53 GFLOPS | Progress: (12/20) | 7.78 s
    [Task  7/25]  Current/Best:   12.97/  19.53 GFLOPS | Progress: (16/20) | 9.71 s
    [Task  7/25]  Current/Best:   13.24/  23.39 GFLOPS | Progress: (20/20) | 11.43 s Done.
+
    [Task  8/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  8/25]  Current/Best:   13.31/  16.18 GFLOPS | Progress: (4/20) | 4.08 s
    [Task  8/25]  Current/Best:   15.94/  16.18 GFLOPS | Progress: (8/20) | 6.14 s
    [Task  8/25]  Current/Best:    4.11/  19.97 GFLOPS | Progress: (12/20) | 8.70 s
    [Task  8/25]  Current/Best:    7.63/  19.97 GFLOPS | Progress: (16/20) | 12.90 s
    [Task  8/25]  Current/Best:    6.56/  19.97 GFLOPS | Progress: (20/20) | 14.66 s Done.
+
    [Task  9/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task  9/25]  Current/Best:   14.07/  14.96 GFLOPS | Progress: (4/20) | 12.31 s
    [Task  9/25]  Current/Best:   12.70/  21.02 GFLOPS | Progress: (8/20) | 15.54 s
    [Task  9/25]  Current/Best:   12.46/  21.02 GFLOPS | Progress: (12/20) | 17.01 s
    [Task  9/25]  Current/Best:   17.44/  21.02 GFLOPS | Progress: (16/20) | 18.80 s
    [Task  9/25]  Current/Best:   16.29/  21.02 GFLOPS | Progress: (20/20) | 20.68 s
    [Task 10/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 10/25]  Current/Best:   12.61/  13.99 GFLOPS | Progress: (4/20) | 3.74 s
    [Task 10/25]  Current/Best:    8.65/  13.99 GFLOPS | Progress: (8/20) | 6.22 s
    [Task 10/25]  Current/Best:    6.24/  15.37 GFLOPS | Progress: (12/20) | 7.88 s
    [Task 10/25]  Current/Best:   12.98/  18.09 GFLOPS | Progress: (16/20) | 9.31 s
   [Task 10/25]  Current/Best:   13.17/  18.30 GFLOPS | Progress: (20/20) | 12.28 s Done.
+
    [Task 11/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 11/25]  Current/Best:    6.19/  21.68 GFLOPS | Progress: (4/20) | 3.44 s
    [Task 11/25]  Current/Best:   18.81/  24.14 GFLOPS | Progress: (8/20) | 4.91 s
    [Task 11/25]  Current/Best:    6.26/  24.14 GFLOPS | Progress: (12/20) | 7.21 s
    [Task 11/25]  Current/Best:   21.23/  24.14 GFLOPS | Progress: (16/20) | 9.11 s
    [Task 11/25]  Current/Best:    3.13/  24.14 GFLOPS | Progress: (20/20) | 12.06 s Done.
+
    [Task 12/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 12/25]  Current/Best:   11.91/  18.33 GFLOPS | Progress: (4/20) | 3.58 s
    [Task 12/25]  Current/Best:    9.38/  18.33 GFLOPS | Progress: (8/20) | 6.16 s
    [Task 12/25]  Current/Best:   10.66/  18.33 GFLOPS | Progress: (12/20) | 12.45 s
    [Task 12/25]  Current/Best:   17.37/  18.33 GFLOPS | Progress: (16/20) | 17.23 s
    [Task 12/25]  Current/Best:   10.87/  18.33 GFLOPS | Progress: (20/20) | 21.97 s Done.
+
    [Task 13/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 13/25]  Current/Best:   11.44/  13.94 GFLOPS | Progress: (4/20) | 4.20 s
    [Task 13/25]  Current/Best:   15.27/  15.80 GFLOPS | Progress: (8/20) | 7.64 s
    [Task 13/25]  Current/Best:    9.97/  19.31 GFLOPS | Progress: (12/20) | 11.05 s
    [Task 13/25]  Current/Best:   18.95/  19.31 GFLOPS | Progress: (16/20) | 14.11 s
    [Task 13/25]  Current/Best:    3.01/  20.49 GFLOPS | Progress: (20/20) | 16.70 s Done.
+
    [Task 14/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 14/25]  Current/Best:   10.68/  17.92 GFLOPS | Progress: (4/20) | 3.83 s
    [Task 14/25]  Current/Best:   16.14/  17.92 GFLOPS | Progress: (8/20) | 6.77 s
    [Task 14/25]  Current/Best:   14.03/  17.92 GFLOPS | Progress: (12/20) | 11.02 s
    [Task 14/25]  Current/Best:   11.87/  17.92 GFLOPS | Progress: (16/20) | 13.68 s Done.
+
    [Task 14/25]  Current/Best:   16.42/  20.46 GFLOPS | Progress: (20/20) | 15.98 s Done.
+
    [Task 15/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 15/25]  Current/Best:   16.67/  20.02 GFLOPS | Progress: (4/20) | 7.87 s
    [Task 15/25]  Current/Best:    1.73/  20.41 GFLOPS | Progress: (8/20) | 10.04 s
    [Task 15/25]  Current/Best:   22.22/  22.22 GFLOPS | Progress: (12/20) | 11.20 s
    [Task 15/25]  Current/Best:    4.98/  22.22 GFLOPS | Progress: (16/20) | 12.74 s
    [Task 15/25]  Current/Best:   18.45/  22.22 GFLOPS | Progress: (20/20) | 14.59 s Done.
+
    [Task 16/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 16/25]  Current/Best:   14.66/  14.66 GFLOPS | Progress: (4/20) | 3.47 s
    [Task 16/25]  Current/Best:   19.50/  19.50 GFLOPS | Progress: (8/20) | 4.87 s
    [Task 16/25]  Current/Best:   18.79/  19.50 GFLOPS | Progress: (12/20) | 9.12 s
    [Task 16/25]  Current/Best:    6.56/  19.50 GFLOPS | Progress: (16/20) | 10.65 s
    [Task 16/25]  Current/Best:   15.06/  19.50 GFLOPS | Progress: (20/20) | 11.85 s Done.
+
    [Task 17/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 17/25]  Current/Best:    3.10/  11.68 GFLOPS | Progress: (4/20) | 4.91 s
    [Task 17/25]  Current/Best:    3.10/  18.22 GFLOPS | Progress: (8/20) | 7.28 s
    [Task 17/25]  Current/Best:   16.23/  23.52 GFLOPS | Progress: (12/20) | 9.30 s
    [Task 17/25]  Current/Best:   14.99/  23.52 GFLOPS | Progress: (16/20) | 12.43 s
    [Task 17/25]  Current/Best:    9.84/  23.52 GFLOPS | Progress: (20/20) | 14.30 s Done.
+
    [Task 18/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 18/25]  Current/Best:    2.96/  15.65 GFLOPS | Progress: (4/20) | 4.11 s
    [Task 18/25]  Current/Best:   15.93/  19.36 GFLOPS | Progress: (8/20) | 6.44 s
    [Task 18/25]  Current/Best:   12.24/  19.36 GFLOPS | Progress: (12/20) | 10.86 s
    [Task 18/25]  Current/Best:    9.52/  21.41 GFLOPS | Progress: (16/20) | 13.01 s
    [Task 18/25]  Current/Best:   22.02/  22.02 GFLOPS | Progress: (20/20) | 16.79 s Done.
+
    [Task 19/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 19/25]  Current/Best:   17.01/  17.01 GFLOPS | Progress: (4/20) | 5.50 s
    [Task 19/25]  Current/Best:   15.18/  20.79 GFLOPS | Progress: (8/20) | 8.45 s
    [Task 19/25]  Current/Best:    2.70/  20.79 GFLOPS | Progress: (12/20) | 12.33 s
    [Task 19/25]  Current/Best:    3.10/  20.79 GFLOPS | Progress: (16/20) | 14.87 s
    [Task 19/25]  Current/Best:   19.62/  20.79 GFLOPS | Progress: (20/20) | 17.47 s Done.
+
    [Task 20/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 20/25]  Current/Best:   19.55/  20.89 GFLOPS | Progress: (4/20) | 2.96 s
    [Task 20/25]  Current/Best:    5.37/  20.89 GFLOPS | Progress: (8/20) | 5.54 s
    [Task 20/25]  Current/Best:   14.49/  20.89 GFLOPS | Progress: (12/20) | 9.17 s
    [Task 20/25]  Current/Best:   12.01/  20.89 GFLOPS | Progress: (16/20) | 11.10 s
    [Task 20/25]  Current/Best:    8.50/  21.48 GFLOPS | Progress: (20/20) | 13.59 s
    [Task 21/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 21/25]  Current/Best:    9.64/  15.66 GFLOPS | Progress: (4/20) | 2.85 s
    [Task 21/25]  Current/Best:    6.51/  16.26 GFLOPS | Progress: (8/20) | 4.23 s
    [Task 21/25]  Current/Best:   16.82/  22.48 GFLOPS | Progress: (12/20) | 6.08 s
    [Task 21/25]  Current/Best:    8.31/  22.48 GFLOPS | Progress: (16/20) | 7.57 s
    [Task 21/25]  Current/Best:   10.76/  22.48 GFLOPS | Progress: (20/20) | 9.12 s
    [Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 22/25]  Current/Best:    8.18/  11.77 GFLOPS | Progress: (4/20) | 3.70 s
    [Task 22/25]  Current/Best:   12.00/  15.93 GFLOPS | Progress: (8/20) | 5.26 s
    [Task 22/25]  Current/Best:   12.98/  15.93 GFLOPS | Progress: (12/20) | 6.72 s
    [Task 22/25]  Current/Best:   10.64/  21.47 GFLOPS | Progress: (16/20) | 8.19 s
    [Task 22/25]  Current/Best:    2.31/  21.47 GFLOPS | Progress: (20/20) | 10.48 s Done.
+
    [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 23/25]  Current/Best:   23.58/  23.58 GFLOPS | Progress: (4/20) | 6.10 s
    [Task 23/25]  Current/Best:    4.46/  23.58 GFLOPS | Progress: (8/20) | 8.71 s
    [Task 23/25]  Current/Best:    9.43/  23.58 GFLOPS | Progress: (12/20) | 11.45 s
    [Task 23/25]  Current/Best:   23.91/  23.91 GFLOPS | Progress: (16/20) | 15.09 s
    [Task 23/25]  Current/Best:    1.55/  23.91 GFLOPS | Progress: (20/20) | 18.42 s Done.
+
    [Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 24/25]  Current/Best:    3.63/   6.27 GFLOPS | Progress: (4/20) | 8.18 s
    [Task 24/25]  Current/Best:    1.54/   6.85 GFLOPS | Progress: (8/20) | 18.93 s Done.
-
    [Task 21/25]  Current/Best:    7.10/  20.81 GFLOPS | Progress: (8/20) | 9.00 s
    [Task 21/25]  Current/Best:   13.17/  20.81 GFLOPS | Progress: (12/20) | 10.90 s
    [Task 21/25]  Current/Best:   18.25/  20.81 GFLOPS | Progress: (16/20) | 13.73 s
    [Task 21/25]  Current/Best:   20.95/  20.95 GFLOPS | Progress: (20/20) | 14.93 s
    [Task 22/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 22/25]  Current/Best:   19.99/  19.99 GFLOPS | Progress: (4/20) | 4.09 s
    [Task 22/25]  Current/Best:   16.28/  19.99 GFLOPS | Progress: (8/20) | 5.71 s
    [Task 22/25]  Current/Best:   12.10/  19.99 GFLOPS | Progress: (12/20) | 9.38 s
    [Task 22/25]  Current/Best:   14.46/  19.99 GFLOPS | Progress: (16/20) | 11.45 s
    [Task 22/25]  Current/Best:    4.78/  19.99 GFLOPS | Progress: (20/20) | 13.18 s Done.
-
    [Task 23/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 23/25]  Current/Best:   11.96/  18.35 GFLOPS | Progress: (4/20) | 4.18 s
    [Task 23/25]  Current/Best:   10.88/  22.01 GFLOPS | Progress: (8/20) | 6.40 s
    [Task 23/25]  Current/Best:    5.32/  22.01 GFLOPS | Progress: (12/20) | 14.25 s
    [Task 23/25]  Current/Best:   18.41/  22.01 GFLOPS | Progress: (16/20) | 21.02 s
    [Task 23/25]  Current/Best:   20.49/  22.16 GFLOPS | Progress: (20/20) | 26.20 s Done.
-
    [Task 24/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 24/25]  Current/Best:    3.21/   6.86 GFLOPS | Progress: (4/20) | 9.70 s
    [Task 24/25]  Current/Best:    1.56/   6.86 GFLOPS | Progress: (8/20) | 21.17 s
    [Task 24/25]  Current/Best:    3.41/   6.86 GFLOPS | Progress: (12/20) | 32.84 s
    [Task 24/25]  Current/Best:    3.61/   7.13 GFLOPS | Progress: (16/20) | 43.58 s Done.
-
    [Task 24/25]  Current/Best:    8.06/   8.06 GFLOPS | Progress: (20/20) | 54.03 s
    [Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 25/25]  Current/Best:    4.42/   8.10 GFLOPS | Progress: (4/20) | 3.96 s
    [Task 25/25]  Current/Best:    2.97/   8.10 GFLOPS | Progress: (8/20) | 14.63 s
    [Task 25/25]  Current/Best:    9.06/   9.06 GFLOPS | Progress: (12/20) | 25.40 s
    [Task 25/25]  Current/Best:    1.54/   9.06 GFLOPS | Progress: (16/20) | 31.00 s
    [Task 25/25]  Current/Best:    6.13/   9.06 GFLOPS | Progress: (20/20) | 41.76 s
+
    [Task 24/25]  Current/Best:    2.07/   8.30 GFLOPS | Progress: (12/20) | 23.31 s
    [Task 24/25]  Current/Best:    5.72/   8.30 GFLOPS | Progress: (16/20) | 34.06 s
    [Task 24/25]  Current/Best:    7.01/   9.48 GFLOPS | Progress: (20/20) | 35.78 s
    [Task 25/25]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/20) | 0.00 s
    [Task 25/25]  Current/Best:    9.84/   9.84 GFLOPS | Progress: (4/20) | 11.77 s
    [Task 25/25]  Current/Best:    8.49/  10.06 GFLOPS | Progress: (8/20) | 15.05 s
    [Task 25/25]  Current/Best:    5.79/  10.22 GFLOPS | Progress: (12/20) | 18.70 s
    [Task 25/25]  Current/Best:    1.55/  10.22 GFLOPS | Progress: (16/20) | 20.05 s
    [Task 25/25]  Current/Best:    5.69/  10.22 GFLOPS | Progress: (20/20) | 30.76 s
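The per-task progress lines above are emitted by AutoTVM's progress-bar callback as each extracted task is tuned in turn. A minimal sketch of the loop that drives them, assuming ``tasks`` and a ``tuning_option`` dict (trial count, early stopping, measure options, records file) are set up as earlier in this tutorial:

.. code-block:: python

    from tvm import autotvm
    from tvm.autotvm.tuner import XGBTuner

    for i, task in enumerate(tasks):
        prefix = "[Task %2d/%2d] " % (i + 1, len(tasks))
        tuner_obj = XGBTuner(task, loss_type="rank")  # XGBoost-based cost model
        tuner_obj.tune(
            n_trial=min(tuning_option["trials"], len(task.config_space)),
            early_stopping=tuning_option["early_stopping"],
            measure_option=tuning_option["measure_option"],
            callbacks=[
                autotvm.callback.progress_bar(tuning_option["trials"], prefix=prefix),
                autotvm.callback.log_to_file(tuning_option["tuning_records"]),
            ],
        )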
 
 
 
@@ -673,7 +673,7 @@ Verify that the optimized model runs and produces the same results:
 
  .. code-block:: none
 
-    class='n02123045 tabby, tabby cat' with probability=0.621103
+    class='n02123045 tabby, tabby cat' with probability=0.621102
     class='n02123159 tiger cat' with probability=0.356379
     class='n02124075 Egyptian cat' with probability=0.019712
     class='n02129604 tiger, Panthera tigris' with probability=0.001215
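The labels above come from running the compiled module and ranking the softmax scores; a minimal sketch, assuming ``module``, ``input_name``, ``img_data``, and the ImageNet ``labels`` list are defined as earlier in this tutorial:

.. code-block:: python

    import numpy as np
    from scipy.special import softmax

    module.set_input(input_name, img_data)
    module.run()
    tvm_output = module.get_output(0).numpy()

    scores = np.squeeze(softmax(tvm_output))
    ranks = np.argsort(scores)[::-1]
    for rank in ranks[0:5]:
        print("class='%s' with probability=%f" % (labels[rank], scores[rank]))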
@@ -731,8 +731,8 @@ improvement in comparing the optimized model to the unoptimized model.
 
  .. code-block:: none
 
-    optimized: {'mean': 411.619885099999, 'median': 411.85912204999795, 'std': 0.863958546324752}
-    unoptimized: {'mean': 510.8923493299993, 'median': 510.9241215499992, 'std': 1.064284793019263}
+    optimized: {'mean': 394.6161773400081, 'median': 394.53298935002294, 'std': 0.7581375836557911}
+    unoptimized: {'mean': 509.19756300999325, 'median': 509.0108063000116, 'std': 1.0381112545995472}
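These statistics are gathered with the standard-library ``timeit`` module rather than from a single run, which is why mean, median, and standard deviation are all reported. A sketch, assuming ``module`` is the compiled GraphModule:

.. code-block:: python

    import timeit

    import numpy as np

    timing_number = 10
    timing_repeat = 10
    timings = (
        np.array(
            timeit.Timer(lambda: module.run()).repeat(repeat=timing_repeat, number=timing_number)
        )
        * 1000
        / timing_number
    )  # per-run time in milliseconds
    print("optimized:", {"mean": np.mean(timings), "median": np.median(timings), "std": np.std(timings)})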
 
 
 
@@ -755,7 +755,7 @@ profiling/benchmarking.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 11 minutes  14.861 seconds)
+   **Total running time of the script:** ( 10 minutes  5.518 seconds)
 
 
 .. _sphx_glr_download_tutorial_autotvm_relay_x86.py:
diff --git a/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt b/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
index de74183bb9..f2ac1cde9d 100644
--- a/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
+++ b/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
@@ -282,7 +282,7 @@ device and returns the measured cost. Network overhead is excluded.
 
  .. code-block:: none
 
-    1.233e-07 secs/op
+    1.259e-07 secs/op
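The figure above is measured on the remote device with TVM's ``time_evaluator``, which loops the kernel on-device so the RPC round trip is not counted. A sketch, assuming ``func`` is the uploaded module, ``dev`` the remote device handle, and ``a``/``b`` the argument arrays:

.. code-block:: python

    # average over 10 on-device runs; network overhead is excluded
    time_f = func.time_evaluator(func.entry_name, dev, number=10)
    cost = time_f(a, b).mean
    print("%g secs/op" % cost)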
 
 
 
diff --git a/docs/_sources/tutorial/intro_topi.rst.txt b/docs/_sources/tutorial/intro_topi.rst.txt
index 185d5bfb26..19c36330bf 100644
--- a/docs/_sources/tutorial/intro_topi.rst.txt
+++ b/docs/_sources/tutorial/intro_topi.rst.txt
@@ -213,7 +213,7 @@ we can schedule the following series of operations ending with :code:`topi.sum`
 
  .. code-block:: none
 
-    /workspace/python/tvm/target/target.py:389: UserWarning: Try specifying cuda arch by adding 'arch=sm_xx' to your target.
+    /workspace/python/tvm/target/target.py:393: UserWarning: Try specifying cuda arch by adding 'arch=sm_xx' to your target.
       warnings.warn("Try specifying cuda arch by adding 'arch=sm_xx' to your target.")
     @main = primfn(a_1: handle, b_1: handle) -> ()
       attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
@@ -263,7 +263,7 @@ As you can see, scheduled stages of computation have been accumulated and we can
 
  .. code-block:: none
 
-    [stage(a, placeholder(a, 0x1fb9d5a0)), stage(b, placeholder(b, 0x116f9390)), stage(T_add, compute(T_add, body=[(a[ax0, ax1, ax2] + b[ax1, ax2])], axis=[iter_var(ax0, range(min=0, ext=100)), iter_var(ax1, range(min=0, ext=10)), iter_var(ax2, range(min=0, ext=10))], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[(a[ax0, ax1, ax2]*b[ax1, ax2])], axis=[iter_var(ax0, range(min=0, ext=100)), iter_var(ax1, range(min=0, ext=10)), iter_var(ax2, range(mi [...]
+    [stage(a, placeholder(a, 0xab3c290)), stage(b, placeholder(b, 0xe2d8fe0)), stage(T_add, compute(T_add, body=[(a[ax0, ax1, ax2] + b[ax1, ax2])], axis=[iter_var(ax0, range(min=0, ext=100)), iter_var(ax1, range(min=0, ext=10)), iter_var(ax2, range(min=0, ext=10))], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[(a[ax0, ax1, ax2]*b[ax1, ax2])], axis=[iter_var(ax0, range(min=0, ext=100)), iter_var(ax1, range(min=0, ext=10)), iter_var(ax2, range(min= [...]
 
 
 
diff --git a/docs/_sources/tutorial/relay_quick_start.rst.txt b/docs/_sources/tutorial/relay_quick_start.rst.txt
index 81b1a0ddad..f292a73060 100644
--- a/docs/_sources/tutorial/relay_quick_start.rst.txt
+++ b/docs/_sources/tutorial/relay_quick_start.rst.txt
@@ -257,7 +257,7 @@ in this example. Then the machine code will be generated as the module library.
 
  .. code-block:: none
 
-    /workspace/python/tvm/target/target.py:389: UserWarning: Try specifying cuda arch by adding 'arch=sm_xx' to your target.
+    /workspace/python/tvm/target/target.py:393: UserWarning: Try specifying cuda arch by adding 'arch=sm_xx' to your target.
       warnings.warn("Try specifying cuda arch by adding 'arch=sm_xx' to your target.")
 
 
diff --git a/docs/_sources/tutorial/sg_execution_times.rst.txt b/docs/_sources/tutorial/sg_execution_times.rst.txt
index 820cc69f9b..166c136036 100644
--- a/docs/_sources/tutorial/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorial/sg_execution_times.rst.txt
@@ -5,32 +5,32 @@
 
 Computation times
 =================
-**14:19.325** total execution time for **tutorial** files:
+**13:07.511** total execution time for **tutorial** files:
 
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_autotvm_relay_x86.py` (``autotvm_relay_x86.py``)                 | 11:14.861 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_autotvm_relay_x86.py` (``autotvm_relay_x86.py``)                 | 10:05.518 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_auto_scheduler_matmul_x86.py` (``auto_scheduler_matmul_x86.py``) | 01:03.881 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_auto_scheduler_matmul_x86.py` (``auto_scheduler_matmul_x86.py``) | 01:04.962 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)     | 01:01.163 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)     | 00:57.811 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_relay_quick_start.py` (``relay_quick_start.py``)                 | 00:33.548 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_relay_quick_start.py` (``relay_quick_start.py``)                 | 00:32.776 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_autotvm_matmul_x86.py` (``autotvm_matmul_x86.py``)               | 00:23.955 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_autotvm_matmul_x86.py` (``autotvm_matmul_x86.py``)               | 00:25.044 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_tensor_ir_blitz_course.py` (``tensor_ir_blitz_course.py``)       | 00:01.020 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_intro_topi.py` (``intro_topi.py``)                               | 00:00.696 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_intro_topi.py` (``intro_topi.py``)                               | 00:00.717 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_tensor_ir_blitz_course.py` (``tensor_ir_blitz_course.py``)       | 00:00.531 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``) | 00:00.173 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``) | 00:00.164 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_introduction.py` (``introduction.py``)                           | 00:00.005 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_introduction.py` (``introduction.py``)                           | 00:00.004 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_uma.py` (``uma.py``)                                             | 00:00.001 | 0.0 MB |
-+------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_tvmc_command_line_driver.py` (``tvmc_command_line_driver.py``)   | 00:00.001 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_uma.py` (``uma.py``)                                             | 00:00.002 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_tutorial_install.py` (``install.py``)                                     | 00:00.001 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
+| :ref:`sphx_glr_tutorial_tvmc_command_line_driver.py` (``tvmc_command_line_driver.py``)   | 00:00.001 | 0.0 MB |
++------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_tutorial_tvmc_python.py` (``tvmc_python.py``)                             | 00:00.001 | 0.0 MB |
 +------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/tutorial/tensor_expr_get_started.rst.txt b/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
index 09534c2322..f40a9edfa6 100644
--- a/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
+++ b/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
@@ -501,10 +501,10 @@ We can now compare the different schedules
  .. code-block:: none
 
                 Operator                  Timing             Performance
-                   numpy    6.7491699996935495e-06                   1.0
-                   naive              6.6866e-06      0.9907292304540571
-                parallel              6.9262e-06      1.0262298920184982
-                  vector              2.4535e-05      3.6352618175440874
+                   numpy    6.935409996913222e-06                    1.0
+                   naive    6.653199999999999e-06     0.9593088228325611
+                parallel              6.8854e-06       0.992789179452191
+                  vector    2.4577299999999996e-05    3.5437414674747045
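The table is printed from a ``log`` of ``(name, runtime)`` pairs collected while evaluating each schedule; a sketch of the report loop, with the numpy entry first so it serves as the baseline:

.. code-block:: python

    baseline = log[0][1]
    print("%s\t%s\t%s" % ("Operator".rjust(20), "Timing".rjust(20), "Performance".rjust(20)))
    for name, runtime in log:
        print("%s\t%s\t%s" % (name.rjust(20), str(runtime).rjust(20), str(runtime / baseline).rjust(20)))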
 
 
 
@@ -925,7 +925,7 @@ matrix multiplication.
 
  .. code-block:: none
 
-    Numpy running time: 0.018646
+    Numpy running time: 0.018141
 
 
 
@@ -983,7 +983,7 @@ optimizations.
 
  .. code-block:: none
 
-    none: 3.415424
+    none: 3.184717
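Each of the timings below ("none", "blocking", and so on) comes from one helper that builds the schedule, checks it against the numpy ``answer``, and times it with ``time_evaluator``. A sketch, assuming ``M``/``N``, ``dtype``, ``dev``, the input arrays ``a``/``b``, and ``answer`` are defined as in the earlier cells:

.. code-block:: python

    import numpy as np
    import tvm
    import tvm.testing

    def evaluate_operation(s, vars, target, name, optimization, log):
        func = tvm.build(s, vars, target=target, name=name)
        c = tvm.nd.array(np.zeros((M, N), dtype=dtype), dev)
        func(a, b, c)
        tvm.testing.assert_allclose(c.numpy(), answer, rtol=1e-5)

        evaluator = func.time_evaluator(func.entry_name, dev, number=10)
        mean_time = evaluator(a, b, c).mean
        print("%s: %f" % (optimization, mean_time))
        log.append((optimization, mean_time))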
 
 
 
@@ -1086,7 +1086,7 @@ schedule.
 
  .. code-block:: none
 
-    blocking: 0.297603
+    blocking: 0.295068
 
 
 
@@ -1182,7 +1182,7 @@ already cache friendly from our previous optimizations.
 
  .. code-block:: none
 
-    vectorization: 0.338439
+    vectorization: 0.336969
     @main = primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
       buffers = {A: Buffer(A_2: Pointer(float32), float32, [1048576], []),
@@ -1256,7 +1256,7 @@ more cache friendly.
 
  .. code-block:: none
 
-    loop permutation: 0.118918
+    loop permutation: 0.114711
     @main = primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
       buffers = {A: Buffer(A_2: Pointer(float32), float32, [1048576], []),
@@ -1355,7 +1355,7 @@ optimized schedule.
 
  .. code-block:: none
 
-    array packing: 0.109809
+    array packing: 0.108464
     @main = primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
       buffers = {A: Buffer(A_2: Pointer(float32), float32, [1048576], []),
@@ -1448,7 +1448,7 @@ to `C` when all the block results are ready.
 
  .. code-block:: none
 
-    block caching: 0.110933
+    block caching: 0.110800
     @main = primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
       buffers = {A: Buffer(A_2: Pointer(float32), float32, [1048576], []),
@@ -1534,7 +1534,7 @@ of thread-level parallelization.
 
  .. code-block:: none
 
-    parallelization: 0.147084
+    parallelization: 0.145984
     @main = primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
       buffers = {A: Buffer(A_2: Pointer(float32), float32, [1048576], []),
@@ -1615,13 +1615,13 @@ working, we can compare the results.
  .. code-block:: none
 
                 Operator                  Timing             Performance
-                    none      3.4154243923000003                     1.0
-                blocking            0.2976031614     0.08713504596118123
-           vectorization     0.33843891680000004     0.09909132158305223
-        loop permutation            0.1189182698    0.034818006824598034
-           array packing            0.1098086183     0.03215079758391406
-           block caching            0.1109328605      0.0324799637638285
-         parallelization            0.1470843008    0.043064721658485065
+                    none            3.1847171831                     1.0
+                blocking            0.2950679063     0.09265121181428777
+           vectorization            0.3369687117     0.10580804898097576
+        loop permutation     0.11471050840000001     0.03601905657705559
+           array packing            0.1084643402     0.03405776210697018
+           block caching            0.1108003693    0.034791274367461114
+         parallelization            0.1459836263    0.045838803858212526
 
 
 
@@ -1661,11 +1661,6 @@ operations with tunable parameters that allows you to automatically optimize
 the computation for specific platforms.
 
 
-.. rst-class:: sphx-glr-timing
-
-   **Total running time of the script:** ( 1 minutes  1.163 seconds)
-
-
 .. _sphx_glr_download_tutorial_tensor_expr_get_started.py:
 
 .. only:: html
diff --git a/docs/commit_hash b/docs/commit_hash
index 3d40b35967..b07295921a 100644
--- a/docs/commit_hash
+++ b/docs/commit_hash
@@ -1 +1 @@
-4e4089edda7f3cd888178f4ad325d7824717ce8e
+d4bf9ecf5524d265916ac7b860b0027f5eee5c49
diff --git a/docs/genindex.html b/docs/genindex.html
index c5afc44599..35b405f21b 100644
--- a/docs/genindex.html
+++ b/docs/genindex.html
@@ -1936,6 +1936,8 @@
       <li><a href="reference/api/python/contrib.html#tvm.contrib.nvcc.get_target_compute_version">get_target_compute_version() (in module tvm.contrib.nvcc)</a>
 </li>
       <li><a href="reference/api/python/auto_scheduler.html#tvm.auto_scheduler.LayoutRewriteOption.get_target_default">get_target_default() (tvm.auto_scheduler.LayoutRewriteOption static method)</a>
+</li>
+      <li><a href="reference/api/python/target.html#tvm.target.Target.get_target_device_type">get_target_device_type() (tvm.target.Target method)</a>
 </li>
       <li><a href="reference/api/python/autotvm.html#tvm.autotvm.task.topi_integration.TaskExtractEnv.get_tasks">get_tasks() (tvm.autotvm.task.topi_integration.TaskExtractEnv method)</a>
 </li>
diff --git a/docs/how_to/compile_models/from_darknet.html b/docs/how_to/compile_models/from_darknet.html
index 095705205f..02d0568293 100644
--- a/docs/how_to/compile_models/from_darknet.html
+++ b/docs/how_to/compile_models/from_darknet.html
@@ -572,7 +572,7 @@ class:[&#39;truck 0.9266&#39;] left:471 top:83 right:689 bottom:169
 class:[&#39;bicycle 0.9984&#39;] left:111 top:113 right:577 bottom:447
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  9.719 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  8.901 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-darknet-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7716f96385bd5abb6e822041e285be54/from_darknet.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_darknet.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/from_keras.html b/docs/how_to/compile_models/from_keras.html
index f8ce9277e0..3cd2596437 100644
--- a/docs/how_to/compile_models/from_keras.html
+++ b/docs/how_to/compile_models/from_keras.html
@@ -493,7 +493,7 @@ pip install -U tensorflow --user
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Relay top-1 id: 285, class name: Egyptian cat
 
 1/1 [==============================] - ETA: 0s
-1/1 [==============================] - 1s 990ms/step
+1/1 [==============================] - 1s 946ms/step
 Keras top-1 id: 285, class name: Egyptian cat
 </pre></div>
 </div>
diff --git a/docs/how_to/compile_models/from_mxnet.html b/docs/how_to/compile_models/from_mxnet.html
index 52174fdac2..8ebb03cbbf 100644
--- a/docs/how_to/compile_models/from_mxnet.html
+++ b/docs/how_to/compile_models/from_mxnet.html
@@ -427,7 +427,7 @@ to download the full example code</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;x&quot;</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.html#tuple" title="builtins.tuple" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">x</span><span class="o">.</span><span class="n">shape</span></a><span class="p">)</span>
 </pre></div>
 </div>
-<img src="../../_images/sphx_glr_from_mxnet_001.png" srcset="../../_images/sphx_glr_from_mxnet_001.png" alt="from mxnet" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip54f42959-1d21-4eb1-ae6d-40e9d77ca8b4 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
+<img src="../../_images/sphx_glr_from_mxnet_001.png" srcset="../../_images/sphx_glr_from_mxnet_001.png" alt="from mxnet" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zipa8b93490-dbe1-4848-9780-b3f14fdec2b8 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
 x (1, 3, 224, 224)
 </pre></div>
 </div>
diff --git a/docs/how_to/compile_models/from_oneflow.html b/docs/how_to/compile_models/from_oneflow.html
index 28a84cdeb6..d9bd282ce9 100644
--- a/docs/how_to/compile_models/from_oneflow.html
+++ b/docs/how_to/compile_models/from_oneflow.html
@@ -435,12 +435,15 @@ Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdo
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading: &quot;https://oneflow-public.oss-cn-beijing.aliyuncs.com/model_zoo/flowvision/classification/ResNet/resnet18.zip&quot; to /workspace/.oneflow/flowvision_cache/resnet18.zip
 
   0%|          | 0.00/41.5M [00:00&lt;?, ?B/s]
- 19%|#9        | 7.99M/41.5M [00:00&lt;00:00, 66.2MB/s]
- 39%|###8      | 16.0M/41.5M [00:00&lt;00:00, 60.6MB/s]
- 58%|#####7    | 24.0M/41.5M [00:00&lt;00:00, 59.7MB/s]
- 77%|#######7  | 32.1M/41.5M [00:00&lt;00:00, 67.3MB/s]
- 96%|#########6| 40.0M/41.5M [00:00&lt;00:00, 69.0MB/s]
-100%|##########| 41.5M/41.5M [00:00&lt;00:00, 68.1MB/s]
+ 15%|#5        | 6.33M/41.5M [00:00&lt;00:00, 42.0MB/s]
+ 25%|##4       | 10.3M/41.5M [00:00&lt;00:00, 38.7MB/s]
+ 35%|###5      | 14.7M/41.5M [00:00&lt;00:00, 41.5MB/s]
+ 45%|####5     | 18.7M/41.5M [00:00&lt;00:00, 37.9MB/s]
+ 58%|#####7    | 24.0M/41.5M [00:00&lt;00:00, 35.2MB/s]
+ 71%|#######1  | 29.5M/41.5M [00:00&lt;00:00, 41.1MB/s]
+ 81%|########  | 33.6M/41.5M [00:00&lt;00:00, 37.3MB/s]
+ 92%|#########2| 38.3M/41.5M [00:01&lt;00:00, 31.0MB/s]
+100%|##########| 41.5M/41.5M [00:01&lt;00:00, 36.8MB/s]
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/compile_models/from_pytorch.html b/docs/how_to/compile_models/from_pytorch.html
index b161026626..cc44fde4b5 100644
--- a/docs/how_to/compile_models/from_pytorch.html
+++ b/docs/how_to/compile_models/from_pytorch.html
@@ -414,10 +414,9 @@ be unstable.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading: &quot;https://download.pytorch.org/models/resnet18-f37072fd.pth&quot; to /workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
 
   0%|          | 0.00/44.7M [00:00&lt;?, ?B/s]
-  6%|6         | 2.87M/44.7M [00:00&lt;00:01, 30.0MB/s]
- 13%|#2        | 5.73M/44.7M [00:00&lt;00:01, 28.9MB/s]
- 62%|######1   | 27.6M/44.7M [00:00&lt;00:00, 119MB/s]
-100%|##########| 44.7M/44.7M [00:00&lt;00:00, 124MB/s]
+ 41%|####1     | 18.5M/44.7M [00:00&lt;00:00, 194MB/s]
+ 95%|#########4| 42.4M/44.7M [00:00&lt;00:00, 227MB/s]
+100%|##########| 44.7M/44.7M [00:00&lt;00:00, 225MB/s]
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/compile_models/from_tensorflow.html b/docs/how_to/compile_models/from_tensorflow.html
index f62078b275..d404ab167a 100644
--- a/docs/how_to/compile_models/from_tensorflow.html
+++ b/docs/how_to/compile_models/from_tensorflow.html
@@ -632,7 +632,7 @@ banana (score = 0.00022)
 desk (score = 0.00019)
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  10.002 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  7.680 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-tensorflow-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7f1d3d1b878694c201c614c807cdebc8/from_tensorflow.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_tensorflow.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/sg_execution_times.html b/docs/how_to/compile_models/sg_execution_times.html
index eb1c69d139..486b3e6795 100644
--- a/docs/how_to/compile_models/sg_execution_times.html
+++ b/docs/how_to/compile_models/sg_execution_times.html
@@ -327,7 +327,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-compile-models-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>05:34.019</strong> total execution time for <strong>how_to_compile_models</strong> files:</p>
+<p><strong>05:30.009</strong> total execution time for <strong>how_to_compile_models</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 81%" />
@@ -335,44 +335,44 @@
 <col style="width: 8%" />
 </colgroup>
 <tbody>
-<tr class="row-odd"><td><p><a class="reference internal" href="from_tensorflow.html#sphx-glr-how-to-compile-models-from-tensorflow-py"><span class="std std-ref">Compile Tensorflow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tensorflow.py</span></code>)</p></td>
-<td><p>01:10.002</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="from_darknet.html#sphx-glr-how-to-compile-models-from-darknet-py"><span class="std std-ref">Compile YOLO-V2 and YOLO-V3 in DarkNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_darknet.py</span></code>)</p></td>
+<td><p>01:08.901</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="from_darknet.html#sphx-glr-how-to-compile-models-from-darknet-py"><span class="std std-ref">Compile YOLO-V2 and YOLO-V3 in DarkNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_darknet.py</span></code>)</p></td>
-<td><p>01:09.719</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="from_tensorflow.html#sphx-glr-how-to-compile-models-from-tensorflow-py"><span class="std std-ref">Compile Tensorflow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tensorflow.py</span></code>)</p></td>
+<td><p>01:07.680</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_paddle.html#sphx-glr-how-to-compile-models-from-paddle-py"><span class="std std-ref">Compile PaddlePaddle Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_paddle.py</span></code>)</p></td>
-<td><p>00:45.432</p></td>
+<td><p>00:45.725</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_oneflow.html#sphx-glr-how-to-compile-models-from-oneflow-py"><span class="std std-ref">Compile OneFlow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_oneflow.py</span></code>)</p></td>
-<td><p>00:30.015</p></td>
+<td><p>00:31.234</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_mxnet.html#sphx-glr-how-to-compile-models-from-mxnet-py"><span class="std std-ref">Compile MXNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_mxnet.py</span></code>)</p></td>
-<td><p>00:28.326</p></td>
+<td><p>00:26.967</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="from_coreml.html#sphx-glr-how-to-compile-models-from-coreml-py"><span class="std std-ref">Compile CoreML Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_coreml.py</span></code>)</p></td>
-<td><p>00:25.986</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="from_tflite.html#sphx-glr-how-to-compile-models-from-tflite-py"><span class="std std-ref">Compile TFLite Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tflite.py</span></code>)</p></td>
+<td><p>00:25.706</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="from_tflite.html#sphx-glr-how-to-compile-models-from-tflite-py"><span class="std std-ref">Compile TFLite Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tflite.py</span></code>)</p></td>
-<td><p>00:24.868</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="from_coreml.html#sphx-glr-how-to-compile-models-from-coreml-py"><span class="std std-ref">Compile CoreML Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_coreml.py</span></code>)</p></td>
+<td><p>00:24.825</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_pytorch.html#sphx-glr-how-to-compile-models-from-pytorch-py"><span class="std std-ref">Compile PyTorch Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_pytorch.py</span></code>)</p></td>
-<td><p>00:21.952</p></td>
+<td><p>00:21.475</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="from_keras.html#sphx-glr-how-to-compile-models-from-keras-py"><span class="std std-ref">Compile Keras Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_keras.py</span></code>)</p></td>
-<td><p>00:15.222</p></td>
+<td><p>00:15.045</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="from_onnx.html#sphx-glr-how-to-compile-models-from-onnx-py"><span class="std std-ref">Compile ONNX Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_onnx.py</span></code>)</p></td>
-<td><p>00:02.495</p></td>
+<td><p>00:02.450</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/deploy_models/deploy_model_on_android.html b/docs/how_to/deploy_models/deploy_model_on_android.html
index c7ac364d4f..0855bf2e01 100644
--- a/docs/how_to/deploy_models/deploy_model_on_android.html
+++ b/docs/how_to/deploy_models/deploy_model_on_android.html
@@ -649,7 +649,7 @@ to the remote android device.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  15.9375      15.9108      16.0574      15.8540       0.0662
+  15.6265      15.5624      15.9529      15.4771       0.1493
 </pre></div>
 </div>
 </div>
diff --git a/docs/how_to/deploy_models/deploy_object_detection_pytorch.html b/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
index 78dd3fedc9..cfd80dc0e7 100644
--- a/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
+++ b/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
@@ -436,15 +436,13 @@ be unstable.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading: &quot;https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth&quot; to /workspace/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
 
   0%|          | 0.00/170M [00:00&lt;?, ?B/s]
-  8%|8         | 14.3M/170M [00:00&lt;00:01, 146MB/s]
- 17%|#6        | 28.2M/170M [00:00&lt;00:01, 110MB/s]
- 31%|###       | 51.8M/170M [00:00&lt;00:00, 163MB/s]
- 40%|####      | 68.5M/170M [00:00&lt;00:00, 165MB/s]
- 54%|#####3    | 91.6M/170M [00:00&lt;00:00, 191MB/s]
- 68%|######7   | 115M/170M [00:00&lt;00:00, 207MB/s]
- 81%|########  | 137M/170M [00:00&lt;00:00, 215MB/s]
- 96%|#########6| 164M/170M [00:00&lt;00:00, 235MB/s]
-100%|##########| 170M/170M [00:00&lt;00:00, 201MB/s]
+ 12%|#1        | 20.1M/170M [00:00&lt;00:00, 210MB/s]
+ 27%|##7       | 46.5M/170M [00:00&lt;00:00, 250MB/s]
+ 43%|####3     | 73.2M/170M [00:00&lt;00:00, 264MB/s]
+ 59%|#####8    | 99.6M/170M [00:00&lt;00:00, 269MB/s]
+ 74%|#######3  | 126M/170M [00:00&lt;00:00, 270MB/s]
+ 90%|########9 | 152M/170M [00:00&lt;00:00, 273MB/s]
+100%|##########| 170M/170M [00:00&lt;00:00, 267MB/s]
 /venv/apache-tvm-py3.7/lib/python3.7/site-packages/torch/nn/functional.py:3878: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
   for i in range(dim)
 /venv/apache-tvm-py3.7/lib/python3.7/site-packages/torchvision/models/detection/anchor_utils.py:127: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the &#39;trunc&#39; function NOT &#39;floor&#39;). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode=&#39;trunc&#39;), or for actual floor division, use torch.div(a, b, rounding_mode=& [...]
@@ -542,7 +540,7 @@ torchvision rcnn models.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Get 9 valid boxes
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  12.305 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  4.179 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-object-detection-pytorch-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7795da4b258c8feff986668b95ef57ad/deploy_object_detection_pytorch.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_object_detection_pytorch.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_prequantized.html b/docs/how_to/deploy_models/deploy_prequantized.html
index 624f491927..f883f85a20 100644
--- a/docs/how_to/deploy_models/deploy_prequantized.html
+++ b/docs/how_to/deploy_models/deploy_prequantized.html
@@ -480,7 +480,7 @@ training. Other models require a full post training calibration.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading: &quot;https://download.pytorch.org/models/mobilenet_v2-b0353104.pth&quot; to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth
 
   0%|          | 0.00/13.6M [00:00&lt;?, ?B/s]
-100%|##########| 13.6M/13.6M [00:00&lt;00:00, 163MB/s]
+100%|##########| 13.6M/13.6M [00:00&lt;00:00, 172MB/s]
 </pre></div>
 </div>
 </div>
@@ -571,7 +571,7 @@ output values are identical out of 1000 outputs from mobilenet v2.</p>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  90.3444      90.2771      92.3565      90.1384       0.2529
+  90.1937      90.0900      94.7577      89.9182       0.5024
 </pre></div>
 </div>
 <div class="admonition note">
@@ -610,7 +610,7 @@ This includes support for the VNNI 8 bit dot product instruction (CascadeLake or
 <div class="section" id="deploy-a-quantized-tflite-model">
 <h2>Deploy a quantized TFLite Model<a class="headerlink" href="#deploy-a-quantized-tflite-model" title="Permalink to this headline">¶</a></h2>
 <p>TODO</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  19.794 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  17.022 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-prequantized-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/fb8217c13f4351224c6cf3aacf1a87fc/deploy_prequantized.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_prequantized.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_prequantized_tflite.html b/docs/how_to/deploy_models/deploy_prequantized_tflite.html
index d3896c7dd6..86eeda68f7 100644
--- a/docs/how_to/deploy_models/deploy_prequantized_tflite.html
+++ b/docs/how_to/deploy_models/deploy_prequantized_tflite.html
@@ -569,7 +569,7 @@ TFLite Top-5 labels: [387 102 386 341 349]
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  120.8931     120.8245     127.0069     119.9038      0.7323
+  119.7762     119.8061     129.1444     118.2173      1.0593
 </pre></div>
 </div>
 <div class="admonition note">
@@ -597,7 +597,7 @@ network for ARM CPU</span></a>.</p></li>
 </ul>
 </div></blockquote>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  3.025 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  0.227 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-prequantized-tflite-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/56691c7a27d45da61d112276334640d3/deploy_prequantized_tflite.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_prequantized_tflite.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_quantized.html b/docs/how_to/deploy_models/deploy_quantized.html
index 52bbc2011d..8465249c0d 100644
--- a/docs/how_to/deploy_models/deploy_quantized.html
+++ b/docs/how_to/deploy_models/deploy_quantized.html
@@ -507,7 +507,7 @@ for calibration. But the accuracy might be impacted.</p>
   DeprecationWarning,
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  23.944 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  28.095 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-quantized-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/7810ecf51bfc05f7d5e8a400ac3e815d/deploy_quantized.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_quantized.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_ssd_gluoncv.html b/docs/how_to/deploy_models/deploy_ssd_gluoncv.html
index 69298dff88..5b541c300d 100644
--- a/docs/how_to/deploy_models/deploy_ssd_gluoncv.html
+++ b/docs/how_to/deploy_models/deploy_ssd_gluoncv.html
@@ -449,24 +449,22 @@ to your device.</p>
 Downloading /workspace/.mxnet/models/ssd_512_resnet50_v1_voc-9c8b225a.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/ssd_512_resnet50_v1_voc-9c8b225a.zip...
 
   0%|          | 0/132723 [00:00&lt;?, ?KB/s]
-  4%|4         | 5329/132723 [00:00&lt;00:02, 53286.75KB/s]
- 10%|9         | 12826/132723 [00:00&lt;00:01, 66037.15KB/s]
- 15%|#5        | 20113/132723 [00:00&lt;00:01, 69154.19KB/s]
- 21%|##1       | 28057/132723 [00:00&lt;00:01, 73213.35KB/s]
- 27%|##6       | 35698/132723 [00:00&lt;00:01, 74363.37KB/s]
- 33%|###2      | 43453/132723 [00:00&lt;00:01, 75443.86KB/s]
- 39%|###8      | 51222/132723 [00:00&lt;00:01, 76176.73KB/s]
- 44%|####4     | 58912/132723 [00:00&lt;00:00, 76405.54KB/s]
- 50%|#####     | 66651/132723 [00:00&lt;00:00, 76711.53KB/s]
- 56%|#####5    | 74323/132723 [00:01&lt;00:00, 76576.71KB/s]
- 62%|######1   | 82086/132723 [00:01&lt;00:00, 76896.55KB/s]
- 68%|######7   | 89776/132723 [00:01&lt;00:00, 76762.06KB/s]
- 73%|#######3  | 97483/132723 [00:01&lt;00:00, 76840.30KB/s]
- 79%|#######9  | 105223/132723 [00:01&lt;00:00, 77007.19KB/s]
- 85%|########5 | 112924/132723 [00:01&lt;00:00, 76825.72KB/s]
- 91%|######### | 120713/132723 [00:01&lt;00:00, 77142.62KB/s]
- 97%|#########6| 128428/132723 [00:01&lt;00:00, 77043.95KB/s]
-100%|##########| 132723/132723 [00:01&lt;00:00, 75548.17KB/s]
+  5%|5         | 6645/132723 [00:00&lt;00:01, 66438.50KB/s]
+ 12%|#1        | 15327/132723 [00:00&lt;00:01, 78414.71KB/s]
+ 18%|#8        | 24055/132723 [00:00&lt;00:01, 82459.28KB/s]
+ 25%|##4       | 32752/132723 [00:00&lt;00:01, 84238.26KB/s]
+ 31%|###1      | 41367/132723 [00:00&lt;00:01, 84925.29KB/s]
+ 38%|###7      | 50050/132723 [00:00&lt;00:00, 85571.07KB/s]
+ 44%|####4     | 58731/132723 [00:00&lt;00:00, 85973.23KB/s]
+ 51%|#####     | 67442/132723 [00:00&lt;00:00, 86332.44KB/s]
+ 57%|#####7    | 76110/132723 [00:00&lt;00:00, 86438.98KB/s]
+ 64%|######3   | 84830/132723 [00:01&lt;00:00, 86672.21KB/s]
+ 70%|#######   | 93565/132723 [00:01&lt;00:00, 86878.59KB/s]
+ 77%|#######7  | 102312/132723 [00:01&lt;00:00, 87055.80KB/s]
+ 84%|########3 | 111047/132723 [00:01&lt;00:00, 87141.67KB/s]
+ 90%|######### | 119762/132723 [00:01&lt;00:00, 87043.86KB/s]
+ 97%|#########6| 128467/132723 [00:01&lt;00:00, 86989.52KB/s]
+100%|##########| 132723/132723 [00:01&lt;00:00, 85544.87KB/s]
 </pre></div>
 </div>
 <p>Create TVM runtime and do inference
@@ -505,7 +503,7 @@ Downloading /workspace/.mxnet/models/ssd_512_resnet50_v1_voc-9c8b225a.zip from h
 <span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
 </pre></div>
 </div>
-<img src="../../_images/sphx_glr_deploy_ssd_gluoncv_001.png" srcset="../../_images/sphx_glr_deploy_ssd_gluoncv_001.png" alt="deploy ssd gluoncv" class = "sphx-glr-single-img"/><p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  54.556 seconds)</p>
+<img src="../../_images/sphx_glr_deploy_ssd_gluoncv_001.png" srcset="../../_images/sphx_glr_deploy_ssd_gluoncv_001.png" alt="deploy ssd gluoncv" class = "sphx-glr-single-img"/><p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes  45.483 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-ssd-gluoncv-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/cccb17d28e5e8b2e94ea8cd5ec59f6ed/deploy_ssd_gluoncv.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_ssd_gluoncv.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/sg_execution_times.html b/docs/how_to/deploy_models/sg_execution_times.html
index c73473843d..daa24d25a8 100644
--- a/docs/how_to/deploy_models/sg_execution_times.html
+++ b/docs/how_to/deploy_models/sg_execution_times.html
@@ -327,7 +327,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-deploy-models-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>12:19.331</strong> total execution time for <strong>how_to_deploy_models</strong> files:</p>
+<p><strong>11:58.541</strong> total execution time for <strong>how_to_deploy_models</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 86%" />
@@ -336,39 +336,39 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_object_detection_pytorch.html#sphx-glr-how-to-deploy-models-deploy-object-detection-pytorch-py"><span class="std std-ref">Compile PyTorch Object Detection Models</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_object_detection_pytorch.py</span></code>)</p></td>
-<td><p>03:12.305</p></td>
+<td><p>03:04.179</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_ssd_gluoncv.html#sphx-glr-how-to-deploy-models-deploy-ssd-gluoncv-py"><span class="std std-ref">Deploy Single Shot Multibox Detector(SSD) model</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_ssd_gluoncv.py</span></code>)</p></td>
-<td><p>02:54.556</p></td>
+<td><p>02:45.483</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_prequantized_tflite.html#sphx-glr-how-to-deploy-models-deploy-prequantized-tflite-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM - Part 3 (TFLite)</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized_tflite.py</span></code>)</p></td>
-<td><p>02:03.025</p></td>
+<td><p>02:00.227</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_quantized.html#sphx-glr-how-to-deploy-models-deploy-quantized-py"><span class="std std-ref">Deploy a Quantized Model on Cuda</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_quantized.py</span></code>)</p></td>
-<td><p>01:23.944</p></td>
+<td><p>01:28.095</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_prequantized.html#sphx-glr-how-to-deploy-models-deploy-prequantized-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized.py</span></code>)</p></td>
-<td><p>01:19.794</p></td>
+<td><p>01:17.022</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_android.html#sphx-glr-how-to-deploy-models-deploy-model-on-android-py"><span class="std std-ref">Deploy the Pretrained Model on Android</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_android.py</span></code>)</p></td>
-<td><p>00:35.579</p></td>
+<td><p>00:34.460</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_model_on_nano.html#sphx-glr-how-to-deploy-models-deploy-model-on-nano-py"><span class="std std-ref">Deploy the Pretrained Model on Jetson Nano</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_nano.py</span></code>)</p></td>
-<td><p>00:25.277</p></td>
+<td><p>00:24.746</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_rasp.html#sphx-glr-how-to-deploy-models-deploy-model-on-rasp-py"><span class="std std-ref">Deploy the Pretrained Model on Raspberry Pi</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_rasp.py</span></code>)</p></td>
-<td><p>00:24.844</p></td>
+<td><p>00:24.323</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="deploy_sparse.html#sphx-glr-how-to-deploy-models-deploy-sparse-py"><span class="std std-ref">Deploy a Hugging Face Pruned Model on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_sparse.py</span></code>)</p></td>
-<td><p>00:00.007</p></td>
+<td><p>00:00.006</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/extend_tvm/bring_your_own_datatypes.html b/docs/how_to/extend_tvm/bring_your_own_datatypes.html
index 96a797dd00..d7b06273a4 100644
--- a/docs/how_to/extend_tvm/bring_your_own_datatypes.html
+++ b/docs/how_to/extend_tvm/bring_your_own_datatypes.html
@@ -608,7 +608,7 @@ In this alpha state of the Bring Your Own Datatypes framework, we have not imple
 <span class="n">module</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">params</span></a> <span class="o">=</span> <span class="n">get_mobilenet</span><span class="p">()</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zipfdcaa7e9-2907-4fc5-bfcc-97b116dae2b5 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zipe3eb3a30-6b5e-486b-9aa9-716a2133a859 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
 </pre></div>
 </div>
 <p>It’s easy to execute MobileNet with native TVM:</p>
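
As a rough illustration of that step, here is a minimal sketch of compiling and running the model with the graph executor. The names `module` and `params` are assumed to be the Relay module and weights returned by the tutorial's `get_mobilenet()` call above; the input name "data" and shape follow the usual MXNet MobileNet convention.

    import numpy as np
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor

    # Compile for a generic CPU target; `module` and `params` are assumed
    # to come from the get_mobilenet() helper shown above.
    lib = relay.build(module, target="llvm", params=params)
    dev = tvm.cpu(0)
    rt = graph_executor.GraphModule(lib["default"](dev))
    rt.set_input("data", np.random.uniform(size=(1, 3, 224, 224)).astype("float32"))
    rt.run()
    out = rt.get_output(0).numpy()
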
diff --git a/docs/how_to/extend_tvm/sg_execution_times.html b/docs/how_to/extend_tvm/sg_execution_times.html
index b868d91f5f..4a2f65c0af 100644
--- a/docs/how_to/extend_tvm/sg_execution_times.html
+++ b/docs/how_to/extend_tvm/sg_execution_times.html
@@ -327,7 +327,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-extend-tvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:45.430</strong> total execution time for <strong>how_to_extend_tvm</strong> files:</p>
+<p><strong>00:43.893</strong> total execution time for <strong>how_to_extend_tvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -336,19 +336,19 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="bring_your_own_datatypes.html#sphx-glr-how-to-extend-tvm-bring-your-own-datatypes-py"><span class="std std-ref">Bring Your Own Datatypes to TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">bring_your_own_datatypes.py</span></code>)</p></td>
-<td><p>00:42.016</p></td>
+<td><p>00:40.614</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="use_pass_instrument.html#sphx-glr-how-to-extend-tvm-use-pass-instrument-py"><span class="std std-ref">How to Use TVM Pass Instrument</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_instrument.py</span></code>)</p></td>
-<td><p>00:02.395</p></td>
+<td><p>00:02.299</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="use_pass_infra.html#sphx-glr-how-to-extend-tvm-use-pass-infra-py"><span class="std std-ref">How to Use TVM Pass Infra</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_infra.py</span></code>)</p></td>
-<td><p>00:01.011</p></td>
+<td><p>00:00.973</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="low_level_custom_pass.html#sphx-glr-how-to-extend-tvm-low-level-custom-pass-py"><span class="std std-ref">Writing a Customized Pass</span></a> (<code class="docutils literal notranslate"><span class="pre">low_level_custom_pass.py</span></code>)</p></td>
-<td><p>00:00.008</p></td>
+<td><p>00:00.007</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/extend_tvm/use_pass_instrument.html b/docs/how_to/extend_tvm/use_pass_instrument.html
index c1a818a708..c48645db1d 100644
--- a/docs/how_to/extend_tvm/use_pass_instrument.html
+++ b/docs/how_to/extend_tvm/use_pass_instrument.html
@@ -512,10 +512,10 @@ profile the execution time of each pass.</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Printing results of timing profile...
-InferType: 6849us [6849us] (46.60%; 46.60%)
-FoldScaleAxis: 7847us [5us] (53.40%; 53.40%)
-        FoldConstant: 7841us [1594us] (53.36%; 99.93%)
-                InferType: 6247us [6247us] (42.51%; 79.67%)
+InferType: 6744us [6744us] (46.47%; 46.47%)
+FoldScaleAxis: 7768us [5us] (53.53%; 53.53%)
+        FoldConstant: 7763us [1585us] (53.49%; 99.93%)
+                InferType: 6178us [6178us] (42.57%; 79.58%)
 </pre></div>
 </div>
 </div>
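
The profile above comes from pass instrumentation; a minimal sketch of how such a report can be collected with `PassTimingInstrument` (`mod` is an assumed Relay IRModule):

    import tvm
    from tvm import relay
    from tvm.ir.instrument import PassTimingInstrument

    timing_inst = PassTimingInstrument()
    with tvm.transform.PassContext(instruments=[timing_inst]):
        mod = relay.transform.InferType()(mod)
        mod = relay.transform.FoldScaleAxis()(mod)
        # render() must be called while the pass context is still active
        print(timing_inst.render())
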
@@ -537,10 +537,10 @@ Refer to following sections and <a class="reference internal" href="../../refere
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Printing results of timing profile...
-InferType: 6321us [6321us] (44.57%; 44.57%)
-FoldScaleAxis: 7862us [5us] (55.43%; 55.43%)
-        FoldConstant: 7857us [1612us] (55.40%; 99.94%)
-                InferType: 6246us [6246us] (44.04%; 79.49%)
+InferType: 6206us [6206us] (44.64%; 44.64%)
+FoldScaleAxis: 7696us [4us] (55.36%; 55.36%)
+        FoldConstant: 7692us [1600us] (55.33%; 99.94%)
+                InferType: 6092us [6092us] (43.82%; 79.20%)
 </pre></div>
 </div>
 <p>Register an empty list to clear existing instruments.</p>
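
A one-line sketch of that clearing step, applied to the current pass context:

    import tvm
    # Overriding with an empty list removes all registered instruments.
    tvm.transform.PassContext.current().override_instruments([])
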
diff --git a/docs/how_to/optimize_operators/opt_conv_cuda.html b/docs/how_to/optimize_operators/opt_conv_cuda.html
index 5a67cbf9a7..82389f4c80 100644
--- a/docs/how_to/optimize_operators/opt_conv_cuda.html
+++ b/docs/how_to/optimize_operators/opt_conv_cuda.html
@@ -564,7 +564,7 @@ latency of convolution.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Convolution: </span><span class="si">%f</span><span class="s2"> ms&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">b</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span> <span class="o">*</span> <span cl [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Convolution: 54.347423 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Convolution: 54.205726 ms
 </pre></div>
 </div>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-optimize-operators-opt-conv-cuda-py">
diff --git a/docs/how_to/optimize_operators/opt_conv_tensorcore.html b/docs/how_to/optimize_operators/opt_conv_tensorcore.html
index 44df8a9c8e..4def81df68 100644
--- a/docs/how_to/optimize_operators/opt_conv_tensorcore.html
+++ b/docs/how_to/optimize_operators/opt_conv_tensorcore.html
@@ -906,7 +906,7 @@ be able to run on our build server</p>
     <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;conv2d with tensor core: </span><span class="si">%f</span><span class="s2"> ms&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span> <span class="o">* [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>conv2d with tensor core: 6.862215 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>conv2d with tensor core: 6.835648 ms
 </pre></div>
 </div>
 </div>
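
Since the hunk above notes the kernel may not be able to run on every build server, a small guard sketch that checks tensor-core support before building and timing (using `tvm.contrib.nvcc`):

    import tvm
    from tvm.contrib import nvcc

    dev = tvm.cuda(0)
    if nvcc.have_tensorcore(dev.compute_version):
        # safe to build and benchmark the tensor-core conv2d here
        print("tensor cores available on compute capability", dev.compute_version)
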
diff --git a/docs/how_to/optimize_operators/opt_gemm.html b/docs/how_to/optimize_operators/opt_gemm.html
index 858ed920b9..76f479352a 100644
--- a/docs/how_to/optimize_operators/opt_gemm.html
+++ b/docs/how_to/optimize_operators/opt_gemm.html
@@ -461,8 +461,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Baseline: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.018952
-Baseline: 3.437101
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.017697
+Baseline: 3.190508
 </pre></div>
 </div>
 <p>In TVM, we can always inspect the lower-level IR to debug or optimize our schedule.
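
A minimal sketch of that inspection (assuming a te schedule `s` over tensors `A`, `B`, `C`):

    import tvm
    # simple_mode prints a compact statement without extra lowering detail
    print(tvm.lower(s, [A, B, C], simple_mode=True))
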
@@ -522,7 +522,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt1: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt1: 0.312217
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt1: 0.296542
 </pre></div>
 </div>
 <p>Here is the generated IR after blocking.</p>
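
For reference, a self-contained sketch of the 32x32 blocking this hunk describes; each 32x32 tile of C needs 32 * 32 * 4 B = 4 KB, which fits comfortably in a 32 KB L1 data cache:

    import tvm
    from tvm import te

    M = N = K = 1024
    bn = 32
    A = te.placeholder((M, K), name="A")
    B = te.placeholder((K, N), name="B")
    k = te.reduce_axis((0, K), name="k")
    C = te.compute((M, N), lambda x, y: te.sum(A[x, k] * B[k, y], axis=k), name="C")

    s = te.create_schedule(C.op)
    # Blocking: compute C in bn x bn tiles, splitting the reduction as well
    xo, yo, xi, yi = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
    ko, ki = s[C].split(C.op.reduce_axis[0], factor=4)
    s[C].reorder(xo, yo, ko, ki, xi, yi)
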
@@ -589,7 +589,7 @@ vastly.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt2: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt2: 0.344336
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt2: 0.331487
 </pre></div>
 </div>
 <p>Here is the generated IR after vectorization.</p>
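
Continuing the GEMM sketch above (same `s` and `yi`), vectorization is a single scheduling call on the unit-stride inner loop:

    # SIMD-vectorize the innermost axis of the blocked loop nest
    s[C].vectorize(yi)
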
@@ -650,7 +650,7 @@ the access pattern for the A matrix is more cache-friendly.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt3: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt3: 0.117854
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt3: 0.113773
 </pre></div>
 </div>
 <p>Here is the generated IR after loop permutation.</p>
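
Again continuing that sketch, the permutation step just reorders the nest so the inner row index `xi` sits above `ki`, which makes A's accesses cache friendly:

    # Loop permutation: hoist xi above ki
    s[C].reorder(xo, yo, ko, xi, ki, yi)
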
@@ -733,7 +733,7 @@ flattening.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt4: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt4: 0.109428
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt4: 0.109522
 </pre></div>
 </div>
 <p>Here is the generated IR after array packing.</p>
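
A sketch of the packing transform itself, with dimensions `M`, `N`, `K`, block size `bn`, and placeholders `A`, `B` as in the blocking sketch above:

    from tvm import te

    k = te.reduce_axis((0, K), name="k")
    # Repack B so reads walk a contiguous [K, bn] panel per output block
    packedB = te.compute(
        (N // bn, K, bn),
        lambda bigN, kk, littleN: B[kk, bigN * bn + littleN],
        name="packedB",
    )
    C = te.compute(
        (M, N),
        lambda x, y: te.sum(A[x, k] * packedB[y // bn, k, y % bn], axis=k),
        name="C",
    )
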
@@ -819,7 +819,7 @@ write to C when all the block results are ready.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt5: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt5: 0.111098
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt5: 0.111170
 </pre></div>
 </div>
 <p>Here is the generated IR after blocking.</p>
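
A short sketch of the write-cache step (continuing with `C` and `bn` from the sketches above):

    s = te.create_schedule(C.op)
    # Accumulate each 32x32 block in a local buffer; write C once per block
    CC = s.cache_write(C, "global")
    xo, yo, xi, yi = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
    s[CC].compute_at(s[C], yo)
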
@@ -909,7 +909,7 @@ write to C when all the block results are ready.</p>
 <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Opt6: </span><span class="si">%f</span><span class="s2">&quot;</span> <span class="o">%</span> <span class="n">opt6_time</span><span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt6: 0.147946
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt6: 0.146536
 </pre></div>
 </div>
 <p>Here is the generated IR after parallelization.</p>
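
And the parallelization step is one more scheduling call on the outer block axis:

    # Distribute the outer row blocks across CPU threads
    s[C].parallel(xo)
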
diff --git a/docs/how_to/optimize_operators/sg_execution_times.html b/docs/how_to/optimize_operators/sg_execution_times.html
index 89ba9fbc04..f6e3f28cbc 100644
--- a/docs/how_to/optimize_operators/sg_execution_times.html
+++ b/docs/how_to/optimize_operators/sg_execution_times.html
@@ -327,7 +327,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-optimize-operators-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:34.980</strong> total execution time for <strong>how_to_optimize_operators</strong> files:</p>
+<p><strong>00:33.725</strong> total execution time for <strong>how_to_optimize_operators</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -336,15 +336,15 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="opt_gemm.html#sphx-glr-how-to-optimize-operators-opt-gemm-py"><span class="std std-ref">How to optimize GEMM on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_gemm.py</span></code>)</p></td>
-<td><p>00:32.720</p></td>
+<td><p>00:31.358</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="opt_conv_tensorcore.html#sphx-glr-how-to-optimize-operators-opt-conv-tensorcore-py"><span class="std std-ref">How to optimize convolution using TensorCores</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_tensorcore.py</span></code>)</p></td>
-<td><p>00:01.227</p></td>
+<td><p>00:01.284</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="opt_conv_cuda.html#sphx-glr-how-to-optimize-operators-opt-conv-cuda-py"><span class="std std-ref">How to optimize convolution on GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_cuda.py</span></code>)</p></td>
-<td><p>00:01.032</p></td>
+<td><p>00:01.082</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/tune_with_autoscheduler/sg_execution_times.html b/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
index e5a6fb5cbb..3ad408b3a8 100644
--- a/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
+++ b/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
@@ -327,7 +327,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-tune-with-autoscheduler-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>06:42.821</strong> total execution time for <strong>how_to_tune_with_autoscheduler</strong> files:</p>
+<p><strong>06:43.662</strong> total execution time for <strong>how_to_tune_with_autoscheduler</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 85%" />
@@ -336,27 +336,27 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_conv2d_layer_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-conv2d-layer-cuda-py"><span class="std std-ref">Auto-scheduling a Convolution Layer for GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_layer_cuda.py</span></code>)</p></td>
-<td><p>03:28.877</p></td>
+<td><p>03:25.791</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_network_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-x86-py"><span class="std std-ref">Auto-scheduling a Neural Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_x86.py</span></code>)</p></td>
-<td><p>01:31.029</p></td>
+<td><p>01:29.019</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_network_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-cuda-py"><span class="std std-ref">Auto-scheduling a Neural Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_cuda.py</span></code>)</p></td>
-<td><p>00:59.813</p></td>
+<td><p>00:58.633</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_sparse_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-sparse-x86-py"><span class="std std-ref">Auto-scheduling Sparse Matrix Multiplication on CPU with Custom Sketch Rule</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_sparse_x86.py</span></code>)</p></td>
-<td><p>00:21.762</p></td>
+<td><p>00:29.554</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_network_mali.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-mali-py"><span class="std std-ref">Auto-scheduling a Neural Network for mali GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_mali.py</span></code>)</p></td>
-<td><p>00:10.804</p></td>
+<td><p>00:10.527</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_network_arm.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-arm-py"><span class="std std-ref">Auto-scheduling a Neural Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_arm.py</span></code>)</p></td>
-<td><p>00:10.536</p></td>
+<td><p>00:10.138</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html b/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
index d869d98cde..93516646de 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
@@ -491,571 +491,266 @@ cooperative fetching, unrolling and operator fusion.</p>
              compute: Buffer(compute_2: Pointer(float32), float32, [25088], [])}
   buffer_map = {data_1: data, kernel_1: kernel, bias_1: bias, compute_1: compute}
   preflattened_buffer_map = {data_1: data_3: Buffer(data_2, float32, [1, 512, 7, 7], []), kernel_1: kernel_3: Buffer(kernel_2, float32, [512, 512, 3, 3], []), bias_1: bias_3: Buffer(bias_2, float32, [1, 512, 1, 1], []), compute_1: compute_3: Buffer(compute_2, float32, [1, 512, 7, 7], [])} {
-  attr [IterVar(blockIdx.x: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;blockIdx.x&quot;)] &quot;thread_extent&quot; = 8;
-  allocate(conv2d_nchw: Pointer(local float32), float32, [14]), storage_scope = local;
-  allocate(pad_temp.shared: Pointer(shared float32), float32, [324]), storage_scope = shared;
-  allocate(kernel.shared: Pointer(shared float32), float32, [2304]), storage_scope = shared;
-  attr [IterVar(threadIdx.x: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 224 {
-    conv2d_nchw_1: Buffer(conv2d_nchw, float32, [14], [], scope=&quot;local&quot;, align=32)[0] = 0f32
-    conv2d_nchw_1[1] = 0f32
+  attr [IterVar(blockIdx.x: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;blockIdx.x&quot;)] &quot;thread_extent&quot; = 32;
+  allocate(conv2d_nchw: Pointer(local float32), float32, [16]), storage_scope = local;
+  allocate(pad_temp.shared: Pointer(shared float32), float32, [252]), storage_scope = shared;
+  allocate(kernel.shared: Pointer(shared float32), float32, [192]), storage_scope = shared;
+  attr [IterVar(threadIdx.x: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 49 {
+    conv2d_nchw_1: Buffer(conv2d_nchw, float32, [4], [], scope=&quot;local&quot;, align=8)[0] = 0f32
     conv2d_nchw_1[2] = 0f32
-    conv2d_nchw_1[3] = 0f32
     conv2d_nchw_1[4] = 0f32
-    conv2d_nchw_1[5] = 0f32
     conv2d_nchw_1[6] = 0f32
-    conv2d_nchw_1[7] = 0f32
     conv2d_nchw_1[8] = 0f32
-    conv2d_nchw_1[9] = 0f32
     conv2d_nchw_1[10] = 0f32
-    conv2d_nchw_1[11] = 0f32
     conv2d_nchw_1[12] = 0f32
+    conv2d_nchw_1[14] = 0f32
+    conv2d_nchw_1[1] = 0f32
+    conv2d_nchw_1[3] = 0f32
+    conv2d_nchw_1[5] = 0f32
+    conv2d_nchw_1[7] = 0f32
+    conv2d_nchw_1[9] = 0f32
+    conv2d_nchw_1[11] = 0f32
     conv2d_nchw_1[13] = 0f32
+    conv2d_nchw_1[15] = 0f32
     for (rc.outer.outer: int32, 0, 128) {
-      attr [IterVar(threadIdx.x_1: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 224 {
-        if @tir.likely((threadIdx.x_1 &lt; 162), dtype=bool) {
-          pad_temp.shared_1: Buffer(pad_temp.shared, float32, [324], [], scope=&quot;shared&quot;)[(threadIdx.x_1*2)] = @tir.if_then_else(((((9 &lt;= floormod((threadIdx.x_1*2), 81)) &amp;&amp; (floormod((threadIdx.x_1*2), 81) &lt; 72)) &amp;&amp; (1 &lt;= floormod((threadIdx.x_1*2), 9))) &amp;&amp; (floormod((threadIdx.x_1*2), 9) &lt; 8)), data[(((((rc.outer.outer*196) + (floordiv((threadIdx.x_1*2), 81)*49)) + (floordiv(floormod((threadIdx.x_1*2), 81), 9)*7)) + floormod((threadIdx.x_1*2 [...]
-        }
-        if @tir.likely((threadIdx.x_1 &lt; 162), dtype=bool) {
-          pad_temp.shared_1[((threadIdx.x_1*2) + 1)] = @tir.if_then_else(((((9 &lt;= floormod(((threadIdx.x_1*2) + 1), 81)) &amp;&amp; (floormod(((threadIdx.x_1*2) + 1), 81) &lt; 72)) &amp;&amp; (1 &lt;= floormod(((threadIdx.x_1*2) + 1), 9))) &amp;&amp; (floormod(((threadIdx.x_1*2) + 1), 9) &lt; 8)), data[(((((rc.outer.outer*196) + (floordiv(((threadIdx.x_1*2) + 1), 81)*49)) + (floordiv(floormod(((threadIdx.x_1*2) + 1), 81), 9)*7)) + floormod(((threadIdx.x_1*2) + 1), 9)) - 8)], 0f32, dty [...]
+      for (ry.outer.outer: int32, 0, 3) {
+        let cse_var_2: int32 = (rc.outer.outer*36)
+        let cse_var_1: int32 = (ry.outer.outer*3)
+         {
+          attr [IterVar(threadIdx.x_1: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 49 {
+            if @tir.likely((threadIdx.x_1 &lt; 42), dtype=bool) {
+              pad_temp.shared_1: Buffer(pad_temp.shared, float32, [252], [], scope=&quot;shared&quot;)[(threadIdx.x_1*6)] = @tir.if_then_else(((((1 &lt;= (floordiv(floormod((threadIdx.x_1*2), 21), 3) + ry.outer.outer)) &amp;&amp; ((floordiv(floormod((threadIdx.x_1*2), 21), 3) + ry.outer.outer) &lt; 8)) &amp;&amp; (1 &lt;= floormod((threadIdx.x_1*6), 9))) &amp;&amp; (floormod((threadIdx.x_1*6), 9) &lt; 8)), data[(((((rc.outer.outer*196) + (floordiv((threadIdx.x_1*2), 3)*7)) + (ry.outer.ou [...]
+            }
+            if @tir.likely((threadIdx.x_1 &lt; 42), dtype=bool) {
+              pad_temp.shared_1[((threadIdx.x_1*6) + 1)] = @tir.if_then_else(((((1 &lt;= (floordiv(floormod((threadIdx.x_1*2), 21), 3) + ry.outer.outer)) &amp;&amp; ((floordiv(floormod((threadIdx.x_1*2), 21), 3) + ry.outer.outer) &lt; 8)) &amp;&amp; (1 &lt;= floormod(((threadIdx.x_1*6) + 1), 9))) &amp;&amp; (floormod(((threadIdx.x_1*6) + 1), 9) &lt; 8)), data[(((((rc.outer.outer*196) + (floordiv((threadIdx.x_1*2), 3)*7)) + (ry.outer.outer*7)) + floormod(((threadIdx.x_1*6) + 1), 9)) - 8)] [...]
+            }
+            if @tir.likely((threadIdx.x_1 &lt; 42), dtype=bool) {
+              pad_temp.shared_1[((threadIdx.x_1*6) + 2)] = @tir.if_then_else(((((1 &lt;= (floordiv(floormod((threadIdx.x_1*2), 21), 3) + ry.outer.outer)) &amp;&amp; ((floordiv(floormod((threadIdx.x_1*2), 21), 3) + ry.outer.outer) &lt; 8)) &amp;&amp; (1 &lt;= floormod(((threadIdx.x_1*6) + 2), 9))) &amp;&amp; (floormod(((threadIdx.x_1*6) + 2), 9) &lt; 8)), data[(((((rc.outer.outer*196) + (floordiv((threadIdx.x_1*2), 3)*7)) + (ry.outer.outer*7)) + floormod(((threadIdx.x_1*6) + 2), 9)) - 8)] [...]
+            }
+            if @tir.likely((threadIdx.x_1 &lt; 42), dtype=bool) {
+              pad_temp.shared_1[((threadIdx.x_1*6) + 3)] = @tir.if_then_else(((((1 &lt;= (floordiv(floormod(((threadIdx.x_1*2) + 1), 21), 3) + ry.outer.outer)) &amp;&amp; ((floordiv(floormod(((threadIdx.x_1*2) + 1), 21), 3) + ry.outer.outer) &lt; 8)) &amp;&amp; (1 &lt;= floormod(((threadIdx.x_1*6) + 3), 9))) &amp;&amp; (floormod(((threadIdx.x_1*6) + 3), 9) &lt; 8)), data[(((((rc.outer.outer*196) + (floordiv(((threadIdx.x_1*2) + 1), 3)*7)) + (ry.outer.outer*7)) + floormod(((threadIdx.x_1* [...]
+            }
+            if @tir.likely((threadIdx.x_1 &lt; 42), dtype=bool) {
+              pad_temp.shared_1[((threadIdx.x_1*6) + 4)] = @tir.if_then_else(((((1 &lt;= (floordiv(floormod(((threadIdx.x_1*2) + 1), 21), 3) + ry.outer.outer)) &amp;&amp; ((floordiv(floormod(((threadIdx.x_1*2) + 1), 21), 3) + ry.outer.outer) &lt; 8)) &amp;&amp; (1 &lt;= floormod(((threadIdx.x_1*6) + 4), 9))) &amp;&amp; (floormod(((threadIdx.x_1*6) + 4), 9) &lt; 8)), data[(((((rc.outer.outer*196) + (floordiv(((threadIdx.x_1*2) + 1), 3)*7)) + (ry.outer.outer*7)) + floormod(((threadIdx.x_1* [...]
+            }
+            if @tir.likely((threadIdx.x_1 &lt; 42), dtype=bool) {
+              pad_temp.shared_1[((threadIdx.x_1*6) + 5)] = @tir.if_then_else(((((1 &lt;= (floordiv(floormod(((threadIdx.x_1*2) + 1), 21), 3) + ry.outer.outer)) &amp;&amp; ((floordiv(floormod(((threadIdx.x_1*2) + 1), 21), 3) + ry.outer.outer) &lt; 8)) &amp;&amp; (1 &lt;= floormod(((threadIdx.x_1*6) + 5), 9))) &amp;&amp; (floormod(((threadIdx.x_1*6) + 5), 9) &lt; 8)), data[(((((rc.outer.outer*196) + (floordiv(((threadIdx.x_1*2) + 1), 3)*7)) + (ry.outer.outer*7)) + floormod(((threadIdx.x_1* [...]
+            }
+          }
+          attr [IterVar(threadIdx.x_2: int32, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 49;
+          kernel.shared_1: Buffer(kernel.shared, float32, [192], [], scope=&quot;shared&quot;)[threadIdx.x_2] = kernel[((((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 12)*4608)) + cse_var_2) + (floordiv(floormod(threadIdx.x_2, 12), 3)*9)) + cse_var_1) + floormod(threadIdx.x_2, 3))]
+          attr [IterVar(threadIdx.x_2, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 49;
+          kernel.shared_1[(threadIdx.x_2 + 49)] = kernel[((((((blockIdx.x*73728) + (floordiv((threadIdx.x_2 + 49), 12)*4608)) + cse_var_2) + (floordiv(floormod((threadIdx.x_2 + 1), 12), 3)*9)) + cse_var_1) + floormod((threadIdx.x_2 + 1), 3))]
+          attr [IterVar(threadIdx.x_2, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 49;
+          kernel.shared_1[(threadIdx.x_2 + 98)] = kernel[((((((blockIdx.x*73728) + (floordiv((threadIdx.x_2 + 98), 12)*4608)) + cse_var_2) + (floordiv(floormod((threadIdx.x_2 + 2), 12), 3)*9)) + cse_var_1) + floormod((threadIdx.x_2 + 2), 3))]
+          attr [IterVar(threadIdx.x_2, (nullptr), &quot;ThreadIndex&quot;, &quot;threadIdx.x&quot;)] &quot;thread_extent&quot; = 49;
+          if @tir.likely((threadIdx.x_2 &lt; 45), dtype=bool) {
+            kernel.shared_1[(threadIdx.x_2 + 147)] = kernel[((((((blockIdx.x*73728) + (floordiv((threadIdx.x_2 + 147), 12)*4608)) + cse_var_2) + (floormod((floordiv(threadIdx.x_2, 3) + 1), 4)*9)) + cse_var_1) + floormod(threadIdx.x_2, 3))]
+          }
+          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[0]))
+          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[24]))
+          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[48]))
+          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[72]))
+          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[96]))
+          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[120]))
+          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[144]))
+          conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[168]))
+          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[12]))
+          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[36]))
+          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[60]))
+          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[84]))
+          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[108]))
+          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[132]))
+          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[156]))
+          conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7))]*kernel.shared_1[180]))
+          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[3]))
+          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[27]))
+          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[51]))
+          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[75]))
+          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[99]))
+          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[123]))
+          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[147]))
+          conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[171]))
+          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[15]))
+          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[39]))
+          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[63]))
+          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[87]))
+          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[111]))
+          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[135]))
+          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[159]))
+          conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 63)]*kernel.shared_1[183]))
+          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[6]))
+          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[30]))
+          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[54]))
+          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[78]))
+          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[102]))
+          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[126]))
+          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[150]))
+          conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[174]))
+          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[18]))
+          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[42]))
+          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[66]))
+          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[90]))
+          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[114]))
+          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[138]))
+          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[162]))
+          conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 126)]*kernel.shared_1[186]))
+          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[9]))
+          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[33]))
+          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[57]))
+          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[81]))
+          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[105]))
+          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[129]))
+          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[153]))
+          conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[177]))
+          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[21]))
+          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[45]))
+          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[69]))
+          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[93]))
+          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[117]))
+          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[141]))
+          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[165]))
+          conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 189)]*kernel.shared_1[189]))
+          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[1]))
+          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[25]))
+          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[49]))
+          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[73]))
+          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[97]))
+          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[121]))
+          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[145]))
+          conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[169]))
+          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[13]))
+          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[37]))
+          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[61]))
+          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[85]))
+          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[109]))
+          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[133]))
+          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[157]))
+          conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 1)]*kernel.shared_1[181]))
+          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[4]))
+          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[28]))
+          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[52]))
+          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[76]))
+          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[100]))
+          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[124]))
+          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[148]))
+          conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[172]))
+          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[16]))
+          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[40]))
+          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[64]))
+          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[88]))
+          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[112]))
+          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[136]))
+          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[160]))
+          conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 64)]*kernel.shared_1[184]))
+          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[7]))
+          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[31]))
+          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[55]))
+          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[79]))
+          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[103]))
+          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[127]))
+          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[151]))
+          conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[175]))
+          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[19]))
+          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[43]))
+          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[67]))
+          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[91]))
+          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[115]))
+          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[139]))
+          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[163]))
+          conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 127)]*kernel.shared_1[187]))
+          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[10]))
+          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[34]))
+          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[58]))
+          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[82]))
+          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[106]))
+          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[130]))
+          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[154]))
+          conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[178]))
+          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[22]))
+          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[46]))
+          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[70]))
+          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[94]))
+          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[118]))
+          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[142]))
+          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[166]))
+          conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 190)]*kernel.shared_1[190]))
+          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[2]))
+          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[26]))
+          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[50]))
+          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[74]))
+          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[98]))
+          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[122]))
+          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[146]))
+          conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[170]))
+          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[14]))
+          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[38]))
+          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[62]))
+          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[86]))
+          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[110]))
+          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[134]))
+          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[158]))
+          conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 2)]*kernel.shared_1[182]))
+          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[5]))
+          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[29]))
+          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[53]))
+          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[77]))
+          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[101]))
+          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[125]))
+          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[149]))
+          conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[173]))
+          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[17]))
+          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[41]))
+          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[65]))
+          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[89]))
+          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[113]))
+          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[137]))
+          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[161]))
+          conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 65)]*kernel.shared_1[185]))
+          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[8]))
+          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[32]))
+          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[56]))
+          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[80]))
+          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[104]))
+          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[128]))
+          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[152]))
+          conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[176]))
+          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[20]))
+          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[44]))
+          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[68]))
+          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[92]))
+          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[116]))
+          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[140]))
+          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[164]))
+          conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 128)]*kernel.shared_1[188]))
+          conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[11]))
+          conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[35]))
+          conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[59]))
+          conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[83]))
+          conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[107]))
+          conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[131]))
+          conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[155]))
+          conv2d_nchw_1[14] = (conv2d_nchw_1[14] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[179]))
+          conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[23]))
+          conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[47]))
+          conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[71]))
+          conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[95]))
+          conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[119]))
+          conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[143]))
+          conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[167]))
+          conv2d_nchw_1[15] = (conv2d_nchw_1[15] + (pad_temp.shared_1[(((floordiv(threadIdx.x, 7)*9) + floormod(threadIdx.x, 7)) + 191)]*kernel.shared_1[191]))
         }
       }
-      attr [IterVar(threadIdx.x_2: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 224 {
-        kernel.shared_1: Buffer(kernel.shared, float32, [2304], [], scope="shared")[(threadIdx.x_2*6)] = kernel[((((blockIdx.x*294912) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*36)) + (floormod(threadIdx.x_2, 6)*6))]
-        kernel.shared_1[((threadIdx.x_2*6) + 1)] = kernel[(((((blockIdx.x*294912) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*36)) + (floormod(threadIdx.x_2, 6)*6)) + 1)]
-        kernel.shared_1[((threadIdx.x_2*6) + 2)] = kernel[(((((blockIdx.x*294912) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*36)) + (floormod(threadIdx.x_2, 6)*6)) + 2)]
-        kernel.shared_1[((threadIdx.x_2*6) + 3)] = kernel[(((((blockIdx.x*294912) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*36)) + (floormod(threadIdx.x_2, 6)*6)) + 3)]
-        kernel.shared_1[((threadIdx.x_2*6) + 4)] = kernel[(((((blockIdx.x*294912) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*36)) + (floormod(threadIdx.x_2, 6)*6)) + 4)]
-        kernel.shared_1[((threadIdx.x_2*6) + 5)] = kernel[(((((blockIdx.x*294912) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*36)) + (floormod(threadIdx.x_2, 6)*6)) + 5)]
-      }
-      attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 224 {
-        if @tir.likely((threadIdx.x_2 < 160), dtype=bool) {
-          kernel.shared_1[((threadIdx.x_2*6) + 1344)] = kernel[((((blockIdx.x*294912) + (floordiv((threadIdx.x_2 + 224), 6)*4608)) + (rc.outer.outer*36)) + (floormod(((threadIdx.x_2*2) + 4), 12)*3))]
-        }
-        if @tir.likely((threadIdx.x_2 < 160), dtype=bool) {
-          kernel.shared_1[((threadIdx.x_2*6) + 1345)] = kernel[(((((blockIdx.x*294912) + (floordiv((threadIdx.x_2 + 224), 6)*4608)) + (rc.outer.outer*36)) + (floormod(((threadIdx.x_2*2) + 4), 12)*3)) + 1)]
-        }
-        if @tir.likely((threadIdx.x_2 < 160), dtype=bool) {
-          kernel.shared_1[((threadIdx.x_2*6) + 1346)] = kernel[(((((blockIdx.x*294912) + (floordiv((threadIdx.x_2 + 224), 6)*4608)) + (rc.outer.outer*36)) + (floormod(((threadIdx.x_2*2) + 4), 12)*3)) + 2)]
-        }
-        if @tir.likely((threadIdx.x_2 < 160), dtype=bool) {
-          kernel.shared_1[((threadIdx.x_2*6) + 1347)] = kernel[((((blockIdx.x*294912) + (floordiv((threadIdx.x_2 + 224), 6)*4608)) + (rc.outer.outer*36)) + (floormod(((threadIdx.x_2*2) + 5), 12)*3))]
-        }
-        if @tir.likely((threadIdx.x_2 < 160), dtype=bool) {
-          kernel.shared_1[((threadIdx.x_2*6) + 1348)] = kernel[(((((blockIdx.x*294912) + (floordiv((threadIdx.x_2 + 224), 6)*4608)) + (rc.outer.outer*36)) + (floormod(((threadIdx.x_2*2) + 5), 12)*3)) + 1)]
-        }
-        if @tir.likely((threadIdx.x_2 < 160), dtype=bool) {
-          kernel.shared_1[((threadIdx.x_2*6) + 1349)] = kernel[(((((blockIdx.x*294912) + (floordiv((threadIdx.x_2 + 224), 6)*4608)) + (rc.outer.outer*36)) + (floormod(((threadIdx.x_2*2) + 5), 12)*3)) + 2)]
-        }
-      }
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[floormod(threadIdx.x, 7)]*kernel.shared_1[(floordiv(threadIdx.x, 7)*72)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 9)]*kernel.shared_1[(floordiv(threadIdx.x, 7)*72)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 18)]*kernel.shared_1[(floordiv(threadIdx.x, 7)*72)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 27)]*kernel.shared_1[(floordiv(threadIdx.x, 7)*72)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 36)]*kernel.shared_1[(floordiv(threadIdx.x, 7)*72)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 45)]*kernel.shared_1[(floordiv(threadIdx.x, 7)*72)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 54)]*kernel.shared_1[(floordiv(threadIdx.x, 7)*72)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[floormod(threadIdx.x, 7)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 36)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 9)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 36)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 18)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 36)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 27)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 36)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 36)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 36)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 45)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 36)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 54)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 36)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 9)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 3)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 18)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 3)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 27)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 3)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 36)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 3)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 45)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 3)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 54)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 3)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 63)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 3)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 9)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 39)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 18)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 39)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 27)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 39)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 36)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 39)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 45)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 39)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 54)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 39)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 63)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 39)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 18)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 6)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 27)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 6)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 36)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 6)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 45)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 6)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 54)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 6)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 63)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 6)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 72)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 6)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 18)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 42)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 27)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 42)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 36)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 42)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 45)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 42)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 54)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 42)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 63)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 42)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 72)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 42)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 81)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 9)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 90)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 9)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 99)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 9)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 108)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 9)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 117)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 9)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 9)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 135)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 9)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 81)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 45)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 90)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 45)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 99)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 45)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 108)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 45)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 117)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 45)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 45)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 135)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 45)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 90)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 12)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 99)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 12)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 108)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 12)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 117)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 12)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 12)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 135)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 12)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 144)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 12)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 90)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 48)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 99)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 48)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 108)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 48)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 117)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 48)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 48)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 135)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 48)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 144)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 48)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 99)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 15)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 108)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 15)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 117)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 15)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 15)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 135)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 15)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 144)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 15)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 153)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 15)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 99)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 51)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 108)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 51)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 117)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 51)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 51)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 135)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 51)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 144)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 51)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 153)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 51)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 162)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 18)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 171)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 18)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 180)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 18)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 18)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 198)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 18)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 207)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 18)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 216)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 18)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 162)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 54)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 171)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 54)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 180)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 54)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 54)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 198)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 54)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 207)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 54)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 216)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 54)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 171)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 21)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 180)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 21)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 21)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 198)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 21)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 207)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 21)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 216)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 21)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 225)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 21)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 171)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 57)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 180)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 57)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 57)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 198)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 57)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 207)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 57)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 216)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 57)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 225)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 57)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 180)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 24)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 24)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 198)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 24)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 207)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 24)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 216)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 24)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 225)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 24)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 234)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 24)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 180)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 60)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 60)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 198)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 60)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 207)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 60)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 216)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 60)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 225)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 60)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 234)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 60)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 243)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 27)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 252)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 27)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 261)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 27)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 270)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 27)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 279)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 27)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 288)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 27)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 297)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 27)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 243)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 63)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 252)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 63)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 261)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 63)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 270)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 63)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 279)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 63)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 288)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 63)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 297)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 63)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 252)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 30)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 261)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 30)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 270)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 30)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 279)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 30)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 288)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 30)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 297)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 30)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 306)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 30)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 252)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 66)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 261)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 66)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 270)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 66)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 279)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 66)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 288)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 66)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 297)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 66)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 306)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 66)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 261)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 33)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 270)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 33)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 279)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 33)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 288)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 33)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 297)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 33)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 306)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 33)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 315)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 33)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 261)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 69)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 270)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 69)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 279)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 69)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 288)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 69)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 297)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 69)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 306)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 69)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 315)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 69)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 1)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 1)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 10)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 1)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 19)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 1)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 28)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 1)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 37)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 1)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 46)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 1)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 55)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 1)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 1)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 37)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 10)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 37)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 19)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 37)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 28)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 37)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 37)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 37)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 46)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 37)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 55)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 37)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 10)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 4)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 19)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 4)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 28)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 4)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 37)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 4)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 46)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 4)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 55)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 4)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 64)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 4)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 10)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 40)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 19)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 40)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 28)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 40)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 37)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 40)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 46)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 40)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 55)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 40)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 64)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 40)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 19)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 7)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 28)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 7)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 37)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 7)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 46)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 7)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 55)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 7)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 64)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 7)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 73)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 7)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 19)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 43)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 28)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 43)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 37)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 43)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 46)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 43)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 55)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 43)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 64)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 43)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 73)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 43)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 82)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 10)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 91)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 10)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 100)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 10)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 109)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 10)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 118)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 10)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 127)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 10)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 136)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 10)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 82)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 46)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 91)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 46)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 100)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 46)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 109)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 46)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 118)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 46)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 127)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 46)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 136)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 46)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 91)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 13)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 100)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 13)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 109)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 13)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 118)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 13)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 127)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 13)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 136)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 13)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 145)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 13)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 91)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 49)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 100)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 49)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 109)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 49)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 118)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 49)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 127)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 49)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 136)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 49)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 145)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 49)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 100)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 16)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 109)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 16)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 118)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 16)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 127)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 16)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 136)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 16)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 145)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 16)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 154)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 16)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 100)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 52)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 109)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 52)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 118)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 52)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 127)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 52)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 136)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 52)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 145)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 52)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 154)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 52)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 163)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 19)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 172)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 19)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 181)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 19)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 190)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 19)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 199)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 19)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 208)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 19)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 217)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 19)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 163)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 55)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 172)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 55)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 181)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 55)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 190)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 55)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 199)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 55)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 208)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 55)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 217)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 55)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 172)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 22)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 181)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 22)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 190)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 22)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 199)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 22)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 208)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 22)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 217)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 22)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 226)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 22)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 172)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 58)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 181)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 58)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 190)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 58)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 199)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 58)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 208)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 58)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 217)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 58)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 226)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 58)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 181)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 25)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 190)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 25)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 199)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 25)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 208)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 25)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 217)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 25)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 226)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 25)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 235)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 25)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 181)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 61)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 190)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 61)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 199)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 61)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 208)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 61)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 217)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 61)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 226)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 61)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 235)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 61)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 244)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 28)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 253)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 28)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 262)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 28)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 271)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 28)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 280)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 28)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 289)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 28)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 298)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 28)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 244)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 64)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 253)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 64)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 262)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 64)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 271)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 64)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 280)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 64)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 289)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 64)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 298)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 64)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 253)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 31)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 262)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 31)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 271)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 31)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 280)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 31)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 289)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 31)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 298)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 31)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 307)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 31)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 253)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 67)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 262)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 67)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 271)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 67)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 280)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 67)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 289)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 67)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 298)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 67)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 307)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 67)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 262)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 34)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 271)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 34)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 280)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 34)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 289)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 34)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 298)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 34)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 307)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 34)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 316)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 34)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 262)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 70)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 271)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 70)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 280)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 70)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 289)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 70)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 298)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 70)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 307)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 70)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 316)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 70)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 2)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 2)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 11)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 2)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 20)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 2)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 29)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 2)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 38)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 2)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 47)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 2)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 56)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 2)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 2)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 38)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 11)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 38)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 20)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 38)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 29)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 38)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 38)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 38)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 47)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 38)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 56)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 38)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 11)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 5)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 20)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 5)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 29)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 5)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 38)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 5)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 47)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 5)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 56)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 5)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 65)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 5)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 11)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 41)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 20)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 41)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 29)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 41)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 38)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 41)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 47)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 41)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 56)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 41)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 65)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 41)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 20)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 8)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 29)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 8)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 38)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 8)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 47)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 8)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 56)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 8)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 65)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 8)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 74)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 8)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 20)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 44)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 29)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 44)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 38)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 44)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 47)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 44)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 56)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 44)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 65)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 44)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 74)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 44)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 83)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 11)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 92)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 11)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 101)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 11)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 110)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 11)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 119)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 11)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 128)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 11)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 137)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 11)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 83)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 47)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 92)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 47)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 101)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 47)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 110)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 47)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 119)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 47)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 128)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 47)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 137)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 47)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 92)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 14)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 101)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 14)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 110)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 14)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 119)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 14)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 128)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 14)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 137)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 14)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 146)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 14)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 92)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 50)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 101)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 50)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 110)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 50)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 119)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 50)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 128)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 50)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 137)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 50)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 146)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 50)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 101)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 17)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 110)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 17)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 119)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 17)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 128)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 17)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 137)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 17)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 146)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 17)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 155)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 17)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 101)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 53)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 110)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 53)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 119)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 53)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 128)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 53)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 137)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 53)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 146)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 53)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 155)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 53)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 164)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 20)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 173)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 20)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 182)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 20)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 191)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 20)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 200)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 20)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 209)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 20)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 218)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 20)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 164)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 56)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 173)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 56)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 182)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 56)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 191)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 56)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 200)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 56)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 209)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 56)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 218)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 56)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 173)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 23)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 182)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 23)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 191)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 23)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 200)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 23)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 209)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 23)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 218)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 23)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 227)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 23)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 173)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 59)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 182)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 59)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 191)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 59)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 200)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 59)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 209)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 59)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 218)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 59)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 227)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 59)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 182)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 26)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 191)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 26)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 200)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 26)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 209)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 26)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 218)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 26)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 227)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 26)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 236)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 26)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 182)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 62)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 191)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 62)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 200)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 62)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 209)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 62)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 218)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 62)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 227)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 62)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 236)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 62)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 245)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 29)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 254)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 29)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 263)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 29)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 272)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 29)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 281)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 29)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 290)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 29)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 299)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 29)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 245)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 65)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 254)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 65)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 263)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 65)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 272)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 65)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 281)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 65)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 290)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 65)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 299)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 65)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 254)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 32)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 263)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 32)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 272)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 32)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 281)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 32)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 290)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 32)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 299)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 32)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 308)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 32)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 254)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 68)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 263)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 68)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 272)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 68)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 281)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 68)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 290)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 68)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 299)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 68)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 308)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 68)]))
-      conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 263)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 35)]))
-      conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 272)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 35)]))
-      conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 281)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 35)]))
-      conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 290)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 35)]))
-      conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 299)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 35)]))
-      conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 308)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 35)]))
-      conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 317)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 35)]))
-      conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 263)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 71)]))
-      conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 272)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 71)]))
-      conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 281)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 71)]))
-      conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 290)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 71)]))
-      conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 299)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 71)]))
-      conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 308)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 71)]))
-      conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(floormod(threadIdx.x, 7) + 317)]*kernel.shared_1[((floordiv(threadIdx.x, 7)*72) + 71)]))
     }
     for (i1.inner: int32, 0, 2) {
-      for (i2.inner: int32, 0, 7) {
-        compute[(((((blockIdx.x*3136) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (i2.inner*7)) + floormod(threadIdx.x, 7))] = max((conv2d_nchw_1[((i1.inner*7) + i2.inner)] + bias[(((blockIdx.x*64) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
-      }
+      compute[(((blockIdx.x*784) + (i1.inner*49)) + threadIdx.x)] = max((conv2d_nchw_1[i1.inner] + bias[((blockIdx.x*16) + i1.inner)]), 0f32)
+      compute[((((blockIdx.x*784) + (i1.inner*49)) + threadIdx.x) + 98)] = max((conv2d_nchw_1[(i1.inner + 2)] + bias[(((blockIdx.x*16) + i1.inner) + 2)]), 0f32)
+      compute[((((blockIdx.x*784) + (i1.inner*49)) + threadIdx.x) + 196)] = max((conv2d_nchw_1[(i1.inner + 4)] + bias[(((blockIdx.x*16) + i1.inner) + 4)]), 0f32)
+      compute[((((blockIdx.x*784) + (i1.inner*49)) + threadIdx.x) + 294)] = max((conv2d_nchw_1[(i1.inner + 6)] + bias[(((blockIdx.x*16) + i1.inner) + 6)]), 0f32)
+      compute[((((blockIdx.x*784) + (i1.inner*49)) + threadIdx.x) + 392)] = max((conv2d_nchw_1[(i1.inner + 8)] + bias[(((blockIdx.x*16) + i1.inner) + 8)]), 0f32)
+      compute[((((blockIdx.x*784) + (i1.inner*49)) + threadIdx.x) + 490)] = max((conv2d_nchw_1[(i1.inner + 10)] + bias[(((blockIdx.x*16) + i1.inner) + 10)]), 0f32)
+      compute[((((blockIdx.x*784) + (i1.inner*49)) + threadIdx.x) + 588)] = max((conv2d_nchw_1[(i1.inner + 12)] + bias[(((blockIdx.x*16) + i1.inner) + 12)]), 0f32)
+      compute[((((blockIdx.x*784) + (i1.inner*49)) + threadIdx.x) + 686)] = max((conv2d_nchw_1[(i1.inner + 14)] + bias[(((blockIdx.x*16) + i1.inner) + 14)]), 0f32)
     }
   }
 }
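The write-out loop above fuses the bias add and ReLU into the convolution epilogue: each output element is stored as max(conv + bias, 0). Below is a minimal TE sketch of such a fused epilogue; every name and shape in it is an illustrative assumption, not taken from the log above.

    # Minimal sketch of a fused bias-add + ReLU epilogue in TVM TE.
    # All names and shapes here are illustrative assumptions.
    import tvm
    from tvm import te

    N, F, Y, X = 1, 512, 7, 7
    conv = te.placeholder((N, F, Y, X), name="conv")   # conv2d result
    bias = te.placeholder((1, F, 1, 1), name="bias")   # per-channel bias
    out = te.compute(
        (N, F, Y, X),
        # max(x, 0) is the ReLU that ends up fused into the epilogue above
        lambda n, f, y, x: te.max(conv[n, f, y, x] + bias[0, f, 0, 0], 0.0),
        name="out",
    )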
@@ -1092,7 +787,7 @@ cooperative fetching, unrolling and operator fusion.</p>
 <span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 0.323 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 0.363 ms
 </pre></div>
 </div>
 </div>
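The execution time reported above comes from the usual TVM time_evaluator measurement. A hedged sketch of that measurement follows; it assumes a built runtime module func, a CUDA device dev, and prepared numpy arrays data_np, weight_np, bias_np, and out_np already exist.

    # Sketch of the standard TVM timing loop; `func`, `dev`, and the *_np arrays
    # are assumed to exist (built module, tvm.cuda() device, prepared inputs).
    import numpy as np
    import tvm

    data_tvm = tvm.nd.array(data_np, device=dev)
    weight_tvm = tvm.nd.array(weight_np, device=dev)
    bias_tvm = tvm.nd.array(bias_np, device=dev)
    out_tvm = tvm.nd.empty(out_np.shape, device=dev)

    # Repeat the kernel for at least 500 ms per measurement and take the median
    # to suppress launch overhead and clock noise.
    evaluator = func.time_evaluator(func.entry_name, dev, min_repeat_ms=500)
    print(
        "Execution time of this operator: %.3f ms"
        % (np.median(evaluator(data_tvm, weight_tvm, bias_tvm, out_tvm).results) * 1000)
    )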
@@ -1123,11 +818,11 @@ conv2d_nchw_nn_o_o_o_i, conv2d_nchw_nn_o_o_i = s[conv2d_nchw].split(conv2d_nchw_
 conv2d_nchw_nn_o_o_o_o, conv2d_nchw_nn_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_nn_o_o_o_i, factor=1)
 conv2d_nchw_ff_o_i, conv2d_nchw_ff_i = s[conv2d_nchw].split(conv2d_nchw_ff, factor=2)
 conv2d_nchw_ff_o_o_i, conv2d_nchw_ff_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_i, factor=1)
-conv2d_nchw_ff_o_o_o_i, conv2d_nchw_ff_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_i, factor=32)
-conv2d_nchw_ff_o_o_o_o, conv2d_nchw_ff_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_o_i, factor=1)
-conv2d_nchw_yy_o_i, conv2d_nchw_yy_i = s[conv2d_nchw].split(conv2d_nchw_yy, factor=7)
+conv2d_nchw_ff_o_o_o_i, conv2d_nchw_ff_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_i, factor=1)
+conv2d_nchw_ff_o_o_o_o, conv2d_nchw_ff_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_o_i, factor=8)
+conv2d_nchw_yy_o_i, conv2d_nchw_yy_i = s[conv2d_nchw].split(conv2d_nchw_yy, factor=1)
 conv2d_nchw_yy_o_o_i, conv2d_nchw_yy_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_i, factor=1)
-conv2d_nchw_yy_o_o_o_i, conv2d_nchw_yy_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_i, factor=1)
+conv2d_nchw_yy_o_o_o_i, conv2d_nchw_yy_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_i, factor=7)
 conv2d_nchw_yy_o_o_o_o, conv2d_nchw_yy_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_o_i, factor=1)
 conv2d_nchw_xx_o_i, conv2d_nchw_xx_i = s[conv2d_nchw].split(conv2d_nchw_xx, factor=1)
 conv2d_nchw_xx_o_o_i, conv2d_nchw_xx_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_i, factor=1)
@@ -1135,7 +830,7 @@ conv2d_nchw_xx_o_o_o_i, conv2d_nchw_xx_o_o_i = s[conv2d_nchw].split(conv2d_nchw_
 conv2d_nchw_xx_o_o_o_o, conv2d_nchw_xx_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_o_o_i, factor=1)
 conv2d_nchw_rc_o_i, conv2d_nchw_rc_i = s[conv2d_nchw].split(conv2d_nchw_rc, factor=4)
 conv2d_nchw_rc_o_o, conv2d_nchw_rc_o_i = s[conv2d_nchw].split(conv2d_nchw_rc_o_i, factor=1)
-conv2d_nchw_ry_o_i, conv2d_nchw_ry_i = s[conv2d_nchw].split(conv2d_nchw_ry, factor=3)
+conv2d_nchw_ry_o_i, conv2d_nchw_ry_i = s[conv2d_nchw].split(conv2d_nchw_ry, factor=1)
 conv2d_nchw_ry_o_o, conv2d_nchw_ry_o_i = s[conv2d_nchw].split(conv2d_nchw_ry_o_i, factor=1)
 conv2d_nchw_rx_o_i, conv2d_nchw_rx_i = s[conv2d_nchw].split(conv2d_nchw_rx, factor=1)
 conv2d_nchw_rx_o_o, conv2d_nchw_rx_o_i = s[conv2d_nchw].split(conv2d_nchw_rx_o_i, factor=3)
@@ -1144,10 +839,10 @@ compute_i0_o_i, compute_i0_i = s[compute].split(compute_i0, factor=1)
 compute_i0_o_o_i, compute_i0_o_i = s[compute].split(compute_i0_o_i, factor=1)
 compute_i0_o_o_o, compute_i0_o_o_i = s[compute].split(compute_i0_o_o_i, factor=1)
 compute_i1_o_i, compute_i1_i = s[compute].split(compute_i1, factor=2)
-compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=32)
-compute_i1_o_o_o, compute_i1_o_o_i = s[compute].split(compute_i1_o_o_i, factor=1)
-compute_i2_o_i, compute_i2_i = s[compute].split(compute_i2, factor=7)
-compute_i2_o_o_i, compute_i2_o_i = s[compute].split(compute_i2_o_i, factor=1)
+compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=1)
+compute_i1_o_o_o, compute_i1_o_o_i = s[compute].split(compute_i1_o_o_i, factor=8)
+compute_i2_o_i, compute_i2_i = s[compute].split(compute_i2, factor=1)
+compute_i2_o_o_i, compute_i2_o_i = s[compute].split(compute_i2_o_i, factor=7)
 compute_i2_o_o_o, compute_i2_o_o_i = s[compute].split(compute_i2_o_o_i, factor=1)
 compute_i3_o_i, compute_i3_i = s[compute].split(compute_i3, factor=1)
 compute_i3_o_o_i, compute_i3_o_i = s[compute].split(compute_i3_o_i, factor=7)
@@ -1168,16 +863,16 @@ s[compute].bind(compute_i0_o_o_i_i1_o_o_i_fused_i2_o_o_i_fused_i3_o_o_i_fused, t
 compute_i0_o_i_i1_o_i_fused_i2_o_i_fused_i3_o_i_fused = s[compute].fuse(compute_i0_o_i, compute_i1_o_i, compute_i2_o_i, compute_i3_o_i)
 s[compute].bind(compute_i0_o_i_i1_o_i_fused_i2_o_i_fused_i3_o_i_fused, te.thread_axis(&quot;threadIdx.x&quot;))
 kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[kernel_shared].fuse(kernel_shared_ax0, kernel_shared_ax1, kernel_shared_ax2, kernel_shared_ax3)
-kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=6)
+kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
 s[kernel_shared].vectorize(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
-kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=224)
+kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=49)
 s[kernel_shared].bind(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis(&quot;threadIdx.x&quot;))
 pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[pad_temp_shared].fuse(pad_temp_shared_ax0, pad_temp_shared_ax1, pad_temp_shared_ax2, pad_temp_shared_ax3)
-pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=2)
+pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=6)
 s[pad_temp_shared].vectorize(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
-pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=224)
+pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=49)
 s[pad_temp_shared].bind(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis(&quot;threadIdx.x&quot;))
-s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, &quot;auto_unroll_max_step&quot;, 1024)
+s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, &quot;auto_unroll_max_step&quot;, 512)
 s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, &quot;unroll_explicit&quot;, True)
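Each line of the trace above applies one scheduling primitive: split factors an axis into outer/inner loops, fuse merges axes, bind maps an axis onto a CUDA block or thread index, and the pragma caps automatic unrolling (512 in the new schedule versus 1024 in the old one). A toy sketch of the same split/bind/pragma pattern on a trivial compute; the compute and all names (A, C, s, bx, tx) are assumptions for demonstration only.

    # Toy illustration of the split/bind/pragma primitives used in the trace above.
    import tvm
    from tvm import te

    n = 784
    A = te.placeholder((n,), name="A")
    C = te.compute((n,), lambda i: A[i] + 1.0, name="C")
    s = te.create_schedule(C.op)

    bx, tx = s[C].split(C.op.axis[0], factor=49)   # outer/inner split, like factor=49 above
    s[C].bind(bx, te.thread_axis("blockIdx.x"))    # outer iterations -> CUDA blocks
    s[C].bind(tx, te.thread_axis("threadIdx.x"))   # inner iterations -> 49 threads per block
    s[C].pragma(bx, "auto_unroll_max_step", 512)   # cap automatic unrolling, as in the log
    print(tvm.lower(s, [A, C], simple_mode=True))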
 
 CUDA source code:
@@ -1195,566 +890,257 @@ CUDA source code:
   #define int64_t long long
   #define uint64_t unsigned long long
 #endif
-extern &quot;C&quot; __global__ void __launch_bounds__(224) default_function_kernel0(float* __restrict__ data, float* __restrict__ kernel, float* __restrict__ compute, float* __restrict__ bias) {
-  float conv2d_nchw[14];
-  __shared__ float pad_temp_shared[324];
-  __shared__ float kernel_shared[2304];
+extern &quot;C&quot; __global__ void __launch_bounds__(49) default_function_kernel0(float* __restrict__ data, float* __restrict__ kernel, float* __restrict__ compute, float* __restrict__ bias) {
+  float conv2d_nchw[16];
+  __shared__ float pad_temp_shared[252];
+  __shared__ float kernel_shared[192];
   conv2d_nchw[0] = 0.000000e+00f;
-  conv2d_nchw[1] = 0.000000e+00f;
   conv2d_nchw[2] = 0.000000e+00f;
-  conv2d_nchw[3] = 0.000000e+00f;
   conv2d_nchw[4] = 0.000000e+00f;
-  conv2d_nchw[5] = 0.000000e+00f;
   conv2d_nchw[6] = 0.000000e+00f;
-  conv2d_nchw[7] = 0.000000e+00f;
   conv2d_nchw[8] = 0.000000e+00f;
-  conv2d_nchw[9] = 0.000000e+00f;
   conv2d_nchw[10] = 0.000000e+00f;
-  conv2d_nchw[11] = 0.000000e+00f;
   conv2d_nchw[12] = 0.000000e+00f;
+  conv2d_nchw[14] = 0.000000e+00f;
+  conv2d_nchw[1] = 0.000000e+00f;
+  conv2d_nchw[3] = 0.000000e+00f;
+  conv2d_nchw[5] = 0.000000e+00f;
+  conv2d_nchw[7] = 0.000000e+00f;
+  conv2d_nchw[9] = 0.000000e+00f;
+  conv2d_nchw[11] = 0.000000e+00f;
   conv2d_nchw[13] = 0.000000e+00f;
+  conv2d_nchw[15] = 0.000000e+00f;
   for (int rc_outer_outer = 0; rc_outer_outer &lt; 128; ++rc_outer_outer) {
-    __syncthreads();
-    if (((int)threadIdx.x) &lt; 162) {
-      pad_temp_shared[(((int)threadIdx.x) * 2)] = (((((9 &lt;= ((((int)threadIdx.x) * 2) % 81)) &amp;&amp; (((((int)threadIdx.x) * 2) % 81) &lt; 72)) &amp;&amp; (1 &lt;= ((((int)threadIdx.x) * 2) % 9))) &amp;&amp; (((((int)threadIdx.x) * 2) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 196) + (((((int)threadIdx.x) * 2) / 81) * 49)) + ((((((int)threadIdx.x) * 2) % 81) / 9) * 7)) + ((((int)threadIdx.x) * 2) % 9)) - 8)] : 0.000000e+00f);
-    }
-    if (((int)threadIdx.x) &lt; 162) {
-      pad_temp_shared[((((int)threadIdx.x) * 2) + 1)] = (((((9 &lt;= (((((int)threadIdx.x) * 2) + 1) % 81)) &amp;&amp; ((((((int)threadIdx.x) * 2) + 1) % 81) &lt; 72)) &amp;&amp; (1 &lt;= (((((int)threadIdx.x) * 2) + 1) % 9))) &amp;&amp; ((((((int)threadIdx.x) * 2) + 1) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 196) + ((((((int)threadIdx.x) * 2) + 1) / 81) * 49)) + (((((((int)threadIdx.x) * 2) + 1) % 81) / 9) * 7)) + (((((int)threadIdx.x) * 2) + 1) % 9)) - 8)] : 0.000000e+00f);
-    }
-    kernel_shared[(((int)threadIdx.x) * 6)] = kernel[((((((int)blockIdx.x) * 294912) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((int)threadIdx.x) % 6) * 6))];
-    kernel_shared[((((int)threadIdx.x) * 6) + 1)] = kernel[(((((((int)blockIdx.x) * 294912) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((int)threadIdx.x) % 6) * 6)) + 1)];
-    kernel_shared[((((int)threadIdx.x) * 6) + 2)] = kernel[(((((((int)blockIdx.x) * 294912) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((int)threadIdx.x) % 6) * 6)) + 2)];
-    kernel_shared[((((int)threadIdx.x) * 6) + 3)] = kernel[(((((((int)blockIdx.x) * 294912) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((int)threadIdx.x) % 6) * 6)) + 3)];
-    kernel_shared[((((int)threadIdx.x) * 6) + 4)] = kernel[(((((((int)blockIdx.x) * 294912) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((int)threadIdx.x) % 6) * 6)) + 4)];
-    kernel_shared[((((int)threadIdx.x) * 6) + 5)] = kernel[(((((((int)blockIdx.x) * 294912) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((int)threadIdx.x) % 6) * 6)) + 5)];
-    if (((int)threadIdx.x) &lt; 160) {
-      kernel_shared[((((int)threadIdx.x) * 6) + 1344)] = kernel[((((((int)blockIdx.x) * 294912) + (((((int)threadIdx.x) + 224) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((((int)threadIdx.x) * 2) + 4) % 12) * 3))];
-    }
-    if (((int)threadIdx.x) &lt; 160) {
-      kernel_shared[((((int)threadIdx.x) * 6) + 1345)] = kernel[(((((((int)blockIdx.x) * 294912) + (((((int)threadIdx.x) + 224) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((((int)threadIdx.x) * 2) + 4) % 12) * 3)) + 1)];
-    }
-    if (((int)threadIdx.x) &lt; 160) {
-      kernel_shared[((((int)threadIdx.x) * 6) + 1346)] = kernel[(((((((int)blockIdx.x) * 294912) + (((((int)threadIdx.x) + 224) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((((int)threadIdx.x) * 2) + 4) % 12) * 3)) + 2)];
-    }
-    if (((int)threadIdx.x) &lt; 160) {
-      kernel_shared[((((int)threadIdx.x) * 6) + 1347)] = kernel[((((((int)blockIdx.x) * 294912) + (((((int)threadIdx.x) + 224) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((((int)threadIdx.x) * 2) + 5) % 12) * 3))];
-    }
-    if (((int)threadIdx.x) &lt; 160) {
-      kernel_shared[((((int)threadIdx.x) * 6) + 1348)] = kernel[(((((((int)blockIdx.x) * 294912) + (((((int)threadIdx.x) + 224) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((((int)threadIdx.x) * 2) + 5) % 12) * 3)) + 1)];
-    }
-    if (((int)threadIdx.x) &lt; 160) {
-      kernel_shared[((((int)threadIdx.x) * 6) + 1349)] = kernel[(((((((int)blockIdx.x) * 294912) + (((((int)threadIdx.x) + 224) / 6) * 4608)) + (rc_outer_outer * 36)) + ((((((int)threadIdx.x) * 2) + 5) % 12) * 3)) + 2)];
+    for (int ry_outer_outer = 0; ry_outer_outer &lt; 3; ++ry_outer_outer) {
+      __syncthreads();
+      if (((int)threadIdx.x) &lt; 42) {
+        pad_temp_shared[(((int)threadIdx.x) * 6)] = (((((1 &lt;= ((((((int)threadIdx.x) * 2) % 21) / 3) + ry_outer_outer)) &amp;&amp; (((((((int)threadIdx.x) * 2) % 21) / 3) + ry_outer_outer) &lt; 8)) &amp;&amp; (1 &lt;= ((((int)threadIdx.x) * 6) % 9))) &amp;&amp; (((((int)threadIdx.x) * 6) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 196) + (((((int)threadIdx.x) * 2) / 3) * 7)) + (ry_outer_outer * 7)) + ((((int)threadIdx.x) * 6) % 9)) - 8)] : 0.000000e+00f);
+      }
+      if (((int)threadIdx.x) &lt; 42) {
+        pad_temp_shared[((((int)threadIdx.x) * 6) + 1)] = (((((1 &lt;= ((((((int)threadIdx.x) * 2) % 21) / 3) + ry_outer_outer)) &amp;&amp; (((((((int)threadIdx.x) * 2) % 21) / 3) + ry_outer_outer) &lt; 8)) &amp;&amp; (1 &lt;= (((((int)threadIdx.x) * 6) + 1) % 9))) &amp;&amp; ((((((int)threadIdx.x) * 6) + 1) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 196) + (((((int)threadIdx.x) * 2) / 3) * 7)) + (ry_outer_outer * 7)) + (((((int)threadIdx.x) * 6) + 1) % 9)) - 8)] : 0.000000e+00f);
+      }
+      if (((int)threadIdx.x) &lt; 42) {
+        pad_temp_shared[((((int)threadIdx.x) * 6) + 2)] = (((((1 &lt;= ((((((int)threadIdx.x) * 2) % 21) / 3) + ry_outer_outer)) &amp;&amp; (((((((int)threadIdx.x) * 2) % 21) / 3) + ry_outer_outer) &lt; 8)) &amp;&amp; (1 &lt;= (((((int)threadIdx.x) * 6) + 2) % 9))) &amp;&amp; ((((((int)threadIdx.x) * 6) + 2) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 196) + (((((int)threadIdx.x) * 2) / 3) * 7)) + (ry_outer_outer * 7)) + (((((int)threadIdx.x) * 6) + 2) % 9)) - 8)] : 0.000000e+00f);
+      }
+      if (((int)threadIdx.x) &lt; 42) {
+        pad_temp_shared[((((int)threadIdx.x) * 6) + 3)] = (((((1 &lt;= (((((((int)threadIdx.x) * 2) + 1) % 21) / 3) + ry_outer_outer)) &amp;&amp; ((((((((int)threadIdx.x) * 2) + 1) % 21) / 3) + ry_outer_outer) &lt; 8)) &amp;&amp; (1 &lt;= (((((int)threadIdx.x) * 6) + 3) % 9))) &amp;&amp; ((((((int)threadIdx.x) * 6) + 3) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 196) + ((((((int)threadIdx.x) * 2) + 1) / 3) * 7)) + (ry_outer_outer * 7)) + (((((int)threadIdx.x) * 6) + 3) % 9)) - 8)] : 0.00 [...]
+      }
+      if (((int)threadIdx.x) &lt; 42) {
+        pad_temp_shared[((((int)threadIdx.x) * 6) + 4)] = (((((1 &lt;= (((((((int)threadIdx.x) * 2) + 1) % 21) / 3) + ry_outer_outer)) &amp;&amp; ((((((((int)threadIdx.x) * 2) + 1) % 21) / 3) + ry_outer_outer) &lt; 8)) &amp;&amp; (1 &lt;= (((((int)threadIdx.x) * 6) + 4) % 9))) &amp;&amp; ((((((int)threadIdx.x) * 6) + 4) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 196) + ((((((int)threadIdx.x) * 2) + 1) / 3) * 7)) + (ry_outer_outer * 7)) + (((((int)threadIdx.x) * 6) + 4) % 9)) - 8)] : 0.00 [...]
+      }
+      if (((int)threadIdx.x) &lt; 42) {
+        pad_temp_shared[((((int)threadIdx.x) * 6) + 5)] = (((((1 &lt;= (((((((int)threadIdx.x) * 2) + 1) % 21) / 3) + ry_outer_outer)) &amp;&amp; ((((((((int)threadIdx.x) * 2) + 1) % 21) / 3) + ry_outer_outer) &lt; 8)) &amp;&amp; (1 &lt;= (((((int)threadIdx.x) * 6) + 5) % 9))) &amp;&amp; ((((((int)threadIdx.x) * 6) + 5) % 9) &lt; 8)) ? data[(((((rc_outer_outer * 196) + ((((((int)threadIdx.x) * 2) + 1) / 3) * 7)) + (ry_outer_outer * 7)) + (((((int)threadIdx.x) * 6) + 5) % 9)) - 8)] : 0.00 [...]
+      }
+      kernel_shared[((int)threadIdx.x)] = kernel[((((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 12) * 4608)) + (rc_outer_outer * 36)) + (((((int)threadIdx.x) % 12) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3))];
+      kernel_shared[(((int)threadIdx.x) + 49)] = kernel[((((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 49) / 12) * 4608)) + (rc_outer_outer * 36)) + ((((((int)threadIdx.x) + 1) % 12) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+      kernel_shared[(((int)threadIdx.x) + 98)] = kernel[((((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 98) / 12) * 4608)) + (rc_outer_outer * 36)) + ((((((int)threadIdx.x) + 2) % 12) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+      if (((int)threadIdx.x) &lt; 45) {
+        kernel_shared[(((int)threadIdx.x) + 147)] = kernel[((((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 147) / 12) * 4608)) + (rc_outer_outer * 36)) + ((((((int)threadIdx.x) / 3) + 1) &amp; 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3))];
+      }
+      __syncthreads();
+      conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[0]));
+      conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[24]));
+      conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[48]));
+      conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[72]));
+      conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[96]));
+      conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[120]));
+      conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[144]));
+      conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[168]));
+      conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[12]));
+      conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[36]));
+      conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[60]));
+      conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[84]));
+      conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[108]));
+      conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[132]));
+      conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[156]));
+      conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[(((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7))] * kernel_shared[180]));
+      conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[3]));
+      conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[27]));
+      conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[51]));
+      conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[75]));
+      conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[99]));
+      conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[123]));
+      conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[147]));
+      conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[171]));
+      conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[15]));
+      conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[39]));
+      conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[63]));
+      conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[87]));
+      conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[111]));
+      conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[135]));
+      conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[159]));
+      conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 63)] * kernel_shared[183]));
+      conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[6]));
+      conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[30]));
+      conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[54]));
+      conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[78]));
+      conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[102]));
+      conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[126]));
+      conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[150]));
+      conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[174]));
+      conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[18]));
+      conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[42]));
+      conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[66]));
+      conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[90]));
+      conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[114]));
+      conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[138]));
+      conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[162]));
+      conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 126)] * kernel_shared[186]));
+      conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[9]));
+      conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[33]));
+      conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[57]));
+      conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[81]));
+      conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[105]));
+      conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[129]));
+      conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[153]));
+      conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[177]));
+      conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[21]));
+      conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[45]));
+      conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[69]));
+      conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[93]));
+      conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[117]));
+      conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[141]));
+      conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[165]));
+      conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 189)] * kernel_shared[189]));
+      conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[1]));
+      conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[25]));
+      conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[49]));
+      conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[73]));
+      conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[97]));
+      conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[121]));
+      conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[145]));
+      conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[169]));
+      conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[13]));
+      conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[37]));
+      conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[61]));
+      conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[85]));
+      conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[109]));
+      conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[133]));
+      conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[157]));
+      conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 1)] * kernel_shared[181]));
+      conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[4]));
+      conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[28]));
+      conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[52]));
+      conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[76]));
+      conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[100]));
+      conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[124]));
+      conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[148]));
+      conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[172]));
+      conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[16]));
+      conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[40]));
+      conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[64]));
+      conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[88]));
+      conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[112]));
+      conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[136]));
+      conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[160]));
+      conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 64)] * kernel_shared[184]));
+      conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[7]));
+      conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[31]));
+      conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[55]));
+      conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[79]));
+      conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[103]));
+      conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[127]));
+      conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[151]));
+      conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[175]));
+      conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[19]));
+      conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[43]));
+      conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[67]));
+      conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[91]));
+      conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[115]));
+      conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[139]));
+      conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[163]));
+      conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 127)] * kernel_shared[187]));
+      conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[10]));
+      conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[34]));
+      conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[58]));
+      conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[82]));
+      conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[106]));
+      conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[130]));
+      conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[154]));
+      conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[178]));
+      conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[22]));
+      conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[46]));
+      conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[70]));
+      conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[94]));
+      conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[118]));
+      conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[142]));
+      conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[166]));
+      conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 190)] * kernel_shared[190]));
+      conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[2]));
+      conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[26]));
+      conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[50]));
+      conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[74]));
+      conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[98]));
+      conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[122]));
+      conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[146]));
+      conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[170]));
+      conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[14]));
+      conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[38]));
+      conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[62]));
+      conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[86]));
+      conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[110]));
+      conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[134]));
+      conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[158]));
+      conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 2)] * kernel_shared[182]));
+      conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[5]));
+      conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[29]));
+      conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[53]));
+      conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[77]));
+      conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[101]));
+      conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[125]));
+      conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[149]));
+      conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[173]));
+      conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[17]));
+      conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[41]));
+      conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[65]));
+      conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[89]));
+      conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[113]));
+      conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[137]));
+      conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[161]));
+      conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 65)] * kernel_shared[185]));
+      conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[8]));
+      conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[32]));
+      conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[56]));
+      conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[80]));
+      conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[104]));
+      conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[128]));
+      conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[152]));
+      conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[176]));
+      conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[20]));
+      conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[44]));
+      conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[68]));
+      conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[92]));
+      conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[116]));
+      conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[140]));
+      conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[164]));
+      conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 128)] * kernel_shared[188]));
+      conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[11]));
+      conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[35]));
+      conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[59]));
+      conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[83]));
+      conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[107]));
+      conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[131]));
+      conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[155]));
+      conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[179]));
+      conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[23]));
+      conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[47]));
+      conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[71]));
+      conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[95]));
+      conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[119]));
+      conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[143]));
+      conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[167]));
+      conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp_shared[((((((int)threadIdx.x) / 7) * 9) + (((int)threadIdx.x) % 7)) + 191)] * kernel_shared[191]));
     }
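+      // For readability: a minimal hand-written sketch (not part of the
+      // generated schedule) of the rolled loop nest that the fully unrolled
+      // multiply-accumulate block above corresponds to, assuming the
+      // shared-memory layout used here: pad_temp_shared holds four 9x7
+      // input-channel slices (63 floats each), and kernel_shared holds, per
+      // thread, eight even/odd output-channel pairs of 24 weights (12 per
+      // channel = 4 input channels x 3 kernel columns).
+      //
+      // for (int rc = 0; rc < 4; ++rc) {        // input-channel slice
+      //   for (int rx = 0; rx < 3; ++rx) {      // kernel column
+      //     float v = pad_temp_shared[((((int)threadIdx.x) / 7) * 9)
+      //                               + (((int)threadIdx.x) % 7)
+      //                               + (rc * 63) + rx];
+      //     for (int oc = 0; oc < 8; ++oc) {    // output-channel pairs
+      //       conv2d_nchw[2 * oc]     += v * kernel_shared[(oc * 24) + (rc * 3) + rx];
+      //       conv2d_nchw[2 * oc + 1] += v * kernel_shared[(oc * 24) + 12 + (rc * 3) + rx];
+      //     }
+      //   }
+      // }
+      //
+      // The auto-scheduler emits the unrolled form so every pad_temp_shared
+      // load is reused across all sixteen accumulators without loop overhead.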
-    __syncthreads();
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[(((int)threadIdx.x) % 7)] * kernel_shared[((((int)threadIdx.x) / 7) * 72)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 9)] * kernel_shared[((((int)threadIdx.x) / 7) * 72)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 18)] * kernel_shared[((((int)threadIdx.x) / 7) * 72)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 27)] * kernel_shared[((((int)threadIdx.x) / 7) * 72)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 36)] * kernel_shared[((((int)threadIdx.x) / 7) * 72)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 45)] * kernel_shared[((((int)threadIdx.x) / 7) * 72)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 54)] * kernel_shared[((((int)threadIdx.x) / 7) * 72)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[(((int)threadIdx.x) % 7)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 36)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 9)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 36)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 18)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 36)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 27)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 36)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 36)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 36)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 45)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 36)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 54)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 36)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 9)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 3)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 18)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 3)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 27)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 3)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 36)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 3)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 45)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 3)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 54)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 3)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 63)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 3)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 9)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 39)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 18)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 39)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 27)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 39)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 36)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 39)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 45)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 39)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 54)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 39)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 63)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 39)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 18)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 6)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 27)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 6)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 36)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 6)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 45)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 6)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 54)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 6)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 63)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 6)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 72)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 6)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 18)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 42)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 27)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 42)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 36)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 42)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 45)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 42)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 54)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 42)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 63)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 42)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 72)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 42)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 81)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 9)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 90)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 9)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 99)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 9)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 108)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 9)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 117)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 9)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 126)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 9)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 135)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 9)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 81)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 45)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 90)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 45)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 99)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 45)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 108)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 45)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 117)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 45)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 126)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 45)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 135)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 45)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 90)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 12)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 99)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 12)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 108)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 12)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 117)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 12)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 126)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 12)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 135)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 12)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 144)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 12)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 90)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 48)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 99)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 48)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 108)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 48)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 117)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 48)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 126)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 48)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 135)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 48)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 144)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 48)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 99)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 15)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 108)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 15)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 117)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 15)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 126)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 15)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 135)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 15)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 144)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 15)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 153)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 15)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 99)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 51)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 108)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 51)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 117)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 51)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 126)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 51)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 135)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 51)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 144)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 51)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 153)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 51)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 162)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 18)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 171)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 18)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 180)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 18)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 189)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 18)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 198)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 18)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 207)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 18)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 216)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 18)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 162)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 54)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 171)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 54)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 180)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 54)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 189)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 54)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 198)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 54)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 207)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 54)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 216)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 54)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 171)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 21)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 180)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 21)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 189)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 21)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 198)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 21)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 207)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 21)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 216)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 21)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 225)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 21)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 171)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 57)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 180)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 57)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 189)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 57)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 198)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 57)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 207)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 57)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 216)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 57)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 225)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 57)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 180)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 24)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 189)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 24)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 198)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 24)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 207)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 24)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 216)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 24)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 225)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 24)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 234)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 24)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 180)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 60)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 189)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 60)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 198)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 60)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 207)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 60)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 216)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 60)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 225)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 60)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 234)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 60)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 243)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 27)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 252)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 27)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 261)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 27)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 270)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 27)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 279)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 27)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 288)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 27)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 297)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 27)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 243)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 63)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 252)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 63)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 261)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 63)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 270)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 63)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 279)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 63)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 288)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 63)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 297)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 63)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 252)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 30)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 261)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 30)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 270)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 30)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 279)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 30)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 288)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 30)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 297)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 30)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 306)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 30)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 252)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 66)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 261)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 66)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 270)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 66)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 279)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 66)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 288)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 66)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 297)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 66)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 306)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 66)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 261)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 33)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 270)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 33)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 279)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 33)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 288)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 33)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 297)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 33)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 306)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 33)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 315)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 33)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 261)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 69)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 270)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 69)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 279)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 69)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 288)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 69)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 297)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 69)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 306)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 69)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 315)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 69)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 1)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 1)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 10)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 1)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 19)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 1)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 28)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 1)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 37)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 1)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 46)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 1)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 55)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 1)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 1)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 37)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 10)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 37)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 19)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 37)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 28)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 37)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 37)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 37)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 46)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 37)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 55)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 37)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 10)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 4)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 19)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 4)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 28)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 4)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 37)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 4)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 46)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 4)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 55)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 4)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 64)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 4)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 10)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 40)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 19)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 40)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 28)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 40)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 37)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 40)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 46)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 40)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 55)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 40)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 64)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 40)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 19)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 7)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 28)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 7)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 37)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 7)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 46)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 7)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 55)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 7)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 64)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 7)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 73)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 7)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 19)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 43)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 28)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 43)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 37)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 43)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 46)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 43)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 55)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 43)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 64)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 43)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 73)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 43)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 82)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 10)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 91)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 10)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 100)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 10)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 109)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 10)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 118)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 10)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 127)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 10)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 136)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 10)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 82)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 46)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 91)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 46)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 100)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 46)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 109)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 46)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 118)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 46)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 127)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 46)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 136)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 46)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 91)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 13)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 100)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 13)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 109)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 13)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 118)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 13)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 127)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 13)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 136)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 13)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 145)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 13)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 91)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 49)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 100)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 49)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 109)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 49)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 118)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 49)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 127)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 49)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 136)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 49)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 145)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 49)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 100)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 16)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 109)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 16)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 118)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 16)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 127)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 16)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 136)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 16)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 145)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 16)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 154)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 16)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 100)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 52)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 109)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 52)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 118)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 52)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 127)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 52)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 136)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 52)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 145)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 52)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 154)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 52)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 163)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 19)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 172)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 19)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 181)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 19)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 190)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 19)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 199)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 19)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 208)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 19)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 217)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 19)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 163)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 55)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 172)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 55)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 181)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 55)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 190)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 55)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 199)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 55)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 208)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 55)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 217)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 55)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 172)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 22)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 181)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 22)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 190)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 22)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 199)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 22)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 208)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 22)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 217)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 22)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 226)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 22)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 172)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 58)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 181)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 58)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 190)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 58)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 199)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 58)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 208)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 58)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 217)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 58)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 226)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 58)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 181)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 25)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 190)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 25)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 199)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 25)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 208)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 25)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 217)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 25)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 226)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 25)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 235)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 25)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 181)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 61)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 190)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 61)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 199)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 61)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 208)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 61)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 217)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 61)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 226)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 61)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 235)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 61)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 244)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 28)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 253)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 28)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 262)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 28)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 271)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 28)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 280)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 28)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 289)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 28)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 298)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 28)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 244)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 64)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 253)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 64)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 262)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 64)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 271)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 64)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 280)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 64)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 289)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 64)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 298)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 64)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 253)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 31)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 262)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 31)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 271)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 31)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 280)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 31)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 289)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 31)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 298)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 31)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 307)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 31)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 253)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 67)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 262)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 67)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 271)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 67)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 280)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 67)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 289)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 67)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 298)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 67)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 307)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 67)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 262)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 34)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 271)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 34)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 280)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 34)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 289)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 34)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 298)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 34)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 307)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 34)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 316)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 34)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 262)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 70)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 271)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 70)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 280)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 70)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 289)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 70)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 298)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 70)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 307)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 70)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 316)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 70)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 2)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 2)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 11)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 2)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 20)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 2)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 29)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 2)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 38)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 2)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 47)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 2)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 56)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 2)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 2)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 38)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 11)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 38)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 20)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 38)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 29)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 38)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 38)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 38)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 47)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 38)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 56)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 38)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 11)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 5)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 20)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 5)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 29)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 5)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 38)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 5)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 47)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 5)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 56)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 5)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 65)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 5)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 11)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 41)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 20)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 41)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 29)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 41)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 38)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 41)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 47)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 41)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 56)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 41)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 65)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 41)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 20)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 8)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 29)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 8)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 38)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 8)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 47)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 8)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 56)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 8)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 65)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 8)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 74)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 8)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 20)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 44)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 29)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 44)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 38)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 44)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 47)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 44)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 56)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 44)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 65)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 44)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 74)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 44)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 83)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 11)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 92)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 11)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 101)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 11)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 110)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 11)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 119)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 11)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 128)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 11)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 137)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 11)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 83)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 47)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 92)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 47)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 101)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 47)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 110)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 47)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 119)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 47)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 128)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 47)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 137)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 47)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 92)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 14)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 101)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 14)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 110)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 14)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 119)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 14)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 128)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 14)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 137)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 14)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 146)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 14)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 92)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 50)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 101)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 50)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 110)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 50)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 119)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 50)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 128)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 50)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 137)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 50)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 146)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 50)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 101)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 17)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 110)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 17)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 119)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 17)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 128)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 17)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 137)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 17)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 146)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 17)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 155)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 17)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 101)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 53)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 110)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 53)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 119)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 53)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 128)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 53)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 137)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 53)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 146)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 53)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 155)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 53)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 164)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 20)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 173)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 20)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 182)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 20)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 191)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 20)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 200)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 20)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 209)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 20)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 218)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 20)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 164)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 56)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 173)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 56)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 182)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 56)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 191)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 56)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 200)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 56)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 209)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 56)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 218)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 56)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 173)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 23)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 182)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 23)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 191)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 23)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 200)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 23)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 209)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 23)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 218)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 23)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 227)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 23)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 173)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 59)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 182)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 59)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 191)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 59)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 200)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 59)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 209)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 59)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 218)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 59)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 227)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 59)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 182)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 26)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 191)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 26)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 200)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 26)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 209)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 26)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 218)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 26)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 227)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 26)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 236)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 26)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 182)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 62)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 191)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 62)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 200)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 62)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 209)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 62)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 218)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 62)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 227)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 62)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 236)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 62)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 245)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 29)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 254)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 29)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 263)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 29)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 272)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 29)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 281)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 29)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 290)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 29)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 299)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 29)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 245)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 65)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 254)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 65)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 263)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 65)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 272)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 65)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 281)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 65)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 290)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 65)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 299)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 65)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 254)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 32)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 263)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 32)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 272)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 32)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 281)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 32)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 290)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 32)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 299)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 32)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 308)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 32)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 254)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 68)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 263)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 68)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 272)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 68)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 281)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 68)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 290)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 68)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 299)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 68)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 308)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 68)]));
-    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 263)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 35)]));
-    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 272)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 35)]));
-    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 281)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 35)]));
-    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 290)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 35)]));
-    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 299)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 35)]));
-    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 308)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 35)]));
-    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 317)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 35)]));
-    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 263)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 71)]));
-    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 272)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 71)]));
-    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 281)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 71)]));
-    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 290)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 71)]));
-    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 299)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 71)]));
-    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 308)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 71)]));
-    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[((((int)threadIdx.x) % 7) + 317)] * kernel_shared[(((((int)threadIdx.x) / 7) * 72) + 71)]));
   }
   for (int i1_inner = 0; i1_inner &lt; 2; ++i1_inner) {
-    for (int i2_inner = 0; i2_inner &lt; 7; ++i2_inner) {
-      compute[(((((((int)blockIdx.x) * 3136) + ((((int)threadIdx.x) / 7) * 98)) + (i1_inner * 49)) + (i2_inner * 7)) + (((int)threadIdx.x) % 7))] = max((conv2d_nchw[((i1_inner * 7) + i2_inner)] + bias[(((((int)blockIdx.x) * 64) + ((((int)threadIdx.x) / 7) * 2)) + i1_inner)]), 0.000000e+00f);
-    }
+    compute[(((((int)blockIdx.x) * 784) + (i1_inner * 49)) + ((int)threadIdx.x))] = max((conv2d_nchw[i1_inner] + bias[((((int)blockIdx.x) * 16) + i1_inner)]), 0.000000e+00f);
+    compute[((((((int)blockIdx.x) * 784) + (i1_inner * 49)) + ((int)threadIdx.x)) + 98)] = max((conv2d_nchw[(i1_inner + 2)] + bias[(((((int)blockIdx.x) * 16) + i1_inner) + 2)]), 0.000000e+00f);
+    compute[((((((int)blockIdx.x) * 784) + (i1_inner * 49)) + ((int)threadIdx.x)) + 196)] = max((conv2d_nchw[(i1_inner + 4)] + bias[(((((int)blockIdx.x) * 16) + i1_inner) + 4)]), 0.000000e+00f);
+    compute[((((((int)blockIdx.x) * 784) + (i1_inner * 49)) + ((int)threadIdx.x)) + 294)] = max((conv2d_nchw[(i1_inner + 6)] + bias[(((((int)blockIdx.x) * 16) + i1_inner) + 6)]), 0.000000e+00f);
+    compute[((((((int)blockIdx.x) * 784) + (i1_inner * 49)) + ((int)threadIdx.x)) + 392)] = max((conv2d_nchw[(i1_inner + 8)] + bias[(((((int)blockIdx.x) * 16) + i1_inner) + 8)]), 0.000000e+00f);
+    compute[((((((int)blockIdx.x) * 784) + (i1_inner * 49)) + ((int)threadIdx.x)) + 490)] = max((conv2d_nchw[(i1_inner + 10)] + bias[(((((int)blockIdx.x) * 16) + i1_inner) + 10)]), 0.000000e+00f);
+    compute[((((((int)blockIdx.x) * 784) + (i1_inner * 49)) + ((int)threadIdx.x)) + 588)] = max((conv2d_nchw[(i1_inner + 12)] + bias[(((((int)blockIdx.x) * 16) + i1_inner) + 12)]), 0.000000e+00f);
+    compute[((((((int)blockIdx.x) * 784) + (i1_inner * 49)) + ((int)threadIdx.x)) + 686)] = max((conv2d_nchw[(i1_inner + 14)] + bias[(((((int)blockIdx.x) * 16) + i1_inner) + 14)]), 0.000000e+00f);
   }
 }
 </pre></div>
@@ -1791,7 +1177,7 @@ In the example below we resume from the saved status and do 5 more trials.</p>
 Get devices for measurement successfully!
 </pre></div>
 </div>
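For context, resuming an auto-scheduler search from a saved log works along the lines of the sketch below. It warm-starts the cost model from the measured records and preloads the measured states so the extra trials do not revisit them; the names `task` and `log_file` stand in for the search task and record file produced by the earlier run.

    from tvm import auto_scheduler

    def resume_search(task, log_file):
        # Warm-start the cost model from the previously measured records.
        cost_model = auto_scheduler.XGBModel()
        cost_model.update_from_file(log_file)
        # Preload measured states so the search continues instead of restarting.
        search_policy = auto_scheduler.SketchPolicy(
            task,
            cost_model,
            init_search_callbacks=[auto_scheduler.PreloadMeasuredStates(log_file)],
        )
        tune_option = auto_scheduler.TuningOptions(
            num_measure_trials=5,  # the 5 extra trials mentioned above
            measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
        )
        task.tune(tune_option, search_policy=search_policy)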
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  28.877 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  25.791 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-conv2d-layer-cuda-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/e3e540f3b477c0c52d8eb73e674e8ffd/tune_conv2d_layer_cuda.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_conv2d_layer_cuda.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html b/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
index a93b55df60..e22a9b0ec1 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
@@ -902,7 +902,7 @@ so we can read the log file and load the best schedules.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-   8.2600       8.2629       8.2651       8.2518       0.0058
+   8.1702       8.1746       8.1787       8.1573       0.0093
 </pre></div>
 </div>
 </div>
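Reading the log file and applying the best found schedules at compile time is typically done with `auto_scheduler.ApplyHistoryBest`; a minimal sketch, assuming `mod`, `params`, `target`, and `log_file` come from the earlier network-import and tuning steps:

    import tvm
    from tvm import auto_scheduler, relay

    # Compile the network, picking the best schedule for each task from the log.
    with auto_scheduler.ApplyHistoryBest(log_file):
        with tvm.transform.PassContext(
            opt_level=3, config={"relay.backend.use_auto_scheduler": True}
        ):
            lib = relay.build(mod, target=target, params=params)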
diff --git a/docs/how_to/tune_with_autoscheduler/tune_network_x86.html b/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
index fda250fd63..dd05a9bff9 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
@@ -921,7 +921,7 @@ so we can read the log file and load the best schedules.</p>
 Evaluate inference time cost...
 Execution time summary:
  mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
-  756.5330     756.6696     756.9334     755.9959      0.3947
+  744.2726     744.0238     745.1500     743.6439      0.6395
 </pre></div>
 </div>
 </div>
@@ -943,7 +943,7 @@ to learn how to use the RPC Tracker and RPC Server.
 To use the RPC Tracker in auto-scheduler, replace the runner in <code class="code docutils literal notranslate"><span class="pre">TuningOptions</span></code>
 with <a class="reference internal" href="../../reference/api/python/auto_scheduler.html#tvm.auto_scheduler.RPCRunner" title="tvm.auto_scheduler.RPCRunner"><code class="xref any py py-class docutils literal notranslate"><span class="pre">auto_scheduler.RPCRunner</span></code></a>.</p></li>
 </ol>
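As noted above, switching to remote measurement only changes the runner; a minimal sketch, where the device key <code class="docutils literal notranslate"><span class="pre">&quot;v100&quot;</span></code>, the tracker address, and the measurement parameters are illustrative placeholders for whatever is registered with your own RPC Tracker:

    from tvm import auto_scheduler

    tune_option = auto_scheduler.TuningOptions(
        num_measure_trials=200,
        # Measure on a remote device registered with the RPC Tracker
        # instead of the local machine.
        runner=auto_scheduler.RPCRunner(
            "v100", host="127.0.0.1", port=9190,
            repeat=3, min_repeat_ms=300, timeout=50,
        ),
        measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
    )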
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  31.029 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  29.019 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-network-x86-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/e416b94ca1090b0897c0f6e0df95b911/tune_network_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_network_x86.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_sparse_x86.html b/docs/how_to/tune_with_autoscheduler/tune_sparse_x86.html
index c3fb949744..aa027ad720 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_sparse_x86.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_sparse_x86.html
@@ -625,21 +625,23 @@ layout transformation, parallelization, vectorization, unrolling, and operator f
              placeholder_4: Buffer(placeholder_14: Pointer(float32), float32, [65536], []),
              compute: Buffer(compute_2: Pointer(float32), float32, [65536], [])}
   buffer_map = {placeholder_5: placeholder, placeholder_6: placeholder_1, placeholder_7: placeholder_2, placeholder_8: placeholder_3, placeholder_9: placeholder_4, compute_1: compute}
-  preflattened_buffer_map = {placeholder_9: placeholder_15: Buffer(placeholder_14, float32, [128, 512], []), compute_1: compute_3: Buffer(compute_2, float32, [128, 512], []), placeholder_5: placeholder_16: Buffer(placeholder_10, float32, [128, 256], []), placeholder_6: placeholder_17: Buffer(placeholder_11, float32, [4916, 16, 1], []), placeholder_8: placeholder_18: Buffer(placeholder_13, int32, [33], []), placeholder_7: placeholder_19: Buffer(placeholder_12, int32, [4916], [])} {
+  preflattened_buffer_map = {placeholder_8: placeholder_15: Buffer(placeholder_13, int32, [33], []), placeholder_9: placeholder_16: Buffer(placeholder_14, float32, [128, 512], []), placeholder_6: placeholder_17: Buffer(placeholder_11, float32, [4916, 16, 1], []), placeholder_5: placeholder_18: Buffer(placeholder_10, float32, [128, 256], []), compute_1: compute_3: Buffer(compute_2, float32, [128, 512], []), placeholder_7: placeholder_19: Buffer(placeholder_12, int32, [4916], [])} {
   for (i0.outer.i1.outer.fused: int32, 0, 64) &quot;parallel&quot; {
     allocate(compute_4: Pointer(global float32), float32, [1024]), storage_scope = global {
-      for (nb_j.inner: int32, 0, 2) {
-        for (i.inner.init: int32, 0, 32) {
-          for (j.init: int32, 0, 16) {
-            compute_5: Buffer(compute_4, float32, [1024], [])[(((i.inner.init*32) + (nb_j.inner*16)) + j.init)] = 0f32
+      for (i.outer.inner: int32, 0, 4) {
+        for (nb_j.inner: int32, 0, 2) {
+          for (i.inner.init: int32, 0, 8) {
+            for (j.init: int32, 0, 16) {
+              compute_5: Buffer(compute_4, float32, [1024], [])[((((i.outer.inner*256) + (i.inner.init*32)) + (nb_j.inner*16)) + j.init)] = 0f32
+            }
           }
-        }
-        for (elem_idx: int32, 0, let cse_var_1: int32 = ((floormod(i0.outer.i1.outer.fused, 16)*2) + nb_j.inner) in (placeholder_3[(cse_var_1 + 1)] - placeholder_3[cse_var_1])) {
-          for (i.inner: int32, 0, 32) {
-            for (j: int32, 0, 16) {
-              let cse_var_3: int32 = ((floormod(i0.outer.i1.outer.fused, 16)*2) + nb_j.inner)
-              let cse_var_2: int32 = (((i.inner*32) + (nb_j.inner*16)) + j)
-              compute_5[cse_var_2] = (compute_5[cse_var_2] + (placeholder_1[(((placeholder_3[cse_var_3]*16) + (elem_idx*16)) + j)]*max(placeholder[(((floordiv(i0.outer.i1.outer.fused, 16)*8192) + (i.inner*256)) + placeholder_2[(placeholder_3[cse_var_3] + elem_idx)])], 0f32)))
+          for (elem_idx: int32, 0, let cse_var_1: int32 = ((floormod(i0.outer.i1.outer.fused, 16)*2) + nb_j.inner) in (placeholder_3[(cse_var_1 + 1)] - placeholder_3[cse_var_1])) {
+            for (i.inner: int32, 0, 8) {
+              for (j: int32, 0, 16) {
+                let cse_var_3: int32 = ((floormod(i0.outer.i1.outer.fused, 16)*2) + nb_j.inner)
+                let cse_var_2: int32 = ((((i.outer.inner*256) + (i.inner*32)) + (nb_j.inner*16)) + j)
+                compute_5[cse_var_2] = (compute_5[cse_var_2] + (placeholder_1[(((placeholder_3[cse_var_3]*16) + (elem_idx*16)) + j)]*max(placeholder[((((floordiv(i0.outer.i1.outer.fused, 16)*8192) + (i.outer.inner*2048)) + (i.inner*256)) + placeholder_2[(placeholder_3[cse_var_3] + elem_idx)])], 0f32)))
+              }
             }
           }
         }
@@ -684,7 +686,7 @@ layout transformation, parallelization, vectorization, unrolling, and operator f
 <span class="p">)</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 1.620 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 1.567 ms
 </pre></div>
 </div>
 <div class="admonition note">
diff --git a/docs/how_to/tune_with_autotvm/sg_execution_times.html b/docs/how_to/tune_with_autotvm/sg_execution_times.html
index 6bf6414bed..1fc5d20d75 100644
--- a/docs/how_to/tune_with_autotvm/sg_execution_times.html
+++ b/docs/how_to/tune_with_autotvm/sg_execution_times.html
@@ -327,7 +327,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-tune-with-autotvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:38.569</strong> total execution time for <strong>how_to_tune_with_autotvm</strong> files:</p>
+<p><strong>00:52.134</strong> total execution time for <strong>how_to_tune_with_autotvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -336,11 +336,11 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_conv2d_cuda.html#sphx-glr-how-to-tune-with-autotvm-tune-conv2d-cuda-py"><span class="std std-ref">Tuning High Performance Convolution on NVIDIA GPUs</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_cuda.py</span></code>)</p></td>
-<td><p>00:38.533</p></td>
+<td><p>00:52.099</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tune_relay_x86.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-x86-py"><span class="std std-ref">Auto-tuning a Convolutional Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_x86.py</span></code>)</p></td>
-<td><p>00:00.021</p></td>
+<td><p>00:00.020</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tune_relay_cuda.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-cuda-py"><span class="std std-ref">Auto-tuning a Convolutional Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_cuda.py</span></code>)</p></td>
diff --git a/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html b/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
index 80e892f46f..2be1f35f31 100644
--- a/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
+++ b/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
@@ -557,9 +557,7 @@ for this template</p>
 waiting for device...
 device available
 Get devices for measurement successfully!
-No: 1   GFLOPS: 41.90/41.90     result: MeasureResult(costs=(0.005524936,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.104245662689209, timestamp=1664521849.1920621) [(&#39;tile_f&#39;, [-1, 4, 8, 1]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 1]), (&#39;tile_rc&#39;, [-1, 1, 8]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,4160669
-No: 2   GFLOPS: 209.95/209.95   result: MeasureResult(costs=(0.0011026573379310344,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.8389084339141846, timestamp=1664521850.0930169)      [(&#39;tile_f&#39;, [-1, 1, 16, 8]), (&#39;tile_y&#39;, [-1, 1, 7, 1]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 16, 4]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 1)],None,7052038
-No: 3   GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+No: 1   GFLOPS: 0.00/0.00       result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 588, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 540, in _build_func_common
@@ -681,8 +679,9 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 871, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 2, 8]), (&#39;tile_y&#39;, [-1, 1, 7, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 4, 128]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 1)],None,5409063
-No: 4   GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 32, 16]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 32, 8]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,4179304
+No: 2   GFLOPS: 8.88/8.88       result: MeasureResult(costs=(0.02607909575,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.145456790924072, timestamp=1664540728.8273628)       [(&#39;tile_f&#39;, [-1, 32, 2, 2]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 2, 2]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 1)],None,8752769
+No: 3   GFLOPS: 0.00/8.88       result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 588, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 540, in _build_func_common
@@ -804,8 +803,8 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 871, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 2, 8, 2]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 32, 16]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 1)],None,8271420
-No: 5   GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 128, 1]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 1]), (&#39;tile_rc&#39;, [-1, 8, 2]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,4885809
+No: 4   GFLOPS: 0.00/8.88       result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 588, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 540, in _build_func_common
@@ -927,8 +926,9 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 871, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 8, 1, 16]), (&#39;tile_y&#39;, [-1, 1, 7, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 32, 4]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 0)],None,2217327
-No: 6   GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 4, 1, 4]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 16, 8]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 0)],None,499282
+No: 5   GFLOPS: 2.37/8.88       result: MeasureResult(costs=(0.09773433675,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.024892807006836, timestamp=1664540733.2792308)       [(&#39;tile_f&#39;, [-1, 8, 4, 16]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 1, 2]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 0)],None,1778658
+No: 6   GFLOPS: 0.00/8.88       result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 588, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 540, in _build_func_common
@@ -1050,9 +1050,10 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 871, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 128, 2, 2]), (&#39;tile_y&#39;, [-1, 1, 7, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 1, 32]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,3627871
-No: 7   GFLOPS: 1.32/209.95     result: MeasureResult(costs=(0.1760204925,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.549057245254517, timestamp=1664521857.5830307)        [(&#39;tile_f&#39;, [-1, 8, 4, 4]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 1, 2]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 1)],None,6620578
-No: 8   GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 4, 1, 64]), (&#39;tile_y&#39;, [-1, 1, 7, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 1]), (&#39;tile_rc&#39;, [-1, 256, 1]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 0)],None,1190402
+No: 7   GFLOPS: 3.47/8.88       result: MeasureResult(costs=(0.06671308875,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.086146831512451, timestamp=1664540735.3611345)       [(&#39;tile_f&#39;, [-1, 1, 4, 64]), (&#39;tile_y&#39;, [-1, 1, 7, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 2, 1]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 1)],None,6782807
+No: 8   GFLOPS: 21.12/21.12     result: MeasureResult(costs=(0.0109592353,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.3600950241088867, timestamp=1664540736.1205916)       [(&#39;tile_f&#39;, [-1, 32, 4, 1]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 1, 2]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,4876984
+No: 9   GFLOPS: 0.00/21.12      result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 588, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 540, in _build_func_common
@@ -1174,8 +1175,8 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 871, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 128, 4]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 2, 128]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 0)],None,3277035
-No: 9   GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 2, 4, 64]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 32, 16]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 1)],None,8269568
+No: 10  GFLOPS: 0.00/21.12      result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 588, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 540, in _build_func_common
@@ -1297,132 +1298,162 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 871, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 32, 2, 2]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 64, 4]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 0)],None,2606629
-No: 10  GFLOPS: 4.00/209.95     result: MeasureResult(costs=(0.0578777045,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.9959490299224854, timestamp=1664521859.7966175)       [(&#39;tile_f&#39;, [-1, 2, 8, 16]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 32, 1]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 0)],None,1761720
-No: 11  GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
-  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 588, in __call__
-    func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
-  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 540, in _build_func_common
-    func = build(s, args, target_host=task.target_host, runtime=runtime)
-  File &quot;/workspace/python/tvm/driver/build_module.py&quot;, line 227, in build
-    input_mod = lower(inputs, args, name=name, binds=binds)
-  File &quot;/workspace/python/tvm/driver/build_module.py&quot;, line 134, in lower
-    return ffi.lower_schedule(inp, args, name, binds, simple_mode)
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 4, 2, 64]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 64, 1]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 1)],None,6218946
+No: 11  GFLOPS: 0.00/21.12      result: Traceback (most recent call last):
+  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 738, in __call__
+    yield remote, remote.load_module(os.path.split(build_result.filename)[1])
+  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 702, in run_through_rpc
+    costs = time_f(*args).results
+  File &quot;/workspace/python/tvm/runtime/module.py&quot;, line 357, in evaluator
+    blob = feval(*args)
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 331, in tvm._ffi._cy3.core.PackedFuncBase.__call__
-  File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 276, in tvm._ffi._cy3.core.FuncCall
+  File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 262, in tvm._ffi._cy3.core.FuncCall
+  File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 251, in tvm._ffi._cy3.core.FuncCall3
   File &quot;tvm/_ffi/_cython/./base.pxi&quot;, line 181, in tvm._ffi._cy3.core.CHECK_CALL
 tvm._ffi.base.TVMError: Traceback (most recent call last):
-  24: TVMFuncCall
+  4: TVMFuncCall
         at ../src/runtime/c_runtime_api.cc:477
-  23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
-        at ../include/tvm/runtime/packed_func.h:1217
-  22: Call
-        at ../include/tvm/runtime/packed_func.h:1213
-  21: operator()
-        at ../include/tvm/runtime/packed_func.h:1731
-  20: unpack_call&lt;tvm::IRModule, 5, tvm::&lt;lambda(tvm::te::Schedule, const tvm::runtime::Array&lt;tvm::runtime::ObjectRef&gt;&amp;, const tvm::runtime::String&amp;, const tvm::runtime::Map&lt;tvm::te::Tensor, tvm::tir::Buffer&gt;&amp;, bool)&gt; &gt;
-        at ../include/tvm/runtime/packed_func.h:1671
-  19: run&lt;&gt;
-        at ../include/tvm/runtime/packed_func.h:1631
-  18: run&lt;tvm::runtime::TVMMovableArgValueWithContext_&gt;
-        at ../include/tvm/runtime/packed_func.h:1631
-  17: run&lt;tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_&gt;
-        at ../include/tvm/runtime/packed_func.h:1631
-  16: run&lt;tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_&gt;
-        at ../include/tvm/runtime/packed_func.h:1631
-  15: run&lt;tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_&gt;
-        at ../include/tvm/runtime/packed_func.h:1631
-  14: run&lt;tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_&gt;
-        at ../include/tvm/runtime/packed_func.h:1646
-  13: operator()
-        at ../src/driver/driver_api.cc:379
-  12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array&lt;tvm::runtime::ObjectRef, void&gt; const&amp;, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;, std::unordered_map&lt;tvm::te::Tensor, tvm::tir::Buffer, std::hash&lt;tvm::te::Tensor&gt;, std::equal_to&lt;tvm::te::Tensor&gt;, std::allocator&lt;std::pair&lt;tvm::te::Tensor const, tvm::tir::Buffer&gt; &gt; &gt; const&amp;, tvm::GlobalVarSupply, bool)
-        at ../src/driver/driver_api.cc:365
-  11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array&lt;tvm::transform::Pass, void&gt;)
-        at ../src/driver/driver_api.cc:260
-  10: tvm::transform::Pass::operator()(tvm::IRModule) const
-        at ../src/ir/transform.cc:258
-  9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const
-        at ../src/ir/transform.cc:274
-  8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const
-        at ../src/ir/transform.cc:453
-  7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const
-        at ../src/ir/transform.cc:274
-  6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const
-        at ../src/tir/ir/transform.cc:100
-  5: tvm::runtime::TypedPackedFunc&lt;tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)&gt;::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
-        at ../include/tvm/runtime/packed_func.h:1750
-  4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher&lt;tvm::tir::PrimFunc&gt;::run&lt;tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext&gt;(tvm::runtime::PackedFunc const&amp;, tvm::tir::PrimFunc&amp;&amp;, tvm::IRModule&amp;&amp;, tvm::transform::PassContext&amp;&amp;)
-        at ../include/tvm/runtime/packed_func.h:1694
-  3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()&lt;tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext&gt;(tvm::tir::PrimFunc&amp;&amp;, tvm::IRModule&amp;&amp;, tvm::transform::PassContext&amp;&amp;) const
-        at ../include/tvm/runtime/packed_func.h:1618
-  2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+  3: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
         at ../include/tvm/runtime/packed_func.h:1217
-  1: Call
-        at ../include/tvm/runtime/packed_func.h:1213
-  0: operator()
-        at ../src/runtime/c_runtime_api.cc:534
-  File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
-  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 871, in verify_pass
-    raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel
+  2: tvm::runtime::RPCWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+        at ../src/runtime/rpc/rpc_module.cc:129
+  1: tvm::runtime::RPCClientSession::CallFunc(void*, TVMValue const*, int const*, int, std::function&lt;void (tvm::runtime::TVMArgs)&gt; const&amp;)
+        at ../src/runtime/rpc/rpc_endpoint.cc:1009
+  0: tvm::runtime::RPCEndpoint::CallFunc(void*, TVMValue const*, int const*, int, std::function&lt;void (tvm::runtime::TVMArgs)&gt;)
+        at ../src/runtime/rpc/rpc_endpoint.cc:801
+  File &quot;../src/runtime/rpc/rpc_endpoint.cc&quot;, line 801
+TVMError:
+---------------------------------------------------------------
+An error occurred during the execution of TVM.
+For more information, please see: https://tvm.apache.org/docs/errors.html
+---------------------------------------------------------------
+  Check failed: (code == RPCCode::kReturn) is false: code=kShutdown
+
+During handling of the above exception, another exception occurred:
 
 Traceback (most recent call last):
-  24: TVMFuncCall
-        at ../src/runtime/c_runtime_api.cc:477
-  23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
-        at ../include/tvm/runtime/packed_func.h:1217
-  22: Call
-        at ../include/tvm/runtime/packed_func.h:1213
-  21: operator()
-        at ../include/tvm/runtime/packed_func.h:1731
-  20: unpack_call&lt;tvm::IRModule, 5, tvm::&lt;lambda(tvm::te::Schedule, const tvm::runtime::Array&lt;tvm::runtime::ObjectRef&gt;&amp;, const tvm::runtime::String&amp;, const tvm::runtime::Map&lt;tvm::te::Tensor, tvm::tir::Buffer&gt;&amp;, bool)&gt; &gt;
-        at ../include/tvm/runtime/packed_func.h:1671
-  19: run&lt;&gt;
-        at ../include/tvm/runtime/packed_func.h:1631
-  18: run&lt;tvm::runtime::TVMMovableArgValueWithContext_&gt;
-        at ../include/tvm/runtime/packed_func.h:1631
-  17: run&lt;tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_&gt;
-        at ../include/tvm/runtime/packed_func.h:1631
-  16: run&lt;tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_&gt;
-        at ../include/tvm/runtime/packed_func.h:1631
-  15: run&lt;tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_&gt;
-        at ../include/tvm/runtime/packed_func.h:1631
-  14: run&lt;tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_&gt;
-        at ../include/tvm/runtime/packed_func.h:1646
-  13: operator()
-        at ../src/driver/driver_api.cc:379
-  12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array&lt;tvm::runtime::ObjectRef, void&gt; const&amp;, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;, std::unordered_map&lt;tvm::te::Tensor, tvm::tir::Buffer, std::hash&lt;tvm::te::Tensor&gt;, std::equal_to&lt;tvm::te::Tensor&gt;, std::allocator&lt;std::pair&lt;tvm::te::Tensor const, tvm::tir::Buffer&gt; &gt; &gt; const&amp;, tvm::GlobalVarSupply, bool)
-        at ../src/driver/driver_api.cc:365
-  11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array&lt;tvm::transform::Pass, void&gt;)
-        at ../src/driver/driver_api.cc:260
-  10: tvm::transform::Pass::operator()(tvm::IRModule) const
-        at ../src/ir/transform.cc:258
-  9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const
-        at ../src/ir/transform.cc:274
-  8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const
-        at ../src/ir/transform.cc:453
-  7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const
-        at ../src/ir/transform.cc:274
-  6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&amp;) const
-        at ../src/tir/ir/transform.cc:100
-  5: tvm::runtime::TypedPackedFunc&lt;tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)&gt;::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
-        at ../include/tvm/runtime/packed_func.h:1750
-  4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher&lt;tvm::tir::PrimFunc&gt;::run&lt;tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext&gt;(tvm::runtime::PackedFunc const&amp;, tvm::tir::PrimFunc&amp;&amp;, tvm::IRModule&amp;&amp;, tvm::transform::PassContext&amp;&amp;)
-        at ../include/tvm/runtime/packed_func.h:1694
-  3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()&lt;tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext&gt;(tvm::tir::PrimFunc&amp;&amp;, tvm::IRModule&amp;&amp;, tvm::transform::PassContext&amp;&amp;) const
+  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 702, in run_through_rpc
+    costs = time_f(*args).results
+  File &quot;/usr/lib/python3.7/contextlib.py&quot;, line 130, in __exit__
+    self.gen.throw(type, value, traceback)
+  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 742, in __call__
+    remote.remove(build_result.filename)
+  File &quot;/workspace/python/tvm/rpc/client.py&quot;, line 143, in remove
+    self._remote_funcs[&quot;remove&quot;] = self.get_function(&quot;tvm.rpc.server.remove&quot;)
+  File &quot;/workspace/python/tvm/rpc/client.py&quot;, line 71, in get_function
+    return self._sess.get_function(name)
+  File &quot;/workspace/python/tvm/runtime/module.py&quot;, line 171, in get_function
+    self.handle, c_str(name), ctypes.c_int(query_imports), ctypes.byref(ret_handle)
+  File &quot;/workspace/python/tvm/_ffi/base.py&quot;, line 348, in check_call
+    raise get_last_ffi_error()
+tvm._ffi.base.TVMError: Traceback (most recent call last):
+  52: 0xffffffffffffffff
+  51: _start
+  50: __libc_start_main
+  49: _Py_UnixMain
+  48: 0x0000000000650da0
+  47: 0x0000000000650afa
+  46: _PyFunction_FastCallDict
+  45: _PyEval_EvalCodeWithName
+  44: _PyEval_EvalFrameDefault
+  43: _PyFunction_FastCallKeywords
+  42: _PyEval_EvalCodeWithName
+  41: _PyEval_EvalFrameDefault
+  40: _PyMethodDef_RawFastCallKeywords
+  39: 0x0000000000546369
+  38: _PyEval_EvalCodeWithName
+  37: _PyEval_EvalFrameDefault
+  36: _PyFunction_FastCallKeywords
+  35: _PyEval_EvalCodeWithName
+  34: _PyEval_EvalFrameDefault
+  33: _PyFunction_FastCallDict
+  32: _PyEval_EvalCodeWithName
+  31: _PyEval_EvalFrameDefault
+  30: _PyObject_FastCallDict
+  29: 0x00000000004c06e1
+  28: _PyFunction_FastCallDict
+  27: _PyEval_EvalFrameDefault
+  26: _PyMethodDescr_FastCallKeywords
+  25: 0x00000000005dcb58
+  24: 0x00000000005dc83f
+  23: 0x00000000004ba127
+  22: _PyEval_EvalFrameDefault
+  21: _PyFunction_FastCallKeywords
+  20: _PyEval_EvalFrameDefault
+  19: _PyFunction_FastCallKeywords
+  18: _PyEval_EvalFrameDefault
+  17: _PyFunction_FastCallKeywords
+  16: _PyEval_EvalCodeWithName
+  15: _PyEval_EvalFrameDefault
+  14: 0x0000000000537c30
+  13: _PyObject_FastCallKeywords
+  12: 0x00007f0d086adfa2
+  11: _ctypes_callproc
+  10: ffi_call
+  9: ffi_call_unix64
+  8: TVMModGetFunction
+        at ../src/runtime/c_runtime_api.cc:408
+  7: tvm::runtime::ModuleNode::GetFunction(std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;, bool)
+        at ../src/runtime/module.cc:66
+  6: tvm::runtime::RPCModuleNode::GetFunction(std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;, tvm::runtime::ObjectPtr&lt;tvm::runtime::Object&gt; const&amp;)
+        at ../src/runtime/rpc/rpc_module.cc:181
+  5: tvm::runtime::RPCClientSession::GetFunction(std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;)
+        at ../src/runtime/rpc/rpc_endpoint.cc:1004
+  4: tvm::runtime::TVMRetValue tvm::runtime::RPCEndpoint::SysCallRemote&lt;std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;&gt;(tvm::runtime::RPCCode, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;)
+        at ../src/runtime/rpc/rpc_endpoint.h:211
+  3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()&lt;int, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;&gt;(int&amp;&amp;, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;) const
         at ../include/tvm/runtime/packed_func.h:1618
   2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
         at ../include/tvm/runtime/packed_func.h:1217
   1: Call
         at ../include/tvm/runtime/packed_func.h:1213
   0: operator()
-        at ../src/runtime/c_runtime_api.cc:534
-  File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
-  File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 871, in verify_pass
-    raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 1, 128]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 4, 16]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 1)],None,5743310
-No: 12  GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+        at ../src/runtime/rpc/rpc_endpoint.cc:681
+  File &quot;../src/runtime/rpc/rpc_endpoint.cc&quot;, line 681
+TVMError:
+---------------------------------------------------------------
+An error occurred during the execution of TVM.
+For more information, please see: https://tvm.apache.org/docs/errors.html
+---------------------------------------------------------------
+  Check failed: (code == RPCCode::kReturn) is false: code=1
+
+Traceback (most recent call last):
+  52: 0xffffffffffffffff
+  51: _start
+  50: __libc_start_main
+  49: _Py_UnixMain
+  48: 0x0000000000650da0
+  47: 0x0000000000650afa
+  46: _PyFunction_FastCallDict
+  45: _PyEval_EvalCodeWithName
+  44: _PyEval_EvalFrameDefault
+  43: _PyFunction_FastCallKeywords
+  42: _PyEval_EvalCodeWithName
+  41: _PyEval_EvalFrameDefault
+  40: _PyMethodDef_RawFastCallKeywords
+  39: 0x0000000000546369
+  38: _PyEval_EvalCodeWithName
+  37: _PyEval_EvalFrameDefault
+  36: _PyFunction_FastCallKeywords
+  35: _PyEval_EvalCodeWithName
+  34: _PyEval_EvalFrameDefault
+  33: _PyFunction_FastCallDict
+  32: _PyEval_EvalCodeWithName
+  31: _PyEval_EvalFrameDefault
+  30: _PyObject_FastCallDict
+  29: 0x00000000004c06e1
+  28: _PyFunction_FastCallDict
+  27: _PyEval_EvalFrameDefault
+  26: _PyMethodDescr_FastCallKeywords
+  25: 0x00000000005dcb58
+  24: 0x00000000005dc83f
+  23: 0x00000000004ba127
+  22: _PyEval_EvalFrameDefault
+  21: _PyFunction_FastCallKeywords
+  20: _PyEval_EvalFrameDefault
+  19: _PyFunction_FastCall      [(&#39;tile_f&#39;, [-1, 1, 1, 64]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 1, 16]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 0)],None,1091180
+No: 12  GFLOPS: 10.22/21.12     result: MeasureResult(costs=(0.02264849083333333,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.971567392349243, timestamp=1664540744.7623453) [(&#39;tile_f&#39;, [-1, 2, 4, 4]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 2, 4]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 1)],None,9364416
+No: 13  GFLOPS: 0.00/21.12      result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 588, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 540, in _build_func_common
@@ -1544,10 +1575,10 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 871, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 2, 16, 4]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 2, 32]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 0)],None,921707
-No: 13  GFLOPS: 5.01/209.95     result: MeasureResult(costs=(0.046250613999999995,), error_no=MeasureErrorNo.NO_ERROR, all_cost=5.1030497550964355, timestamp=1664521865.0797503)       [(&#39;tile_f&#39;, [-1, 2, 1, 8]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 1, 1]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 1)],None,9102637
-No: 14  GFLOPS: 71.47/209.95    result: MeasureResult(costs=(0.0032390244838709672,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.8225221633911133, timestamp=1664521865.7329972)      [(&#39;tile_f&#39;, [-1, 4, 8, 1]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 1, 8]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 1)],None,7454729
-No: 15  GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 8, 4, 2]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 128, 2]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 0)],None,254615
+No: 14  GFLOPS: 1.22/21.12      result: MeasureResult(costs=(0.1892443235,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.4586076736450195, timestamp=1664540749.4042463)       [(&#39;tile_f&#39;, [-1, 8, 1, 2]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 7]), (&#39;tile_rc&#39;, [-1, 2, 32]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 1)],None,5568038
+No: 15  GFLOPS: 54.18/54.18     result: MeasureResult(costs=(0.004272896125000001,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.2073407173156738, timestamp=1664540750.0702045)       [(&#39;tile_f&#39;, [-1, 2, 1, 16]), (&#39;tile_y&#39;, [-1, 1, 7, 1]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 32, 1]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 0)],None,19085
+No: 16  GFLOPS: 0.00/54.18      result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 588, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 540, in _build_func_common
@@ -1669,8 +1700,9 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 871, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 32, 4, 1]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 1, 1]), (&#39;tile_rc&#39;, [-1, 256, 2]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 0)],None,257644
-No: 16  GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 1, 256]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 1, 1]), (&#39;tile_rc&#39;, [-1, 32, 16]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 1)],None,6914156
+No: 17  GFLOPS: 7.45/54.18      result: MeasureResult(costs=(0.031059663749999997,), error_no=MeasureErrorNo.NO_ERROR, all_cost=10.050727844238281, timestamp=1664540760.2964559)       [(&#39;tile_f&#39;, [-1, 4, 1, 2]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 1, 1]), (&#39;tile_rc&#39;, [-1, 1, 64]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 1)],None,8871117
+No: 18  GFLOPS: 0.00/54.18      result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 588, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 540, in _build_func_common
@@ -1792,8 +1824,8 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 871, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 32, 1, 16]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 1, 1, 1]), (&#39;tile_rc&#39;, [-1, 4, 128]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 0)],None,2503549
-No: 17  GFLOPS: 0.00/209.95     result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 128, 2]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 4, 128]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 0)],None,955117
+No: 19  GFLOPS: 0.00/54.18      result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 588, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 540, in _build_func_common
@@ -1915,9 +1947,8 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 871, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 128, 2]), (&#39;tile_y&#39;, [-1, 1, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 1]), (&#39;tile_rc&#39;, [-1, 1, 4]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 0)],None,647777
-No: 18  GFLOPS: 265.53/265.53   result: MeasureResult(costs=(0.0008718405652173913,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.2890162467956543, timestamp=1664521867.6697655)      [(&#39;tile_f&#39;, [-1, 1, 1, 8]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 1, 8]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,3775556
-No: 19  GFLOPS: 0.00/265.53     result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 2, 128, 1]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 1]), (&#39;tile_rc&#39;, [-1, 64, 1]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 512), (&#39;unroll_explicit&#39;, 1)],None,8152590
+No: 20  GFLOPS: 0.00/54.18      result: Traceback (most recent call last):
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 588, in __call__
     func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 540, in _build_func_common
@@ -2039,8 +2070,7 @@ Traceback (most recent call last):
   File &quot;tvm/_ffi/_cython/./packed_func.pxi&quot;, line 56, in tvm._ffi._cy3.core.tvm_callback
   File &quot;/workspace/python/tvm/autotvm/measure/measure_methods.py&quot;, line 871, in verify_pass
     raise InstantiationError(&quot;Skipped because of invalid gpu kernel&quot;)
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 1, 2, 128]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 2, 8]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 3, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,4553553
-No: 20  GFLOPS: 1.00/265.53     result: MeasureResult(costs=(0.23153633099999998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=4.421393156051636, timestamp=1664521871.0406532) [(&#39;tile_f&#39;, [-1, 4, 4, 32]), (&#39;tile_y&#39;, [-1, 1, 1, 7]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 1, 1]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 0)],None,1356936
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel        [(&#39;tile_f&#39;, [-1, 4, 32, 1]), (&#39;tile_y&#39;, [-1, 1, 7, 1]), (&#39;tile_x&#39;, [-1, 1, 1, 1]), (&#39;tile_rc&#39;, [-1, 4, 8]), (&#39;tile_ry&#39;, [-1, 1, 3]), (&#39;tile_rx&#39;, [-1, 1, 3]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,5136162
 </pre></div>
 </div>
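
Each "No: k  GFLOPS: a/b" line in the log above records one measured candidate: the first number is that trial's throughput, the second the best seen so far, and configurations that cannot run on the GPU are skipped with an InstantiationError rather than counted as failures. As a minimal sketch (not part of the generated page; the "conv2d.log" file name is assumed from earlier in this tutorial), the same records can be replayed with autotvm's record API:

    from tvm import autotvm

    # Iterate over the (MeasureInput, MeasureResult) pairs stored in the
    # tuning log; "conv2d.log" is assumed from earlier in the tutorial.
    for inp, res in autotvm.record.load_from_file("conv2d.log"):
        print(inp.config, res.costs, res.error_no)
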
 <p>Finally we can inspect the best config from the log file, check correctness,
@@ -2079,9 +2109,9 @@ and measure running time.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Finish loading 20 records
 
 Best config:
-[(&#39;tile_f&#39;, [-1, 1, 1, 8]), (&#39;tile_y&#39;, [-1, 7, 1, 1]), (&#39;tile_x&#39;, [-1, 1, 7, 1]), (&#39;tile_rc&#39;, [-1, 1, 8]), (&#39;tile_ry&#39;, [-1, 3, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 1500), (&#39;unroll_explicit&#39;, 0)],None,3775556
+[(&#39;tile_f&#39;, [-1, 2, 1, 16]), (&#39;tile_y&#39;, [-1, 1, 7, 1]), (&#39;tile_x&#39;, [-1, 7, 1, 1]), (&#39;tile_rc&#39;, [-1, 32, 1]), (&#39;tile_ry&#39;, [-1, 1, 1]), (&#39;tile_rx&#39;, [-1, 1, 1]), (&#39;auto_unroll_max_step&#39;, 0), (&#39;unroll_explicit&#39;, 0)],None,19085
 Finish loading 20 records
-Time cost of this operator: 0.001233
+Time cost of this operator: 0.004598
 </pre></div>
 </div>
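
The "Best config" block above is obtained by querying the log for the fastest valid record. A minimal sketch, assuming the task object and the "conv2d.log" file from earlier in this tutorial:

    from tvm import autotvm

    # Query the best record for this task from the tuning log.
    dispatch_context = autotvm.apply_history_best("conv2d.log")
    best_config = dispatch_context.query(task.target, task.workload)
    print("Best config:", best_config)

Compiling inside the same context (with autotvm.apply_history_best(...):) makes TVM build the operator with that configuration applied, which is how the "Time cost of this operator" figure above is measured.
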
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autotvm-tune-conv2d-cuda-py">
diff --git a/docs/how_to/tune_with_autotvm/tune_relay_cuda.html b/docs/how_to/tune_with_autotvm/tune_relay_cuda.html
index a55eb450d0..6b05ee9e4e 100644
--- a/docs/how_to/tune_with_autotvm/tune_relay_cuda.html
+++ b/docs/how_to/tune_with_autotvm/tune_relay_cuda.html
@@ -482,7 +482,7 @@ We can also load models from MXNet, ONNX and TensorFlow.</p>
 <span class="p">}</span>
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/workspace/python/tvm/target/target.py:389: UserWarning: Try specifying cuda arch by adding &#39;arch=sm_xx&#39; to your target.
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/workspace/python/tvm/target/target.py:393: UserWarning: Try specifying cuda arch by adding &#39;arch=sm_xx&#39; to your target.
   warnings.warn(&quot;Try specifying cuda arch by adding &#39;arch=sm_xx&#39; to your target.&quot;)
 </pre></div>
 </div>
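
The warning above can be silenced by pinning the GPU architecture in the target string. A hedged sketch (sm_75 is an example value; substitute your device's actual compute capability):

    import tvm

    # Specify the CUDA architecture explicitly, as the warning suggests.
    target = tvm.target.Target("cuda -arch=sm_75")
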
diff --git a/docs/how_to/work_with_microtvm/micro_autotune.html b/docs/how_to/work_with_microtvm/micro_autotune.html
index e568d73904..e37f805f4d 100644
--- a/docs/how_to/work_with_microtvm/micro_autotune.html
+++ b/docs/how_to/work_with_microtvm/micro_autotune.html
@@ -582,10 +582,10 @@ the tuned operator.</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>########## Build without Autotuning ##########
 Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)
 ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------
-tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  310.9     98.719   (1, 2, 10, 10, 3)  2       1        [310.9]
-tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       3.063     0.973    (1, 6, 10, 10)     1       1        [3.063]
-tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.97      0.308    (1, 1, 10, 10, 3)  1       1        [0.97]
-Total_time                                    -                                             314.933   -        -                  -       -        -
+tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  311.0     98.724   (1, 2, 10, 10, 3)  2       1        [311.0]
+tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       3.03      0.962    (1, 6, 10, 10)     1       1        [3.03]
+tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.99      0.314    (1, 1, 10, 10, 3)  1       1        [0.99]
+Total_time                                    -                                             315.021   -        -                  -       -        -
 </pre></div>
 </div>
 </div>
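
The per-node breakdown above comes from profiling the model one operator at a time. As a rough host-side sketch using TVM's debug executor (the tutorial itself profiles through a microTVM session; "lib" is taken to be a factory module from relay.build and "dev" a TVM device, both assumptions here):

    from tvm.contrib.debugger import debug_executor

    # Runs every node and reports a Node Name / Time(us) / Time(%) table
    # similar to the one above.
    m = debug_executor.create(lib.get_graph_json(), lib.get_lib(), dev)
    m.run()
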
@@ -636,10 +636,10 @@ Total_time                                    -
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>########## Build with Autotuning ##########
 Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs  Measurements(us)
 ---------                                     ---                                           --------  -------  -----              ------  -------  ----------------
-tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  102.7     97.456   (1, 6, 10, 10, 1)  2       1        [102.7]
-tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.829     1.736    (1, 6, 10, 10)     1       1        [1.829]
-tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.852     0.809    (1, 3, 10, 10, 1)  1       1        [0.852]
-Total_time                                    -                                             105.381   -        -                  -       -        -
+tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  102.2     97.408   (1, 6, 10, 10, 1)  2       1        [102.2]
+tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       1.767     1.684    (1, 6, 10, 10)     1       1        [1.767]
+tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         0.952     0.908    (1, 1, 10, 10, 3)  1       1        [0.952]
+Total_time                                    -                                             104.919   -        -                  -       -        -
 </pre></div>
 </div>
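
Comparing the two tables: autotuning cuts the total per-inference time from about 315 us to about 105 us, roughly a 3x speedup, almost all of it coming from the fused conv2d operator (311.0 us down to 102.2 us).
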
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-autotune-py">
diff --git a/docs/how_to/work_with_microtvm/micro_train.html b/docs/how_to/work_with_microtvm/micro_train.html
index 925a578780..7e5e8961b4 100644
--- a/docs/how_to/work_with_microtvm/micro_train.html
+++ b/docs/how_to/work_with_microtvm/micro_train.html
@@ -516,7 +516,7 @@ take about <strong>2 minutes</strong> to download the Stanford Cars, while COCO
 <a href="https://docs.python.org/3/library/shutil.html#shutil.move" title="shutil.move" class="sphx-glr-backref-module-shutil sphx-glr-backref-type-py-function"><span class="n">shutil</span><span class="o">.</span><span class="n">move</span></a><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;</span><span class="si">{</span><a href="https://docs.python.org/3/library/stdtypes.html#str" title="builtins.str" class="sphx-glr-backref-module-builtins sphx-glr-backref-typ [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&#39;/tmp/tmp_yfqyeec/images/random&#39;
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&#39;/tmp/tmp34kfzoxj/images/random&#39;
 </pre></div>
 </div>
 </div>
@@ -576,8 +576,8 @@ objects to other stuff? We can display some examples from our datasets using <co
     <span class="n">plt</span><span class="o">.</span><span class="n">axis</span><span class="p">(</span><span class="s2">&quot;off&quot;</span><span class="p">)</span>
 </pre></div>
 </div>
-<img src="../../_images/sphx_glr_micro_train_001.png" srcset="../../_images/sphx_glr_micro_train_001.png" alt="[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [1.0, 0.0]" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/tmp/tmp_yfqyeec/images/target contains 8144 images
-/tmp/tmp_yfqyeec/images/random contains 5000 images
+<img src="../../_images/sphx_glr_micro_train_001.png" srcset="../../_images/sphx_glr_micro_train_001.png" alt="[0.0, 1.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0]" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/tmp/tmp34kfzoxj/images/target contains 8144 images
+/tmp/tmp34kfzoxj/images/random contains 5000 images
 </pre></div>
 </div>
 </div>
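
The image grid above is rendered with a short matplotlib loop. A hypothetical sketch (SAMPLES stands in for a list of (image, one-hot label) pairs drawn from the datasets; the actual tutorial iterates over the TensorFlow datasets directly):

    import matplotlib.pyplot as plt

    # Show ten samples with their one-hot labels, as in the grid above.
    for i, (image, label) in enumerate(SAMPLES[:10]):
        plt.subplot(2, 5, i + 1)
        plt.imshow(image)
        plt.title(str(label))
        plt.axis("off")
    plt.show()
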
@@ -689,13 +689,13 @@ the time on our validation set).</p>
 </pre></div>
 </div>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Epoch 1/3
-328/328 - 47s - loss: 0.2058 - accuracy: 0.9242 - val_loss: 0.1113 - val_accuracy: 0.9592 - 47s/epoch - 143ms/step
+328/328 - 46s - loss: 0.2255 - accuracy: 0.9233 - val_loss: 0.1192 - val_accuracy: 0.9592 - 46s/epoch - 141ms/step
 Epoch 2/3
-328/328 - 43s - loss: 0.0924 - accuracy: 0.9652 - val_loss: 0.1031 - val_accuracy: 0.9660 - 43s/epoch - 132ms/step
+328/328 - 43s - loss: 0.1065 - accuracy: 0.9597 - val_loss: 0.0886 - val_accuracy: 0.9705 - 43s/epoch - 130ms/step
 Epoch 3/3
-328/328 - 43s - loss: 0.0545 - accuracy: 0.9791 - val_loss: 0.1109 - val_accuracy: 0.9637 - 43s/epoch - 131ms/step
+328/328 - 43s - loss: 0.0617 - accuracy: 0.9773 - val_loss: 0.0985 - val_accuracy: 0.9694 - 43s/epoch - 130ms/step
 
-&lt;keras.callbacks.History object at 0x7f023043f410&gt;
+&lt;keras.callbacks.History object at 0x7f4b83feb150&gt;
 </pre></div>
 </div>
 </div>
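
The one-line-per-epoch log above is standard Keras verbose=2 output. A minimal sketch of the call that produces it (model and dataset names assumed from earlier in the tutorial):

    # verbose=2 prints exactly one summary line per epoch.
    history = model.fit(
        train_dataset,
        validation_data=validation_dataset,
        epochs=3,
        verbose=2,
    )
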
@@ -957,7 +957,7 @@ as intended.</p>
 <p>From here, we could modify the model to read live images from the camera - we have another
 Arduino tutorial for how to do that <a class="reference external" href="https://github.com/guberti/tvm-arduino-demos/tree/master/examples/person_detection">on GitHub</a>. Alternatively, we could also
 <a class="reference external" href="https://tvm.apache.org/docs/how_to/work_with_microtvm/micro_autotune.html">use TVM’s autotuning capabilities</a> to dramatically improve the model’s performance.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 4 minutes  11.116 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 4 minutes  28.534 seconds)</p>
 <div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-train-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/b52cec46baf4f78d6bcd94cbe269c8a6/micro_train.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_train.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/sg_execution_times.html b/docs/how_to/work_with_microtvm/sg_execution_times.html
index b398eb056a..41d5c26820 100644
--- a/docs/how_to/work_with_microtvm/sg_execution_times.html
+++ b/docs/how_to/work_with_microtvm/sg_execution_times.html
@@ -327,7 +327,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-microtvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>05:13.055</strong> total execution time for <strong>how_to_work_with_microtvm</strong> files:</p>
+<p><strong>05:29.707</strong> total execution time for <strong>how_to_work_with_microtvm</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -336,19 +336,19 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_train.html#sphx-glr-how-to-work-with-microtvm-micro-train-py"><span class="std std-ref">Training Vision Models for microTVM on Arduino</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_train.py</span></code>)</p></td>
-<td><p>04:11.116</p></td>
+<td><p>04:28.534</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="micro_autotune.html#sphx-glr-how-to-work-with-microtvm-micro-autotune-py"><span class="std std-ref">Autotuning with microTVM</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_autotune.py</span></code>)</p></td>
-<td><p>00:49.489</p></td>
+<td><p>00:48.247</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_aot.html#sphx-glr-how-to-work-with-microtvm-micro-aot-py"><span class="std std-ref">microTVM Host-Driven AoT</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_aot.py</span></code>)</p></td>
-<td><p>00:08.678</p></td>
+<td><p>00:09.282</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="micro_tflite.html#sphx-glr-how-to-work-with-microtvm-micro-tflite-py"><span class="std std-ref">microTVM with TFLite Models</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_tflite.py</span></code>)</p></td>
-<td><p>00:03.770</p></td>
+<td><p>00:03.642</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="micro_ethosu.html#sphx-glr-how-to-work-with-microtvm-micro-ethosu-py"><span class="std std-ref">Running TVM on bare metal Arm(R) Cortex(R)-M55 CPU and Ethos(TM)-U55 NPU with CMSIS-NN</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_ethosu.py</span></code>)</p></td>
diff --git a/docs/how_to/work_with_relay/sg_execution_times.html b/docs/how_to/work_with_relay/sg_execution_times.html
index 0bc7490cc6..91000efbef 100644
--- a/docs/how_to/work_with_relay/sg_execution_times.html
+++ b/docs/how_to/work_with_relay/sg_execution_times.html
@@ -327,7 +327,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-relay-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:43.734</strong> total execution time for <strong>how_to_work_with_relay</strong> files:</p>
+<p><strong>00:42.975</strong> total execution time for <strong>how_to_work_with_relay</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 84%" />
@@ -336,15 +336,15 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="using_pipeline_executor.html#sphx-glr-how-to-work-with-relay-using-pipeline-executor-py"><span class="std std-ref">Using Pipeline Executor in Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_pipeline_executor.py</span></code>)</p></td>
-<td><p>00:31.847</p></td>
+<td><p>00:31.430</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="using_external_lib.html#sphx-glr-how-to-work-with-relay-using-external-lib-py"><span class="std std-ref">Using External Libraries in Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_external_lib.py</span></code>)</p></td>
-<td><p>00:10.357</p></td>
+<td><p>00:10.049</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="build_gcn.html#sphx-glr-how-to-work-with-relay-build-gcn-py"><span class="std std-ref">Building a Graph Convolutional Network</span></a> (<code class="docutils literal notranslate"><span class="pre">build_gcn.py</span></code>)</p></td>
-<td><p>00:01.523</p></td>
+<td><p>00:01.489</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="using_relay_viz.html#sphx-glr-how-to-work-with-relay-using-relay-viz-py"><span class="std std-ref">Use Relay Visualizer to Visualize Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_relay_viz.py</span></code>)</p></td>
diff --git a/docs/how_to/work_with_schedules/intrin_math.html b/docs/how_to/work_with_schedules/intrin_math.html
index 6ac2fe738d..871e4e0a90 100644
--- a/docs/how_to/work_with_schedules/intrin_math.html
+++ b/docs/how_to/work_with_schedules/intrin_math.html
@@ -522,7 +522,7 @@ The following example customizes CUDA lowering rule for <code class="code docuti
 <a href="../../reference/api/python/ir.html#tvm.ir.register_intrin_lowering" title="tvm.ir.register_intrin_lowering" class="sphx-glr-backref-module-tvm-ir sphx-glr-backref-type-py-function"><span class="n">register_intrin_lowering</span></a><span class="p">(</span><span class="s2">&quot;tir.exp&quot;</span><span class="p">,</span> <span class="n">target</span><span class="o">=</span><span class="s2">&quot;cuda&quot;</span><span class="p">,</span> <span class="n">f</span><span class="o">= [...]
 </pre></div>
 </div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&lt;function my_cuda_math_rule at 0x7f01d0e8ae60&gt;
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>&lt;function my_cuda_math_rule at 0x7f4b1f4ae8c0&gt;
 </pre></div>
 </div>
 <p>Register the rule to TVM with override option to override existing rule.
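
A minimal sketch of that registration, reusing the my_cuda_math_rule function shown in the output above (the level value is an assumption, chosen high so the custom rule takes precedence over the built-in one):

    import tvm

    # Replace the default CUDA lowering of tir.exp with the custom rule.
    tvm.ir.register_intrin_lowering(
        "tir.exp", target="cuda", f=my_cuda_math_rule, level=99
    )
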
diff --git a/docs/how_to/work_with_schedules/sg_execution_times.html b/docs/how_to/work_with_schedules/sg_execution_times.html
index b75089f2bb..96b380ffcd 100644
--- a/docs/how_to/work_with_schedules/sg_execution_times.html
+++ b/docs/how_to/work_with_schedules/sg_execution_times.html
@@ -327,7 +327,7 @@
             
   <div class="section" id="computation-times">
 <span id="sphx-glr-how-to-work-with-schedules-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:06.758</strong> total execution time for <strong>how_to_work_with_schedules</strong> files:</p>
+<p><strong>00:07.591</strong> total execution time for <strong>how_to_work_with_schedules</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 83%" />
@@ -336,27 +336,27 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="intrin_math.html#sphx-glr-how-to-work-with-schedules-intrin-math-py"><span class="std std-ref">Intrinsics and Math Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">intrin_math.py</span></code>)</p></td>
-<td><p>00:04.456</p></td>
+<td><p>00:05.309</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="tensorize.html#sphx-glr-how-to-work-with-schedules-tensorize-py"><span class="std std-ref">Use Tensorize to Leverage Hardware Intrinsics</span></a> (<code class="docutils literal notranslate"><span class="pre">tensorize.py</span></code>)</p></td>
-<td><p>00:01.021</p></td>
+<td><p>00:00.986</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="reduction.html#sphx-glr-how-to-work-with-schedules-reduction-py"><span class="std std-ref">Reduction</span></a> (<code class="docutils literal notranslate"><span class="pre">reduction.py</span></code>)</p></td>
-<td><p>00:00.550</p></td>
+<td><p>00:00.567</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="scan.html#sphx-glr-how-to-work-with-schedules-scan-py"><span class="std std-ref">Scan and Recurrent Kernel</span></a> (<code class="docutils literal notranslate"><span class="pre">scan.py</span></code>)</p></td>
-<td><p>00:00.532</p></td>
+<td><p>00:00.537</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="extern_op.html#sphx-glr-how-to-work-with-schedules-extern-op-py"><span class="std std-ref">External Tensor Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">extern_op.py</span></code>)</p></td>
-<td><p>00:00.117</p></td>
+<td><p>00:00.112</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="schedule_primitives.html#sphx-glr-how-to-work-with-schedules-schedule-primitives-py"><span class="std std-ref">Schedule Primitives in TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">schedule_primitives.py</span></code>)</p></td>
-<td><p>00:00.040</p></td>
+<td><p>00:00.039</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="tedd.html#sphx-glr-how-to-work-with-schedules-tedd-py"><span class="std std-ref">Use Tensor Expression Debug Display (TEDD) for Visualization</span></a> (<code class="docutils literal notranslate"><span class="pre">tedd.py</span></code>)</p></td>
diff --git a/docs/how_to/work_with_schedules/tensorize.html b/docs/how_to/work_with_schedules/tensorize.html
index 9cf180b4b5..727871fead 100644
--- a/docs/how_to/work_with_schedules/tensorize.html
+++ b/docs/how_to/work_with_schedules/tensorize.html
@@ -577,7 +577,7 @@ The import needs to happen before the tensorized GEMV is executed.</p>
              C: Buffer(C_2: Pointer(float32), float32, [524288], [])}
   buffer_map = {A_1: A, B_1: B, C_1: C}
   preflattened_buffer_map = {A_1: A_3: Buffer(A_2, float32, [1024, 64], []), B_1: B_3: Buffer(B_2, float32, [512, 64], []), C_1: C_3: Buffer(C_2, float32, [1024, 512], [])} {
-  attr [IterVar(i: int32, (nullptr), &quot;DataPar&quot;, &quot;&quot;)] &quot;pragma_import_llvm&quot; = &quot;; ModuleID = &#39;/tmp/tmpesn7feqn/input0.cc&#39;\nsource_filename = \&quot;/tmp/tmpesn7feqn/input0.cc\&quot;\ntarget datalayout = \&quot;e-m:e-i64:64-f80:128-n8:16:32:64-S128\&quot;\ntarget triple = \&quot;x86_64-pc-linux-gnu\&quot;\n\n; Function Attrs: noinline nounwind optnone uwtable\ndefine dso_local i32 @gemv_update(float*, float*, float*, i32, i32, i32) #0 {\n  %7 = allo [...]
+  attr [IterVar(i: int32, (nullptr), &quot;DataPar&quot;, &quot;&quot;)] &quot;pragma_import_llvm&quot; = &quot;; ModuleID = &#39;/tmp/tmpmwsga47u/input0.cc&#39;\nsource_filename = \&quot;/tmp/tmpmwsga47u/input0.cc\&quot;\ntarget datalayout = \&quot;e-m:e-i64:64-f80:128-n8:16:32:64-S128\&quot;\ntarget triple = \&quot;x86_64-pc-linux-gnu\&quot;\n\n; Function Attrs: noinline nounwind optnone uwtable\ndefine dso_local i32 @gemv_update(float*, float*, float*, i32, i32, i32) #0 {\n  %7 = allo [...]
   for (i, 0, 1024) {
     for (j.outer: int32, 0, 32) {
       @tir.call_extern(&quot;gemv_update&quot;, @tir.tvm_access_ptr(@tir.type_annotation(, dtype=float32), C_2, ((i*512) + (j.outer*16)), 16, 2, dtype=handle), @tir.tvm_access_ptr(@tir.type_annotation(, dtype=float32), A_2, (i*64), 64, 1, dtype=handle), @tir.tvm_access_ptr(@tir.type_annotation(, dtype=float32), B_2, (j.outer*1024), 1024, 1, dtype=handle), 16, 64, 64, dtype=int32)
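
A minimal sketch of how that pragma is attached in the schedule (s, C, x, and ll_code are assumed from the tensorize tutorial: the schedule, the output tensor, the outer loop axis, and the LLVM IR string for gemv_update):

    # Attach the external kernel's LLVM IR so it is linked in before the
    # tensorized body executes.
    s[C].pragma(x, "import_llvm", ll_code)
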
diff --git a/docs/objects.inv b/docs/objects.inv
index dc17e6c026..fe80440979 100644
Binary files a/docs/objects.inv and b/docs/objects.inv differ
diff --git a/docs/reference/api/doxygen/builder_8h_source.html b/docs/reference/api/doxygen/builder_8h_source.html
index ffcc687f2c..16c59b9d24 100644
--- a/docs/reference/api/doxygen/builder_8h_source.html
+++ b/docs/reference/api/doxygen/builder_8h_source.html
@@ -89,7 +89,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1BuilderInputNode_html"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1BuilderInputNode.html">tvm::meta_schedule::BuilderInputNode</a></div><div class="ttdoc">The builder&amp;#39;s input, containing an IRModule and the target. </div><div class="ttdef"><b>Definition:</b> builder.h:37</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1TypedPackedFunc_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1TypedPackedFunc.html">tvm::runtime::TypedPackedFunc</a></div><div class="ttdoc">Please refer to TypedPackedFunc&lt;R(Args..)&gt;. </div><div class="ttdef"><b>Definition:</b> packed_func.h:60</div></div>
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1Builder_html"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1Builder.html">tvm::meta_schedule::Builder</a></div><div class="ttdoc">Managed reference to BuilderNode. </div><div class="ttdef"><b>Definition:</b> builder.h:131</div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1BuilderInput_html"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1BuilderInput.html">tvm::meta_schedule::BuilderInput</a></div><div class="ttdoc">Managed reference to BuilderInputNode. </div><div class="ttdef"><b>Definition:</b> builder.h:60</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1ObjectRef_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></div><div class="ttdoc">Base class of all object reference. </div><div class="ttdef"><b>Definition:</b> object.h:511</div></div>
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1PyBuilderNode_html"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1PyBuilderNode.html">tvm::meta_schedule::PyBuilderNode</a></div><div class="ttdoc">An abstract builder with customized build method on the python-side. </div><div class="ttdef"><b>Definition:</b> builder.h:143</div></div>
diff --git a/docs/reference/api/doxygen/classtvm_1_1CompilationConfigNode.html b/docs/reference/api/doxygen/classtvm_1_1CompilationConfigNode.html
index 66c897679b..d71f4f06d6 100644
--- a/docs/reference/api/doxygen/classtvm_1_1CompilationConfigNode.html
+++ b/docs/reference/api/doxygen/classtvm_1_1CompilationConfigNode.html
@@ -456,7 +456,7 @@ Additional Inherited Members</h2></td></tr>
 <p>Primitive targets must be unique by their kind name. In this way the <code>FindPrimitiveTargetForKind</code> method will find the unique target for the given kind name. This method is used when transitioning from an external codegen "Compiler" attribute value to the external codegen target representing that compiler.</p>
 <p>It is possible to have multiple primitive targets for the same device type. However given primitive targets left and right where:</p><ul>
 <li>left appears before right in the array</li>
-<li>left-&gt;kind-&gt;device_type == right-&gt;kind-&gt;device_type then:</li>
+<li>left-&gt;GetTargetDeviceType() == right-&gt;GetTargetDeviceType() then:</li>
 <li>right.IsExternalCodegenFor(left) must be true In this way the <code>FindPrimitiveTargetForDeviceOrFail</code> method will find the 'most general' target for the requested device type. This method is used when transitioning from a device constraint to the target needed to compile for that device.</li>
 </ul>
 <p>In the homogeneous case primitive_targets will have just one entry, which will be pointer equal to optional_homogeneous_target.</p>
diff --git a/docs/reference/api/doxygen/classtvm_1_1Target.html b/docs/reference/api/doxygen/classtvm_1_1Target.html
index 8ffa29e639..7c6ffb12f8 100644
--- a/docs/reference/api/doxygen/classtvm_1_1Target.html
+++ b/docs/reference/api/doxygen/classtvm_1_1Target.html
@@ -411,9 +411,9 @@ Additional Inherited Members</h2></td></tr>
 <ul>
 <li><code>this</code> has a true <a class="el" href="namespacetvm_1_1attr.html#a17f834882ba3cd00890329433e8e81dd" title="A TargetKind attribute of type Bool. If true, then the target kind name also corresponds to an extern...">tvm::attr::kIsExternalCodegen</a> attribute</li>
 <li><code>that</code> does not have a true <a class="el" href="namespacetvm_1_1attr.html#a17f834882ba3cd00890329433e8e81dd" title="A TargetKind attribute of type Bool. If true, then the target kind name also corresponds to an extern...">tvm::attr::kIsExternalCodegen</a> attribute</li>
-<li><code>this</code> and <code>that</code> have the same kind-&gt;device_type</li>
+<li><code>this</code> and <code>that</code> have the same GetTargetDeviceType()</li>
 </ul>
-<p>After partitioning, the external codegen compilation path may use <code>that</code> to guide it's compilation to a <code><a class="el" href="classtvm_1_1runtime_1_1Module.html" title="Module container of TVM. ">runtime::Module</a></code>. Given <code>this</code>, an appropriate <code>that</code> can be found using <code>CompilationConfig::FindPrimitiveTargetOrFail</code>(this-&gt;kind-&gt;device_type).</p>
+<p>After partitioning, the external codegen compilation path may use <code>that</code> to guide it's compilation to a <code><a class="el" href="classtvm_1_1runtime_1_1Module.html" title="Module container of TVM. ">runtime::Module</a></code>. Given <code>this</code>, an appropriate <code>that</code> can be found using <code>CompilationConfig::FindPrimitiveTargetOrFail</code>(this-&gt;GetTargetDeviceType()).</p>
 <p>The <code>CollagePartition</code> pass uses this method to guide it's search over candidate partitions using external codegen. </p>
 
 </div>
diff --git a/docs/reference/api/doxygen/classtvm_1_1TargetKindNode-members.html b/docs/reference/api/doxygen/classtvm_1_1TargetKindNode-members.html
index 71aa19c323..8d95c5f413 100644
--- a/docs/reference/api/doxygen/classtvm_1_1TargetKindNode-members.html
+++ b/docs/reference/api/doxygen/classtvm_1_1TargetKindNode-members.html
@@ -81,10 +81,10 @@ $(function() {
   <tr><td class="entry"><a class="el" href="classtvm_1_1TargetKindNode.html#abebbfdf6393012b39dfdda67edd2a26b">AttrRegistry</a> class</td><td class="entry"><a class="el" href="classtvm_1_1TargetKindNode.html">tvm::TargetKindNode</a></td><td class="entry"><span class="mlabel">friend</span></td></tr>
   <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetKindNode.html#ac2e1f56abc584f0bbd3d730959f5bad0">AttrRegistryMapContainerMap</a> class</td><td class="entry"><a class="el" href="classtvm_1_1TargetKindNode.html">tvm::TargetKindNode</a></td><td class="entry"><span class="mlabel">friend</span></td></tr>
   <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a70fb5361147634605d6595bb89381f03">DecRef</a>()</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetKindNode.html#aa62e049ba158730d9ab88e4c0b173de9">default_keys</a></td><td class="entry"><a class="el" href="classtvm_1_1TargetKindNode.html">tvm::TargetKindNode</a></td><td class="entry"></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#af4407d2b59132e803ff791482dbe0145">deleter_</a></td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">protected</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetKindNode.html#a9ce830f20c377093d7812ffc2eb5c628">detail::ValueTypeInfoMaker</a></td><td class="entry"><a class="el" href="classtvm_1_1TargetKindNode.html">tvm::TargetKindNode</a></td><td class="entry"><span class="mlabel">friend</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1TargetKindNode.html#a18459286d8d501892992a4209ad08652">device_type</a></td><td class="entry"><a class="el" href="classtvm_1_1TargetKindNode.html">tvm::TargetKindNode</a></td><td class="entry"></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetKindNode.html#a0d66deaddc1ac8bfe3e39616df811b7e">default_device_type</a></td><td class="entry"><a class="el" href="classtvm_1_1TargetKindNode.html">tvm::TargetKindNode</a></td><td class="entry"></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1TargetKindNode.html#aa62e049ba158730d9ab88e4c0b173de9">default_keys</a></td><td class="entry"><a class="el" href="classtvm_1_1TargetKindNode.html">tvm::TargetKindNode</a></td><td class="entry"></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#af4407d2b59132e803ff791482dbe0145">deleter_</a></td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">protected</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1TargetKindNode.html#a9ce830f20c377093d7812ffc2eb5c628">detail::ValueTypeInfoMaker</a></td><td class="entry"><a class="el" href="classtvm_1_1TargetKindNode.html">tvm::TargetKindNode</a></td><td class="entry"><span class="mlabel">friend</span></td></tr>
   <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a9e84841ca982bff376a978ade0132631">FDeleter</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"></td></tr>
   <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a726972ff315c446192df94027ddea032">GetOrAllocRuntimeTypeIndex</a>(const std::string &amp;key, uint32_t static_tindex, uint32_t parent_tindex, uint32_t type_child_slots, bool type_child_slots_can_overflow)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">protected</span><span class="mlabel">static</span></td></tr>
   <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a4d951e51832081b85875669eac90e940">GetTypeKey</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
diff --git a/docs/reference/api/doxygen/classtvm_1_1TargetKindNode.html b/docs/reference/api/doxygen/classtvm_1_1TargetKindNode.html
index 691f9eb4c5..c8cece9027 100644
--- a/docs/reference/api/doxygen/classtvm_1_1TargetKindNode.html
+++ b/docs/reference/api/doxygen/classtvm_1_1TargetKindNode.html
@@ -125,9 +125,9 @@ Public Attributes</h2></td></tr>
 <tr class="memitem:a496c8f36bc4ead9952b6a1fd369d20ad"><td class="memItemLeft" align="right" valign="top"><a class="el" href="classtvm_1_1runtime_1_1String.html">String</a>&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1TargetKindNode.html#a496c8f36bc4ead9952b6a1fd369d20ad">name</a></td></tr>
 <tr class="memdesc:a496c8f36bc4ead9952b6a1fd369d20ad"><td class="mdescLeft">&#160;</td><td class="mdescRight">Name of the target kind.  <a href="#a496c8f36bc4ead9952b6a1fd369d20ad">More...</a><br /></td></tr>
 <tr class="separator:a496c8f36bc4ead9952b6a1fd369d20ad"><td class="memSeparator" colspan="2">&#160;</td></tr>
-<tr class="memitem:a18459286d8d501892992a4209ad08652"><td class="memItemLeft" align="right" valign="top">int&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1TargetKindNode.html#a18459286d8d501892992a4209ad08652">device_type</a></td></tr>
-<tr class="memdesc:a18459286d8d501892992a4209ad08652"><td class="mdescLeft">&#160;</td><td class="mdescRight">Device type of target kind.  <a href="#a18459286d8d501892992a4209ad08652">More...</a><br /></td></tr>
-<tr class="separator:a18459286d8d501892992a4209ad08652"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:a0d66deaddc1ac8bfe3e39616df811b7e"><td class="memItemLeft" align="right" valign="top">int&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1TargetKindNode.html#a0d66deaddc1ac8bfe3e39616df811b7e">default_device_type</a></td></tr>
+<tr class="memdesc:a0d66deaddc1ac8bfe3e39616df811b7e"><td class="mdescLeft">&#160;</td><td class="mdescRight">Device type of target kind.  <a href="#a0d66deaddc1ac8bfe3e39616df811b7e">More...</a><br /></td></tr>
+<tr class="separator:a0d66deaddc1ac8bfe3e39616df811b7e"><td class="memSeparator" colspan="2">&#160;</td></tr>
 <tr class="memitem:aa62e049ba158730d9ab88e4c0b173de9"><td class="memItemLeft" align="right" valign="top"><a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt; <a class="el" href="classtvm_1_1runtime_1_1String.html">String</a> &gt;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1TargetKindNode.html#aa62e049ba158730d9ab88e4c0b173de9">default_keys</a></td></tr>
 <tr class="memdesc:aa62e049ba158730d9ab88e4c0b173de9"><td class="mdescLeft">&#160;</td><td class="mdescRight">Default keys of the target.  <a href="#aa62e049ba158730d9ab88e4c0b173de9">More...</a><br /></td></tr>
 <tr class="separator:aa62e049ba158730d9ab88e4c0b173de9"><td class="memSeparator" colspan="2">&#160;</td></tr>
@@ -417,35 +417,35 @@ template&lt;typename , typename , typename &gt; </div>
 
 </div>
 </div>
-<a id="aa62e049ba158730d9ab88e4c0b173de9"></a>
-<h2 class="memtitle"><span class="permalink"><a href="#aa62e049ba158730d9ab88e4c0b173de9">&#9670;&nbsp;</a></span>default_keys</h2>
+<a id="a0d66deaddc1ac8bfe3e39616df811b7e"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#a0d66deaddc1ac8bfe3e39616df811b7e">&#9670;&nbsp;</a></span>default_device_type</h2>
 
 <div class="memitem">
 <div class="memproto">
       <table class="memname">
         <tr>
-          <td class="memname"><a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt;<a class="el" href="classtvm_1_1runtime_1_1String.html">String</a>&gt; tvm::TargetKindNode::default_keys</td>
+          <td class="memname">int tvm::TargetKindNode::default_device_type</td>
         </tr>
       </table>
 </div><div class="memdoc">
 
-<p>Default keys of the target. </p>
+<p>Device type of target kind. </p>
 
 </div>
 </div>
-<a id="a18459286d8d501892992a4209ad08652"></a>
-<h2 class="memtitle"><span class="permalink"><a href="#a18459286d8d501892992a4209ad08652">&#9670;&nbsp;</a></span>device_type</h2>
+<a id="aa62e049ba158730d9ab88e4c0b173de9"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#aa62e049ba158730d9ab88e4c0b173de9">&#9670;&nbsp;</a></span>default_keys</h2>
 
 <div class="memitem">
 <div class="memproto">
       <table class="memname">
         <tr>
-          <td class="memname">int tvm::TargetKindNode::device_type</td>
+          <td class="memname"><a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>&lt;<a class="el" href="classtvm_1_1runtime_1_1String.html">String</a>&gt; tvm::TargetKindNode::default_keys</td>
         </tr>
       </table>
 </div><div class="memdoc">
 
-<p>Device type of target kind. </p>
+<p>Default keys of the target. </p>
 
 </div>
 </div>
diff --git a/docs/reference/api/doxygen/classtvm_1_1TargetKindNode__coll__graph.svg b/docs/reference/api/doxygen/classtvm_1_1TargetKindNode__coll__graph.svg
index a542c091d7..103905fe8d 100644
--- a/docs/reference/api/doxygen/classtvm_1_1TargetKindNode__coll__graph.svg
+++ b/docs/reference/api/doxygen/classtvm_1_1TargetKindNode__coll__graph.svg
@@ -15,7 +15,7 @@
 <polygon fill="#bfbfbf" stroke="#000000" points="408,-.5 408,-79.5 617,-79.5 617,-.5 408,-.5"/>
 <text text-anchor="middle" x="512.5" y="-67.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::TargetKindNode</text>
 <polyline fill="none" stroke="#000000" points="408,-60.5 617,-60.5 "/>
-<text text-anchor="start" x="416" y="-48.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ device_type</text>
+<text text-anchor="start" x="416" y="-48.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ default_device_type</text>
 <text text-anchor="start" x="416" y="-37.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
 <polyline fill="none" stroke="#000000" points="408,-30.5 617,-30.5 "/>
 <text text-anchor="start" x="416" y="-18.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ VisitAttrs()</text>
diff --git a/docs/reference/api/doxygen/classtvm_1_1TargetKindNode__inherit__graph.svg b/docs/reference/api/doxygen/classtvm_1_1TargetKindNode__inherit__graph.svg
index 66467849a6..8258f7207b 100644
--- a/docs/reference/api/doxygen/classtvm_1_1TargetKindNode__inherit__graph.svg
+++ b/docs/reference/api/doxygen/classtvm_1_1TargetKindNode__inherit__graph.svg
@@ -16,7 +16,7 @@
 <text text-anchor="middle" x="104.5" y="-111.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::TargetKindNode</text>
 <polyline fill="none" stroke="#000000" points="0,-104.5 209,-104.5 "/>
 <text text-anchor="start" x="8" y="-92.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ name</text>
-<text text-anchor="start" x="8" y="-81.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ device_type</text>
+<text text-anchor="start" x="8" y="-81.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ default_device_type</text>
 <text text-anchor="start" x="8" y="-70.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ default_keys</text>
 <text text-anchor="start" x="8" y="-59.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ preprocessor</text>
 <text text-anchor="start" x="8" y="-48.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ target_parser</text>
diff --git a/docs/reference/api/doxygen/classtvm_1_1TargetKindRegEntry-members.html b/docs/reference/api/doxygen/classtvm_1_1TargetKindRegEntry-members.html
index aa0db24950..0644940d2a 100644
--- a/docs/reference/api/doxygen/classtvm_1_1TargetKindRegEntry-members.html
+++ b/docs/reference/api/doxygen/classtvm_1_1TargetKindRegEntry-members.html
@@ -77,8 +77,8 @@ $(function() {
   <tr><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html#a478c1bd27f0b8dd1b95c58808f8d0c70">RegisterOrGet</a>(const String &amp;target_kind_name)</td><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html">tvm::TargetKindRegEntry</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
   <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html#a4fa4f8e5fa280ddf3dc71310afd467a5">set_attr</a>(const String &amp;attr_name, const ValueType &amp;value, int plevel=10)</td><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html">tvm::TargetKindRegEntry</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
   <tr><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html#a00b1eb0ab1927210a6a519baecb3085e">set_attrs_preprocessor</a>(FLambda f)</td><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html">tvm::TargetKindRegEntry</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html#a2995c32e12246e892f7f4cb621a2819c">set_default_keys</a>(std::vector&lt; String &gt; keys)</td><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html">tvm::TargetKindRegEntry</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html#ae3ce5349493f402b82e755a0a180bd9a">set_device_type</a>(int device_type)</td><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html">tvm::TargetKindRegEntry</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html#aa34789ae275e36dcd6696aa3881bbc92">set_default_device_type</a>(int device_type)</td><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html">tvm::TargetKindRegEntry</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html#a2995c32e12246e892f7f4cb621a2819c">set_default_keys</a>(std::vector&lt; String &gt; keys)</td><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html">tvm::TargetKindRegEntry</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
   <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html#a36f21402bccb03300478d6c85bd05512">set_name</a>()</td><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html">tvm::TargetKindRegEntry</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
   <tr><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html#a21152c83f61180dcb6293226a98025a8">set_target_parser</a>(FTVMTargetParser parser)</td><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html">tvm::TargetKindRegEntry</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
   <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html#a75150485a300a03a22d9edad8619cc25">TargetKind</a> class</td><td class="entry"><a class="el" href="classtvm_1_1TargetKindRegEntry.html">tvm::TargetKindRegEntry</a></td><td class="entry"><span class="mlabel">friend</span></td></tr>
diff --git a/docs/reference/api/doxygen/classtvm_1_1TargetKindRegEntry.html b/docs/reference/api/doxygen/classtvm_1_1TargetKindRegEntry.html
index d721845f66..675c069445 100644
--- a/docs/reference/api/doxygen/classtvm_1_1TargetKindRegEntry.html
+++ b/docs/reference/api/doxygen/classtvm_1_1TargetKindRegEntry.html
@@ -79,7 +79,7 @@ $(function() {
 <div class="dynheader">
 Collaboration diagram for tvm::TargetKindRegEntry:</div>
 <div class="dyncontent">
-<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1TargetKindRegEntry__coll__graph.svg" width="206" height="235"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
+<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1TargetKindRegEntry__coll__graph.svg" width="218" height="235"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
 </div>
 </div>
 <table class="memberdecls">
@@ -89,9 +89,9 @@ Public Member Functions</h2></td></tr>
 <tr class="memitem:a4fa4f8e5fa280ddf3dc71310afd467a5"><td class="memTemplItemLeft" align="right" valign="top"><a class="el" href="classtvm_1_1TargetKindRegEntry.html">TargetKindRegEntry</a> &amp;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="classtvm_1_1TargetKindRegEntry.html#a4fa4f8e5fa280ddf3dc71310afd467a5">set_attr</a> (const <a class="el" href="classtvm_1_1runtime_1_1String.html">String</a> &amp;attr_name, const ValueType &amp;value, int plevel=10)</td></tr>
 <tr class="memdesc:a4fa4f8e5fa280ddf3dc71310afd467a5"><td class="mdescLeft">&#160;</td><td class="mdescRight">Register additional attributes to target_kind.  <a href="#a4fa4f8e5fa280ddf3dc71310afd467a5">More...</a><br /></td></tr>
 <tr class="separator:a4fa4f8e5fa280ddf3dc71310afd467a5"><td class="memSeparator" colspan="2">&#160;</td></tr>
-<tr class="memitem:ae3ce5349493f402b82e755a0a180bd9a"><td class="memItemLeft" align="right" valign="top"><a class="el" href="classtvm_1_1TargetKindRegEntry.html">TargetKindRegEntry</a> &amp;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1TargetKindRegEntry.html#ae3ce5349493f402b82e755a0a180bd9a">set_device_type</a> (int device_type)</td></tr>
-<tr class="memdesc:ae3ce5349493f402b82e755a0a180bd9a"><td class="mdescLeft">&#160;</td><td class="mdescRight">Set DLPack's device_type the target.  <a href="#ae3ce5349493f402b82e755a0a180bd9a">More...</a><br /></td></tr>
-<tr class="separator:ae3ce5349493f402b82e755a0a180bd9a"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:aa34789ae275e36dcd6696aa3881bbc92"><td class="memItemLeft" align="right" valign="top"><a class="el" href="classtvm_1_1TargetKindRegEntry.html">TargetKindRegEntry</a> &amp;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1TargetKindRegEntry.html#aa34789ae275e36dcd6696aa3881bbc92">set_default_device_type</a> (int device_type)</td></tr>
+<tr class="memdesc:aa34789ae275e36dcd6696aa3881bbc92"><td class="mdescLeft">&#160;</td><td class="mdescRight">Set DLPack's device_type the target.  <a href="#aa34789ae275e36dcd6696aa3881bbc92">More...</a><br /></td></tr>
+<tr class="separator:aa34789ae275e36dcd6696aa3881bbc92"><td class="memSeparator" colspan="2">&#160;</td></tr>
 <tr class="memitem:a2995c32e12246e892f7f4cb621a2819c"><td class="memItemLeft" align="right" valign="top"><a class="el" href="classtvm_1_1TargetKindRegEntry.html">TargetKindRegEntry</a> &amp;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1TargetKindRegEntry.html#a2995c32e12246e892f7f4cb621a2819c">set_default_keys</a> (std::vector&lt; <a class="el" href="classtvm_1_1runtime_1_1String.html">String</a> &gt; keys)</td></tr>
 <tr class="memdesc:a2995c32e12246e892f7f4cb621a2819c"><td class="mdescLeft">&#160;</td><td class="mdescRight">Set DLPack's device_type the target.  <a href="#a2995c32e12246e892f7f4cb621a2819c">More...</a><br /></td></tr>
 <tr class="separator:a2995c32e12246e892f7f4cb621a2819c"><td class="memSeparator" colspan="2">&#160;</td></tr>
@@ -428,8 +428,8 @@ template&lt;typename FLambda &gt; </div>
 
 </div>
 </div>
-<a id="a2995c32e12246e892f7f4cb621a2819c"></a>
-<h2 class="memtitle"><span class="permalink"><a href="#a2995c32e12246e892f7f4cb621a2819c">&#9670;&nbsp;</a></span>set_default_keys()</h2>
+<a id="aa34789ae275e36dcd6696aa3881bbc92"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#aa34789ae275e36dcd6696aa3881bbc92">&#9670;&nbsp;</a></span>set_default_device_type()</h2>
 
 <div class="memitem">
 <div class="memproto">
@@ -438,10 +438,10 @@ template&lt;typename FLambda &gt; </div>
   <td class="mlabels-left">
       <table class="memname">
         <tr>
-          <td class="memname"><a class="el" href="classtvm_1_1TargetKindRegEntry.html">TargetKindRegEntry</a> &amp; tvm::TargetKindRegEntry::set_default_keys </td>
+          <td class="memname"><a class="el" href="classtvm_1_1TargetKindRegEntry.html">TargetKindRegEntry</a> &amp; tvm::TargetKindRegEntry::set_default_device_type </td>
           <td>(</td>
-          <td class="paramtype">std::vector&lt; <a class="el" href="classtvm_1_1runtime_1_1String.html">String</a> &gt;&#160;</td>
-          <td class="paramname"><em>keys</em></td><td>)</td>
+          <td class="paramtype">int&#160;</td>
+          <td class="paramname"><em>device_type</em></td><td>)</td>
           <td></td>
         </tr>
       </table>
@@ -455,15 +455,15 @@ template&lt;typename FLambda &gt; </div>
 <p>Set DLPack's device_type the target. </p>
 <dl class="params"><dt>Parameters</dt><dd>
   <table class="params">
-    <tr><td class="paramname">keys</td><td>The default keys </td></tr>
+    <tr><td class="paramname">device_type</td><td>Device type </td></tr>
   </table>
   </dd>
 </dl>
 
 </div>
 </div>
-<a id="ae3ce5349493f402b82e755a0a180bd9a"></a>
-<h2 class="memtitle"><span class="permalink"><a href="#ae3ce5349493f402b82e755a0a180bd9a">&#9670;&nbsp;</a></span>set_device_type()</h2>
+<a id="a2995c32e12246e892f7f4cb621a2819c"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#a2995c32e12246e892f7f4cb621a2819c">&#9670;&nbsp;</a></span>set_default_keys()</h2>
 
 <div class="memitem">
 <div class="memproto">
@@ -472,10 +472,10 @@ template&lt;typename FLambda &gt; </div>
   <td class="mlabels-left">
       <table class="memname">
         <tr>
-          <td class="memname"><a class="el" href="classtvm_1_1TargetKindRegEntry.html">TargetKindRegEntry</a> &amp; tvm::TargetKindRegEntry::set_device_type </td>
+          <td class="memname"><a class="el" href="classtvm_1_1TargetKindRegEntry.html">TargetKindRegEntry</a> &amp; tvm::TargetKindRegEntry::set_default_keys </td>
           <td>(</td>
-          <td class="paramtype">int&#160;</td>
-          <td class="paramname"><em>device_type</em></td><td>)</td>
+          <td class="paramtype">std::vector&lt; <a class="el" href="classtvm_1_1runtime_1_1String.html">String</a> &gt;&#160;</td>
+          <td class="paramname"><em>keys</em></td><td>)</td>
           <td></td>
         </tr>
       </table>
@@ -489,7 +489,7 @@ template&lt;typename FLambda &gt; </div>
 <p>Set DLPack's device_type the target. </p>
 <dl class="params"><dt>Parameters</dt><dd>
   <table class="params">
-    <tr><td class="paramname">device_type</td><td>Device type </td></tr>
+    <tr><td class="paramname">keys</td><td>The default keys </td></tr>
   </table>
   </dd>
 </dl>
diff --git a/docs/reference/api/doxygen/classtvm_1_1TargetKindRegEntry__coll__graph.svg b/docs/reference/api/doxygen/classtvm_1_1TargetKindRegEntry__coll__graph.svg
index c28f1567b4..9bad8e9de0 100644
--- a/docs/reference/api/doxygen/classtvm_1_1TargetKindRegEntry__coll__graph.svg
+++ b/docs/reference/api/doxygen/classtvm_1_1TargetKindRegEntry__coll__graph.svg
@@ -4,21 +4,21 @@
 <!-- Generated by graphviz version 2.40.1 (20161225.0304)
  -->
 <!-- Title: tvm::TargetKindRegEntry Pages: 1 -->
-<svg width="154pt" height="176pt"
- viewBox="0.00 0.00 154.00 176.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<svg width="163pt" height="176pt"
+ viewBox="0.00 0.00 163.00 176.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
 <g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 172)">
 <title>tvm::TargetKindRegEntry</title>
-<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-172 150,-172 150,4 -4,4"/>
+<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-172 159,-172 159,4 -4,4"/>
 <!-- Node1 -->
 <g id="node1" class="node">
 <title>Node1</title>
-<polygon fill="#bfbfbf" stroke="#000000" points="0,-.5 0,-167.5 146,-167.5 146,-.5 0,-.5"/>
-<text text-anchor="middle" x="73" y="-155.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::TargetKindRegEntry</text>
-<polyline fill="none" stroke="#000000" points="0,-148.5 146,-148.5 "/>
-<text text-anchor="middle" x="73" y="-136.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="0,-129.5 146,-129.5 "/>
+<polygon fill="#bfbfbf" stroke="#000000" points="0,-.5 0,-167.5 155,-167.5 155,-.5 0,-.5"/>
+<text text-anchor="middle" x="77.5" y="-155.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::TargetKindRegEntry</text>
+<polyline fill="none" stroke="#000000" points="0,-148.5 155,-148.5 "/>
+<text text-anchor="middle" x="77.5" y="-136.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="0,-129.5 155,-129.5 "/>
 <text text-anchor="start" x="8" y="-117.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ set_attr()</text>
-<text text-anchor="start" x="8" y="-106.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ set_device_type()</text>
+<text text-anchor="start" x="8" y="-106.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ set_default_device_type()</text>
 <text text-anchor="start" x="8" y="-95.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ set_default_keys()</text>
 <text text-anchor="start" x="8" y="-84.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ set_attrs_preprocessor()</text>
 <text text-anchor="start" x="8" y="-73.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ set_target_parser()</text>
diff --git a/docs/reference/api/doxygen/classtvm_1_1TargetNode-members.html b/docs/reference/api/doxygen/classtvm_1_1TargetNode-members.html
index 321610a200..c795a078bf 100644
--- a/docs/reference/api/doxygen/classtvm_1_1TargetNode-members.html
+++ b/docs/reference/api/doxygen/classtvm_1_1TargetNode-members.html
@@ -92,35 +92,36 @@ $(function() {
   <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#abd05b2c258974b13af1192c911ccb12b">GetKeys</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
   <tr><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#a1bd600905c1a4469726184adbc9087b0">GetLibs</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
   <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a726972ff315c446192df94027ddea032">GetOrAllocRuntimeTypeIndex</a>(const std::string &amp;key, uint32_t static_tindex, uint32_t parent_tindex, uint32_t type_child_slots, bool type_child_slots_can_overflow)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">protected</span><span class="mlabel">static</span [...]
-  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a4d951e51832081b85875669eac90e940">GetTypeKey</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a5693cbadcc1168b96db7b1cc5c200b86">GetTypeKeyHash</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#abdeae1bf6e037771b1b931f26dba15c6">host</a></td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ac9e5eed7719e322117bde996a171e33a">IncRef</a>()</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a90e90b3f4ba8a590baff78c75807bbc7">IsInstance</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#aec9e821b23172eb9460f46df0dc346fb">keys</a></td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#ac19a4ee0f0ec7ea607ec746bc4551b71">kind</a></td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a133436a9ec5c4a768b94102bf95a660b">Object</a>()</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ab7968feb6ad38ecaffc320e13819d826">Object</a>(const Object &amp;other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#aa1612f69ea5b4225d4cda759cd517323">Object</a>(Object &amp;&amp;other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a69c32fbd96181f5c21d2c878ab285e4f">operator=</a>(const Object &amp;other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ae341e561272ff43cdcbc927bc29ac50d">operator=</a>(Object &amp;&amp;other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a0d492efee331e2239a093f4b2017c10f">ref_counter_</a></td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">protected</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a55549a6c23987890246248682560a03d">RefCounterType</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ad94d79729ac85aa7c976e23d39066383">RuntimeTypeIndex</a>()</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">static</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#acedf257c039c25a6a16bf36b664d35c6">SEqualReduce</a>(const TargetNode *other, SEqualReducer equal) const</td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#a1b64ab2ca286e1cd63c181f469707218">SHashReduce</a>(SHashReducer hash_reduce) const</td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#a30cd67db46a9c4b098a8ba38fff22e26">str</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#a3046260cd16b7b134fa99705b41d2aee">tag</a></td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#a7924ccb2fdea6074cca1978c062fb034">TargetInternal</a> class</td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"><span class="mlabel">friend</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#a41181a3757227725abc614e976b264ad">ToDebugString</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#a8602fa00bc833f39fa16b682acd704b7">TVM_DECLARE_FINAL_OBJECT_INFO</a>(TargetNode, Object)</td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a481f01923b14e1851ebd38506e9c66ea">type_index</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a4bfc2586cb55f2af47728187b3256255">type_index_</a></td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">protected</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a817ba6c23b7ee1821c48a75edf255a30">TypeIndex2Key</a>(uint32_t tindex)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a6ee32a02dd44257da105fbbe5d9c8622">TypeIndex2KeyHash</a>(uint32_t tindex)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a6841f97e06e6614dd7e82c6dd41b818a">TypeKey2Index</a>(const std::string &amp;key)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
-  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#afd548730a6139d19fe24473ad66026d7">unique</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
-  <tr><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#ad4a9f21d97d244c2055e9ba2eca71ee5">VisitAttrs</a>(AttrVisitor *v)</td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#a01c985da7b7451518db042094336a4b1">GetTargetDeviceType</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a4d951e51832081b85875669eac90e940">GetTypeKey</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a5693cbadcc1168b96db7b1cc5c200b86">GetTypeKeyHash</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#abdeae1bf6e037771b1b931f26dba15c6">host</a></td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ac9e5eed7719e322117bde996a171e33a">IncRef</a>()</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a90e90b3f4ba8a590baff78c75807bbc7">IsInstance</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#aec9e821b23172eb9460f46df0dc346fb">keys</a></td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#ac19a4ee0f0ec7ea607ec746bc4551b71">kind</a></td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a133436a9ec5c4a768b94102bf95a660b">Object</a>()</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ab7968feb6ad38ecaffc320e13819d826">Object</a>(const Object &amp;other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#aa1612f69ea5b4225d4cda759cd517323">Object</a>(Object &amp;&amp;other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a69c32fbd96181f5c21d2c878ab285e4f">operator=</a>(const Object &amp;other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ae341e561272ff43cdcbc927bc29ac50d">operator=</a>(Object &amp;&amp;other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a0d492efee331e2239a093f4b2017c10f">ref_counter_</a></td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">protected</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a55549a6c23987890246248682560a03d">RefCounterType</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ad94d79729ac85aa7c976e23d39066383">RuntimeTypeIndex</a>()</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">static</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#acedf257c039c25a6a16bf36b664d35c6">SEqualReduce</a>(const TargetNode *other, SEqualReducer equal) const</td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#a1b64ab2ca286e1cd63c181f469707218">SHashReduce</a>(SHashReducer hash_reduce) const</td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#a30cd67db46a9c4b098a8ba38fff22e26">str</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#a3046260cd16b7b134fa99705b41d2aee">tag</a></td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#a7924ccb2fdea6074cca1978c062fb034">TargetInternal</a> class</td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"><span class="mlabel">friend</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#a41181a3757227725abc614e976b264ad">ToDebugString</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#a8602fa00bc833f39fa16b682acd704b7">TVM_DECLARE_FINAL_OBJECT_INFO</a>(TargetNode, Object)</td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a481f01923b14e1851ebd38506e9c66ea">type_index</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a4bfc2586cb55f2af47728187b3256255">type_index_</a></td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">protected</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a817ba6c23b7ee1821c48a75edf255a30">TypeIndex2Key</a>(uint32_t tindex)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a6ee32a02dd44257da105fbbe5d9c8622">TypeIndex2KeyHash</a>(uint32_t tindex)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a6841f97e06e6614dd7e82c6dd41b818a">TypeKey2Index</a>(const std::string &amp;key)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+  <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#afd548730a6139d19fe24473ad66026d7">unique</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+  <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html#ad4a9f21d97d244c2055e9ba2eca71ee5">VisitAttrs</a>(AttrVisitor *v)</td><td class="entry"><a class="el" href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
 </table></div><!-- contents -->
 <!-- start footer part -->
 <hr class="footer"/><address class="footer"><small>
diff --git a/docs/reference/api/doxygen/classtvm_1_1TargetNode.html b/docs/reference/api/doxygen/classtvm_1_1TargetNode.html
index d1f711c9d0..48e80ffcd2 100644
--- a/docs/reference/api/doxygen/classtvm_1_1TargetNode.html
+++ b/docs/reference/api/doxygen/classtvm_1_1TargetNode.html
@@ -80,13 +80,13 @@ $(function() {
 <div class="dynheader">
 Inheritance diagram for tvm::TargetNode:</div>
 <div class="dyncontent">
-<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1TargetNode__inherit__graph.svg" width="290" height="1006"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
+<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1TargetNode__inherit__graph.svg" width="290" height="1020"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
 </div>
 </div>
 <div class="dynheader">
 Collaboration diagram for tvm::TargetNode:</div>
 <div class="dyncontent">
-<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1TargetNode__coll__graph.svg" width="1435" height="1580"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
+<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1TargetNode__coll__graph.svg" width="1435" height="1595"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
 </div>
 </div>
 <table class="memberdecls">
@@ -99,6 +99,8 @@ Public Member Functions</h2></td></tr>
 <tr class="separator:af313f5aedbe162374d424358d34d3c7e"><td class="memSeparator" colspan="2">&#160;</td></tr>
 <tr class="memitem:a94129658128c764ddd0e2255a490be05"><td class="memItemLeft" align="right" valign="top"><a class="el" href="classtvm_1_1runtime_1_1Optional.html">Optional</a>&lt; <a class="el" href="classtvm_1_1Target.html">Target</a> &gt;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1TargetNode.html#a94129658128c764ddd0e2255a490be05">GetHost</a> () const</td></tr>
 <tr class="separator:a94129658128c764ddd0e2255a490be05"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:a01c985da7b7451518db042094336a4b1"><td class="memItemLeft" align="right" valign="top">int&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1TargetNode.html#a01c985da7b7451518db042094336a4b1">GetTargetDeviceType</a> () const</td></tr>
+<tr class="separator:a01c985da7b7451518db042094336a4b1"><td class="memSeparator" colspan="2">&#160;</td></tr>
 <tr class="memitem:a41181a3757227725abc614e976b264ad"><td class="memItemLeft" align="right" valign="top"><a class="el" href="classtvm_1_1runtime_1_1String.html">String</a>&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1TargetNode.html#a41181a3757227725abc614e976b264ad">ToDebugString</a> () const</td></tr>
 <tr class="memdesc:a41181a3757227725abc614e976b264ad"><td class="mdescLeft">&#160;</td><td class="mdescRight">Returns a human readable representation of <code><a class="el" href="classtvm_1_1Target.html" title="Managed reference class to TargetNode. ">Target</a></code> which includes all fields, especially the host. Useful for diagnostic messages and debugging.  <a href="#a41181a3757227725abc614e976b264ad">More...</a><br /></td></tr>
 <tr class="separator:a41181a3757227725abc614e976b264ad"><td class="memSeparator" colspan="2">&#160;</td></tr>
@@ -532,6 +534,24 @@ template&lt;typename TObjectRef &gt; </div>
 
 <p>Get the keys for this target as an unordered_set of string. </p>
 
+</div>
+</div>
+<a id="a01c985da7b7451518db042094336a4b1"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#a01c985da7b7451518db042094336a4b1">&#9670;&nbsp;</a></span>GetTargetDeviceType()</h2>
+
+<div class="memitem">
+<div class="memproto">
+      <table class="memname">
+        <tr>
+          <td class="memname">int tvm::TargetNode::GetTargetDeviceType </td>
+          <td>(</td>
+          <td class="paramname"></td><td>)</td>
+          <td> const</td>
+        </tr>
+      </table>
+</div><div class="memdoc">
+<dl class="section return"><dt>Returns</dt><dd>The device type for this target </dd></dl>
+
 </div>
 </div>
 <a id="acedf257c039c25a6a16bf36b664d35c6"></a>
diff --git a/docs/reference/api/doxygen/classtvm_1_1TargetNode__coll__graph.svg b/docs/reference/api/doxygen/classtvm_1_1TargetNode__coll__graph.svg
index 61b50bda38..04530b08b9 100644
--- a/docs/reference/api/doxygen/classtvm_1_1TargetNode__coll__graph.svg
+++ b/docs/reference/api/doxygen/classtvm_1_1TargetNode__coll__graph.svg
@@ -4,26 +4,27 @@
 <!-- Generated by graphviz version 2.40.1 (20161225.0304)
  -->
 <!-- Title: tvm::TargetNode Pages: 1 -->
-<svg width="1076pt" height="1185pt"
- viewBox="0.00 0.00 1075.50 1185.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
-<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 1181)">
+<svg width="1076pt" height="1196pt"
+ viewBox="0.00 0.00 1075.50 1196.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 1192)">
 <title>tvm::TargetNode</title>
-<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-1181 1071.5,-1181 1071.5,4 -4,4"/>
+<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-1192 1071.5,-1192 1071.5,4 -4,4"/>
 <!-- Node2 -->
 <g id="node1" class="node">
 <title>Node2</title>
-<polygon fill="#bfbfbf" stroke="#000000" points="505,-.5 505,-244.5 714,-244.5 714,-.5 505,-.5"/>
-<text text-anchor="middle" x="609.5" y="-232.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::TargetNode</text>
-<polyline fill="none" stroke="#000000" points="505,-225.5 714,-225.5 "/>
-<text text-anchor="start" x="513" y="-213.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
-<text text-anchor="start" x="513" y="-202.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_sequal</text>
-<text text-anchor="start" x="513" y="-191.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
-<text text-anchor="start" x="513" y="-180.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_shash</text>
-<text text-anchor="start" x="513" y="-169.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
-<polyline fill="none" stroke="#000000" points="505,-162.5 714,-162.5 "/>
-<text text-anchor="start" x="513" y="-150.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ str()</text>
-<text text-anchor="start" x="513" y="-139.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Export()</text>
-<text text-anchor="start" x="513" y="-128.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetHost()</text>
+<polygon fill="#bfbfbf" stroke="#000000" points="505,-.5 505,-255.5 714,-255.5 714,-.5 505,-.5"/>
+<text text-anchor="middle" x="609.5" y="-243.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::TargetNode</text>
+<polyline fill="none" stroke="#000000" points="505,-236.5 714,-236.5 "/>
+<text text-anchor="start" x="513" y="-224.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
+<text text-anchor="start" x="513" y="-213.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_sequal</text>
+<text text-anchor="start" x="513" y="-202.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
+<text text-anchor="start" x="513" y="-191.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_shash</text>
+<text text-anchor="start" x="513" y="-180.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
+<polyline fill="none" stroke="#000000" points="505,-173.5 714,-173.5 "/>
+<text text-anchor="start" x="513" y="-161.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ str()</text>
+<text text-anchor="start" x="513" y="-150.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Export()</text>
+<text text-anchor="start" x="513" y="-139.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetHost()</text>
+<text text-anchor="start" x="513" y="-128.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetTargetDeviceType()</text>
 <text text-anchor="start" x="513" y="-117.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ToDebugString()</text>
 <text text-anchor="start" x="513" y="-106.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ VisitAttrs()</text>
 <text text-anchor="start" x="513" y="-95.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetAttr()</text>
@@ -40,157 +41,158 @@
 <g id="node2" class="node">
 <title>Node3</title>
 <g id="a_node2"><a xlink:href="classtvm_1_1runtime_1_1Object.html" target="_top" xlink:title="base class of all object containers. ">
-<polygon fill="#ffffff" stroke="#000000" points="0,-303.5 0,-690.5 183,-690.5 183,-303.5 0,-303.5"/>
-<text text-anchor="middle" x="91.5" y="-678.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Object</text>
-<polyline fill="none" stroke="#000000" points="0,-671.5 183,-671.5 "/>
-<text text-anchor="start" x="8" y="-659.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
-<text text-anchor="start" x="8" y="-648.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_final</text>
-<text text-anchor="start" x="8" y="-637.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_child_slots</text>
-<text text-anchor="start" x="8" y="-626.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_child_slots_can</text>
-<text text-anchor="start" x="8" y="-615.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_overflow</text>
-<text text-anchor="start" x="8" y="-604.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_visit</text>
-<text text-anchor="start" x="8" y="-593.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_attrs</text>
-<text text-anchor="start" x="8" y="-582.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_sequal</text>
-<text text-anchor="start" x="8" y="-571.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
-<text text-anchor="start" x="8" y="-560.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_shash</text>
-<text text-anchor="start" x="8" y="-549.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
-<text text-anchor="start" x="8" y="-538.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_index</text>
-<text text-anchor="start" x="8" y="-527.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># type_index_</text>
-<text text-anchor="start" x="8" y="-516.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># ref_counter_</text>
-<polyline fill="none" stroke="#000000" points="0,-509.5 183,-509.5 "/>
-<text text-anchor="start" x="8" y="-497.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ type_index()</text>
-<text text-anchor="start" x="8" y="-486.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetTypeKey()</text>
-<text text-anchor="start" x="8" y="-475.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetTypeKeyHash()</text>
-<text text-anchor="start" x="8" y="-464.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ IsInstance()</text>
-<text text-anchor="start" x="8" y="-453.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
+<polygon fill="#ffffff" stroke="#000000" points="0,-314.5 0,-701.5 183,-701.5 183,-314.5 0,-314.5"/>
+<text text-anchor="middle" x="91.5" y="-689.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Object</text>
+<polyline fill="none" stroke="#000000" points="0,-682.5 183,-682.5 "/>
+<text text-anchor="start" x="8" y="-670.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
+<text text-anchor="start" x="8" y="-659.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_final</text>
+<text text-anchor="start" x="8" y="-648.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_child_slots</text>
+<text text-anchor="start" x="8" y="-637.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_child_slots_can</text>
+<text text-anchor="start" x="8" y="-626.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_overflow</text>
+<text text-anchor="start" x="8" y="-615.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_visit</text>
+<text text-anchor="start" x="8" y="-604.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_attrs</text>
+<text text-anchor="start" x="8" y="-593.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_sequal</text>
+<text text-anchor="start" x="8" y="-582.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
+<text text-anchor="start" x="8" y="-571.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_shash</text>
+<text text-anchor="start" x="8" y="-560.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
+<text text-anchor="start" x="8" y="-549.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_index</text>
+<text text-anchor="start" x="8" y="-538.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># type_index_</text>
+<text text-anchor="start" x="8" y="-527.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># ref_counter_</text>
+<polyline fill="none" stroke="#000000" points="0,-520.5 183,-520.5 "/>
+<text text-anchor="start" x="8" y="-508.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ type_index()</text>
+<text text-anchor="start" x="8" y="-497.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetTypeKey()</text>
+<text text-anchor="start" x="8" y="-486.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetTypeKeyHash()</text>
+<text text-anchor="start" x="8" y="-475.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ IsInstance()</text>
+<text text-anchor="start" x="8" y="-464.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
+<text text-anchor="start" x="8" y="-453.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
 <text text-anchor="start" x="8" y="-442.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
 <text text-anchor="start" x="8" y="-431.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
-<text text-anchor="start" x="8" y="-420.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
+<text text-anchor="start" x="8" y="-420.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
 <text text-anchor="start" x="8" y="-409.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="8" y="-398.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="8" y="-387.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeIndex2Key()</text>
-<text text-anchor="start" x="8" y="-376.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeIndex2KeyHash()</text>
-<text text-anchor="start" x="8" y="-365.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeKey2Index()</text>
-<text text-anchor="start" x="8" y="-354.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _GetOrAllocRuntimeTypeIndex()</text>
-<text text-anchor="start" x="8" y="-343.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ RuntimeTypeIndex()</text>
-<text text-anchor="start" x="8" y="-332.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># IncRef()</text>
-<text text-anchor="start" x="8" y="-321.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DecRef()</text>
-<text text-anchor="start" x="8" y="-310.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetOrAllocRuntimeTypeIndex()</text>
+<text text-anchor="start" x="8" y="-398.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeIndex2Key()</text>
+<text text-anchor="start" x="8" y="-387.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeIndex2KeyHash()</text>
+<text text-anchor="start" x="8" y="-376.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeKey2Index()</text>
+<text text-anchor="start" x="8" y="-365.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _GetOrAllocRuntimeTypeIndex()</text>
+<text text-anchor="start" x="8" y="-354.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ RuntimeTypeIndex()</text>
+<text text-anchor="start" x="8" y="-343.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># IncRef()</text>
+<text text-anchor="start" x="8" y="-332.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DecRef()</text>
+<text text-anchor="start" x="8" y="-321.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetOrAllocRuntimeTypeIndex()</text>
 </a>
 </g>
 </g>
 <!-- Node3&#45;&gt;Node2 -->
 <g id="edge1" class="edge">
 <title>Node3&#45;&gt;Node2</title>
-<path fill="none" stroke="#191970" d="M190.0314,-371.3266C212.346,-347.0138 237.0257,-322.9342 262.5,-303 336.8352,-244.8311 432.7478,-196.6159 504.891,-164.6943"/>
-<polygon fill="none" stroke="#191970" points="187.3103,-369.1169 183.1838,-378.8752 192.495,-373.8201 187.3103,-369.1169"/>
+<path fill="none" stroke="#191970" d="M190.0132,-382.8001C212.3694,-358.392 237.07,-334.1549 262.5,-314 336.959,-254.9866 432.861,-205.2225 504.9686,-172.0615"/>
+<polygon fill="none" stroke="#191970" points="187.2705,-380.6145 183.1521,-390.3762 192.459,-385.3134 187.2705,-380.6145"/>
 </g>
 <!-- Node3&#45;&gt;Node3 -->
 <g id="edge2" class="edge">
 <title>Node3&#45;&gt;Node3</title>
-<path fill="none" stroke="#404040" d="M183.3625,-530.9248C194.0482,-524.6637 201,-513.3555 201,-497 201,-486.0112 197.8618,-477.3007 192.5615,-470.8687"/>
-<polygon fill="none" stroke="#404040" points="192.5184,-470.8322 185.3548,-470.0056 183.3625,-463.0752 190.5261,-463.9017 192.5184,-470.8322"/>
-<text text-anchor="middle" x="227" y="-494.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> #deleter_</text>
+<path fill="none" stroke="#404040" d="M183.3625,-541.9248C194.0482,-535.6637 201,-524.3555 201,-508 201,-497.0112 197.8618,-488.3007 192.5615,-481.8687"/>
+<polygon fill="none" stroke="#404040" points="192.5184,-481.8322 185.3548,-481.0056 183.3625,-474.0752 190.5261,-474.9017 192.5184,-481.8322"/>
+<text text-anchor="middle" x="227" y="-505.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> #deleter_</text>
 </g>
 <!-- Node4 -->
 <g id="node3" class="node">
 <title>Node4</title>
 <g id="a_node3"><a xlink:href="classtvm_1_1TargetKind.html" target="_top" xlink:title="Managed reference class to TargetKindNode. ">
-<polygon fill="#ffffff" stroke="#000000" points="271.5,-446.5 271.5,-547.5 437.5,-547.5 437.5,-446.5 271.5,-446.5"/>
-<text text-anchor="middle" x="354.5" y="-535.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::TargetKind</text>
-<polyline fill="none" stroke="#000000" points="271.5,-528.5 437.5,-528.5 "/>
-<text text-anchor="middle" x="354.5" y="-516.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="271.5,-509.5 437.5,-509.5 "/>
-<text text-anchor="start" x="279.5" y="-497.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TargetKind()</text>
-<text text-anchor="start" x="279.5" y="-486.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_NOTNULLABLE</text>
-<text text-anchor="start" x="279.5" y="-475.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_OBJECT_REF_METHODS()</text>
-<text text-anchor="start" x="279.5" y="-464.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetAttrMap()</text>
-<text text-anchor="start" x="279.5" y="-453.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Get()</text>
+<polygon fill="#ffffff" stroke="#000000" points="271.5,-457.5 271.5,-558.5 437.5,-558.5 437.5,-457.5 271.5,-457.5"/>
+<text text-anchor="middle" x="354.5" y="-546.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::TargetKind</text>
+<polyline fill="none" stroke="#000000" points="271.5,-539.5 437.5,-539.5 "/>
+<text text-anchor="middle" x="354.5" y="-527.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="271.5,-520.5 437.5,-520.5 "/>
+<text text-anchor="start" x="279.5" y="-508.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TargetKind()</text>
+<text text-anchor="start" x="279.5" y="-497.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_NOTNULLABLE</text>
+<text text-anchor="start" x="279.5" y="-486.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_OBJECT_REF_METHODS()</text>
+<text text-anchor="start" x="279.5" y="-475.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetAttrMap()</text>
+<text text-anchor="start" x="279.5" y="-464.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Get()</text>
 </a>
 </g>
 </g>
 <!-- Node4&#45;&gt;Node2 -->
 <g id="edge3" class="edge">
 <title>Node4&#45;&gt;Node2</title>
-<path fill="none" stroke="#404040" d="M373.7859,-446.2246C390.2564,-405.7751 416.0751,-348.5728 446.5,-303 461.4082,-280.6694 478.9673,-258.2955 496.9407,-237.2775"/>
-<polygon fill="none" stroke="#404040" points="497.0843,-237.1117 497.9883,-229.9575 504.9399,-228.0403 504.0359,-235.1945 497.0843,-237.1117"/>
-<text text-anchor="middle" x="490" y="-271.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +kind</text>
+<path fill="none" stroke="#404040" d="M373.971,-457.3473C390.5463,-416.9671 416.4156,-359.7982 446.5,-314 461.5061,-291.1559 479.1277,-268.1721 497.1364,-246.5304"/>
+<polygon fill="none" stroke="#404040" points="497.1434,-246.5221 497.9475,-239.356 504.8716,-237.3419 504.0676,-244.5081 497.1434,-246.5221"/>
+<text text-anchor="middle" x="490" y="-282.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +kind</text>
 </g>
 <!-- Node5 -->
 <g id="node4" class="node">
 <title>Node5</title>
 <g id="a_node4"><a xlink:href="classtvm_1_1runtime_1_1ObjectRef.html" target="_top" xlink:title="Base class of all object reference. ">
-<polygon fill="#ffffff" stroke="#000000" points="623.5,-728.5 623.5,-950.5 757.5,-950.5 757.5,-728.5 623.5,-728.5"/>
-<text text-anchor="middle" x="690.5" y="-938.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectRef</text>
-<polyline fill="none" stroke="#000000" points="623.5,-931.5 757.5,-931.5 "/>
-<text text-anchor="start" x="631.5" y="-919.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
-<polyline fill="none" stroke="#000000" points="623.5,-912.5 757.5,-912.5 "/>
+<polygon fill="#ffffff" stroke="#000000" points="623.5,-739.5 623.5,-961.5 757.5,-961.5 757.5,-739.5 623.5,-739.5"/>
+<text text-anchor="middle" x="690.5" y="-949.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectRef</text>
+<polyline fill="none" stroke="#000000" points="623.5,-942.5 757.5,-942.5 "/>
+<text text-anchor="start" x="631.5" y="-930.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
+<polyline fill="none" stroke="#000000" points="623.5,-923.5 757.5,-923.5 "/>
+<text text-anchor="start" x="631.5" y="-911.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
 <text text-anchor="start" x="631.5" y="-900.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
-<text text-anchor="start" x="631.5" y="-889.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
-<text text-anchor="start" x="631.5" y="-878.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ same_as()</text>
-<text text-anchor="start" x="631.5" y="-867.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator==()</text>
-<text text-anchor="start" x="631.5" y="-856.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator!=()</text>
-<text text-anchor="start" x="631.5" y="-845.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator&lt;()</text>
-<text text-anchor="start" x="631.5" y="-834.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ defined()</text>
-<text text-anchor="start" x="631.5" y="-823.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
-<text text-anchor="start" x="631.5" y="-812.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator&#45;&gt;()</text>
-<text text-anchor="start" x="631.5" y="-801.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
-<text text-anchor="start" x="631.5" y="-790.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ use_count()</text>
-<text text-anchor="start" x="631.5" y="-779.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ as()</text>
-<text text-anchor="start" x="631.5" y="-768.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># get_mutable()</text>
-<text text-anchor="start" x="631.5" y="-757.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DowncastNoCheck()</text>
-<text text-anchor="start" x="631.5" y="-746.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># FFIClearAfterMove()</text>
-<text text-anchor="start" x="631.5" y="-735.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetDataPtr()</text>
+<text text-anchor="start" x="631.5" y="-889.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ same_as()</text>
+<text text-anchor="start" x="631.5" y="-878.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator==()</text>
+<text text-anchor="start" x="631.5" y="-867.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator!=()</text>
+<text text-anchor="start" x="631.5" y="-856.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator&lt;()</text>
+<text text-anchor="start" x="631.5" y="-845.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ defined()</text>
+<text text-anchor="start" x="631.5" y="-834.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
+<text text-anchor="start" x="631.5" y="-823.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator&#45;&gt;()</text>
+<text text-anchor="start" x="631.5" y="-812.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
+<text text-anchor="start" x="631.5" y="-801.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ use_count()</text>
+<text text-anchor="start" x="631.5" y="-790.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ as()</text>
+<text text-anchor="start" x="631.5" y="-779.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># get_mutable()</text>
+<text text-anchor="start" x="631.5" y="-768.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DowncastNoCheck()</text>
+<text text-anchor="start" x="631.5" y="-757.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># FFIClearAfterMove()</text>
+<text text-anchor="start" x="631.5" y="-746.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetDataPtr()</text>
 </a>
 </g>
 </g>
 <!-- Node5&#45;&gt;Node4 -->
 <g id="edge4" class="edge">
 <title>Node5&#45;&gt;Node4</title>
-<path fill="none" stroke="#191970" d="M614.0677,-807.3492C561.4482,-782.0684 492.9269,-742.5202 446.5,-691 409.0695,-649.4631 383.9171,-589.9977 369.5053,-547.8742"/>
-<polygon fill="none" stroke="#191970" points="612.8293,-810.6353 623.3655,-811.7481 615.8229,-804.3077 612.8293,-810.6353"/>
+<path fill="none" stroke="#191970" d="M614.0677,-818.3492C561.4482,-793.0684 492.9269,-753.5202 446.5,-702 409.0695,-660.4631 383.9171,-600.9977 369.5053,-558.8742"/>
+<polygon fill="none" stroke="#191970" points="612.8293,-821.6353 623.3655,-822.7481 615.8229,-815.3077 612.8293,-821.6353"/>
 </g>
 <!-- Node7 -->
 <g id="node6" class="node">
 <title>Node7</title>
 <g id="a_node6"><a xlink:href="classtvm_1_1runtime_1_1Map.html" target="_top" xlink:title="{tvm::runtime::Map\&lt;\l tvm::runtime::String,\l tvm::runtime::ObjectRef \&gt;\n||+ Map()\l+ Map()\l+ Map()\l+ Map()\l+ Map()\l+ Map()\l+ Map()\l+ operator=()\l+ operator=()\l+ at()\land 12 more...\l}">
-<polygon fill="#ffffff" stroke="#000000" points="455.5,-402.5 455.5,-591.5 603.5,-591.5 603.5,-402.5 455.5,-402.5"/>
-<text text-anchor="start" x="463.5" y="-579.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Map&lt;</text>
-<text text-anchor="start" x="463.5" y="-568.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> tvm::runtime::String,</text>
-<text text-anchor="middle" x="529.5" y="-557.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> tvm::runtime::ObjectRef &gt;</text>
-<polyline fill="none" stroke="#000000" points="455.5,-550.5 603.5,-550.5 "/>
-<text text-anchor="middle" x="529.5" y="-538.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="455.5,-531.5 603.5,-531.5 "/>
+<polygon fill="#ffffff" stroke="#000000" points="455.5,-413.5 455.5,-602.5 603.5,-602.5 603.5,-413.5 455.5,-413.5"/>
+<text text-anchor="start" x="463.5" y="-590.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Map&lt;</text>
+<text text-anchor="start" x="463.5" y="-579.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> tvm::runtime::String,</text>
+<text text-anchor="middle" x="529.5" y="-568.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> tvm::runtime::ObjectRef &gt;</text>
+<polyline fill="none" stroke="#000000" points="455.5,-561.5 603.5,-561.5 "/>
+<text text-anchor="middle" x="529.5" y="-549.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="455.5,-542.5 603.5,-542.5 "/>
+<text text-anchor="start" x="463.5" y="-530.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
 <text text-anchor="start" x="463.5" y="-519.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
 <text text-anchor="start" x="463.5" y="-508.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
 <text text-anchor="start" x="463.5" y="-497.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
 <text text-anchor="start" x="463.5" y="-486.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
 <text text-anchor="start" x="463.5" y="-475.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
 <text text-anchor="start" x="463.5" y="-464.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
-<text text-anchor="start" x="463.5" y="-453.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
+<text text-anchor="start" x="463.5" y="-453.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
 <text text-anchor="start" x="463.5" y="-442.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="463.5" y="-431.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="463.5" y="-420.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ at()</text>
-<text text-anchor="start" x="463.5" y="-409.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 12 more...</text>
+<text text-anchor="start" x="463.5" y="-431.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ at()</text>
+<text text-anchor="start" x="463.5" y="-420.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 12 more...</text>
 </a>
 </g>
 </g>
 <!-- Node5&#45;&gt;Node7 -->
 <g id="edge7" class="edge">
 <title>Node5&#45;&gt;Node7</title>
-<path fill="none" stroke="#191970" d="M627.2715,-719.2883C622.5323,-709.7561 617.8911,-700.2436 613.5,-691 598.3026,-659.0086 582.6268,-623.5695 568.9926,-591.8278"/>
-<polygon fill="none" stroke="#191970" points="624.2375,-721.0466 631.8404,-728.4254 630.4984,-717.9159 624.2375,-721.0466"/>
+<path fill="none" stroke="#191970" d="M627.2715,-730.2883C622.5323,-720.7561 617.8911,-711.2436 613.5,-702 598.3026,-670.0086 582.6268,-634.5695 568.9926,-602.8278"/>
+<polygon fill="none" stroke="#191970" points="624.2375,-732.0466 631.8404,-739.4254 630.4984,-728.9159 624.2375,-732.0466"/>
 </g>
 <!-- Node8 -->
 <g id="node7" class="node">
 <title>Node8</title>
 <g id="a_node7"><a xlink:href="classtvm_1_1runtime_1_1Array.html" target="_top" xlink:title="{tvm::runtime::Array\l\&lt; tvm::runtime::String \&gt;\n||+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ operator=()\l+ operator=()\land 25 more...\l}">
-<polygon fill="#ffffff" stroke="#000000" points="622,-408 622,-586 759,-586 759,-408 622,-408"/>
-<text text-anchor="start" x="630" y="-574" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Array</text>
-<text text-anchor="middle" x="690.5" y="-563" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">&lt; tvm::runtime::String &gt;</text>
-<polyline fill="none" stroke="#000000" points="622,-556 759,-556 "/>
-<text text-anchor="middle" x="690.5" y="-544" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="622,-537 759,-537 "/>
+<polygon fill="#ffffff" stroke="#000000" points="622,-419 622,-597 759,-597 759,-419 622,-419"/>
+<text text-anchor="start" x="630" y="-585" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Array</text>
+<text text-anchor="middle" x="690.5" y="-574" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">&lt; tvm::runtime::String &gt;</text>
+<polyline fill="none" stroke="#000000" points="622,-567 759,-567 "/>
+<text text-anchor="middle" x="690.5" y="-555" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="622,-548 759,-548 "/>
+<text text-anchor="start" x="630" y="-536" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
 <text text-anchor="start" x="630" y="-525" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
 <text text-anchor="start" x="630" y="-514" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
 <text text-anchor="start" x="630" y="-503" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
@@ -198,60 +200,60 @@
 <text text-anchor="start" x="630" y="-481" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
 <text text-anchor="start" x="630" y="-470" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
 <text text-anchor="start" x="630" y="-459" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="630" y="-448" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="630" y="-448" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
 <text text-anchor="start" x="630" y="-437" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="630" y="-426" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="630" y="-415" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 25 more...</text>
+<text text-anchor="start" x="630" y="-426" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 25 more...</text>
 </a>
 </g>
 </g>
 <!-- Node5&#45;&gt;Node8 -->
 <g id="edge9" class="edge">
 <title>Node5&#45;&gt;Node8</title>
-<path fill="none" stroke="#191970" d="M690.5,-718.2298C690.5,-674.7434 690.5,-626.5445 690.5,-586.2656"/>
-<polygon fill="none" stroke="#191970" points="687.0001,-718.3 690.5,-728.3001 694.0001,-718.3001 687.0001,-718.3"/>
+<path fill="none" stroke="#191970" d="M690.5,-729.2298C690.5,-685.7434 690.5,-637.5445 690.5,-597.2656"/>
+<polygon fill="none" stroke="#191970" points="687.0001,-729.3 690.5,-739.3001 694.0001,-729.3001 687.0001,-729.3"/>
 </g>
 <!-- Node9 -->
 <g id="node8" class="node">
 <title>Node9</title>
 <g id="a_node8"><a xlink:href="classtvm_1_1runtime_1_1String.html" target="_top" xlink:title="Reference to string objects. ">
-<polygon fill="#ffffff" stroke="#000000" points="777.5,-402.5 777.5,-591.5 893.5,-591.5 893.5,-402.5 777.5,-402.5"/>
-<text text-anchor="middle" x="835.5" y="-579.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::String</text>
-<polyline fill="none" stroke="#000000" points="777.5,-572.5 893.5,-572.5 "/>
-<text text-anchor="middle" x="835.5" y="-560.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="777.5,-553.5 893.5,-553.5 "/>
+<polygon fill="#ffffff" stroke="#000000" points="777.5,-413.5 777.5,-602.5 893.5,-602.5 893.5,-413.5 777.5,-413.5"/>
+<text text-anchor="middle" x="835.5" y="-590.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::String</text>
+<polyline fill="none" stroke="#000000" points="777.5,-583.5 893.5,-583.5 "/>
+<text text-anchor="middle" x="835.5" y="-571.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="777.5,-564.5 893.5,-564.5 "/>
+<text text-anchor="start" x="785.5" y="-552.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ String()</text>
 <text text-anchor="start" x="785.5" y="-541.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ String()</text>
 <text text-anchor="start" x="785.5" y="-530.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ String()</text>
 <text text-anchor="start" x="785.5" y="-519.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ String()</text>
-<text text-anchor="start" x="785.5" y="-508.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ String()</text>
+<text text-anchor="start" x="785.5" y="-508.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
 <text text-anchor="start" x="785.5" y="-497.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="785.5" y="-486.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="785.5" y="-486.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ compare()</text>
 <text text-anchor="start" x="785.5" y="-475.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ compare()</text>
 <text text-anchor="start" x="785.5" y="-464.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ compare()</text>
-<text text-anchor="start" x="785.5" y="-453.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ compare()</text>
-<text text-anchor="start" x="785.5" y="-442.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ c_str()</text>
-<text text-anchor="start" x="785.5" y="-431.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 6 more...</text>
-<text text-anchor="start" x="785.5" y="-420.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ CanConvertFrom()</text>
-<text text-anchor="start" x="785.5" y="-409.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ HashBytes()</text>
+<text text-anchor="start" x="785.5" y="-453.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ c_str()</text>
+<text text-anchor="start" x="785.5" y="-442.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 6 more...</text>
+<text text-anchor="start" x="785.5" y="-431.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ CanConvertFrom()</text>
+<text text-anchor="start" x="785.5" y="-420.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ HashBytes()</text>
 </a>
 </g>
 </g>
 <!-- Node5&#45;&gt;Node9 -->
 <g id="edge11" class="edge">
 <title>Node5&#45;&gt;Node9</title>
-<path fill="none" stroke="#191970" d="M755.5363,-719.3697C760.0935,-709.8516 764.4713,-700.3179 768.5,-691 782.2168,-659.2747 795.1581,-623.7387 805.9421,-591.8345"/>
-<polygon fill="none" stroke="#191970" points="752.3312,-717.9578 751.1177,-728.4829 758.6299,-721.0118 752.3312,-717.9578"/>
+<path fill="none" stroke="#191970" d="M755.5363,-730.3697C760.0935,-720.8516 764.4713,-711.3179 768.5,-702 782.2168,-670.2747 795.1581,-634.7387 805.9421,-602.8345"/>
+<polygon fill="none" stroke="#191970" points="752.3312,-728.9578 751.1177,-739.4829 758.6299,-732.0118 752.3312,-728.9578"/>
 </g>
 <!-- Node10 -->
 <g id="node9" class="node">
 <title>Node10</title>
 <g id="a_node9"><a xlink:href="classtvm_1_1runtime_1_1Optional.html" target="_top" xlink:title="{tvm::runtime::Optional\l\&lt; tvm::runtime::ObjectRef \&gt;\n|+ _type_is_nullable\l|+ Optional()\l+ Optional()\l+ Optional()\l+ Optional()\l+ Optional()\l+ Optional()\l+ Optional()\l+ Optional()\l+ operator=()\l+ operator=()\land 15 more...\l}">
-<polygon fill="#ffffff" stroke="#000000" points="911.5,-408 911.5,-586 1067.5,-586 1067.5,-408 911.5,-408"/>
-<text text-anchor="start" x="919.5" y="-574" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Optional</text>
-<text text-anchor="middle" x="989.5" y="-563" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">&lt; tvm::runtime::ObjectRef &gt;</text>
-<polyline fill="none" stroke="#000000" points="911.5,-556 1067.5,-556 "/>
-<text text-anchor="start" x="919.5" y="-544" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
-<polyline fill="none" stroke="#000000" points="911.5,-537 1067.5,-537 "/>
+<polygon fill="#ffffff" stroke="#000000" points="911.5,-419 911.5,-597 1067.5,-597 1067.5,-419 911.5,-419"/>
+<text text-anchor="start" x="919.5" y="-585" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Optional</text>
+<text text-anchor="middle" x="989.5" y="-574" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">&lt; tvm::runtime::ObjectRef &gt;</text>
+<polyline fill="none" stroke="#000000" points="911.5,-567 1067.5,-567 "/>
+<text text-anchor="start" x="919.5" y="-555" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
+<polyline fill="none" stroke="#000000" points="911.5,-548 1067.5,-548 "/>
+<text text-anchor="start" x="919.5" y="-536" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
 <text text-anchor="start" x="919.5" y="-525" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
 <text text-anchor="start" x="919.5" y="-514" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
 <text text-anchor="start" x="919.5" y="-503" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
@@ -259,78 +261,77 @@
 <text text-anchor="start" x="919.5" y="-481" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
 <text text-anchor="start" x="919.5" y="-470" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
 <text text-anchor="start" x="919.5" y="-459" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
-<text text-anchor="start" x="919.5" y="-448" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
+<text text-anchor="start" x="919.5" y="-448" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
 <text text-anchor="start" x="919.5" y="-437" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="919.5" y="-426" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="919.5" y="-415" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 15 more...</text>
+<text text-anchor="start" x="919.5" y="-426" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 15 more...</text>
 </a>
 </g>
 </g>
 <!-- Node5&#45;&gt;Node10 -->
 <g id="edge13" class="edge">
 <title>Node5&#45;&gt;Node10</title>
-<path fill="none" stroke="#191970" d="M766.6513,-799.9034C810.9372,-773.8013 865.1956,-736.1383 902.5,-691 927.7784,-660.4131 947.168,-621.3056 961.157,-586.2269"/>
-<polygon fill="none" stroke="#191970" points="764.8226,-796.918 757.935,-804.9685 768.3397,-802.9703 764.8226,-796.918"/>
+<path fill="none" stroke="#191970" d="M766.6513,-810.9034C810.9372,-784.8013 865.1956,-747.1383 902.5,-702 927.7784,-671.4131 947.168,-632.3056 961.157,-597.2269"/>
+<polygon fill="none" stroke="#191970" points="764.8226,-807.918 757.935,-815.9685 768.3397,-813.9703 764.8226,-807.918"/>
 </g>
 <!-- Node6 -->
 <g id="node5" class="node">
 <title>Node6</title>
 <g id="a_node5"><a xlink:href="classtvm_1_1runtime_1_1ObjectPtr.html" target="_top" xlink:title="{tvm::runtime::ObjectPtr\l\&lt; tvm::runtime::Object \&gt;\n||+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ~ObjectPtr()\l+ swap()\l+ get()\l+ operator&#45;\&gt;()\land 11 more...\l}">
-<polygon fill="#ffffff" stroke="#000000" points="620.5,-998.5 620.5,-1176.5 760.5,-1176.5 760.5,-998.5 620.5,-998.5"/>
-<text text-anchor="start" x="628.5" y="-1164.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectPtr</text>
-<text text-anchor="middle" x="690.5" y="-1153.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">&lt; tvm::runtime::Object &gt;</text>
-<polyline fill="none" stroke="#000000" points="620.5,-1146.5 760.5,-1146.5 "/>
-<text text-anchor="middle" x="690.5" y="-1134.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="620.5,-1127.5 760.5,-1127.5 "/>
+<polygon fill="#ffffff" stroke="#000000" points="620.5,-1009.5 620.5,-1187.5 760.5,-1187.5 760.5,-1009.5 620.5,-1009.5"/>
+<text text-anchor="start" x="628.5" y="-1175.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectPtr</text>
+<text text-anchor="middle" x="690.5" y="-1164.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">&lt; tvm::runtime::Object &gt;</text>
+<polyline fill="none" stroke="#000000" points="620.5,-1157.5 760.5,-1157.5 "/>
+<text text-anchor="middle" x="690.5" y="-1145.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="620.5,-1138.5 760.5,-1138.5 "/>
+<text text-anchor="start" x="628.5" y="-1126.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
 <text text-anchor="start" x="628.5" y="-1115.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
 <text text-anchor="start" x="628.5" y="-1104.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
 <text text-anchor="start" x="628.5" y="-1093.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
 <text text-anchor="start" x="628.5" y="-1082.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
 <text text-anchor="start" x="628.5" y="-1071.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
-<text text-anchor="start" x="628.5" y="-1060.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
-<text text-anchor="start" x="628.5" y="-1049.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ~ObjectPtr()</text>
-<text text-anchor="start" x="628.5" y="-1038.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ swap()</text>
-<text text-anchor="start" x="628.5" y="-1027.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
-<text text-anchor="start" x="628.5" y="-1016.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator&#45;&gt;()</text>
-<text text-anchor="start" x="628.5" y="-1005.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 11 more...</text>
+<text text-anchor="start" x="628.5" y="-1060.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ~ObjectPtr()</text>
+<text text-anchor="start" x="628.5" y="-1049.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ swap()</text>
+<text text-anchor="start" x="628.5" y="-1038.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
+<text text-anchor="start" x="628.5" y="-1027.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator&#45;&gt;()</text>
+<text text-anchor="start" x="628.5" y="-1016.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 11 more...</text>
 </a>
 </g>
 </g>
 <!-- Node6&#45;&gt;Node5 -->
 <g id="edge5" class="edge">
 <title>Node6&#45;&gt;Node5</title>
-<path fill="none" stroke="#404040" d="M690.5,-998.3167C690.5,-986.8765 690.5,-975.0062 690.5,-963.1402"/>
-<polygon fill="none" stroke="#404040" points="690.5001,-962.7944 686.5,-956.7944 690.5,-950.7944 694.5,-956.7943 690.5001,-962.7944"/>
-<text text-anchor="middle" x="710" y="-972" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> #data_</text>
+<path fill="none" stroke="#404040" d="M690.5,-1009.3167C690.5,-997.8765 690.5,-986.0062 690.5,-974.1402"/>
+<polygon fill="none" stroke="#404040" points="690.5001,-973.7944 686.5,-967.7944 690.5,-961.7944 694.5,-967.7943 690.5001,-973.7944"/>
+<text text-anchor="middle" x="710" y="-983" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> #data_</text>
 </g>
 <!-- Node7&#45;&gt;Node2 -->
 <g id="edge6" class="edge">
 <title>Node7&#45;&gt;Node2</title>
-<path fill="none" stroke="#404040" d="M549.6874,-402.4978C559.1378,-358.2582 570.5621,-304.778 580.8834,-256.4614"/>
-<polygon fill="none" stroke="#404040" points="580.9053,-256.3585 578.247,-249.6553 583.4122,-244.6233 586.0705,-251.3266 580.9053,-256.3585"/>
-<text text-anchor="middle" x="603" y="-277" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +attrs</text>
-<text text-anchor="middle" x="603" y="-266" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+features</text>
+<path fill="none" stroke="#404040" d="M549.481,-413.0902C558.7375,-369.1221 569.921,-316.0005 580.1134,-267.5864"/>
+<polygon fill="none" stroke="#404040" points="580.1405,-267.4572 577.4624,-260.7618 582.6127,-255.7146 585.2908,-262.41 580.1405,-267.4572"/>
+<text text-anchor="middle" x="601" y="-288" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +attrs</text>
+<text text-anchor="middle" x="601" y="-277" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+features</text>
 </g>
 <!-- Node8&#45;&gt;Node2 -->
 <g id="edge8" class="edge">
 <title>Node8&#45;&gt;Node2</title>
-<path fill="none" stroke="#404040" d="M671.2014,-407.7737C661.4327,-362.6084 649.3721,-306.847 638.5128,-256.6393"/>
-<polygon fill="none" stroke="#404040" points="638.4629,-256.4082 633.2848,-251.3895 635.926,-244.6794 641.104,-249.6982 638.4629,-256.4082"/>
-<text text-anchor="middle" x="660" y="-271.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +keys</text>
+<path fill="none" stroke="#404040" d="M671.5091,-418.9069C661.9205,-373.9231 650.0658,-318.3085 639.2983,-267.7943"/>
+<polygon fill="none" stroke="#404040" points="639.2331,-267.4881 634.0701,-262.4539 636.7313,-255.7518 641.8943,-260.786 639.2331,-267.4881"/>
+<text text-anchor="middle" x="662" y="-282.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +keys</text>
 </g>
 <!-- Node9&#45;&gt;Node2 -->
 <g id="edge10" class="edge">
 <title>Node9&#45;&gt;Node2</title>
-<path fill="none" stroke="#404040" d="M811.0004,-402.489C800.4362,-369.7663 786.3421,-333.6378 768.5,-303 755.3782,-280.4678 739.199,-258.1895 722.2342,-237.3816"/>
-<polygon fill="none" stroke="#404040" points="722.0543,-237.1647 715.1448,-235.1008 714.3924,-227.929 721.3019,-229.9929 722.0543,-237.1647"/>
-<text text-anchor="middle" x="768.5" y="-271.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +tag</text>
+<path fill="none" stroke="#404040" d="M810.5848,-413.1433C800.0161,-380.564 786.0255,-344.6149 768.5,-314 755.2169,-290.7961 738.8551,-267.7595 721.731,-246.2104"/>
+<polygon fill="none" stroke="#404040" points="721.6265,-246.081 714.7447,-243.9268 714.0863,-236.7458 720.9681,-238.9 721.6265,-246.081"/>
+<text text-anchor="middle" x="768.5" y="-282.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +tag</text>
 </g>
 <!-- Node10&#45;&gt;Node2 -->
 <g id="edge12" class="edge">
 <title>Node10&#45;&gt;Node2</title>
-<path fill="none" stroke="#404040" d="M962.4405,-407.9893C948.5632,-372.5414 928.9117,-333.1066 902.5,-303 854.0022,-247.7176 784.4045,-203.8678 725.2092,-173.2371"/>
-<polygon fill="none" stroke="#404040" points="724.9771,-173.1189 717.8151,-173.9594 714.2846,-167.6717 721.4466,-166.8311 724.9771,-173.1189"/>
-<text text-anchor="middle" x="899" y="-271.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +host</text>
+<path fill="none" stroke="#404040" d="M962.045,-418.6761C948.1475,-383.3825 928.5872,-344.1446 902.5,-314 853.9684,-257.9204 784.3695,-212.7685 725.1819,-181.0006"/>
+<polygon fill="none" stroke="#404040" points="724.8672,-180.8342 717.6933,-181.5657 714.2589,-175.225 721.4328,-174.4935 724.8672,-180.8342"/>
+<text text-anchor="middle" x="899" y="-282.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +host</text>
 </g>
 </g>
 </svg>
diff --git a/docs/reference/api/doxygen/classtvm_1_1TargetNode__inherit__graph.svg b/docs/reference/api/doxygen/classtvm_1_1TargetNode__inherit__graph.svg
index 011f055506..573cd46fee 100644
--- a/docs/reference/api/doxygen/classtvm_1_1TargetNode__inherit__graph.svg
+++ b/docs/reference/api/doxygen/classtvm_1_1TargetNode__inherit__graph.svg
@@ -4,32 +4,33 @@
 <!-- Generated by graphviz version 2.40.1 (20161225.0304)
  -->
 <!-- Title: tvm::TargetNode Pages: 1 -->
-<svg width="217pt" height="754pt"
- viewBox="0.00 0.00 217.00 754.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
-<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 750)">
+<svg width="217pt" height="765pt"
+ viewBox="0.00 0.00 217.00 765.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 761)">
 <title>tvm::TargetNode</title>
-<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-750 213,-750 213,4 -4,4"/>
+<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-761 213,-761 213,4 -4,4"/>
 <!-- Node0 -->
 <g id="node1" class="node">
 <title>Node0</title>
-<polygon fill="#bfbfbf" stroke="#000000" points="0,-.5 0,-310.5 209,-310.5 209,-.5 0,-.5"/>
-<text text-anchor="middle" x="104.5" y="-298.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::TargetNode</text>
-<polyline fill="none" stroke="#000000" points="0,-291.5 209,-291.5 "/>
-<text text-anchor="start" x="8" y="-279.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ kind</text>
-<text text-anchor="start" x="8" y="-268.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ host</text>
-<text text-anchor="start" x="8" y="-257.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ tag</text>
-<text text-anchor="start" x="8" y="-246.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ keys</text>
-<text text-anchor="start" x="8" y="-235.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ attrs</text>
-<text text-anchor="start" x="8" y="-224.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ features</text>
-<text text-anchor="start" x="8" y="-213.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
-<text text-anchor="start" x="8" y="-202.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_sequal</text>
-<text text-anchor="start" x="8" y="-191.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
-<text text-anchor="start" x="8" y="-180.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_shash</text>
-<text text-anchor="start" x="8" y="-169.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
-<polyline fill="none" stroke="#000000" points="0,-162.5 209,-162.5 "/>
-<text text-anchor="start" x="8" y="-150.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ str()</text>
-<text text-anchor="start" x="8" y="-139.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Export()</text>
-<text text-anchor="start" x="8" y="-128.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetHost()</text>
+<polygon fill="#bfbfbf" stroke="#000000" points="0,-.5 0,-321.5 209,-321.5 209,-.5 0,-.5"/>
+<text text-anchor="middle" x="104.5" y="-309.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::TargetNode</text>
+<polyline fill="none" stroke="#000000" points="0,-302.5 209,-302.5 "/>
+<text text-anchor="start" x="8" y="-290.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ kind</text>
+<text text-anchor="start" x="8" y="-279.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ host</text>
+<text text-anchor="start" x="8" y="-268.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ tag</text>
+<text text-anchor="start" x="8" y="-257.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ keys</text>
+<text text-anchor="start" x="8" y="-246.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ attrs</text>
+<text text-anchor="start" x="8" y="-235.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ features</text>
+<text text-anchor="start" x="8" y="-224.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
+<text text-anchor="start" x="8" y="-213.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_sequal</text>
+<text text-anchor="start" x="8" y="-202.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
+<text text-anchor="start" x="8" y="-191.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_shash</text>
+<text text-anchor="start" x="8" y="-180.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
+<polyline fill="none" stroke="#000000" points="0,-173.5 209,-173.5 "/>
+<text text-anchor="start" x="8" y="-161.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ str()</text>
+<text text-anchor="start" x="8" y="-150.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Export()</text>
+<text text-anchor="start" x="8" y="-139.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetHost()</text>
+<text text-anchor="start" x="8" y="-128.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetTargetDeviceType()</text>
 <text text-anchor="start" x="8" y="-117.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ToDebugString()</text>
 <text text-anchor="start" x="8" y="-106.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ VisitAttrs()</text>
 <text text-anchor="start" x="8" y="-95.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetAttr()</text>
@@ -46,51 +47,51 @@
 <g id="node2" class="node">
 <title>Node1</title>
 <g id="a_node2"><a xlink:href="classtvm_1_1runtime_1_1Object.html" target="_top" xlink:title="base class of all object containers. ">
-<polygon fill="#ffffff" stroke="#000000" points="13,-347.5 13,-745.5 196,-745.5 196,-347.5 13,-347.5"/>
-<text text-anchor="middle" x="104.5" y="-733.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Object</text>
-<polyline fill="none" stroke="#000000" points="13,-726.5 196,-726.5 "/>
-<text text-anchor="start" x="21" y="-714.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
-<text text-anchor="start" x="21" y="-703.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_final</text>
-<text text-anchor="start" x="21" y="-692.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_child_slots</text>
-<text text-anchor="start" x="21" y="-681.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_child_slots_can</text>
-<text text-anchor="start" x="21" y="-670.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_overflow</text>
-<text text-anchor="start" x="21" y="-659.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_visit</text>
-<text text-anchor="start" x="21" y="-648.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_attrs</text>
-<text text-anchor="start" x="21" y="-637.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_sequal</text>
-<text text-anchor="start" x="21" y="-626.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
-<text text-anchor="start" x="21" y="-615.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_shash</text>
-<text text-anchor="start" x="21" y="-604.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
-<text text-anchor="start" x="21" y="-593.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_index</text>
-<text text-anchor="start" x="21" y="-582.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># type_index_</text>
-<text text-anchor="start" x="21" y="-571.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># ref_counter_</text>
-<text text-anchor="start" x="21" y="-560.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># deleter_</text>
-<polyline fill="none" stroke="#000000" points="13,-553.5 196,-553.5 "/>
-<text text-anchor="start" x="21" y="-541.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ type_index()</text>
-<text text-anchor="start" x="21" y="-530.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetTypeKey()</text>
-<text text-anchor="start" x="21" y="-519.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetTypeKeyHash()</text>
-<text text-anchor="start" x="21" y="-508.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ IsInstance()</text>
-<text text-anchor="start" x="21" y="-497.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
+<polygon fill="#ffffff" stroke="#000000" points="13,-358.5 13,-756.5 196,-756.5 196,-358.5 13,-358.5"/>
+<text text-anchor="middle" x="104.5" y="-744.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Object</text>
+<polyline fill="none" stroke="#000000" points="13,-737.5 196,-737.5 "/>
+<text text-anchor="start" x="21" y="-725.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
+<text text-anchor="start" x="21" y="-714.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_final</text>
+<text text-anchor="start" x="21" y="-703.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_child_slots</text>
+<text text-anchor="start" x="21" y="-692.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_child_slots_can</text>
+<text text-anchor="start" x="21" y="-681.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_overflow</text>
+<text text-anchor="start" x="21" y="-670.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_visit</text>
+<text text-anchor="start" x="21" y="-659.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_attrs</text>
+<text text-anchor="start" x="21" y="-648.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_sequal</text>
+<text text-anchor="start" x="21" y="-637.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
+<text text-anchor="start" x="21" y="-626.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_shash</text>
+<text text-anchor="start" x="21" y="-615.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
+<text text-anchor="start" x="21" y="-604.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_index</text>
+<text text-anchor="start" x="21" y="-593.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># type_index_</text>
+<text text-anchor="start" x="21" y="-582.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># ref_counter_</text>
+<text text-anchor="start" x="21" y="-571.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># deleter_</text>
+<polyline fill="none" stroke="#000000" points="13,-564.5 196,-564.5 "/>
+<text text-anchor="start" x="21" y="-552.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ type_index()</text>
+<text text-anchor="start" x="21" y="-541.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetTypeKey()</text>
+<text text-anchor="start" x="21" y="-530.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetTypeKeyHash()</text>
+<text text-anchor="start" x="21" y="-519.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ IsInstance()</text>
+<text text-anchor="start" x="21" y="-508.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
+<text text-anchor="start" x="21" y="-497.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
 <text text-anchor="start" x="21" y="-486.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
 <text text-anchor="start" x="21" y="-475.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
-<text text-anchor="start" x="21" y="-464.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
+<text text-anchor="start" x="21" y="-464.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
 <text text-anchor="start" x="21" y="-453.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="21" y="-442.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="21" y="-431.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeIndex2Key()</text>
-<text text-anchor="start" x="21" y="-420.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeIndex2KeyHash()</text>
-<text text-anchor="start" x="21" y="-409.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeKey2Index()</text>
-<text text-anchor="start" x="21" y="-398.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _GetOrAllocRuntimeTypeIndex()</text>
-<text text-anchor="start" x="21" y="-387.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ RuntimeTypeIndex()</text>
-<text text-anchor="start" x="21" y="-376.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># IncRef()</text>
-<text text-anchor="start" x="21" y="-365.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DecRef()</text>
-<text text-anchor="start" x="21" y="-354.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetOrAllocRuntimeTypeIndex()</text>
+<text text-anchor="start" x="21" y="-442.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeIndex2Key()</text>
+<text text-anchor="start" x="21" y="-431.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeIndex2KeyHash()</text>
+<text text-anchor="start" x="21" y="-420.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeKey2Index()</text>
+<text text-anchor="start" x="21" y="-409.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _GetOrAllocRuntimeTypeIndex()</text>
+<text text-anchor="start" x="21" y="-398.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ RuntimeTypeIndex()</text>
+<text text-anchor="start" x="21" y="-387.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># IncRef()</text>
+<text text-anchor="start" x="21" y="-376.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DecRef()</text>
+<text text-anchor="start" x="21" y="-365.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetOrAllocRuntimeTypeIndex()</text>
 </a>
 </g>
 </g>
 <!-- Node1&#45;&gt;Node0 -->
 <g id="edge1" class="edge">
 <title>Node1&#45;&gt;Node0</title>
-<path fill="none" stroke="#191970" d="M104.5,-337.1595C104.5,-328.2091 104.5,-319.2976 104.5,-310.5005"/>
-<polygon fill="none" stroke="#191970" points="101.0001,-337.2773 104.5,-347.2773 108.0001,-337.2773 101.0001,-337.2773"/>
+<path fill="none" stroke="#191970" d="M104.5,-348.2494C104.5,-339.2855 104.5,-330.3517 104.5,-321.5216"/>
+<polygon fill="none" stroke="#191970" points="101.0001,-348.3788 104.5,-358.3788 108.0001,-348.3788 101.0001,-348.3788"/>
 </g>
 </g>
 </svg>
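
The class diagram above picks up the new TargetNode::GetTargetDeviceType() member. As a minimal illustration of how it is reached through a Target reference, a C++ sketch (the "cuda" target string and the wrapper function are assumptions for illustration, not part of the generated docs):

    #include <tvm/target/target.h>

    // Sketch: query the accessor that replaces target->kind->device_type.
    int ExampleDeviceType() {
      tvm::Target target("cuda");            // illustrative target string
      return target->GetTargetDeviceType();  // new TargetNode member
    }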
diff --git a/docs/reference/api/doxygen/classtvm_1_1VirtualDevice.html b/docs/reference/api/doxygen/classtvm_1_1VirtualDevice.html
index 3e7b19807b..c01fc0c17d 100644
--- a/docs/reference/api/doxygen/classtvm_1_1VirtualDevice.html
+++ b/docs/reference/api/doxygen/classtvm_1_1VirtualDevice.html
@@ -250,7 +250,7 @@ Additional Inherited Members</h2></td></tr>
 <p>Construct a virtual device. </p>
 <dl class="params"><dt>Parameters</dt><dd>
   <table class="params">
-    <tr><td class="paramname">device_type</td><td>The device type for the virtual device, or <code>kInvalidDeviceType</code> if unconstrained. If <code>target</code> is defined then must match its <code>target-&gt;kind-&gt;device_type</code>. </td></tr>
+    <tr><td class="paramname">device_type</td><td>The device type for the virtual device, or <code>kInvalidDeviceType</code> if unconstrained. If <code>target</code> is defined then must match its <code>target-&gt;GetTargetDeviceType()</code>. </td></tr>
     <tr><td class="paramname">virtual_device_id</td><td>The device id for the virtual device, or -1 if unconstrained. </td></tr>
     <tr><td class="paramname">target</td><td>The target describing how to compile for the virtual device, or null if unconstrained. </td></tr>
     <tr><td class="paramname">memory_scope</td><td>The memory scope w.r.t. the virtual device which holds data, or "" if unconstrained. </td></tr>
diff --git a/docs/reference/api/doxygen/classtvm_1_1VirtualDeviceNode.html b/docs/reference/api/doxygen/classtvm_1_1VirtualDeviceNode.html
index 6a7e1375dd..253b910223 100644
--- a/docs/reference/api/doxygen/classtvm_1_1VirtualDeviceNode.html
+++ b/docs/reference/api/doxygen/classtvm_1_1VirtualDeviceNode.html
@@ -255,7 +255,7 @@ Additional Inherited Members</h2></td></tr>
 <li>A <code>target</code> (<code><a class="el" href="classtvm_1_1Target.html" title="Managed reference class to TargetNode. ">Target</a></code>) describing how to compile code for the intended device. May be null if unconstrained.</li>
 <li>A <code>memory_scope</code> (<code>MemoryScope</code>, which is currently just <code>String</code>) describing which memory area is to be used to hold data. May be "" if unconstrained. See "Memory Scopes and Devices" below.</li>
 </ul>
-<p>Some or all of these fields may be unconstrained, signaling that device planning is free to choose a value consistent with the whole program. However if a <code>target</code> is given then the <code>device_type</code> must equal <code>target-&gt;kind-&gt;device_type</code>.</p>
+<p>Some or all of these fields may be unconstrained, signaling that device planning is free to choose a value consistent with the whole program. However, if a <code>target</code> is given, then the <code>device_type</code> must equal <code>target-&gt;GetTargetDeviceType()</code>.</p>
 <p>Note that currently we assume if a function returns its result on a particular (virtual) device then the function body is also executed on that device. See the overview comment in src/relay/transforms/device_planner.cc for more details.</p>
 <p>By 'data' we include both tensors and additional supporting data structures such as shapes, Relay ADT items (including tuples), Relay references, and Relay closures. Typically non-tensor data must reside on a 'CPU'-like host device with good support for scalars.</p>
 <p>By 'execution' we include both (fused) primitive operators and all the Relay expressions surrounding them which coordinate data and control flow. Again, typically non-primitive operators must be executed on a 'CPU'-like device with good support for control flow.</p>
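
The consistency rule described above reduces to a one-line predicate; a sketch assuming TVM's C++ API (the helper name is invented):

    #include <tvm/target/target.h>

    // Sketch: an undefined target leaves device_type unconstrained;
    // otherwise the two must agree.
    bool DeviceTypeConsistent(DLDeviceType device_type, const tvm::Target& target) {
      return !target.defined() ||
             static_cast<int>(device_type) == target->GetTargetDeviceType();
    }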
diff --git a/docs/reference/api/doxygen/codegen_8h_source.html b/docs/reference/api/doxygen/codegen_8h_source.html
index e8c5b02d15..6f0f5d4c43 100644
--- a/docs/reference/api/doxygen/codegen_8h_source.html
+++ b/docs/reference/api/doxygen/codegen_8h_source.html
@@ -72,7 +72,7 @@ $(function() {
 <div class="ttc" id="crt_2packed__func_8h_html_ad869d7c5618f982f6841399c216a234c"><div class="ttname"><a href="crt_2packed__func_8h.html#ad869d7c5618f982f6841399c216a234c">TVMArgs</a></div><div class="ttdeci">struct TVMArgs TVMArgs</div></div>
 <div class="ttc" id="tir_2expr_8h_html"><div class="ttname"><a href="tir_2expr_8h.html">expr.h</a></div><div class="ttdoc">TIR expressions. </div></div>
 <div class="ttc" id="namespacetvm_1_1codegen_html_ab2cd2a65bac4b26427a8ca0abe4e0bd6"><div class="ttname"><a href="namespacetvm_1_1codegen.html#ab2cd2a65bac4b26427a8ca0abe4e0bd6">tvm::codegen::PackImportsToLLVM</a></div><div class="ttdeci">runtime::Module PackImportsToLLVM(const runtime::Module &amp;m, bool system_lib, const std::string &amp;target_triple)</div><div class="ttdoc">Pack imported device library to a LLVM module. Compile the LLVM module and link with the host library...</div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="namespacetvm_1_1codegen_html_abf02059ebadcdb8bbbe5c840b646d67b"><div class="ttname"><a href="namespacetvm_1_1codegen.html#abf02059ebadcdb8bbbe5c840b646d67b">tvm::codegen::PackImportsToC</a></div><div class="ttdeci">std::string PackImportsToC(const runtime::Module &amp;m, bool system_lib)</div><div class="ttdoc">Pack imported device library to a C file. Compile the C file and link with the host library will allo...</div></div>
 <div class="ttc" id="classtvm_1_1IRModule_html"><div class="ttname"><a href="classtvm_1_1IRModule.html">tvm::IRModule</a></div><div class="ttdoc">Managed reference class to IRModuleNode. </div><div class="ttdef"><b>Definition:</b> module.h:352</div></div>
 <div class="ttc" id="namespacetvm_1_1codegen_html_a0d6322c2dda54a66a3b82022f5f3632c"><div class="ttname"><a href="namespacetvm_1_1codegen.html#a0d6322c2dda54a66a3b82022f5f3632c">tvm::codegen::Build</a></div><div class="ttdeci">runtime::Module Build(IRModule mod, Target target)</div><div class="ttdoc">Build a module from array of lowered function. </div></div>
diff --git a/docs/reference/api/doxygen/compilation__config_8h_source.html b/docs/reference/api/doxygen/compilation__config_8h_source.html
index a511df1651..0c28df1e80 100644
--- a/docs/reference/api/doxygen/compilation__config_8h_source.html
+++ b/docs/reference/api/doxygen/compilation__config_8h_source.html
@@ -84,7 +84,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1CompilationConfigNode_html_aad59e791b8292600a3d86ae182a85269"><div class="ttname"><a href="classtvm_1_1CompilationConfigNode.html#aad59e791b8292600a3d86ae182a85269">tvm::CompilationConfigNode::host_target</a></div><div class="ttdeci">Target host_target</div><div class="ttdoc">The host target. Used for &amp;#39;scalar&amp;#39; data and code (such as shapes and shape functions) and residual Re...</div><div class="ttdef"><b>Definition:</b> compilation_config [...]
 <div class="ttc" id="classtvm_1_1transform_1_1PassContext_html"><div class="ttname"><a href="classtvm_1_1transform_1_1PassContext.html">tvm::transform::PassContext</a></div><div class="ttdoc">PassContext that is used to configure the pass behavior. </div><div class="ttdef"><b>Definition:</b> transform.h:154</div></div>
 <div class="ttc" id="object_8h_html_ac6e7295a4999e2c8e4a2c990beca887a"><div class="ttname"><a href="object_8h.html#ac6e7295a4999e2c8e4a2c990beca887a">TVM_DEFINE_OBJECT_REF_METHODS</a></div><div class="ttdeci">#define TVM_DEFINE_OBJECT_REF_METHODS(TypeName, ParentType, ObjectName)</div><div class="ttdef"><b>Definition:</b> object.h:713</div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1ObjectRef_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></div><div class="ttdoc">Base class of all object reference. </div><div class="ttdef"><b>Definition:</b> object.h:511</div></div>
 <div class="ttc" id="classtvm_1_1CompilationConfigNode_html_abe4569cf32c57b710be99b50e7118876"><div class="ttname"><a href="classtvm_1_1CompilationConfigNode.html#abe4569cf32c57b710be99b50e7118876">tvm::CompilationConfigNode::default_primitive_virtual_device</a></div><div class="ttdeci">VirtualDevice default_primitive_virtual_device</div><div class="ttdoc">VirtualDevice for primitive operators which are not otherwise constrained to a particular device...</div><div class="ttdef"><b>Defini [...]
 <div class="ttc" id="object_8h_html_a3aea9b3f65aeb9150c0fa7800e5573c6"><div class="ttname"><a href="object_8h.html#a3aea9b3f65aeb9150c0fa7800e5573c6">TVM_DECLARE_FINAL_OBJECT_INFO</a></div><div class="ttdeci">#define TVM_DECLARE_FINAL_OBJECT_INFO(TypeName, ParentType)</div><div class="ttdoc">helper macro to declare type information in a final class. </div><div class="ttdef"><b>Definition:</b> object.h:671</div></div>
diff --git a/docs/reference/api/doxygen/cuda_2dense_8h_source.html b/docs/reference/api/doxygen/cuda_2dense_8h_source.html
index 021d8cfa49..c9acfc6f82 100644
--- a/docs/reference/api/doxygen/cuda_2dense_8h_source.html
+++ b/docs/reference/api/doxygen/cuda_2dense_8h_source.html
@@ -90,7 +90,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1te_1_1ComputeOpNode_html"><div class="ttname"><a href="classtvm_1_1te_1_1ComputeOpNode.html">tvm::te::ComputeOpNode</a></div><div class="ttdoc">A Compute op that compute a tensor on certain domain. </div><div class="ttdef"><b>Definition:</b> operation.h:226</div></div>
 <div class="ttc" id="classtvm_1_1te_1_1BaseComputeOpNode_html_a21617a643897727c51ded2b7260df4c3"><div class="ttname"><a href="classtvm_1_1te_1_1BaseComputeOpNode.html#a21617a643897727c51ded2b7260df4c3">tvm::te::BaseComputeOpNode::axis</a></div><div class="ttdeci">Array&lt; IterVar &gt; axis</div><div class="ttdoc">IterVar on each axis. </div><div class="ttdef"><b>Definition:</b> operation.h:207</div></div>
 <div class="ttc" id="namespacetvm_1_1te_html_aae384e9b73c2271905486e4a74b69265"><div class="ttname"><a href="namespacetvm_1_1te.html#aae384e9b73c2271905486e4a74b69265">tvm::te::reduce_axis</a></div><div class="ttdeci">IterVar reduce_axis(Range dom, std::string name=&quot;rv&quot;)</div><div class="ttdoc">Create a new IterVar for reduction operations. </div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="nn_2dense_8h_html"><div class="ttname"><a href="nn_2dense_8h.html">dense.h</a></div><div class="ttdoc">Dense op constructions. </div></div>
 <div class="ttc" id="classtvm_1_1te_1_1Tensor_html"><div class="ttname"><a href="classtvm_1_1te_1_1Tensor.html">tvm::te::Tensor</a></div><div class="ttdoc">Tensor structure representing a possible input, or intermediate computation result. </div><div class="ttdef"><b>Definition:</b> tensor.h:102</div></div>
 <div class="ttc" id="operation_8h_html"><div class="ttname"><a href="operation_8h.html">operation.h</a></div><div class="ttdoc">Operation node can generate one or multiple Tensors. </div></div>
diff --git a/docs/reference/api/doxygen/cuda_2injective_8h_source.html b/docs/reference/api/doxygen/cuda_2injective_8h_source.html
index 248cfc745f..08d8c77c69 100644
--- a/docs/reference/api/doxygen/cuda_2injective_8h_source.html
+++ b/docs/reference/api/doxygen/cuda_2injective_8h_source.html
@@ -82,7 +82,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1te_1_1ComputeOpNode_html"><div class="ttname"><a href="classtvm_1_1te_1_1ComputeOpNode.html">tvm::te::ComputeOpNode</a></div><div class="ttdoc">A Compute op that compute a tensor on certain domain. </div><div class="ttdef"><b>Definition:</b> operation.h:226</div></div>
 <div class="ttc" id="classtvm_1_1te_1_1BaseComputeOpNode_html_a21617a643897727c51ded2b7260df4c3"><div class="ttname"><a href="classtvm_1_1te_1_1BaseComputeOpNode.html#a21617a643897727c51ded2b7260df4c3">tvm::te::BaseComputeOpNode::axis</a></div><div class="ttdeci">Array&lt; IterVar &gt; axis</div><div class="ttdoc">IterVar on each axis. </div><div class="ttdef"><b>Definition:</b> operation.h:207</div></div>
 <div class="ttc" id="fuse_8h_html"><div class="ttname"><a href="fuse_8h.html">fuse.h</a></div><div class="ttdoc">Fuse operation. </div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="classtvm_1_1te_1_1Tensor_html"><div class="ttname"><a href="classtvm_1_1te_1_1Tensor.html">tvm::te::Tensor</a></div><div class="ttdoc">Tensor structure representing a possible input, or intermediate computation result. </div><div class="ttdef"><b>Definition:</b> tensor.h:102</div></div>
 <div class="ttc" id="operation_8h_html"><div class="ttname"><a href="operation_8h.html">operation.h</a></div><div class="ttdoc">Operation node can generate one or multiple Tensors. </div></div>
 <div class="ttc" id="classtvm_1_1te_1_1Fuse_html"><div class="ttname"><a href="classtvm_1_1te_1_1Fuse.html">tvm::te::Fuse</a></div><div class="ttdoc">Managed reference to FuseNode. </div><div class="ttdef"><b>Definition:</b> schedule.h:774</div></div>
diff --git a/docs/reference/api/doxygen/cuda_2pooling_8h_source.html b/docs/reference/api/doxygen/cuda_2pooling_8h_source.html
index 40763e1caf..07b8287c12 100644
--- a/docs/reference/api/doxygen/cuda_2pooling_8h_source.html
+++ b/docs/reference/api/doxygen/cuda_2pooling_8h_source.html
@@ -84,7 +84,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1te_1_1ComputeOpNode_html"><div class="ttname"><a href="classtvm_1_1te_1_1ComputeOpNode.html">tvm::te::ComputeOpNode</a></div><div class="ttdoc">A Compute op that compute a tensor on certain domain. </div><div class="ttdef"><b>Definition:</b> operation.h:226</div></div>
 <div class="ttc" id="classtvm_1_1te_1_1BaseComputeOpNode_html_a21617a643897727c51ded2b7260df4c3"><div class="ttname"><a href="classtvm_1_1te_1_1BaseComputeOpNode.html#a21617a643897727c51ded2b7260df4c3">tvm::te::BaseComputeOpNode::axis</a></div><div class="ttdeci">Array&lt; IterVar &gt; axis</div><div class="ttdoc">IterVar on each axis. </div><div class="ttdef"><b>Definition:</b> operation.h:207</div></div>
 <div class="ttc" id="fuse_8h_html"><div class="ttname"><a href="fuse_8h.html">fuse.h</a></div><div class="ttdoc">Fuse operation. </div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="classtvm_1_1te_1_1Tensor_html"><div class="ttname"><a href="classtvm_1_1te_1_1Tensor.html">tvm::te::Tensor</a></div><div class="ttdoc">Tensor structure representing a possible input, or intermediate computation result. </div><div class="ttdef"><b>Definition:</b> tensor.h:102</div></div>
 <div class="ttc" id="operation_8h_html"><div class="ttname"><a href="operation_8h.html">operation.h</a></div><div class="ttdoc">Operation node can generate one or multiple Tensors. </div></div>
 <div class="ttc" id="classtvm_1_1te_1_1Fuse_html"><div class="ttname"><a href="classtvm_1_1te_1_1Fuse.html">tvm::te::Fuse</a></div><div class="ttdoc">Managed reference to FuseNode. </div><div class="ttdef"><b>Definition:</b> schedule.h:774</div></div>
diff --git a/docs/reference/api/doxygen/cuda_2reduction_8h_source.html b/docs/reference/api/doxygen/cuda_2reduction_8h_source.html
index 46300c2206..72ae52014d 100644
--- a/docs/reference/api/doxygen/cuda_2reduction_8h_source.html
+++ b/docs/reference/api/doxygen/cuda_2reduction_8h_source.html
@@ -93,7 +93,7 @@ $(function() {
 <div class="ttc" id="namespacetvm_1_1te_html_aae384e9b73c2271905486e4a74b69265"><div class="ttname"><a href="namespacetvm_1_1te.html#aae384e9b73c2271905486e4a74b69265">tvm::te::reduce_axis</a></div><div class="ttdeci">IterVar reduce_axis(Range dom, std::string name=&quot;rv&quot;)</div><div class="ttdoc">Create a new IterVar for reduction operations. </div></div>
 <div class="ttc" id="fuse_8h_html"><div class="ttname"><a href="fuse_8h.html">fuse.h</a></div><div class="ttdoc">Fuse operation. </div></div>
 <div class="ttc" id="namespacetvm_1_1topi_html_a7b1acf424786ee187f0f19a725b85d8c"><div class="ttname"><a href="namespacetvm_1_1topi.html#a7b1acf424786ee187f0f19a725b85d8c">tvm::topi::kCommReduce</a></div><div class="ttdeci">constexpr auto kCommReduce</div><div class="ttdef"><b>Definition:</b> tags.h:34</div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="classtvm_1_1te_1_1Tensor_html"><div class="ttname"><a href="classtvm_1_1te_1_1Tensor.html">tvm::te::Tensor</a></div><div class="ttdoc">Tensor structure representing a possible input, or intermediate computation result. </div><div class="ttdef"><b>Definition:</b> tensor.h:102</div></div>
 <div class="ttc" id="operation_8h_html"><div class="ttname"><a href="operation_8h.html">operation.h</a></div><div class="ttdoc">Operation node can generate one or multiple Tensors. </div></div>
 <div class="ttc" id="classtvm_1_1te_1_1Fuse_html"><div class="ttname"><a href="classtvm_1_1te_1_1Fuse.html">tvm::te::Fuse</a></div><div class="ttdoc">Managed reference to FuseNode. </div><div class="ttdef"><b>Definition:</b> schedule.h:774</div></div>
diff --git a/docs/reference/api/doxygen/cuda_2softmax_8h_source.html b/docs/reference/api/doxygen/cuda_2softmax_8h_source.html
index 75511c81a5..5998697134 100644
--- a/docs/reference/api/doxygen/cuda_2softmax_8h_source.html
+++ b/docs/reference/api/doxygen/cuda_2softmax_8h_source.html
@@ -81,7 +81,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1te_1_1BaseComputeOpNode_html_a21617a643897727c51ded2b7260df4c3"><div class="ttname"><a href="classtvm_1_1te_1_1BaseComputeOpNode.html#a21617a643897727c51ded2b7260df4c3">tvm::te::BaseComputeOpNode::axis</a></div><div class="ttdeci">Array&lt; IterVar &gt; axis</div><div class="ttdoc">IterVar on each axis. </div><div class="ttdef"><b>Definition:</b> operation.h:207</div></div>
 <div class="ttc" id="namespacetvm_1_1te_html_aae384e9b73c2271905486e4a74b69265"><div class="ttname"><a href="namespacetvm_1_1te.html#aae384e9b73c2271905486e4a74b69265">tvm::te::reduce_axis</a></div><div class="ttdeci">IterVar reduce_axis(Range dom, std::string name=&quot;rv&quot;)</div><div class="ttdoc">Create a new IterVar for reduction operations. </div></div>
 <div class="ttc" id="fuse_8h_html"><div class="ttname"><a href="fuse_8h.html">fuse.h</a></div><div class="ttdoc">Fuse operation. </div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="namespacetvm_1_1topi_1_1rocm_html_ab71ce2b3685f0ce5f30d2d661c5e799b"><div class="ttname"><a href="namespacetvm_1_1topi_1_1rocm.html#ab71ce2b3685f0ce5f30d2d661c5e799b">tvm::topi::rocm::schedule_softmax</a></div><div class="ttdeci">Schedule schedule_softmax(const Target &amp;target, const Array&lt; Tensor &gt; &amp;outs)</div><div class="ttdoc">Create a rocm schedule for the given softmax output tensors. </div><div class="ttdef"><b>Definition:</b> softmax.h:48</div></div>
 <div class="ttc" id="classtvm_1_1te_1_1Tensor_html"><div class="ttname"><a href="classtvm_1_1te_1_1Tensor.html">tvm::te::Tensor</a></div><div class="ttdoc">Tensor structure representing a possible input, or intermediate computation result. </div><div class="ttdef"><b>Definition:</b> tensor.h:102</div></div>
 <div class="ttc" id="namespacetvm_1_1topi_html_a466452c7337b11c7237b8756cf7da621"><div class="ttname"><a href="namespacetvm_1_1topi.html#a466452c7337b11c7237b8756cf7da621">tvm::topi::exp</a></div><div class="ttdeci">Tensor exp(const Tensor &amp;x, std::string name=&quot;T_&quot; &quot;exp&quot;, std::string tag=kElementWise)</div><div class="ttdef"><b>Definition:</b> elemwise.h:49</div></div>
diff --git a/docs/reference/api/doxygen/database_8h_source.html b/docs/reference/api/doxygen/database_8h_source.html
index 8f814106ba..f4d6d71119 100644
--- a/docs/reference/api/doxygen/database_8h_source.html
+++ b/docs/reference/api/doxygen/database_8h_source.html
@@ -105,7 +105,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1PyDatabaseNode_html_a65fcb9b59b8ce6e685fb62c4459c57ba"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1PyDatabaseNode.html#a65fcb9b59b8ce6e685fb62c4459c57ba">tvm::meta_schedule::PyDatabaseNode::f_query_tuning_record</a></div><div class="ttdeci">FQueryTuningRecord f_query_tuning_record</div><div class="ttdoc">The packed function to the QueryTuningRecord function. </div><div class="ttdef"><b>Definition:</b> database.h:315</div></div>
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1PyDatabaseNode_html_a4a03c70569c9a18059861dfb5c90e845"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1PyDatabaseNode.html#a4a03c70569c9a18059861dfb5c90e845">tvm::meta_schedule::PyDatabaseNode::f_query_schedule</a></div><div class="ttdeci">FQuerySchedule f_query_schedule</div><div class="ttdoc">The packed function to the QuerySchedule function. </div><div class="ttdef"><b>Definition:</b> database.h:317</div></div>
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1WorkloadNode_html_ab533ae06bb310ffbd8acb954e253b7db"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1WorkloadNode.html#ab533ae06bb310ffbd8acb954e253b7db">tvm::meta_schedule::WorkloadNode::VisitAttrs</a></div><div class="ttdeci">void VisitAttrs(tvm::AttrVisitor *v)</div><div class="ttdef"><b>Definition:</b> database.h:47</div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1PyDatabaseNode_html_ad07d7d9e78771eaa2e6e65f84e032401"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1PyDatabaseNode.html#ad07d7d9e78771eaa2e6e65f84e032401">tvm::meta_schedule::PyDatabaseNode::GetAllTuningRecords</a></div><div class="ttdeci">Array&lt; TuningRecord &gt; GetAllTuningRecords() final</div><div class="ttdoc">Get all tuning records from the database. </div><div class="ttdef"><b>Definition:</b> database.h:359</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1ObjectRef_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></div><div class="ttdoc">Base class of all object reference. </div><div class="ttdef"><b>Definition:</b> object.h:511</div></div>
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1DatabaseNode_html_adb5dd2d61af2ac335d68b402c057d612"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1DatabaseNode.html#adb5dd2d61af2ac335d68b402c057d612">tvm::meta_schedule::DatabaseNode::QueryTuningRecord</a></div><div class="ttdeci">virtual Optional&lt; TuningRecord &gt; QueryTuningRecord(const IRModule &amp;mod, const Target &amp;target, const String &amp;workload_name)</div><div class="ttdoc">Query the best record of the g [...]
diff --git a/docs/reference/api/doxygen/extracted__task_8h_source.html b/docs/reference/api/doxygen/extracted__task_8h_source.html
index 0f4668bc6c..488751c5fa 100644
--- a/docs/reference/api/doxygen/extracted__task_8h_source.html
+++ b/docs/reference/api/doxygen/extracted__task_8h_source.html
@@ -82,7 +82,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1runtime_1_1Array_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1Array.html">tvm::runtime::Array</a></div><div class="ttdoc">Array, container representing a contiguous sequence of ObjectRefs. </div><div class="ttdef"><b>Definition:</b> array.h:289</div></div>
 <div class="ttc" id="classtvm_1_1tir_1_1PrimFunc_html"><div class="ttname"><a href="classtvm_1_1tir_1_1PrimFunc.html">tvm::tir::PrimFunc</a></div><div class="ttdoc">Managed reference to PrimFuncNode. </div><div class="ttdef"><b>Definition:</b> function.h:156</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1String_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1String.html">tvm::runtime::String</a></div><div class="ttdoc">Reference to string objects. </div><div class="ttdef"><b>Definition:</b> string.h:97</div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1ExtractedTaskNode_html_a50c40aa8beb57d0f31c36ef360042be6"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1ExtractedTaskNode.html#a50c40aa8beb57d0f31c36ef360042be6">tvm::meta_schedule::ExtractedTaskNode::mod</a></div><div class="ttdeci">IRModule mod</div><div class="ttdoc">The high-level IR. </div><div class="ttdef"><b>Definition:</b> extracted_task.h:47</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1ObjectRef_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></div><div class="ttdoc">Base class of all object reference. </div><div class="ttdef"><b>Definition:</b> object.h:511</div></div>
 <div class="ttc" id="classtvm_1_1meta__schedule_1_1ExtractedTaskNode_html_a89729717843a9ea91a4535bafee8b14f"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1ExtractedTaskNode.html#a89729717843a9ea91a4535bafee8b14f">tvm::meta_schedule::ExtractedTaskNode::dispatched</a></div><div class="ttdeci">Array&lt; IRModule &gt; dispatched</div><div class="ttdoc">A list of low-level IRs that the high-level IR could potentially dispatch to. </div><div class="ttdef"><b>Definition:</b> extrac [...]
diff --git a/docs/reference/api/doxygen/functions_d.html b/docs/reference/api/doxygen/functions_d.html
index e998b8e011..b1dadfda79 100644
--- a/docs/reference/api/doxygen/functions_d.html
+++ b/docs/reference/api/doxygen/functions_d.html
@@ -154,6 +154,9 @@ $(function() {
 : <a class="el" href="classtvm_1_1DiagnosticContext.html#ab0a08b05d11230b5108086cd5118f488">tvm::DiagnosticContext</a>
 , <a class="el" href="classtvm_1_1VirtualDevice.html#a73364da6471b4634fb14abf10ce42f3c">tvm::VirtualDevice</a>
 </li>
+<li>default_device_type
+: <a class="el" href="classtvm_1_1TargetKindNode.html#a0d66deaddc1ac8bfe3e39616df811b7e">tvm::TargetKindNode</a>
+</li>
 <li>default_keys
 : <a class="el" href="classtvm_1_1TargetKindNode.html#aa62e049ba158730d9ab88e4c0b173de9">tvm::TargetKindNode</a>
 </li>
@@ -253,7 +256,6 @@ $(function() {
 </li>
 <li>device_type
 : <a class="el" href="classtvm_1_1meta__schedule_1_1RunnerInputNode.html#a5879e387f788cfd90b5a62ef1e55011e">tvm::meta_schedule::RunnerInputNode</a>
-, <a class="el" href="classtvm_1_1TargetKindNode.html#a18459286d8d501892992a4209ad08652">tvm::TargetKindNode</a>
 , <a class="el" href="classtvm_1_1VirtualDeviceNode.html#a5e3f67045652bc27b937acf1ddc677a7">tvm::VirtualDeviceNode</a>
 </li>
 <li>DeviceCopy()
diff --git a/docs/reference/api/doxygen/functions_func_g.html b/docs/reference/api/doxygen/functions_func_g.html
index 48557ef724..c1cbccf435 100644
--- a/docs/reference/api/doxygen/functions_func_g.html
+++ b/docs/reference/api/doxygen/functions_func_g.html
@@ -366,11 +366,14 @@ $(function() {
 : <a class="el" href="classtvm_1_1TypeReporterNode.html#a06af835a761aaa10627a88ac4b712a15">tvm::TypeReporterNode</a>
 </li>
 <li>GetSRef()
-: <a class="el" href="classtvm_1_1tir_1_1ScheduleNode.html#a08f7ed1ef1470fb1c9cfc272e14a1e32">tvm::tir::ScheduleNode</a>
+: <a class="el" href="classtvm_1_1tir_1_1ScheduleNode.html#a2a52c8522a4bfc7d42a189250a462ce8">tvm::tir::ScheduleNode</a>
 </li>
 <li>GetTag()
 : <a class="el" href="structtvm_1_1runtime_1_1vm_1_1Instruction.html#a8b46d1eb3853555b6d3a85f2ef9c0868">tvm::runtime::vm::Instruction</a>
 </li>
+<li>GetTargetDeviceType()
+: <a class="el" href="classtvm_1_1TargetNode.html#a01c985da7b7451518db042094336a4b1">tvm::TargetNode</a>
+</li>
 <li>GetTargetProperty()
 : <a class="el" href="classtvm_1_1runtime_1_1DeviceAPI.html#a8967810939aa24e17c37599c5014e50f">tvm::runtime::DeviceAPI</a>
 </li>
@@ -407,7 +410,7 @@ $(function() {
 : <a class="el" href="classtvm_1_1GlobalVarSupply.html#a0eaceb757342679afa02708f290ff995">tvm::GlobalVarSupply</a>
 </li>
 <li>GlobalVarSupplyNode()
-: <a class="el" href="classtvm_1_1GlobalVarSupplyNode.html#adb8480ebe496dece012c527d292ac046">tvm::GlobalVarSupplyNode</a>
+: <a class="el" href="classtvm_1_1GlobalVarSupplyNode.html#afc86b0c452c4e95050b4e3fc3a19391f">tvm::GlobalVarSupplyNode</a>
 </li>
 <li>Goto()
 : <a class="el" href="structtvm_1_1runtime_1_1vm_1_1Instruction.html#a40b49fe5c05c5fe5f7a5c5c01bf651c8">tvm::runtime::vm::Instruction</a>
diff --git a/docs/reference/api/doxygen/functions_func_s.html b/docs/reference/api/doxygen/functions_func_s.html
index bdd9d1940d..7405f274f1 100644
--- a/docs/reference/api/doxygen/functions_func_s.html
+++ b/docs/reference/api/doxygen/functions_func_s.html
@@ -331,12 +331,12 @@ $(function() {
 , <a class="el" href="structtvm_1_1detail_1_1AttrTriggerNonDefaultEntry.html#ae88a65b8d90a7c55fc6ea6bb1863b425">tvm::detail::AttrTriggerNonDefaultEntry&lt; T &gt;</a>
 , <a class="el" href="classtvm_1_1GenericFunc.html#a97c34a40c5059bdda64494d61f50602d">tvm::GenericFunc</a>
 </li>
+<li>set_default_device_type()
+: <a class="el" href="classtvm_1_1TargetKindRegEntry.html#aa34789ae275e36dcd6696aa3881bbc92">tvm::TargetKindRegEntry</a>
+</li>
 <li>set_default_keys()
 : <a class="el" href="classtvm_1_1TargetKindRegEntry.html#a2995c32e12246e892f7f4cb621a2819c">tvm::TargetKindRegEntry</a>
 </li>
-<li>set_device_type()
-: <a class="el" href="classtvm_1_1TargetKindRegEntry.html#ae3ce5349493f402b82e755a0a180bd9a">tvm::TargetKindRegEntry</a>
-</li>
 <li>set_dispatch()
 : <a class="el" href="classtvm_1_1NodeFunctor_3_01R_07const_01ObjectRef_01_6n_00_01Args_8_8_8_08_4.html#a2fcc19e5151e9b9e56cafc76231b29fd">tvm::NodeFunctor&lt; R(const ObjectRef &amp;n, Args...)&gt;</a>
 , <a class="el" href="classtvm_1_1script_1_1printer_1_1TracedObjectFunctor.html#a39e23af093ba0ee9dab17de86b6fa58e">tvm::script::printer::TracedObjectFunctor&lt; R, Args &gt;</a>
@@ -740,7 +740,7 @@ $(function() {
 : <a class="el" href="classtvm_1_1tir_1_1ScheduleNode.html#a93d1d23f24d903db844f75f51fe09a36">tvm::tir::ScheduleNode</a>
 </li>
 <li>StorageAlignStep()
-: <a class="el" href="classtvm_1_1auto__scheduler_1_1StorageAlignStep.html#a99dbb8c55d9e7d78268b6d43fd348bc7">tvm::auto_scheduler::StorageAlignStep</a>
+: <a class="el" href="classtvm_1_1auto__scheduler_1_1StorageAlignStep.html#af50b7c2f020f8e0a80f5bcc8e559b394">tvm::auto_scheduler::StorageAlignStep</a>
 </li>
 <li>Store()
 : <a class="el" href="classtvm_1_1tir_1_1Store.html#a2c4278b8bcdae57ada2022ecc7c290c3">tvm::tir::Store</a>
@@ -755,7 +755,7 @@ $(function() {
 : <a class="el" href="classtvm_1_1runtime_1_1DeviceAPI.html#ac29b9295c432a87658392872c644864f">tvm::runtime::DeviceAPI</a>
 </li>
 <li>String()
-: <a class="el" href="classtvm_1_1runtime_1_1String.html#a68df7bab89fca339e3918438dd80300d">tvm::runtime::String</a>
+: <a class="el" href="classtvm_1_1runtime_1_1String.html#ac5d930b522e9fef9c07e51819d96d2f3">tvm::runtime::String</a>
 </li>
 <li>StringImm()
 : <a class="el" href="classtvm_1_1tir_1_1StringImm.html#a0f2830290e055f677c5d5dea98aab726">tvm::tir::StringImm</a>
diff --git a/docs/reference/api/doxygen/functions_func_t.html b/docs/reference/api/doxygen/functions_func_t.html
index 92c89285e3..0176c5e1cd 100644
--- a/docs/reference/api/doxygen/functions_func_t.html
+++ b/docs/reference/api/doxygen/functions_func_t.html
@@ -1184,7 +1184,7 @@ $(function() {
 : <a class="el" href="classtvm_1_1runtime_1_1TVMPODValue__.html#a2f46b59a6c1d5eb4575d7f583b5f1a0c">tvm::runtime::TVMPODValue_</a>
 </li>
 <li>TVMRetValue()
-: <a class="el" href="classtvm_1_1runtime_1_1TVMRetValue.html#a77455a8fe7d27b90a01a64f1cd28e9ec">tvm::runtime::TVMRetValue</a>
+: <a class="el" href="classtvm_1_1runtime_1_1TVMRetValue.html#ab86bf21f214fca72e73a7f6e20ffab8d">tvm::runtime::TVMRetValue</a>
 </li>
 <li>type()
 : <a class="el" href="classtvm_1_1runtime_1_1vm_1_1Allocator.html#a7cfb6d4ea480436801276fe2e7660eb2">tvm::runtime::vm::Allocator</a>
@@ -1213,7 +1213,7 @@ $(function() {
 : <a class="el" href="classtvm_1_1TypedEnvFunc_3_01R_07Args_8_8_8_08_4.html#a0d72a6fa7263821c14bcd37837998ed9">tvm::TypedEnvFunc&lt; R(Args...)&gt;</a>
 </li>
 <li>TypedPackedFunc()
-: <a class="el" href="classtvm_1_1runtime_1_1TypedPackedFunc_3_01R_07Args_8_8_8_08_4.html#afd8ee9dd9648c19b468bb4b0b00e8e4e">tvm::runtime::TypedPackedFunc&lt; R(Args...)&gt;</a>
+: <a class="el" href="classtvm_1_1runtime_1_1TypedPackedFunc_3_01R_07Args_8_8_8_08_4.html#aa3663a440db7a6951abd767109b9bf90">tvm::runtime::TypedPackedFunc&lt; R(Args...)&gt;</a>
 </li>
 <li>TypeIndex2Key()
 : <a class="el" href="classtvm_1_1runtime_1_1Object.html#a817ba6c23b7ee1821c48a75edf255a30">tvm::runtime::Object</a>
@@ -1236,7 +1236,7 @@ $(function() {
 : <a class="el" href="classtvm_1_1TypeRelation.html#ac26b1897eab8197ed26606ab81b7403b">tvm::TypeRelation</a>
 </li>
 <li>TypeReporter()
-: <a class="el" href="classtvm_1_1TypeReporter.html#aa3dc38a3c84d324d0b3a9f358460a091">tvm::TypeReporter</a>
+: <a class="el" href="classtvm_1_1TypeReporter.html#a8e7e05a07f9f7ad9bea91f27afac9051">tvm::TypeReporter</a>
 </li>
 <li>TypeVar()
 : <a class="el" href="classtvm_1_1TypeVar.html#adf5ef8e89d162735519b5d125c89e3e3">tvm::TypeVar</a>
diff --git a/docs/reference/api/doxygen/functions_func_u.html b/docs/reference/api/doxygen/functions_func_u.html
index 4b4e0f203d..611cae9ff1 100644
--- a/docs/reference/api/doxygen/functions_func_u.html
+++ b/docs/reference/api/doxygen/functions_func_u.html
@@ -106,7 +106,7 @@ $(function() {
 , <a class="el" href="classtvm_1_1auto__scheduler_1_1CostModelNode.html#ae35b2b678760b8da57a43d3ae9c24da5">tvm::auto_scheduler::CostModelNode</a>
 , <a class="el" href="classtvm_1_1auto__scheduler_1_1PythonBasedModelNode.html#a2d7849df6c7dbe93bf363c1d9f860a26">tvm::auto_scheduler::PythonBasedModelNode</a>
 , <a class="el" href="classtvm_1_1auto__scheduler_1_1RandomModelNode.html#a7febac6c05d8e2d407f466467769ee32">tvm::auto_scheduler::RandomModelNode</a>
-, <a class="el" href="classtvm_1_1IRModuleNode.html#a94a93385e64ce844299729af6a573015">tvm::IRModuleNode</a>
+, <a class="el" href="classtvm_1_1IRModuleNode.html#abdd8936c6fca33ef9b7c086f8fd58f84">tvm::IRModuleNode</a>
 , <a class="el" href="classtvm_1_1meta__schedule_1_1CostModelNode.html#a1bba32eba84db583fe90d1a5bce085f1">tvm::meta_schedule::CostModelNode</a>
 , <a class="el" href="classtvm_1_1meta__schedule_1_1PyCostModelNode.html#a970b00b0eb1bf6b88eea2711b58c4d1d">tvm::meta_schedule::PyCostModelNode</a>
 </li>
diff --git a/docs/reference/api/doxygen/functions_g.html b/docs/reference/api/doxygen/functions_g.html
index 690b4e8400..69e4bf37d5 100644
--- a/docs/reference/api/doxygen/functions_g.html
+++ b/docs/reference/api/doxygen/functions_g.html
@@ -386,6 +386,9 @@ $(function() {
 <li>GetTag()
 : <a class="el" href="structtvm_1_1runtime_1_1vm_1_1Instruction.html#a8b46d1eb3853555b6d3a85f2ef9c0868">tvm::runtime::vm::Instruction</a>
 </li>
+<li>GetTargetDeviceType()
+: <a class="el" href="classtvm_1_1TargetNode.html#a01c985da7b7451518db042094336a4b1">tvm::TargetNode</a>
+</li>
 <li>GetTargetProperty()
 : <a class="el" href="classtvm_1_1runtime_1_1DeviceAPI.html#a8967810939aa24e17c37599c5014e50f">tvm::runtime::DeviceAPI</a>
 </li>
@@ -400,7 +403,7 @@ $(function() {
 : <a class="el" href="classtvm_1_1runtime_1_1Object.html#a5693cbadcc1168b96db7b1cc5c200b86">tvm::runtime::Object</a>
 </li>
 <li>GetVarDoc()
-: <a class="el" href="classtvm_1_1script_1_1printer_1_1VarTableNode.html#a2fa65668c9589c8d061738f902506717">tvm::script::printer::VarTableNode</a>
+: <a class="el" href="classtvm_1_1script_1_1printer_1_1VarTableNode.html#afb76ecf5bd4103f38ec8e3c426f2c63b">tvm::script::printer::VarTableNode</a>
 </li>
 <li>GetVirtualDevices()
 : <a class="el" href="classtvm_1_1runtime_1_1vm_1_1Executable.html#a2f0abfbed7ce24b365470c70db023ad3">tvm::runtime::vm::Executable</a>
diff --git a/docs/reference/api/doxygen/functions_k.html b/docs/reference/api/doxygen/functions_k.html
index 81b83edcd7..c66923d1e8 100644
--- a/docs/reference/api/doxygen/functions_k.html
+++ b/docs/reference/api/doxygen/functions_k.html
@@ -149,13 +149,13 @@ $(function() {
 : <a class="el" href="classtvm_1_1GlobalTypeVarNode.html#a335e232894a68cc1e0ecb766bf4053c7">tvm::GlobalTypeVarNode</a>
 , <a class="el" href="classtvm_1_1IncompleteTypeNode.html#ab5f37175c1fd0dbbbedc2edaa23d33dc">tvm::IncompleteTypeNode</a>
 , <a class="el" href="classtvm_1_1runtime_1_1metadata_1_1MetadataArrayNode.html#a695a21a69be1e72b330abe32c685552e">tvm::runtime::metadata::MetadataArrayNode</a>
-, <a class="el" href="classtvm_1_1script_1_1printer_1_1OperationDocNode.html#a9b996402fa4a7202d749c56ac044e810">tvm::script::printer::OperationDocNode</a>
 </li>
 <li>Kind
 : <a class="el" href="classtvm_1_1script_1_1printer_1_1OperationDocNode.html#ab096bbea749ee994d75230cd8136afc2">tvm::script::printer::OperationDocNode</a>
 </li>
 <li>kind
-: <a class="el" href="classtvm_1_1TargetNode.html#ac19a4ee0f0ec7ea607ec746bc4551b71">tvm::TargetNode</a>
+: <a class="el" href="classtvm_1_1script_1_1printer_1_1OperationDocNode.html#a9b996402fa4a7202d749c56ac044e810">tvm::script::printer::OperationDocNode</a>
+, <a class="el" href="classtvm_1_1TargetNode.html#ac19a4ee0f0ec7ea607ec746bc4551b71">tvm::TargetNode</a>
 , <a class="el" href="classtvm_1_1tir_1_1DependencyNode.html#aeb900845b1ca3fb8787ab183af8389b7">tvm::tir::DependencyNode</a>
 , <a class="el" href="classtvm_1_1tir_1_1ForNode.html#a4fe09a4b1fb71a8ae8d5e7c807d8540b">tvm::tir::ForNode</a>
 , <a class="el" href="classtvm_1_1tir_1_1InstructionNode.html#a85c4921fdf1ebae5c95e5c4f09467355">tvm::tir::InstructionNode</a>
diff --git a/docs/reference/api/doxygen/functions_s.html b/docs/reference/api/doxygen/functions_s.html
index 6534f1d7d0..68c175d6aa 100644
--- a/docs/reference/api/doxygen/functions_s.html
+++ b/docs/reference/api/doxygen/functions_s.html
@@ -419,12 +419,12 @@ $(function() {
 , <a class="el" href="structtvm_1_1detail_1_1AttrTriggerNonDefaultEntry.html#ae88a65b8d90a7c55fc6ea6bb1863b425">tvm::detail::AttrTriggerNonDefaultEntry&lt; T &gt;</a>
 , <a class="el" href="classtvm_1_1GenericFunc.html#a97c34a40c5059bdda64494d61f50602d">tvm::GenericFunc</a>
 </li>
+<li>set_default_device_type()
+: <a class="el" href="classtvm_1_1TargetKindRegEntry.html#aa34789ae275e36dcd6696aa3881bbc92">tvm::TargetKindRegEntry</a>
+</li>
 <li>set_default_keys()
 : <a class="el" href="classtvm_1_1TargetKindRegEntry.html#a2995c32e12246e892f7f4cb621a2819c">tvm::TargetKindRegEntry</a>
 </li>
-<li>set_device_type()
-: <a class="el" href="classtvm_1_1TargetKindRegEntry.html#ae3ce5349493f402b82e755a0a180bd9a">tvm::TargetKindRegEntry</a>
-</li>
 <li>set_dispatch()
 : <a class="el" href="classtvm_1_1NodeFunctor_3_01R_07const_01ObjectRef_01_6n_00_01Args_8_8_8_08_4.html#a2fcc19e5151e9b9e56cafc76231b29fd">tvm::NodeFunctor&lt; R(const ObjectRef &amp;n, Args...)&gt;</a>
 , <a class="el" href="classtvm_1_1script_1_1printer_1_1TracedObjectFunctor.html#a39e23af093ba0ee9dab17de86b6fa58e">tvm::script::printer::TracedObjectFunctor&lt; R, Args &gt;</a>
@@ -788,7 +788,7 @@ $(function() {
 : <a class="el" href="structtvm_1_1relay_1_1MultiBoxPriorAttrs.html#ad6d089344fa741021584222ffa70a451">tvm::relay::MultiBoxPriorAttrs</a>
 </li>
 <li>SizeVar()
-: <a class="el" href="classtvm_1_1tir_1_1SizeVar.html#a0f8cb8a92feb96343939d223db90f7cd">tvm::tir::SizeVar</a>
+: <a class="el" href="classtvm_1_1tir_1_1SizeVar.html#ac470249315d9e395ad581d35dd5dcb05">tvm::tir::SizeVar</a>
 </li>
 <li>Slice()
 : <a class="el" href="classtvm_1_1te_1_1Tensor_1_1Slice.html#ab314819e8bcca6421e9a4f33e48578c3">tvm::te::Tensor::Slice</a>
@@ -1078,7 +1078,7 @@ $(function() {
 : <a class="el" href="classtvm_1_1te_1_1StageNode.html#a8f4ba7f2931b3541c12734af511600a7">tvm::te::StageNode</a>
 </li>
 <li>Str()
-: <a class="el" href="classtvm_1_1script_1_1printer_1_1LiteralDoc.html#a789d7d73bd4d94612fa2a84c16b26b89">tvm::script::printer::LiteralDoc</a>
+: <a class="el" href="classtvm_1_1script_1_1printer_1_1LiteralDoc.html#a3511ed66e343f6db16cbd72feda03d5c">tvm::script::printer::LiteralDoc</a>
 </li>
 <li>str()
 : <a class="el" href="classtvm_1_1TargetNode.html#a30cd67db46a9c4b098a8ba38fff22e26">tvm::TargetNode</a>
diff --git a/docs/reference/api/doxygen/functions_t.html b/docs/reference/api/doxygen/functions_t.html
index 6802685eae..835a648684 100644
--- a/docs/reference/api/doxygen/functions_t.html
+++ b/docs/reference/api/doxygen/functions_t.html
@@ -81,7 +81,7 @@ $(function() {
 , <a class="el" href="structtvm_1_1runtime_1_1vm_1_1Instruction.html#a46879dbe84105fb621a6167f8d73b223">tvm::runtime::vm::Instruction</a>
 </li>
 <li>Target()
-: <a class="el" href="classtvm_1_1Target.html#a58a5a1e042e265fe5a6973045226fe1a">tvm::Target</a>
+: <a class="el" href="classtvm_1_1Target.html#a77f3d7cc97d8cfd7172af58b4e784d89">tvm::Target</a>
 </li>
 <li>target
 : <a class="el" href="classtvm_1_1VirtualDeviceNode.html#a8b2d427d9e21886ccaeaae5e9cc55aaf">tvm::VirtualDeviceNode</a>
@@ -1393,7 +1393,7 @@ $(function() {
 , <a class="el" href="classtvm_1_1runtime_1_1ObjectPtr.html#ae0ea8b4adc6dab8c74086bceaef6b3e1">tvm::runtime::ObjectPtr&lt; T &gt;</a>
 , <a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#ae0ea8b4adc6dab8c74086bceaef6b3e1">tvm::runtime::ObjectRef</a>
 , <a class="el" href="classtvm_1_1runtime_1_1TVMPODValue__.html#ae0ea8b4adc6dab8c74086bceaef6b3e1">tvm::runtime::TVMPODValue_</a>
-, <a class="el" href="classtvm_1_1runtime_1_1TVMRetValue.html#ac4a3850c0989e7c2d5cd8e0f096d0997">tvm::runtime::TVMRetValue</a>
+, <a class="el" href="classtvm_1_1runtime_1_1TVMRetValue.html#a77455a8fe7d27b90a01a64f1cd28e9ec">tvm::runtime::TVMRetValue</a>
 , <a class="el" href="classtvm_1_1runtime_1_1TypedPackedFunc_3_01R_07Args_8_8_8_08_4.html#ae0ea8b4adc6dab8c74086bceaef6b3e1">tvm::runtime::TypedPackedFunc&lt; R(Args...)&gt;</a>
 </li>
 <li>type
@@ -1474,7 +1474,7 @@ $(function() {
 : <a class="el" href="classtvm_1_1TypedEnvFunc_3_01R_07Args_8_8_8_08_4.html#a0d72a6fa7263821c14bcd37837998ed9">tvm::TypedEnvFunc&lt; R(Args...)&gt;</a>
 </li>
 <li>TypedPackedFunc()
-: <a class="el" href="classtvm_1_1runtime_1_1TypedPackedFunc_3_01R_07Args_8_8_8_08_4.html#a6b346a6d0b601eff5a100c7a207e9c86">tvm::runtime::TypedPackedFunc&lt; R(Args...)&gt;</a>
+: <a class="el" href="classtvm_1_1runtime_1_1TypedPackedFunc_3_01R_07Args_8_8_8_08_4.html#afd8ee9dd9648c19b468bb4b0b00e8e4e">tvm::runtime::TypedPackedFunc&lt; R(Args...)&gt;</a>
 </li>
 <li>TypeIndex2Key()
 : <a class="el" href="classtvm_1_1runtime_1_1Object.html#a817ba6c23b7ee1821c48a75edf255a30">tvm::runtime::Object</a>
@@ -1497,7 +1497,7 @@ $(function() {
 : <a class="el" href="classtvm_1_1TypeRelation.html#ac26b1897eab8197ed26606ab81b7403b">tvm::TypeRelation</a>
 </li>
 <li>TypeReporter()
-: <a class="el" href="classtvm_1_1TypeReporter.html#a8e7e05a07f9f7ad9bea91f27afac9051">tvm::TypeReporter</a>
+: <a class="el" href="classtvm_1_1TypeReporter.html#aa3dc38a3c84d324d0b3a9f358460a091">tvm::TypeReporter</a>
 </li>
 <li>types
 : <a class="el" href="classtvm_1_1TupleAffineTypeNode.html#a30c834b7e1cb64467e6587ac16ebb187">tvm::TupleAffineTypeNode</a>
diff --git a/docs/reference/api/doxygen/functions_vars_d.html b/docs/reference/api/doxygen/functions_vars_d.html
index 2e5331fbfe..be87239e62 100644
--- a/docs/reference/api/doxygen/functions_vars_d.html
+++ b/docs/reference/api/doxygen/functions_vars_d.html
@@ -123,6 +123,9 @@ $(function() {
 : <a class="el" href="classtvm_1_1script_1_1printer_1_1ClassDocNode.html#a253cf698eba7d39b7345553e646bc8b9">tvm::script::printer::ClassDocNode</a>
 , <a class="el" href="classtvm_1_1script_1_1printer_1_1FunctionDocNode.html#a5bfd7179298fe5bcbc9527af2b3b98e0">tvm::script::printer::FunctionDocNode</a>
 </li>
+<li>default_device_type
+: <a class="el" href="classtvm_1_1TargetKindNode.html#a0d66deaddc1ac8bfe3e39616df811b7e">tvm::TargetKindNode</a>
+</li>
 <li>default_keys
 : <a class="el" href="classtvm_1_1TargetKindNode.html#aa62e049ba158730d9ab88e4c0b173de9">tvm::TargetKindNode</a>
 </li>
@@ -170,7 +173,6 @@ $(function() {
 </li>
 <li>device_type
 : <a class="el" href="classtvm_1_1meta__schedule_1_1RunnerInputNode.html#a5879e387f788cfd90b5a62ef1e55011e">tvm::meta_schedule::RunnerInputNode</a>
-, <a class="el" href="classtvm_1_1TargetKindNode.html#a18459286d8d501892992a4209ad08652">tvm::TargetKindNode</a>
 </li>
 <li>devices_
 : <a class="el" href="classtvm_1_1runtime_1_1vm_1_1VirtualMachine.html#a602daa8d70ae598a833d8601d1ef6d95">tvm::runtime::vm::VirtualMachine</a>
diff --git a/docs/reference/api/doxygen/generic_2default_8h_source.html b/docs/reference/api/doxygen/generic_2default_8h_source.html
index d00b869c52..512d6f7680 100644
--- a/docs/reference/api/doxygen/generic_2default_8h_source.html
+++ b/docs/reference/api/doxygen/generic_2default_8h_source.html
@@ -77,7 +77,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1runtime_1_1Array_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1Array.html">tvm::runtime::Array</a></div><div class="ttdoc">Array, container representing a contiguous sequence of ObjectRefs. </div><div class="ttdef"><b>Definition:</b> array.h:289</div></div>
 <div class="ttc" id="classtvm_1_1te_1_1ComputeOpNode_html"><div class="ttname"><a href="classtvm_1_1te_1_1ComputeOpNode.html">tvm::te::ComputeOpNode</a></div><div class="ttdoc">A Compute op that compute a tensor on certain domain. </div><div class="ttdef"><b>Definition:</b> operation.h:226</div></div>
 <div class="ttc" id="fuse_8h_html"><div class="ttname"><a href="fuse_8h.html">fuse.h</a></div><div class="ttdoc">Fuse operation. </div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="operation_8h_html"><div class="ttname"><a href="operation_8h.html">operation.h</a></div><div class="ttdoc">Operation node can generate one or multiple Tensors. </div></div>
 <div class="ttc" id="classtvm_1_1te_1_1Fuse_html"><div class="ttname"><a href="classtvm_1_1te_1_1Fuse.html">tvm::te::Fuse</a></div><div class="ttdoc">Managed reference to FuseNode. </div><div class="ttdef"><b>Definition:</b> schedule.h:774</div></div>
 <div class="ttc" id="tags_8h_html"><div class="ttname"><a href="tags_8h.html">tags.h</a></div><div class="ttdoc">External function interface to rocBLAS libraries. </div></div>
diff --git a/docs/reference/api/doxygen/generic_2extern_8h_source.html b/docs/reference/api/doxygen/generic_2extern_8h_source.html
index ecbbf7fc4b..b9538ddff0 100644
--- a/docs/reference/api/doxygen/generic_2extern_8h_source.html
+++ b/docs/reference/api/doxygen/generic_2extern_8h_source.html
@@ -75,7 +75,7 @@ $(function() {
 <div class="ttc" id="schedule__pass_8h_html"><div class="ttname"><a href="schedule__pass_8h.html">schedule_pass.h</a></div><div class="ttdoc">Collection of Schedule pass functions. </div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1Array_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1Array.html">tvm::runtime::Array</a></div><div class="ttdoc">Array, container representing a contiguous sequence of ObjectRefs. </div><div class="ttdef"><b>Definition:</b> array.h:289</div></div>
 <div class="ttc" id="fuse_8h_html"><div class="ttname"><a href="fuse_8h.html">fuse.h</a></div><div class="ttdoc">Fuse operation. </div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="operation_8h_html"><div class="ttname"><a href="operation_8h.html">operation.h</a></div><div class="ttdoc">Operation node can generate one or multiple Tensors. </div></div>
 <div class="ttc" id="generic_2injective_8h_html"><div class="ttname"><a href="generic_2injective_8h.html">injective.h</a></div><div class="ttdoc">Generic schedule for injective operations. </div></div>
 <div class="ttc" id="tags_8h_html"><div class="ttname"><a href="tags_8h.html">tags.h</a></div><div class="ttdoc">External function interface to rocBLAS libraries. </div></div>
diff --git a/docs/reference/api/doxygen/generic_2injective_8h_source.html b/docs/reference/api/doxygen/generic_2injective_8h_source.html
index bd4bb3776c..3c279d5cd5 100644
--- a/docs/reference/api/doxygen/generic_2injective_8h_source.html
+++ b/docs/reference/api/doxygen/generic_2injective_8h_source.html
@@ -77,7 +77,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1te_1_1ComputeOpNode_html"><div class="ttname"><a href="classtvm_1_1te_1_1ComputeOpNode.html">tvm::te::ComputeOpNode</a></div><div class="ttdoc">A Compute op that compute a tensor on certain domain. </div><div class="ttdef"><b>Definition:</b> operation.h:226</div></div>
 <div class="ttc" id="classtvm_1_1te_1_1BaseComputeOpNode_html_a21617a643897727c51ded2b7260df4c3"><div class="ttname"><a href="classtvm_1_1te_1_1BaseComputeOpNode.html#a21617a643897727c51ded2b7260df4c3">tvm::te::BaseComputeOpNode::axis</a></div><div class="ttdeci">Array&lt; IterVar &gt; axis</div><div class="ttdoc">IterVar on each axis. </div><div class="ttdef"><b>Definition:</b> operation.h:207</div></div>
 <div class="ttc" id="fuse_8h_html"><div class="ttname"><a href="fuse_8h.html">fuse.h</a></div><div class="ttdoc">Fuse operation. </div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="classtvm_1_1te_1_1Tensor_html"><div class="ttname"><a href="classtvm_1_1te_1_1Tensor.html">tvm::te::Tensor</a></div><div class="ttdoc">Tensor structure representing a possible input, or intermediate computation result. </div><div class="ttdef"><b>Definition:</b> tensor.h:102</div></div>
 <div class="ttc" id="operation_8h_html"><div class="ttname"><a href="operation_8h.html">operation.h</a></div><div class="ttdoc">Operation node can generate one or multiple Tensors. </div></div>
 <div class="ttc" id="classtvm_1_1te_1_1Fuse_html"><div class="ttname"><a href="classtvm_1_1te_1_1Fuse.html">tvm::te::Fuse</a></div><div class="ttdoc">Managed reference to FuseNode. </div><div class="ttdef"><b>Definition:</b> schedule.h:774</div></div>
diff --git a/docs/reference/api/doxygen/interpreter_8h_source.html b/docs/reference/api/doxygen/interpreter_8h_source.html
index 681c8fa2cf..55e089f4d8 100644
--- a/docs/reference/api/doxygen/interpreter_8h_source.html
+++ b/docs/reference/api/doxygen/interpreter_8h_source.html
@@ -99,7 +99,7 @@ $(function() {
 <div class="ttc" id="object_8h_html_ac6e7295a4999e2c8e4a2c990beca887a"><div class="ttname"><a href="object_8h.html#ac6e7295a4999e2c8e4a2c990beca887a">TVM_DEFINE_OBJECT_REF_METHODS</a></div><div class="ttdeci">#define TVM_DEFINE_OBJECT_REF_METHODS(TypeName, ParentType, ObjectName)</div><div class="ttdef"><b>Definition:</b> object.h:713</div></div>
 <div class="ttc" id="structtvm_1_1relay_1_1RefValueObj_html_a33d9d47dac60dde31a80e3d6c433fec8"><div class="ttname"><a href="structtvm_1_1relay_1_1RefValueObj.html#a33d9d47dac60dde31a80e3d6c433fec8">tvm::relay::RefValueObj::VisitAttrs</a></div><div class="ttdeci">void VisitAttrs(tvm::AttrVisitor *v)</div><div class="ttdef"><b>Definition:</b> interpreter.h:110</div></div>
 <div class="ttc" id="namespacetvm_html_a7c2095aed90b2129ba631b90103313a2"><div class="ttname"><a href="namespacetvm.html#a7c2095aed90b2129ba631b90103313a2">tvm::Device</a></div><div class="ttdeci">DLDevice Device</div><div class="ttdef"><b>Definition:</b> ndarray.h:43</div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="structtvm_1_1relay_1_1ConstructorValueObj_html_aa062bc1cf9c1ccdb3256b98b50899fe3"><div class="ttname"><a href="structtvm_1_1relay_1_1ConstructorValueObj.html#aa062bc1cf9c1ccdb3256b98b50899fe3">tvm::relay::ConstructorValueObj::VisitAttrs</a></div><div class="ttdeci">void VisitAttrs(tvm::AttrVisitor *v)</div><div class="ttdef"><b>Definition:</b> interpreter.h:130</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1ObjectRef_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></div><div class="ttdoc">Base class of all object reference. </div><div class="ttdef"><b>Definition:</b> object.h:511</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1ClosureObj_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1ClosureObj.html">tvm::runtime::ClosureObj</a></div><div class="ttdoc">An object representing a closure. This object is used by both the Relay VM and interpreter. </div><div class="ttdef"><b>Definition:</b> closure.h:36</div></div>
diff --git a/docs/reference/api/doxygen/op__strategy_8h_source.html b/docs/reference/api/doxygen/op__strategy_8h_source.html
index d97e4faafc..6a657dc120 100644
--- a/docs/reference/api/doxygen/op__strategy_8h_source.html
+++ b/docs/reference/api/doxygen/op__strategy_8h_source.html
@@ -92,7 +92,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1runtime_1_1String_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1String.html">tvm::runtime::String</a></div><div class="ttdoc">Reference to string objects. </div><div class="ttdef"><b>Definition:</b> string.h:97</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1TypedPackedFunc_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1TypedPackedFunc.html">tvm::runtime::TypedPackedFunc&lt; Array&lt; te::Tensor &gt;(const Attrs &amp;attrs, const Array&lt; te::Tensor &gt; &amp;inputs, const Type &amp;out_type)&gt;</a></div></div>
 <div class="ttc" id="object_8h_html_ac6e7295a4999e2c8e4a2c990beca887a"><div class="ttname"><a href="object_8h.html#ac6e7295a4999e2c8e4a2c990beca887a">TVM_DEFINE_OBJECT_REF_METHODS</a></div><div class="ttdeci">#define TVM_DEFINE_OBJECT_REF_METHODS(TypeName, ParentType, ObjectName)</div><div class="ttdef"><b>Definition:</b> object.h:713</div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1ObjectRef_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></div><div class="ttdoc">Base class of all object reference. </div><div class="ttdef"><b>Definition:</b> object.h:511</div></div>
 <div class="ttc" id="classtvm_1_1relay_1_1OpImplementationNode_html_a53fd916957cb15e070d736d12d8ced62"><div class="ttname"><a href="classtvm_1_1relay_1_1OpImplementationNode.html#a53fd916957cb15e070d736d12d8ced62">tvm::relay::OpImplementationNode::fschedule</a></div><div class="ttdeci">FTVMSchedule fschedule</div><div class="ttdoc">Schedule function. </div><div class="ttdef"><b>Definition:</b> op_strategy.h:47</div></div>
 <div class="ttc" id="target_8h_html"><div class="ttname"><a href="target_8h.html">target.h</a></div><div class="ttdoc">Compilation target object. </div></div>
diff --git a/docs/reference/api/doxygen/relay_2op__attr__types_8h_source.html b/docs/reference/api/doxygen/relay_2op__attr__types_8h_source.html
index 64740e6cb2..a41c757ec2 100644
--- a/docs/reference/api/doxygen/relay_2op__attr__types_8h_source.html
+++ b/docs/reference/api/doxygen/relay_2op__attr__types_8h_source.html
@@ -85,7 +85,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1runtime_1_1TypedPackedFunc_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1TypedPackedFunc.html">tvm::runtime::TypedPackedFunc&lt; Array&lt; te::Tensor &gt;(const Attrs &amp;attrs, const Array&lt; te::Tensor &gt; &amp;inputs, const Type &amp;out_type)&gt;</a></div></div>
 <div class="ttc" id="classtvm_1_1RelayExpr_html"><div class="ttname"><a href="classtvm_1_1RelayExpr.html">tvm::RelayExpr</a></div><div class="ttdoc">Managed reference to RelayExprNode. </div><div class="ttdef"><b>Definition:</b> expr.h:431</div></div>
 <div class="ttc" id="namespacetvm_1_1relay_html_afb8a8d4dd43830d4ce7d566abcd1c450"><div class="ttname"><a href="namespacetvm_1_1relay.html#afb8a8d4dd43830d4ce7d566abcd1c450">tvm::relay::TOpIsStateful</a></div><div class="ttdeci">bool TOpIsStateful</div><div class="ttdoc">Whether operator is stateful or contain internal state. </div><div class="ttdef"><b>Definition:</b> op_attr_types.h:78</div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1ObjectRef_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></div><div class="ttdoc">Base class of all object reference. </div><div class="ttdef"><b>Definition:</b> object.h:511</div></div>
 <div class="ttc" id="namespacetvm_1_1relay_html_a5b84e3790f89bb3fad5c7911eeb99531"><div class="ttname"><a href="namespacetvm_1_1relay.html#a5b84e3790f89bb3fad5c7911eeb99531">tvm::relay::Expr</a></div><div class="ttdeci">tvm::RelayExpr Expr</div><div class="ttdef"><b>Definition:</b> expr.h:54</div></div>
 <div class="ttc" id="namespacetvm_1_1relay_html_a5dab2ddae20ac7564a81ab3a0a9aba76"><div class="ttname"><a href="namespacetvm_1_1relay.html#a5dab2ddae20ac7564a81ab3a0a9aba76">tvm::relay::TOpPattern</a></div><div class="ttdeci">int TOpPattern</div><div class="ttdoc">the operator pattern </div><div class="ttdef"><b>Definition:</b> op_attr_types.h:68</div></div>
diff --git a/docs/reference/api/doxygen/rocm_2dense_8h_source.html b/docs/reference/api/doxygen/rocm_2dense_8h_source.html
index f499b11602..d75318c8af 100644
--- a/docs/reference/api/doxygen/rocm_2dense_8h_source.html
+++ b/docs/reference/api/doxygen/rocm_2dense_8h_source.html
@@ -80,7 +80,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1runtime_1_1DataType_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1DataType.html">tvm::runtime::DataType</a></div><div class="ttdoc">Runtime primitive data type. </div><div class="ttdef"><b>Definition:</b> data_type.h:41</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1Array_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1Array.html">tvm::runtime::Array</a></div><div class="ttdoc">Array, container representing a contiguous sequence of ObjectRefs. </div><div class="ttdef"><b>Definition:</b> array.h:289</div></div>
 <div class="ttc" id="namespacetvm_1_1topi_1_1cuda_html_a67def722e608bf15e836cec8181f75ff"><div class="ttname"><a href="namespacetvm_1_1topi_1_1cuda.html#a67def722e608bf15e836cec8181f75ff">tvm::topi::cuda::schedule_dense</a></div><div class="ttdeci">Schedule schedule_dense(const Target &amp;target, const Array&lt; Tensor &gt; &amp;outs)</div><div class="ttdoc">Create a CUDA schedule for dense. </div><div class="ttdef"><b>Definition:</b> dense.h:88</div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="nn_2dense_8h_html"><div class="ttname"><a href="nn_2dense_8h.html">dense.h</a></div><div class="ttdoc">Dense op constructions. </div></div>
 <div class="ttc" id="classtvm_1_1te_1_1Tensor_html"><div class="ttname"><a href="classtvm_1_1te_1_1Tensor.html">tvm::te::Tensor</a></div><div class="ttdoc">Tensor structure representing a possible input, or intermediate computation result. </div><div class="ttdef"><b>Definition:</b> tensor.h:102</div></div>
 <div class="ttc" id="operation_8h_html"><div class="ttname"><a href="operation_8h.html">operation.h</a></div><div class="ttdoc">Operation node can generate one or multiple Tensors. </div></div>
diff --git a/docs/reference/api/doxygen/rocm_2injective_8h_source.html b/docs/reference/api/doxygen/rocm_2injective_8h_source.html
index 684511de7f..314b60fceb 100644
--- a/docs/reference/api/doxygen/rocm_2injective_8h_source.html
+++ b/docs/reference/api/doxygen/rocm_2injective_8h_source.html
@@ -73,7 +73,7 @@ $(function() {
 <div class="ttc" id="namespacetvm_1_1topi_1_1x86_html_afde6f5b6bb1825d127238b9a55a29337"><div class="ttname"><a href="namespacetvm_1_1topi_1_1x86.html#afde6f5b6bb1825d127238b9a55a29337">tvm::topi::x86::schedule_injective_from_existing</a></div><div class="ttdeci">Schedule schedule_injective_from_existing(Schedule sch, const Tensor &amp;out)</div><div class="ttdoc">Updates an existing schedule for the given injective ops. </div><div class="ttdef"><b>Definition:</b> injective.h:47</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1Array_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1Array.html">tvm::runtime::Array</a></div><div class="ttdoc">Array, container representing a contiguous sequence of ObjectRefs. </div><div class="ttdef"><b>Definition:</b> array.h:289</div></div>
 <div class="ttc" id="fuse_8h_html"><div class="ttname"><a href="fuse_8h.html">fuse.h</a></div><div class="ttdoc">Fuse operation. </div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="classtvm_1_1te_1_1Tensor_html"><div class="ttname"><a href="classtvm_1_1te_1_1Tensor.html">tvm::te::Tensor</a></div><div class="ttdoc">Tensor structure representing a possible input, or intermediate computation result. </div><div class="ttdef"><b>Definition:</b> tensor.h:102</div></div>
 <div class="ttc" id="operation_8h_html"><div class="ttname"><a href="operation_8h.html">operation.h</a></div><div class="ttdoc">Operation node can generate one or multiple Tensors. </div></div>
 <div class="ttc" id="namespacetvm_1_1topi_1_1cuda_html_a9a137fa03c26e87448d89697f344c9ce"><div class="ttname"><a href="namespacetvm_1_1topi_1_1cuda.html#a9a137fa03c26e87448d89697f344c9ce">tvm::topi::cuda::schedule_injective</a></div><div class="ttdeci">Schedule schedule_injective(const Target &amp;target, const Array&lt; Tensor &gt; &amp;outs)</div><div class="ttdoc">Create a CUDA schedule for the given output tensors. </div><div class="ttdef"><b>Definition:</b> injective.h:67</div></div>
diff --git a/docs/reference/api/doxygen/rocm_2pooling_8h_source.html b/docs/reference/api/doxygen/rocm_2pooling_8h_source.html
index 5464733900..e81dbcb1d1 100644
--- a/docs/reference/api/doxygen/rocm_2pooling_8h_source.html
+++ b/docs/reference/api/doxygen/rocm_2pooling_8h_source.html
@@ -75,7 +75,7 @@ $(function() {
 <div class="ttc" id="namespacetvm_1_1topi_1_1rocm_html_a45aee34b0000f98aafd958ffe9baebc0"><div class="ttname"><a href="namespacetvm_1_1topi_1_1rocm.html#a45aee34b0000f98aafd958ffe9baebc0">tvm::topi::rocm::schedule_global_pool</a></div><div class="ttdeci">Schedule schedule_global_pool(const Target &amp;target, const Array&lt; Tensor &gt; &amp;outs)</div><div class="ttdoc">Create a rocm schedule for global_pool. </div><div class="ttdef"><b>Definition:</b> pooling.h:61</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1Array_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1Array.html">tvm::runtime::Array</a></div><div class="ttdoc">Array, container representing a contiguous sequence of ObjectRefs. </div><div class="ttdef"><b>Definition:</b> array.h:289</div></div>
 <div class="ttc" id="fuse_8h_html"><div class="ttname"><a href="fuse_8h.html">fuse.h</a></div><div class="ttdoc">Fuse operation. </div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="cuda_2pooling_8h_html"><div class="ttname"><a href="cuda_2pooling_8h.html">pooling.h</a></div><div class="ttdoc">CUDA schedule for pooling operations. </div></div>
 <div class="ttc" id="operation_8h_html"><div class="ttname"><a href="operation_8h.html">operation.h</a></div><div class="ttdoc">Operation node can generate one or multiple Tensors. </div></div>
 <div class="ttc" id="namespacetvm_1_1topi_1_1cuda_html_ad29a3518671a48fab5b0eb18de35e787"><div class="ttname"><a href="namespacetvm_1_1topi_1_1cuda.html#ad29a3518671a48fab5b0eb18de35e787">tvm::topi::cuda::schedule_global_pool</a></div><div class="ttdeci">Schedule schedule_global_pool(const Target &amp;target, const Array&lt; Tensor &gt; &amp;outs)</div><div class="ttdoc">Create a CUDA schedule for global_pool. </div><div class="ttdef"><b>Definition:</b> pooling.h:116</div></div>
diff --git a/docs/reference/api/doxygen/rocm_2reduction_8h_source.html b/docs/reference/api/doxygen/rocm_2reduction_8h_source.html
index ca41e03967..9cfb4fe3ec 100644
--- a/docs/reference/api/doxygen/rocm_2reduction_8h_source.html
+++ b/docs/reference/api/doxygen/rocm_2reduction_8h_source.html
@@ -73,7 +73,7 @@ $(function() {
 <div class="ttc" id="namespacetvm_1_1topi_1_1cuda_html_a674cabb64c0a45fd58c595389beb4919"><div class="ttname"><a href="namespacetvm_1_1topi_1_1cuda.html#a674cabb64c0a45fd58c595389beb4919">tvm::topi::cuda::schedule_reduce</a></div><div class="ttdeci">Schedule schedule_reduce(const Target &amp;target, Array&lt; Tensor &gt; outs)</div><div class="ttdoc">Create a CUDA schedule for a reduce operation. </div><div class="ttdef"><b>Definition:</b> reduction.h:185</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1Array_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1Array.html">tvm::runtime::Array</a></div><div class="ttdoc">Array, container representing a contiguous sequence of ObjectRefs. </div><div class="ttdef"><b>Definition:</b> array.h:289</div></div>
 <div class="ttc" id="fuse_8h_html"><div class="ttname"><a href="fuse_8h.html">fuse.h</a></div><div class="ttdoc">Fuse operation. </div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="operation_8h_html"><div class="ttname"><a href="operation_8h.html">operation.h</a></div><div class="ttdoc">Operation node can generate one or multiple Tensors. </div></div>
 <div class="ttc" id="tags_8h_html"><div class="ttname"><a href="tags_8h.html">tags.h</a></div><div class="ttdoc">External function interface to rocBLAS libraries. </div></div>
 <div class="ttc" id="namespacetvm_1_1topi_1_1rocm_html_aa4e0bacdd895904427bfc300ca9ace32"><div class="ttname"><a href="namespacetvm_1_1topi_1_1rocm.html#aa4e0bacdd895904427bfc300ca9ace32">tvm::topi::rocm::schedule_reduce</a></div><div class="ttdeci">Schedule schedule_reduce(const Target &amp;target, Array&lt; Tensor &gt; outs)</div><div class="ttdoc">Create a rocm schedule for a reduce operation. </div><div class="ttdef"><b>Definition:</b> reduction.h:47</div></div>
diff --git a/docs/reference/api/doxygen/rocm_2softmax_8h_source.html b/docs/reference/api/doxygen/rocm_2softmax_8h_source.html
index 087ad84682..37ccd6dc3d 100644
--- a/docs/reference/api/doxygen/rocm_2softmax_8h_source.html
+++ b/docs/reference/api/doxygen/rocm_2softmax_8h_source.html
@@ -73,7 +73,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1runtime_1_1Array_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1Array.html">tvm::runtime::Array</a></div><div class="ttdoc">Array, container representing a contiguous sequence of ObjectRefs. </div><div class="ttdef"><b>Definition:</b> array.h:289</div></div>
 <div class="ttc" id="cuda_2softmax_8h_html"><div class="ttname"><a href="cuda_2softmax_8h.html">softmax.h</a></div></div>
 <div class="ttc" id="fuse_8h_html"><div class="ttname"><a href="fuse_8h.html">fuse.h</a></div><div class="ttdoc">Fuse operation. </div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="namespacetvm_1_1topi_1_1rocm_html_ab71ce2b3685f0ce5f30d2d661c5e799b"><div class="ttname"><a href="namespacetvm_1_1topi_1_1rocm.html#ab71ce2b3685f0ce5f30d2d661c5e799b">tvm::topi::rocm::schedule_softmax</a></div><div class="ttdeci">Schedule schedule_softmax(const Target &amp;target, const Array&lt; Tensor &gt; &amp;outs)</div><div class="ttdoc">Create a rocm schedule for the given softmax output tensors. </div><div class="ttdef"><b>Definition:</b> softmax.h:48</div></div>
 <div class="ttc" id="operation_8h_html"><div class="ttname"><a href="operation_8h.html">operation.h</a></div><div class="ttdoc">Operation node can generate one or multiple Tensors. </div></div>
 <div class="ttc" id="tags_8h_html"><div class="ttname"><a href="tags_8h.html">tags.h</a></div><div class="ttdoc">External function interface to rocBLAS libraries. </div></div>
diff --git a/docs/reference/api/doxygen/search/all_10.js b/docs/reference/api/doxygen/search/all_10.js
index 9ce5090c6f..2f8964351d 100644
--- a/docs/reference/api/doxygen/search/all_10.js
+++ b/docs/reference/api/doxygen/search/all_10.js
@@ -33,7 +33,7 @@ var searchData=
   ['onehotattrs',['OneHotAttrs',['../structtvm_1_1relay_1_1OneHotAttrs.html',1,'tvm::relay']]],
   ['onesided',['onesided',['../structtvm_1_1relay_1_1StftAttrs.html#a23bb87eed8fca94613a4e2d8d7f22858',1,'tvm::relay::StftAttrs']]],
   ['oobchecker',['OOBChecker',['../namespacetvm_1_1tir_1_1transform.html#aea27d24b6e7852652d258268d8537b66',1,'tvm::tir::transform']]],
-  ['op',['Op',['../classtvm_1_1Op.html',1,'tvm::Op'],['../classtvm_1_1auto__scheduler_1_1StageNode.html#a97824d055f598a0dc93d601d9881797e',1,'tvm::auto_scheduler::StageNode::op()'],['../classtvm_1_1relay_1_1CallPatternNode.html#af0827599611846bb2952ffbfe3a9a60e',1,'tvm::relay::CallPatternNode::op()'],['../classtvm_1_1relay_1_1CallNode.html#ade66944f5a2f064e4eb07ad9f9438306',1,'tvm::relay::CallNode::op()'],['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#aa16a3e7e4030a69da0def6465d65e7 [...]
+  ['op',['Op',['../classtvm_1_1Op.html',1,'tvm::Op'],['../classtvm_1_1OpAttrMap.html#a2c31e8a3c11caeb061d69db14ebb0e95',1,'tvm::OpAttrMap::Op()'],['../classtvm_1_1auto__scheduler_1_1StageNode.html#a97824d055f598a0dc93d601d9881797e',1,'tvm::auto_scheduler::StageNode::op()'],['../classtvm_1_1relay_1_1CallPatternNode.html#af0827599611846bb2952ffbfe3a9a60e',1,'tvm::relay::CallPatternNode::op()'],['../classtvm_1_1relay_1_1CallNode.html#ade66944f5a2f064e4eb07ad9f9438306',1,'tvm::relay::CallNod [...]
   ['op_2eh',['op.h',['../ir_2op_8h.html',1,'(Global Namespace)'],['../relay_2op_8h.html',1,'(Global Namespace)'],['../tir_2op_8h.html',1,'(Global Namespace)']]],
   ['op2stage_5fcache_5f',['op2stage_cache_',['../classtvm_1_1te_1_1ScheduleNode.html#adbc8bfb6812add2173dcc7a6adb85d5c',1,'tvm::te::ScheduleNode']]],
   ['op_5fattr_5ftypes_2eh',['op_attr_types.h',['../relay_2op__attr__types_8h.html',1,'(Global Namespace)'],['../tir_2op__attr__types_8h.html',1,'(Global Namespace)']]],
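Aside: the all_10.js hunk folds tvm::OpAttrMap::Op() into the 'op' search entry alongside the existing hits. For orientation, a short sketch of how Op and OpAttrMap pair up in the C++ API; it assumes a build where the "relay.add" operator and the "TOpPattern" attribute are registered, as in stock TVM:

    #include <tvm/ir/op.h>
    #include <tvm/relay/op_attr_types.h>

    int QueryAddPattern() {
      // Look up a registered operator by its global name.
      const tvm::Op& add_op = tvm::Op::Get("relay.add");
      // OpAttrMap maps each Op to one registered attribute; TOpPattern is
      // the int-valued "operator pattern" attribute from op_attr_types.h.
      auto pattern_map =
          tvm::Op::GetAttrMap<tvm::relay::TOpPattern>("TOpPattern");
      return pattern_map[add_op];
    }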
diff --git a/docs/reference/api/doxygen/search/all_13.js b/docs/reference/api/doxygen/search/all_13.js
index 6ab51c6cce..71768b1ff9 100644
--- a/docs/reference/api/doxygen/search/all_13.js
+++ b/docs/reference/api/doxygen/search/all_13.js
@@ -12,7 +12,7 @@ var searchData=
   ['randomcomputelocation',['RandomComputeLocation',['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#a1bf485537817533eaf711226f687778c',1,'tvm::meta_schedule::ScheduleRule']]],
   ['randommodel',['RandomModel',['../classtvm_1_1auto__scheduler_1_1RandomModel.html',1,'tvm::auto_scheduler::RandomModel'],['../classtvm_1_1auto__scheduler_1_1RandomModel.html#aa456abf1dc91cbf76935189424d8954f',1,'tvm::auto_scheduler::RandomModel::RandomModel()'],['../classtvm_1_1auto__scheduler_1_1RandomModel.html#ac2b355e61135f2ff57d4f96fe2fba845',1,'tvm::auto_scheduler::RandomModel::RandomModel(::tvm::runtime::ObjectPtr&lt;::tvm::runtime::Object &gt; n)']]],
   ['randommodelnode',['RandomModelNode',['../classtvm_1_1auto__scheduler_1_1RandomModelNode.html',1,'tvm::auto_scheduler']]],
-  ['range',['Range',['../classtvm_1_1Range.html',1,'tvm::Range'],['../classtvm_1_1Range.html#a9d58cccc53897fee0c80ab1437da1f0f',1,'tvm::Range::Range()'],['../classtvm_1_1auto__scheduler_1_1IteratorNode.html#a2751c3164971b3154ffc506e3aebaf91',1,'tvm::auto_scheduler::IteratorNode::range()']]],
+  ['range',['Range',['../classtvm_1_1Range.html',1,'tvm::Range'],['../classtvm_1_1auto__scheduler_1_1IteratorNode.html#a2751c3164971b3154ffc506e3aebaf91',1,'tvm::auto_scheduler::IteratorNode::range()'],['../classtvm_1_1Range.html#a9d58cccc53897fee0c80ab1437da1f0f',1,'tvm::Range::Range()']]],
   ['rangenode',['RangeNode',['../classtvm_1_1RangeNode.html',1,'tvm::RangeNode'],['../classtvm_1_1RangeNode.html#ab845f7ed4ed85e360b730df3450d1aab',1,'tvm::RangeNode::RangeNode()'],['../classtvm_1_1RangeNode.html#a4bbc33969cb484c20306da1d2b9fa1fd',1,'tvm::RangeNode::RangeNode(PrimExpr min, PrimExpr extent, Span span=Span())']]],
   ['ranges',['ranges',['../classtvm_1_1arith_1_1IntConstraintsNode.html#ab23d4d806766c88b0df69dbfb5ebd63c',1,'tvm::arith::IntConstraintsNode']]],
   ['rate',['rate',['../structtvm_1_1relay_1_1DropoutAttrs.html#a0b5a52c24a1be53dbb122a1df9fe22af',1,'tvm::relay::DropoutAttrs']]],
@@ -198,7 +198,7 @@ var searchData=
   ['rewritetensorize',['RewriteTensorize',['../classtvm_1_1meta__schedule_1_1Postproc.html#a95db036cfced4c2575367a26a41498ff',1,'tvm::meta_schedule::Postproc']]],
   ['rewriteunboundblock',['RewriteUnboundBlock',['../classtvm_1_1meta__schedule_1_1Postproc.html#a1836b2278bc24fdc227c490896d92980',1,'tvm::meta_schedule::Postproc']]],
   ['rewriteunsafeselect',['RewriteUnsafeSelect',['../namespacetvm_1_1tir_1_1transform.html#a4fe43327c4454dd05b6e925577443f49',1,'tvm::tir::transform']]],
-  ['rfactor',['rfactor',['../classtvm_1_1auto__scheduler_1_1State.html#a21c27b06d439267f8b981fa05c5f48a0',1,'tvm::auto_scheduler::State::rfactor()'],['../classtvm_1_1te_1_1Schedule.html#a34ae85add41bbed0140726d024d08862',1,'tvm::te::Schedule::rfactor()'],['../classtvm_1_1tir_1_1ScheduleNode.html#ab185c8eac1065290d84d58e7f4617232',1,'tvm::tir::ScheduleNode::RFactor()']]],
+  ['rfactor',['RFactor',['../classtvm_1_1tir_1_1ScheduleNode.html#ab185c8eac1065290d84d58e7f4617232',1,'tvm::tir::ScheduleNode::RFactor()'],['../classtvm_1_1auto__scheduler_1_1State.html#a21c27b06d439267f8b981fa05c5f48a0',1,'tvm::auto_scheduler::State::rfactor()'],['../classtvm_1_1te_1_1Schedule.html#a34ae85add41bbed0140726d024d08862',1,'tvm::te::Schedule::rfactor()']]],
   ['rfactorstep',['RfactorStep',['../classtvm_1_1auto__scheduler_1_1RfactorStep.html',1,'tvm::auto_scheduler::RfactorStep'],['../classtvm_1_1auto__scheduler_1_1RfactorStep.html#a26e6f85b55307f18fab4469e3bd4be0c',1,'tvm::auto_scheduler::RfactorStep::RfactorStep(int stage_id, int iter_id, int factor_iter_id)'],['../classtvm_1_1auto__scheduler_1_1RfactorStep.html#a95575c21441177634178245ab562cb4f',1,'tvm::auto_scheduler::RfactorStep::RfactorStep(dmlc::JSONReader *reader)']]],
   ['rfactorstepnode',['RfactorStepNode',['../classtvm_1_1auto__scheduler_1_1RfactorStepNode.html',1,'tvm::auto_scheduler']]],
   ['rhs',['rhs',['../classtvm_1_1relay_1_1ClauseNode.html#a93217eeea15c1f7c1a659da3da86d3bd',1,'tvm::relay::ClauseNode::rhs()'],['../classtvm_1_1script_1_1printer_1_1AssignDocNode.html#a436fcace00d445213fc367ece59c4067',1,'tvm::script::printer::AssignDocNode::rhs()'],['../classtvm_1_1script_1_1printer_1_1ForDocNode.html#aa72614136675287310ea08520f596642',1,'tvm::script::printer::ForDocNode::rhs()'],['../classtvm_1_1script_1_1printer_1_1ScopeDocNode.html#abf3636ac2820118a3d48f2fea32b2b0b' [...]
diff --git a/docs/reference/api/doxygen/search/all_14.js b/docs/reference/api/doxygen/search/all_14.js
index 2a9bc4fef4..e84b11d1f0 100644
--- a/docs/reference/api/doxygen/search/all_14.js
+++ b/docs/reference/api/doxygen/search/all_14.js
@@ -98,7 +98,7 @@ var searchData=
   ['selectshashreduce_3c_20t_2c_20traitname_2c_20false_20_3e',['SelectSHashReduce&lt; T, TraitName, false &gt;',['../structtvm_1_1detail_1_1SelectSHashReduce_3_01T_00_01TraitName_00_01false_01_4.html',1,'tvm::detail']]],
   ['selectvisitattrs',['SelectVisitAttrs',['../structtvm_1_1detail_1_1SelectVisitAttrs.html',1,'tvm::detail']]],
   ['selectvisitattrs_3c_20t_2c_20traitname_2c_20false_20_3e',['SelectVisitAttrs&lt; T, TraitName, false &gt;',['../structtvm_1_1detail_1_1SelectVisitAttrs_3_01T_00_01TraitName_00_01false_01_4.html',1,'tvm::detail']]],
-  ['self',['self',['../classtvm_1_1runtime_1_1MapNode_1_1iterator.html#a5bac4439279428fb3c0d44aa6b1cc798',1,'tvm::runtime::MapNode::iterator::self()'],['../classtvm_1_1runtime_1_1InplaceArrayBase.html#ae447f7c7a742fb3f5613a632706509df',1,'tvm::runtime::InplaceArrayBase::Self()']]],
+  ['self',['Self',['../classtvm_1_1runtime_1_1InplaceArrayBase.html#ae447f7c7a742fb3f5613a632706509df',1,'tvm::runtime::InplaceArrayBase::Self()'],['../classtvm_1_1runtime_1_1MapNode_1_1iterator.html#a5bac4439279428fb3c0d44aa6b1cc798',1,'tvm::runtime::MapNode::iterator::self()']]],
   ['sendbodychunk',['SendBodyChunk',['../classtvm_1_1runtime_1_1micro__rpc_1_1Session.html#a37b77101825145283cced6cd05eb502c',1,'tvm::runtime::micro_rpc::Session']]],
   ['sendmessage',['SendMessage',['../classtvm_1_1runtime_1_1micro__rpc_1_1Session.html#a6e540521a7e9188564da712c0641619c',1,'tvm::runtime::micro_rpc::Session']]],
   ['seq',['seq',['../classtvm_1_1tir_1_1SeqStmtNode.html#a0e548955529d35c56e646fcaac38f865',1,'tvm::tir::SeqStmtNode']]],
@@ -135,8 +135,8 @@ var searchData=
   ['set_5fconfig',['set_config',['../classtvm_1_1TargetTagRegEntry.html#a3c1b66885a103360f56a17ef1e4dde2e',1,'tvm::TargetTagRegEntry']]],
   ['set_5fcreator',['set_creator',['../classtvm_1_1ReflectionVTable_1_1Registry.html#a33948eae2c61e1c80c637f08b516594a',1,'tvm::ReflectionVTable::Registry']]],
   ['set_5fdefault',['set_default',['../structtvm_1_1detail_1_1AttrNopEntry.html#a370e92bafbada9ba805a52e72881f98b',1,'tvm::detail::AttrNopEntry::set_default()'],['../structtvm_1_1detail_1_1AttrInitEntry.html#ae6f6e6264a5b6373b2daada1f55a1dca',1,'tvm::detail::AttrInitEntry::set_default()'],['../classtvm_1_1detail_1_1AttrDocEntry.html#a2a0d680fbaaef688f3ffb9e5d897e417',1,'tvm::detail::AttrDocEntry::set_default()'],['../structtvm_1_1detail_1_1AttrTriggerNonDefaultEntry.html#ae88a65b8d90a7c5 [...]
+  ['set_5fdefault_5fdevice_5ftype',['set_default_device_type',['../classtvm_1_1TargetKindRegEntry.html#aa34789ae275e36dcd6696aa3881bbc92',1,'tvm::TargetKindRegEntry']]],
   ['set_5fdefault_5fkeys',['set_default_keys',['../classtvm_1_1TargetKindRegEntry.html#a2995c32e12246e892f7f4cb621a2819c',1,'tvm::TargetKindRegEntry']]],
-  ['set_5fdevice_5ftype',['set_device_type',['../classtvm_1_1TargetKindRegEntry.html#ae3ce5349493f402b82e755a0a180bd9a',1,'tvm::TargetKindRegEntry']]],
   ['set_5fdispatch',['set_dispatch',['../classtvm_1_1NodeFunctor_3_01R_07const_01ObjectRef_01_6n_00_01Args_8_8_8_08_4.html#a2fcc19e5151e9b9e56cafc76231b29fd',1,'tvm::NodeFunctor&lt; R(const ObjectRef &amp;n, Args...)&gt;::set_dispatch()'],['../classtvm_1_1script_1_1printer_1_1TracedObjectFunctor.html#a39e23af093ba0ee9dab17de86b6fa58e',1,'tvm::script::printer::TracedObjectFunctor::set_dispatch(String token, uint32_t type_index, runtime::PackedFunc f)'],['../classtvm_1_1script_1_1printer_1 [...]
   ['set_5fis_5fpure',['set_is_pure',['../classtvm_1_1tir_1_1InstructionKindRegEntry.html#ade332453b008e4fce49a3e3ebb4721c5',1,'tvm::tir::InstructionKindRegEntry']]],
   ['set_5flower_5fbound',['set_lower_bound',['../structtvm_1_1detail_1_1AttrNopEntry.html#a36da34fc54009d63283d07e9d41657f7',1,'tvm::detail::AttrNopEntry::set_lower_bound()'],['../structtvm_1_1detail_1_1AttrInitEntry.html#a5608a2a457a397bf11f2be2776ec0653',1,'tvm::detail::AttrInitEntry::set_lower_bound()'],['../classtvm_1_1detail_1_1AttrDocEntry.html#a201e9d6c937d2f444d91fcc8185f8309',1,'tvm::detail::AttrDocEntry::set_lower_bound()'],['../structtvm_1_1detail_1_1AttrTriggerNonDefaultEntry [...]
@@ -177,7 +177,7 @@ var searchData=
   ['setvalue_3c_20uint64_5ft_20_3e',['SetValue&lt; uint64_t &gt;',['../namespacetvm_1_1detail.html#acb3382242cbf538f64edae13e4ec5a84',1,'tvm::detail']]],
   ['shallowcopy',['ShallowCopy',['../classtvm_1_1IRModuleNode.html#a86bbdc4b857ce5958a2b5f29e1d6fcb6',1,'tvm::IRModuleNode']]],
   ['shallowcopyirmodule',['ShallowCopyIRModule',['../classtvm_1_1IRModule.html#aea8b821cf92cf525bd87bf15f5d31889',1,'tvm::IRModule']]],
-  ['shape',['Shape',['../classtvm_1_1runtime_1_1NDArray.html#ad273c7bc59b73fb026fd64fc764cbebc',1,'tvm::runtime::NDArray::Shape()'],['../classtvm_1_1TensorTypeNode.html#a98fa347833e4504dd6f8056d9863a708',1,'tvm::TensorTypeNode::shape()'],['../classtvm_1_1meta__schedule_1_1TensorInfoNode.html#ac16d3b10f7c68eefb27e55e865bb304c',1,'tvm::meta_schedule::TensorInfoNode::shape()'],['../structtvm_1_1relay_1_1InitOpAttrs.html#aaaec76cc5ea9a543c4ea174a6b38bf5e',1,'tvm::relay::InitOpAttrs::shape()' [...]
+  ['shape',['shape',['../classtvm_1_1TensorTypeNode.html#a98fa347833e4504dd6f8056d9863a708',1,'tvm::TensorTypeNode::shape()'],['../classtvm_1_1meta__schedule_1_1TensorInfoNode.html#ac16d3b10f7c68eefb27e55e865bb304c',1,'tvm::meta_schedule::TensorInfoNode::shape()'],['../structtvm_1_1relay_1_1InitOpAttrs.html#aaaec76cc5ea9a543c4ea174a6b38bf5e',1,'tvm::relay::InitOpAttrs::shape()'],['../classtvm_1_1relay_1_1ShapePatternNode.html#a749813cbbd38f8021a7df897d527d6e0',1,'tvm::relay::ShapePattern [...]
   ['shape_5f',['shape_',['../classtvm_1_1runtime_1_1NDArray_1_1ContainerBase.html#aa5597a1760c9f8c9d1fd51584b1283fb',1,'tvm::runtime::NDArray::ContainerBase']]],
   ['shape_5fbackward_5frule',['shape_backward_rule',['../classtvm_1_1tir_1_1BijectiveLayoutNode.html#a0befdd0a2371c0d12970e8ac6623b59b',1,'tvm::tir::BijectiveLayoutNode']]],
   ['shape_5fcount',['shape_count',['../structTVMGraphExecutorGraphAttr.html#a182b228582f1186f2a15de50a25b3375',1,'TVMGraphExecutorGraphAttr']]],
@@ -225,7 +225,7 @@ var searchData=
   ['singleton',['Singleton',['../classtvm_1_1te_1_1Singleton.html',1,'tvm::te::Singleton'],['../classtvm_1_1te_1_1Singleton.html#a94450b853dcd5e9865546d8c8fe351a1',1,'tvm::te::Singleton::Singleton()']]],
   ['singletonnode',['SingletonNode',['../classtvm_1_1te_1_1SingletonNode.html',1,'tvm::te']]],
   ['sinh',['sinh',['../namespacetvm.html#ad828bc801c73df761c58d9f8877d52ee',1,'tvm::sinh()'],['../namespacetvm_1_1topi.html#af9694f5470ba2cabc19866be3b00fe8d',1,'tvm::topi::sinh()']]],
-  ['size',['size',['../structtvm_1_1relay_1_1Resize1DAttrs.html#afb1175c0ff019e485ed65d98305b5f62',1,'tvm::relay::Resize1DAttrs::size()'],['../structtvm_1_1relay_1_1Resize2DAttrs.html#ab3e26dbbc2dc1da40764832a99459c30',1,'tvm::relay::Resize2DAttrs::size()'],['../structtvm_1_1relay_1_1Resize3DAttrs.html#aab61649fe8417a8a7fbc849090bac083',1,'tvm::relay::Resize3DAttrs::size()'],['../structtvm_1_1relay_1_1LRNAttrs.html#a3758ed1f8a8bcf73008ae1dd2bfa148e',1,'tvm::relay::LRNAttrs::size()'],['.. [...]
+  ['size',['Size',['../classtvm_1_1TensorTypeNode.html#a1f08dac86ae8aea81d058ef64cfd38b4',1,'tvm::TensorTypeNode::Size()'],['../classtvm_1_1meta__schedule_1_1DatabaseNode.html#aae5b9ab9f7e497654b90c23a2159a5cc',1,'tvm::meta_schedule::DatabaseNode::Size()'],['../classtvm_1_1meta__schedule_1_1PyDatabaseNode.html#a36817d04978253571fef7d01427ce9c0',1,'tvm::meta_schedule::PyDatabaseNode::Size()'],['../classtvm_1_1runtime_1_1micro__rpc_1_1FrameBuffer.html#ae395a0f1c6e79e825aa7a244c74a5d7b',1,' [...]
   ['size_5f',['size_',['../classtvm_1_1runtime_1_1MapNode.html#a2285f106f6afa29f512a7818ad59e9e5',1,'tvm::runtime::MapNode']]],
   ['size_5fbytes',['size_bytes',['../structtvm_1_1tir_1_1usmp_1_1BufferInfoNode.html#a0a5d4bd6072c268df05b90d267b4c0a0',1,'tvm::tir::usmp::BufferInfoNode']]],
   ['size_5fhint_5fbytes',['size_hint_bytes',['../structtvm_1_1PoolInfoNode.html#ac073aeb75bf031ff8687e132bc112f92',1,'tvm::PoolInfoNode::size_hint_bytes()'],['../structtvm_1_1PoolInfoPropertiesNode.html#aed7c5573ffc8db9424e77e3a85cad120',1,'tvm::PoolInfoPropertiesNode::size_hint_bytes()']]],
@@ -254,7 +254,7 @@ var searchData=
   ['solvelinearequations',['SolveLinearEquations',['../namespacetvm_1_1arith.html#ae0290f04432523ab8e5f76edde80071a',1,'tvm::arith']]],
   ['solvelinearinequalities',['SolveLinearInequalities',['../namespacetvm_1_1arith.html#ac59d63560e04431f108e81457b212fdc',1,'tvm::arith']]],
   ['sorted',['sorted',['../structtvm_1_1relay_1_1UniqueAttrs.html#aef434799646533ec9d796393ba01db44',1,'tvm::relay::UniqueAttrs']]],
-  ['source',['Source',['../classtvm_1_1parser_1_1Source.html',1,'tvm::parser::Source'],['../classtvm_1_1parser_1_1Source.html#a0ef9f726abcc6c4c9e81b3a257055df8',1,'tvm::parser::Source::Source()'],['../classtvm_1_1arith_1_1IterMarkNode.html#a8b885a675c88e5a5d142fa68bcba048a',1,'tvm::arith::IterMarkNode::source()'],['../classtvm_1_1arith_1_1IterSplitExprNode.html#a7a129dc9b432359a07c1a1e286c3c66f',1,'tvm::arith::IterSplitExprNode::source()'],['../classtvm_1_1parser_1_1SourceNode.html#a51cc [...]
+  ['source',['Source',['../classtvm_1_1parser_1_1Source.html',1,'tvm::parser::Source'],['../classtvm_1_1arith_1_1IterMarkNode.html#a8b885a675c88e5a5d142fa68bcba048a',1,'tvm::arith::IterMarkNode::source()'],['../classtvm_1_1arith_1_1IterSplitExprNode.html#a7a129dc9b432359a07c1a1e286c3c66f',1,'tvm::arith::IterSplitExprNode::source()'],['../classtvm_1_1parser_1_1SourceNode.html#a51cc3c98e4cdacf0ffdc643c848e09af',1,'tvm::parser::SourceNode::source()'],['../classtvm_1_1tir_1_1ReduceNode.html# [...]
   ['source_5fmap',['source_map',['../classtvm_1_1IRModuleNode.html#a49470c0bfb4b85d9eda7576a837b7031',1,'tvm::IRModuleNode::source_map()'],['../classtvm_1_1parser_1_1SourceMapNode.html#ae22bc1181b066f17f8938868ef22610a',1,'tvm::parser::SourceMapNode::source_map()']]],
   ['source_5fmap_2eh',['source_map.h',['../source__map_8h.html',1,'']]],
   ['source_5fname',['source_name',['../classtvm_1_1DiagnosticBuilder.html#a92d320e1ede24fe5ff47862365002691',1,'tvm::DiagnosticBuilder::source_name()'],['../classtvm_1_1SpanNode.html#ad573167f93facbfbee19983b08bbba3d',1,'tvm::SpanNode::source_name()'],['../classtvm_1_1parser_1_1SourceNode.html#a8d4c50a18eb3e99b14d73d7db2a52af3',1,'tvm::parser::SourceNode::source_name()']]],
@@ -271,7 +271,7 @@ var searchData=
   ['spacegeneratornode',['SpaceGeneratorNode',['../classtvm_1_1meta__schedule_1_1SpaceGeneratorNode.html',1,'tvm::meta_schedule']]],
   ['spacegeneratorunion',['SpaceGeneratorUnion',['../classtvm_1_1meta__schedule_1_1SpaceGenerator.html#aa13f2244870b18f3e9788d41a400636e',1,'tvm::meta_schedule::SpaceGenerator']]],
   ['spacetobatchndattrs',['SpaceToBatchNDAttrs',['../structtvm_1_1relay_1_1SpaceToBatchNDAttrs.html',1,'tvm::relay']]],
-  ['span',['Span',['../classtvm_1_1support_1_1Span.html',1,'tvm::support::Span&lt; T, W &gt;'],['../classtvm_1_1Span.html',1,'tvm::Span'],['../classtvm_1_1AffineTypeNode.html#aa45c91e3c8ebcff609d10f6a921f3fa2',1,'tvm::AffineTypeNode::span()'],['../classtvm_1_1DiagnosticNode.html#af5469f228f87711ad8bd3f4f78f3bb54',1,'tvm::DiagnosticNode::span()'],['../classtvm_1_1DiagnosticBuilder.html#a52d9cc3cb33e655c5d82af47daa74c66',1,'tvm::DiagnosticBuilder::span()'],['../classtvm_1_1CompileError.htm [...]
+  ['span',['Span',['../classtvm_1_1support_1_1Span.html',1,'tvm::support::Span&lt; T, W &gt;'],['../classtvm_1_1Span.html',1,'tvm::Span'],['../classtvm_1_1Span.html#a5216631b639e8c802263d87d3fe9e5f6',1,'tvm::Span::Span()'],['../classtvm_1_1support_1_1Span.html#a77653730a2542edf93b7c4413a72f3ec',1,'tvm::support::Span::Span(T *begin, int num_elements)'],['../classtvm_1_1support_1_1Span.html#a3c22dd06856e7029e7107adf38eb72f5',1,'tvm::support::Span::Span(T *begin, T *end)'],['../classtvm_1_1 [...]
   ['span_2eh',['span.h',['../ir_2span_8h.html',1,'(Global Namespace)'],['../support_2span_8h.html',1,'(Global Namespace)']]],
   ['spannode',['SpanNode',['../classtvm_1_1SpanNode.html',1,'tvm::SpanNode'],['../namespacetvm_1_1relay.html#a7d0fa6578e97d0d64b08865f94f04827',1,'tvm::relay::SpanNode()']]],
   ['sparse_5flhs',['sparse_lhs',['../structtvm_1_1relay_1_1SparseDenseAttrs.html#ae52d5465cb3421f342607abcc1cb1d5c',1,'tvm::relay::SparseDenseAttrs']]],
@@ -328,13 +328,13 @@ var searchData=
   ['stagenode',['StageNode',['../classtvm_1_1auto__scheduler_1_1StageNode.html',1,'tvm::auto_scheduler::StageNode'],['../classtvm_1_1te_1_1StageNode.html',1,'tvm::te::StageNode']]],
   ['stages',['stages',['../classtvm_1_1auto__scheduler_1_1StateNode.html#a881e14990bf228ee3fddb3721c451b9e',1,'tvm::auto_scheduler::StateNode::stages()'],['../classtvm_1_1te_1_1ScheduleNode.html#ab5649969db603d6b7b4d155c0d09cdd5',1,'tvm::te::ScheduleNode::stages()']]],
   ['stagetoaxesmap',['StageToAxesMap',['../namespacetvm_1_1auto__scheduler.html#a8f12e558fc4b8fbb990e7e204c06beeb',1,'tvm::auto_scheduler']]],
-  ['start',['start',['../structtvm_1_1relay_1_1ArangeAttrs.html#ae8ae5bc1551b406a4f52395af343c2ce',1,'tvm::relay::ArangeAttrs::start()'],['../classtvm_1_1script_1_1printer_1_1SliceDocNode.html#a16de0189a979a6cf9d8f14b39cb5fb54',1,'tvm::script::printer::SliceDocNode::start()'],['../classtvm_1_1runtime_1_1TimerNode.html#aa11fc338c39ee2137448e54a10efe0ae',1,'tvm::runtime::TimerNode::Start()'],['../classtvm_1_1runtime_1_1Timer.html#a89bcaa433499bc68902cb473d5eba6ca',1,'tvm::runtime::Timer::S [...]
+  ['start',['Start',['../classtvm_1_1runtime_1_1TimerNode.html#aa11fc338c39ee2137448e54a10efe0ae',1,'tvm::runtime::TimerNode::Start()'],['../classtvm_1_1runtime_1_1Timer.html#a89bcaa433499bc68902cb473d5eba6ca',1,'tvm::runtime::Timer::Start()'],['../classtvm_1_1runtime_1_1profiling_1_1MetricCollectorNode.html#a44fadfb7b0f961a7fb2275e3b5dbcd88',1,'tvm::runtime::profiling::MetricCollectorNode::Start()'],['../classtvm_1_1runtime_1_1profiling_1_1Profiler.html#aee5452075c8e022b8aaa6fb365f68e14 [...]
   ['start_5findex',['start_index',['../namespacetvm_1_1topi_1_1nn.html#a752c4130dac73fd2de0390c5f6b24b15',1,'tvm::topi::nn']]],
   ['startcall',['StartCall',['../classtvm_1_1runtime_1_1profiling_1_1Profiler.html#a1fe322f7ba92be44d7e7c8cb184f3833',1,'tvm::runtime::profiling::Profiler']]],
   ['startmessage',['StartMessage',['../classtvm_1_1runtime_1_1micro__rpc_1_1Session.html#acd512b977c6dd888f90c4fd6d2b9500f',1,'tvm::runtime::micro_rpc::Session']]],
   ['startpacket',['StartPacket',['../classtvm_1_1runtime_1_1micro__rpc_1_1Framer.html#ade10d3bd3a26e3b7af881ae134e9a998',1,'tvm::runtime::micro_rpc::Framer']]],
   ['startsession',['StartSession',['../classtvm_1_1runtime_1_1micro__rpc_1_1Session.html#a15d3f9ecb8b22bf2d330f6f0a16c5239',1,'tvm::runtime::micro_rpc::Session']]],
-  ['state',['State',['../classtvm_1_1auto__scheduler_1_1State.html',1,'tvm::auto_scheduler::State'],['../classtvm_1_1auto__scheduler_1_1MeasureInputNode.html#afb23aaf6133189687d2541ec6e1352f4',1,'tvm::auto_scheduler::MeasureInputNode::state()'],['../classtvm_1_1tir_1_1ScheduleNode.html#abb3612c2598fa2d3ee0e6e3fc3de8a26',1,'tvm::tir::ScheduleNode::state()'],['../classtvm_1_1auto__scheduler_1_1State.html#a9e8198b1f51b42cfbbee4b9f42160749',1,'tvm::auto_scheduler::State::State()']]],
+  ['state',['State',['../classtvm_1_1auto__scheduler_1_1State.html',1,'tvm::auto_scheduler::State'],['../classtvm_1_1auto__scheduler_1_1State.html#a9e8198b1f51b42cfbbee4b9f42160749',1,'tvm::auto_scheduler::State::State()'],['../classtvm_1_1auto__scheduler_1_1MeasureInputNode.html#afb23aaf6133189687d2541ec6e1352f4',1,'tvm::auto_scheduler::MeasureInputNode::state()'],['../classtvm_1_1tir_1_1ScheduleNode.html#abb3612c2598fa2d3ee0e6e3fc3de8a26',1,'tvm::tir::ScheduleNode::state()']]],
   ['state_2eh',['state.h',['../state_8h.html',1,'']]],
   ['state_5fplaceholder',['state_placeholder',['../classtvm_1_1te_1_1ScanOpNode.html#a69105f6a84dd4fb912a16bfaa68aebf6',1,'tvm::te::ScanOpNode']]],
   ['statenode',['StateNode',['../classtvm_1_1auto__scheduler_1_1StateNode.html',1,'tvm::auto_scheduler']]],
diff --git a/docs/reference/api/doxygen/search/all_15.js b/docs/reference/api/doxygen/search/all_15.js
index 6c4f66ade6..6a1b4fe2a7 100644
--- a/docs/reference/api/doxygen/search/all_15.js
+++ b/docs/reference/api/doxygen/search/all_15.js
@@ -40,7 +40,7 @@ var searchData=
   ['takeattrs',['TakeAttrs',['../structtvm_1_1relay_1_1TakeAttrs.html',1,'tvm::relay']]],
   ['tan',['tan',['../namespacetvm.html#af99838098788d40c80b402f29b3c2e8c',1,'tvm::tan()'],['../namespacetvm_1_1topi.html#a13b757fe52775f43a58d91c0a1330f97',1,'tvm::topi::tan()']]],
   ['tanh',['tanh',['../namespacetvm.html#a12c5457301d8a2c03a2ba1163edd7cee',1,'tvm::tanh()'],['../namespacetvm_1_1topi.html#aec153e599d33c78a7592007cde1c02cb',1,'tvm::topi::tanh()']]],
-  ['target',['Target',['../classtvm_1_1Target.html',1,'tvm::Target'],['../classtvm_1_1Target.html#a58a5a1e042e265fe5a6973045226fe1a',1,'tvm::Target::Target(std::nullptr_t)'],['../classtvm_1_1Target.html#a77f3d7cc97d8cfd7172af58b4e784d89',1,'tvm::Target::Target(const String &amp;tag_or_config_or_target_str)'],['../classtvm_1_1Target.html#ab825b350cf478bf948d807b6fdf636a0',1,'tvm::Target::Target(const Map&lt; String, ObjectRef &gt; &amp;config)'],['../classtvm_1_1Target.html#a1abb29217d8e3 [...]
+  ['target',['Target',['../classtvm_1_1Target.html',1,'tvm::Target'],['../classtvm_1_1auto__scheduler_1_1SearchTaskNode.html#acf4407e0c8dced81b05b34ec0426c933',1,'tvm::auto_scheduler::SearchTaskNode::target()'],['../classtvm_1_1meta__schedule_1_1BuilderInputNode.html#afc001f3e427cfc8c05236b615cfd2868',1,'tvm::meta_schedule::BuilderInputNode::target()'],['../classtvm_1_1meta__schedule_1_1TuningRecordNode.html#a45a380cfa2edfd63056fb1a00f9aac35',1,'tvm::meta_schedule::TuningRecordNode::targ [...]
   ['target_2eh',['target.h',['../target_8h.html',1,'']]],
   ['target_5fburst_5fbytes',['target_burst_bytes',['../structtvm_1_1PoolInfoNode.html#a747c03e3eafc83b053637b735244c6d7',1,'tvm::PoolInfoNode::target_burst_bytes()'],['../structtvm_1_1PoolInfoPropertiesNode.html#aa1efe29e920f5b003894a2ae3304da17',1,'tvm::PoolInfoPropertiesNode::target_burst_bytes()']]],
   ['target_5fhost',['target_host',['../classtvm_1_1auto__scheduler_1_1SearchTaskNode.html#afe27bf8cb82dc8a1b6fffb9e5a3e6c20',1,'tvm::auto_scheduler::SearchTaskNode']]],
@@ -74,7 +74,7 @@ var searchData=
   ['te',['te',['../namespacetvm_1_1te.html',1,'tvm']]],
   ['tempexpr',['TempExpr',['../classtvm_1_1relay_1_1TempExpr.html',1,'tvm::relay']]],
   ['tempexprnode',['TempExprNode',['../classtvm_1_1relay_1_1TempExprNode.html',1,'tvm::relay']]],
-  ['tensor',['Tensor',['../classtvm_1_1te_1_1Tensor.html',1,'tvm::te::Tensor'],['../classtvm_1_1te_1_1Tensor.html#afc8d8e74d1c840359661b39514d6fecf',1,'tvm::te::Tensor::Tensor()'],['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#a22de469ea5521ba12e14f1e8181bae56',1,'tvm::runtime::vm::Instruction::tensor()']]],
+  ['tensor',['Tensor',['../classtvm_1_1te_1_1Tensor.html',1,'tvm::te::Tensor'],['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#a22de469ea5521ba12e14f1e8181bae56',1,'tvm::runtime::vm::Instruction::tensor()'],['../classtvm_1_1te_1_1Tensor.html#afc8d8e74d1c840359661b39514d6fecf',1,'tvm::te::Tensor::Tensor()']]],
   ['tensor_2eh',['tensor.h',['../tensor_8h.html',1,'']]],
   ['tensor_5fintrin',['tensor_intrin',['../classtvm_1_1te_1_1IterVarAttrNode.html#a6a0d96bbebfd716f851b2ad01738cb3f',1,'tvm::te::IterVarAttrNode']]],
   ['tensor_5fintrin_2eh',['tensor_intrin.h',['../tensor__intrin_8h.html',1,'']]],
@@ -93,7 +93,7 @@ var searchData=
   ['tensorintrincall',['TensorIntrinCall',['../classtvm_1_1te_1_1TensorIntrinCall.html',1,'tvm::te::TensorIntrinCall'],['../classtvm_1_1te_1_1TensorIntrinCall.html#a91c10074ce6babeba78fe72a0aab4b52',1,'tvm::te::TensorIntrinCall::TensorIntrinCall()']]],
   ['tensorintrincallnode',['TensorIntrinCallNode',['../classtvm_1_1te_1_1TensorIntrinCallNode.html',1,'tvm::te']]],
   ['tensorintrinnode',['TensorIntrinNode',['../classtvm_1_1te_1_1TensorIntrinNode.html',1,'tvm::te::TensorIntrinNode'],['../classtvm_1_1tir_1_1TensorIntrinNode.html',1,'tvm::tir::TensorIntrinNode'],['../classtvm_1_1te_1_1TensorIntrinNode.html#ad59e7f2b881fc798a8c64fd3959f929c',1,'tvm::te::TensorIntrinNode::TensorIntrinNode()']]],
-  ['tensorize',['Tensorize',['../classtvm_1_1tir_1_1ScheduleNode.html#ae3794a03b566e5b1721b44c564992975',1,'tvm::tir::ScheduleNode::Tensorize(const LoopRV &amp;loop_rv, const String &amp;intrin)=0'],['../classtvm_1_1tir_1_1ScheduleNode.html#aaca1621ab9c3db0ddd04ac57de79d37f',1,'tvm::tir::ScheduleNode::Tensorize(const BlockRV &amp;block_rv, const String &amp;intrin)=0'],['../classtvm_1_1te_1_1Stage.html#ab5fe485e1d730c36b096c060b8d2ef9d',1,'tvm::te::Stage::tensorize()']]],
+  ['tensorize',['tensorize',['../classtvm_1_1te_1_1Stage.html#ab5fe485e1d730c36b096c060b8d2ef9d',1,'tvm::te::Stage::tensorize()'],['../classtvm_1_1tir_1_1ScheduleNode.html#ae3794a03b566e5b1721b44c564992975',1,'tvm::tir::ScheduleNode::Tensorize(const LoopRV &amp;loop_rv, const String &amp;intrin)=0'],['../classtvm_1_1tir_1_1ScheduleNode.html#aaca1621ab9c3db0ddd04ac57de79d37f',1,'tvm::tir::ScheduleNode::Tensorize(const BlockRV &amp;block_rv, const String &amp;intrin)=0']]],
   ['tensornode',['TensorNode',['../classtvm_1_1te_1_1TensorNode.html',1,'tvm::te::TensorNode'],['../classtvm_1_1te_1_1TensorNode.html#a153569448cb1bf9d2924d35639c3b8b8',1,'tvm::te::TensorNode::TensorNode()']]],
   ['tensors',['tensors',['../classtvm_1_1auto__scheduler_1_1ComputeDAGNode.html#afc71b9ecc0d6b82a5c2ab3250f01514b',1,'tvm::auto_scheduler::ComputeDAGNode::tensors()'],['../classtvm_1_1te_1_1TensorIntrinCallNode.html#a92b543750ea55b9cfd6852139e2ddbd6',1,'tvm::te::TensorIntrinCallNode::tensors()']]],
   ['tensortype',['TensorType',['../classtvm_1_1TensorType.html',1,'tvm::TensorType'],['../classtvm_1_1TensorType.html#ade4460e9b02b42757a83808dec478b87',1,'tvm::TensorType::TensorType()'],['../namespacetvm_1_1relay.html#a52c13723bba53f4953dfd10c34d480f8',1,'tvm::relay::TensorType()']]],
@@ -168,7 +168,7 @@ var searchData=
   ['touchtask',['TouchTask',['../classtvm_1_1meta__schedule_1_1TaskSchedulerNode.html#af6fa276674945d3432c129bdf9cea599',1,'tvm::meta_schedule::TaskSchedulerNode::TouchTask()'],['../classtvm_1_1meta__schedule_1_1PyTaskSchedulerNode.html#a7de09f81c8aceb580b43107f266e6b40',1,'tvm::meta_schedule::PyTaskSchedulerNode::TouchTask()']]],
   ['tovar',['ToVar',['../classtvm_1_1tir_1_1AnyNode.html#ae01ebbba2378afb6509a22de97f8fb30',1,'tvm::tir::AnyNode']]],
   ['tparent',['TParent',['../classtvm_1_1OpAttrMap.html#a316480ca7450209650fc1a62f7ce4a14',1,'tvm::OpAttrMap::TParent()'],['../classtvm_1_1TargetKindAttrMap.html#a37eb6bfb0d881cf897147b17ff7d3265',1,'tvm::TargetKindAttrMap::TParent()']]],
-  ['trace',['Trace',['../classtvm_1_1tir_1_1Trace.html',1,'tvm::tir::Trace'],['../classtvm_1_1meta__schedule_1_1TuningRecordNode.html#a8cc2d64f796593a1a774eef259f17b29',1,'tvm::meta_schedule::TuningRecordNode::trace()'],['../classtvm_1_1tir_1_1ScheduleNode.html#a953bca4123b5a758adfdcd65634a5f3b',1,'tvm::tir::ScheduleNode::trace()'],['../classtvm_1_1tir_1_1Trace.html#a8e09abffd0b9b1afac7b832cf16c142d',1,'tvm::tir::Trace::Trace()'],['../classtvm_1_1tir_1_1Trace.html#af79bccf1bde25efea387bb [...]
+  ['trace',['Trace',['../classtvm_1_1tir_1_1Trace.html',1,'tvm::tir::Trace'],['../classtvm_1_1tir_1_1Trace.html#a8e09abffd0b9b1afac7b832cf16c142d',1,'tvm::tir::Trace::Trace()'],['../classtvm_1_1tir_1_1Trace.html#af79bccf1bde25efea387bb1b82dacaa6',1,'tvm::tir::Trace::Trace(Array&lt; Instruction &gt; insts, Map&lt; Instruction, ObjectRef &gt; decisions)'],['../classtvm_1_1meta__schedule_1_1TuningRecordNode.html#a8cc2d64f796593a1a774eef259f17b29',1,'tvm::meta_schedule::TuningRecordNode::tra [...]
   ['trace_2eh',['trace.h',['../trace_8h.html',1,'']]],
   ['traced',['Traced',['../classtvm_1_1tir_1_1Schedule.html#a295d432b86621101f67b20fadb367b91',1,'tvm::tir::Schedule']]],
   ['traced_5fobject_2eh',['traced_object.h',['../traced__object_8h.html',1,'']]],
diff --git a/docs/reference/api/doxygen/search/all_16.js b/docs/reference/api/doxygen/search/all_16.js
index 2dfa3b02d4..d0cc5f1bbf 100644
--- a/docs/reference/api/doxygen/search/all_16.js
+++ b/docs/reference/api/doxygen/search/all_16.js
@@ -29,7 +29,7 @@ var searchData=
   ['unknownattributeaccesspathnode',['UnknownAttributeAccessPathNode',['../classtvm_1_1UnknownAttributeAccessPathNode.html',1,'tvm::UnknownAttributeAccessPathNode'],['../classtvm_1_1UnknownAttributeAccessPathNode.html#a1882e9e591466a2785acc761dc63d56e',1,'tvm::UnknownAttributeAccessPathNode::UnknownAttributeAccessPathNode()']]],
   ['unmatchedcases',['UnmatchedCases',['../namespacetvm_1_1relay.html#aa3a8cace40f8056fd6412f39c3eaa605',1,'tvm::relay']]],
   ['unravel_5findex',['unravel_index',['../namespacetvm_1_1topi.html#a8811a02532bbe3047986bf1a8449ac0e',1,'tvm::topi']]],
-  ['unroll',['unroll',['../classtvm_1_1auto__scheduler_1_1State.html#aa68a9d2e226bae38a36e4be4af1d1ae4',1,'tvm::auto_scheduler::State::unroll()'],['../classtvm_1_1te_1_1Stage.html#af83ad8672660403504f472228b044b33',1,'tvm::te::Stage::unroll()'],['../classtvm_1_1tir_1_1ScheduleNode.html#a84ec742f6295f59390592a6d0d90a552',1,'tvm::tir::ScheduleNode::Unroll()'],['../namespacetvm_1_1script_1_1ir__builder_1_1tir.html#acd41556b0c4088d0f309ef5495aaebe3',1,'tvm::script::ir_builder::tir::Unroll()']]],
+  ['unroll',['Unroll',['../classtvm_1_1tir_1_1ScheduleNode.html#a84ec742f6295f59390592a6d0d90a552',1,'tvm::tir::ScheduleNode::Unroll()'],['../classtvm_1_1auto__scheduler_1_1State.html#aa68a9d2e226bae38a36e4be4af1d1ae4',1,'tvm::auto_scheduler::State::unroll()'],['../classtvm_1_1te_1_1Stage.html#af83ad8672660403504f472228b044b33',1,'tvm::te::Stage::unroll()'],['../namespacetvm_1_1script_1_1ir__builder_1_1tir.html#acd41556b0c4088d0f309ef5495aaebe3',1,'tvm::script::ir_builder::tir::Unroll()']]],
   ['unrollloop',['UnrollLoop',['../namespacetvm_1_1tir_1_1transform.html#ab2f279e91071fa96a1edb24fa004ea6a',1,'tvm::tir::transform']]],
   ['update',['Update',['../classtvm_1_1arith_1_1ConstIntBoundAnalyzer.html#a5ae0699196c4bbc754bbdd4c3a6c7ca7',1,'tvm::arith::ConstIntBoundAnalyzer::Update()'],['../classtvm_1_1arith_1_1ModularSetAnalyzer.html#a04156fac580981f3005af3b8e676720d',1,'tvm::arith::ModularSetAnalyzer::Update()'],['../classtvm_1_1arith_1_1RewriteSimplifier.html#a5e6752c0702dc2d3e4235797d9d3ac7b',1,'tvm::arith::RewriteSimplifier::Update()'],['../classtvm_1_1arith_1_1CanonicalSimplifier.html#a790c032e12c7d93e9e940 [...]
   ['update_5ffunc',['update_func',['../classtvm_1_1auto__scheduler_1_1PythonBasedModelNode.html#ade9364c152a36501d4f24fa4f0111519',1,'tvm::auto_scheduler::PythonBasedModelNode']]],
diff --git a/docs/reference/api/doxygen/search/all_18.js b/docs/reference/api/doxygen/search/all_18.js
index f65dce6fc7..783ee22565 100644
--- a/docs/reference/api/doxygen/search/all_18.js
+++ b/docs/reference/api/doxygen/search/all_18.js
@@ -35,7 +35,7 @@ var searchData=
   ['withframe',['WithFrame',['../classtvm_1_1script_1_1printer_1_1IRDocsifierNode.html#aeb321e859e30f7a3917a4ca8db71d472',1,'tvm::script::printer::IRDocsifierNode']]],
   ['withhost',['WithHost',['../classtvm_1_1Target.html#a509ce63995f082c80742ea5ca6ac112f',1,'tvm::Target']]],
   ['withoutattr',['WithoutAttr',['../namespacetvm.html#a7e2bc626db8be997b1562c79df3d9e11',1,'tvm']]],
-  ['workload',['Workload',['../classtvm_1_1meta__schedule_1_1Workload.html',1,'tvm::meta_schedule::Workload'],['../classtvm_1_1meta__schedule_1_1Workload.html#a21ccf9c956b82d50a2579f1c0f592fd0',1,'tvm::meta_schedule::Workload::Workload(IRModule mod)'],['../classtvm_1_1meta__schedule_1_1Workload.html#a8880877517679c82ae63520e28d5e1d8',1,'tvm::meta_schedule::Workload::Workload(IRModule mod, THashCode shash)'],['../classtvm_1_1meta__schedule_1_1TuningRecordNode.html#a42c87f1ec62dae6806c3fe9 [...]
+  ['workload',['Workload',['../classtvm_1_1meta__schedule_1_1Workload.html',1,'tvm::meta_schedule::Workload'],['../classtvm_1_1meta__schedule_1_1TuningRecordNode.html#a42c87f1ec62dae6806c3fe9629c5e7f0',1,'tvm::meta_schedule::TuningRecordNode::workload()'],['../classtvm_1_1meta__schedule_1_1Workload.html#a21ccf9c956b82d50a2579f1c0f592fd0',1,'tvm::meta_schedule::Workload::Workload(IRModule mod)'],['../classtvm_1_1meta__schedule_1_1Workload.html#a8880877517679c82ae63520e28d5e1d8',1,'tvm::me [...]
   ['workload_5fkey',['workload_key',['../classtvm_1_1auto__scheduler_1_1SearchTaskNode.html#a20045d677ba2bc5c5ce461e78543b3e2',1,'tvm::auto_scheduler::SearchTaskNode']]],
   ['workloadequal',['WorkloadEqual',['../structtvm_1_1meta__schedule_1_1WorkloadEqual.html',1,'tvm::meta_schedule']]],
   ['workloadhash',['WorkloadHash',['../structtvm_1_1meta__schedule_1_1WorkloadHash.html',1,'tvm::meta_schedule']]],
diff --git a/docs/reference/api/doxygen/search/all_5.js b/docs/reference/api/doxygen/search/all_5.js
index ff0761c300..3e365d9932 100644
--- a/docs/reference/api/doxygen/search/all_5.js
+++ b/docs/reference/api/doxygen/search/all_5.js
@@ -43,6 +43,7 @@ var searchData=
   ['dedup',['DeDup',['../namespacetvm_1_1relay.html#a1ecbcbe35c7abd82b9eabf94f6b797d2',1,'tvm::relay']]],
   ['default',['Default',['../classtvm_1_1DiagnosticContext.html#ab0a08b05d11230b5108086cd5118f488',1,'tvm::DiagnosticContext::Default()'],['../classtvm_1_1VirtualDevice.html#a73364da6471b4634fb14abf10ce42f3c',1,'tvm::VirtualDevice::Default()']]],
   ['default_2eh',['default.h',['../generic_2default_8h.html',1,'(Global Namespace)'],['../x86_2default_8h.html',1,'(Global Namespace)']]],
+  ['default_5fdevice_5ftype',['default_device_type',['../classtvm_1_1TargetKindNode.html#a0d66deaddc1ac8bfe3e39616df811b7e',1,'tvm::TargetKindNode']]],
   ['default_5fkeys',['default_keys',['../classtvm_1_1TargetKindNode.html#aa62e049ba158730d9ab88e4c0b173de9',1,'tvm::TargetKindNode']]],
   ['default_5fmax_5fcontinuous_5ferror',['DEFAULT_MAX_CONTINUOUS_ERROR',['../classtvm_1_1auto__scheduler_1_1ProgramMeasurerNode.html#a6600d5b819e6c7e9ef3f6c32c355e3db',1,'tvm::auto_scheduler::ProgramMeasurerNode']]],
   ['default_5fprimitive_5fvirtual_5fdevice',['default_primitive_virtual_device',['../classtvm_1_1CompilationConfigNode.html#abe4569cf32c57b710be99b50e7118876',1,'tvm::CompilationConfigNode']]],
@@ -97,7 +98,7 @@ var searchData=
   ['device_5findex',['device_index',['../structTVMGraphExecutorGraphAttr.html#ae55c2e6d56c07fc475c44d82ba1de012',1,'TVMGraphExecutorGraphAttr::device_index()'],['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#af91776ae1a16f3545bc4749599d62643',1,'tvm::runtime::vm::Instruction::device_index()']]],
   ['device_5fmetrics',['device_metrics',['../classtvm_1_1runtime_1_1profiling_1_1ReportNode.html#ababc1b17ad3a7f9bfe9a8006cc2c4cd0',1,'tvm::runtime::profiling::ReportNode']]],
   ['device_5fscope',['device_scope',['../namespacetvm_1_1tir_1_1attr.html#a36db026f638ad3d951c302796ddcae24',1,'tvm::tir::attr']]],
-  ['device_5ftype',['device_type',['../classtvm_1_1meta__schedule_1_1RunnerInputNode.html#a5879e387f788cfd90b5a62ef1e55011e',1,'tvm::meta_schedule::RunnerInputNode::device_type()'],['../classtvm_1_1TargetKindNode.html#a18459286d8d501892992a4209ad08652',1,'tvm::TargetKindNode::device_type()'],['../classtvm_1_1VirtualDeviceNode.html#a5e3f67045652bc27b937acf1ddc677a7',1,'tvm::VirtualDeviceNode::device_type()'],['../namespacetvm_1_1tir_1_1attr.html#a7e4e7cd47471a9089022214d63d24206',1,'tvm:: [...]
+  ['device_5ftype',['device_type',['../classtvm_1_1meta__schedule_1_1RunnerInputNode.html#a5879e387f788cfd90b5a62ef1e55011e',1,'tvm::meta_schedule::RunnerInputNode::device_type()'],['../classtvm_1_1VirtualDeviceNode.html#a5e3f67045652bc27b937acf1ddc677a7',1,'tvm::VirtualDeviceNode::device_type()'],['../namespacetvm_1_1tir_1_1attr.html#a7e4e7cd47471a9089022214d63d24206',1,'tvm::tir::attr::device_type()']]],
   ['deviceapi',['DeviceAPI',['../classtvm_1_1runtime_1_1DeviceAPI.html',1,'tvm::runtime']]],
   ['deviceattrkind',['DeviceAttrKind',['../namespacetvm_1_1runtime.html#a46fef1ca0ccc05473e9bb0a8c6b66619',1,'tvm::runtime']]],
   ['devicecopy',['DeviceCopy',['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#ad38748aeb7650b185d8548e491aa9da6',1,'tvm::runtime::vm::Instruction::DeviceCopy()'],['../namespacetvm_1_1runtime_1_1vm.html#a8d8d95ce8d629c7213f2f595917870ecaf695012a8c440065a5e913a682e77b5c',1,'tvm::runtime::vm::DeviceCopy()']]],
diff --git a/docs/reference/api/doxygen/search/all_8.js b/docs/reference/api/doxygen/search/all_8.js
index 8b12b866f2..c7aa4421b5 100644
--- a/docs/reference/api/doxygen/search/all_8.js
+++ b/docs/reference/api/doxygen/search/all_8.js
@@ -110,6 +110,7 @@ var searchData=
   ['getspan',['GetSpan',['../classtvm_1_1TypeReporterNode.html#a06af835a761aaa10627a88ac4b712a15',1,'tvm::TypeReporterNode']]],
   ['getsref',['GetSRef',['../classtvm_1_1tir_1_1ScheduleNode.html#a3b6d659b1a0a4b8175d7495afc3a791c',1,'tvm::tir::ScheduleNode::GetSRef(const BlockRV &amp;block_rv) const =0'],['../classtvm_1_1tir_1_1ScheduleNode.html#a08f7ed1ef1470fb1c9cfc272e14a1e32',1,'tvm::tir::ScheduleNode::GetSRef(const LoopRV &amp;loop_rv) const =0'],['../classtvm_1_1tir_1_1ScheduleNode.html#a34d50e4b429557302c5c6575bcc706d5',1,'tvm::tir::ScheduleNode::GetSRef(const StmtNode *stmt) const'],['../classtvm_1_1tir_1_1 [...]
   ['gettag',['GetTag',['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#a8b46d1eb3853555b6d3a85f2ef9c0868',1,'tvm::runtime::vm::Instruction::GetTag()'],['../namespacetvm_1_1runtime_1_1vm.html#a8d8d95ce8d629c7213f2f595917870ecabbfa589e414211911a8254ba7896b127',1,'tvm::runtime::vm::GetTag()']]],
+  ['gettargetdevicetype',['GetTargetDeviceType',['../classtvm_1_1TargetNode.html#a01c985da7b7451518db042094336a4b1',1,'tvm::TargetNode']]],
   ['gettargetproperty',['GetTargetProperty',['../classtvm_1_1runtime_1_1DeviceAPI.html#a8967810939aa24e17c37599c5014e50f',1,'tvm::runtime::DeviceAPI']]],
   ['gettopk',['GetTopK',['../classtvm_1_1meta__schedule_1_1DatabaseNode.html#a27a9519109fd970572be75bc277a1fb2',1,'tvm::meta_schedule::DatabaseNode::GetTopK()'],['../classtvm_1_1meta__schedule_1_1PyDatabaseNode.html#adebe3a6bfb55e5ce0807b97f12a6c39e',1,'tvm::meta_schedule::PyDatabaseNode::GetTopK()']]],
   ['gettype',['GetType',['../namespacetvm.html#a48fb9755f38ffcfcd03592a47ffbbd14',1,'tvm']]],
diff --git a/docs/reference/api/doxygen/search/all_c.js b/docs/reference/api/doxygen/search/all_c.js
index 2db73b53a4..a906c2c973 100644
--- a/docs/reference/api/doxygen/search/all_c.js
+++ b/docs/reference/api/doxygen/search/all_c.js
@@ -113,7 +113,7 @@ var searchData=
   ['khelp',['kHelp',['../namespacetvm.html#a908c332516a33fdc106cd9ee2ebc2b9ea244ce4b6c7f56eaa446d64fc2d068bbb',1,'tvm']]],
   ['kifthenelse',['kIfThenElse',['../classtvm_1_1script_1_1printer_1_1OperationDocNode.html#ab096bbea749ee994d75230cd8136afc2aa543c0b05baacb26eb09a6539da0215e',1,'tvm::script::printer::OperationDocNode']]],
   ['killregister',['KillRegister',['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#a8a0d04f104703b4b7932acba981401a9',1,'tvm::runtime::vm::Instruction::KillRegister()'],['../namespacetvm_1_1runtime_1_1vm.html#a8d8d95ce8d629c7213f2f595917870eca4dbbdb7762429945fba6aa5906b473ed',1,'tvm::runtime::vm::KillRegister()']]],
-  ['kind',['kind',['../classtvm_1_1TypeVarNode.html#afc08e151afef3c4644ba8d2cd796106a',1,'tvm::TypeVarNode::kind()'],['../classtvm_1_1GlobalTypeVarNode.html#a335e232894a68cc1e0ecb766bf4053c7',1,'tvm::GlobalTypeVarNode::kind()'],['../classtvm_1_1IncompleteTypeNode.html#ab5f37175c1fd0dbbbedc2edaa23d33dc',1,'tvm::IncompleteTypeNode::kind()'],['../classtvm_1_1runtime_1_1metadata_1_1MetadataArrayNode.html#a695a21a69be1e72b330abe32c685552e',1,'tvm::runtime::metadata::MetadataArrayNode::kind()' [...]
+  ['kind',['Kind',['../classtvm_1_1script_1_1printer_1_1OperationDocNode.html#ab096bbea749ee994d75230cd8136afc2',1,'tvm::script::printer::OperationDocNode::Kind()'],['../classtvm_1_1TypeVarNode.html#afc08e151afef3c4644ba8d2cd796106a',1,'tvm::TypeVarNode::kind()'],['../classtvm_1_1GlobalTypeVarNode.html#a335e232894a68cc1e0ecb766bf4053c7',1,'tvm::GlobalTypeVarNode::kind()'],['../classtvm_1_1IncompleteTypeNode.html#ab5f37175c1fd0dbbbedc2edaa23d33dc',1,'tvm::IncompleteTypeNode::kind()'],['.. [...]
   ['kindcheck',['KindCheck',['../namespacetvm_1_1relay.html#a9c09d2d83aa356218069b1def8046ee7',1,'tvm::relay']]],
   ['kinjective',['kInjective',['../namespacetvm_1_1relay.html#ab5f4d382bf1bee69c3e484ea6c837578a7f703d1ae163ba4e6bef88357a232e00',1,'tvm::relay::kInjective()'],['../namespacetvm_1_1topi.html#a29e22aa45900dad3b6f9f705bb1dc688',1,'tvm::topi::kInjective()']]],
   ['kinline',['kInline',['../namespacetvm_1_1relay_1_1attr.html#ad294262b6b1ca1b7bf3924a139f17562',1,'tvm::relay::attr::kInline()'],['../namespacetvm_1_1te.html#a7693a274748dadfa2eaa35f5ce9008a5a6472eda35fc70bd00e3ce3b3ce3047fc',1,'tvm::te::kInline()']]],
diff --git a/docs/reference/api/doxygen/search/all_e.js b/docs/reference/api/doxygen/search/all_e.js
index f6547a252d..8ce0e9bbc1 100644
--- a/docs/reference/api/doxygen/search/all_e.js
+++ b/docs/reference/api/doxygen/search/all_e.js
@@ -73,7 +73,7 @@ var searchData=
   ['matmulattrs',['MatmulAttrs',['../structtvm_1_1relay_1_1MatmulAttrs.html',1,'tvm::relay']]],
   ['matrix_5fset_5fdiag',['matrix_set_diag',['../namespacetvm_1_1topi.html#aead477c6c9d4f4589d22b8acff82040c',1,'tvm::topi']]],
   ['matrixsetdiagattrs',['MatrixSetDiagAttrs',['../structtvm_1_1relay_1_1MatrixSetDiagAttrs.html',1,'tvm::relay']]],
-  ['max',['Max',['../classtvm_1_1tir_1_1Max.html',1,'tvm::tir::Max'],['../classtvm_1_1arith_1_1IntSet.html#ac215840d3e9fb2817f1e5648e31317c5',1,'tvm::arith::IntSet::max()'],['../classtvm_1_1support_1_1LinearCongruentialEngine.html#a2c5ea87b1155aa7810e0beb3b69b955b',1,'tvm::support::LinearCongruentialEngine::max()'],['../classtvm_1_1tir_1_1Max.html#a7dff11b4dea01bfc7a03eacd077f0729',1,'tvm::tir::Max::Max()'],['../namespacetvm.html#a0df5ca82d2c566f628ebb2f1e84a3fcb',1,'tvm::max(PrimExpr a, [...]
+  ['max',['Max',['../classtvm_1_1tir_1_1Max.html',1,'tvm::tir::Max'],['../classtvm_1_1tir_1_1Max.html#a7dff11b4dea01bfc7a03eacd077f0729',1,'tvm::tir::Max::Max()'],['../classtvm_1_1arith_1_1IntSet.html#ac215840d3e9fb2817f1e5648e31317c5',1,'tvm::arith::IntSet::max()'],['../classtvm_1_1support_1_1LinearCongruentialEngine.html#a2c5ea87b1155aa7810e0beb3b69b955b',1,'tvm::support::LinearCongruentialEngine::max()'],['../namespacetvm.html#a0df5ca82d2c566f628ebb2f1e84a3fcb',1,'tvm::max(PrimExpr a, [...]
   ['max_5fcontinuous_5ferror',['max_continuous_error',['../classtvm_1_1auto__scheduler_1_1ProgramMeasurerNode.html#abdc38da91bcdf77be765c1e3d5af3648',1,'tvm::auto_scheduler::ProgramMeasurerNode']]],
   ['max_5fdisplacement',['max_displacement',['../structtvm_1_1relay_1_1CorrelationAttrs.html#ad1d16e2ba537736c8baee2553e1e32bf',1,'tvm::relay::CorrelationAttrs']]],
   ['max_5ffunctions',['max_functions',['../structTVMMutableFuncRegistry.html#a41745f8e0f73f8e4fb2074f5b154b49c',1,'TVMMutableFuncRegistry']]],
diff --git a/docs/reference/api/doxygen/search/all_f.js b/docs/reference/api/doxygen/search/all_f.js
index 5b92697f42..c990042e7a 100644
--- a/docs/reference/api/doxygen/search/all_f.js
+++ b/docs/reference/api/doxygen/search/all_f.js
@@ -3,7 +3,7 @@ var searchData=
   ['n_5ffft',['n_fft',['../structtvm_1_1relay_1_1StftAttrs.html#a4363adcb567c32e1b814ece68d321b3e',1,'tvm::relay::StftAttrs']]],
   ['n_5fparallel',['n_parallel',['../classtvm_1_1auto__scheduler_1_1ProgramBuilderNode.html#aaa056bbd3887751729ca207de8c6c792',1,'tvm::auto_scheduler::ProgramBuilderNode::n_parallel()'],['../classtvm_1_1auto__scheduler_1_1RPCRunnerNode.html#a33dc20205dc39f9b23a4d25939298923',1,'tvm::auto_scheduler::RPCRunnerNode::n_parallel()']]],
   ['n_5fsplit',['n_split',['../classtvm_1_1auto__scheduler_1_1FollowSplitStepNode.html#a365ea1b81b21e382ac350a66326f1b86',1,'tvm::auto_scheduler::FollowSplitStepNode']]],
-  ['name',['name',['../classtvm_1_1auto__scheduler_1_1IteratorNode.html#afb44a9dd3077356de0545daf7899b402',1,'tvm::auto_scheduler::IteratorNode::name()'],['../classtvm_1_1AttrFieldInfoNode.html#ad918a5375ed2f8db7557ba50f95e026b',1,'tvm::AttrFieldInfoNode::name()'],['../classtvm_1_1EnvFuncNode.html#a02d22636502da6754661d3698ea338ad',1,'tvm::EnvFuncNode::name()'],['../classtvm_1_1instrument_1_1PassInstrumentNode.html#abb8ed0c496c5f9d85b33903a10954cbc',1,'tvm::instrument::PassInstrumentNode [...]
+  ['name',['Name',['../classtvm_1_1script_1_1ir__builder_1_1IRBuilder.html#ace475c7a85ef508d912beba48ef4183a',1,'tvm::script::ir_builder::IRBuilder::Name()'],['../classtvm_1_1script_1_1ir__builder_1_1details_1_1Namer.html#a22da1264beaa8681bab998a5c597369f',1,'tvm::script::ir_builder::details::Namer::Name()'],['../classtvm_1_1auto__scheduler_1_1IteratorNode.html#afb44a9dd3077356de0545daf7899b402',1,'tvm::auto_scheduler::IteratorNode::name()'],['../classtvm_1_1AttrFieldInfoNode.html#ad918a [...]
   ['name_5f',['name_',['../classtvm_1_1runtime_1_1Registry.html#a4d8221b67729bafee4c2c5b424ed80ea',1,'tvm::runtime::Registry::name_()'],['../classtvm_1_1GenericFuncNode.html#ade1da360d3e314360fd5399b2d76d3a1',1,'tvm::GenericFuncNode::name_()']]],
   ['name_5fhint',['name_hint',['../classtvm_1_1ConstructorNode.html#ad94b373e9c4669fc5d472a9194483d66',1,'tvm::ConstructorNode::name_hint()'],['../classtvm_1_1GlobalVarNode.html#ab82974132026f07d89afcf409a2ca616',1,'tvm::GlobalVarNode::name_hint()'],['../structtvm_1_1ConstantInfoNode.html#a054eb75fb628ae03d5cb4c4c7f7e8846',1,'tvm::ConstantInfoNode::name_hint()'],['../classtvm_1_1TypeVarNode.html#a8f040892b1484a503c58870b4f0b70f6',1,'tvm::TypeVarNode::name_hint()'],['../classtvm_1_1Global [...]
   ['name_5fsupply_2eh',['name_supply.h',['../name__supply_8h.html',1,'']]],
diff --git a/docs/reference/api/doxygen/search/functions_12.js b/docs/reference/api/doxygen/search/functions_12.js
index 73daabc016..4ca8e8a36b 100644
--- a/docs/reference/api/doxygen/search/functions_12.js
+++ b/docs/reference/api/doxygen/search/functions_12.js
@@ -99,7 +99,7 @@ var searchData=
   ['rewritetensorize',['RewriteTensorize',['../classtvm_1_1meta__schedule_1_1Postproc.html#a95db036cfced4c2575367a26a41498ff',1,'tvm::meta_schedule::Postproc']]],
   ['rewriteunboundblock',['RewriteUnboundBlock',['../classtvm_1_1meta__schedule_1_1Postproc.html#a1836b2278bc24fdc227c490896d92980',1,'tvm::meta_schedule::Postproc']]],
   ['rewriteunsafeselect',['RewriteUnsafeSelect',['../namespacetvm_1_1tir_1_1transform.html#a4fe43327c4454dd05b6e925577443f49',1,'tvm::tir::transform']]],
-  ['rfactor',['rfactor',['../classtvm_1_1auto__scheduler_1_1State.html#a21c27b06d439267f8b981fa05c5f48a0',1,'tvm::auto_scheduler::State::rfactor()'],['../classtvm_1_1te_1_1Schedule.html#a34ae85add41bbed0140726d024d08862',1,'tvm::te::Schedule::rfactor()'],['../classtvm_1_1tir_1_1ScheduleNode.html#ab185c8eac1065290d84d58e7f4617232',1,'tvm::tir::ScheduleNode::RFactor()']]],
+  ['rfactor',['RFactor',['../classtvm_1_1tir_1_1ScheduleNode.html#ab185c8eac1065290d84d58e7f4617232',1,'tvm::tir::ScheduleNode::RFactor()'],['../classtvm_1_1auto__scheduler_1_1State.html#a21c27b06d439267f8b981fa05c5f48a0',1,'tvm::auto_scheduler::State::rfactor()'],['../classtvm_1_1te_1_1Schedule.html#a34ae85add41bbed0140726d024d08862',1,'tvm::te::Schedule::rfactor()']]],
   ['rfactorstep',['RfactorStep',['../classtvm_1_1auto__scheduler_1_1RfactorStep.html#a26e6f85b55307f18fab4469e3bd4be0c',1,'tvm::auto_scheduler::RfactorStep::RfactorStep(int stage_id, int iter_id, int factor_iter_id)'],['../classtvm_1_1auto__scheduler_1_1RfactorStep.html#a95575c21441177634178245ab562cb4f',1,'tvm::auto_scheduler::RfactorStep::RfactorStep(dmlc::JSONReader *reader)']]],
   ['right_5fshift',['right_shift',['../namespacetvm.html#ae8ecc0382685a855187bede0c97d93e6',1,'tvm::right_shift(PrimExpr a, PrimExpr b, Span span=Span())'],['../namespacetvm.html#af49dde9dfdeea62e8ad3a6d8db53de0b',1,'tvm::right_shift(const PrimExpr &amp;a, int b, Span span=Span())'],['../namespacetvm.html#a98ff4361d0a24570f8dc32d03cde972a',1,'tvm::right_shift(int a, const PrimExpr &amp;b, Span span=Span())'],['../namespacetvm_1_1topi.html#a9673b9caffb46404b566c3f04a492dfe',1,'tvm::topi:: [...]
   ['rocblas_5fbatch_5fmatmul',['rocblas_batch_matmul',['../namespacetvm_1_1topi_1_1contrib.html#abf1113dd429e1285752b48f62fe12848',1,'tvm::topi::contrib']]],
diff --git a/docs/reference/api/doxygen/search/functions_13.js b/docs/reference/api/doxygen/search/functions_13.js
index e9da775751..be7879d4a3 100644
--- a/docs/reference/api/doxygen/search/functions_13.js
+++ b/docs/reference/api/doxygen/search/functions_13.js
@@ -68,8 +68,8 @@ var searchData=
   ['set_5fconfig',['set_config',['../classtvm_1_1TargetTagRegEntry.html#a3c1b66885a103360f56a17ef1e4dde2e',1,'tvm::TargetTagRegEntry']]],
   ['set_5fcreator',['set_creator',['../classtvm_1_1ReflectionVTable_1_1Registry.html#a33948eae2c61e1c80c637f08b516594a',1,'tvm::ReflectionVTable::Registry']]],
   ['set_5fdefault',['set_default',['../structtvm_1_1detail_1_1AttrNopEntry.html#a370e92bafbada9ba805a52e72881f98b',1,'tvm::detail::AttrNopEntry::set_default()'],['../structtvm_1_1detail_1_1AttrInitEntry.html#ae6f6e6264a5b6373b2daada1f55a1dca',1,'tvm::detail::AttrInitEntry::set_default()'],['../classtvm_1_1detail_1_1AttrDocEntry.html#a2a0d680fbaaef688f3ffb9e5d897e417',1,'tvm::detail::AttrDocEntry::set_default()'],['../structtvm_1_1detail_1_1AttrTriggerNonDefaultEntry.html#ae88a65b8d90a7c5 [...]
+  ['set_5fdefault_5fdevice_5ftype',['set_default_device_type',['../classtvm_1_1TargetKindRegEntry.html#aa34789ae275e36dcd6696aa3881bbc92',1,'tvm::TargetKindRegEntry']]],
   ['set_5fdefault_5fkeys',['set_default_keys',['../classtvm_1_1TargetKindRegEntry.html#a2995c32e12246e892f7f4cb621a2819c',1,'tvm::TargetKindRegEntry']]],
-  ['set_5fdevice_5ftype',['set_device_type',['../classtvm_1_1TargetKindRegEntry.html#ae3ce5349493f402b82e755a0a180bd9a',1,'tvm::TargetKindRegEntry']]],
   ['set_5fdispatch',['set_dispatch',['../classtvm_1_1NodeFunctor_3_01R_07const_01ObjectRef_01_6n_00_01Args_8_8_8_08_4.html#a2fcc19e5151e9b9e56cafc76231b29fd',1,'tvm::NodeFunctor&lt; R(const ObjectRef &amp;n, Args...)&gt;::set_dispatch()'],['../classtvm_1_1script_1_1printer_1_1TracedObjectFunctor.html#a39e23af093ba0ee9dab17de86b6fa58e',1,'tvm::script::printer::TracedObjectFunctor::set_dispatch(String token, uint32_t type_index, runtime::PackedFunc f)'],['../classtvm_1_1script_1_1printer_1 [...]
   ['set_5fis_5fpure',['set_is_pure',['../classtvm_1_1tir_1_1InstructionKindRegEntry.html#ade332453b008e4fce49a3e3ebb4721c5',1,'tvm::tir::InstructionKindRegEntry']]],
   ['set_5flower_5fbound',['set_lower_bound',['../structtvm_1_1detail_1_1AttrNopEntry.html#a36da34fc54009d63283d07e9d41657f7',1,'tvm::detail::AttrNopEntry::set_lower_bound()'],['../structtvm_1_1detail_1_1AttrInitEntry.html#a5608a2a457a397bf11f2be2776ec0653',1,'tvm::detail::AttrInitEntry::set_lower_bound()'],['../classtvm_1_1detail_1_1AttrDocEntry.html#a201e9d6c937d2f444d91fcc8185f8309',1,'tvm::detail::AttrDocEntry::set_lower_bound()'],['../structtvm_1_1detail_1_1AttrTriggerNonDefaultEntry [...]
@@ -108,7 +108,7 @@ var searchData=
   ['setvalue_3c_20uint64_5ft_20_3e',['SetValue&lt; uint64_t &gt;',['../namespacetvm_1_1detail.html#acb3382242cbf538f64edae13e4ec5a84',1,'tvm::detail']]],
   ['shallowcopy',['ShallowCopy',['../classtvm_1_1IRModuleNode.html#a86bbdc4b857ce5958a2b5f29e1d6fcb6',1,'tvm::IRModuleNode']]],
   ['shallowcopyirmodule',['ShallowCopyIRModule',['../classtvm_1_1IRModule.html#aea8b821cf92cf525bd87bf15f5d31889',1,'tvm::IRModule']]],
-  ['shape',['Shape',['../classtvm_1_1runtime_1_1NDArray.html#ad273c7bc59b73fb026fd64fc764cbebc',1,'tvm::runtime::NDArray::Shape()'],['../classtvm_1_1runtime_1_1metadata_1_1TensorInfoNode.html#a5ddcd966b82c4df89084dbdf92d3108e',1,'tvm::runtime::metadata::TensorInfoNode::shape()'],['../namespacetvm_1_1topi.html#af30c02f3a3f37c7963b3af60fb9c72a1',1,'tvm::topi::shape()']]],
+  ['shape',['shape',['../classtvm_1_1runtime_1_1metadata_1_1TensorInfoNode.html#a5ddcd966b82c4df89084dbdf92d3108e',1,'tvm::runtime::metadata::TensorInfoNode::shape()'],['../classtvm_1_1runtime_1_1NDArray.html#ad273c7bc59b73fb026fd64fc764cbebc',1,'tvm::runtime::NDArray::Shape()'],['../namespacetvm_1_1topi.html#af30c02f3a3f37c7963b3af60fb9c72a1',1,'tvm::topi::shape()']]],
   ['shapediv',['shapediv',['../namespacetvm.html#a15f25703cfce73c75cb4cd33c74ea8f0',1,'tvm']]],
   ['shapeindex',['ShapeIndex',['../classtvm_1_1runtime_1_1DataType.html#a04f0e069017af3f0da47bc0c1fd80916',1,'tvm::runtime::DataType']]],
   ['shapeof',['ShapeOf',['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#a5f278c637580946bc06b020f5852e44a',1,'tvm::runtime::vm::Instruction']]],
@@ -136,7 +136,7 @@ var searchData=
   ['singlepoint',['SinglePoint',['../classtvm_1_1arith_1_1IntSet.html#a58aeb0d34656b1b43ac2532e4dfa12ed',1,'tvm::arith::IntSet']]],
   ['singleton',['Singleton',['../classtvm_1_1te_1_1Singleton.html#a94450b853dcd5e9865546d8c8fe351a1',1,'tvm::te::Singleton']]],
   ['sinh',['sinh',['../namespacetvm.html#ad828bc801c73df761c58d9f8877d52ee',1,'tvm::sinh()'],['../namespacetvm_1_1topi.html#af9694f5470ba2cabc19866be3b00fe8d',1,'tvm::topi::sinh()']]],
-  ['size',['size',['../classtvm_1_1runtime_1_1ADT.html#af51613add20f67643684b1c7fdd5569a',1,'tvm::runtime::ADT::size()'],['../classtvm_1_1runtime_1_1ArrayNode.html#a3e88cee6eb31d0e495f7debd94b7573d',1,'tvm::runtime::ArrayNode::size()'],['../classtvm_1_1runtime_1_1Array.html#aed6387e67d18b9d5ad18f510fd600a25',1,'tvm::runtime::Array::size()'],['../classtvm_1_1runtime_1_1MapNode.html#a5c0c770f7667f911aa8bec879e3ac214',1,'tvm::runtime::MapNode::size()'],['../classtvm_1_1runtime_1_1Map.html#a [...]
+  ['size',['Size',['../classtvm_1_1TensorTypeNode.html#a1f08dac86ae8aea81d058ef64cfd38b4',1,'tvm::TensorTypeNode::Size()'],['../classtvm_1_1meta__schedule_1_1DatabaseNode.html#aae5b9ab9f7e497654b90c23a2159a5cc',1,'tvm::meta_schedule::DatabaseNode::Size()'],['../classtvm_1_1meta__schedule_1_1PyDatabaseNode.html#a36817d04978253571fef7d01427ce9c0',1,'tvm::meta_schedule::PyDatabaseNode::Size()'],['../classtvm_1_1runtime_1_1micro__rpc_1_1FrameBuffer.html#ae395a0f1c6e79e825aa7a244c74a5d7b',1,' [...]
   ['sizevar',['SizeVar',['../classtvm_1_1tir_1_1SizeVar.html#ac470249315d9e395ad581d35dd5dcb05',1,'tvm::tir::SizeVar::SizeVar(ObjectPtr&lt; Object &gt; n)'],['../classtvm_1_1tir_1_1SizeVar.html#a0f8cb8a92feb96343939d223db90f7cd',1,'tvm::tir::SizeVar::SizeVar(String name_hint=&quot;s&quot;, DataType t=DataType::Int(32), Span span=Span())']]],
   ['skipassert',['SkipAssert',['../namespacetvm_1_1tir_1_1transform.html#a6fdd5910b00af823071dcdddd21cd2d3',1,'tvm::tir::transform']]],
   ['slice',['Slice',['../classtvm_1_1te_1_1Tensor_1_1Slice.html#ab314819e8bcca6421e9a4f33e48578c3',1,'tvm::te::Tensor::Slice']]],
@@ -179,7 +179,7 @@ var searchData=
   ['startmessage',['StartMessage',['../classtvm_1_1runtime_1_1micro__rpc_1_1Session.html#acd512b977c6dd888f90c4fd6d2b9500f',1,'tvm::runtime::micro_rpc::Session']]],
   ['startpacket',['StartPacket',['../classtvm_1_1runtime_1_1micro__rpc_1_1Framer.html#ade10d3bd3a26e3b7af881ae134e9a998',1,'tvm::runtime::micro_rpc::Framer']]],
   ['startsession',['StartSession',['../classtvm_1_1runtime_1_1micro__rpc_1_1Session.html#a15d3f9ecb8b22bf2d330f6f0a16c5239',1,'tvm::runtime::micro_rpc::Session']]],
-  ['state',['state',['../classtvm_1_1tir_1_1ScheduleNode.html#abb3612c2598fa2d3ee0e6e3fc3de8a26',1,'tvm::tir::ScheduleNode::state()'],['../classtvm_1_1auto__scheduler_1_1State.html#a9e8198b1f51b42cfbbee4b9f42160749',1,'tvm::auto_scheduler::State::State()']]],
+  ['state',['State',['../classtvm_1_1auto__scheduler_1_1State.html#a9e8198b1f51b42cfbbee4b9f42160749',1,'tvm::auto_scheduler::State::State()'],['../classtvm_1_1tir_1_1ScheduleNode.html#abb3612c2598fa2d3ee0e6e3fc3de8a26',1,'tvm::tir::ScheduleNode::state()']]],
   ['stats',['Stats',['../classtvm_1_1runtime_1_1vm_1_1Executable.html#a5445bd71aa14ec97552fa099dc3bd787',1,'tvm::runtime::vm::Executable']]],
   ['stepapplytoschedule',['StepApplyToSchedule',['../namespacetvm_1_1auto__scheduler.html#ac58f7548a94b92f801b2b9a6f65bd785',1,'tvm::auto_scheduler']]],
   ['stepapplytostate',['StepApplyToState',['../namespacetvm_1_1auto__scheduler.html#a6909bc5a99d1cc8372201e9392717832',1,'tvm::auto_scheduler']]],
diff --git a/docs/reference/api/doxygen/search/functions_14.js b/docs/reference/api/doxygen/search/functions_14.js
index 3c3bbbe90a..a1b12d1f8d 100644
--- a/docs/reference/api/doxygen/search/functions_14.js
+++ b/docs/reference/api/doxygen/search/functions_14.js
@@ -20,7 +20,7 @@ var searchData=
   ['tensorintrin',['TensorIntrin',['../classtvm_1_1te_1_1TensorIntrin.html#a4ff4237911227bf80b3076906dc3b7ea',1,'tvm::te::TensorIntrin::TensorIntrin()'],['../classtvm_1_1tir_1_1TensorIntrin.html#af5a94c7b098b56056e02eaf187e6871c',1,'tvm::tir::TensorIntrin::TensorIntrin()']]],
   ['tensorintrincall',['TensorIntrinCall',['../classtvm_1_1te_1_1TensorIntrinCall.html#a91c10074ce6babeba78fe72a0aab4b52',1,'tvm::te::TensorIntrinCall']]],
   ['tensorintrinnode',['TensorIntrinNode',['../classtvm_1_1te_1_1TensorIntrinNode.html#ad59e7f2b881fc798a8c64fd3959f929c',1,'tvm::te::TensorIntrinNode']]],
-  ['tensorize',['Tensorize',['../classtvm_1_1tir_1_1ScheduleNode.html#ae3794a03b566e5b1721b44c564992975',1,'tvm::tir::ScheduleNode::Tensorize(const LoopRV &amp;loop_rv, const String &amp;intrin)=0'],['../classtvm_1_1tir_1_1ScheduleNode.html#aaca1621ab9c3db0ddd04ac57de79d37f',1,'tvm::tir::ScheduleNode::Tensorize(const BlockRV &amp;block_rv, const String &amp;intrin)=0'],['../classtvm_1_1te_1_1Stage.html#ab5fe485e1d730c36b096c060b8d2ef9d',1,'tvm::te::Stage::tensorize()']]],
+  ['tensorize',['tensorize',['../classtvm_1_1te_1_1Stage.html#ab5fe485e1d730c36b096c060b8d2ef9d',1,'tvm::te::Stage::tensorize()'],['../classtvm_1_1tir_1_1ScheduleNode.html#ae3794a03b566e5b1721b44c564992975',1,'tvm::tir::ScheduleNode::Tensorize(const LoopRV &amp;loop_rv, const String &amp;intrin)=0'],['../classtvm_1_1tir_1_1ScheduleNode.html#aaca1621ab9c3db0ddd04ac57de79d37f',1,'tvm::tir::ScheduleNode::Tensorize(const BlockRV &amp;block_rv, const String &amp;intrin)=0']]],
   ['tensornode',['TensorNode',['../classtvm_1_1te_1_1TensorNode.html#a153569448cb1bf9d2924d35639c3b8b8',1,'tvm::te::TensorNode']]],
   ['tensortype',['TensorType',['../classtvm_1_1TensorType.html#ade4460e9b02b42757a83808dec478b87',1,'tvm::TensorType']]],
   ['terminalrenderer',['TerminalRenderer',['../namespacetvm.html#a69a0e3f559d3a3b98d42701117d93ed0',1,'tvm']]],
@@ -56,7 +56,7 @@ var searchData=
   ['totupletype',['ToTupleType',['../namespacetvm_1_1relay.html#ae6757a008816e31cce4109e8dfc2bc16',1,'tvm::relay']]],
   ['touchtask',['TouchTask',['../classtvm_1_1meta__schedule_1_1TaskSchedulerNode.html#af6fa276674945d3432c129bdf9cea599',1,'tvm::meta_schedule::TaskSchedulerNode::TouchTask()'],['../classtvm_1_1meta__schedule_1_1PyTaskSchedulerNode.html#a7de09f81c8aceb580b43107f266e6b40',1,'tvm::meta_schedule::PyTaskSchedulerNode::TouchTask()']]],
   ['tovar',['ToVar',['../classtvm_1_1tir_1_1AnyNode.html#ae01ebbba2378afb6509a22de97f8fb30',1,'tvm::tir::AnyNode']]],
-  ['trace',['trace',['../classtvm_1_1tir_1_1ScheduleNode.html#a953bca4123b5a758adfdcd65634a5f3b',1,'tvm::tir::ScheduleNode::trace()'],['../classtvm_1_1tir_1_1Trace.html#a8e09abffd0b9b1afac7b832cf16c142d',1,'tvm::tir::Trace::Trace()'],['../classtvm_1_1tir_1_1Trace.html#af79bccf1bde25efea387bb1b82dacaa6',1,'tvm::tir::Trace::Trace(Array&lt; Instruction &gt; insts, Map&lt; Instruction, ObjectRef &gt; decisions)']]],
+  ['trace',['Trace',['../classtvm_1_1tir_1_1Trace.html#a8e09abffd0b9b1afac7b832cf16c142d',1,'tvm::tir::Trace::Trace()'],['../classtvm_1_1tir_1_1Trace.html#af79bccf1bde25efea387bb1b82dacaa6',1,'tvm::tir::Trace::Trace(Array&lt; Instruction &gt; insts, Map&lt; Instruction, ObjectRef &gt; decisions)'],['../classtvm_1_1tir_1_1ScheduleNode.html#a953bca4123b5a758adfdcd65634a5f3b',1,'tvm::tir::ScheduleNode::trace()']]],
   ['traced',['Traced',['../classtvm_1_1tir_1_1Schedule.html#a295d432b86621101f67b20fadb367b91',1,'tvm::tir::Schedule']]],
   ['tracedarray',['TracedArray',['../classtvm_1_1TracedArray.html#a7b1ab76aea02b3357239cbe6b521bc39',1,'tvm::TracedArray']]],
   ['tracedarrayiterator',['TracedArrayIterator',['../classtvm_1_1TracedArrayIterator.html#a684a4dfb9a548bff64120cf40822a3b9',1,'tvm::TracedArrayIterator']]],
diff --git a/docs/reference/api/doxygen/search/functions_15.js b/docs/reference/api/doxygen/search/functions_15.js
index ee28582590..249d84d88a 100644
--- a/docs/reference/api/doxygen/search/functions_15.js
+++ b/docs/reference/api/doxygen/search/functions_15.js
@@ -22,7 +22,7 @@ var searchData=
   ['unknownattributeaccesspathnode',['UnknownAttributeAccessPathNode',['../classtvm_1_1UnknownAttributeAccessPathNode.html#a1882e9e591466a2785acc761dc63d56e',1,'tvm::UnknownAttributeAccessPathNode']]],
   ['unmatchedcases',['UnmatchedCases',['../namespacetvm_1_1relay.html#aa3a8cace40f8056fd6412f39c3eaa605',1,'tvm::relay']]],
   ['unravel_5findex',['unravel_index',['../namespacetvm_1_1topi.html#a8811a02532bbe3047986bf1a8449ac0e',1,'tvm::topi']]],
-  ['unroll',['unroll',['../classtvm_1_1auto__scheduler_1_1State.html#aa68a9d2e226bae38a36e4be4af1d1ae4',1,'tvm::auto_scheduler::State::unroll()'],['../classtvm_1_1te_1_1Stage.html#af83ad8672660403504f472228b044b33',1,'tvm::te::Stage::unroll()'],['../classtvm_1_1tir_1_1ScheduleNode.html#a84ec742f6295f59390592a6d0d90a552',1,'tvm::tir::ScheduleNode::Unroll()'],['../namespacetvm_1_1script_1_1ir__builder_1_1tir.html#acd41556b0c4088d0f309ef5495aaebe3',1,'tvm::script::ir_builder::tir::Unroll()']]],
+  ['unroll',['Unroll',['../classtvm_1_1tir_1_1ScheduleNode.html#a84ec742f6295f59390592a6d0d90a552',1,'tvm::tir::ScheduleNode::Unroll()'],['../classtvm_1_1auto__scheduler_1_1State.html#aa68a9d2e226bae38a36e4be4af1d1ae4',1,'tvm::auto_scheduler::State::unroll()'],['../classtvm_1_1te_1_1Stage.html#af83ad8672660403504f472228b044b33',1,'tvm::te::Stage::unroll()'],['../namespacetvm_1_1script_1_1ir__builder_1_1tir.html#acd41556b0c4088d0f309ef5495aaebe3',1,'tvm::script::ir_builder::tir::Unroll()']]],
   ['unrollloop',['UnrollLoop',['../namespacetvm_1_1tir_1_1transform.html#ab2f279e91071fa96a1edb24fa004ea6a',1,'tvm::tir::transform']]],
   ['update',['Update',['../classtvm_1_1arith_1_1ConstIntBoundAnalyzer.html#a5ae0699196c4bbc754bbdd4c3a6c7ca7',1,'tvm::arith::ConstIntBoundAnalyzer::Update()'],['../classtvm_1_1arith_1_1ModularSetAnalyzer.html#a04156fac580981f3005af3b8e676720d',1,'tvm::arith::ModularSetAnalyzer::Update()'],['../classtvm_1_1arith_1_1RewriteSimplifier.html#a5e6752c0702dc2d3e4235797d9d3ac7b',1,'tvm::arith::RewriteSimplifier::Update()'],['../classtvm_1_1arith_1_1CanonicalSimplifier.html#a790c032e12c7d93e9e940 [...]
   ['updatecostmodel',['UpdateCostModel',['../classtvm_1_1meta__schedule_1_1MeasureCallback.html#afdf5503c6e6f53767de132d91a7b53f9',1,'tvm::meta_schedule::MeasureCallback']]],
diff --git a/docs/reference/api/doxygen/search/functions_7.js b/docs/reference/api/doxygen/search/functions_7.js
index 2c9a035665..fe034100d1 100644
--- a/docs/reference/api/doxygen/search/functions_7.js
+++ b/docs/reference/api/doxygen/search/functions_7.js
@@ -100,6 +100,7 @@ var searchData=
   ['getspan',['GetSpan',['../classtvm_1_1TypeReporterNode.html#a06af835a761aaa10627a88ac4b712a15',1,'tvm::TypeReporterNode']]],
   ['getsref',['GetSRef',['../classtvm_1_1tir_1_1ScheduleNode.html#a3b6d659b1a0a4b8175d7495afc3a791c',1,'tvm::tir::ScheduleNode::GetSRef(const BlockRV &amp;block_rv) const =0'],['../classtvm_1_1tir_1_1ScheduleNode.html#a08f7ed1ef1470fb1c9cfc272e14a1e32',1,'tvm::tir::ScheduleNode::GetSRef(const LoopRV &amp;loop_rv) const =0'],['../classtvm_1_1tir_1_1ScheduleNode.html#a34d50e4b429557302c5c6575bcc706d5',1,'tvm::tir::ScheduleNode::GetSRef(const StmtNode *stmt) const'],['../classtvm_1_1tir_1_1 [...]
   ['gettag',['GetTag',['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#a8b46d1eb3853555b6d3a85f2ef9c0868',1,'tvm::runtime::vm::Instruction']]],
+  ['gettargetdevicetype',['GetTargetDeviceType',['../classtvm_1_1TargetNode.html#a01c985da7b7451518db042094336a4b1',1,'tvm::TargetNode']]],
   ['gettargetproperty',['GetTargetProperty',['../classtvm_1_1runtime_1_1DeviceAPI.html#a8967810939aa24e17c37599c5014e50f',1,'tvm::runtime::DeviceAPI']]],
   ['gettopk',['GetTopK',['../classtvm_1_1meta__schedule_1_1DatabaseNode.html#a27a9519109fd970572be75bc277a1fb2',1,'tvm::meta_schedule::DatabaseNode::GetTopK()'],['../classtvm_1_1meta__schedule_1_1PyDatabaseNode.html#adebe3a6bfb55e5ce0807b97f12a6c39e',1,'tvm::meta_schedule::PyDatabaseNode::GetTopK()']]],
   ['gettype',['GetType',['../namespacetvm.html#a48fb9755f38ffcfcd03592a47ffbbd14',1,'tvm']]],
diff --git a/docs/reference/api/doxygen/search/functions_d.js b/docs/reference/api/doxygen/search/functions_d.js
index 0e4e05ff5a..c62c0111db 100644
--- a/docs/reference/api/doxygen/search/functions_d.js
+++ b/docs/reference/api/doxygen/search/functions_d.js
@@ -37,7 +37,7 @@ var searchData=
   ['matchrange',['MatchRange',['../classtvm_1_1arith_1_1IntSet.html#a2f2999336fbba4f436b66bdddce5c57a',1,'tvm::arith::IntSet']]],
   ['matmul',['matmul',['../namespacetvm_1_1topi.html#adae7dcb7e951109ba72192202d182994',1,'tvm::topi']]],
   ['matrix_5fset_5fdiag',['matrix_set_diag',['../namespacetvm_1_1topi.html#aead477c6c9d4f4589d22b8acff82040c',1,'tvm::topi']]],
-  ['max',['max',['../classtvm_1_1arith_1_1IntSet.html#ac215840d3e9fb2817f1e5648e31317c5',1,'tvm::arith::IntSet::max()'],['../classtvm_1_1support_1_1LinearCongruentialEngine.html#a2c5ea87b1155aa7810e0beb3b69b955b',1,'tvm::support::LinearCongruentialEngine::max()'],['../classtvm_1_1tir_1_1Max.html#a7dff11b4dea01bfc7a03eacd077f0729',1,'tvm::tir::Max::Max()'],['../namespacetvm.html#a0df5ca82d2c566f628ebb2f1e84a3fcb',1,'tvm::max(PrimExpr a, PrimExpr b, Span span=Span())'],['../namespacetvm.ht [...]
+  ['max',['Max',['../classtvm_1_1tir_1_1Max.html#a7dff11b4dea01bfc7a03eacd077f0729',1,'tvm::tir::Max::Max()'],['../classtvm_1_1arith_1_1IntSet.html#ac215840d3e9fb2817f1e5648e31317c5',1,'tvm::arith::IntSet::max()'],['../classtvm_1_1support_1_1LinearCongruentialEngine.html#a2c5ea87b1155aa7810e0beb3b69b955b',1,'tvm::support::LinearCongruentialEngine::max()'],['../namespacetvm.html#a0df5ca82d2c566f628ebb2f1e84a3fcb',1,'tvm::max(PrimExpr a, PrimExpr b, Span span=Span())'],['../namespacetvm.ht [...]
   ['max_5fvalue',['max_value',['../namespacetvm.html#a4f1398024c0af23699447ef910b654b8',1,'tvm']]],
   ['maxconcurrency',['MaxConcurrency',['../namespacetvm_1_1runtime_1_1threading.html#af8c1c389a74e67bcc3680555288219f8',1,'tvm::runtime::threading']]],
   ['maximum',['maximum',['../namespacetvm_1_1topi.html#afd64bc3e27dfc97002d3add5d7ce4174',1,'tvm::topi::maximum(const tvm::PrimExpr &amp;a, const tvm::PrimExpr &amp;b)'],['../namespacetvm_1_1topi.html#a5338e9297463bc745027fca67daa2ebb',1,'tvm::topi::maximum(const tvm::te::Tensor &amp;A, const tvm::te::Tensor &amp;B, std::string name=&quot;T_&quot; &quot;maximum&quot;, std::string tag=kBroadcast)'],['../namespacetvm_1_1topi.html#a4076a8d6a2b243c548d741e9f6bcfe69',1,'tvm::topi::maximum(con [...]
diff --git a/docs/reference/api/doxygen/search/functions_e.js b/docs/reference/api/doxygen/search/functions_e.js
index dc588bb6e8..5b129aaf8f 100644
--- a/docs/reference/api/doxygen/search/functions_e.js
+++ b/docs/reference/api/doxygen/search/functions_e.js
@@ -1,6 +1,6 @@
 var searchData=
 [
-  ['name',['name',['../classtvm_1_1runtime_1_1metadata_1_1TensorInfoNode.html#a2b889fc061097f72dcab73f090574c65',1,'tvm::runtime::metadata::TensorInfoNode::name()'],['../classtvm_1_1tir_1_1LayoutAxis.html#a0e1b9f9e05162787ccc40a7daa54ac2e',1,'tvm::tir::LayoutAxis::name()'],['../classtvm_1_1tir_1_1Layout.html#a8beb5a8b6259d9c0e916185e9e5eee52',1,'tvm::tir::Layout::name()'],['../classtvm_1_1script_1_1ir__builder_1_1IRBuilder.html#ace475c7a85ef508d912beba48ef4183a',1,'tvm::script::ir_builde [...]
+  ['name',['Name',['../classtvm_1_1script_1_1ir__builder_1_1IRBuilder.html#ace475c7a85ef508d912beba48ef4183a',1,'tvm::script::ir_builder::IRBuilder::Name()'],['../classtvm_1_1script_1_1ir__builder_1_1details_1_1Namer.html#a22da1264beaa8681bab998a5c597369f',1,'tvm::script::ir_builder::details::Namer::Name()'],['../classtvm_1_1runtime_1_1metadata_1_1TensorInfoNode.html#a2b889fc061097f72dcab73f090574c65',1,'tvm::runtime::metadata::TensorInfoNode::name()'],['../classtvm_1_1tir_1_1LayoutAxis. [...]
   ['name_5fhint',['name_hint',['../classtvm_1_1relay_1_1VarPatternNode.html#aa2e698d6bb7d29f6db7aa07070029d42',1,'tvm::relay::VarPatternNode::name_hint()'],['../classtvm_1_1relay_1_1VarNode.html#a8e10dd556c96138aa5320df2fe947199',1,'tvm::relay::VarNode::name_hint()'],['../classtvm_1_1runtime_1_1metadata_1_1ConstantInfoMetadataNode.html#aae9679a8d5d63547239addeb13ba9c17',1,'tvm::runtime::metadata::ConstantInfoMetadataNode::name_hint()']]],
   ['namesupply',['NameSupply',['../classtvm_1_1NameSupply.html#ad6c6c3d3a4698ee50ab2fd2062566820',1,'tvm::NameSupply']]],
   ['namesupplynode',['NameSupplyNode',['../classtvm_1_1NameSupplyNode.html#aba5bb3284648f9f8740eb76500499a12',1,'tvm::NameSupplyNode::NameSupplyNode()=default'],['../classtvm_1_1NameSupplyNode.html#a06784e30bad359af91cb7cd65ec79ca7',1,'tvm::NameSupplyNode::NameSupplyNode(const String &amp;prefix, std::unordered_map&lt; std::string, int &gt; name_map)']]],
diff --git a/docs/reference/api/doxygen/search/variables_4.js b/docs/reference/api/doxygen/search/variables_4.js
index d16efa9f9d..a8b142a0b9 100644
--- a/docs/reference/api/doxygen/search/variables_4.js
+++ b/docs/reference/api/doxygen/search/variables_4.js
@@ -13,6 +13,7 @@ var searchData=
   ['debug_5fmask',['debug_mask',['../classtvm_1_1tir_1_1ScheduleStateNode.html#a33ab5d3859aaf065c35e561d17b3ca48',1,'tvm::tir::ScheduleStateNode']]],
   ['decisions',['decisions',['../classtvm_1_1tir_1_1TraceNode.html#a28bd8da64eaa35b0150c3b2a08a0e9e4',1,'tvm::tir::TraceNode']]],
   ['decorators',['decorators',['../classtvm_1_1script_1_1printer_1_1FunctionDocNode.html#a5bfd7179298fe5bcbc9527af2b3b98e0',1,'tvm::script::printer::FunctionDocNode::decorators()'],['../classtvm_1_1script_1_1printer_1_1ClassDocNode.html#a253cf698eba7d39b7345553e646bc8b9',1,'tvm::script::printer::ClassDocNode::decorators()']]],
+  ['default_5fdevice_5ftype',['default_device_type',['../classtvm_1_1TargetKindNode.html#a0d66deaddc1ac8bfe3e39616df811b7e',1,'tvm::TargetKindNode']]],
   ['default_5fkeys',['default_keys',['../classtvm_1_1TargetKindNode.html#aa62e049ba158730d9ab88e4c0b173de9',1,'tvm::TargetKindNode']]],
   ['default_5fmax_5fcontinuous_5ferror',['DEFAULT_MAX_CONTINUOUS_ERROR',['../classtvm_1_1auto__scheduler_1_1ProgramMeasurerNode.html#a6600d5b819e6c7e9ef3f6c32c355e3db',1,'tvm::auto_scheduler::ProgramMeasurerNode']]],
   ['default_5fprimitive_5fvirtual_5fdevice',['default_primitive_virtual_device',['../classtvm_1_1CompilationConfigNode.html#abe4569cf32c57b710be99b50e7118876',1,'tvm::CompilationConfigNode']]],
@@ -28,7 +29,7 @@ var searchData=
   ['device_5findex',['device_index',['../structTVMGraphExecutorGraphAttr.html#ae55c2e6d56c07fc475c44d82ba1de012',1,'TVMGraphExecutorGraphAttr::device_index()'],['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#af91776ae1a16f3545bc4749599d62643',1,'tvm::runtime::vm::Instruction::device_index()']]],
   ['device_5fmetrics',['device_metrics',['../classtvm_1_1runtime_1_1profiling_1_1ReportNode.html#ababc1b17ad3a7f9bfe9a8006cc2c4cd0',1,'tvm::runtime::profiling::ReportNode']]],
   ['device_5fscope',['device_scope',['../namespacetvm_1_1tir_1_1attr.html#a36db026f638ad3d951c302796ddcae24',1,'tvm::tir::attr']]],
-  ['device_5ftype',['device_type',['../classtvm_1_1meta__schedule_1_1RunnerInputNode.html#a5879e387f788cfd90b5a62ef1e55011e',1,'tvm::meta_schedule::RunnerInputNode::device_type()'],['../classtvm_1_1TargetKindNode.html#a18459286d8d501892992a4209ad08652',1,'tvm::TargetKindNode::device_type()'],['../namespacetvm_1_1tir_1_1attr.html#a7e4e7cd47471a9089022214d63d24206',1,'tvm::tir::attr::device_type()']]],
+  ['device_5ftype',['device_type',['../classtvm_1_1meta__schedule_1_1RunnerInputNode.html#a5879e387f788cfd90b5a62ef1e55011e',1,'tvm::meta_schedule::RunnerInputNode::device_type()'],['../namespacetvm_1_1tir_1_1attr.html#a7e4e7cd47471a9089022214d63d24206',1,'tvm::tir::attr::device_type()']]],
   ['devices_5f',['devices_',['../classtvm_1_1runtime_1_1vm_1_1VirtualMachine.html#a602daa8d70ae598a833d8601d1ef6d95',1,'tvm::runtime::vm::VirtualMachine']]],
   ['diag_5fctx',['diag_ctx',['../classtvm_1_1transform_1_1PassContextNode.html#aa7bfc5ab1cf83d43a9b9bf4f1e62dd8c',1,'tvm::transform::PassContextNode']]],
   ['diagnostics',['diagnostics',['../classtvm_1_1DiagnosticContextNode.html#ada207669f235f6aa8dbf310583a92339',1,'tvm::DiagnosticContextNode']]],
diff --git a/docs/reference/api/doxygen/search__task_8h_source.html b/docs/reference/api/doxygen/search__task_8h_source.html
index 806da3d6c3..c714fdfceb 100644
--- a/docs/reference/api/doxygen/search__task_8h_source.html
+++ b/docs/reference/api/doxygen/search__task_8h_source.html
@@ -91,7 +91,7 @@ $(function() {
 <div class="ttc" id="object_8h_html_ac6e7295a4999e2c8e4a2c990beca887a"><div class="ttname"><a href="object_8h.html#ac6e7295a4999e2c8e4a2c990beca887a">TVM_DEFINE_OBJECT_REF_METHODS</a></div><div class="ttdeci">#define TVM_DEFINE_OBJECT_REF_METHODS(TypeName, ParentType, ObjectName)</div><div class="ttdef"><b>Definition:</b> object.h:713</div></div>
 <div class="ttc" id="classtvm_1_1auto__scheduler_1_1HardwareParamsNode_html_ae5bc9c8d5e48ac5c1a40460782fbd9d7"><div class="ttname"><a href="classtvm_1_1auto__scheduler_1_1HardwareParamsNode.html#ae5bc9c8d5e48ac5c1a40460782fbd9d7">tvm::auto_scheduler::HardwareParamsNode::_type_key</a></div><div class="ttdeci">static constexpr const char * _type_key</div><div class="ttdef"><b>Definition:</b> search_task.h:78</div></div>
 <div class="ttc" id="classtvm_1_1auto__scheduler_1_1SearchTaskNode_html_acf4407e0c8dced81b05b34ec0426c933"><div class="ttname"><a href="classtvm_1_1auto__scheduler_1_1SearchTaskNode.html#acf4407e0c8dced81b05b34ec0426c933">tvm::auto_scheduler::SearchTaskNode::target</a></div><div class="ttdeci">Target target</div><div class="ttdoc">The target device of this search task. </div><div class="ttdef"><b>Definition:</b> search_task.h:119</div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="compute__dag_8h_html"><div class="ttname"><a href="compute__dag_8h.html">compute_dag.h</a></div><div class="ttdoc">The auto-scheduler&amp;#39;s computational graph and related program analyses. </div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1ObjectRef_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></div><div class="ttdoc">Base class of all object reference. </div><div class="ttdef"><b>Definition:</b> object.h:511</div></div>
 <div class="ttc" id="object_8h_html_af8330e3864503fb7c4133ae4d48fe4a2"><div class="ttname"><a href="object_8h.html#af8330e3864503fb7c4133ae4d48fe4a2">TVM_DEFINE_OBJECT_REF_COW_METHOD</a></div><div class="ttdeci">#define TVM_DEFINE_OBJECT_REF_COW_METHOD(ObjectName)</div><div class="ttdoc">Define CopyOnWrite function in an ObjectRef. </div><div class="ttdef"><b>Definition:</b> object.h:785</div></div>
diff --git a/docs/reference/api/doxygen/tag_8h_source.html b/docs/reference/api/doxygen/tag_8h_source.html
index 074f6cdfd1..a4a2bf0cde 100644
--- a/docs/reference/api/doxygen/tag_8h_source.html
+++ b/docs/reference/api/doxygen/tag_8h_source.html
@@ -80,7 +80,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1runtime_1_1String_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1String.html">tvm::runtime::String</a></div><div class="ttdoc">Reference to string objects. </div><div class="ttdef"><b>Definition:</b> string.h:97</div></div>
 <div class="ttc" id="classtvm_1_1TargetTagRegEntry_html"><div class="ttname"><a href="classtvm_1_1TargetTagRegEntry.html">tvm::TargetTagRegEntry</a></div><div class="ttdef"><b>Definition:</b> tag.h:100</div></div>
 <div class="ttc" id="object_8h_html_ac6e7295a4999e2c8e4a2c990beca887a"><div class="ttname"><a href="object_8h.html#ac6e7295a4999e2c8e4a2c990beca887a">TVM_DEFINE_OBJECT_REF_METHODS</a></div><div class="ttdeci">#define TVM_DEFINE_OBJECT_REF_METHODS(TypeName, ParentType, ObjectName)</div><div class="ttdef"><b>Definition:</b> object.h:713</div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1ObjectRef_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></div><div class="ttdoc">Base class of all object reference. </div><div class="ttdef"><b>Definition:</b> object.h:511</div></div>
 <div class="ttc" id="attr__registry__map_8h_html"><div class="ttname"><a href="attr__registry__map_8h.html">attr_registry_map.h</a></div><div class="ttdoc">Attribute map used in registry. </div></div>
 <div class="ttc" id="target_8h_html"><div class="ttname"><a href="target_8h.html">target.h</a></div><div class="ttdoc">Compilation target object. </div></div>
diff --git a/docs/reference/api/doxygen/target_8h_source.html b/docs/reference/api/doxygen/target_8h_source.html
index 58ef68c1fc..5172a69269 100644
--- a/docs/reference/api/doxygen/target_8h_source.html
+++ b/docs/reference/api/doxygen/target_8h_source.html
@@ -66,13 +66,13 @@ $(function() {
 <div class="title">target.h</div>  </div>
 </div><!--header-->
 <div class="contents">
-<a href="target_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno">    1</span>&#160;<span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno">    2</span>&#160;<span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno">    3</span>&#160;<span class="comment"> * or more cont [...]
+<a href="target_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno">    1</span>&#160;<span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno">    2</span>&#160;<span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno">    3</span>&#160;<span class="comment"> * or more cont [...]
 <div class="ttc" id="classtvm_1_1TargetNode_html_a1bd600905c1a4469726184adbc9087b0"><div class="ttname"><a href="classtvm_1_1TargetNode.html#a1bd600905c1a4469726184adbc9087b0">tvm::TargetNode::GetLibs</a></div><div class="ttdeci">std::unordered_set&lt; std::string &gt; GetLibs() const</div><div class="ttdoc">Get the keys for this target as an unordered_set of string. </div></div>
 <div class="ttc" id="classtvm_1_1TargetNode_html_a8602fa00bc833f39fa16b682acd704b7"><div class="ttname"><a href="classtvm_1_1TargetNode.html#a8602fa00bc833f39fa16b682acd704b7">tvm::TargetNode::TVM_DECLARE_FINAL_OBJECT_INFO</a></div><div class="ttdeci">TVM_DECLARE_FINAL_OBJECT_INFO(TargetNode, Object)</div></div>
 <div class="ttc" id="node_8h_html"><div class="ttname"><a href="node_8h.html">node.h</a></div><div class="ttdoc">Definitions and helper macros for IR/AST nodes. </div></div>
 <div class="ttc" id="classtvm_1_1TargetNode_html"><div class="ttname"><a href="classtvm_1_1TargetNode.html">tvm::TargetNode</a></div><div class="ttdoc">Compilation target. </div><div class="ttdef"><b>Definition:</b> target.h:46</div></div>
 <div class="ttc" id="classtvm_1_1SEqualReducer_html"><div class="ttname"><a href="classtvm_1_1SEqualReducer.html">tvm::SEqualReducer</a></div><div class="ttdoc">A Reducer class to reduce the structural equality result of two objects. </div><div class="ttdef"><b>Definition:</b> structural_equal.h:124</div></div>
-<div class="ttc" id="classtvm_1_1Target_html_a58a5a1e042e265fe5a6973045226fe1a"><div class="ttname"><a href="classtvm_1_1Target.html#a58a5a1e042e265fe5a6973045226fe1a">tvm::Target::Target</a></div><div class="ttdeci">Target(std::nullptr_t)</div><div class="ttdoc">Construct a null Target. </div><div class="ttdef"><b>Definition:</b> target.h:184</div></div>
+<div class="ttc" id="classtvm_1_1Target_html_a58a5a1e042e265fe5a6973045226fe1a"><div class="ttname"><a href="classtvm_1_1Target.html#a58a5a1e042e265fe5a6973045226fe1a">tvm::Target::Target</a></div><div class="ttdeci">Target(std::nullptr_t)</div><div class="ttdoc">Construct a null Target. </div><div class="ttdef"><b>Definition:</b> target.h:186</div></div>
 <div class="ttc" id="ir_2expr_8h_html"><div class="ttname"><a href="ir_2expr_8h.html">expr.h</a></div><div class="ttdoc">Base expr nodes in TVM. </div></div>
 <div class="ttc" id="ir_2module_8h_html"><div class="ttname"><a href="ir_2module_8h.html">module.h</a></div><div class="ttdoc">IRModule that holds the functions and type definitions. </div></div>
 <div class="ttc" id="namespacetvm_html"><div class="ttname"><a href="namespacetvm.html">tvm</a></div><div class="ttdoc">runtime implementation for LibTorch/TorchScript. </div><div class="ttdef"><b>Definition:</b> analyzer.h:36</div></div>
@@ -90,30 +90,31 @@ $(function() {
 <div class="ttc" id="classtvm_1_1AttrVisitor_html"><div class="ttname"><a href="classtvm_1_1AttrVisitor.html">tvm::AttrVisitor</a></div><div class="ttdoc">Visitor class to get the attributes of an AST/IR node. The content is going to be called for each fie...</div><div class="ttdef"><b>Definition:</b> reflection.h:52</div></div>
 <div class="ttc" id="target__kind_8h_html"><div class="ttname"><a href="target__kind_8h.html">target_kind.h</a></div><div class="ttdoc">Target kind registry. </div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1Array_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1Array.html">tvm::runtime::Array</a></div><div class="ttdoc">Array, container representing a contiguous sequence of ObjectRefs. </div><div class="ttdef"><b>Definition:</b> array.h:289</div></div>
-<div class="ttc" id="classtvm_1_1TargetNode_html_aa4f6c3e10daa0e968360e258029a9860"><div class="ttname"><a href="classtvm_1_1TargetNode.html#aa4f6c3e10daa0e968360e258029a9860">tvm::TargetNode::GetFeature</a></div><div class="ttdeci">Optional&lt; TObjectRef &gt; GetFeature(const std::string &amp;attr_key, TObjectRef default_value) const</div><div class="ttdef"><b>Definition:</b> target.h:153</div></div>
+<div class="ttc" id="classtvm_1_1TargetNode_html_aa4f6c3e10daa0e968360e258029a9860"><div class="ttname"><a href="classtvm_1_1TargetNode.html#aa4f6c3e10daa0e968360e258029a9860">tvm::TargetNode::GetFeature</a></div><div class="ttdeci">Optional&lt; TObjectRef &gt; GetFeature(const std::string &amp;attr_key, TObjectRef default_value) const</div><div class="ttdef"><b>Definition:</b> target.h:155</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1Map_html_abce8c6206f11edfd3c493b843d52685f"><div class="ttname"><a href="classtvm_1_1runtime_1_1Map.html#abce8c6206f11edfd3c493b843d52685f">tvm::runtime::Map::find</a></div><div class="ttdeci">iterator find(const K &amp;key) const</div><div class="ttdef"><b>Definition:</b> map.h:1383</div></div>
 <div class="ttc" id="classtvm_1_1TargetNode_html_a41181a3757227725abc614e976b264ad"><div class="ttname"><a href="classtvm_1_1TargetNode.html#a41181a3757227725abc614e976b264ad">tvm::TargetNode::ToDebugString</a></div><div class="ttdeci">String ToDebugString() const</div><div class="ttdoc">Returns a human readable representation of Target which includes all fields, especially the host...</div></div>
-<div class="ttc" id="classtvm_1_1TargetNode_html_a65394b35be247cafb4376da9d6c81440"><div class="ttname"><a href="classtvm_1_1TargetNode.html#a65394b35be247cafb4376da9d6c81440">tvm::TargetNode::_type_has_method_sequal_reduce</a></div><div class="ttdeci">static constexpr const bool _type_has_method_sequal_reduce</div><div class="ttdef"><b>Definition:</b> target.h:166</div></div>
+<div class="ttc" id="classtvm_1_1TargetNode_html_a65394b35be247cafb4376da9d6c81440"><div class="ttname"><a href="classtvm_1_1TargetNode.html#a65394b35be247cafb4376da9d6c81440">tvm::TargetNode::_type_has_method_sequal_reduce</a></div><div class="ttdeci">static constexpr const bool _type_has_method_sequal_reduce</div><div class="ttdef"><b>Definition:</b> target.h:168</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1String_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1String.html">tvm::runtime::String</a></div><div class="ttdoc">Reference to string objects. </div><div class="ttdef"><b>Definition:</b> string.h:97</div></div>
-<div class="ttc" id="classtvm_1_1TargetNode_html_a0214077090c7210fc72645324fbf25cf"><div class="ttname"><a href="classtvm_1_1TargetNode.html#a0214077090c7210fc72645324fbf25cf">tvm::TargetNode::GetAttr</a></div><div class="ttdeci">Optional&lt; TObjectRef &gt; GetAttr(const std::string &amp;attr_key, TObjectRef default_value) const</div><div class="ttdoc">Get an entry from attrs of the target. </div><div class="ttdef"><b>Definition:</b> target.h:118</div></div>
+<div class="ttc" id="classtvm_1_1TargetNode_html_a0214077090c7210fc72645324fbf25cf"><div class="ttname"><a href="classtvm_1_1TargetNode.html#a0214077090c7210fc72645324fbf25cf">tvm::TargetNode::GetAttr</a></div><div class="ttdeci">Optional&lt; TObjectRef &gt; GetAttr(const std::string &amp;attr_key, TObjectRef default_value) const</div><div class="ttdoc">Get an entry from attrs of the target. </div><div class="ttdef"><b>Definition:</b> target.h:120</div></div>
 <div class="ttc" id="object_8h_html_ac6e7295a4999e2c8e4a2c990beca887a"><div class="ttname"><a href="object_8h.html#ac6e7295a4999e2c8e4a2c990beca887a">TVM_DEFINE_OBJECT_REF_METHODS</a></div><div class="ttdeci">#define TVM_DEFINE_OBJECT_REF_METHODS(TypeName, ParentType, ObjectName)</div><div class="ttdef"><b>Definition:</b> object.h:713</div></div>
 <div class="ttc" id="classtvm_1_1TargetNode_html_abd05b2c258974b13af1192c911ccb12b"><div class="ttname"><a href="classtvm_1_1TargetNode.html#abd05b2c258974b13af1192c911ccb12b">tvm::TargetNode::GetKeys</a></div><div class="ttdeci">std::vector&lt; std::string &gt; GetKeys() const</div><div class="ttdoc">Get the keys for this target as a vector of string. </div></div>
 <div class="ttc" id="classtvm_1_1With_html"><div class="ttname"><a href="classtvm_1_1With.html">tvm::With</a></div><div class="ttdoc">RAII wrapper function to enter and exit a context object similar to python&amp;#39;s with syntax...</div><div class="ttdef"><b>Definition:</b> with.h:58</div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="classtvm_1_1TargetNode_html_a688f038a13ea17f6699e57078f0b0f2f"><div class="ttname"><a href="classtvm_1_1TargetNode.html#a688f038a13ea17f6699e57078f0b0f2f">tvm::TargetNode::attrs</a></div><div class="ttdeci">Map&lt; String, ObjectRef &gt; attrs</div><div class="ttdoc">Collection of attributes. </div><div class="ttdef"><b>Definition:</b> target.h:57</div></div>
 <div class="ttc" id="classtvm_1_1TargetNode_html_aec9e821b23172eb9460f46df0dc346fb"><div class="ttname"><a href="classtvm_1_1TargetNode.html#aec9e821b23172eb9460f46df0dc346fb">tvm::TargetNode::keys</a></div><div class="ttdeci">Array&lt; String &gt; keys</div><div class="ttdoc">Keys for this target. </div><div class="ttdef"><b>Definition:</b> target.h:55</div></div>
 <div class="ttc" id="classtvm_1_1TargetNode_html_af313f5aedbe162374d424358d34d3c7e"><div class="ttname"><a href="classtvm_1_1TargetNode.html#af313f5aedbe162374d424358d34d3c7e">tvm::TargetNode::Export</a></div><div class="ttdeci">Map&lt; String, ObjectRef &gt; Export() const</div></div>
-<div class="ttc" id="classtvm_1_1TargetNode_html_a7735df0c36621e5c9045a882991a4b32"><div class="ttname"><a href="classtvm_1_1TargetNode.html#a7735df0c36621e5c9045a882991a4b32">tvm::TargetNode::GetFeature</a></div><div class="ttdeci">Optional&lt; TObjectRef &gt; GetFeature(const std::string &amp;feature_key, Optional&lt; TObjectRef &gt; default_value=Optional&lt; TObjectRef &gt;(nullptr)) const</div><div class="ttdoc">Get a Target feature. </div><div class="ttdef"><b>Definition:</b> targe [...]
+<div class="ttc" id="classtvm_1_1TargetNode_html_a7735df0c36621e5c9045a882991a4b32"><div class="ttname"><a href="classtvm_1_1TargetNode.html#a7735df0c36621e5c9045a882991a4b32">tvm::TargetNode::GetFeature</a></div><div class="ttdeci">Optional&lt; TObjectRef &gt; GetFeature(const std::string &amp;feature_key, Optional&lt; TObjectRef &gt; default_value=Optional&lt; TObjectRef &gt;(nullptr)) const</div><div class="ttdoc">Get a Target feature. </div><div class="ttdef"><b>Definition:</b> targe [...]
 <div class="ttc" id="classtvm_1_1runtime_1_1ObjectRef_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></div><div class="ttdoc">Base class of all object reference. </div><div class="ttdef"><b>Definition:</b> object.h:511</div></div>
-<div class="ttc" id="classtvm_1_1TargetNode_html_ad4a9f21d97d244c2055e9ba2eca71ee5"><div class="ttname"><a href="classtvm_1_1TargetNode.html#ad4a9f21d97d244c2055e9ba2eca71ee5">tvm::TargetNode::VisitAttrs</a></div><div class="ttdeci">void VisitAttrs(AttrVisitor *v)</div><div class="ttdef"><b>Definition:</b> target.h:81</div></div>
-<div class="ttc" id="classtvm_1_1TargetNode_html_a13d1def3992d37107a7fd7c75e4370d3"><div class="ttname"><a href="classtvm_1_1TargetNode.html#a13d1def3992d37107a7fd7c75e4370d3">tvm::TargetNode::_type_has_method_shash_reduce</a></div><div class="ttdeci">static constexpr const bool _type_has_method_shash_reduce</div><div class="ttdef"><b>Definition:</b> target.h:167</div></div>
+<div class="ttc" id="classtvm_1_1TargetNode_html_ad4a9f21d97d244c2055e9ba2eca71ee5"><div class="ttname"><a href="classtvm_1_1TargetNode.html#ad4a9f21d97d244c2055e9ba2eca71ee5">tvm::TargetNode::VisitAttrs</a></div><div class="ttdeci">void VisitAttrs(AttrVisitor *v)</div><div class="ttdef"><b>Definition:</b> target.h:83</div></div>
+<div class="ttc" id="classtvm_1_1TargetNode_html_a13d1def3992d37107a7fd7c75e4370d3"><div class="ttname"><a href="classtvm_1_1TargetNode.html#a13d1def3992d37107a7fd7c75e4370d3">tvm::TargetNode::_type_has_method_shash_reduce</a></div><div class="ttdeci">static constexpr const bool _type_has_method_shash_reduce</div><div class="ttdef"><b>Definition:</b> target.h:169</div></div>
 <div class="ttc" id="classtvm_1_1TargetNode_html_abdeae1bf6e037771b1b931f26dba15c6"><div class="ttname"><a href="classtvm_1_1TargetNode.html#abdeae1bf6e037771b1b931f26dba15c6">tvm::TargetNode::host</a></div><div class="ttdeci">Optional&lt; ObjectRef &gt; host</div><div class="ttdoc">Target host information, must be Target type. </div><div class="ttdef"><b>Definition:</b> target.h:51</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1Map_html_a60c1dac32729c4bf8351972da11793e4"><div class="ttname"><a href="classtvm_1_1runtime_1_1Map.html#a60c1dac32729c4bf8351972da11793e4">tvm::runtime::Map::end</a></div><div class="ttdeci">iterator end() const</div><div class="ttdef"><b>Definition:</b> map.h:1381</div></div>
-<div class="ttc" id="classtvm_1_1TargetNode_html_a496626468eac236e9e046cb77a5f697e"><div class="ttname"><a href="classtvm_1_1TargetNode.html#a496626468eac236e9e046cb77a5f697e">tvm::TargetNode::_type_key</a></div><div class="ttdeci">static constexpr const char * _type_key</div><div class="ttdef"><b>Definition:</b> target.h:165</div></div>
+<div class="ttc" id="classtvm_1_1TargetNode_html_a496626468eac236e9e046cb77a5f697e"><div class="ttname"><a href="classtvm_1_1TargetNode.html#a496626468eac236e9e046cb77a5f697e">tvm::TargetNode::_type_key</a></div><div class="ttdeci">static constexpr const char * _type_key</div><div class="ttdef"><b>Definition:</b> target.h:167</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1Map_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1Map.html">tvm::runtime::Map</a></div><div class="ttdoc">Map container of NodeRef-&gt;NodeRef in DSL graph. Map implements copy on write semantics, which means map is mutable but copy will happen when array is referenced in more than two places. </div><div class="ttdef"><b>Definition:</b> map.h:1271</div></div>
 <div class="ttc" id="classtvm_1_1TargetNode_html_a998369eed05aa80140564c2f29742d46"><div class="ttname"><a href="classtvm_1_1TargetNode.html#a998369eed05aa80140564c2f29742d46">tvm::TargetNode::features</a></div><div class="ttdeci">Map&lt; String, ObjectRef &gt; features</div><div class="ttdoc">Target features. </div><div class="ttdef"><b>Definition:</b> target.h:59</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1Optional_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1Optional.html">tvm::runtime::Optional</a></div><div class="ttdoc">Optional container that to represent to a Nullable variant of T. </div><div class="ttdef"><b>Definition:</b> optional.h:51</div></div>
-<div class="ttc" id="classtvm_1_1TargetNode_html_a008fae4839d63a3a7a9bc7e0f0e40480"><div class="ttname"><a href="classtvm_1_1TargetNode.html#a008fae4839d63a3a7a9bc7e0f0e40480">tvm::TargetNode::GetAttr</a></div><div class="ttdeci">Optional&lt; TObjectRef &gt; GetAttr(const std::string &amp;attr_key, Optional&lt; TObjectRef &gt; default_value=Optional&lt; TObjectRef &gt;(nullptr)) const</div><div class="ttdoc">Get an entry from attrs of the target. </div><div class="ttdef"><b>Definition:</ [...]
+<div class="ttc" id="classtvm_1_1TargetNode_html_a01c985da7b7451518db042094336a4b1"><div class="ttname"><a href="classtvm_1_1TargetNode.html#a01c985da7b7451518db042094336a4b1">tvm::TargetNode::GetTargetDeviceType</a></div><div class="ttdeci">int GetTargetDeviceType() const</div></div>
+<div class="ttc" id="classtvm_1_1TargetNode_html_a008fae4839d63a3a7a9bc7e0f0e40480"><div class="ttname"><a href="classtvm_1_1TargetNode.html#a008fae4839d63a3a7a9bc7e0f0e40480">tvm::TargetNode::GetAttr</a></div><div class="ttdeci">Optional&lt; TObjectRef &gt; GetAttr(const std::string &amp;attr_key, Optional&lt; TObjectRef &gt; default_value=Optional&lt; TObjectRef &gt;(nullptr)) const</div><div class="ttdoc">Get an entry from attrs of the target. </div><div class="ttdef"><b>Definition:</ [...]
 <div class="ttc" id="with_8h_html"><div class="ttname"><a href="with_8h.html">with.h</a></div><div class="ttdoc">RAII wrapper function to enter and exit a context object similar to python&amp;#39;s with syntax...</div></div>
 <div class="ttc" id="namespacetvm_html_a741dec82c75bea850290cf8bc412c006"><div class="ttname"><a href="namespacetvm.html#a741dec82c75bea850290cf8bc412c006">tvm::CheckAndUpdateHostConsistency</a></div><div class="ttdeci">void CheckAndUpdateHostConsistency(Target *target, Target *host)</div><div class="ttdoc">Check and update host field of the given legacy target and target host pair. Note that this function ...</div></div>
 </div><!-- fragment --></div><!-- contents -->
diff --git a/docs/reference/api/doxygen/target__kind_8h.html b/docs/reference/api/doxygen/target__kind_8h.html
index 8d7cc52b3a..bb83f0da32 100644
--- a/docs/reference/api/doxygen/target__kind_8h.html
+++ b/docs/reference/api/doxygen/target__kind_8h.html
@@ -196,12 +196,12 @@ Variables</h2></td></tr>
         </tr>
       </table>
 </div><div class="memdoc">
-<b>Value:</b><div class="fragment"><div class="line"><a class="code" href="object_8h.html#a73bf3e57b9d7a6e0dd55d901321d01ed">TVM_STR_CONCAT</a>(<a class="code" href="target__kind_8h.html#a2341708a81fcee611c3c5a156596522c">TVM_TARGET_KIND_REGISTER_VAR_DEF</a>, __COUNTER__) = <a class="code" href="classtvm_1_1TargetKindRegEntry.html#a478c1bd27f0b8dd1b95c58808f8d0c70">\</a></div><div class="line"><a class="code" href="classtvm_1_1TargetKindRegEntry.html#a478c1bd27f0b8dd1b95c58808f8d0c70">   [...]
-<div class="ttc" id="classtvm_1_1TargetKindRegEntry_html_ae3ce5349493f402b82e755a0a180bd9a"><div class="ttname"><a href="classtvm_1_1TargetKindRegEntry.html#ae3ce5349493f402b82e755a0a180bd9a">tvm::TargetKindRegEntry::set_device_type</a></div><div class="ttdeci">TargetKindRegEntry &amp; set_device_type(int device_type)</div><div class="ttdoc">Set DLPack&amp;#39;s device_type the target. </div><div class="ttdef"><b>Definition:</b> target_kind.h:366</div></div>
+<b>Value:</b><div class="fragment"><div class="line"><a class="code" href="object_8h.html#a73bf3e57b9d7a6e0dd55d901321d01ed">TVM_STR_CONCAT</a>(<a class="code" href="target__kind_8h.html#a2341708a81fcee611c3c5a156596522c">TVM_TARGET_KIND_REGISTER_VAR_DEF</a>, __COUNTER__) = <a class="code" href="classtvm_1_1TargetKindRegEntry.html#a478c1bd27f0b8dd1b95c58808f8d0c70">\</a></div><div class="line"><a class="code" href="classtvm_1_1TargetKindRegEntry.html#a478c1bd27f0b8dd1b95c58808f8d0c70">   [...]
 <div class="ttc" id="classtvm_1_1TargetKindRegEntry_html_a36f21402bccb03300478d6c85bd05512"><div class="ttname"><a href="classtvm_1_1TargetKindRegEntry.html#a36f21402bccb03300478d6c85bd05512">tvm::TargetKindRegEntry::set_name</a></div><div class="ttdeci">TargetKindRegEntry &amp; set_name()</div><div class="ttdoc">Set name of the TargetKind to be the same as registry if it is empty. </div><div class="ttdef"><b>Definition:</b> target_kind.h:405</div></div>
 <div class="ttc" id="classtvm_1_1TargetKindRegEntry_html_a478c1bd27f0b8dd1b95c58808f8d0c70"><div class="ttname"><a href="classtvm_1_1TargetKindRegEntry.html#a478c1bd27f0b8dd1b95c58808f8d0c70">tvm::TargetKindRegEntry::RegisterOrGet</a></div><div class="ttdeci">static TargetKindRegEntry &amp; RegisterOrGet(const String &amp;target_kind_name)</div><div class="ttdoc">Register or get a new entry. </div></div>
 <div class="ttc" id="classtvm_1_1TargetKindRegEntry_html_accd2e15133cf6e6fe2703f57464eae89"><div class="ttname"><a href="classtvm_1_1TargetKindRegEntry.html#accd2e15133cf6e6fe2703f57464eae89">tvm::TargetKindRegEntry::add_attr_option</a></div><div class="ttdeci">TargetKindRegEntry &amp; add_attr_option(const String &amp;key)</div><div class="ttdoc">Register a valid configuration option and its ValueType for validation. </div><div class="ttdef"><b>Definition:</b> target_kind.h:390</div></div>
 <div class="ttc" id="target__kind_8h_html_a2341708a81fcee611c3c5a156596522c"><div class="ttname"><a href="target__kind_8h.html#a2341708a81fcee611c3c5a156596522c">TVM_TARGET_KIND_REGISTER_VAR_DEF</a></div><div class="ttdeci">#define TVM_TARGET_KIND_REGISTER_VAR_DEF</div><div class="ttdef"><b>Definition:</b> target_kind.h:412</div></div>
+<div class="ttc" id="classtvm_1_1TargetKindRegEntry_html_aa34789ae275e36dcd6696aa3881bbc92"><div class="ttname"><a href="classtvm_1_1TargetKindRegEntry.html#aa34789ae275e36dcd6696aa3881bbc92">tvm::TargetKindRegEntry::set_default_device_type</a></div><div class="ttdeci">TargetKindRegEntry &amp; set_default_device_type(int device_type)</div><div class="ttdoc">Set DLPack&amp;#39;s device_type the target. </div><div class="ttdef"><b>Definition:</b> target_kind.h:366</div></div>
 </div><!-- fragment -->
 <p>Register a new target kind, or set an attribute of the corresponding target kind. </p>
 <dl class="params"><dt>Parameters</dt><dd>
diff --git a/docs/reference/api/doxygen/target__kind_8h_source.html b/docs/reference/api/doxygen/target__kind_8h_source.html
index 79f64aaed6..793071450d 100644
--- a/docs/reference/api/doxygen/target__kind_8h_source.html
+++ b/docs/reference/api/doxygen/target__kind_8h_source.html
@@ -66,7 +66,7 @@ $(function() {
 <div class="title">target_kind.h</div>  </div>
 </div><!--header-->
 <div class="contents">
-<a href="target__kind_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno">    1</span>&#160;<span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno">    2</span>&#160;<span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno">    3</span>&#160;<span class="comment"> * or mor [...]
+<a href="target__kind_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno">    1</span>&#160;<span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno">    2</span>&#160;<span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno">    3</span>&#160;<span class="comment"> * or mor [...]
 <div class="ttc" id="classtvm_1_1runtime_1_1TVMRetValue_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1TVMRetValue.html">tvm::runtime::TVMRetValue</a></div><div class="ttdoc">Return Value container, Unlike TVMArgValue, which only holds reference and do not delete the underlyi...</div><div class="ttdef"><b>Definition:</b> packed_func.h:799</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1attr_html_a7e4e7cd47471a9089022214d63d24206"><div class="ttname"><a href="namespacetvm_1_1tir_1_1attr.html#a7e4e7cd47471a9089022214d63d24206">tvm::tir::attr::device_type</a></div><div class="ttdeci">constexpr const char * device_type</div><div class="ttdoc">The device type. </div><div class="ttdef"><b>Definition:</b> stmt.h:1392</div></div>
 <div class="ttc" id="structtvm_1_1detail_1_1is__specialized_html"><div class="ttname"><a href="structtvm_1_1detail_1_1is__specialized.html">tvm::detail::is_specialized</a></div><div class="ttdef"><b>Definition:</b> target_kind.h:289</div></div>
@@ -101,7 +101,6 @@ $(function() {
 <div class="ttc" id="attr__registry__map_8h_html"><div class="ttname"><a href="attr__registry__map_8h.html">attr_registry_map.h</a></div><div class="ttdoc">Attribute map used in registry. </div></div>
 <div class="ttc" id="namespacetvm_1_1attr_html_a17f834882ba3cd00890329433e8e81dd"><div class="ttname"><a href="namespacetvm_1_1attr.html#a17f834882ba3cd00890329433e8e81dd">tvm::attr::kIsExternalCodegen</a></div><div class="ttdeci">constexpr const char * kIsExternalCodegen</div><div class="ttdoc">A TargetKind attribute of type Bool. If true, then the target kind name also corresponds to an extern...</div><div class="ttdef"><b>Definition:</b> target_kind.h:432</div></div>
 <div class="ttc" id="object_8h_html_a3aea9b3f65aeb9150c0fa7800e5573c6"><div class="ttname"><a href="object_8h.html#a3aea9b3f65aeb9150c0fa7800e5573c6">TVM_DECLARE_FINAL_OBJECT_INFO</a></div><div class="ttdeci">#define TVM_DECLARE_FINAL_OBJECT_INFO(TypeName, ParentType)</div><div class="ttdoc">helper macro to declare type information in a final class. </div><div class="ttdef"><b>Definition:</b> object.h:671</div></div>
-<div class="ttc" id="classtvm_1_1TargetKindNode_html_a18459286d8d501892992a4209ad08652"><div class="ttname"><a href="classtvm_1_1TargetKindNode.html#a18459286d8d501892992a4209ad08652">tvm::TargetKindNode::device_type</a></div><div class="ttdeci">int device_type</div><div class="ttdoc">Device type of target kind. </div><div class="ttdef"><b>Definition:</b> target_kind.h:95</div></div>
 <div class="ttc" id="classtvm_1_1TargetKindRegEntry_html_a4fa4f8e5fa280ddf3dc71310afd467a5"><div class="ttname"><a href="classtvm_1_1TargetKindRegEntry.html#a4fa4f8e5fa280ddf3dc71310afd467a5">tvm::TargetKindRegEntry::set_attr</a></div><div class="ttdeci">TargetKindRegEntry &amp; set_attr(const String &amp;attr_name, const ValueType &amp;value, int plevel=10)</div><div class="ttdoc">Register additional attributes to target_kind. </div><div class="ttdef"><b>Definition:</b> target_kind.h:35 [...]
 <div class="ttc" id="structtvm_1_1detail_1_1is__specialized_html_a3ea7783c457d7ddc82100674292724f4"><div class="ttname"><a href="structtvm_1_1detail_1_1is__specialized.html#a3ea7783c457d7ddc82100674292724f4">tvm::detail::is_specialized::type</a></div><div class="ttdeci">std::false_type type</div><div class="ttdef"><b>Definition:</b> target_kind.h:290</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1Map_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1Map.html">tvm::runtime::Map</a></div><div class="ttdoc">Map container of NodeRef-&gt;NodeRef in DSL graph. Map implements copy on write semantics, which means map is mutable but copy will happen when array is referenced in more than two places. </div><div class="ttdef"><b>Definition:</b> map.h:1271</div></div>
@@ -111,6 +110,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1runtime_1_1Optional_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1Optional.html">tvm::runtime::Optional</a></div><div class="ttdoc">Optional container that to represent to a Nullable variant of T. </div><div class="ttdef"><b>Definition:</b> optional.h:51</div></div>
 <div class="ttc" id="classtvm_1_1TargetKindAttrMap_html"><div class="ttname"><a href="classtvm_1_1TargetKindAttrMap.html">tvm::TargetKindAttrMap</a></div><div class="ttdoc">Map&lt;TargetKind, ValueType&gt; used to store meta-information about TargetKind. </div><div class="ttdef"><b>Definition:</b> target_kind.h:87</div></div>
 <div class="ttc" id="classtvm_1_1TargetKindRegEntry_html_a21152c83f61180dcb6293226a98025a8"><div class="ttname"><a href="classtvm_1_1TargetKindRegEntry.html#a21152c83f61180dcb6293226a98025a8">tvm::TargetKindRegEntry::set_target_parser</a></div><div class="ttdeci">TargetKindRegEntry &amp; set_target_parser(FTVMTargetParser parser)</div><div class="ttdoc">Set the parsing function applied upon target creation. </div><div class="ttdef"><b>Definition:</b> target_kind.h:384</div></div>
+<div class="ttc" id="classtvm_1_1TargetKindRegEntry_html_aa34789ae275e36dcd6696aa3881bbc92"><div class="ttname"><a href="classtvm_1_1TargetKindRegEntry.html#aa34789ae275e36dcd6696aa3881bbc92">tvm::TargetKindRegEntry::set_default_device_type</a></div><div class="ttdeci">TargetKindRegEntry &amp; set_default_device_type(int device_type)</div><div class="ttdoc">Set DLPack&amp;#39;s device_type the target. </div><div class="ttdef"><b>Definition:</b> target_kind.h:366</div></div>
 <div class="ttc" id="classtvm_1_1TargetKind_html_ae3c4bff01e4c03982e4b92b3352c6532"><div class="ttname"><a href="classtvm_1_1TargetKind.html#ae3c4bff01e4c03982e4b92b3352c6532">tvm::TargetKind::GetAttrMap</a></div><div class="ttdeci">static TargetKindAttrMap&lt; ValueType &gt; GetAttrMap(const String &amp;attr_name)</div><div class="ttdoc">Get the attribute map given the attribute name. </div><div class="ttdef"><b>Definition:</b> target_kind.h:352</div></div>
 <div class="ttc" id="classtvm_1_1TargetKindNode_html_a496c8f36bc4ead9952b6a1fd369d20ad"><div class="ttname"><a href="classtvm_1_1TargetKindNode.html#a496c8f36bc4ead9952b6a1fd369d20ad">tvm::TargetKindNode::name</a></div><div class="ttdeci">String name</div><div class="ttdoc">Name of the target kind. </div><div class="ttdef"><b>Definition:</b> target_kind.h:93</div></div>
 <div class="ttc" id="object_8h_html_a782d0de62fbf75736e29c1e79c22c7f1"><div class="ttname"><a href="object_8h.html#a782d0de62fbf75736e29c1e79c22c7f1">TVM_DEFINE_NOTNULLABLE_OBJECT_REF_METHODS</a></div><div class="ttdeci">#define TVM_DEFINE_NOTNULLABLE_OBJECT_REF_METHODS(TypeName, ParentType, ObjectName)</div><div class="ttdef"><b>Definition:</b> object.h:728</div></div>
diff --git a/docs/reference/api/doxygen/virtual__device_8h_source.html b/docs/reference/api/doxygen/virtual__device_8h_source.html
index 0610390148..c4a0de1612 100644
--- a/docs/reference/api/doxygen/virtual__device_8h_source.html
+++ b/docs/reference/api/doxygen/virtual__device_8h_source.html
@@ -66,7 +66,7 @@ $(function() {
 <div class="title">virtual_device.h</div>  </div>
 </div><!--header-->
 <div class="contents">
-<a href="virtual__device_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno">    1</span>&#160;<span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno">    2</span>&#160;<span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno">    3</span>&#160;<span class="comment"> * or  [...]
+<a href="virtual__device_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno">    1</span>&#160;<span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno">    2</span>&#160;<span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno">    3</span>&#160;<span class="comment"> * or  [...]
 <div class="ttc" id="namespacetvm_html"><div class="ttname"><a href="namespacetvm.html">tvm</a></div><div class="ttdoc">runtime implementation for LibTorch/TorchScript. </div><div class="ttdef"><b>Definition:</b> analyzer.h:36</div></div>
 <div class="ttc" id="classtvm_1_1VirtualDevice_html_a6d818740fcd130202a36aa289bd3e7da"><div class="ttname"><a href="classtvm_1_1VirtualDevice.html#a6d818740fcd130202a36aa289bd3e7da">tvm::VirtualDevice::ForMemoryScope</a></div><div class="ttdeci">static VirtualDevice ForMemoryScope(MemoryScope memory_scope)</div><div class="ttdoc">Returns the VirtualDevice for memory_scope alone. </div><div class="ttdef"><b>Definition:</b> virtual_device.h:312</div></div>
 <div class="ttc" id="classtvm_1_1VirtualDeviceNode_html_ae4d7f111e3a45058026a3ffb156a4790"><div class="ttname"><a href="classtvm_1_1VirtualDeviceNode.html#ae4d7f111e3a45058026a3ffb156a4790">tvm::VirtualDeviceNode::VirtualDevice</a></div><div class="ttdeci">friend class VirtualDevice</div><div class="ttdef"><b>Definition:</b> virtual_device.h:255</div></div>
@@ -84,7 +84,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1runtime_1_1String_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1String.html">tvm::runtime::String</a></div><div class="ttdoc">Reference to string objects. </div><div class="ttdef"><b>Definition:</b> string.h:97</div></div>
 <div class="ttc" id="namespacetvm_html_a7c2095aed90b2129ba631b90103313a2"><div class="ttname"><a href="namespacetvm.html#a7c2095aed90b2129ba631b90103313a2">tvm::Device</a></div><div class="ttdeci">DLDevice Device</div><div class="ttdef"><b>Definition:</b> ndarray.h:43</div></div>
 <div class="ttc" id="classtvm_1_1VirtualDeviceNode_html"><div class="ttname"><a href="classtvm_1_1VirtualDeviceNode.html">tvm::VirtualDeviceNode</a></div><div class="ttdoc">Describes at compile time the constraints on where data is to be stored at runtime down to the (virtu...</div><div class="ttdef"><b>Definition:</b> virtual_device.h:166</div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="classtvm_1_1VirtualDevice_html_ae240fa7f595b80bd3283cbea90a2950e"><div class="ttname"><a href="classtvm_1_1VirtualDevice.html#ae240fa7f595b80bd3283cbea90a2950e">tvm::VirtualDevice::ForDeviceType</a></div><div class="ttdeci">static VirtualDevice ForDeviceType(int device_type, int virtual_device_id=-1)</div><div class="ttdef"><b>Definition:</b> virtual_device.h:288</div></div>
 <div class="ttc" id="classtvm_1_1runtime_1_1ObjectRef_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></div><div class="ttdoc">Base class of all object reference. </div><div class="ttdef"><b>Definition:</b> object.h:511</div></div>
 <div class="ttc" id="classtvm_1_1VirtualDevice_html_a9712f748d0d6671ced9be72f5e11d492"><div class="ttname"><a href="classtvm_1_1VirtualDevice.html#a9712f748d0d6671ced9be72f5e11d492">tvm::VirtualDevice::ForDevice</a></div><div class="ttdeci">static VirtualDevice ForDevice(const Device &amp;device)</div><div class="ttdoc">Returns the VirtualDevice for device. </div><div class="ttdef"><b>Definition:</b> virtual_device.h:296</div></div>
diff --git a/docs/reference/api/doxygen/x86_2bnn_8h_source.html b/docs/reference/api/doxygen/x86_2bnn_8h_source.html
index 92bea28741..892389b6b2 100644
--- a/docs/reference/api/doxygen/x86_2bnn_8h_source.html
+++ b/docs/reference/api/doxygen/x86_2bnn_8h_source.html
@@ -78,7 +78,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1te_1_1ComputeOpNode_html"><div class="ttname"><a href="classtvm_1_1te_1_1ComputeOpNode.html">tvm::te::ComputeOpNode</a></div><div class="ttdoc">A Compute op that compute a tensor on certain domain. </div><div class="ttdef"><b>Definition:</b> operation.h:226</div></div>
 <div class="ttc" id="classtvm_1_1te_1_1BaseComputeOpNode_html_a21617a643897727c51ded2b7260df4c3"><div class="ttname"><a href="classtvm_1_1te_1_1BaseComputeOpNode.html#a21617a643897727c51ded2b7260df4c3">tvm::te::BaseComputeOpNode::axis</a></div><div class="ttdeci">Array&lt; IterVar &gt; axis</div><div class="ttdoc">IterVar on each axis. </div><div class="ttdef"><b>Definition:</b> operation.h:207</div></div>
 <div class="ttc" id="fuse_8h_html"><div class="ttname"><a href="fuse_8h.html">fuse.h</a></div><div class="ttdoc">Fuse operation. </div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="classtvm_1_1te_1_1Tensor_html"><div class="ttname"><a href="classtvm_1_1te_1_1Tensor.html">tvm::te::Tensor</a></div><div class="ttdoc">Tensor structure representing a possible input, or intermediate computation result. </div><div class="ttdef"><b>Definition:</b> tensor.h:102</div></div>
 <div class="ttc" id="operation_8h_html"><div class="ttname"><a href="operation_8h.html">operation.h</a></div><div class="ttdoc">Operation node can generate one or multiple Tensors. </div></div>
 <div class="ttc" id="namespacetvm_1_1topi_html_ad5dcb2721aae4c9b84552b85db6e6cae"><div class="ttname"><a href="namespacetvm_1_1topi.html#ad5dcb2721aae4c9b84552b85db6e6cae">tvm::topi::is_broadcast</a></div><div class="ttdeci">bool is_broadcast(std::string tag)</div><div class="ttdef"><b>Definition:</b> tags.h:47</div></div>
diff --git a/docs/reference/api/doxygen/x86_2default_8h_source.html b/docs/reference/api/doxygen/x86_2default_8h_source.html
index 734a30f640..fa869c8dd0 100644
--- a/docs/reference/api/doxygen/x86_2default_8h_source.html
+++ b/docs/reference/api/doxygen/x86_2default_8h_source.html
@@ -78,7 +78,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1runtime_1_1Array_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1Array.html">tvm::runtime::Array</a></div><div class="ttdoc">Array, container representing a contiguous sequence of ObjectRefs. </div><div class="ttdef"><b>Definition:</b> array.h:289</div></div>
 <div class="ttc" id="classtvm_1_1te_1_1ComputeOpNode_html"><div class="ttname"><a href="classtvm_1_1te_1_1ComputeOpNode.html">tvm::te::ComputeOpNode</a></div><div class="ttdoc">A Compute op that compute a tensor on certain domain. </div><div class="ttdef"><b>Definition:</b> operation.h:226</div></div>
 <div class="ttc" id="fuse_8h_html"><div class="ttname"><a href="fuse_8h.html">fuse.h</a></div><div class="ttdoc">Fuse operation. </div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="operation_8h_html"><div class="ttname"><a href="operation_8h.html">operation.h</a></div><div class="ttdoc">Operation node can generate one or multiple Tensors. </div></div>
 <div class="ttc" id="classtvm_1_1te_1_1Fuse_html"><div class="ttname"><a href="classtvm_1_1te_1_1Fuse.html">tvm::te::Fuse</a></div><div class="ttdoc">Managed reference to FuseNode. </div><div class="ttdef"><b>Definition:</b> schedule.h:774</div></div>
 <div class="ttc" id="tags_8h_html"><div class="ttname"><a href="tags_8h.html">tags.h</a></div><div class="ttdoc">External function interface to rocBLAS libraries. </div></div>
diff --git a/docs/reference/api/doxygen/x86_2injective_8h_source.html b/docs/reference/api/doxygen/x86_2injective_8h_source.html
index 0a2f6d675e..22ca30bf1f 100644
--- a/docs/reference/api/doxygen/x86_2injective_8h_source.html
+++ b/docs/reference/api/doxygen/x86_2injective_8h_source.html
@@ -75,7 +75,7 @@ $(function() {
 <div class="ttc" id="classtvm_1_1runtime_1_1Array_html"><div class="ttname"><a href="classtvm_1_1runtime_1_1Array.html">tvm::runtime::Array</a></div><div class="ttdoc">Array, container representing a contiguous sequence of ObjectRefs. </div><div class="ttdef"><b>Definition:</b> array.h:289</div></div>
 <div class="ttc" id="classtvm_1_1te_1_1ComputeOpNode_html"><div class="ttname"><a href="classtvm_1_1te_1_1ComputeOpNode.html">tvm::te::ComputeOpNode</a></div><div class="ttdoc">A Compute op that compute a tensor on certain domain. </div><div class="ttdef"><b>Definition:</b> operation.h:226</div></div>
 <div class="ttc" id="fuse_8h_html"><div class="ttname"><a href="fuse_8h.html">fuse.h</a></div><div class="ttdoc">Fuse operation. </div></div>
-<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:181</div></div>
+<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
 <div class="ttc" id="classtvm_1_1te_1_1Tensor_html"><div class="ttname"><a href="classtvm_1_1te_1_1Tensor.html">tvm::te::Tensor</a></div><div class="ttdoc">Tensor structure representing a possible input, or intermediate computation result. </div><div class="ttdef"><b>Definition:</b> tensor.h:102</div></div>
 <div class="ttc" id="operation_8h_html"><div class="ttname"><a href="operation_8h.html">operation.h</a></div><div class="ttdoc">Operation node can generate one or multiple Tensors. </div></div>
 <div class="ttc" id="classtvm_1_1te_1_1Fuse_html"><div class="ttname"><a href="classtvm_1_1te_1_1Fuse.html">tvm::te::Fuse</a></div><div class="ttdoc">Managed reference to FuseNode. </div><div class="ttdef"><b>Definition:</b> schedule.h:774</div></div>
diff --git a/docs/reference/api/python/auto_scheduler.html b/docs/reference/api/python/auto_scheduler.html
index 3b8a203fe6..077c2c4200 100644
--- a/docs/reference/api/python/auto_scheduler.html
+++ b/docs/reference/api/python/auto_scheduler.html
@@ -1602,7 +1602,7 @@ history states as starting point to perform Evolutionary Search).</p></li>
 
 <dl class="py class">
 <dt class="sig sig-object py" id="tvm.auto_scheduler.SketchPolicy">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">SketchPolicy</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">program_cost_model</span></span><span class="o"><span class="pre">=</span></span><span class="defau [...]
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">SketchPolicy</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">program_cost_model</span></span><span class="o"><span class="pre">=</span></span><span class="defau [...]
 <dd><p>The search policy that searches in a hierarchical search space defined by sketches.
 The policy randomly samples programs from the space defined by sketches and uses evolutionary
 search to fine-tune them.</p>
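
A minimal construction sketch, assuming a SearchTask named `task` has already been defined elsewhere (e.g. via auto_scheduler.SearchTask); the cost model shown is the default RandomModel, and XGBModel can be swapped in for learned guidance:

    from tvm import auto_scheduler

    # `task` is assumed to exist, e.g.:
    # task = auto_scheduler.SearchTask(func=..., args=..., target="llvm")
    policy = auto_scheduler.SketchPolicy(
        task,
        program_cost_model=auto_scheduler.RandomModel(),  # default cost model
        verbose=1,
    )
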
@@ -1886,7 +1886,7 @@ Candidates:
 
 <dl class="py function">
 <dt class="sig sig-object py" id="tvm.auto_scheduler.auto_schedule">
-<span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">auto_schedule</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">search_policy</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em clas [...]
+<span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">auto_schedule</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">search_policy</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em clas [...]
 <dd><p>THIS API IS DEPRECATED.</p>
 <p>Run auto scheduling search for a task.</p>
 <dl class="field-list simple">
diff --git a/docs/reference/api/python/target.html b/docs/reference/api/python/target.html
index a6328f0f2e..b3ad7f888d 100644
--- a/docs/reference/api/python/target.html
+++ b/docs/reference/api/python/target.html
@@ -520,25 +520,28 @@ We can also use other specific function in this module to create specific target
 <tr class="row-even"><td><p><a class="reference internal" href="#tvm.target.Target.get_kind_attr" title="tvm.target.Target.get_kind_attr"><code class="xref py py-obj docutils literal notranslate"><span class="pre">get_kind_attr</span></code></a>(attr_name)</p></td>
 <td><p>Get additional attribute about the target kind.</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="#tvm.target.Target.list_kinds" title="tvm.target.Target.list_kinds"><code class="xref py py-obj docutils literal notranslate"><span class="pre">list_kinds</span></code></a>()</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="#tvm.target.Target.get_target_device_type" title="tvm.target.Target.get_target_device_type"><code class="xref py py-obj docutils literal notranslate"><span class="pre">get_target_device_type</span></code></a>()</p></td>
+<td><p>Returns the device_type for this target.</p></td>
+</tr>
+<tr class="row-even"><td><p><a class="reference internal" href="#tvm.target.Target.list_kinds" title="tvm.target.Target.list_kinds"><code class="xref py py-obj docutils literal notranslate"><span class="pre">list_kinds</span></code></a>()</p></td>
 <td><p>Returns the list of available target names.</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="#tvm.target.Target.canon_target" title="tvm.target.Target.canon_target"><code class="xref py py-obj docutils literal notranslate"><span class="pre">canon_target</span></code></a>(target)</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="#tvm.target.Target.canon_target" title="tvm.target.Target.canon_target"><code class="xref py py-obj docutils literal notranslate"><span class="pre">canon_target</span></code></a>(target)</p></td>
 <td><p>Given a single target-like object, returns the TVM Target object representing it.</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="#tvm.target.Target.canon_target_and_host" title="tvm.target.Target.canon_target_and_host"><code class="xref py py-obj docutils literal notranslate"><span class="pre">canon_target_and_host</span></code></a>(target[, target_host])</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="#tvm.target.Target.canon_target_and_host" title="tvm.target.Target.canon_target_and_host"><code class="xref py py-obj docutils literal notranslate"><span class="pre">canon_target_and_host</span></code></a>(target[, target_host])</p></td>
 <td><p>Returns a TVM Target capturing target and target_host.</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="#tvm.target.Target.canon_multi_target" title="tvm.target.Target.canon_multi_target"><code class="xref py py-obj docutils literal notranslate"><span class="pre">canon_multi_target</span></code></a>(multi_targets)</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="#tvm.target.Target.canon_multi_target" title="tvm.target.Target.canon_multi_target"><code class="xref py py-obj docutils literal notranslate"><span class="pre">canon_multi_target</span></code></a>(multi_targets)</p></td>
 <td><p>Given a single target-like object, or a collection-like object of target-like objects, returns a TVM Array of TVM Target objects representing them.</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="#tvm.target.Target.canon_multi_target_and_host" title="tvm.target.Target.canon_multi_target_and_host"><code class="xref py py-obj docutils literal notranslate"><span class="pre">canon_multi_target_and_host</span></code></a>(target[, ...])</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="#tvm.target.Target.canon_multi_target_and_host" title="tvm.target.Target.canon_multi_target_and_host"><code class="xref py py-obj docutils literal notranslate"><span class="pre">canon_multi_target_and_host</span></code></a>(target[, ...])</p></td>
 <td><p>Returns a TVM Array&lt;Target&gt; capturing target and target_host.</p></td>
 </tr>
-<tr class="row-even"><td><p><a class="reference internal" href="#tvm.target.Target.canon_target_map_and_host" title="tvm.target.Target.canon_target_map_and_host"><code class="xref py py-obj docutils literal notranslate"><span class="pre">canon_target_map_and_host</span></code></a>(target_map[, ...])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="#tvm.target.Target.canon_target_map_and_host" title="tvm.target.Target.canon_target_map_and_host"><code class="xref py py-obj docutils literal notranslate"><span class="pre">canon_target_map_and_host</span></code></a>(target_map[, ...])</p></td>
 <td><p>Returns target_map as a map from TVM Targets in canonical form to IRModules.</p></td>
 </tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="#tvm.target.Target.target_or_current" title="tvm.target.Target.target_or_current"><code class="xref py py-obj docutils literal notranslate"><span class="pre">target_or_current</span></code></a>(target)</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="#tvm.target.Target.target_or_current" title="tvm.target.Target.target_or_current"><code class="xref py py-obj docutils literal notranslate"><span class="pre">target_or_current</span></code></a>(target)</p></td>
 <td><p>Returns target, or the current target in the environment if target is None</p></td>
 </tr>
 </tbody>
@@ -637,6 +640,12 @@ We can also use other specific function in this module to create specific target
 </dl>
 </dd></dl>
 
+<dl class="py method">
+<dt class="sig sig-object py" id="tvm.target.Target.get_target_device_type">
+<span class="sig-name descname"><span class="pre">get_target_device_type</span></span><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#tvm.target.Target.get_target_device_type" title="Permalink to this definition">¶</a></dt>
+<dd><p>Returns the device_type for this target.</p>
+</dd></dl>
+
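
A short usage sketch of the new method; the returned value follows DLPack's device-type numbering (e.g. CPU is 1, CUDA is 2):

    import tvm

    # The reported value is the DLPack device type of the target's kind.
    cpu_target = tvm.target.Target("llvm")
    gpu_target = tvm.target.Target("cuda")
    assert cpu_target.get_target_device_type() == tvm.cpu().device_type   # 1
    assert gpu_target.get_target_device_type() == tvm.cuda().device_type  # 2
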
 <dl class="py method">
 <dt class="sig sig-object py" id="tvm.target.Target.list_kinds">
 <em class="property"><span class="pre">static</span> </em><span class="sig-name descname"><span class="pre">list_kinds</span></span><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#tvm.target.Target.list_kinds" title="Permalink to this definition">¶</a></dt>
diff --git a/docs/reference/api/typedoc/classes/bytestreamreader.html b/docs/reference/api/typedoc/classes/bytestreamreader.html
index 7c052dcb3d..f3c9eea86a 100644
--- a/docs/reference/api/typedoc/classes/bytestreamreader.html
+++ b/docs/reference/api/typedoc/classes/bytestreamreader.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/rpc_server.ts#L43">rpc_server.ts:43</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/rpc_server.ts#L43">rpc_server.ts:43</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -141,7 +141,7 @@
 					<div class="tsd-signature tsd-kind-icon">bytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Uint8Array</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/rpc_server.ts#L43">rpc_server.ts:43</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/rpc_server.ts#L43">rpc_server.ts:43</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -151,7 +151,7 @@
 					<div class="tsd-signature tsd-kind-icon">offset<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 0</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/rpc_server.ts#L42">rpc_server.ts:42</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/rpc_server.ts#L42">rpc_server.ts:42</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -168,7 +168,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/rpc_server.ts#L63">rpc_server.ts:63</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/rpc_server.ts#L63">rpc_server.ts:63</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">Uint8Array</span></h4>
@@ -185,7 +185,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/rpc_server.ts#L49">rpc_server.ts:49</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/rpc_server.ts#L49">rpc_server.ts:49</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -202,7 +202,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/rpc_server.ts#L57">rpc_server.ts:57</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/rpc_server.ts#L57">rpc_server.ts:57</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
diff --git a/docs/reference/api/typedoc/classes/cachedcallstack.html b/docs/reference/api/typedoc/classes/cachedcallstack.html
index b73be2bef6..43713f41fd 100644
--- a/docs/reference/api/typedoc/classes/cachedcallstack.html
+++ b/docs/reference/api/typedoc/classes/cachedcallstack.html
@@ -144,7 +144,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/memory.ts#L223">memory.ts:223</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/memory.ts#L223">memory.ts:223</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -172,7 +172,7 @@
 					<div class="tsd-signature tsd-kind-icon">temp<wbr>Args<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><a href="../interfaces/disposable.html" class="tsd-signature-type">Disposable</a><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = []</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/memory.ts#L208">memory.ts:208</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/memory.ts#L208">memory.ts:208</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -194,7 +194,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/memory.ts#L312">memory.ts:312</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/memory.ts#L312">memory.ts:312</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -226,7 +226,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/memory.ts#L284">memory.ts:284</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/memory.ts#L284">memory.ts:284</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -262,7 +262,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/memory.ts#L388">memory.ts:388</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/memory.ts#L388">memory.ts:388</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -300,7 +300,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/memory.ts#L376">memory.ts:376</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/memory.ts#L376">memory.ts:376</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -340,7 +340,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/memory.ts#L267">memory.ts:267</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/memory.ts#L267">memory.ts:267</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -373,7 +373,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/memory.ts#L243">memory.ts:243</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/memory.ts#L243">memory.ts:243</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -390,7 +390,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/memory.ts#L321">memory.ts:321</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/memory.ts#L321">memory.ts:321</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -422,7 +422,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/memory.ts#L252">memory.ts:252</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/memory.ts#L252">memory.ts:252</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -444,7 +444,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/memory.ts#L359">memory.ts:359</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/memory.ts#L359">memory.ts:359</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -470,7 +470,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/memory.ts#L342">memory.ts:342</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/memory.ts#L342">memory.ts:342</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -496,7 +496,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/memory.ts#L350">memory.ts:350</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/memory.ts#L350">memory.ts:350</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -522,7 +522,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/memory.ts#L326">memory.ts:326</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/memory.ts#L326">memory.ts:326</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -548,7 +548,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/memory.ts#L363">memory.ts:363</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/memory.ts#L363">memory.ts:363</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -574,7 +574,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/memory.ts#L346">memory.ts:346</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/memory.ts#L346">memory.ts:346</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -600,7 +600,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/memory.ts#L334">memory.ts:334</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/memory.ts#L334">memory.ts:334</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
diff --git a/docs/reference/api/typedoc/classes/dldatatype.html b/docs/reference/api/typedoc/classes/dldatatype.html
index a737b4c802..df0eeeedc6 100644
--- a/docs/reference/api/typedoc/classes/dldatatype.html
+++ b/docs/reference/api/typedoc/classes/dldatatype.html
@@ -119,7 +119,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L262">runtime.ts:262</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L262">runtime.ts:262</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -147,7 +147,7 @@
 					<div class="tsd-signature tsd-kind-icon">bits<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L260">runtime.ts:260</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L260">runtime.ts:260</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -162,7 +162,7 @@
 					<div class="tsd-signature tsd-kind-icon">code<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L258">runtime.ts:258</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L258">runtime.ts:258</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -177,7 +177,7 @@
 					<div class="tsd-signature tsd-kind-icon">lanes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L262">runtime.ts:262</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L262">runtime.ts:262</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -199,7 +199,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L279">runtime.ts:279</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L279">runtime.ts:279</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -216,7 +216,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L270">runtime.ts:270</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L270">runtime.ts:270</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">string</span></h4>
diff --git a/docs/reference/api/typedoc/classes/dldevice.html b/docs/reference/api/typedoc/classes/dldevice.html
index ead21e77cf..58a87e0af7 100644
--- a/docs/reference/api/typedoc/classes/dldevice.html
+++ b/docs/reference/api/typedoc/classes/dldevice.html
@@ -118,7 +118,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L202">runtime.ts:202</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L202">runtime.ts:202</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -146,7 +146,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<wbr>Id<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L200">runtime.ts:200</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L200">runtime.ts:200</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -161,7 +161,7 @@
 					<div class="tsd-signature tsd-kind-icon">device<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L198">runtime.ts:198</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L198">runtime.ts:198</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -183,7 +183,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L223">runtime.ts:223</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L223">runtime.ts:223</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -205,7 +205,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L230">runtime.ts:230</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L230">runtime.ts:230</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">string</span></h4>
diff --git a/docs/reference/api/typedoc/classes/environment.html b/docs/reference/api/typedoc/classes/environment.html
index b64afca084..46f1708360 100644
--- a/docs/reference/api/typedoc/classes/environment.html
+++ b/docs/reference/api/typedoc/classes/environment.html
@@ -125,7 +125,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/environment.ts#L86">environment.ts:86</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/environment.ts#L86">environment.ts:86</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -169,7 +169,7 @@
 					<aside class="tsd-sources">
 						<p>Implementation of <a href="../interfaces/libraryprovider.html">LibraryProvider</a>.<a href="../interfaces/libraryprovider.html#imports">imports</a></p>
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/environment.ts#L70">environment.ts:70</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/environment.ts#L70">environment.ts:70</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -179,7 +179,7 @@
 					<div class="tsd-signature tsd-kind-icon">logger<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>msg<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&gt; </span><span class="tsd-signature-type">void</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/environment.ts#L69">environment.ts:69</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/environment.ts#L69">environment.ts:69</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-type-declaration">
@@ -210,7 +210,7 @@
 					<div class="tsd-signature tsd-kind-icon">packedCFunc<wbr>Table<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">ctypes.FTVMWasmPackedCFunc</span><span class="tsd-signature-symbol"> | </span><span class="tsd-signature-type">undefined</span><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = [undefined,]</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/environment.ts#L78">environment.ts:78</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/environment.ts#L78">environment.ts:78</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -228,7 +228,7 @@
 					<div class="tsd-signature tsd-kind-icon">packedCFunc<wbr>Table<wbr>Free<wbr>Id<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">&gt;</span><span class="tsd-signature-symbol"> = []</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/environment.ts#L84">environment.ts:84</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/environment.ts#L84">environment.ts:84</a></li>
 						</ul>
 					</aside>
 					<div class="tsd-comment tsd-typography">
@@ -250,7 +250,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/environment.ts#L105">environment.ts:105</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/environment.ts#L105">environment.ts:105</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/ffilibrary.html b/docs/reference/api/typedoc/classes/ffilibrary.html
index 403ed059c6..eee0768b4e 100644
--- a/docs/reference/api/typedoc/classes/ffilibrary.html
+++ b/docs/reference/api/typedoc/classes/ffilibrary.html
@@ -131,7 +131,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L49">runtime.ts:49</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L49">runtime.ts:49</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -156,7 +156,7 @@
 					<div class="tsd-signature tsd-kind-icon">exports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol">&lt;</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">Function</span><span class="tsd-signature-symbol">&gt;</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L46">runtime.ts:46</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L46">runtime.ts:46</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -166,7 +166,7 @@
 					<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L45">runtime.ts:45</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L45">runtime.ts:45</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -176,7 +176,7 @@
 					<div class="tsd-signature tsd-kind-icon">wasm32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">boolean</span></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L44">runtime.ts:44</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L44">runtime.ts:44</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -186,7 +186,7 @@
 					<div class="tsd-signature tsd-kind-icon">webGPUContext<span class="tsd-signature-symbol">:</span> <a href="webgpucontext.html" class="tsd-signature-type">WebGPUContext</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L47">runtime.ts:47</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L47">runtime.ts:47</a></li>
 						</ul>
 					</aside>
 				</section>
@@ -203,7 +203,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L76">runtime.ts:76</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L76">runtime.ts:76</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -226,7 +226,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L66">runtime.ts:66</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L66">runtime.ts:66</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -243,7 +243,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L84">runtime.ts:84</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L84">runtime.ts:84</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <a href="cachedcallstack.html" class="tsd-signature-type">CachedCallStack</a></h4>
@@ -260,7 +260,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L95">runtime.ts:95</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L95">runtime.ts:95</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-parameters-title">Parameters</h4>
@@ -283,7 +283,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L72">runtime.ts:72</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L72">runtime.ts:72</a></li>
 								</ul>
 							</aside>
 							<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
diff --git a/docs/reference/api/typedoc/classes/graphexecutor.html b/docs/reference/api/typedoc/classes/graphexecutor.html
index b0c1db04af..4530e63675 100644
--- a/docs/reference/api/typedoc/classes/graphexecutor.html
+++ b/docs/reference/api/typedoc/classes/graphexecutor.html
@@ -130,7 +130,7 @@
 						<li class="tsd-description">
 							<aside class="tsd-sources">
 								<ul>
-									<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L583">runtime.ts:583</a></li>
+									<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L583">runtime.ts:583</a></li>
 								</ul>
 							</aside>
 							<div class="tsd-comment tsd-typography">
@@ -162,7 +162,7 @@
 					<div class="tsd-signature tsd-kind-icon">module<span class="tsd-signature-symbol">:</span> <a href="module.html" class="tsd-signature-type">Module</a></div>
 					<aside class="tsd-sources">
 						<ul>
-							<li>Defined in <a href="https://github.com/apache/tvm/blob/4e4089edd/web/src/runtime.ts#L579">runtime.ts:579</a></li>
+							<li>Defined in <a href="https://github.com/apache/tvm/blob/d4bf9ecf5/web/src/runtime.ts#L579">runtime.ts:579</a></li>
... 2468 lines suppressed ...