Posted to commits@tvm.apache.org by tq...@apache.org on 2023/03/15 00:46:06 UTC
[tvm-site] branch asf-site updated: deploying docs (apache/tvm@ce1fa8908f626e58f245966dd0a2e2540b75dace)
This is an automated email from the ASF dual-hosted git repository.
tqchen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/tvm-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new fef9942028 deploying docs (apache/tvm@ce1fa8908f626e58f245966dd0a2e2540b75dace)
fef9942028 is described below
commit fef9942028bdb63a8d095da50bd2087855433b03
Author: tvm-bot <95...@users.noreply.github.com>
AuthorDate: Wed Mar 15 00:45:57 2023 +0000
deploying docs (apache/tvm@ce1fa8908f626e58f245966dd0a2e2540b75dace)
---
docs/_images/sphx_glr_micro_train_001.png | Bin 307018 -> 361229 bytes
docs/_images/sphx_glr_micro_train_thumb.png | Bin 22335 -> 24545 bytes
.../how_to/compile_models/from_darknet.rst.txt | 2 +-
.../how_to/compile_models/from_keras.rst.txt | 2 +-
.../how_to/compile_models/from_mxnet.rst.txt | 2 +-
.../how_to/compile_models/from_oneflow.rst.txt | 2 +-
.../how_to/compile_models/from_pytorch.rst.txt | 2 +-
.../how_to/compile_models/from_tensorflow.rst.txt | 2 +-
.../compile_models/sg_execution_times.rst.txt | 22 +-
.../deploy_models/deploy_model_on_adreno.rst.txt | 4 +-
.../deploy_models/deploy_model_on_android.rst.txt | 2 +-
.../deploy_object_detection_pytorch.rst.txt | 4 +-
.../deploy_models/deploy_prequantized.rst.txt | 6 +-
.../deploy_prequantized_tflite.rst.txt | 4 +-
.../how_to/deploy_models/deploy_quantized.rst.txt | 2 +-
.../deploy_models/deploy_ssd_gluoncv.rst.txt | 4 +-
.../deploy_models/sg_execution_times.rst.txt | 20 +-
.../extend_tvm/bring_your_own_datatypes.rst.txt | 2 +-
.../how_to/extend_tvm/sg_execution_times.rst.txt | 8 +-
.../how_to/extend_tvm/use_pass_instrument.rst.txt | 16 +-
.../optimize_operators/opt_conv_cuda.rst.txt | 2 +-
.../optimize_operators/opt_conv_tensorcore.rst.txt | 2 +-
.../how_to/optimize_operators/opt_gemm.rst.txt | 16 +-
.../optimize_operators/sg_execution_times.rst.txt | 8 +-
.../sg_execution_times.rst.txt | 14 +-
.../tune_conv2d_layer_cuda.rst.txt | 986 ++-
.../tune_network_cuda.rst.txt | 4 +-
.../tune_network_x86.rst.txt | 4 +-
.../tune_sparse_x86.rst.txt | 78 +-
.../tune_with_autotvm/sg_execution_times.rst.txt | 6 +-
.../tune_with_autotvm/tune_conv2d_cuda.rst.txt | 676 +-
.../work_with_microtvm/micro_autotune.rst.txt | 18 +-
.../work_with_microtvm/micro_pytorch.rst.txt | 4 +-
.../how_to/work_with_microtvm/micro_train.rst.txt | 18 +-
.../work_with_microtvm/sg_execution_times.rst.txt | 14 +-
.../work_with_relay/sg_execution_times.rst.txt | 8 +-
.../how_to/work_with_schedules/intrin_math.rst.txt | 2 +-
.../work_with_schedules/sg_execution_times.rst.txt | 18 +-
.../tutorials/autotvm/sg_execution_times.rst.txt | 4 +-
.../frontend/deploy_classification.rst.txt | 2 +-
.../tutorials/frontend/deploy_detection.rst.txt | 2 +-
.../tutorials/frontend/sg_execution_times.rst.txt | 6 +-
.../tutorials/optimize/sg_execution_times.rst.txt | 6 +-
.../topic/vta/tutorials/sg_execution_times.rst.txt | 4 +-
.../tutorial/auto_scheduler_matmul_x86.rst.txt | 11 +-
docs/_sources/tutorial/autotvm_matmul_x86.rst.txt | 20 +-
docs/_sources/tutorial/autotvm_relay_x86.rst.txt | 58 +-
.../tutorial/cross_compilation_and_rpc.rst.txt | 2 +-
docs/_sources/tutorial/intro_topi.rst.txt | 2 +-
docs/_sources/tutorial/sg_execution_times.rst.txt | 18 +-
.../tutorial/tensor_expr_get_started.rst.txt | 43 +-
docs/commit_hash | 2 +-
docs/how_to/compile_models/from_darknet.html | 2 +-
docs/how_to/compile_models/from_keras.html | 2 +-
docs/how_to/compile_models/from_mxnet.html | 2 +-
docs/how_to/compile_models/from_oneflow.html | 13 +-
docs/how_to/compile_models/from_pytorch.html | 9 +-
docs/how_to/compile_models/from_tensorflow.html | 2 +-
docs/how_to/compile_models/sg_execution_times.html | 22 +-
.../deploy_models/deploy_model_on_adreno.html | 4 +-
.../deploy_models/deploy_model_on_android.html | 2 +-
.../deploy_object_detection_pytorch.html | 46 +-
docs/how_to/deploy_models/deploy_prequantized.html | 10 +-
.../deploy_models/deploy_prequantized_tflite.html | 4 +-
docs/how_to/deploy_models/deploy_quantized.html | 2 +-
docs/how_to/deploy_models/deploy_ssd_gluoncv.html | 38 +-
docs/how_to/deploy_models/sg_execution_times.html | 20 +-
.../extend_tvm/bring_your_own_datatypes.html | 2 +-
docs/how_to/extend_tvm/sg_execution_times.html | 8 +-
docs/how_to/extend_tvm/use_pass_instrument.html | 16 +-
docs/how_to/optimize_operators/opt_conv_cuda.html | 2 +-
.../optimize_operators/opt_conv_tensorcore.html | 2 +-
docs/how_to/optimize_operators/opt_gemm.html | 16 +-
.../optimize_operators/sg_execution_times.html | 8 +-
.../sg_execution_times.html | 14 +-
.../tune_conv2d_layer_cuda.html | 986 ++-
.../tune_with_autoscheduler/tune_network_cuda.html | 4 +-
.../tune_with_autoscheduler/tune_network_x86.html | 4 +-
.../tune_with_autoscheduler/tune_sparse_x86.html | 78 +-
.../tune_with_autotvm/sg_execution_times.html | 6 +-
.../how_to/tune_with_autotvm/tune_conv2d_cuda.html | 676 +-
docs/how_to/work_with_microtvm/micro_autotune.html | 18 +-
docs/how_to/work_with_microtvm/micro_pytorch.html | 6 +-
docs/how_to/work_with_microtvm/micro_train.html | 16 +-
.../work_with_microtvm/sg_execution_times.html | 14 +-
.../how_to/work_with_relay/sg_execution_times.html | 8 +-
docs/how_to/work_with_schedules/intrin_math.html | 2 +-
.../work_with_schedules/sg_execution_times.html | 22 +-
docs/install/nnpack.html | 12 +-
docs/reference/api/doxygen/annotated.html | 43 +-
docs/reference/api/doxygen/classes.html | 456 +-
.../doxygen/classtvm_1_1runtime_1_1ObjectRef.html | 2 +-
...asstvm_1_1runtime_1_1ObjectRef__coll__graph.svg | 12 +-
.../classtvm_1_1te_1_1ScheduleContext-members.html | 81 +
.../doxygen/classtvm_1_1te_1_1ScheduleContext.html | 126 +
...sstvm_1_1te_1_1ScheduleContext__coll__graph.svg | 23 +
.../classtvm_1_1te_1_1ScheduleNode-members.html | 37 +-
.../doxygen/classtvm_1_1te_1_1ScheduleNode.html | 61 +-
...classtvm_1_1te_1_1ScheduleNode__coll__graph.svg | 259 +-
...sstvm_1_1te_1_1ScheduleNode__inherit__graph.svg | 105 +-
.../doxygen/classtvm_1_1te_1_1Stage-members.html | 2 +-
.../api/doxygen/classtvm_1_1te_1_1Stage.html | 23 +-
.../classtvm_1_1te_1_1StageNode-members.html | 83 +-
.../api/doxygen/classtvm_1_1te_1_1StageNode.html | 21 +-
.../classtvm_1_1te_1_1StageNode__coll__graph.svg | 874 ++-
...classtvm_1_1te_1_1StageNode__inherit__graph.svg | 2 +-
.../api/doxygen/compute__dag_8h_source.html | 2 +-
.../api/doxygen/cuda_2dense_8h_source.html | 4 +-
.../api/doxygen/cuda_2injective_8h_source.html | 6 +-
.../api/doxygen/cuda_2pooling_8h_source.html | 6 +-
.../api/doxygen/cuda_2reduction_8h_source.html | 6 +-
.../api/doxygen/cuda_2softmax_8h_source.html | 4 +-
docs/reference/api/doxygen/functions_a.html | 3 +
docs/reference/api/doxygen/functions_func_s.html | 2 +-
docs/reference/api/doxygen/functions_func_t.html | 4 +-
docs/reference/api/doxygen/functions_func_u.html | 2 +-
docs/reference/api/doxygen/functions_k.html | 3 +
docs/reference/api/doxygen/functions_o.html | 2 +-
docs/reference/api/doxygen/functions_p.html | 5 +-
docs/reference/api/doxygen/functions_rela.html | 3 +
docs/reference/api/doxygen/functions_s.html | 13 +-
docs/reference/api/doxygen/functions_t.html | 10 +-
docs/reference/api/doxygen/functions_u.html | 2 +-
docs/reference/api/doxygen/functions_v.html | 10 +-
docs/reference/api/doxygen/functions_vars_a.html | 3 +
docs/reference/api/doxygen/functions_vars_k.html | 3 +
docs/reference/api/doxygen/functions_vars_p.html | 3 +
docs/reference/api/doxygen/functions_vars_s.html | 3 +
docs/reference/api/doxygen/functions_w.html | 3 +
docs/reference/api/doxygen/fuse_8h_source.html | 2 +-
.../api/doxygen/generic_2default_8h_source.html | 6 +-
.../api/doxygen/generic_2extern_8h_source.html | 4 +-
.../api/doxygen/generic_2injective_8h_source.html | 6 +-
docs/reference/api/doxygen/hierarchy.html | 1123 ++--
docs/reference/api/doxygen/inherit_graph_112.svg | 32 +-
docs/reference/api/doxygen/inherit_graph_12.svg | 16 +-
docs/reference/api/doxygen/inherit_graph_121.svg | 6648 ++++++++++----------
docs/reference/api/doxygen/inherit_graph_196.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_197.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_198.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_199.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_200.svg | 15 +-
docs/reference/api/doxygen/inherit_graph_201.svg | 17 +-
docs/reference/api/doxygen/inherit_graph_202.svg | 17 +-
docs/reference/api/doxygen/inherit_graph_203.svg | 15 +-
docs/reference/api/doxygen/inherit_graph_204.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_205.svg | 15 +-
docs/reference/api/doxygen/inherit_graph_206.svg | 14 +-
docs/reference/api/doxygen/inherit_graph_207.svg | 17 +-
docs/reference/api/doxygen/inherit_graph_208.svg | 125 +-
docs/reference/api/doxygen/inherit_graph_209.svg | 123 +-
docs/reference/api/doxygen/inherit_graph_210.svg | 79 +-
docs/reference/api/doxygen/inherit_graph_211.svg | 15 +-
docs/reference/api/doxygen/inherit_graph_212.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_213.svg | 18 +-
docs/reference/api/doxygen/inherit_graph_214.svg | 19 +-
docs/reference/api/doxygen/inherit_graph_215.svg | 15 +-
docs/reference/api/doxygen/inherit_graph_216.svg | 15 +-
docs/reference/api/doxygen/inherit_graph_217.svg | 29 +-
docs/reference/api/doxygen/inherit_graph_218.svg | 24 +-
docs/reference/api/doxygen/inherit_graph_219.svg | 30 +-
docs/reference/api/doxygen/inherit_graph_220.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_221.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_222.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_223.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_224.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_225.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_226.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_227.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_228.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_229.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_230.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_231.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_232.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_233.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_234.svg | 12 +-
docs/reference/api/doxygen/inherit_graph_235.svg | 12 +-
...inherit_graph_235.svg => inherit_graph_236.svg} | 0
docs/reference/api/doxygen/inherit_graph_41.svg | 16 +-
docs/reference/api/doxygen/inherit_graph_45.svg | 8 +-
docs/reference/api/doxygen/inherit_graph_95.svg | 8 +-
docs/reference/api/doxygen/inherits.html | 84 +-
docs/reference/api/doxygen/map_8h_source.html | 2 +-
docs/reference/api/doxygen/namespacetvm_1_1te.html | 3 +
.../api/doxygen/op__strategy_8h_source.html | 4 +-
.../doxygen/relay_2op__attr__types_8h_source.html | 2 +-
.../api/doxygen/rocm_2dense_8h_source.html | 2 +-
.../api/doxygen/rocm_2injective_8h_source.html | 2 +-
.../api/doxygen/rocm_2pooling_8h_source.html | 2 +-
.../api/doxygen/rocm_2reduction_8h_source.html | 2 +-
.../api/doxygen/rocm_2softmax_8h_source.html | 2 +-
docs/reference/api/doxygen/search/all_10.js | 3 +-
docs/reference/api/doxygen/search/all_11.js | 1 +
docs/reference/api/doxygen/search/all_13.js | 8 +-
docs/reference/api/doxygen/search/all_14.js | 12 +-
docs/reference/api/doxygen/search/all_15.js | 14 +-
docs/reference/api/doxygen/search/all_16.js | 6 +-
docs/reference/api/doxygen/search/all_17.js | 6 +-
docs/reference/api/doxygen/search/all_18.js | 1 +
docs/reference/api/doxygen/search/all_2.js | 2 +
docs/reference/api/doxygen/search/all_6.js | 2 +-
docs/reference/api/doxygen/search/all_7.js | 2 +-
docs/reference/api/doxygen/search/all_9.js | 2 +-
docs/reference/api/doxygen/search/all_a.js | 2 +-
docs/reference/api/doxygen/search/all_c.js | 1 +
docs/reference/api/doxygen/search/classes_0.js | 1 +
docs/reference/api/doxygen/search/classes_10.js | 3 +-
docs/reference/api/doxygen/search/classes_11.js | 8 +-
docs/reference/api/doxygen/search/classes_13.js | 4 +-
docs/reference/api/doxygen/search/classes_4.js | 2 +-
docs/reference/api/doxygen/search/classes_7.js | 2 +-
docs/reference/api/doxygen/search/classes_8.js | 2 +-
docs/reference/api/doxygen/search/classes_c.js | 1 +
docs/reference/api/doxygen/search/classes_f.js | 2 +-
docs/reference/api/doxygen/search/functions_12.js | 6 +-
docs/reference/api/doxygen/search/functions_13.js | 6 +-
docs/reference/api/doxygen/search/functions_14.js | 4 +-
docs/reference/api/doxygen/search/functions_15.js | 4 +-
docs/reference/api/doxygen/search/functions_16.js | 2 +-
docs/reference/api/doxygen/search/related_11.js | 1 +
docs/reference/api/doxygen/search/variables_1.js | 1 +
docs/reference/api/doxygen/search/variables_11.js | 1 +
docs/reference/api/doxygen/search/variables_a.js | 1 +
docs/reference/api/doxygen/search/variables_f.js | 1 +
docs/reference/api/doxygen/te_2schedule_8h.html | 3 +
.../api/doxygen/te_2schedule_8h_source.html | 171 +-
.../api/doxygen/transform__step_8h_source.html | 2 +-
docs/reference/api/doxygen/x86_2bnn_8h_source.html | 4 +-
.../api/doxygen/x86_2default_8h_source.html | 6 +-
.../api/doxygen/x86_2injective_8h_source.html | 6 +-
docs/reference/api/python/auto_scheduler.html | 4 +-
.../api/typedoc/classes/bytestreamreader.html | 12 +-
.../api/typedoc/classes/cachedcallstack.html | 34 +-
docs/reference/api/typedoc/classes/dldatatype.html | 12 +-
docs/reference/api/typedoc/classes/dldevice.html | 10 +-
.../reference/api/typedoc/classes/environment.html | 12 +-
docs/reference/api/typedoc/classes/ffilibrary.html | 20 +-
docs/reference/api/typedoc/classes/instance.html | 58 +-
docs/reference/api/typedoc/classes/memory.html | 34 +-
docs/reference/api/typedoc/classes/module.html | 10 +-
docs/reference/api/typedoc/classes/ndarray.html | 22 +-
.../api/typedoc/classes/packedfunccell.html | 6 +-
docs/reference/api/typedoc/classes/rpcserver.html | 14 +-
.../api/typedoc/classes/runtimecontext.html | 22 +-
docs/reference/api/typedoc/classes/scalar.html | 6 +-
docs/reference/api/typedoc/classes/tvmarray.html | 16 +-
docs/reference/api/typedoc/classes/tvmobject.html | 12 +-
.../api/typedoc/classes/webgpucontext.html | 12 +-
docs/reference/api/typedoc/enums/argtypecode.html | 30 +-
.../api/typedoc/enums/aynccallbackcode.html | 4 +-
.../api/typedoc/enums/dldatatypecode.html | 8 +-
.../api/typedoc/enums/rpcserverstate.html | 12 +-
docs/reference/api/typedoc/enums/sizeof.html | 18 +-
docs/reference/api/typedoc/index.html | 124 +-
.../api/typedoc/interfaces/disposable.html | 2 +-
.../api/typedoc/interfaces/functioninfo.html | 6 +-
.../api/typedoc/interfaces/libraryprovider.html | 4 +-
docs/searchindex.js | 2 +-
.../vta/tutorials/autotvm/sg_execution_times.html | 4 +-
.../tutorials/frontend/deploy_classification.html | 2 +-
.../vta/tutorials/frontend/deploy_detection.html | 2 +-
.../vta/tutorials/frontend/sg_execution_times.html | 6 +-
.../vta/tutorials/optimize/sg_execution_times.html | 6 +-
docs/topic/vta/tutorials/sg_execution_times.html | 4 +-
docs/tutorial/auto_scheduler_matmul_x86.html | 7 +-
docs/tutorial/autotvm_matmul_x86.html | 20 +-
docs/tutorial/autotvm_relay_x86.html | 274 +-
docs/tutorial/cross_compilation_and_rpc.html | 2 +-
docs/tutorial/intro_topi.html | 2 +-
docs/tutorial/sg_execution_times.html | 22 +-
docs/tutorial/tensor_expr_get_started.html | 39 +-
271 files changed, 9228 insertions(+), 7408 deletions(-)
diff --git a/docs/_images/sphx_glr_micro_train_001.png b/docs/_images/sphx_glr_micro_train_001.png
index 117ca26f92..9c348bf301 100644
Binary files a/docs/_images/sphx_glr_micro_train_001.png and b/docs/_images/sphx_glr_micro_train_001.png differ
diff --git a/docs/_images/sphx_glr_micro_train_thumb.png b/docs/_images/sphx_glr_micro_train_thumb.png
index f942f0e17d..a2ee099931 100644
Binary files a/docs/_images/sphx_glr_micro_train_thumb.png and b/docs/_images/sphx_glr_micro_train_thumb.png differ
diff --git a/docs/_sources/how_to/compile_models/from_darknet.rst.txt b/docs/_sources/how_to/compile_models/from_darknet.rst.txt
index ffcdfdc30c..51a7488e99 100644
--- a/docs/_sources/how_to/compile_models/from_darknet.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_darknet.rst.txt
@@ -318,7 +318,7 @@ The process is no different from other examples.
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 21.836 seconds)
+ **Total running time of the script:** ( 1 minutes 23.547 seconds)
.. _sphx_glr_download_how_to_compile_models_from_darknet.py:
diff --git a/docs/_sources/how_to/compile_models/from_keras.rst.txt b/docs/_sources/how_to/compile_models/from_keras.rst.txt
index c27b36a73f..25f0bb76f6 100644
--- a/docs/_sources/how_to/compile_models/from_keras.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_keras.rst.txt
@@ -232,7 +232,7 @@ Look up prediction top 1 index in 1000 class synset.
.. code-block:: none
Relay top-1 id: 285, class name: Egyptian cat
-
1/1 [==============================] - ETA: 0s
1/1 [==============================] - 1s 976ms/step
+
1/1 [==============================] - ETA: 0s
1/1 [==============================] - 1s 1000ms/step
Keras top-1 id: 285, class name: Egyptian cat
diff --git a/docs/_sources/how_to/compile_models/from_mxnet.rst.txt b/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
index fdec6993d0..5ad44d7ee1 100644
--- a/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
@@ -116,7 +116,7 @@ In this section, we download a pretrained imagenet model and classify an image.
.. code-block:: none
- Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zipaa07d1d4-10e6-48fb-9062-75e0caff6d92 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
+ Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip67f53738-4530-4710-85b5-bf0709b5767c from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
x (1, 3, 224, 224)
diff --git a/docs/_sources/how_to/compile_models/from_oneflow.rst.txt b/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
index 602395c3a3..1d10c8c528 100644
--- a/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
@@ -121,7 +121,7 @@ Load a pretrained OneFlow model and save model
.. code-block:: none
Downloading: "https://oneflow-public.oss-cn-beijing.aliyuncs.com/model_zoo/flowvision/classification/ResNet/resnet18.zip" to /workspace/.oneflow/flowvision_cache/resnet18.zip
-
0%| | 0.00/41.5M [00:00<?, ?B/s]
19%|#9 | 7.99M/41.5M [00:00<00:00, 71.6MB/s]
39%|###8 | 16.0M/41.5M [00:00<00:00, 68.1MB/s]
58%|#####7 | 24.0M/41.5M [00:00<00:00, 57.6MB/s]
77%|#######7 | 32.0M/41.5M [00:00<00:00, 54.4MB/s]
90%|########9 | 37.3M/41.5M [00:00<00:00, 45.6MB/s]
100%|##########| 41.5M/41.5M [00:00<00:00, 52.2MB/s]
+
0%| | 0.00/41.5M [00:00<?, ?B/s]
19%|#9 | 7.99M/41.5M [00:00<00:00, 50.6MB/s]
35%|###4 | 14.3M/41.5M [00:00<00:00, 56.6MB/s]
48%|####7 | 19.9M/41.5M [00:00<00:00, 50.1MB/s]
60%|#####9 | 24.7M/41.5M [00:00<00:00, 47.5MB/s]
77%|#######7 | 32.0M/41.5M [00:00<00:00, 55.6MB/s]
90%|######### | 37.4M/41.5M [00:00<00:00, 46.3MB/s]
100%|##########| 41.5M/41.5M [00:00<00:00, 50.1MB/s]
diff --git a/docs/_sources/how_to/compile_models/from_pytorch.rst.txt b/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
index c7e810ebfc..2ff6c9a72b 100644
--- a/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
@@ -101,7 +101,7 @@ Load a pretrained PyTorch model
/venv/apache-tvm-py3.7/lib/python3.7/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=ResNet18_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet18_Weights.DEFAULT` to get the most up-to-date weights.
warnings.warn(msg)
Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
-
0%| | 0.00/44.7M [00:00<?, ?B/s]
18%|#7 | 7.99M/44.7M [00:00<00:00, 69.7MB/s]
36%|###5 | 16.0M/44.7M [00:00<00:00, 65.6MB/s]
58%|#####8 | 26.1M/44.7M [00:00<00:00, 65.7MB/s]
72%|#######2 | 32.3M/44.7M [00:00<00:00, 51.8MB/s]
90%|########9 | 40.0M/44.7M [00:00<00:00, 53.2MB/s]
100%|##########| 44.7M/44.7M [00:00<00:00, 55.3MB/s]
+
0%| | 0.00/44.7M [00:00<?, ?B/s]
35%|###4 | 15.6M/44.7M [00:00<00:00, 163MB/s]
70%|######9 | 31.1M/44.7M [00:00<00:00, 81.4MB/s]
100%|##########| 44.7M/44.7M [00:00<00:00, 99.8MB/s]
diff --git a/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt b/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
index 97401a51a5..1072e8083e 100644
--- a/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
@@ -424,7 +424,7 @@ Run the corresponding model on tensorflow
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 24.757 seconds)
+ **Total running time of the script:** ( 1 minutes 26.948 seconds)
.. _sphx_glr_download_how_to_compile_models_from_tensorflow.py:
diff --git a/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt b/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
index 6da3617140..49748c0407 100644
--- a/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
@@ -5,26 +5,26 @@
Computation times
=================
-**06:43.629** total execution time for **how_to_compile_models** files:
+**06:56.376** total execution time for **how_to_compile_models** files:
+-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_tensorflow.py` (``from_tensorflow.py``) | 01:24.757 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_tensorflow.py` (``from_tensorflow.py``) | 01:26.948 | 0.0 MB |
+-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_darknet.py` (``from_darknet.py``) | 01:21.836 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_darknet.py` (``from_darknet.py``) | 01:23.547 | 0.0 MB |
+-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_paddle.py` (``from_paddle.py``) | 00:56.161 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_paddle.py` (``from_paddle.py``) | 00:59.062 | 0.0 MB |
+-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_oneflow.py` (``from_oneflow.py``) | 00:37.651 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_oneflow.py` (``from_oneflow.py``) | 00:38.822 | 0.0 MB |
+-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_coreml.py` (``from_coreml.py``) | 00:32.012 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_coreml.py` (``from_coreml.py``) | 00:33.508 | 0.0 MB |
+-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_mxnet.py` (``from_mxnet.py``) | 00:31.069 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_mxnet.py` (``from_mxnet.py``) | 00:31.827 | 0.0 MB |
+-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_tflite.py` (``from_tflite.py``) | 00:28.221 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_tflite.py` (``from_tflite.py``) | 00:28.270 | 0.0 MB |
+-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_pytorch.py` (``from_pytorch.py``) | 00:27.074 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_pytorch.py` (``from_pytorch.py``) | 00:27.308 | 0.0 MB |
+-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_keras.py` (``from_keras.py``) | 00:22.145 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_keras.py` (``from_keras.py``) | 00:24.300 | 0.0 MB |
+-----------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_compile_models_from_onnx.py` (``from_onnx.py``) | 00:02.703 | 0.0 MB |
+| :ref:`sphx_glr_how_to_compile_models_from_onnx.py` (``from_onnx.py``) | 00:02.784 | 0.0 MB |
+-----------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
index 73607aef16..3b024b21b4 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_adreno.rst.txt
@@ -727,7 +727,7 @@ well as provides information about the model's performance
Evaluate inference time cost...
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 3332.6629 3331.4334 3340.3620 3329.4122 3.1287
+ 3336.8734 3336.7715 3341.2942 3333.3142 2.2568
@@ -736,7 +736,7 @@ well as provides information about the model's performance
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 4.181 seconds)
+ **Total running time of the script:** ( 1 minutes 5.099 seconds)
.. _sphx_glr_download_how_to_deploy_models_deploy_model_on_adreno.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
index d523d1d725..7ee00a81f0 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
@@ -437,7 +437,7 @@ Execute on TVM
Evaluate inference time cost...
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 16.5683 16.7869 16.8908 15.8084 0.3796
+ 16.8231 16.8033 17.2963 16.1574 0.3658
diff --git a/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt b/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
index 569228f392..5067297be6 100644
--- a/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
@@ -130,7 +130,7 @@ Load pre-trained maskrcnn from torchvision and do tracing
/venv/apache-tvm-py3.7/lib/python3.7/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=MaskRCNN_ResNet50_FPN_Weights.COCO_V1`. You can also use `weights=MaskRCNN_ResNet50_FPN_Weights.DEFAULT` to get the most up-to-date weights.
warnings.warn(msg)
Downloading: "https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth" to /workspace/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
-
0%| | 0.00/170M [00:00<?, ?B/s]
5%|4 | 7.99M/170M [00:00<00:03, 48.5MB/s]
8%|8 | 14.3M/170M [00:00<00:03, 47.0MB/s]
11%|#1 | 18.8M/170M [00:00<00:03, 40.9MB/s]
13%|#3 | 22.6M/170M [00:00<00:04, 37.4MB/s]
15%|#5 | 26.2M/170M [00:00<00:04, 34.2MB/s]
19%|#8 | 32.0M/170M [00:00<00:04, 34.0MB/s]
24%|##3 | 40.0M/170M [00:01<00:03, 42.8MB/s]
28%|##8 | 48.0M/170M [00:01<00:02, 42.6MB/s]
35%|###4 | 59.1M/170M [00:01<00:01, 58.7MB/s]
38%|###8 | 65.3M/170M [00:01<00:02, 46.5MB/s]
42%|####2 | 72.0M/170M [00:01<00:02, 47.3MB/s]
47%|####7 | 80.0M/170M [00:01<00:01, 48.9MB/s]
52%|#####1 | 88.0M/170M [00:02<00:01, 51.4MB/s]
57%|#####6 | 96.0M/170M [00:02<00:01, 51.5MB/s]
61%|######1 | 104M/170M [00:02<00:01, 52.7MB/s]
66%|######5 | 112M/170M [00:02<00:01, 55.1MB/s]
71%|####### | 120M/170M [00:02<00:00, 58.0MB/s]
75%|#######5 | 128M/170M [00:02<00:00, 60.1MB/s]
80%|######## | 136M/170M [00:02<00:00, 59.1MB/s]
85%|########4 | 144M/170M [00:03<00:00, 58.6MB/s]
89%|########9 | 152M/170M [00:03<00:00, 61.1MB/s]
94%|#########4| 160M/170M [00:03<00:00, 56.7MB/s]
98%|#########7| 166M/170M [00:03<00:00, 57.1MB/s]
100%|##########| 170M/170M [00:03<00:00, 50.6MB/s]
+
0%| | 0.00/170M [00:00<?, ?B/s]
7%|7 | 12.1M/170M [00:00<00:01, 126MB/s]
14%|#4 | 24.1M/170M [00:00<00:01, 78.7MB/s]
19%|#9 | 32.6M/170M [00:00<00:01, 80.6MB/s]
26%|##6 | 44.5M/170M [00:00<00:01, 95.2MB/s]
32%|###1 | 54.3M/170M [00:00<00:01, 79.9MB/s]
38%|###7 | 64.0M/170M [00:00<00:01, 83.4MB/s]
43%|####3 | 73.2M/170M [00:00<00:01, 87.0MB/s]
49%|####8 | 82.7M/170M [00:00<00:01, 90.7MB/s]
54%|#####4 | 91.7M/170M [00:01<00:01, 77.2MB/s]
59%|#####8 | 99.6M/170M [00:01<00:00, 75.4MB/s]
63%|######3 | 107M/170M [00:01<00:01, 54.3MB/s]
67%|######7 | 115M/170M [00:01<00:00, 59.3MB/s]
73%|#######3 | 124M/170M [00:01<00:00, 69.2MB/s]
78%|#######7 | 132M/170M [00:01<00:00, 71.5MB/s]
82%|########2 | 139M/170M [00:01<00:00, 68.9MB/s]
87%|########6 | 147M/170M [00:02<00:00, 72.4MB/s]
91%|######### | 155M/170M [00:02<00:00, 69.9MB/s]
95%|#########5| 161M/170M [00:02<00:00, 67.6MB/s]
99%|#########8| 168M/170M [00:02<00:00, 66.3MB/s]
100%|##########| 170M/170M [00:02<00:00, 73.9MB/s]
/venv/apache-tvm-py3.7/lib/python3.7/site-packages/torch/nn/functional.py:3897: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
for i in range(dim)
/venv/apache-tvm-py3.7/lib/python3.7/site-packages/torchvision/models/detection/anchor_utils.py:124: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
@@ -299,7 +299,7 @@ Get boxes with score larger than 0.9
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 3 minutes 35.868 seconds)
+ **Total running time of the script:** ( 3 minutes 47.714 seconds)
.. _sphx_glr_download_how_to_deploy_models_deploy_object_detection_pytorch.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt b/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
index cfd0d106d9..56d372b6fc 100644
--- a/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
@@ -227,7 +227,7 @@ training. Other models require a full post training calibration.
/venv/apache-tvm-py3.7/lib/python3.7/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=MobileNet_V2_Weights.IMAGENET1K_V1`. You can also use `weights=MobileNet_V2_Weights.DEFAULT` to get the most up-to-date weights.
warnings.warn(msg)
Downloading: "https://download.pytorch.org/models/mobilenet_v2-b0353104.pth" to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth
-
0%| | 0.00/13.6M [00:00<?, ?B/s]
47%|####6 | 6.30M/13.6M [00:00<00:00, 31.9MB/s]
69%|######8 | 9.34M/13.6M [00:00<00:00, 23.9MB/s]
100%|##########| 13.6M/13.6M [00:00<00:00, 29.3MB/s]
+
0%| | 0.00/13.6M [00:00<?, ?B/s]
59%|#####8 | 7.99M/13.6M [00:00<00:00, 48.5MB/s]
93%|#########3| 12.6M/13.6M [00:00<00:00, 48.1MB/s]
100%|##########| 13.6M/13.6M [00:00<00:00, 50.5MB/s]
@@ -409,7 +409,7 @@ Here we give an example of how to measure performance of TVM compiled models.
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 90.2462 90.1186 94.2846 89.8878 0.4816
+ 90.4882 90.4073 94.1937 90.0940 0.4522
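The summary row above can be reproduced from a list of per-run timings. A minimal sketch using the standard library — the sample values here are hypothetical, but TVM's `time_evaluator` yields such a list of run times:

```python
# Compute mean/median/max/min/std over per-run timings (ms),
# matching the columns of the "Execution time summary" table.
import statistics

timings_ms = [90.1, 90.3, 94.2, 90.0, 90.5, 90.2]  # hypothetical runs

summary = {
    "mean":   statistics.mean(timings_ms),
    "median": statistics.median(timings_ms),
    "max":    max(timings_ms),
    "min":    min(timings_ms),
    "std":    statistics.stdev(timings_ms),
}
print({k: round(v, 4) for k, v in summary.items()})
```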
@@ -458,7 +458,7 @@ TODO
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 17.732 seconds)
+ **Total running time of the script:** ( 1 minutes 20.294 seconds)
.. _sphx_glr_download_how_to_deploy_models_deploy_prequantized.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt b/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
index 59726a1a1f..6ea6900083 100644
--- a/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
@@ -423,7 +423,7 @@ Here we give an example of how to measure performance of TVM compiled models.
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 119.7386 119.8132 122.4225 118.1679 0.5759
+ 119.9493 119.9546 124.7480 118.6410 0.6881
@@ -460,7 +460,7 @@ Here we give an example of how to measure performance of TVM compiled models.
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 2 minutes 33.741 seconds)
+ **Total running time of the script:** ( 2 minutes 35.280 seconds)
.. _sphx_glr_download_how_to_deploy_models_deploy_prequantized_tflite.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt b/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
index 7f1f11fc22..58690e2fbc 100644
--- a/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
@@ -257,7 +257,7 @@ We create a Relay VM to build and execute the model.
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 37.166 seconds)
+ **Total running time of the script:** ( 1 minutes 37.038 seconds)
.. _sphx_glr_download_how_to_deploy_models_deploy_quantized.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_ssd_gluoncv.rst.txt b/docs/_sources/how_to/deploy_models/deploy_ssd_gluoncv.rst.txt
index 9d324b1207..9d0ed16fb8 100644
--- a/docs/_sources/how_to/deploy_models/deploy_ssd_gluoncv.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_ssd_gluoncv.rst.txt
@@ -170,7 +170,7 @@ Convert and compile model for CPU.
data: None
input_sym_arg_type = in_param.infer_type()[0]
Downloading /workspace/.mxnet/models/ssd_512_resnet50_v1_voc-9c8b225a.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/ssd_512_resnet50_v1_voc-9c8b225a.zip...
-
0%| | 0/132723 [00:00<?, ?KB/s]
4%|4 | 5469/132723 [00:00<00:02, 54681.58KB/s]
10%|9 | 13073/132723 [00:00<00:01, 67239.03KB/s]
15%|#5 | 20394/132723 [00:00<00:01, 69961.51KB/s]
21%|##1 | 28175/132723 [00:00<00:01, 73057.17KB/s]
27%|##7 | 35895/132723 [00:00<00:01, 74549.23KB/s]
33%|###2 | 43753/132723 [00:00<00:01, 75918.15KB/s]
39%|###8 | 51357/132723 [00:00<00:01, 75955.82KB/s]
45%|####4 | 59122/132723 [00:00<00:00, 76494.00KB/s]
50%|##### | 66859/132723 [00:00<00:00, 76765.11KB/s]
56%|#####6 | 74685/132723 [00:01<00:00, 77216.04KB/s]
62%|######2 | 82474/132723 [00:01<00:00, 77408.07KB/s]
68%|######8 | 90334/132723 [00:01<00:00, 77766.10KB/s]
74%|#######4 | 98409/132723 [00:01<00:00, 78667.83KB/s]
80%|######## | 106585/132723 [00:01<00:00, 79598.77KB/s]
86%|########6 | 114672/132723 [00:01<00:00, 79980.55KB/s]
93%|#########2| 122773/132723 [00:01<00:00, 80288.91KB/s]
99%|#########8| 130960/132723 [00:01<00:00, 80761.89KB/s]
100%|##########| 132723/132723 [00:01<00:00, 77067.59KB/s]
+
0%| | 0/132723 [00:00<?, ?KB/s]
4%|4 | 5733/132723 [00:00<00:02, 57320.21KB/s]
9%|9 | 12375/132723 [00:00<00:01, 62668.73KB/s]
15%|#5 | 19963/132723 [00:00<00:01, 68698.40KB/s]
21%|## | 27767/132723 [00:00<00:01, 72383.29KB/s]
27%|##6 | 35665/132723 [00:00<00:01, 74759.79KB/s]
33%|###2 | 43512/132723 [00:00<00:01, 76016.28KB/s]
39%|###8 | 51481/132723 [00:00<00:01, 77213.86KB/s]
45%|####4 | 59203/132723 [00:00<00:00, 77114.53KB/s]
50%|##### | 66915/132723 [00:00<00:00, 76839.91KB/s]
56%|#####6 | 74600/132723 [00:01<00:00, 66981.62KB/s]
62%|######2 | 82309/132723 [00:01<00:00, 69780.83KB/s]
68%|######8 | 90276/132723 [00:01<00:00, 72582.17KB/s]
74%|#######3 | 98141/132723 [00:01<00:00, 74326.77KB/s]
80%|#######9 | 106111/132723 [00:01<00:00, 75892.32KB/s]
86%|########5 | 114017/132723 [00:01<00:00, 76818.61KB/s]
92%|#########1| 121979/132723 [00:01<00:00, 77645.29KB/s]
98%|#########7| 129928/132723 [00:01<00:00, 78182.93KB/s]
100%|##########| 132723/132723 [00:01<00:00, 74309.96KB/s]
@@ -246,7 +246,7 @@ Display result
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 3 minutes 47.521 seconds)
+ **Total running time of the script:** ( 3 minutes 56.017 seconds)
.. _sphx_glr_download_how_to_deploy_models_deploy_ssd_gluoncv.py:
diff --git a/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt b/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
index ebe9b4bdf9..8bb2419342 100644
--- a/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
@@ -5,26 +5,26 @@
Computation times
=================
-**15:35.938** total execution time for **how_to_deploy_models** files:
+**16:05.026** total execution time for **how_to_deploy_models** files:
+------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_ssd_gluoncv.py` (``deploy_ssd_gluoncv.py``) | 03:47.521 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_ssd_gluoncv.py` (``deploy_ssd_gluoncv.py``) | 03:56.017 | 0.0 MB |
+------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``) | 03:35.868 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``) | 03:47.714 | 0.0 MB |
+------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``) | 02:33.741 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``) | 02:35.280 | 0.0 MB |
+------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_quantized.py` (``deploy_quantized.py``) | 01:37.166 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_quantized.py` (``deploy_quantized.py``) | 01:37.038 | 0.0 MB |
+------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized.py` (``deploy_prequantized.py``) | 01:17.732 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized.py` (``deploy_prequantized.py``) | 01:20.294 | 0.0 MB |
+------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno.py` (``deploy_model_on_adreno.py``) | 01:04.181 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_adreno.py` (``deploy_model_on_adreno.py``) | 01:05.099 | 0.0 MB |
+------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_android.py` (``deploy_model_on_android.py``) | 00:42.793 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_android.py` (``deploy_model_on_android.py``) | 00:44.561 | 0.0 MB |
+------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_nano.py` (``deploy_model_on_nano.py``) | 00:28.702 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_nano.py` (``deploy_model_on_nano.py``) | 00:29.753 | 0.0 MB |
+------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``) | 00:28.228 | 0.0 MB |
+| :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``) | 00:29.264 | 0.0 MB |
+------------------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_how_to_deploy_models_deploy_sparse.py` (``deploy_sparse.py``) | 00:00.006 | 0.0 MB |
+------------------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt b/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
index 15d468639b..c75998f73b 100644
--- a/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
@@ -463,7 +463,7 @@ First let us define two helper functions to get the mobilenet model and a cat im
.. code-block:: none
- Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip406fc29a-6ba3-4960-9c54-98be35d8950b from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
+ Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zipee11f25f-0d12-4b48-b125-626fd33d2c3a from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
diff --git a/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt b/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
index 75a3be3791..0017ba3a3a 100644
--- a/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
@@ -5,14 +5,14 @@
Computation times
=================
-**00:54.362** total execution time for **how_to_extend_tvm** files:
+**00:57.677** total execution time for **how_to_extend_tvm** files:
+-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``) | 00:50.451 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``) | 00:53.647 | 0.0 MB |
+-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_use_pass_instrument.py` (``use_pass_instrument.py``) | 00:02.810 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_use_pass_instrument.py` (``use_pass_instrument.py``) | 00:02.888 | 0.0 MB |
+-------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_extend_tvm_use_pass_infra.py` (``use_pass_infra.py``) | 00:01.093 | 0.0 MB |
+| :ref:`sphx_glr_how_to_extend_tvm_use_pass_infra.py` (``use_pass_infra.py``) | 00:01.135 | 0.0 MB |
+-------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_how_to_extend_tvm_low_level_custom_pass.py` (``low_level_custom_pass.py``) | 00:00.007 | 0.0 MB |
+-------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt b/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
index 89069fd1a5..5c376417ca 100644
--- a/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
@@ -220,10 +220,10 @@ profile the execution time of each passes.
.. code-block:: none
Printing results of timing profile...
- InferType: 22660us [22660us] (48.72%; 48.72%)
- FoldScaleAxis: 23855us [9us] (51.28%; 51.28%)
- FoldConstant: 23846us [1732us] (51.26%; 99.96%)
- InferType: 22114us [22114us] (47.54%; 92.74%)
+ InferType: 22847us [22847us] (48.51%; 48.51%)
+ FoldScaleAxis: 24249us [10us] (51.49%; 51.49%)
+ FoldConstant: 24239us [1789us] (51.47%; 99.96%)
+ InferType: 22450us [22450us] (47.67%; 92.62%)
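The profile above times each compiler pass and reports its share of the total. A rough sketch of that idea in plain Python — the pass names and workloads here are made up; TVM's real mechanism is `tvm.instrument.PassTimingInstrument`:

```python
# Wrap each "pass" in a wall-clock timer and print per-pass time
# plus its percentage of the total, mimicking the profile format.
import time

def run_with_timing(passes, data):
    results, total = [], 0.0
    for name, fn in passes:
        start = time.perf_counter()
        data = fn(data)
        elapsed = time.perf_counter() - start
        results.append((name, elapsed))
        total += elapsed
    for name, elapsed in results:
        share = 100.0 * elapsed / total if total else 0.0
        print(f"{name}: {elapsed * 1e6:.0f}us ({share:.2f}%)")
    return data

passes = [
    ("InferType", lambda xs: [x * 2 for x in xs]),      # hypothetical pass
    ("FoldConstant", lambda xs: [x + 1 for x in xs]),   # hypothetical pass
]
out = run_with_timing(passes, list(range(1000)))
```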
@@ -262,10 +262,10 @@ Refer to following sections and :py:func:`tvm.instrument.pass_instrument` for th
.. code-block:: none
Printing results of timing profile...
- InferType: 22553us [22553us] (48.65%; 48.65%)
- FoldScaleAxis: 23806us [8us] (51.35%; 51.35%)
- FoldConstant: 23798us [1760us] (51.33%; 99.97%)
- InferType: 22038us [22038us] (47.54%; 92.60%)
+ InferType: 22507us [22507us] (48.10%; 48.10%)
+ FoldScaleAxis: 24282us [8us] (51.90%; 51.90%)
+ FoldConstant: 24274us [1806us] (51.88%; 99.97%)
+ InferType: 22467us [22467us] (48.02%; 92.56%)
diff --git a/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt b/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
index 2acd0be0a7..72f0aa8c3b 100644
--- a/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
@@ -331,7 +331,7 @@ latency of convolution.
.. code-block:: none
- Convolution: 34.219966 ms
+ Convolution: 54.310878 ms
diff --git a/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt b/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
index 1551b37aea..d692b5e5d6 100644
--- a/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
@@ -598,7 +598,7 @@ be able to run on our build server
.. code-block:: none
- conv2d with tensor core: 13.368025 ms
+ conv2d with tensor core: 7.004806 ms
diff --git a/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt b/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
index d02adf6340..1c74205182 100644
--- a/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
@@ -134,8 +134,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
.. code-block:: none
- Numpy running time: 0.018858
- Baseline: 3.458052
+ Numpy running time: 0.019331
+ Baseline: 3.272484
@@ -227,7 +227,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
.. code-block:: none
- Opt1: 0.306489
+ Opt1: 0.335746
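The Opt1 speedup above comes from cache blocking: computing C in tiles so each tile of A and B stays resident in cache (the 32 x 32 float tile mentioned in the hunk header is 4KB against a 32KB cache). A pure-Python sketch of the transformation, checked against the naive loop on a tiny case — real TVM schedules express the same tiling via `split`/`reorder` primitives:

```python
# Naive triple loop vs. the same computation restructured into
# bsize x bsize tiles. Both produce identical results; the blocked
# form improves locality on large matrices.
def matmul_naive(A, B, n):
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_blocked(A, B, n, bsize=4):
    C = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, bsize):
        for j0 in range(0, n, bsize):
            for k0 in range(0, n, bsize):
                for i in range(i0, min(i0 + bsize, n)):
                    for j in range(j0, min(j0 + bsize, n)):
                        for k in range(k0, min(k0 + bsize, n)):
                            C[i][j] += A[i][k] * B[k][j]
    return C

n = 8
A = [[float(i + j) for j in range(n)] for i in range(n)]
B = [[float(i - j) for j in range(n)] for i in range(n)]
assert matmul_blocked(A, B, n) == matmul_naive(A, B, n)
```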
@@ -318,7 +318,7 @@ In this tutorial, we chose to vectorize the inner loop row data since it is cach
.. code-block:: none
- Opt2: 0.333816
+ Opt2: 0.352065
@@ -406,7 +406,7 @@ the access pattern for A matrix is more cache friendly.
.. code-block:: none
- Opt3: 0.118924
+ Opt3: 0.120151
@@ -523,7 +523,7 @@ flattening.
.. code-block:: none
- Opt4: 0.109810
+ Opt4: 0.109948
@@ -635,7 +635,7 @@ write to C when all the block results are ready.
.. code-block:: none
- Opt5: 0.111485
+ Opt5: 0.111054
@@ -748,7 +748,7 @@ Furthermore, we can also utilize multi-core processors to do the thread-level pa
.. code-block:: none
- Opt6: 0.147307
+ Opt6: 0.146604
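Opt6's hunk header mentions thread-level parallelization across cores. A minimal sketch of the idea, assuming nothing beyond the standard library: split the output rows across worker threads, each computing an independent chunk (sizes and worker count here are illustrative only):

```python
# Parallelize a matrix-vector product by assigning contiguous row
# chunks to threads; each chunk is independent, so no locking needed.
from concurrent.futures import ThreadPoolExecutor

def matvec_rows(A, x, rows):
    return [sum(a * b for a, b in zip(A[i], x)) for i in rows]

def matvec_parallel(A, x, workers=4):
    n = len(A)
    step = (n + workers - 1) // workers
    chunks = [range(i, min(i + step, n)) for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(lambda r: matvec_rows(A, x, r), chunks)
    # ex.map preserves chunk order, so flattening restores row order
    return [y for part in parts for y in part]

A = [[1.0] * 16 for _ in range(16)]
x = [2.0] * 16
assert matvec_parallel(A, x) == [32.0] * 16
```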
diff --git a/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt b/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
index 25251fbafe..a8e54be84c 100644
--- a/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
@@ -5,12 +5,12 @@
Computation times
=================
-**00:35.258** total execution time for **how_to_optimize_operators** files:
+**00:35.163** total execution time for **how_to_optimize_operators** files:
+-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_gemm.py` (``opt_gemm.py``) | 00:32.625 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_gemm.py` (``opt_gemm.py``) | 00:32.590 | 0.0 MB |
+-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``) | 00:01.568 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``) | 00:01.495 | 0.0 MB |
+-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_cuda.py` (``opt_conv_cuda.py``) | 00:01.064 | 0.0 MB |
+| :ref:`sphx_glr_how_to_optimize_operators_opt_conv_cuda.py` (``opt_conv_cuda.py``) | 00:01.078 | 0.0 MB |
+-----------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
index 05e4d9fc07..0b0eb35149 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
@@ -5,18 +5,18 @@
Computation times
=================
-**09:52.835** total execution time for **how_to_tune_with_autoscheduler** files:
+**10:13.691** total execution time for **how_to_tune_with_autoscheduler** files:
+----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``) | 06:03.020 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``) | 06:17.931 | 0.0 MB |
+----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_x86.py` (``tune_network_x86.py``) | 01:42.858 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_x86.py` (``tune_network_x86.py``) | 01:45.180 | 0.0 MB |
+----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_cuda.py` (``tune_network_cuda.py``) | 01:07.318 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_cuda.py` (``tune_network_cuda.py``) | 01:09.205 | 0.0 MB |
+----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_sparse_x86.py` (``tune_sparse_x86.py``) | 00:31.889 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_sparse_x86.py` (``tune_sparse_x86.py``) | 00:32.432 | 0.0 MB |
+----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_arm.py` (``tune_network_arm.py``) | 00:14.163 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_arm.py` (``tune_network_arm.py``) | 00:14.737 | 0.0 MB |
+----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_mali.py` (``tune_network_mali.py``) | 00:13.587 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_mali.py` (``tune_network_mali.py``) | 00:14.205 | 0.0 MB |
+----------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
index 4c54e6fb7b..5327fccbb4 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
@@ -243,12 +243,12 @@ cooperative fetching, unrolling and operator fusion.
@T.prim_func
def main(data: T.Buffer((1, 512, 7, 7), "float32"), kernel: T.Buffer((512, 512, 3, 3), "float32"), bias: T.Buffer((1, 512, 1, 1), "float32"), compute: T.Buffer((1, 512, 7, 7), "float32")):
T.func_attr({"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True})
- blockIdx_x = T.launch_thread("blockIdx.x", 32)
- conv2d_nchw = T.allocate([7], "float32", "local")
- pad_temp_shared = T.allocate([3136], "float32", "shared")
- kernel_shared = T.allocate([1024], "float32", "shared")
- threadIdx_x = T.launch_thread("threadIdx.x", 112)
- conv2d_nchw_1 = T.Buffer((1,), data=conv2d_nchw, scope="local", align=4)
+ blockIdx_x = T.launch_thread("blockIdx.x", 28)
+ conv2d_nchw = T.allocate([14], "float32", "local")
+ pad_temp_shared = T.allocate([72], "float32", "shared")
+ kernel_shared = T.allocate([3072], "float32", "shared")
+ threadIdx_x = T.launch_thread("threadIdx.x", 64)
+ conv2d_nchw_1 = T.Buffer((14,), data=conv2d_nchw, scope="local", align=32)
conv2d_nchw_1[0] = T.float32(0)
conv2d_nchw_1[1] = T.float32(0)
conv2d_nchw_1[2] = T.float32(0)
@@ -256,36 +256,466 @@ cooperative fetching, unrolling and operator fusion.
conv2d_nchw_1[4] = T.float32(0)
conv2d_nchw_1[5] = T.float32(0)
conv2d_nchw_1[6] = T.float32(0)
- for rc_outer_outer, ry_outer_outer, rx_outer_outer in T.grid(8, 3, 3):
- pad_temp_shared_1 = T.Buffer((3136,), data=pad_temp_shared, scope="shared")
- for ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer in range(28):
- cse_var_1: T.int32 = ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 112
- threadIdx_x_1 = T.launch_thread("threadIdx.x", 112)
+ conv2d_nchw_1[7] = T.float32(0)
+ conv2d_nchw_1[8] = T.float32(0)
+ conv2d_nchw_1[9] = T.float32(0)
+ conv2d_nchw_1[10] = T.float32(0)
+ conv2d_nchw_1[11] = T.float32(0)
+ conv2d_nchw_1[12] = T.float32(0)
+ conv2d_nchw_1[13] = T.float32(0)
+ for rc_outer_outer, ry_outer_outer in T.grid(64, 3):
+ cse_var_2: T.int32 = rc_outer_outer * 72
+ cse_var_1: T.int32 = ry_outer_outer * 3
+ pad_temp_shared_1 = T.Buffer((72,), data=pad_temp_shared, scope="shared")
+ with T.launch_thread("threadIdx.x", 64) as threadIdx_x_1:
data_1 = T.Buffer((25088,), data=data.data)
- pad_temp_shared_1[cse_var_1 + threadIdx_x_1] = T.if_then_else(1 <= ry_outer_outer + (ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 2 + threadIdx_x_1 // 7) % 7 and ry_outer_outer + (ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 2 + threadIdx_x_1 // 7) % 7 < 8 and 1 <= rx_outer_outer + threadIdx_x_1 % 7 and rx_outer_outer + threadIdx_x_1 % 7 < 8, data_1[rc_outer_outer * 3136 + cse_var_1 + ry_outer_outer * 7 + threadIdx_x_1 + rx_outer_outer - 8], T.float32(0))
- kernel_shared_1 = T.Buffer((1024,), data=kernel_shared, scope="shared")
- for ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer in range(10):
- threadIdx_x_1 = T.launch_thread("threadIdx.x", 112)
- if T.likely(ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 7 + threadIdx_x_1 // 16 < 64):
- kernel_1 = T.Buffer((2359296,), data=kernel.data)
- kernel_shared_1[ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 112 + threadIdx_x_1] = kernel_1[blockIdx_x * 73728 + (ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 7 + threadIdx_x_1 // 16) // 4 * 4608 + rc_outer_outer * 576 + (ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 48 + threadIdx_x_1) % 64 * 9 + ry_outer_outer * 3 + rx_outer_outer]
- for rc_outer_inner, rc_inner in T.grid(2, 32):
- conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[rc_outer_inner * 1568 + rc_inner * 49 + threadIdx_x % 7 * 7] * kernel_shared_1[threadIdx_x // 7 * 64 + rc_outer_inner * 32 + rc_inner]
- conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[rc_outer_inner * 1568 + rc_inner * 49 + threadIdx_x % 7 * 7 + 1] * kernel_shared_1[threadIdx_x // 7 * 64 + rc_outer_inner * 32 + rc_inner]
- conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[rc_outer_inner * 1568 + rc_inner * 49 + threadIdx_x % 7 * 7 + 2] * kernel_shared_1[threadIdx_x // 7 * 64 + rc_outer_inner * 32 + rc_inner]
- conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[rc_outer_inner * 1568 + rc_inner * 49 + threadIdx_x % 7 * 7 + 3] * kernel_shared_1[threadIdx_x // 7 * 64 + rc_outer_inner * 32 + rc_inner]
- conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[rc_outer_inner * 1568 + rc_inner * 49 + threadIdx_x % 7 * 7 + 4] * kernel_shared_1[threadIdx_x // 7 * 64 + rc_outer_inner * 32 + rc_inner]
- conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[rc_outer_inner * 1568 + rc_inner * 49 + threadIdx_x % 7 * 7 + 5] * kernel_shared_1[threadIdx_x // 7 * 64 + rc_outer_inner * 32 + rc_inner]
- conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[rc_outer_inner * 1568 + rc_inner * 49 + threadIdx_x % 7 * 7 + 6] * kernel_shared_1[threadIdx_x // 7 * 64 + rc_outer_inner * 32 + rc_inner]
- compute_1 = T.Buffer((25088,), data=compute.data)
- bias_1 = T.Buffer((512,), data=bias.data)
- compute_1[blockIdx_x * 784 + threadIdx_x * 7] = T.max(conv2d_nchw_1[0] + bias_1[blockIdx_x * 16 + threadIdx_x // 7], T.float32(0))
- compute_1[blockIdx_x * 784 + threadIdx_x * 7 + 1] = T.max(conv2d_nchw_1[1] + bias_1[blockIdx_x * 16 + threadIdx_x // 7], T.float32(0))
- compute_1[blockIdx_x * 784 + threadIdx_x * 7 + 2] = T.max(conv2d_nchw_1[2] + bias_1[blockIdx_x * 16 + threadIdx_x // 7], T.float32(0))
- compute_1[blockIdx_x * 784 + threadIdx_x * 7 + 3] = T.max(conv2d_nchw_1[3] + bias_1[blockIdx_x * 16 + threadIdx_x // 7], T.float32(0))
- compute_1[blockIdx_x * 784 + threadIdx_x * 7 + 4] = T.max(conv2d_nchw_1[4] + bias_1[blockIdx_x * 16 + threadIdx_x // 7], T.float32(0))
- compute_1[blockIdx_x * 784 + threadIdx_x * 7 + 5] = T.max(conv2d_nchw_1[5] + bias_1[blockIdx_x * 16 + threadIdx_x // 7], T.float32(0))
- compute_1[blockIdx_x * 784 + threadIdx_x * 7 + 6] = T.max(conv2d_nchw_1[6] + bias_1[blockIdx_x * 16 + threadIdx_x // 7], T.float32(0))
+ if T.likely(threadIdx_x_1 < 18):
+ pad_temp_shared_1[threadIdx_x_1 * 4] = T.if_then_else(1 <= ry_outer_outer + blockIdx_x % 7 and ry_outer_outer + blockIdx_x % 7 < 8 and 1 <= threadIdx_x_1 * 4 % 9 and threadIdx_x_1 * 4 % 9 < 8, data_1[rc_outer_outer * 392 + threadIdx_x_1 * 4 // 9 * 49 + ry_outer_outer * 7 + blockIdx_x % 7 * 7 + threadIdx_x_1 * 4 % 9 - 8], T.float32(0))
+ if T.likely(threadIdx_x_1 < 18):
+ pad_temp_shared_1[threadIdx_x_1 * 4 + 1] = T.if_then_else(1 <= ry_outer_outer + blockIdx_x % 7 and ry_outer_outer + blockIdx_x % 7 < 8 and 1 <= (threadIdx_x_1 * 4 + 1) % 9 and (threadIdx_x_1 * 4 + 1) % 9 < 8, data_1[rc_outer_outer * 392 + (threadIdx_x_1 * 4 + 1) // 9 * 49 + ry_outer_outer * 7 + blockIdx_x % 7 * 7 + (threadIdx_x_1 * 4 + 1) % 9 - 8], T.float32(0))
+ if T.likely(threadIdx_x_1 < 18):
+ pad_temp_shared_1[threadIdx_x_1 * 4 + 2] = T.if_then_else(1 <= ry_outer_outer + blockIdx_x % 7 and ry_outer_outer + blockIdx_x % 7 < 8 and 1 <= (threadIdx_x_1 * 4 + 2) % 9 and (threadIdx_x_1 * 4 + 2) % 9 < 8, data_1[rc_outer_outer * 392 + (threadIdx_x_1 * 4 + 2) // 9 * 49 + ry_outer_outer * 7 + blockIdx_x % 7 * 7 + (threadIdx_x_1 * 4 + 2) % 9 - 8], T.float32(0))
+ if T.likely(threadIdx_x_1 < 18):
+ pad_temp_shared_1[threadIdx_x_1 * 4 + 3] = T.if_then_else(1 <= ry_outer_outer + blockIdx_x % 7 and ry_outer_outer + blockIdx_x % 7 < 8 and 1 <= (threadIdx_x_1 * 4 + 3) % 9 and (threadIdx_x_1 * 4 + 3) % 9 < 8, data_1[rc_outer_outer * 392 + (threadIdx_x_1 * 4 + 3) // 9 * 49 + ry_outer_outer * 7 + blockIdx_x % 7 * 7 + (threadIdx_x_1 * 4 + 3) % 9 - 8], T.float32(0))
+ threadIdx_x_1 = T.env_thread("threadIdx.x")
+ kernel_shared_1 = T.Buffer((3072,), data=kernel_shared, scope="shared")
+ kernel_1 = T.Buffer((2359296,), data=kernel.data)
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 64) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 64) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 128) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 128) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 192] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 36864]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 256) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 256) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 320) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 320) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 384] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 73728]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 448) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 448) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 512) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 512) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 576] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 110592]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 640) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 640) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 704) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 704) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 768] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 147456]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 832) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 832) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 896) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 896) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 960] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 184320]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1024) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1024) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1088) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1088) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 1152] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 221184]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1216) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1216) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1280) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1280) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 1344] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 258048]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1408) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1408) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1472) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1472) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 1536] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 294912]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1600) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1600) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1664) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1664) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 1728] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 331776]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1792) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1792) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1856) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1856) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 1920] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 368640]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1984) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1984) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2048) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2048) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 2112] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 405504]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2176) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2176) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2240) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2240) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 2304] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 442368]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2368) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2368) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2432) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2432) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 2496] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 479232]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2560) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2560) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2624) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2624) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 2688] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 516096]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2752) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2752) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2816) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2816) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 2880] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 552960]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2944) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2944) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 3008) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 3008) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[0] * kernel_shared_1[threadIdx_x * 48]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[9] * kernel_shared_1[threadIdx_x * 48 + 3]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[1] * kernel_shared_1[threadIdx_x * 48]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[10] * kernel_shared_1[threadIdx_x * 48 + 3]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[2] * kernel_shared_1[threadIdx_x * 48]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[11] * kernel_shared_1[threadIdx_x * 48 + 3]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[3] * kernel_shared_1[threadIdx_x * 48]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[12] * kernel_shared_1[threadIdx_x * 48 + 3]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[4] * kernel_shared_1[threadIdx_x * 48]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[13] * kernel_shared_1[threadIdx_x * 48 + 3]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[5] * kernel_shared_1[threadIdx_x * 48]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[14] * kernel_shared_1[threadIdx_x * 48 + 3]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[6] * kernel_shared_1[threadIdx_x * 48]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[15] * kernel_shared_1[threadIdx_x * 48 + 3]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[0] * kernel_shared_1[threadIdx_x * 48 + 24]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[9] * kernel_shared_1[threadIdx_x * 48 + 27]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[1] * kernel_shared_1[threadIdx_x * 48 + 24]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[10] * kernel_shared_1[threadIdx_x * 48 + 27]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[2] * kernel_shared_1[threadIdx_x * 48 + 24]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[11] * kernel_shared_1[threadIdx_x * 48 + 27]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[3] * kernel_shared_1[threadIdx_x * 48 + 24]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[12] * kernel_shared_1[threadIdx_x * 48 + 27]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[4] * kernel_shared_1[threadIdx_x * 48 + 24]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[13] * kernel_shared_1[threadIdx_x * 48 + 27]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[5] * kernel_shared_1[threadIdx_x * 48 + 24]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[14] * kernel_shared_1[threadIdx_x * 48 + 27]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[6] * kernel_shared_1[threadIdx_x * 48 + 24]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[15] * kernel_shared_1[threadIdx_x * 48 + 27]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[1] * kernel_shared_1[threadIdx_x * 48 + 1]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[10] * kernel_shared_1[threadIdx_x * 48 + 4]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[2] * kernel_shared_1[threadIdx_x * 48 + 1]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[11] * kernel_shared_1[threadIdx_x * 48 + 4]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[3] * kernel_shared_1[threadIdx_x * 48 + 1]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[12] * kernel_shared_1[threadIdx_x * 48 + 4]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[4] * kernel_shared_1[threadIdx_x * 48 + 1]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[13] * kernel_shared_1[threadIdx_x * 48 + 4]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[5] * kernel_shared_1[threadIdx_x * 48 + 1]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[14] * kernel_shared_1[threadIdx_x * 48 + 4]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[6] * kernel_shared_1[threadIdx_x * 48 + 1]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[15] * kernel_shared_1[threadIdx_x * 48 + 4]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[7] * kernel_shared_1[threadIdx_x * 48 + 1]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[16] * kernel_shared_1[threadIdx_x * 48 + 4]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[1] * kernel_shared_1[threadIdx_x * 48 + 25]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[10] * kernel_shared_1[threadIdx_x * 48 + 28]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[2] * kernel_shared_1[threadIdx_x * 48 + 25]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[11] * kernel_shared_1[threadIdx_x * 48 + 28]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[3] * kernel_shared_1[threadIdx_x * 48 + 25]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[12] * kernel_shared_1[threadIdx_x * 48 + 28]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[4] * kernel_shared_1[threadIdx_x * 48 + 25]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[13] * kernel_shared_1[threadIdx_x * 48 + 28]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[5] * kernel_shared_1[threadIdx_x * 48 + 25]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[14] * kernel_shared_1[threadIdx_x * 48 + 28]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[6] * kernel_shared_1[threadIdx_x * 48 + 25]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[15] * kernel_shared_1[threadIdx_x * 48 + 28]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[7] * kernel_shared_1[threadIdx_x * 48 + 25]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[16] * kernel_shared_1[threadIdx_x * 48 + 28]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[2] * kernel_shared_1[threadIdx_x * 48 + 2]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[11] * kernel_shared_1[threadIdx_x * 48 + 5]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[3] * kernel_shared_1[threadIdx_x * 48 + 2]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[12] * kernel_shared_1[threadIdx_x * 48 + 5]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[4] * kernel_shared_1[threadIdx_x * 48 + 2]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[13] * kernel_shared_1[threadIdx_x * 48 + 5]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[5] * kernel_shared_1[threadIdx_x * 48 + 2]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[14] * kernel_shared_1[threadIdx_x * 48 + 5]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[6] * kernel_shared_1[threadIdx_x * 48 + 2]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[15] * kernel_shared_1[threadIdx_x * 48 + 5]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[7] * kernel_shared_1[threadIdx_x * 48 + 2]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[16] * kernel_shared_1[threadIdx_x * 48 + 5]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[8] * kernel_shared_1[threadIdx_x * 48 + 2]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[17] * kernel_shared_1[threadIdx_x * 48 + 5]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[2] * kernel_shared_1[threadIdx_x * 48 + 26]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[11] * kernel_shared_1[threadIdx_x * 48 + 29]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[3] * kernel_shared_1[threadIdx_x * 48 + 26]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[12] * kernel_shared_1[threadIdx_x * 48 + 29]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[4] * kernel_shared_1[threadIdx_x * 48 + 26]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[13] * kernel_shared_1[threadIdx_x * 48 + 29]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[5] * kernel_shared_1[threadIdx_x * 48 + 26]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[14] * kernel_shared_1[threadIdx_x * 48 + 29]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[6] * kernel_shared_1[threadIdx_x * 48 + 26]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[15] * kernel_shared_1[threadIdx_x * 48 + 29]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[7] * kernel_shared_1[threadIdx_x * 48 + 26]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[16] * kernel_shared_1[threadIdx_x * 48 + 29]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[8] * kernel_shared_1[threadIdx_x * 48 + 26]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[17] * kernel_shared_1[threadIdx_x * 48 + 29]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[18] * kernel_shared_1[threadIdx_x * 48 + 6]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[27] * kernel_shared_1[threadIdx_x * 48 + 9]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[19] * kernel_shared_1[threadIdx_x * 48 + 6]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[28] * kernel_shared_1[threadIdx_x * 48 + 9]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[20] * kernel_shared_1[threadIdx_x * 48 + 6]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[29] * kernel_shared_1[threadIdx_x * 48 + 9]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[21] * kernel_shared_1[threadIdx_x * 48 + 6]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[30] * kernel_shared_1[threadIdx_x * 48 + 9]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[22] * kernel_shared_1[threadIdx_x * 48 + 6]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[31] * kernel_shared_1[threadIdx_x * 48 + 9]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[23] * kernel_shared_1[threadIdx_x * 48 + 6]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[32] * kernel_shared_1[threadIdx_x * 48 + 9]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[24] * kernel_shared_1[threadIdx_x * 48 + 6]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[33] * kernel_shared_1[threadIdx_x * 48 + 9]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[18] * kernel_shared_1[threadIdx_x * 48 + 30]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[27] * kernel_shared_1[threadIdx_x * 48 + 33]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[19] * kernel_shared_1[threadIdx_x * 48 + 30]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[28] * kernel_shared_1[threadIdx_x * 48 + 33]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[20] * kernel_shared_1[threadIdx_x * 48 + 30]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[29] * kernel_shared_1[threadIdx_x * 48 + 33]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[21] * kernel_shared_1[threadIdx_x * 48 + 30]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[30] * kernel_shared_1[threadIdx_x * 48 + 33]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[22] * kernel_shared_1[threadIdx_x * 48 + 30]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[31] * kernel_shared_1[threadIdx_x * 48 + 33]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[23] * kernel_shared_1[threadIdx_x * 48 + 30]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[32] * kernel_shared_1[threadIdx_x * 48 + 33]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[24] * kernel_shared_1[threadIdx_x * 48 + 30]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[33] * kernel_shared_1[threadIdx_x * 48 + 33]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[19] * kernel_shared_1[threadIdx_x * 48 + 7]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[28] * kernel_shared_1[threadIdx_x * 48 + 10]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[20] * kernel_shared_1[threadIdx_x * 48 + 7]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[29] * kernel_shared_1[threadIdx_x * 48 + 10]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[21] * kernel_shared_1[threadIdx_x * 48 + 7]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[30] * kernel_shared_1[threadIdx_x * 48 + 10]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[22] * kernel_shared_1[threadIdx_x * 48 + 7]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[31] * kernel_shared_1[threadIdx_x * 48 + 10]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[23] * kernel_shared_1[threadIdx_x * 48 + 7]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[32] * kernel_shared_1[threadIdx_x * 48 + 10]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[24] * kernel_shared_1[threadIdx_x * 48 + 7]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[33] * kernel_shared_1[threadIdx_x * 48 + 10]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[25] * kernel_shared_1[threadIdx_x * 48 + 7]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[34] * kernel_shared_1[threadIdx_x * 48 + 10]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[19] * kernel_shared_1[threadIdx_x * 48 + 31]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[28] * kernel_shared_1[threadIdx_x * 48 + 34]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[20] * kernel_shared_1[threadIdx_x * 48 + 31]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[29] * kernel_shared_1[threadIdx_x * 48 + 34]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[21] * kernel_shared_1[threadIdx_x * 48 + 31]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[30] * kernel_shared_1[threadIdx_x * 48 + 34]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[22] * kernel_shared_1[threadIdx_x * 48 + 31]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[31] * kernel_shared_1[threadIdx_x * 48 + 34]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[23] * kernel_shared_1[threadIdx_x * 48 + 31]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[32] * kernel_shared_1[threadIdx_x * 48 + 34]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[24] * kernel_shared_1[threadIdx_x * 48 + 31]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[33] * kernel_shared_1[threadIdx_x * 48 + 34]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[25] * kernel_shared_1[threadIdx_x * 48 + 31]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[34] * kernel_shared_1[threadIdx_x * 48 + 34]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[20] * kernel_shared_1[threadIdx_x * 48 + 8]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[29] * kernel_shared_1[threadIdx_x * 48 + 11]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[21] * kernel_shared_1[threadIdx_x * 48 + 8]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[30] * kernel_shared_1[threadIdx_x * 48 + 11]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[22] * kernel_shared_1[threadIdx_x * 48 + 8]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[31] * kernel_shared_1[threadIdx_x * 48 + 11]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[23] * kernel_shared_1[threadIdx_x * 48 + 8]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[32] * kernel_shared_1[threadIdx_x * 48 + 11]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[24] * kernel_shared_1[threadIdx_x * 48 + 8]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[33] * kernel_shared_1[threadIdx_x * 48 + 11]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[25] * kernel_shared_1[threadIdx_x * 48 + 8]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[34] * kernel_shared_1[threadIdx_x * 48 + 11]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[26] * kernel_shared_1[threadIdx_x * 48 + 8]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[35] * kernel_shared_1[threadIdx_x * 48 + 11]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[20] * kernel_shared_1[threadIdx_x * 48 + 32]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[29] * kernel_shared_1[threadIdx_x * 48 + 35]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[21] * kernel_shared_1[threadIdx_x * 48 + 32]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[30] * kernel_shared_1[threadIdx_x * 48 + 35]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[22] * kernel_shared_1[threadIdx_x * 48 + 32]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[31] * kernel_shared_1[threadIdx_x * 48 + 35]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[23] * kernel_shared_1[threadIdx_x * 48 + 32]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[32] * kernel_shared_1[threadIdx_x * 48 + 35]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[24] * kernel_shared_1[threadIdx_x * 48 + 32]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[33] * kernel_shared_1[threadIdx_x * 48 + 35]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[25] * kernel_shared_1[threadIdx_x * 48 + 32]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[34] * kernel_shared_1[threadIdx_x * 48 + 35]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[26] * kernel_shared_1[threadIdx_x * 48 + 32]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[35] * kernel_shared_1[threadIdx_x * 48 + 35]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[36] * kernel_shared_1[threadIdx_x * 48 + 12]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[45] * kernel_shared_1[threadIdx_x * 48 + 15]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[37] * kernel_shared_1[threadIdx_x * 48 + 12]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[46] * kernel_shared_1[threadIdx_x * 48 + 15]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[38] * kernel_shared_1[threadIdx_x * 48 + 12]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[47] * kernel_shared_1[threadIdx_x * 48 + 15]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[39] * kernel_shared_1[threadIdx_x * 48 + 12]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[48] * kernel_shared_1[threadIdx_x * 48 + 15]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[40] * kernel_shared_1[threadIdx_x * 48 + 12]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[49] * kernel_shared_1[threadIdx_x * 48 + 15]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[41] * kernel_shared_1[threadIdx_x * 48 + 12]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[50] * kernel_shared_1[threadIdx_x * 48 + 15]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[42] * kernel_shared_1[threadIdx_x * 48 + 12]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[51] * kernel_shared_1[threadIdx_x * 48 + 15]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[36] * kernel_shared_1[threadIdx_x * 48 + 36]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[45] * kernel_shared_1[threadIdx_x * 48 + 39]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[37] * kernel_shared_1[threadIdx_x * 48 + 36]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[46] * kernel_shared_1[threadIdx_x * 48 + 39]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[38] * kernel_shared_1[threadIdx_x * 48 + 36]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[47] * kernel_shared_1[threadIdx_x * 48 + 39]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[39] * kernel_shared_1[threadIdx_x * 48 + 36]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[48] * kernel_shared_1[threadIdx_x * 48 + 39]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[40] * kernel_shared_1[threadIdx_x * 48 + 36]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[49] * kernel_shared_1[threadIdx_x * 48 + 39]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[41] * kernel_shared_1[threadIdx_x * 48 + 36]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[50] * kernel_shared_1[threadIdx_x * 48 + 39]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[42] * kernel_shared_1[threadIdx_x * 48 + 36]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[51] * kernel_shared_1[threadIdx_x * 48 + 39]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[37] * kernel_shared_1[threadIdx_x * 48 + 13]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[46] * kernel_shared_1[threadIdx_x * 48 + 16]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[38] * kernel_shared_1[threadIdx_x * 48 + 13]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[47] * kernel_shared_1[threadIdx_x * 48 + 16]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[39] * kernel_shared_1[threadIdx_x * 48 + 13]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[48] * kernel_shared_1[threadIdx_x * 48 + 16]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[40] * kernel_shared_1[threadIdx_x * 48 + 13]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[49] * kernel_shared_1[threadIdx_x * 48 + 16]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[41] * kernel_shared_1[threadIdx_x * 48 + 13]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[50] * kernel_shared_1[threadIdx_x * 48 + 16]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[42] * kernel_shared_1[threadIdx_x * 48 + 13]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[51] * kernel_shared_1[threadIdx_x * 48 + 16]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[43] * kernel_shared_1[threadIdx_x * 48 + 13]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[52] * kernel_shared_1[threadIdx_x * 48 + 16]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[37] * kernel_shared_1[threadIdx_x * 48 + 37]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[46] * kernel_shared_1[threadIdx_x * 48 + 40]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[38] * kernel_shared_1[threadIdx_x * 48 + 37]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[47] * kernel_shared_1[threadIdx_x * 48 + 40]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[39] * kernel_shared_1[threadIdx_x * 48 + 37]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[48] * kernel_shared_1[threadIdx_x * 48 + 40]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[40] * kernel_shared_1[threadIdx_x * 48 + 37]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[49] * kernel_shared_1[threadIdx_x * 48 + 40]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[41] * kernel_shared_1[threadIdx_x * 48 + 37]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[50] * kernel_shared_1[threadIdx_x * 48 + 40]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[42] * kernel_shared_1[threadIdx_x * 48 + 37]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[51] * kernel_shared_1[threadIdx_x * 48 + 40]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[43] * kernel_shared_1[threadIdx_x * 48 + 37]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[52] * kernel_shared_1[threadIdx_x * 48 + 40]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[38] * kernel_shared_1[threadIdx_x * 48 + 14]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[47] * kernel_shared_1[threadIdx_x * 48 + 17]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[39] * kernel_shared_1[threadIdx_x * 48 + 14]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[48] * kernel_shared_1[threadIdx_x * 48 + 17]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[40] * kernel_shared_1[threadIdx_x * 48 + 14]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[49] * kernel_shared_1[threadIdx_x * 48 + 17]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[41] * kernel_shared_1[threadIdx_x * 48 + 14]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[50] * kernel_shared_1[threadIdx_x * 48 + 17]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[42] * kernel_shared_1[threadIdx_x * 48 + 14]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[51] * kernel_shared_1[threadIdx_x * 48 + 17]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[43] * kernel_shared_1[threadIdx_x * 48 + 14]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[52] * kernel_shared_1[threadIdx_x * 48 + 17]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[44] * kernel_shared_1[threadIdx_x * 48 + 14]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[53] * kernel_shared_1[threadIdx_x * 48 + 17]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[38] * kernel_shared_1[threadIdx_x * 48 + 38]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[47] * kernel_shared_1[threadIdx_x * 48 + 41]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[39] * kernel_shared_1[threadIdx_x * 48 + 38]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[48] * kernel_shared_1[threadIdx_x * 48 + 41]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[40] * kernel_shared_1[threadIdx_x * 48 + 38]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[49] * kernel_shared_1[threadIdx_x * 48 + 41]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[41] * kernel_shared_1[threadIdx_x * 48 + 38]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[50] * kernel_shared_1[threadIdx_x * 48 + 41]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[42] * kernel_shared_1[threadIdx_x * 48 + 38]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[51] * kernel_shared_1[threadIdx_x * 48 + 41]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[43] * kernel_shared_1[threadIdx_x * 48 + 38]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[52] * kernel_shared_1[threadIdx_x * 48 + 41]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[44] * kernel_shared_1[threadIdx_x * 48 + 38]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[53] * kernel_shared_1[threadIdx_x * 48 + 41]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[54] * kernel_shared_1[threadIdx_x * 48 + 18]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[63] * kernel_shared_1[threadIdx_x * 48 + 21]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[55] * kernel_shared_1[threadIdx_x * 48 + 18]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[64] * kernel_shared_1[threadIdx_x * 48 + 21]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[56] * kernel_shared_1[threadIdx_x * 48 + 18]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[65] * kernel_shared_1[threadIdx_x * 48 + 21]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[57] * kernel_shared_1[threadIdx_x * 48 + 18]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[66] * kernel_shared_1[threadIdx_x * 48 + 21]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[58] * kernel_shared_1[threadIdx_x * 48 + 18]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[67] * kernel_shared_1[threadIdx_x * 48 + 21]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[59] * kernel_shared_1[threadIdx_x * 48 + 18]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[68] * kernel_shared_1[threadIdx_x * 48 + 21]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[60] * kernel_shared_1[threadIdx_x * 48 + 18]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[69] * kernel_shared_1[threadIdx_x * 48 + 21]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[54] * kernel_shared_1[threadIdx_x * 48 + 42]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[63] * kernel_shared_1[threadIdx_x * 48 + 45]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[55] * kernel_shared_1[threadIdx_x * 48 + 42]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[64] * kernel_shared_1[threadIdx_x * 48 + 45]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[56] * kernel_shared_1[threadIdx_x * 48 + 42]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[65] * kernel_shared_1[threadIdx_x * 48 + 45]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[57] * kernel_shared_1[threadIdx_x * 48 + 42]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[66] * kernel_shared_1[threadIdx_x * 48 + 45]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[58] * kernel_shared_1[threadIdx_x * 48 + 42]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[67] * kernel_shared_1[threadIdx_x * 48 + 45]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[59] * kernel_shared_1[threadIdx_x * 48 + 42]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[68] * kernel_shared_1[threadIdx_x * 48 + 45]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[60] * kernel_shared_1[threadIdx_x * 48 + 42]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[69] * kernel_shared_1[threadIdx_x * 48 + 45]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[55] * kernel_shared_1[threadIdx_x * 48 + 19]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[64] * kernel_shared_1[threadIdx_x * 48 + 22]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[56] * kernel_shared_1[threadIdx_x * 48 + 19]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[65] * kernel_shared_1[threadIdx_x * 48 + 22]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[57] * kernel_shared_1[threadIdx_x * 48 + 19]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[66] * kernel_shared_1[threadIdx_x * 48 + 22]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[58] * kernel_shared_1[threadIdx_x * 48 + 19]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[67] * kernel_shared_1[threadIdx_x * 48 + 22]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[59] * kernel_shared_1[threadIdx_x * 48 + 19]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[68] * kernel_shared_1[threadIdx_x * 48 + 22]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[60] * kernel_shared_1[threadIdx_x * 48 + 19]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[69] * kernel_shared_1[threadIdx_x * 48 + 22]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[61] * kernel_shared_1[threadIdx_x * 48 + 19]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[70] * kernel_shared_1[threadIdx_x * 48 + 22]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[55] * kernel_shared_1[threadIdx_x * 48 + 43]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[64] * kernel_shared_1[threadIdx_x * 48 + 46]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[56] * kernel_shared_1[threadIdx_x * 48 + 43]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[65] * kernel_shared_1[threadIdx_x * 48 + 46]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[57] * kernel_shared_1[threadIdx_x * 48 + 43]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[66] * kernel_shared_1[threadIdx_x * 48 + 46]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[58] * kernel_shared_1[threadIdx_x * 48 + 43]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[67] * kernel_shared_1[threadIdx_x * 48 + 46]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[59] * kernel_shared_1[threadIdx_x * 48 + 43]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[68] * kernel_shared_1[threadIdx_x * 48 + 46]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[60] * kernel_shared_1[threadIdx_x * 48 + 43]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[69] * kernel_shared_1[threadIdx_x * 48 + 46]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[61] * kernel_shared_1[threadIdx_x * 48 + 43]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[70] * kernel_shared_1[threadIdx_x * 48 + 46]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[56] * kernel_shared_1[threadIdx_x * 48 + 20]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[65] * kernel_shared_1[threadIdx_x * 48 + 23]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[57] * kernel_shared_1[threadIdx_x * 48 + 20]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[66] * kernel_shared_1[threadIdx_x * 48 + 23]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[58] * kernel_shared_1[threadIdx_x * 48 + 20]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[67] * kernel_shared_1[threadIdx_x * 48 + 23]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[59] * kernel_shared_1[threadIdx_x * 48 + 20]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[68] * kernel_shared_1[threadIdx_x * 48 + 23]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[60] * kernel_shared_1[threadIdx_x * 48 + 20]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[69] * kernel_shared_1[threadIdx_x * 48 + 23]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[61] * kernel_shared_1[threadIdx_x * 48 + 20]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[70] * kernel_shared_1[threadIdx_x * 48 + 23]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[62] * kernel_shared_1[threadIdx_x * 48 + 20]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[71] * kernel_shared_1[threadIdx_x * 48 + 23]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[56] * kernel_shared_1[threadIdx_x * 48 + 44]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[65] * kernel_shared_1[threadIdx_x * 48 + 47]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[57] * kernel_shared_1[threadIdx_x * 48 + 44]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[66] * kernel_shared_1[threadIdx_x * 48 + 47]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[58] * kernel_shared_1[threadIdx_x * 48 + 44]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[67] * kernel_shared_1[threadIdx_x * 48 + 47]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[59] * kernel_shared_1[threadIdx_x * 48 + 44]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[68] * kernel_shared_1[threadIdx_x * 48 + 47]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[60] * kernel_shared_1[threadIdx_x * 48 + 44]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[69] * kernel_shared_1[threadIdx_x * 48 + 47]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[61] * kernel_shared_1[threadIdx_x * 48 + 44]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[70] * kernel_shared_1[threadIdx_x * 48 + 47]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[62] * kernel_shared_1[threadIdx_x * 48 + 44]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[71] * kernel_shared_1[threadIdx_x * 48 + 47]
+ for i1_inner, i3_inner in T.grid(2, 7):
+ compute_1 = T.Buffer((25088,), data=compute.data)
+ bias_1 = T.Buffer((512,), data=bias.data)
+ compute_1[blockIdx_x // 7 * 6272 + threadIdx_x * 98 + i1_inner * 49 + blockIdx_x % 7 * 7 + i3_inner] = T.max(conv2d_nchw_1[i1_inner * 7 + i3_inner] + bias_1[blockIdx_x // 7 * 128 + threadIdx_x * 2 + i1_inner], T.float32(0))
@@ -335,7 +765,7 @@ We build the binary and check its correctness and performance.
.. code-block:: none
- Execution time of this operator: 0.375 ms
+ Execution time of this operator: 0.359 ms
@@ -384,36 +814,36 @@ They can be used for debugging and learning the behavior of the auto-scheduler.
conv2d_nchw_nn_o_o_o_i, conv2d_nchw_nn_o_o_i = s[conv2d_nchw].split(conv2d_nchw_nn_o_o_i, factor=1)
conv2d_nchw_nn_o_o_o_o, conv2d_nchw_nn_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_nn_o_o_o_i, factor=1)
conv2d_nchw_ff_o_i, conv2d_nchw_ff_i = s[conv2d_nchw].split(conv2d_nchw_ff, factor=1)
- conv2d_nchw_ff_o_o_i, conv2d_nchw_ff_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_i, factor=1)
- conv2d_nchw_ff_o_o_o_i, conv2d_nchw_ff_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_i, factor=16)
+ conv2d_nchw_ff_o_o_i, conv2d_nchw_ff_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_i, factor=2)
+ conv2d_nchw_ff_o_o_o_i, conv2d_nchw_ff_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_i, factor=64)
conv2d_nchw_ff_o_o_o_o, conv2d_nchw_ff_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_o_i, factor=1)
conv2d_nchw_yy_o_i, conv2d_nchw_yy_i = s[conv2d_nchw].split(conv2d_nchw_yy, factor=1)
conv2d_nchw_yy_o_o_i, conv2d_nchw_yy_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_i, factor=1)
- conv2d_nchw_yy_o_o_o_i, conv2d_nchw_yy_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_i, factor=7)
+ conv2d_nchw_yy_o_o_o_i, conv2d_nchw_yy_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_i, factor=1)
conv2d_nchw_yy_o_o_o_o, conv2d_nchw_yy_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_o_i, factor=1)
conv2d_nchw_xx_o_i, conv2d_nchw_xx_i = s[conv2d_nchw].split(conv2d_nchw_xx, factor=1)
- conv2d_nchw_xx_o_o_i, conv2d_nchw_xx_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_i, factor=1)
+ conv2d_nchw_xx_o_o_i, conv2d_nchw_xx_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_i, factor=7)
conv2d_nchw_xx_o_o_o_i, conv2d_nchw_xx_o_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_o_i, factor=1)
- conv2d_nchw_xx_o_o_o_o, conv2d_nchw_xx_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_o_o_i, factor=7)
- conv2d_nchw_rc_o_i, conv2d_nchw_rc_i = s[conv2d_nchw].split(conv2d_nchw_rc, factor=32)
- conv2d_nchw_rc_o_o, conv2d_nchw_rc_o_i = s[conv2d_nchw].split(conv2d_nchw_rc_o_i, factor=2)
+ conv2d_nchw_xx_o_o_o_o, conv2d_nchw_xx_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_o_o_i, factor=1)
+ conv2d_nchw_rc_o_i, conv2d_nchw_rc_i = s[conv2d_nchw].split(conv2d_nchw_rc, factor=2)
+ conv2d_nchw_rc_o_o, conv2d_nchw_rc_o_i = s[conv2d_nchw].split(conv2d_nchw_rc_o_i, factor=4)
conv2d_nchw_ry_o_i, conv2d_nchw_ry_i = s[conv2d_nchw].split(conv2d_nchw_ry, factor=1)
conv2d_nchw_ry_o_o, conv2d_nchw_ry_o_i = s[conv2d_nchw].split(conv2d_nchw_ry_o_i, factor=1)
conv2d_nchw_rx_o_i, conv2d_nchw_rx_i = s[conv2d_nchw].split(conv2d_nchw_rx, factor=1)
- conv2d_nchw_rx_o_o, conv2d_nchw_rx_o_i = s[conv2d_nchw].split(conv2d_nchw_rx_o_i, factor=1)
+ conv2d_nchw_rx_o_o, conv2d_nchw_rx_o_i = s[conv2d_nchw].split(conv2d_nchw_rx_o_i, factor=3)
s[conv2d_nchw].reorder(conv2d_nchw_nn_o_o_o_o, conv2d_nchw_ff_o_o_o_o, conv2d_nchw_yy_o_o_o_o, conv2d_nchw_xx_o_o_o_o, conv2d_nchw_nn_o_o_o_i, conv2d_nchw_ff_o_o_o_i, conv2d_nchw_yy_o_o_o_i, conv2d_nchw_xx_o_o_o_i, conv2d_nchw_nn_o_o_i, conv2d_nchw_ff_o_o_i, conv2d_nchw_yy_o_o_i, conv2d_nchw_xx_o_o_i, conv2d_nchw_rc_o_o, conv2d_nchw_ry_o_o, conv2d_nchw_rx_o_o, conv2d_nchw_rc_o_i, conv2d_nchw_ry_o_i, conv2d_nchw_rx_o_i, conv2d_nchw_nn_o_i, conv2d_nchw_ff_o_i, conv2d_nchw_yy_o_i, conv2 [...]
compute_i0_o_i, compute_i0_i = s[compute].split(compute_i0, factor=1)
compute_i0_o_o_i, compute_i0_o_i = s[compute].split(compute_i0_o_i, factor=1)
compute_i0_o_o_o, compute_i0_o_o_i = s[compute].split(compute_i0_o_o_i, factor=1)
- compute_i1_o_i, compute_i1_i = s[compute].split(compute_i1, factor=1)
- compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=16)
+ compute_i1_o_i, compute_i1_i = s[compute].split(compute_i1, factor=2)
+ compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=64)
compute_i1_o_o_o, compute_i1_o_o_i = s[compute].split(compute_i1_o_o_i, factor=1)
compute_i2_o_i, compute_i2_i = s[compute].split(compute_i2, factor=1)
- compute_i2_o_o_i, compute_i2_o_i = s[compute].split(compute_i2_o_i, factor=7)
+ compute_i2_o_o_i, compute_i2_o_i = s[compute].split(compute_i2_o_i, factor=1)
compute_i2_o_o_o, compute_i2_o_o_i = s[compute].split(compute_i2_o_o_i, factor=1)
- compute_i3_o_i, compute_i3_i = s[compute].split(compute_i3, factor=1)
+ compute_i3_o_i, compute_i3_i = s[compute].split(compute_i3, factor=7)
compute_i3_o_o_i, compute_i3_o_i = s[compute].split(compute_i3_o_i, factor=1)
- compute_i3_o_o_o, compute_i3_o_o_i = s[compute].split(compute_i3_o_o_i, factor=7)
+ compute_i3_o_o_o, compute_i3_o_o_i = s[compute].split(compute_i3_o_o_i, factor=1)
s[compute].reorder(compute_i0_o_o_o, compute_i1_o_o_o, compute_i2_o_o_o, compute_i3_o_o_o, compute_i0_o_o_i, compute_i1_o_o_i, compute_i2_o_o_i, compute_i3_o_o_i, compute_i0_o_i, compute_i1_o_i, compute_i2_o_i, compute_i3_o_i, compute_i0_i, compute_i1_i, compute_i2_i, compute_i3_i)
s[conv2d_nchw].compute_at(s[compute], compute_i3_o_i)
kernel_shared = s.cache_read(kernel, "shared", [conv2d_nchw])
@@ -432,14 +862,14 @@ They can be used for debugging and learning the behavior of the auto-scheduler.
kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[kernel_shared].fuse(kernel_shared_ax0, kernel_shared_ax1, kernel_shared_ax2, kernel_shared_ax3)
kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
s[kernel_shared].vectorize(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
- kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=112)
+ kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=64)
s[kernel_shared].bind(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis("threadIdx.x"))
pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[pad_temp_shared].fuse(pad_temp_shared_ax0, pad_temp_shared_ax1, pad_temp_shared_ax2, pad_temp_shared_ax3)
- pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
+ pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=4)
s[pad_temp_shared].vectorize(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
- pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=112)
+ pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=64)
s[pad_temp_shared].bind(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis("threadIdx.x"))
- s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, "auto_unroll_max_step", 0)
+ s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, "auto_unroll_max_step", 512)
s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, "unroll_explicit", True)
CUDA source code:
@@ -457,10 +887,10 @@ They can be used for debugging and learning the behavior of the auto-scheduler.
#define int64_t long long
#define uint64_t unsigned long long
#endif
- extern "C" __global__ void __launch_bounds__(112) default_function_kernel0(float* __restrict__ data, float* __restrict__ kernel, float* __restrict__ compute, float* __restrict__ bias) {
- float conv2d_nchw[7];
- __shared__ float pad_temp_shared[3136];
- __shared__ float kernel_shared[1024];
+ extern "C" __global__ void __launch_bounds__(64) default_function_kernel0(float* __restrict__ data, float* __restrict__ kernel, float* __restrict__ compute, float* __restrict__ bias) {
+ float conv2d_nchw[14];
+ __shared__ float pad_temp_shared[72];
+ __shared__ float kernel_shared[3072];
conv2d_nchw[0] = 0.000000e+00f;
conv2d_nchw[1] = 0.000000e+00f;
conv2d_nchw[2] = 0.000000e+00f;
@@ -468,40 +898,420 @@ They can be used for debugging and learning the behavior of the auto-scheduler.
conv2d_nchw[4] = 0.000000e+00f;
conv2d_nchw[5] = 0.000000e+00f;
conv2d_nchw[6] = 0.000000e+00f;
- for (int rc_outer_outer = 0; rc_outer_outer < 8; ++rc_outer_outer) {
+ conv2d_nchw[7] = 0.000000e+00f;
+ conv2d_nchw[8] = 0.000000e+00f;
+ conv2d_nchw[9] = 0.000000e+00f;
+ conv2d_nchw[10] = 0.000000e+00f;
+ conv2d_nchw[11] = 0.000000e+00f;
+ conv2d_nchw[12] = 0.000000e+00f;
+ conv2d_nchw[13] = 0.000000e+00f;
+ for (int rc_outer_outer = 0; rc_outer_outer < 64; ++rc_outer_outer) {
for (int ry_outer_outer = 0; ry_outer_outer < 3; ++ry_outer_outer) {
- for (int rx_outer_outer = 0; rx_outer_outer < 3; ++rx_outer_outer) {
- __syncthreads();
- for (int ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer = 0; ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer < 28; ++ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer) {
- pad_temp_shared[((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 112) + ((int)threadIdx.x))] = (((((1 <= (ry_outer_outer + (((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 2) + (((int)threadIdx.x) / 7)) % 7))) && ((ry_outer_outer + (((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 2) + (((int)threadIdx.x) / 7)) % 7)) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 3136) + (a [...]
- }
- for (int ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer_1 = 0; ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer_1 < 10; ++ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer_1) {
- if (((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer_1 * 7) + (((int)threadIdx.x) >> 4)) < 64) {
- kernel_shared[((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer_1 * 112) + ((int)threadIdx.x))] = kernel[((((((((int)blockIdx.x) * 73728) + ((((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer_1 * 7) + (((int)threadIdx.x) >> 4)) >> 2) * 4608)) + (rc_outer_outer * 576)) + ((((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer_1 * 48) + ((int)threadIdx.x)) & 63) * 9)) + (ry_outer_outer * 3)) + rx_outer_outer)];
- }
- }
- __syncthreads();
- for (int rc_outer_inner = 0; rc_outer_inner < 2; ++rc_outer_inner) {
- for (int rc_inner = 0; rc_inner < 32; ++rc_inner) {
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[(((rc_outer_inner * 1568) + (rc_inner * 49)) + ((((int)threadIdx.x) % 7) * 7))] * kernel_shared[((((((int)threadIdx.x) / 7) * 64) + (rc_outer_inner * 32)) + rc_inner)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((rc_outer_inner * 1568) + (rc_inner * 49)) + ((((int)threadIdx.x) % 7) * 7)) + 1)] * kernel_shared[((((((int)threadIdx.x) / 7) * 64) + (rc_outer_inner * 32)) + rc_inner)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((rc_outer_inner * 1568) + (rc_inner * 49)) + ((((int)threadIdx.x) % 7) * 7)) + 2)] * kernel_shared[((((((int)threadIdx.x) / 7) * 64) + (rc_outer_inner * 32)) + rc_inner)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((rc_outer_inner * 1568) + (rc_inner * 49)) + ((((int)threadIdx.x) % 7) * 7)) + 3)] * kernel_shared[((((((int)threadIdx.x) / 7) * 64) + (rc_outer_inner * 32)) + rc_inner)]));
- conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((rc_outer_inner * 1568) + (rc_inner * 49)) + ((((int)threadIdx.x) % 7) * 7)) + 4)] * kernel_shared[((((((int)threadIdx.x) / 7) * 64) + (rc_outer_inner * 32)) + rc_inner)]));
- conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((rc_outer_inner * 1568) + (rc_inner * 49)) + ((((int)threadIdx.x) % 7) * 7)) + 5)] * kernel_shared[((((((int)threadIdx.x) / 7) * 64) + (rc_outer_inner * 32)) + rc_inner)]));
- conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((rc_outer_inner * 1568) + (rc_inner * 49)) + ((((int)threadIdx.x) % 7) * 7)) + 6)] * kernel_shared[((((((int)threadIdx.x) / 7) * 64) + (rc_outer_inner * 32)) + rc_inner)]));
- }
- }
+ __syncthreads();
+ if (((int)threadIdx.x) < 18) {
+ pad_temp_shared[(((int)threadIdx.x) * 4)] = (((((1 <= (ry_outer_outer + (((int)blockIdx.x) % 7))) && ((ry_outer_outer + (((int)blockIdx.x) % 7)) < 8)) && (1 <= ((((int)threadIdx.x) * 4) % 9))) && (((((int)threadIdx.x) * 4) % 9) < 8)) ? data[((((((rc_outer_outer * 392) + (((((int)threadIdx.x) * 4) / 9) * 49)) + (ry_outer_outer * 7)) + ((((int)blockIdx.x) % 7) * 7)) + ((((int)threadIdx.x) * 4) % 9)) - 8)] : 0.000000e+00f);
}
+ if (((int)threadIdx.x) < 18) {
+ pad_temp_shared[((((int)threadIdx.x) * 4) + 1)] = (((((1 <= (ry_outer_outer + (((int)blockIdx.x) % 7))) && ((ry_outer_outer + (((int)blockIdx.x) % 7)) < 8)) && (1 <= (((((int)threadIdx.x) * 4) + 1) % 9))) && ((((((int)threadIdx.x) * 4) + 1) % 9) < 8)) ? data[((((((rc_outer_outer * 392) + ((((((int)threadIdx.x) * 4) + 1) / 9) * 49)) + (ry_outer_outer * 7)) + ((((int)blockIdx.x) % 7) * 7)) + (((((int)threadIdx.x) * 4) + 1) % 9)) - 8)] : 0.000000e+00f);
+ }
+ if (((int)threadIdx.x) < 18) {
+ pad_temp_shared[((((int)threadIdx.x) * 4) + 2)] = (((((1 <= (ry_outer_outer + (((int)blockIdx.x) % 7))) && ((ry_outer_outer + (((int)blockIdx.x) % 7)) < 8)) && (1 <= (((((int)threadIdx.x) * 4) + 2) % 9))) && ((((((int)threadIdx.x) * 4) + 2) % 9) < 8)) ? data[((((((rc_outer_outer * 392) + ((((((int)threadIdx.x) * 4) + 2) / 9) * 49)) + (ry_outer_outer * 7)) + ((((int)blockIdx.x) % 7) * 7)) + (((((int)threadIdx.x) * 4) + 2) % 9)) - 8)] : 0.000000e+00f);
+ }
+ if (((int)threadIdx.x) < 18) {
+ pad_temp_shared[((((int)threadIdx.x) * 4) + 3)] = (((((1 <= (ry_outer_outer + (((int)blockIdx.x) % 7))) && ((ry_outer_outer + (((int)blockIdx.x) % 7)) < 8)) && (1 <= (((((int)threadIdx.x) * 4) + 3) % 9))) && ((((((int)threadIdx.x) * 4) + 3) % 9) < 8)) ? data[((((((rc_outer_outer * 392) + ((((((int)threadIdx.x) * 4) + 3) / 9) * 49)) + (ry_outer_outer * 7)) + ((((int)blockIdx.x) % 7) * 7)) + (((((int)threadIdx.x) * 4) + 3) % 9)) - 8)] : 0.000000e+00f);
+ }
+ kernel_shared[((int)threadIdx.x)] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 64) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 64) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 128) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 128) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 192)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 36864)];
+ kernel_shared[(((((((int)threadIdx.x) + 256) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 256) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 320) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 320) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 384)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 73728)];
+ kernel_shared[(((((((int)threadIdx.x) + 448) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 448) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 512) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 512) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 576)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 110592)];
+ kernel_shared[(((((((int)threadIdx.x) + 640) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 640) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 704) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 704) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 768)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 147456)];
+ kernel_shared[(((((((int)threadIdx.x) + 832) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 832) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 896) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 896) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 960)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 184320)];
+ kernel_shared[(((((((int)threadIdx.x) + 1024) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1024) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 1088) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1088) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 1152)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 221184)];
+ kernel_shared[(((((((int)threadIdx.x) + 1216) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1216) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 1280) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1280) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 1344)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 258048)];
+ kernel_shared[(((((((int)threadIdx.x) + 1408) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1408) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 1472) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1472) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 1536)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 294912)];
+ kernel_shared[(((((((int)threadIdx.x) + 1600) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1600) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 1664) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1664) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 1728)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 331776)];
+ kernel_shared[(((((((int)threadIdx.x) + 1792) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1792) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 1856) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1856) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 1920)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 368640)];
+ kernel_shared[(((((((int)threadIdx.x) + 1984) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1984) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 2048) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2048) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 2112)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 405504)];
+ kernel_shared[(((((((int)threadIdx.x) + 2176) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2176) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 2240) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2240) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 2304)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 442368)];
+ kernel_shared[(((((((int)threadIdx.x) + 2368) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2368) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 2432) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2432) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 2496)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 479232)];
+ kernel_shared[(((((((int)threadIdx.x) + 2560) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2560) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 2624) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2624) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 2688)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 516096)];
+ kernel_shared[(((((((int)threadIdx.x) + 2752) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2752) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 2816) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2816) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 2880)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 552960)];
+ kernel_shared[(((((((int)threadIdx.x) + 2944) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2944) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 3008) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 3008) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ __syncthreads();
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[0] * kernel_shared[(((int)threadIdx.x) * 48)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[9] * kernel_shared[((((int)threadIdx.x) * 48) + 3)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[1] * kernel_shared[(((int)threadIdx.x) * 48)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[10] * kernel_shared[((((int)threadIdx.x) * 48) + 3)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[2] * kernel_shared[(((int)threadIdx.x) * 48)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[11] * kernel_shared[((((int)threadIdx.x) * 48) + 3)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[3] * kernel_shared[(((int)threadIdx.x) * 48)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[12] * kernel_shared[((((int)threadIdx.x) * 48) + 3)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[4] * kernel_shared[(((int)threadIdx.x) * 48)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[13] * kernel_shared[((((int)threadIdx.x) * 48) + 3)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[5] * kernel_shared[(((int)threadIdx.x) * 48)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[14] * kernel_shared[((((int)threadIdx.x) * 48) + 3)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[6] * kernel_shared[(((int)threadIdx.x) * 48)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[15] * kernel_shared[((((int)threadIdx.x) * 48) + 3)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[0] * kernel_shared[((((int)threadIdx.x) * 48) + 24)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[9] * kernel_shared[((((int)threadIdx.x) * 48) + 27)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[1] * kernel_shared[((((int)threadIdx.x) * 48) + 24)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[10] * kernel_shared[((((int)threadIdx.x) * 48) + 27)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[2] * kernel_shared[((((int)threadIdx.x) * 48) + 24)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[11] * kernel_shared[((((int)threadIdx.x) * 48) + 27)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[3] * kernel_shared[((((int)threadIdx.x) * 48) + 24)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[12] * kernel_shared[((((int)threadIdx.x) * 48) + 27)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[4] * kernel_shared[((((int)threadIdx.x) * 48) + 24)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[13] * kernel_shared[((((int)threadIdx.x) * 48) + 27)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[5] * kernel_shared[((((int)threadIdx.x) * 48) + 24)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[14] * kernel_shared[((((int)threadIdx.x) * 48) + 27)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[6] * kernel_shared[((((int)threadIdx.x) * 48) + 24)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[15] * kernel_shared[((((int)threadIdx.x) * 48) + 27)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[1] * kernel_shared[((((int)threadIdx.x) * 48) + 1)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[10] * kernel_shared[((((int)threadIdx.x) * 48) + 4)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[2] * kernel_shared[((((int)threadIdx.x) * 48) + 1)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[11] * kernel_shared[((((int)threadIdx.x) * 48) + 4)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[3] * kernel_shared[((((int)threadIdx.x) * 48) + 1)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[12] * kernel_shared[((((int)threadIdx.x) * 48) + 4)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[4] * kernel_shared[((((int)threadIdx.x) * 48) + 1)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[13] * kernel_shared[((((int)threadIdx.x) * 48) + 4)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[5] * kernel_shared[((((int)threadIdx.x) * 48) + 1)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[14] * kernel_shared[((((int)threadIdx.x) * 48) + 4)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[6] * kernel_shared[((((int)threadIdx.x) * 48) + 1)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[15] * kernel_shared[((((int)threadIdx.x) * 48) + 4)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[7] * kernel_shared[((((int)threadIdx.x) * 48) + 1)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[16] * kernel_shared[((((int)threadIdx.x) * 48) + 4)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[1] * kernel_shared[((((int)threadIdx.x) * 48) + 25)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[10] * kernel_shared[((((int)threadIdx.x) * 48) + 28)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[2] * kernel_shared[((((int)threadIdx.x) * 48) + 25)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[11] * kernel_shared[((((int)threadIdx.x) * 48) + 28)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[3] * kernel_shared[((((int)threadIdx.x) * 48) + 25)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[12] * kernel_shared[((((int)threadIdx.x) * 48) + 28)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[4] * kernel_shared[((((int)threadIdx.x) * 48) + 25)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[13] * kernel_shared[((((int)threadIdx.x) * 48) + 28)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[5] * kernel_shared[((((int)threadIdx.x) * 48) + 25)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[14] * kernel_shared[((((int)threadIdx.x) * 48) + 28)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[6] * kernel_shared[((((int)threadIdx.x) * 48) + 25)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[15] * kernel_shared[((((int)threadIdx.x) * 48) + 28)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[7] * kernel_shared[((((int)threadIdx.x) * 48) + 25)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[16] * kernel_shared[((((int)threadIdx.x) * 48) + 28)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[2] * kernel_shared[((((int)threadIdx.x) * 48) + 2)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[11] * kernel_shared[((((int)threadIdx.x) * 48) + 5)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[3] * kernel_shared[((((int)threadIdx.x) * 48) + 2)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[12] * kernel_shared[((((int)threadIdx.x) * 48) + 5)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[4] * kernel_shared[((((int)threadIdx.x) * 48) + 2)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[13] * kernel_shared[((((int)threadIdx.x) * 48) + 5)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[5] * kernel_shared[((((int)threadIdx.x) * 48) + 2)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[14] * kernel_shared[((((int)threadIdx.x) * 48) + 5)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[6] * kernel_shared[((((int)threadIdx.x) * 48) + 2)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[15] * kernel_shared[((((int)threadIdx.x) * 48) + 5)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[7] * kernel_shared[((((int)threadIdx.x) * 48) + 2)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[16] * kernel_shared[((((int)threadIdx.x) * 48) + 5)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[8] * kernel_shared[((((int)threadIdx.x) * 48) + 2)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[17] * kernel_shared[((((int)threadIdx.x) * 48) + 5)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[2] * kernel_shared[((((int)threadIdx.x) * 48) + 26)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[11] * kernel_shared[((((int)threadIdx.x) * 48) + 29)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[3] * kernel_shared[((((int)threadIdx.x) * 48) + 26)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[12] * kernel_shared[((((int)threadIdx.x) * 48) + 29)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[4] * kernel_shared[((((int)threadIdx.x) * 48) + 26)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[13] * kernel_shared[((((int)threadIdx.x) * 48) + 29)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[5] * kernel_shared[((((int)threadIdx.x) * 48) + 26)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[14] * kernel_shared[((((int)threadIdx.x) * 48) + 29)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[6] * kernel_shared[((((int)threadIdx.x) * 48) + 26)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[15] * kernel_shared[((((int)threadIdx.x) * 48) + 29)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[7] * kernel_shared[((((int)threadIdx.x) * 48) + 26)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[16] * kernel_shared[((((int)threadIdx.x) * 48) + 29)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[8] * kernel_shared[((((int)threadIdx.x) * 48) + 26)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[17] * kernel_shared[((((int)threadIdx.x) * 48) + 29)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[18] * kernel_shared[((((int)threadIdx.x) * 48) + 6)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[27] * kernel_shared[((((int)threadIdx.x) * 48) + 9)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[19] * kernel_shared[((((int)threadIdx.x) * 48) + 6)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[28] * kernel_shared[((((int)threadIdx.x) * 48) + 9)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[20] * kernel_shared[((((int)threadIdx.x) * 48) + 6)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[29] * kernel_shared[((((int)threadIdx.x) * 48) + 9)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[21] * kernel_shared[((((int)threadIdx.x) * 48) + 6)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[30] * kernel_shared[((((int)threadIdx.x) * 48) + 9)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[22] * kernel_shared[((((int)threadIdx.x) * 48) + 6)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[31] * kernel_shared[((((int)threadIdx.x) * 48) + 9)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[23] * kernel_shared[((((int)threadIdx.x) * 48) + 6)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[32] * kernel_shared[((((int)threadIdx.x) * 48) + 9)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[24] * kernel_shared[((((int)threadIdx.x) * 48) + 6)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[33] * kernel_shared[((((int)threadIdx.x) * 48) + 9)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[18] * kernel_shared[((((int)threadIdx.x) * 48) + 30)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[27] * kernel_shared[((((int)threadIdx.x) * 48) + 33)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[19] * kernel_shared[((((int)threadIdx.x) * 48) + 30)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[28] * kernel_shared[((((int)threadIdx.x) * 48) + 33)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[20] * kernel_shared[((((int)threadIdx.x) * 48) + 30)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[29] * kernel_shared[((((int)threadIdx.x) * 48) + 33)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[21] * kernel_shared[((((int)threadIdx.x) * 48) + 30)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[30] * kernel_shared[((((int)threadIdx.x) * 48) + 33)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[22] * kernel_shared[((((int)threadIdx.x) * 48) + 30)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[31] * kernel_shared[((((int)threadIdx.x) * 48) + 33)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[23] * kernel_shared[((((int)threadIdx.x) * 48) + 30)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[32] * kernel_shared[((((int)threadIdx.x) * 48) + 33)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[24] * kernel_shared[((((int)threadIdx.x) * 48) + 30)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[33] * kernel_shared[((((int)threadIdx.x) * 48) + 33)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[19] * kernel_shared[((((int)threadIdx.x) * 48) + 7)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[28] * kernel_shared[((((int)threadIdx.x) * 48) + 10)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[20] * kernel_shared[((((int)threadIdx.x) * 48) + 7)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[29] * kernel_shared[((((int)threadIdx.x) * 48) + 10)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[21] * kernel_shared[((((int)threadIdx.x) * 48) + 7)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[30] * kernel_shared[((((int)threadIdx.x) * 48) + 10)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[22] * kernel_shared[((((int)threadIdx.x) * 48) + 7)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[31] * kernel_shared[((((int)threadIdx.x) * 48) + 10)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[23] * kernel_shared[((((int)threadIdx.x) * 48) + 7)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[32] * kernel_shared[((((int)threadIdx.x) * 48) + 10)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[24] * kernel_shared[((((int)threadIdx.x) * 48) + 7)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[33] * kernel_shared[((((int)threadIdx.x) * 48) + 10)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[25] * kernel_shared[((((int)threadIdx.x) * 48) + 7)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[34] * kernel_shared[((((int)threadIdx.x) * 48) + 10)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[19] * kernel_shared[((((int)threadIdx.x) * 48) + 31)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[28] * kernel_shared[((((int)threadIdx.x) * 48) + 34)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[20] * kernel_shared[((((int)threadIdx.x) * 48) + 31)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[29] * kernel_shared[((((int)threadIdx.x) * 48) + 34)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[21] * kernel_shared[((((int)threadIdx.x) * 48) + 31)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[30] * kernel_shared[((((int)threadIdx.x) * 48) + 34)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[22] * kernel_shared[((((int)threadIdx.x) * 48) + 31)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[31] * kernel_shared[((((int)threadIdx.x) * 48) + 34)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[23] * kernel_shared[((((int)threadIdx.x) * 48) + 31)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[32] * kernel_shared[((((int)threadIdx.x) * 48) + 34)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[24] * kernel_shared[((((int)threadIdx.x) * 48) + 31)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[33] * kernel_shared[((((int)threadIdx.x) * 48) + 34)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[25] * kernel_shared[((((int)threadIdx.x) * 48) + 31)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[34] * kernel_shared[((((int)threadIdx.x) * 48) + 34)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[20] * kernel_shared[((((int)threadIdx.x) * 48) + 8)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[29] * kernel_shared[((((int)threadIdx.x) * 48) + 11)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[21] * kernel_shared[((((int)threadIdx.x) * 48) + 8)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[30] * kernel_shared[((((int)threadIdx.x) * 48) + 11)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[22] * kernel_shared[((((int)threadIdx.x) * 48) + 8)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[31] * kernel_shared[((((int)threadIdx.x) * 48) + 11)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[23] * kernel_shared[((((int)threadIdx.x) * 48) + 8)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[32] * kernel_shared[((((int)threadIdx.x) * 48) + 11)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[24] * kernel_shared[((((int)threadIdx.x) * 48) + 8)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[33] * kernel_shared[((((int)threadIdx.x) * 48) + 11)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[25] * kernel_shared[((((int)threadIdx.x) * 48) + 8)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[34] * kernel_shared[((((int)threadIdx.x) * 48) + 11)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[26] * kernel_shared[((((int)threadIdx.x) * 48) + 8)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[35] * kernel_shared[((((int)threadIdx.x) * 48) + 11)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[20] * kernel_shared[((((int)threadIdx.x) * 48) + 32)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[29] * kernel_shared[((((int)threadIdx.x) * 48) + 35)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[21] * kernel_shared[((((int)threadIdx.x) * 48) + 32)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[30] * kernel_shared[((((int)threadIdx.x) * 48) + 35)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[22] * kernel_shared[((((int)threadIdx.x) * 48) + 32)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[31] * kernel_shared[((((int)threadIdx.x) * 48) + 35)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[23] * kernel_shared[((((int)threadIdx.x) * 48) + 32)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[32] * kernel_shared[((((int)threadIdx.x) * 48) + 35)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[24] * kernel_shared[((((int)threadIdx.x) * 48) + 32)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[33] * kernel_shared[((((int)threadIdx.x) * 48) + 35)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[25] * kernel_shared[((((int)threadIdx.x) * 48) + 32)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[34] * kernel_shared[((((int)threadIdx.x) * 48) + 35)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[26] * kernel_shared[((((int)threadIdx.x) * 48) + 32)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[35] * kernel_shared[((((int)threadIdx.x) * 48) + 35)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[36] * kernel_shared[((((int)threadIdx.x) * 48) + 12)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[45] * kernel_shared[((((int)threadIdx.x) * 48) + 15)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[37] * kernel_shared[((((int)threadIdx.x) * 48) + 12)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[46] * kernel_shared[((((int)threadIdx.x) * 48) + 15)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[38] * kernel_shared[((((int)threadIdx.x) * 48) + 12)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[47] * kernel_shared[((((int)threadIdx.x) * 48) + 15)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[39] * kernel_shared[((((int)threadIdx.x) * 48) + 12)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[48] * kernel_shared[((((int)threadIdx.x) * 48) + 15)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[40] * kernel_shared[((((int)threadIdx.x) * 48) + 12)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[49] * kernel_shared[((((int)threadIdx.x) * 48) + 15)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[41] * kernel_shared[((((int)threadIdx.x) * 48) + 12)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[50] * kernel_shared[((((int)threadIdx.x) * 48) + 15)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[42] * kernel_shared[((((int)threadIdx.x) * 48) + 12)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[51] * kernel_shared[((((int)threadIdx.x) * 48) + 15)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[36] * kernel_shared[((((int)threadIdx.x) * 48) + 36)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[45] * kernel_shared[((((int)threadIdx.x) * 48) + 39)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[37] * kernel_shared[((((int)threadIdx.x) * 48) + 36)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[46] * kernel_shared[((((int)threadIdx.x) * 48) + 39)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[38] * kernel_shared[((((int)threadIdx.x) * 48) + 36)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[47] * kernel_shared[((((int)threadIdx.x) * 48) + 39)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[39] * kernel_shared[((((int)threadIdx.x) * 48) + 36)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[48] * kernel_shared[((((int)threadIdx.x) * 48) + 39)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[40] * kernel_shared[((((int)threadIdx.x) * 48) + 36)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[49] * kernel_shared[((((int)threadIdx.x) * 48) + 39)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[41] * kernel_shared[((((int)threadIdx.x) * 48) + 36)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[50] * kernel_shared[((((int)threadIdx.x) * 48) + 39)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[42] * kernel_shared[((((int)threadIdx.x) * 48) + 36)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[51] * kernel_shared[((((int)threadIdx.x) * 48) + 39)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[37] * kernel_shared[((((int)threadIdx.x) * 48) + 13)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[46] * kernel_shared[((((int)threadIdx.x) * 48) + 16)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[38] * kernel_shared[((((int)threadIdx.x) * 48) + 13)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[47] * kernel_shared[((((int)threadIdx.x) * 48) + 16)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[39] * kernel_shared[((((int)threadIdx.x) * 48) + 13)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[48] * kernel_shared[((((int)threadIdx.x) * 48) + 16)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[40] * kernel_shared[((((int)threadIdx.x) * 48) + 13)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[49] * kernel_shared[((((int)threadIdx.x) * 48) + 16)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[41] * kernel_shared[((((int)threadIdx.x) * 48) + 13)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[50] * kernel_shared[((((int)threadIdx.x) * 48) + 16)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[42] * kernel_shared[((((int)threadIdx.x) * 48) + 13)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[51] * kernel_shared[((((int)threadIdx.x) * 48) + 16)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[43] * kernel_shared[((((int)threadIdx.x) * 48) + 13)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[52] * kernel_shared[((((int)threadIdx.x) * 48) + 16)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[37] * kernel_shared[((((int)threadIdx.x) * 48) + 37)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[46] * kernel_shared[((((int)threadIdx.x) * 48) + 40)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[38] * kernel_shared[((((int)threadIdx.x) * 48) + 37)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[47] * kernel_shared[((((int)threadIdx.x) * 48) + 40)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[39] * kernel_shared[((((int)threadIdx.x) * 48) + 37)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[48] * kernel_shared[((((int)threadIdx.x) * 48) + 40)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[40] * kernel_shared[((((int)threadIdx.x) * 48) + 37)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[49] * kernel_shared[((((int)threadIdx.x) * 48) + 40)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[41] * kernel_shared[((((int)threadIdx.x) * 48) + 37)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[50] * kernel_shared[((((int)threadIdx.x) * 48) + 40)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[42] * kernel_shared[((((int)threadIdx.x) * 48) + 37)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[51] * kernel_shared[((((int)threadIdx.x) * 48) + 40)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[43] * kernel_shared[((((int)threadIdx.x) * 48) + 37)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[52] * kernel_shared[((((int)threadIdx.x) * 48) + 40)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[38] * kernel_shared[((((int)threadIdx.x) * 48) + 14)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[47] * kernel_shared[((((int)threadIdx.x) * 48) + 17)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[39] * kernel_shared[((((int)threadIdx.x) * 48) + 14)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[48] * kernel_shared[((((int)threadIdx.x) * 48) + 17)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[40] * kernel_shared[((((int)threadIdx.x) * 48) + 14)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[49] * kernel_shared[((((int)threadIdx.x) * 48) + 17)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[41] * kernel_shared[((((int)threadIdx.x) * 48) + 14)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[50] * kernel_shared[((((int)threadIdx.x) * 48) + 17)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[42] * kernel_shared[((((int)threadIdx.x) * 48) + 14)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[51] * kernel_shared[((((int)threadIdx.x) * 48) + 17)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[43] * kernel_shared[((((int)threadIdx.x) * 48) + 14)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[52] * kernel_shared[((((int)threadIdx.x) * 48) + 17)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[44] * kernel_shared[((((int)threadIdx.x) * 48) + 14)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[53] * kernel_shared[((((int)threadIdx.x) * 48) + 17)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[38] * kernel_shared[((((int)threadIdx.x) * 48) + 38)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[47] * kernel_shared[((((int)threadIdx.x) * 48) + 41)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[39] * kernel_shared[((((int)threadIdx.x) * 48) + 38)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[48] * kernel_shared[((((int)threadIdx.x) * 48) + 41)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[40] * kernel_shared[((((int)threadIdx.x) * 48) + 38)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[49] * kernel_shared[((((int)threadIdx.x) * 48) + 41)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[41] * kernel_shared[((((int)threadIdx.x) * 48) + 38)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[50] * kernel_shared[((((int)threadIdx.x) * 48) + 41)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[42] * kernel_shared[((((int)threadIdx.x) * 48) + 38)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[51] * kernel_shared[((((int)threadIdx.x) * 48) + 41)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[43] * kernel_shared[((((int)threadIdx.x) * 48) + 38)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[52] * kernel_shared[((((int)threadIdx.x) * 48) + 41)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[44] * kernel_shared[((((int)threadIdx.x) * 48) + 38)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[53] * kernel_shared[((((int)threadIdx.x) * 48) + 41)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[54] * kernel_shared[((((int)threadIdx.x) * 48) + 18)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[63] * kernel_shared[((((int)threadIdx.x) * 48) + 21)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[55] * kernel_shared[((((int)threadIdx.x) * 48) + 18)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[64] * kernel_shared[((((int)threadIdx.x) * 48) + 21)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[56] * kernel_shared[((((int)threadIdx.x) * 48) + 18)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[65] * kernel_shared[((((int)threadIdx.x) * 48) + 21)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[57] * kernel_shared[((((int)threadIdx.x) * 48) + 18)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[66] * kernel_shared[((((int)threadIdx.x) * 48) + 21)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[58] * kernel_shared[((((int)threadIdx.x) * 48) + 18)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[67] * kernel_shared[((((int)threadIdx.x) * 48) + 21)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[59] * kernel_shared[((((int)threadIdx.x) * 48) + 18)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[68] * kernel_shared[((((int)threadIdx.x) * 48) + 21)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[60] * kernel_shared[((((int)threadIdx.x) * 48) + 18)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[69] * kernel_shared[((((int)threadIdx.x) * 48) + 21)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[54] * kernel_shared[((((int)threadIdx.x) * 48) + 42)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[63] * kernel_shared[((((int)threadIdx.x) * 48) + 45)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[55] * kernel_shared[((((int)threadIdx.x) * 48) + 42)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[64] * kernel_shared[((((int)threadIdx.x) * 48) + 45)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[56] * kernel_shared[((((int)threadIdx.x) * 48) + 42)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[65] * kernel_shared[((((int)threadIdx.x) * 48) + 45)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[57] * kernel_shared[((((int)threadIdx.x) * 48) + 42)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[66] * kernel_shared[((((int)threadIdx.x) * 48) + 45)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[58] * kernel_shared[((((int)threadIdx.x) * 48) + 42)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[67] * kernel_shared[((((int)threadIdx.x) * 48) + 45)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[59] * kernel_shared[((((int)threadIdx.x) * 48) + 42)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[68] * kernel_shared[((((int)threadIdx.x) * 48) + 45)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[60] * kernel_shared[((((int)threadIdx.x) * 48) + 42)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[69] * kernel_shared[((((int)threadIdx.x) * 48) + 45)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[55] * kernel_shared[((((int)threadIdx.x) * 48) + 19)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[64] * kernel_shared[((((int)threadIdx.x) * 48) + 22)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[56] * kernel_shared[((((int)threadIdx.x) * 48) + 19)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[65] * kernel_shared[((((int)threadIdx.x) * 48) + 22)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[57] * kernel_shared[((((int)threadIdx.x) * 48) + 19)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[66] * kernel_shared[((((int)threadIdx.x) * 48) + 22)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[58] * kernel_shared[((((int)threadIdx.x) * 48) + 19)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[67] * kernel_shared[((((int)threadIdx.x) * 48) + 22)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[59] * kernel_shared[((((int)threadIdx.x) * 48) + 19)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[68] * kernel_shared[((((int)threadIdx.x) * 48) + 22)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[60] * kernel_shared[((((int)threadIdx.x) * 48) + 19)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[69] * kernel_shared[((((int)threadIdx.x) * 48) + 22)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[61] * kernel_shared[((((int)threadIdx.x) * 48) + 19)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[70] * kernel_shared[((((int)threadIdx.x) * 48) + 22)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[55] * kernel_shared[((((int)threadIdx.x) * 48) + 43)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[64] * kernel_shared[((((int)threadIdx.x) * 48) + 46)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[56] * kernel_shared[((((int)threadIdx.x) * 48) + 43)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[65] * kernel_shared[((((int)threadIdx.x) * 48) + 46)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[57] * kernel_shared[((((int)threadIdx.x) * 48) + 43)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[66] * kernel_shared[((((int)threadIdx.x) * 48) + 46)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[58] * kernel_shared[((((int)threadIdx.x) * 48) + 43)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[67] * kernel_shared[((((int)threadIdx.x) * 48) + 46)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[59] * kernel_shared[((((int)threadIdx.x) * 48) + 43)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[68] * kernel_shared[((((int)threadIdx.x) * 48) + 46)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[60] * kernel_shared[((((int)threadIdx.x) * 48) + 43)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[69] * kernel_shared[((((int)threadIdx.x) * 48) + 46)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[61] * kernel_shared[((((int)threadIdx.x) * 48) + 43)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[70] * kernel_shared[((((int)threadIdx.x) * 48) + 46)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[56] * kernel_shared[((((int)threadIdx.x) * 48) + 20)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[65] * kernel_shared[((((int)threadIdx.x) * 48) + 23)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[57] * kernel_shared[((((int)threadIdx.x) * 48) + 20)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[66] * kernel_shared[((((int)threadIdx.x) * 48) + 23)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[58] * kernel_shared[((((int)threadIdx.x) * 48) + 20)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[67] * kernel_shared[((((int)threadIdx.x) * 48) + 23)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[59] * kernel_shared[((((int)threadIdx.x) * 48) + 20)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[68] * kernel_shared[((((int)threadIdx.x) * 48) + 23)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[60] * kernel_shared[((((int)threadIdx.x) * 48) + 20)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[69] * kernel_shared[((((int)threadIdx.x) * 48) + 23)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[61] * kernel_shared[((((int)threadIdx.x) * 48) + 20)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[70] * kernel_shared[((((int)threadIdx.x) * 48) + 23)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[62] * kernel_shared[((((int)threadIdx.x) * 48) + 20)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[71] * kernel_shared[((((int)threadIdx.x) * 48) + 23)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[56] * kernel_shared[((((int)threadIdx.x) * 48) + 44)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[65] * kernel_shared[((((int)threadIdx.x) * 48) + 47)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[57] * kernel_shared[((((int)threadIdx.x) * 48) + 44)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[66] * kernel_shared[((((int)threadIdx.x) * 48) + 47)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[58] * kernel_shared[((((int)threadIdx.x) * 48) + 44)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[67] * kernel_shared[((((int)threadIdx.x) * 48) + 47)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[59] * kernel_shared[((((int)threadIdx.x) * 48) + 44)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[68] * kernel_shared[((((int)threadIdx.x) * 48) + 47)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[60] * kernel_shared[((((int)threadIdx.x) * 48) + 44)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[69] * kernel_shared[((((int)threadIdx.x) * 48) + 47)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[61] * kernel_shared[((((int)threadIdx.x) * 48) + 44)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[70] * kernel_shared[((((int)threadIdx.x) * 48) + 47)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[62] * kernel_shared[((((int)threadIdx.x) * 48) + 44)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[71] * kernel_shared[((((int)threadIdx.x) * 48) + 47)]));
+ }
+ }
+ for (int i1_inner = 0; i1_inner < 2; ++i1_inner) {
+ for (int i3_inner = 0; i3_inner < 7; ++i3_inner) {
+ compute[((((((((int)blockIdx.x) / 7) * 6272) + (((int)threadIdx.x) * 98)) + (i1_inner * 49)) + ((((int)blockIdx.x) % 7) * 7)) + i3_inner)] = max((conv2d_nchw[((i1_inner * 7) + i3_inner)] + bias[((((((int)blockIdx.x) / 7) * 128) + (((int)threadIdx.x) * 2)) + i1_inner)]), 0.000000e+00f);
}
}
- compute[((((int)blockIdx.x) * 784) + (((int)threadIdx.x) * 7))] = max((conv2d_nchw[0] + bias[((((int)blockIdx.x) * 16) + (((int)threadIdx.x) / 7))]), 0.000000e+00f);
- compute[(((((int)blockIdx.x) * 784) + (((int)threadIdx.x) * 7)) + 1)] = max((conv2d_nchw[1] + bias[((((int)blockIdx.x) * 16) + (((int)threadIdx.x) / 7))]), 0.000000e+00f);
- compute[(((((int)blockIdx.x) * 784) + (((int)threadIdx.x) * 7)) + 2)] = max((conv2d_nchw[2] + bias[((((int)blockIdx.x) * 16) + (((int)threadIdx.x) / 7))]), 0.000000e+00f);
- compute[(((((int)blockIdx.x) * 784) + (((int)threadIdx.x) * 7)) + 3)] = max((conv2d_nchw[3] + bias[((((int)blockIdx.x) * 16) + (((int)threadIdx.x) / 7))]), 0.000000e+00f);
- compute[(((((int)blockIdx.x) * 784) + (((int)threadIdx.x) * 7)) + 4)] = max((conv2d_nchw[4] + bias[((((int)blockIdx.x) * 16) + (((int)threadIdx.x) / 7))]), 0.000000e+00f);
- compute[(((((int)blockIdx.x) * 784) + (((int)threadIdx.x) * 7)) + 5)] = max((conv2d_nchw[5] + bias[((((int)blockIdx.x) * 16) + (((int)threadIdx.x) / 7))]), 0.000000e+00f);
- compute[(((((int)blockIdx.x) * 784) + (((int)threadIdx.x) * 7)) + 6)] = max((conv2d_nchw[6] + bias[((((int)blockIdx.x) * 16) + (((int)threadIdx.x) / 7))]), 0.000000e+00f);
}
@@ -560,7 +1370,7 @@ In the example below we resume the status and do more 5 trials.
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 6 minutes 3.020 seconds)
+ **Total running time of the script:** ( 6 minutes 17.931 seconds)
.. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py:
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
index 405eb235fd..b8d190fce3 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
@@ -647,7 +647,7 @@ so we can read the log file and load the best schedules.
Evaluate inference time cost...
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 7.9057 7.9119 7.9122 7.8930 0.0090
+ 7.8962 7.8949 7.9029 7.8909 0.0050
@@ -675,7 +675,7 @@ Other Tips
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 7.318 seconds)
+ **Total running time of the script:** ( 1 minutes 9.205 seconds)
.. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_network_cuda.py:
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
index 66fb4a146f..abe67421cb 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
@@ -666,7 +666,7 @@ so we can read the log file and load the best schedules.
Evaluate inference time cost...
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 756.2868 756.3433 756.3818 756.1353 0.1083
+ 758.8034 759.8965 760.0964 756.4172 1.6892
@@ -694,7 +694,7 @@ Other Tips
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 42.858 seconds)
+ **Total running time of the script:** ( 1 minutes 45.180 seconds)
.. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_network_x86.py:
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_sparse_x86.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_sparse_x86.rst.txt
index 4e090de017..691c107f14 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_sparse_x86.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_sparse_x86.rst.txt
@@ -389,26 +389,74 @@ layout transformation, parallelization, vectorization, unrolling, and operator f
@T.prim_func
def main(placeholder: T.Buffer((128, 256), "float32"), placeholder_1: T.Buffer((4916, 16, 1), "float32"), placeholder_2: T.Buffer((4916,), "int32"), placeholder_3: T.Buffer((33,), "int32"), placeholder_4: T.Buffer((128, 512), "float32"), compute: T.Buffer((128, 512), "float32")):
T.func_attr({"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True})
- for i0_outer_i1_outer_fused in T.parallel(128):
- compute_1 = T.allocate([512], "float32", "global")
- compute_2 = T.Buffer((512,), data=compute_1)
- for i_outer_inner, nb_j_inner in T.grid(4, 2):
- for i_inner_init, j_init in T.grid(4, 16):
- compute_2[i_outer_inner * 128 + i_inner_init * 32 + nb_j_inner * 16 + j_init] = T.float32(0)
- for elem_idx, i_inner, j in T.grid(T.Let(placeholder_5[cse_var_1 + 1] - placeholder_5[cse_var_1], where={cse_var_1: i0_outer_i1_outer_fused % 16 * 2 + nb_j_inner}), 4, 16):
- cse_var_1 = T.int32()
+ for i0_outer_i1_outer_fused in T.parallel(16):
+ compute_1 = T.allocate([4096], "float32", "global")
+ compute_2 = T.Buffer((4096,), data=compute_1)
+ for i_outer_inner, nb_j_inner in T.grid(2, 2):
+ for i_inner_init in range(64):
+ cse_var_1: T.int32 = i_outer_inner * 2048 + i_inner_init * 32 + nb_j_inner * 16
+ compute_2[cse_var_1] = T.float32(0)
+ compute_2[cse_var_1 + 1] = T.float32(0)
+ compute_2[cse_var_1 + 2] = T.float32(0)
+ compute_2[cse_var_1 + 3] = T.float32(0)
+ compute_2[cse_var_1 + 4] = T.float32(0)
+ compute_2[cse_var_1 + 5] = T.float32(0)
+ compute_2[cse_var_1 + 6] = T.float32(0)
+ compute_2[cse_var_1 + 7] = T.float32(0)
+ compute_2[cse_var_1 + 8] = T.float32(0)
+ compute_2[cse_var_1 + 9] = T.float32(0)
+ compute_2[cse_var_1 + 10] = T.float32(0)
+ compute_2[cse_var_1 + 11] = T.float32(0)
+ compute_2[cse_var_1 + 12] = T.float32(0)
+ compute_2[cse_var_1 + 13] = T.float32(0)
+ compute_2[cse_var_1 + 14] = T.float32(0)
+ compute_2[cse_var_1 + 15] = T.float32(0)
+ for elem_idx, i_inner in T.grid(T.Let(placeholder_5[cse_var_2 + 1] - placeholder_5[cse_var_2], where={cse_var_2: i0_outer_i1_outer_fused * 2 + nb_j_inner}), 64):
+ cse_var_2 = T.int32()
placeholder_5 = T.Buffer((33,), "int32", data=placeholder_3.data)
- cse_var_3: T.int32 = i0_outer_i1_outer_fused % 16 * 2 + nb_j_inner
- cse_var_2: T.int32 = i_outer_inner * 128 + i_inner * 32 + nb_j_inner * 16 + j
+ cse_var_21: T.int32 = elem_idx * 16
+ cse_var_20: T.int32 = i0_outer_i1_outer_fused * 2 + nb_j_inner
+ cse_var_19: T.int32 = i_outer_inner * 16384 + i_inner * 256
+ cse_var_18: T.int32 = i_outer_inner * 2048 + i_inner * 32 + nb_j_inner * 16
+ cse_var_17: T.int32 = cse_var_18 + 9
+ cse_var_16: T.int32 = cse_var_18 + 8
+ cse_var_15: T.int32 = cse_var_18 + 7
+ cse_var_14: T.int32 = cse_var_18 + 6
+ cse_var_13: T.int32 = cse_var_18 + 5
+ cse_var_12: T.int32 = cse_var_18 + 4
+ cse_var_11: T.int32 = cse_var_18 + 3
+ cse_var_10: T.int32 = cse_var_18 + 2
+ cse_var_9: T.int32 = cse_var_18 + 15
+ cse_var_8: T.int32 = cse_var_18 + 14
+ cse_var_7: T.int32 = cse_var_18 + 13
+ cse_var_6: T.int32 = cse_var_18 + 12
+ cse_var_5: T.int32 = cse_var_18 + 11
+ cse_var_4: T.int32 = cse_var_18 + 10
+ cse_var_3: T.int32 = cse_var_18 + 1
placeholder_6 = T.Buffer((78656,), data=placeholder_1.data)
placeholder_7 = T.Buffer((32768,), data=placeholder.data)
placeholder_8 = T.Buffer((4916,), "int32", data=placeholder_2.data)
- compute_2[cse_var_2] = compute_2[cse_var_2] + placeholder_6[placeholder_5[cse_var_3] * 16 + elem_idx * 16 + j] * T.max(placeholder_7[i0_outer_i1_outer_fused // 16 * 4096 + i_outer_inner * 1024 + i_inner * 256 + placeholder_8[placeholder_5[cse_var_3] + elem_idx]], T.float32(0))
- for i0_inner in range(16):
- cse_var_4: T.int32 = i0_outer_i1_outer_fused // 16 * 8192 + i0_inner * 512 + i0_outer_i1_outer_fused % 16 * 32
+ compute_2[cse_var_18] = compute_2[cse_var_18] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_3] = compute_2[cse_var_3] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 1] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_10] = compute_2[cse_var_10] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 2] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_11] = compute_2[cse_var_11] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 3] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_12] = compute_2[cse_var_12] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 4] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_13] = compute_2[cse_var_13] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 5] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_14] = compute_2[cse_var_14] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 6] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_15] = compute_2[cse_var_15] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 7] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_16] = compute_2[cse_var_16] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 8] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_17] = compute_2[cse_var_17] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 9] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_4] = compute_2[cse_var_4] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 10] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_5] = compute_2[cse_var_5] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 11] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_6] = compute_2[cse_var_6] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 12] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_7] = compute_2[cse_var_7] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 13] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_8] = compute_2[cse_var_8] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 14] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_9] = compute_2[cse_var_9] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 15] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ for i0_inner in range(128):
+ cse_var_22: T.int32 = i0_inner * 512 + i0_outer_i1_outer_fused * 32
compute_3 = T.Buffer((65536,), data=compute.data)
placeholder_5 = T.Buffer((65536,), data=placeholder_4.data)
- compute_3[cse_var_4:cse_var_4 + 32] = T.max(compute_2[i0_inner * 32:i0_inner * 32 + 32] + placeholder_5[cse_var_4:cse_var_4 + 32], T.Broadcast(T.float32(0), 32))
+ compute_3[cse_var_22:cse_var_22 + 32] = T.max(compute_2[i0_inner * 32:i0_inner * 32 + 32] + placeholder_5[cse_var_22:cse_var_22 + 32], T.Broadcast(T.float32(0), 32))
@@ -458,7 +506,7 @@ We build the binary and check its correctness and performance.
.. code-block:: none
- Execution time of this operator: 1.496 ms
+ Execution time of this operator: 1.870 ms
diff --git a/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt b/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
index 3ad456cfac..516e3ae67a 100644
--- a/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
@@ -5,16 +5,16 @@
Computation times
=================
-**00:28.687** total execution time for **how_to_tune_with_autotvm** files:
+**00:44.950** total execution time for **how_to_tune_with_autotvm** files:
+--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``) | 00:28.652 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``) | 00:44.915 | 0.0 MB |
+--------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``) | 00:00.021 | 0.0 MB |
+--------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_cuda.py` (``tune_relay_cuda.py``) | 00:00.005 | 0.0 MB |
+--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_arm.py` (``tune_relay_arm.py``) | 00:00.004 | 0.0 MB |
+| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_arm.py` (``tune_relay_arm.py``) | 00:00.005 | 0.0 MB |
+--------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_mobile_gpu.py` (``tune_relay_mobile_gpu.py``) | 00:00.004 | 0.0 MB |
+--------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt b/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
index d9b8bb5183..9722250819 100644
--- a/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
@@ -390,7 +390,7 @@ for this template
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 4, 1, 16]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 8, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1270006
+ tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 16, 8]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 1, 64]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1128978
No: 2 GFLOPS: 0.00/0.00 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
@@ -513,8 +513,10 @@ for this template
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 1, 32]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 128, 4]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3579805
- No: 3 GFLOPS: 0.00/0.00 result: Traceback (most recent call last):
+ tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 8, 2, 4]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 64, 4]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,5896331
+ No: 3 GFLOPS: 17.53/17.53 result: MeasureResult(costs=(0.01320753811111111,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.656529664993286, timestamp=1678838216.3901742) [('tile_f', [-1, 4, 2, 8]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 2, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2714505
+ No: 4 GFLOPS: 58.24/58.24 result: MeasureResult(costs=(0.003975269448275862,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.8088016510009766, timestamp=1678838218.727577) [('tile_f', [-1, 2, 1, 16]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 1, 8]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2614205
+ No: 5 GFLOPS: 0.00/58.24 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -636,8 +638,8 @@ for this template
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 2, 8]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 256]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,186484
- No: 4 GFLOPS: 0.00/0.00 result: Traceback (most recent call last):
+ tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 16, 16, 1]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 128, 2]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9742958
+ No: 6 GFLOPS: 0.00/58.24 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -759,501 +761,161 @@ for this template
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 64, 4, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 2, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1260458
- No: 5 GFLOPS: 25.12/25.12 result: MeasureResult(costs=(0.009214138363636363,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.087787389755249, timestamp=1678816859.364998) [('tile_f', [-1, 8, 2, 1]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 1, 16]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,5153733
- No: 6 GFLOPS: 0.00/25.12 result: Traceback (most recent call last):
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
- func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
- func = build(s, args, target=target, runtime=runtime)
- File "/workspace/python/tvm/driver/build_module.py", line 227, in build
- input_mod = lower(inputs, args, name=name, binds=binds)
- File "/workspace/python/tvm/driver/build_module.py", line 134, in lower
- return ffi.lower_schedule(inp, args, name, binds, simple_mode)
+ tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 2, 256]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 16, 8]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6307618
+ No: 7 GFLOPS: 0.00/58.24 result: Traceback (most recent call last):
+ File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 742, in __call__
+ yield remote, remote.load_module(os.path.split(build_result.filename)[1])
+ File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 706, in run_through_rpc
+ costs = time_f(*args).results
+ File "/workspace/python/tvm/runtime/module.py", line 357, in evaluator
+ blob = feval(*args)
File "tvm/_ffi/_cython/./packed_func.pxi", line 331, in tvm._ffi._cy3.core.PackedFuncBase.__call__
- File "tvm/_ffi/_cython/./packed_func.pxi", line 276, in tvm._ffi._cy3.core.FuncCall
+ File "tvm/_ffi/_cython/./packed_func.pxi", line 262, in tvm._ffi._cy3.core.FuncCall
+ File "tvm/_ffi/_cython/./packed_func.pxi", line 251, in tvm._ffi._cy3.core.FuncCall3
File "tvm/_ffi/_cython/./base.pxi", line 181, in tvm._ffi._cy3.core.CHECK_CALL
tvm._ffi.base.TVMError: Traceback (most recent call last):
- 24: TVMFuncCall
+ 4: TVMFuncCall
at ../src/runtime/c_runtime_api.cc:477
- 23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
- at ../include/tvm/runtime/packed_func.h:1217
- 22: Call
- at ../include/tvm/runtime/packed_func.h:1213
- 21: operator()
- at ../include/tvm/runtime/packed_func.h:1734
- 20: unpack_call<tvm::IRModule, 5, tvm::<lambda(tvm::te::Schedule, const tvm::runtime::Array<tvm::runtime::ObjectRef>&, const tvm::runtime::String&, const tvm::runtime::Map<tvm::te::Tensor, tvm::tir::Buffer>&, bool)> >
- at ../include/tvm/runtime/packed_func.h:1674
- 19: run<>
- at ../include/tvm/runtime/packed_func.h:1634
- 18: run<tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 17: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 16: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 15: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 14: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1649
- 13: operator()
- at ../src/driver/driver_api.cc:402
- 12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array<tvm::runtime::ObjectRef, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<tvm::te::Tensor, tvm::tir::Buffer, std::hash<tvm::te::Tensor>, std::equal_to<tvm::te::Tensor>, std::allocator<std::pair<tvm::te::Tensor const, tvm::tir::Buffer> > > const&, tvm::GlobalVarSupply, bool)
- at ../src/driver/driver_api.cc:388
- 11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array<tvm::transform::Pass, void>)
- at ../src/driver/driver_api.cc:283
- 10: tvm::transform::Pass::operator()(tvm::IRModule) const
- at ../src/ir/transform.cc:258
- 9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:451
- 7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/tir/ir/transform.cc:100
- 5: tvm::runtime::TypedPackedFunc<tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)>::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
- at ../include/tvm/runtime/packed_func.h:1753
- 4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher<tvm::tir::PrimFunc>::run<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::runtime::PackedFunc const&, tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&)
- at ../include/tvm/runtime/packed_func.h:1697
- 3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&) const
- at ../include/tvm/runtime/packed_func.h:1621
- 2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
- at ../include/tvm/runtime/packed_func.h:1217
- 1: Call
- at ../include/tvm/runtime/packed_func.h:1213
- 0: operator()
- at ../src/runtime/c_runtime_api.cc:534
- File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
- raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel
+ 3: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+ at ../include/tvm/runtime/packed_func.h:1217
+ 2: tvm::runtime::RPCWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+ at ../src/runtime/rpc/rpc_module.cc:129
+ 1: tvm::runtime::RPCClientSession::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)> const&)
+ at ../src/runtime/rpc/rpc_endpoint.cc:1012
+ 0: tvm::runtime::RPCEndpoint::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)>)
+ at ../src/runtime/rpc/rpc_endpoint.cc:804
+ File "../src/runtime/rpc/rpc_endpoint.cc", line 804
+ TVMError:
+ ---------------------------------------------------------------
+ An error occurred during the execution of TVM.
+ For more information, please see: https://tvm.apache.org/docs/errors.html
+ ---------------------------------------------------------------
+ Check failed: (code == RPCCode::kReturn) is false: code=kShutdown
+
+ During handling of the above exception, another exception occurred:
Traceback (most recent call last):
- 24: TVMFuncCall
- at ../src/runtime/c_runtime_api.cc:477
- 23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
- at ../include/tvm/runtime/packed_func.h:1217
- 22: Call
- at ../include/tvm/runtime/packed_func.h:1213
- 21: operator()
- at ../include/tvm/runtime/packed_func.h:1734
- 20: unpack_call<tvm::IRModule, 5, tvm::<lambda(tvm::te::Schedule, const tvm::runtime::Array<tvm::runtime::ObjectRef>&, const tvm::runtime::String&, const tvm::runtime::Map<tvm::te::Tensor, tvm::tir::Buffer>&, bool)> >
- at ../include/tvm/runtime/packed_func.h:1674
- 19: run<>
- at ../include/tvm/runtime/packed_func.h:1634
- 18: run<tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 17: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 16: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 15: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 14: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1649
- 13: operator()
- at ../src/driver/driver_api.cc:402
- 12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array<tvm::runtime::ObjectRef, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<tvm::te::Tensor, tvm::tir::Buffer, std::hash<tvm::te::Tensor>, std::equal_to<tvm::te::Tensor>, std::allocator<std::pair<tvm::te::Tensor const, tvm::tir::Buffer> > > const&, tvm::GlobalVarSupply, bool)
- at ../src/driver/driver_api.cc:388
- 11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array<tvm::transform::Pass, void>)
- at ../src/driver/driver_api.cc:283
- 10: tvm::transform::Pass::operator()(tvm::IRModule) const
- at ../src/ir/transform.cc:258
- 9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:451
- 7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/tir/ir/transform.cc:100
- 5: tvm::runtime::TypedPackedFunc<tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)>::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
- at ../include/tvm/runtime/packed_func.h:1753
- 4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher<tvm::tir::PrimFunc>::run<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::runtime::PackedFunc const&, tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&)
- at ../include/tvm/runtime/packed_func.h:1697
- 3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&) const
- at ../include/tvm/runtime/packed_func.h:1621
- 2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
- at ../include/tvm/runtime/packed_func.h:1217
- 1: Call
- at ../include/tvm/runtime/packed_func.h:1213
- 0: operator()
- at ../src/runtime/c_runtime_api.cc:534
- File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
- raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 16, 4, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 2, 16]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7869863
- No: 7 GFLOPS: 0.00/25.12 result: Traceback (most recent call last):
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
- func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
- func = build(s, args, target=target, runtime=runtime)
- File "/workspace/python/tvm/driver/build_module.py", line 227, in build
- input_mod = lower(inputs, args, name=name, binds=binds)
- File "/workspace/python/tvm/driver/build_module.py", line 134, in lower
- return ffi.lower_schedule(inp, args, name, binds, simple_mode)
- File "tvm/_ffi/_cython/./packed_func.pxi", line 331, in tvm._ffi._cy3.core.PackedFuncBase.__call__
- File "tvm/_ffi/_cython/./packed_func.pxi", line 276, in tvm._ffi._cy3.core.FuncCall
- File "tvm/_ffi/_cython/./base.pxi", line 181, in tvm._ffi._cy3.core.CHECK_CALL
+ File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 706, in run_through_rpc
+ costs = time_f(*args).results
+ File "/usr/lib/python3.7/contextlib.py", line 130, in __exit__
+ self.gen.throw(type, value, traceback)
+ File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 746, in __call__
+ remote.remove(build_result.filename)
+ File "/workspace/python/tvm/rpc/client.py", line 144, in remove
+ self._remote_funcs["remove"] = self.get_function("tvm.rpc.server.remove")
+ File "/workspace/python/tvm/rpc/client.py", line 72, in get_function
+ return self._sess.get_function(name)
+ File "/workspace/python/tvm/runtime/module.py", line 171, in get_function
+ self.handle, c_str(name), ctypes.c_int(query_imports), ctypes.byref(ret_handle)
+ File "/workspace/python/tvm/_ffi/base.py", line 348, in check_call
+ raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
- 24: TVMFuncCall
- at ../src/runtime/c_runtime_api.cc:477
- 23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
- at ../include/tvm/runtime/packed_func.h:1217
- 22: Call
- at ../include/tvm/runtime/packed_func.h:1213
- 21: operator()
- at ../include/tvm/runtime/packed_func.h:1734
- 20: unpack_call<tvm::IRModule, 5, tvm::<lambda(tvm::te::Schedule, const tvm::runtime::Array<tvm::runtime::ObjectRef>&, const tvm::runtime::String&, const tvm::runtime::Map<tvm::te::Tensor, tvm::tir::Buffer>&, bool)> >
- at ../include/tvm/runtime/packed_func.h:1674
- 19: run<>
- at ../include/tvm/runtime/packed_func.h:1634
- 18: run<tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 17: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 16: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 15: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 14: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1649
- 13: operator()
- at ../src/driver/driver_api.cc:402
- 12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array<tvm::runtime::ObjectRef, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<tvm::te::Tensor, tvm::tir::Buffer, std::hash<tvm::te::Tensor>, std::equal_to<tvm::te::Tensor>, std::allocator<std::pair<tvm::te::Tensor const, tvm::tir::Buffer> > > const&, tvm::GlobalVarSupply, bool)
- at ../src/driver/driver_api.cc:388
- 11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array<tvm::transform::Pass, void>)
- at ../src/driver/driver_api.cc:283
- 10: tvm::transform::Pass::operator()(tvm::IRModule) const
- at ../src/ir/transform.cc:258
- 9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:451
- 7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/tir/ir/transform.cc:100
- 5: tvm::runtime::TypedPackedFunc<tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)>::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
- at ../include/tvm/runtime/packed_func.h:1753
- 4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher<tvm::tir::PrimFunc>::run<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::runtime::PackedFunc const&, tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&)
- at ../include/tvm/runtime/packed_func.h:1697
- 3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&) const
+ 52: 0xffffffffffffffff
+ 51: _start
+ 50: __libc_start_main
+ 49: _Py_UnixMain
+ 48: 0x0000000000650da0
+ 47: 0x0000000000650afa
+ 46: _PyFunction_FastCallDict
+ 45: _PyEval_EvalCodeWithName
+ 44: _PyEval_EvalFrameDefault
+ 43: _PyFunction_FastCallKeywords
+ 42: _PyEval_EvalCodeWithName
+ 41: _PyEval_EvalFrameDefault
+ 40: _PyMethodDef_RawFastCallKeywords
+ 39: 0x0000000000546369
+ 38: _PyEval_EvalCodeWithName
+ 37: _PyEval_EvalFrameDefault
+ 36: _PyFunction_FastCallKeywords
+ 35: _PyEval_EvalCodeWithName
+ 34: _PyEval_EvalFrameDefault
+ 33: _PyFunction_FastCallDict
+ 32: _PyEval_EvalCodeWithName
+ 31: _PyEval_EvalFrameDefault
+ 30: _PyObject_FastCallDict
+ 29: 0x00000000004c06e1
+ 28: _PyFunction_FastCallDict
+ 27: _PyEval_EvalFrameDefault
+ 26: _PyMethodDescr_FastCallKeywords
+ 25: 0x00000000005dcb58
+ 24: 0x00000000005dc83f
+ 23: 0x00000000004ba127
+ 22: _PyEval_EvalFrameDefault
+ 21: _PyFunction_FastCallKeywords
+ 20: _PyEval_EvalFrameDefault
+ 19: _PyFunction_FastCallKeywords
+ 18: _PyEval_EvalFrameDefault
+ 17: _PyFunction_FastCallKeywords
+ 16: _PyEval_EvalCodeWithName
+ 15: _PyEval_EvalFrameDefault
+ 14: 0x0000000000537c30
+ 13: _PyObject_FastCallKeywords
+ 12: 0x00007f670fd0bfa2
+ 11: _ctypes_callproc
+ 10: ffi_call
+ 9: ffi_call_unix64
+ 8: TVMModGetFunction
+ at ../src/runtime/c_runtime_api.cc:408
+ 7: tvm::runtime::ModuleNode::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool)
+ at ../src/runtime/module.cc:66
+ 6: tvm::runtime::RPCModuleNode::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)
+ at ../src/runtime/rpc/rpc_module.cc:185
+ 5: tvm::runtime::RPCClientSession::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
+ at ../src/runtime/rpc/rpc_endpoint.cc:1007
+ 4: tvm::runtime::TVMRetValue tvm::runtime::RPCEndpoint::SysCallRemote<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>(tvm::runtime::RPCCode, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
+ at ../src/runtime/rpc/rpc_endpoint.h:223
+ 3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>(int&&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
at ../include/tvm/runtime/packed_func.h:1621
2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
at ../include/tvm/runtime/packed_func.h:1217
1: Call
at ../include/tvm/runtime/packed_func.h:1213
0: operator()
- at ../src/runtime/c_runtime_api.cc:534
- File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
- raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel
+ at ../src/runtime/rpc/rpc_endpoint.cc:684
+ File "../src/runtime/rpc/rpc_endpoint.cc", line 684
+ TVMError:
+ ---------------------------------------------------------------
+ An error occurred during the execution of TVM.
+ For more information, please see: https://tvm.apache.org/docs/errors.html
+ ---------------------------------------------------------------
+ Check failed: (code == RPCCode::kReturn) is false: code=1
Traceback (most recent call last):
- 24: TVMFuncCall
- at ../src/runtime/c_runtime_api.cc:477
- 23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
- at ../include/tvm/runtime/packed_func.h:1217
- 22: Call
- at ../include/tvm/runtime/packed_func.h:1213
- 21: operator()
- at ../include/tvm/runtime/packed_func.h:1734
- 20: unpack_call<tvm::IRModule, 5, tvm::<lambda(tvm::te::Schedule, const tvm::runtime::Array<tvm::runtime::ObjectRef>&, const tvm::runtime::String&, const tvm::runtime::Map<tvm::te::Tensor, tvm::tir::Buffer>&, bool)> >
- at ../include/tvm/runtime/packed_func.h:1674
- 19: run<>
- at ../include/tvm/runtime/packed_func.h:1634
- 18: run<tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 17: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 16: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 15: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 14: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1649
- 13: operator()
- at ../src/driver/driver_api.cc:402
- 12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array<tvm::runtime::ObjectRef, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<tvm::te::Tensor, tvm::tir::Buffer, std::hash<tvm::te::Tensor>, std::equal_to<tvm::te::Tensor>, std::allocator<std::pair<tvm::te::Tensor const, tvm::tir::Buffer> > > const&, tvm::GlobalVarSupply, bool)
- at ../src/driver/driver_api.cc:388
- 11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array<tvm::transform::Pass, void>)
- at ../src/driver/driver_api.cc:283
- 10: tvm::transform::Pass::operator()(tvm::IRModule) const
- at ../src/ir/transform.cc:258
- 9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:451
- 7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/tir/ir/transform.cc:100
- 5: tvm::runtime::TypedPackedFunc<tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)>::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
- at ../include/tvm/runtime/packed_func.h:1753
- 4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher<tvm::tir::PrimFunc>::run<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::runtime::PackedFunc const&, tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&)
- at ../include/tvm/runtime/packed_func.h:1697
- 3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&) const
- at ../include/tvm/runtime/packed_func.h:1621
- 2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
- at ../include/tvm/runtime/packed_func.h:1217
- 1: Call
- at ../include/tvm/runtime/packed_func.h:1213
- 0: operator()
- at ../src/runtime/c_runtime_api.cc:534
- File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
- raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 4, 64]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 4, 4]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,8011727
- No: 8 GFLOPS: 0.00/25.12 result: Traceback (most recent call last):
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
- func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
- func = build(s, args, target=target, runtime=runtime)
- File "/workspace/python/tvm/driver/build_module.py", line 227, in build
- input_mod = lower(inputs, args, name=name, binds=binds)
- File "/workspace/python/tvm/driver/build_module.py", line 134, in lower
- return ffi.lower_schedule(inp, args, name, binds, simple_mode)
- File "tvm/_ffi/_cython/./packed_func.pxi", line 331, in tvm._ffi._cy3.core.PackedFuncBase.__call__
- File "tvm/_ffi/_cython/./packed_func.pxi", line 276, in tvm._ffi._cy3.core.FuncCall
- File "tvm/_ffi/_cython/./base.pxi", line 181, in tvm._ffi._cy3.core.CHECK_CALL
- tvm._ffi.base.TVMError: Traceback (most recent call last):
- 24: TVMFuncCall
- at ../src/runtime/c_runtime_api.cc:477
- 23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
- at ../include/tvm/runtime/packed_func.h:1217
- 22: Call
- at ../include/tvm/runtime/packed_func.h:1213
- 21: operator()
- at ../include/tvm/runtime/packed_func.h:1734
- 20: unpack_call<tvm::IRModule, 5, tvm::<lambda(tvm::te::Schedule, const tvm::runtime::Array<tvm::runtime::ObjectRef>&, const tvm::runtime::String&, const tvm::runtime::Map<tvm::te::Tensor, tvm::tir::Buffer>&, bool)> >
- at ../include/tvm/runtime/packed_func.h:1674
- 19: run<>
- at ../include/tvm/runtime/packed_func.h:1634
- 18: run<tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 17: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 16: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 15: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 14: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1649
- 13: operator()
- at ../src/driver/driver_api.cc:402
- 12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array<tvm::runtime::ObjectRef, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<tvm::te::Tensor, tvm::tir::Buffer, std::hash<tvm::te::Tensor>, std::equal_to<tvm::te::Tensor>, std::allocator<std::pair<tvm::te::Tensor const, tvm::tir::Buffer> > > const&, tvm::GlobalVarSupply, bool)
- at ../src/driver/driver_api.cc:388
- 11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array<tvm::transform::Pass, void>)
- at ../src/driver/driver_api.cc:283
- 10: tvm::transform::Pass::operator()(tvm::IRModule) const
- at ../src/ir/transform.cc:258
- 9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:451
- 7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/tir/ir/transform.cc:100
- 5: tvm::runtime::TypedPackedFunc<tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)>::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
- at ../include/tvm/runtime/packed_func.h:1753
- 4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher<tvm::tir::PrimFunc>::run<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::runtime::PackedFunc const&, tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&)
- at ../include/tvm/runtime/packed_func.h:1697
- 3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&) const
- at ../include/tvm/runtime/packed_func.h:1621
- 2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
- at ../include/tvm/runtime/packed_func.h:1217
- 1: Call
- at ../include/tvm/runtime/packed_func.h:1213
- 0: operator()
- at ../src/runtime/c_runtime_api.cc:534
- File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
- raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel
-
- Traceback (most recent call last):
- File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
- raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 4, 16, 8]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 128, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,413980
- No: 9 GFLOPS: 0.00/25.12 result: Traceback (most recent call last):
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
- func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
- func = build(s, args, target=target, runtime=runtime)
- File "/workspace/python/tvm/driver/build_module.py", line 227, in build
- input_mod = lower(inputs, args, name=name, binds=binds)
- File "/workspace/python/tvm/driver/build_module.py", line 134, in lower
- return ffi.lower_schedule(inp, args, name, binds, simple_mode)
- File "tvm/_ffi/_cython/./packed_func.pxi", line 331, in tvm._ffi._cy3.core.PackedFuncBase.__call__
- File "tvm/_ffi/_cython/./packed_func.pxi", line 276, in tvm._ffi._cy3.core.FuncCall
- File "tvm/_ffi/_cython/./base.pxi", line 181, in tvm._ffi._cy3.core.CHECK_CALL
- tvm._ffi.base.TVMError: Traceback (most recent call last):
- File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
- raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel
-
- Traceback (most recent call last):
- File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
- raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 16, 8, 2]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 8, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,8532123
- No: 10 GFLOPS: 0.00/25.12 result: Traceback (most recent call last):
+ 52: 0xffffffffffffffff
+ 51: _start
+ 50: __libc_start_main
+ 49: _Py_UnixMain
+ 48: 0x0000000000650da0
+ 47: 0x0000000000650afa
+ 46: _PyFunction_FastCallDict
+ 45: _PyEval_EvalCodeWithName
+ 44: _PyEval_EvalFrameDefault
+ 43: _PyFunction_FastCallKeywords
+ 42: _PyEval_EvalCodeWithName
+ 41: _PyEval_EvalFrameDefault
+ 40: _PyMethodDef_RawFastCallKeywords
+ 39: 0x0000000000546369
+ 38: _PyEval_EvalCodeWithName
+ 37: _PyEval_EvalFrameDefault
+ 36: _PyFunction_FastCallKeywords
+ 35: _PyEval_EvalCodeWithName
+ 34: _PyEval_EvalFrameDefault
+ 33: _PyFunction_FastCallDict
+ 32: _PyEval_EvalCodeWithName
+ 31: _PyEval_EvalFrameDefault
+ 30: _PyObject_FastCallDict
+ 29: 0x00000000004c06e1
+ 28: _PyFunction_FastCallDict
+ 27: _PyEval_EvalFrameDefault
+ 26: _PyMethodDescr_FastCallKeywords
+ 25: 0x00000000005dcb58
+ 24: 0x00000000005dc83f
+ 23: 0x00000000004ba127
+ 22: _PyEval_EvalFrameDefault
+ 21: _PyFunction_FastCallKeywords
+ 20: _PyEval_EvalFrameDefault
+ 19: _PyFunction_FastCall [('tile_f', [-1, 16, 1, 8]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1259640
+ No: 8 GFLOPS: 0.00/58.24 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -1375,8 +1037,8 @@ for this template
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 4, 16, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 64, 4]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9769187
- No: 11 GFLOPS: 0.00/25.12 result: Traceback (most recent call last):
+ tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 4, 1, 8]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 16, 16]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7686938
+ No: 9 GFLOPS: 0.00/58.24 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -1498,8 +1160,9 @@ for this template
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 1, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 128, 2]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,8192195
- No: 12 GFLOPS: 0.00/25.12 result: Traceback (most recent call last):
+ tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 64, 2, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 16, 16]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,8846050
+ No: 10 GFLOPS: 0.86/58.24 result: MeasureResult(costs=(0.26825750725,), error_no=MeasureErrorNo.NO_ERROR, all_cost=5.713873624801636, timestamp=1678838232.9064837) [('tile_f', [-1, 64, 2, 2]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 4]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,5297450
+ No: 11 GFLOPS: 0.00/58.24 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -1621,9 +1284,10 @@ for this template
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 8, 2]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 16, 16]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1491239
- No: 13 GFLOPS: 273.35/273.35 result: MeasureResult(costs=(0.0008468909682539682,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.3628065586090088, timestamp=1678816861.156233) [('tile_f', [-1, 4, 2, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 8, 1]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,206592
- No: 14 GFLOPS: 0.00/273.35 result: Traceback (most recent call last):
+ tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 8, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 4, 32]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,8279340
+ No: 12 GFLOPS: 160.00/160.00 result: MeasureResult(costs=(0.0014468562027027028,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.133188247680664, timestamp=1678838233.686128) [('tile_f', [-1, 1, 8, 1]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 64, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,21367
+ No: 13 GFLOPS: 8.73/160.00 result: MeasureResult(costs=(0.026505223999999997,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.9389443397521973, timestamp=1678838237.7996368) [('tile_f', [-1, 8, 2, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 16, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3889393
+ No: 14 GFLOPS: 0.00/160.00 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -1745,8 +1409,9 @@ for this template
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 8, 4, 16]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 8, 32]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,10219178
- No: 15 GFLOPS: 0.00/273.35 result: Traceback (most recent call last):
+ tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 4, 4]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 128, 2]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4514515
+ No: 15 GFLOPS: 765.53/765.53 result: MeasureResult(costs=(0.00030240633888888887,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.8589601516723633, timestamp=1678838238.8367987) [('tile_f', [-1, 2, 2, 1]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 8, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7950151
+ No: 16 GFLOPS: 0.00/765.53 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -1868,8 +1533,8 @@ for this template
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 512, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 16, 32]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4026934
- No: 16 GFLOPS: 0.00/273.35 result: Traceback (most recent call last):
+ tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 1, 512]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 4, 32]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4408799
+ No: 17 GFLOPS: 0.00/765.53 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -1991,9 +1656,8 @@ for this template
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 32, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 512, 1]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6617031
- No: 17 GFLOPS: 121.12/273.35 result: MeasureResult(costs=(0.0019113225283018869,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.4811546802520752, timestamp=1678816862.8299277) [('tile_f', [-1, 2, 1, 1]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 2, 16]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2835801
- No: 18 GFLOPS: 0.00/273.35 result: Traceback (most recent call last):
+ tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 8, 2, 8]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 64, 8]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1667746
+ No: 18 GFLOPS: 0.00/765.53 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -2115,8 +1779,8 @@ for this template
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 16, 4, 8]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 128, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2735413
- No: 19 GFLOPS: 0.00/273.35 result: Traceback (most recent call last):
+ tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 1, 256]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 128, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7963557
+ No: 19 GFLOPS: 0.00/765.53 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -2238,8 +1902,8 @@ for this template
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 32, 2]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 2, 256]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9094891
- No: 20 GFLOPS: 0.00/273.35 result: Traceback (most recent call last):
+ tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 4, 32]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 4, 128]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7731434
+ No: 20 GFLOPS: 0.00/765.53 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -2361,7 +2025,7 @@ for this template
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
- tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 8, 2, 16]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 8, 32]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4217353
+ tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 4, 16]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 4, 32]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9443896
@@ -2416,9 +2080,9 @@ and measure running time.
Finish loading 20 records
Best config:
- [('tile_f', [-1, 4, 2, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 8, 1]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,206592
+ [('tile_f', [-1, 2, 2, 1]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 8, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7950151
Finish loading 20 records
- Time cost of this operator: 0.001257
+ Time cost of this operator: 0.000597
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
index 873669dfcb..7f3d30617c 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
@@ -360,10 +360,10 @@ Timing the untuned program
########## Build without Autotuning ##########
Node Name Ops Time(us) Time(%) Shape Inputs Outputs Measurements(us)
--------- --- -------- ------- ----- ------ ------- ----------------
- tvmgen_default_fused_nn_contrib_conv2d_NCHWc tvmgen_default_fused_nn_contrib_conv2d_NCHWc 317.7 98.752 (1, 2, 10, 10, 3) 2 1 [317.7]
- tvmgen_default_fused_layout_transform_1 tvmgen_default_fused_layout_transform_1 3.048 0.947 (1, 6, 10, 10) 1 1 [3.048]
- tvmgen_default_fused_layout_transform tvmgen_default_fused_layout_transform 0.966 0.3 (1, 1, 10, 10, 3) 1 1 [0.966]
- Total_time - 321.714 - - - - -
+ tvmgen_default_fused_nn_contrib_conv2d_NCHWc tvmgen_default_fused_nn_contrib_conv2d_NCHWc 315.7 98.73 (1, 2, 10, 10, 3) 2 1 [315.7]
+ tvmgen_default_fused_layout_transform_1 tvmgen_default_fused_layout_transform_1 3.094 0.968 (1, 6, 10, 10) 1 1 [3.094]
+ tvmgen_default_fused_layout_transform tvmgen_default_fused_layout_transform 0.967 0.303 (1, 1, 10, 10, 3) 1 1 [0.967]
+ Total_time - 319.762 - - - - -
@@ -428,10 +428,10 @@ Timing the tuned program
########## Build with Autotuning ##########
Node Name Ops Time(us) Time(%) Shape Inputs Outputs Measurements(us)
--------- --- -------- ------- ----- ------ ------- ----------------
- tvmgen_default_fused_nn_contrib_conv2d_NCHWc tvmgen_default_fused_nn_contrib_conv2d_NCHWc 136.3 98.043 (1, 6, 10, 10, 1) 2 1 [136.3]
- tvmgen_default_fused_layout_transform_1 tvmgen_default_fused_layout_transform_1 1.763 1.268 (1, 6, 10, 10) 1 1 [1.763]
- tvmgen_default_fused_layout_transform tvmgen_default_fused_layout_transform 0.957 0.688 (1, 1, 10, 10, 3) 1 1 [0.957]
- Total_time - 139.02 - - - - -
+ tvmgen_default_fused_nn_contrib_conv2d_NCHWc tvmgen_default_fused_nn_contrib_conv2d_NCHWc 101.3 97.386 (1, 6, 10, 10, 1) 2 1 [101.3]
+ tvmgen_default_fused_layout_transform_1 tvmgen_default_fused_layout_transform_1 1.765 1.697 (1, 6, 10, 10) 1 1 [1.765]
+ tvmgen_default_fused_layout_transform tvmgen_default_fused_layout_transform 0.954 0.917 (1, 1, 10, 10, 3) 1 1 [0.954]
+ Total_time - 104.019 - - - - -
@@ -439,7 +439,7 @@ Timing the tuned program
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 22.178 seconds)
+ **Total running time of the script:** ( 1 minutes 24.008 seconds)
.. _sphx_glr_download_how_to_work_with_microtvm_micro_autotune.py:
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
index 7514bf753a..1340e595dc 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_pytorch.rst.txt
@@ -118,7 +118,7 @@ download a cat image and preprocess it to use as the model input.
/venv/apache-tvm-py3.7/lib/python3.7/site-packages/torch/ao/quantization/utils.py:281: UserWarning: must run observer before calling calculate_qparams. Returning default values.
"must run observer before calling calculate_qparams. " +
Downloading: "https://download.pytorch.org/models/quantized/mobilenet_v2_qnnpack_37f702c5.pth" to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2_qnnpack_37f702c5.pth
-
0%| | 0.00/3.42M [00:00<?, ?B/s]
61%|###### | 2.09M/3.42M [00:00<00:00, 17.2MB/s]
100%|##########| 3.42M/3.42M [00:00<00:00, 26.3MB/s]
+
0%| | 0.00/3.42M [00:00<?, ?B/s]
61%|###### | 2.09M/3.42M [00:00<00:00, 19.7MB/s]
100%|##########| 3.42M/3.42M [00:00<00:00, 30.6MB/s]
/workspace/python/tvm/relay/frontend/pytorch_utils.py:47: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
return LooseVersion(torch_ver) > ver
/venv/apache-tvm-py3.7/lib/python3.7/site-packages/setuptools/_distutils/version.py:346: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
@@ -324,7 +324,7 @@ Look up prediction top 1 index in 1000 class synset.
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 18.102 seconds)
+ **Total running time of the script:** ( 1 minutes 21.449 seconds)
.. _sphx_glr_download_how_to_work_with_microtvm_micro_pytorch.py:
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
index 16dfc2df27..4cccca7726 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_train.rst.txt
@@ -218,7 +218,7 @@ take about **2 minutes** to download the Stanford Cars, while COCO 2017 validati
.. code-block:: none
- '/tmp/tmpr78_oloq/images/random'
+ '/tmp/tmp_w894do9/images/random'
@@ -309,7 +309,7 @@ objects to other stuff? We can display some examples from our datasets using ``m
.. image-sg:: /how_to/work_with_microtvm/images/sphx_glr_micro_train_001.png
- :alt: [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [1.0, 0.0]
+ :alt: [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [1.0, 0.0]
:srcset: /how_to/work_with_microtvm/images/sphx_glr_micro_train_001.png
:class: sphx-glr-single-img
@@ -318,8 +318,8 @@ objects to other stuff? We can display some examples from our datasets using ``m
.. code-block:: none
- /tmp/tmpr78_oloq/images/target contains 8144 images
- /tmp/tmpr78_oloq/images/random contains 5000 images
+ /tmp/tmp_w894do9/images/target contains 8144 images
+ /tmp/tmp_w894do9/images/random contains 5000 images
@@ -494,13 +494,13 @@ the time on our validation set).
.. code-block:: none
Epoch 1/3
- 328/328 - 48s - loss: 0.2333 - accuracy: 0.9183 - val_loss: 0.1236 - val_accuracy: 0.9592 - 48s/epoch - 145ms/step
+ 328/328 - 48s - loss: 0.2109 - accuracy: 0.9269 - val_loss: 0.1022 - val_accuracy: 0.9641 - 48s/epoch - 148ms/step
Epoch 2/3
- 328/328 - 43s - loss: 0.0977 - accuracy: 0.9638 - val_loss: 0.1380 - val_accuracy: 0.9517 - 43s/epoch - 132ms/step
+ 328/328 - 44s - loss: 0.0939 - accuracy: 0.9648 - val_loss: 0.0982 - val_accuracy: 0.9641 - 44s/epoch - 134ms/step
Epoch 3/3
- 328/328 - 43s - loss: 0.0666 - accuracy: 0.9756 - val_loss: 0.1027 - val_accuracy: 0.9622 - 43s/epoch - 132ms/step
+ 328/328 - 44s - loss: 0.0745 - accuracy: 0.9721 - val_loss: 0.1042 - val_accuracy: 0.9656 - 44s/epoch - 134ms/step
- <keras.callbacks.History object at 0x7f915a584d50>
+ <keras.callbacks.History object at 0x7fdd8856d6d0>
@@ -861,7 +861,7 @@ Arduino tutorial for how to do that `on GitHub <https://github.com/guberti/tvm-a
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 4 minutes 43.389 seconds)
+ **Total running time of the script:** ( 5 minutes 5.128 seconds)
.. _sphx_glr_download_how_to_work_with_microtvm_micro_train.py:
diff --git a/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
index e037cf4352..b79d02f105 100644
--- a/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
@@ -5,20 +5,20 @@
Computation times
=================
-**07:49.439** total execution time for **how_to_work_with_microtvm** files:
+**08:17.309** total execution time for **how_to_work_with_microtvm** files:
+-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_train.py` (``micro_train.py``) | 04:43.389 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_train.py` (``micro_train.py``) | 05:05.128 | 0.0 MB |
+-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_autotune.py` (``micro_autotune.py``) | 01:22.178 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_autotune.py` (``micro_autotune.py``) | 01:24.008 | 0.0 MB |
+-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_pytorch.py` (``micro_pytorch.py``) | 01:18.102 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_pytorch.py` (``micro_pytorch.py``) | 01:21.449 | 0.0 MB |
+-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_aot.py` (``micro_aot.py``) | 00:10.188 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_aot.py` (``micro_aot.py``) | 00:10.568 | 0.0 MB |
+-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_custom_ide.py` (``micro_custom_ide.py``) | 00:08.224 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_custom_ide.py` (``micro_custom_ide.py``) | 00:08.380 | 0.0 MB |
+-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_microtvm_micro_tflite.py` (``micro_tflite.py``) | 00:07.359 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_microtvm_micro_tflite.py` (``micro_tflite.py``) | 00:07.776 | 0.0 MB |
+-----------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_how_to_work_with_microtvm_micro_ethosu.py` (``micro_ethosu.py``) | 00:00.000 | 0.0 MB |
+-----------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
index 6ad3b635d3..975a4647cc 100644
--- a/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
@@ -5,14 +5,14 @@
Computation times
=================
-**00:45.897** total execution time for **how_to_work_with_relay** files:
+**00:47.014** total execution time for **how_to_work_with_relay** files:
+----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_using_pipeline_executor.py` (``using_pipeline_executor.py``) | 00:33.664 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_using_pipeline_executor.py` (``using_pipeline_executor.py``) | 00:34.614 | 0.0 MB |
+----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_using_external_lib.py` (``using_external_lib.py``) | 00:10.582 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_using_external_lib.py` (``using_external_lib.py``) | 00:10.721 | 0.0 MB |
+----------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_relay_build_gcn.py` (``build_gcn.py``) | 00:01.646 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_relay_build_gcn.py` (``build_gcn.py``) | 00:01.673 | 0.0 MB |
+----------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_how_to_work_with_relay_using_relay_viz.py` (``using_relay_viz.py``) | 00:00.006 | 0.0 MB |
+----------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt b/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
index 3160f11eda..b0abe4133c 100644
--- a/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/intrin_math.rst.txt
@@ -264,7 +264,7 @@ The following example customizes CUDA lowering rule for :code:`exp`.
.. code-block:: none
- <function my_cuda_math_rule at 0x7f900eaaab90>
+ <function my_cuda_math_rule at 0x7fdc323297a0>
diff --git a/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
index 2927f46bda..6a2a4e5e3e 100644
--- a/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
@@ -5,22 +5,22 @@
Computation times
=================
-**00:08.791** total execution time for **how_to_work_with_schedules** files:
+**00:07.455** total execution time for **how_to_work_with_schedules** files:
+------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_intrin_math.py` (``intrin_math.py``) | 00:06.208 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_intrin_math.py` (``intrin_math.py``) | 00:04.846 | 0.0 MB |
+------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tensorize.py` (``tensorize.py``) | 00:01.223 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tensorize.py` (``tensorize.py``) | 00:01.178 | 0.0 MB |
+------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_reduction.py` (``reduction.py``) | 00:00.575 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_reduction.py` (``reduction.py``) | 00:00.605 | 0.0 MB |
+------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_scan.py` (``scan.py``) | 00:00.557 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_scan.py` (``scan.py``) | 00:00.560 | 0.0 MB |
+------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_extern_op.py` (``extern_op.py``) | 00:00.118 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_extern_op.py` (``extern_op.py``) | 00:00.123 | 0.0 MB |
+------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_schedule_primitives.py` (``schedule_primitives.py``) | 00:00.051 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tedd.py` (``tedd.py``) | 00:00.062 | 0.0 MB |
+------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tedd.py` (``tedd.py``) | 00:00.033 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_schedule_primitives.py` (``schedule_primitives.py``) | 00:00.053 | 0.0 MB |
+------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_how_to_work_with_schedules_tuple_inputs.py` (``tuple_inputs.py``) | 00:00.026 | 0.0 MB |
+| :ref:`sphx_glr_how_to_work_with_schedules_tuple_inputs.py` (``tuple_inputs.py``) | 00:00.028 | 0.0 MB |
+------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
index 5ec4bedbbc..9d6b134e6e 100644
--- a/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
Computation times
=================
-**00:31.078** total execution time for **topic_vta_tutorials_autotvm** files:
+**00:32.619** total execution time for **topic_vta_tutorials_autotvm** files:
+---------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``) | 00:31.071 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``) | 00:32.612 | 0.0 MB |
+---------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_alu_vta.py` (``tune_alu_vta.py``) | 00:00.007 | 0.0 MB |
+---------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
index cefa645c1d..d941ac8516 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
@@ -293,7 +293,7 @@ The compilation steps are:
DeprecationWarning,
/workspace/vta/tutorials/frontend/deploy_classification.py:213: DeprecationWarning: legacy graph executor behavior of producing json / lib / params will be removed in the next release. Please see documents of tvm.contrib.graph_executor.GraphModule for the new recommended usage.
relay_prog, target=tvm.target.Target(target, host=env.target_host), params=params
- resnet18_v1 inference graph built in 33.35s!
+ resnet18_v1 inference graph built in 35.67s!
diff --git a/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
index 828a171131..e7bdf4f608 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
@@ -337,7 +337,7 @@ The compilation steps are:
/workspace/python/tvm/relay/build_module.py:348: DeprecationWarning: Please use input parameter mod (tvm.IRModule) instead of deprecated parameter mod (tvm.relay.function.Function)
DeprecationWarning,
- yolov3-tiny inference graph built in 22.88s!
+ yolov3-tiny inference graph built in 23.94s!
diff --git a/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
index 4d4f03bc84..412d333b27 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
Computation times
=================
-**01:39.968** total execution time for **topic_vta_tutorials_frontend** files:
+**01:44.003** total execution time for **topic_vta_tutorials_frontend** files:
+------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``) | 00:50.103 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``) | 00:52.687 | 0.0 MB |
+------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_detection.py` (``deploy_detection.py``) | 00:49.865 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_detection.py` (``deploy_detection.py``) | 00:51.316 | 0.0 MB |
+------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
index 4afc5d1d1a..d57e2c6d94 100644
--- a/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
Computation times
=================
-**00:03.144** total execution time for **topic_vta_tutorials_optimize** files:
+**00:03.185** total execution time for **topic_vta_tutorials_optimize** files:
+--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``) | 00:02.692 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``) | 00:02.702 | 0.0 MB |
+--------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``) | 00:00.451 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``) | 00:00.483 | 0.0 MB |
+--------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
index 2ff69328ae..d9fb62adb6 100644
--- a/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
Computation times
=================
-**00:00.770** total execution time for **topic_vta_tutorials** files:
+**00:00.779** total execution time for **topic_vta_tutorials** files:
+---------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_topic_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``) | 00:00.397 | 0.0 MB |
+| :ref:`sphx_glr_topic_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``) | 00:00.406 | 0.0 MB |
+---------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_topic_vta_tutorials_vta_get_started.py` (``vta_get_started.py``) | 00:00.373 | 0.0 MB |
+---------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt b/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
index 4c2cba99a9..649ba7a890 100644
--- a/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
+++ b/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
@@ -207,6 +207,13 @@ trials, we can load the best schedule from the log file and apply it.
+.. rst-class:: sphx-glr-script-out
+
+ .. code-block:: none
+
+ *E
+
+
@@ -318,7 +325,7 @@ We build the binary and check its correctness and performance.
.. code-block:: none
- Execution time of this operator: 92.532 ms
+ Execution time of this operator: 94.072 ms
@@ -434,7 +441,7 @@ operations.
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 42.784 seconds)
+ **Total running time of the script:** ( 1 minutes 41.846 seconds)
.. _sphx_glr_download_tutorial_auto_scheduler_matmul_x86.py:
diff --git a/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt b/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
index 1642c4f26f..81ceb10d52 100644
--- a/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
+++ b/docs/_sources/tutorial/autotvm_matmul_x86.rst.txt
@@ -454,16 +454,16 @@ reduce variance, we take 5 measurements and average them.
waiting for device...
device available
Get devices for measurement successfully!
- No: 1 GFLOPS: 1.30/1.30 result: MeasureResult(costs=(0.2058989222,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.5218405723571777, timestamp=1678815226.2339067) [('tile_y', [-1, 1]), ('tile_x', [-1, 1])],None,0
- No: 2 GFLOPS: 1.91/1.91 result: MeasureResult(costs=(0.14048785260000002,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.487527847290039, timestamp=1678815228.738748) [('tile_y', [-1, 2]), ('tile_x', [-1, 4])],None,21
- No: 3 GFLOPS: 0.51/1.91 result: MeasureResult(costs=(0.5276659593999999,), error_no=MeasureErrorNo.NO_ERROR, all_cost=8.675540924072266, timestamp=1678815238.684599) [('tile_y', [-1, 128]), ('tile_x', [-1, 1])],None,7
- No: 4 GFLOPS: 11.26/11.26 result: MeasureResult(costs=(0.023837771,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6401183605194092, timestamp=1678815240.5885434) [('tile_y', [-1, 128]), ('tile_x', [-1, 256])],None,87
- No: 5 GFLOPS: 3.00/11.26 result: MeasureResult(costs=(0.08939998660000001,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.696471929550171, timestamp=1678815242.4331748) [('tile_y', [-1, 256]), ('tile_x', [-1, 8])],None,38
- No: 6 GFLOPS: 0.50/11.26 result: MeasureResult(costs=(0.5327600462000001,), error_no=MeasureErrorNo.NO_ERROR, all_cost=8.801893472671509, timestamp=1678815251.2359838) [('tile_y', [-1, 64]), ('tile_x', [-1, 1])],None,6
- No: 7 GFLOPS: 0.90/11.26 result: MeasureResult(costs=(0.29889496439999996,), error_no=MeasureErrorNo.NO_ERROR, all_cost=5.029704809188843, timestamp=1678815257.539202) [('tile_y', [-1, 256]), ('tile_x', [-1, 2])],None,18
- No: 8 GFLOPS: 10.56/11.26 result: MeasureResult(costs=(0.025414149200000003,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6937263011932373, timestamp=1678815258.2078693) [('tile_y', [-1, 512]), ('tile_x', [-1, 128])],None,79
- No: 9 GFLOPS: 12.65/12.65 result: MeasureResult(costs=(0.0212143968,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6358003616333008, timestamp=1678815258.9598224) [('tile_y', [-1, 1]), ('tile_x', [-1, 64])],None,60
- No: 10 GFLOPS: 12.67/12.67 result: MeasureResult(costs=(0.0211942408,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.5751068592071533, timestamp=1678815259.562149) [('tile_y', [-1, 32]), ('tile_x', [-1, 128])],None,75
+ No: 1 GFLOPS: 10.96/10.96 result: MeasureResult(costs=(0.024483266,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.671400785446167, timestamp=1678836561.9711208) [('tile_y', [-1, 64]), ('tile_x', [-1, 256])],None,86
+ No: 2 GFLOPS: 8.64/10.96 result: MeasureResult(costs=(0.031064748600000004,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.8936619758605957, timestamp=1678836564.0434296) [('tile_y', [-1, 16]), ('tile_x', [-1, 64])],None,64
+ No: 3 GFLOPS: 2.20/10.96 result: MeasureResult(costs=(0.12191464739999999,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.2137904167175293, timestamp=1678836567.5936735) [('tile_y', [-1, 16]), ('tile_x', [-1, 2])],None,14
+ No: 4 GFLOPS: 3.94/10.96 result: MeasureResult(costs=(0.06816883659999999,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.3984143733978271, timestamp=1678836568.9491825) [('tile_y', [-1, 64]), ('tile_x', [-1, 16])],None,46
+ No: 5 GFLOPS: 11.48/11.48 result: MeasureResult(costs=(0.023391334399999998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6513710021972656, timestamp=1678836569.7145703) [('tile_y', [-1, 64]), ('tile_x', [-1, 512])],None,96
+ No: 6 GFLOPS: 3.27/11.48 result: MeasureResult(costs=(0.08202559379999999,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.564176321029663, timestamp=1678836571.2857552) [('tile_y', [-1, 32]), ('tile_x', [-1, 8])],None,35
+ No: 7 GFLOPS: 1.52/11.48 result: MeasureResult(costs=(0.1764057646,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.055400848388672, timestamp=1678836575.6935515) [('tile_y', [-1, 1]), ('tile_x', [-1, 1])],None,0
+ No: 8 GFLOPS: 3.93/11.48 result: MeasureResult(costs=(0.0682253636,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.3457090854644775, timestamp=1678836577.0451944) [('tile_y', [-1, 16]), ('tile_x', [-1, 8])],None,34
+ No: 9 GFLOPS: 9.88/11.48 result: MeasureResult(costs=(0.027180288400000002,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6695516109466553, timestamp=1678836577.9386733) [('tile_y', [-1, 1]), ('tile_x', [-1, 512])],None,90
+ No: 10 GFLOPS: 12.48/12.48 result: MeasureResult(costs=(0.0215026136,), error_no=MeasureErrorNo.NO_ERROR, all_cost=0.6879034042358398, timestamp=1678836578.5473447) [('tile_y', [-1, 64]), ('tile_x', [-1, 128])],None,76
diff --git a/docs/_sources/tutorial/autotvm_relay_x86.rst.txt b/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
index 20c1b86e93..8b48baf97f 100644
--- a/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
+++ b/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
@@ -311,7 +311,7 @@ standard deviation.
.. code-block:: none
- {'mean': 517.2460499599993, 'median': 517.6602823499934, 'std': 1.9526973936459062}
+ {'mean': 520.9648976600033, 'median': 521.4993348500059, 'std': 1.9851776540295982}
@@ -545,31 +545,31 @@ the tuning data to.
.. code-block:: none
-
[Task 1/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 1/25] Current/Best: 9.45/ 15.58 GFLOPS | Progress: (4/20) | 9.99 s
[Task 1/25] Current/Best: 23.15/ 23.15 GFLOPS | Progress: (8/20) | 16.22 s
[Task 1/25] Current/Best: 12.59/ 23.15 GFLOPS | Progress: (12/20) | 19.43 s
[Task 1/25] Current/Best: 13.20/ 23.15 GFLOPS | Progress: (16/20) | 23.49 s
[Task 1/25] Current/Best: 11.74/ 23.15 GFLOPS | Progress: (20/20) | 26.31 s Done.
-
[Task 2/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 2/25] Current/Best: 6.92/ 14.50 GFLOPS | Progress: (4/20) | 4.26 s
[Task 2/25] Current/Best: 17.21/ 17.21 GFLOPS | Progress: (8/20) | 5.92 s
[Task 2/25] Current/Best: 6.64/ 18.13 GFLOPS | Progress: (12/20) | 7.80 s
[Task 2/25] Current/Best: 10.83/ 18.13 GFLOPS | Progress: (16/20) | 10.58 s
[Task 2/25] Current/Best: 14.32/ 18.13 GFLOPS | Progress: (20/20) | 11.96 s Done.
-
[Task 3/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 3/25] Current/Best: 15.88/ 16.26 GFLOPS | Progress: (4/20) | 4.87 s
[Task 3/25] Current/Best: 3.11/ 18.53 GFLOPS | Progress: (8/20) | 7.84 s
[Task 3/25] Current/Best: 13.82/ 18.53 GFLOPS | Progress: (12/20) | 11.49 s
[Task 3/25] Current/Best: 10.11/ 19.39 GFLOPS | Progress: (16/20) | 13.62 s
[Task 3/25] Current/Best: 16.34/ 19.39 GFLOPS | Progress: (20/20) | 15.75 s Done.
-
[Task 4/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 4/25] Current/Best: 14.09/ 16.40 GFLOPS | Progress: (4/20) | 5.07 s
[Task 4/25] Current/Best: 20.00/ 20.00 GFLOPS | Progress: (8/20) | 6.99 s
[Task 4/25] Current/Best: 13.22/ 20.00 GFLOPS | Progress: (12/20) | 9.51 s
[Task 4/25] Current/Best: 16.38/ 20.00 GFLOPS | Progress: (16/20) | 11.41 s
[Task 4/25] Current/Best: 13.29/ 20.00 GFLOPS | Progress: (20/20) | 15.76 s Done.
-
[Task 5/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 5/25] Current/Best: 12.62/ 16.77 GFLOPS | Progress: (4/20) | 4.91 s
[Task 5/25] Current/Best: 14.89/ 23.06 GFLOPS | Progress: (8/20) | 7.40 s
[Task 5/25] Current/Best: 3.73/ 23.06 GFLOPS | Progress: (12/20) | 10.08 s
[Task 5/25] Current/Best: 15.16/ 23.06 GFLOPS | Progress: (16/20) | 11.98 s
[Task 5/25] Current/Best: 12.38/ 23.06 GFLOPS | Progress: (20/20) | 14.48 s Done.
-
[Task 6/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 6/25] Current/Best: 9.79/ 19.38 GFLOPS | Progress: (4/20) | 5.70 s
[Task 6/25] Current/Best: 14.46/ 19.38 GFLOPS | Progress: (8/20) | 8.61 s
[Task 6/25] Current/Best: 1.72/ 19.38 GFLOPS | Progress: (12/20) | 12.63 s
[Task 6/25] Current/Best: 8.92/ 19.38 GFLOPS | Progress: (16/20) | 16.20 s
[Task 6/25] Current/Best: 5.74/ 19.38 GFLOPS | Progress: (20/20) | 19.45 s Done.
-
[Task 7/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 7/25] Current/Best: 6.04/ 19.66 GFLOPS | Progress: (4/20) | 4.90 s
[Task 7/25] Current/Best: 8.92/ 19.66 GFLOPS | Progress: (8/20) | 7.61 s
[Task 7/25] Current/Best: 5.44/ 19.66 GFLOPS | Progress: (12/20) | 11.23 s
[Task 7/25] Current/Best: 5.64/ 19.66 GFLOPS | Progress: (16/20) | 13.88 s
[Task 7/25] Current/Best: 11.45/ 19.66 GFLOPS | Progress: (20/20) | 16.96 s Done.
-
[Task 8/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 8/25] Current/Best: 9.02/ 13.75 GFLOPS | Progress: (4/20) | 14.26 s
[Task 8/25] Current/Best: 11.09/ 16.65 GFLOPS | Progress: (8/20) | 25.86 s
[Task 8/25] Current/Best: 12.97/ 16.65 GFLOPS | Progress: (12/20) | 36.22 s
[Task 8/25] Current/Best: 5.18/ 18.28 GFLOPS | Progress: (16/20) | 38.88 s
[Task 8/25] Current/Best: 13.32/ 18.28 GFLOPS | Progress: (20/20) | 43.04 s Done.
-
[Task 9/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 9/25] Current/Best: 9.50/ 12.12 GFLOPS | Progress: (4/20) | 4.71 s
[Task 9/25] Current/Best: 6.74/ 16.62 GFLOPS | Progress: (8/20) | 7.57 s
[Task 9/25] Current/Best: 6.79/ 21.35 GFLOPS | Progress: (12/20) | 9.50 s
[Task 9/25] Current/Best: 20.45/ 21.35 GFLOPS | Progress: (16/20) | 18.17 s
[Task 9/25] Current/Best: 10.09/ 21.35 GFLOPS | Progress: (20/20) | 29.38 s
[Task 10/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 10/25] Current/Best: 17.85/ 18.09 GFLOPS | Progress: (4/20) | 6.16 s
[Task 10/25] Current/Best: 13.58/ 18.09 GFLOPS | Progress: (8/20) | 7.85 s
[Task 10/25] Current/Best: 2.99/ 18.10 GFLOPS | Progress: (12/20) | 9.70 s
[Task 10/25] Current/Best: 13.02/ 18.27 GFLOPS | Progress: (16/20) | 11.76 s
[Task 10/25] Current/Best: 8.60/ 18.27 GFLOPS | Progress: (20/20) | 14.40 s Done.
-
[Task 11/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 11/25] Current/Best: 21.79/ 21.79 GFLOPS | Progress: (4/20) | 5.24 s
[Task 11/25] Current/Best: 6.14/ 21.79 GFLOPS | Progress: (8/20) | 8.56 s
[Task 11/25] Current/Best: 12.28/ 21.79 GFLOPS | Progress: (12/20) | 11.09 s
[Task 11/25] Current/Best: 11.37/ 21.79 GFLOPS | Progress: (16/20) | 15.33 s
[Task 11/25] Current/Best: 8.96/ 22.11 GFLOPS | Progress: (20/20) | 17.68 s Done.
-
[Task 12/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 12/25] Current/Best: 14.06/ 14.06 GFLOPS | Progress: (4/20) | 5.07 s
[Task 12/25] Current/Best: 20.20/ 20.20 GFLOPS | Progress: (8/20) | 10.17 s
[Task 12/25] Current/Best: 15.42/ 20.20 GFLOPS | Progress: (12/20) | 12.81 s
[Task 12/25] Current/Best: 14.35/ 20.20 GFLOPS | Progress: (16/20) | 15.99 s
[Task 12/25] Current/Best: 16.88/ 20.20 GFLOPS | Progress: (20/20) | 18.80 s Done.
-
[Task 13/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 13/25] Current/Best: 6.21/ 21.27 GFLOPS | Progress: (4/20) | 5.42 s
[Task 13/25] Current/Best: 15.04/ 21.27 GFLOPS | Progress: (8/20) | 7.97 s
[Task 13/25] Current/Best: 11.67/ 21.27 GFLOPS | Progress: (12/20) | 12.44 s
[Task 13/25] Current/Best: 13.99/ 21.27 GFLOPS | Progress: (16/20) | 15.35 s
[Task 13/25] Current/Best: 18.52/ 21.27 GFLOPS | Progress: (20/20) | 18.87 s Done.
-
[Task 14/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 14/25] Current/Best: 2.74/ 14.45 GFLOPS | Progress: (4/20) | 6.96 s
[Task 14/25] Current/Best: 4.77/ 15.79 GFLOPS | Progress: (8/20) | 10.93 s
[Task 14/25] Current/Best: 9.86/ 15.79 GFLOPS | Progress: (12/20) | 14.24 s
[Task 14/25] Current/Best: 16.20/ 16.20 GFLOPS | Progress: (16/20) | 17.48 s
[Task 14/25] Current/Best: 3.14/ 16.20 GFLOPS | Progress: (20/20) | 20.85 s
[Task 15/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 15/25] Current/Best: 11.64/ 18.19 GFLOPS | Progress: (4/20) | 6.48 s Done.
+
[Task 1/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 1/25] Current/Best: 11.19/ 11.19 GFLOPS | Progress: (4/20) | 13.22 s
[Task 1/25] Current/Best: 14.27/ 22.54 GFLOPS | Progress: (8/20) | 17.29 s
[Task 1/25] Current/Best: 9.79/ 22.54 GFLOPS | Progress: (12/20) | 20.21 s
[Task 1/25] Current/Best: 18.30/ 22.54 GFLOPS | Progress: (16/20) | 23.55 s
[Task 1/25] Current/Best: 4.80/ 23.49 GFLOPS | Progress: (20/20) | 27.11 s Done.
+
[Task 2/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 2/25] Current/Best: 15.71/ 15.71 GFLOPS | Progress: (4/20) | 5.03 s
[Task 2/25] Current/Best: 10.71/ 18.53 GFLOPS | Progress: (8/20) | 6.76 s
[Task 2/25] Current/Best: 16.59/ 21.79 GFLOPS | Progress: (12/20) | 8.27 s
[Task 2/25] Current/Best: 5.92/ 21.79 GFLOPS | Progress: (16/20) | 10.46 s
[Task 2/25] Current/Best: 19.65/ 21.98 GFLOPS | Progress: (20/20) | 12.57 s Done.
+
[Task 3/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 3/25] Current/Best: 20.94/ 20.94 GFLOPS | Progress: (4/20) | 5.75 s
[Task 3/25] Current/Best: 10.37/ 22.38 GFLOPS | Progress: (8/20) | 7.71 s
[Task 3/25] Current/Best: 13.82/ 22.38 GFLOPS | Progress: (12/20) | 10.60 s
[Task 3/25] Current/Best: 14.41/ 24.04 GFLOPS | Progress: (16/20) | 14.20 s
[Task 3/25] Current/Best: 15.92/ 24.04 GFLOPS | Progress: (20/20) | 17.21 s Done.
+
[Task 4/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 4/25] Current/Best: 18.36/ 19.46 GFLOPS | Progress: (4/20) | 4.52 s
[Task 4/25] Current/Best: 10.73/ 19.46 GFLOPS | Progress: (8/20) | 7.04 s
[Task 4/25] Current/Best: 20.38/ 20.38 GFLOPS | Progress: (12/20) | 9.19 s
[Task 4/25] Current/Best: 10.81/ 20.38 GFLOPS | Progress: (16/20) | 11.36 s
[Task 4/25] Current/Best: 7.56/ 20.38 GFLOPS | Progress: (20/20) | 13.66 s Done.
+
[Task 5/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 5/25] Current/Best: 18.46/ 20.48 GFLOPS | Progress: (4/20) | 6.39 s
[Task 5/25] Current/Best: 18.21/ 20.48 GFLOPS | Progress: (8/20) | 8.28 s
[Task 5/25] Current/Best: 4.42/ 21.27 GFLOPS | Progress: (12/20) | 10.37 s
[Task 5/25] Current/Best: 20.33/ 21.27 GFLOPS | Progress: (16/20) | 12.10 s
[Task 5/25] Current/Best: 23.05/ 23.05 GFLOPS | Progress: (20/20) | 13.99 s Done.
+
[Task 6/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 6/25] Current/Best: 5.36/ 18.56 GFLOPS | Progress: (4/20) | 5.17 s
[Task 6/25] Current/Best: 4.45/ 18.56 GFLOPS | Progress: (8/20) | 9.50 s
[Task 6/25] Current/Best: 14.27/ 18.56 GFLOPS | Progress: (12/20) | 12.15 s
[Task 6/25] Current/Best: 15.24/ 19.56 GFLOPS | Progress: (16/20) | 16.05 s
[Task 6/25] Current/Best: 8.24/ 19.56 GFLOPS | Progress: (20/20) | 18.77 s Done.
+
[Task 7/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 7/25] Current/Best: 17.15/ 17.15 GFLOPS | Progress: (4/20) | 5.25 s
[Task 7/25] Current/Best: 6.66/ 17.15 GFLOPS | Progress: (8/20) | 8.14 s
[Task 7/25] Current/Best: 6.20/ 17.15 GFLOPS | Progress: (12/20) | 11.04 s
[Task 7/25] Current/Best: 5.04/ 18.28 GFLOPS | Progress: (16/20) | 13.96 s
[Task 7/25] Current/Best: 3.07/ 18.48 GFLOPS | Progress: (20/20) | 16.56 s Done.
+
[Task 8/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 8/25] Current/Best: 11.73/ 19.91 GFLOPS | Progress: (4/20) | 6.70 s
[Task 8/25] Current/Best: 10.46/ 19.91 GFLOPS | Progress: (8/20) | 11.92 s
[Task 8/25] Current/Best: 12.60/ 19.91 GFLOPS | Progress: (12/20) | 15.17 s
[Task 8/25] Current/Best: 14.16/ 19.91 GFLOPS | Progress: (16/20) | 18.25 s
[Task 8/25] Current/Best: 11.95/ 20.67 GFLOPS | Progress: (20/20) | 20.87 s Done.
+
[Task 9/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 9/25] Current/Best: 16.23/ 20.30 GFLOPS | Progress: (4/20) | 5.22 s
[Task 9/25] Current/Best: 12.37/ 20.30 GFLOPS | Progress: (8/20) | 15.67 s
[Task 9/25] Current/Best: 3.10/ 20.30 GFLOPS | Progress: (12/20) | 20.97 s
[Task 9/25] Current/Best: 6.11/ 20.30 GFLOPS | Progress: (16/20) | 24.57 s
[Task 9/25] Current/Best: 12.40/ 20.30 GFLOPS | Progress: (20/20) | 35.77 s
[Task 10/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
+
[Task 10/25] Current/Best: 14.41/ 18.08 GFLOPS | Progress: (4/20) | 6.35 s
[Task 10/25] Current/Best: 9.72/ 18.08 GFLOPS | Progress: (8/20) | 9.45 s
[Task 10/25] Current/Best: 18.78/ 18.78 GFLOPS | Progress: (12/20) | 11.20 s
[Task 10/25] Current/Best: 13.17/ 18.78 GFLOPS | Progress: (16/20) | 13.11 s
[Task 10/25] Current/Best: 20.32/ 22.05 GFLOPS | Progress: (20/20) | 14.87 s Done.
+
[Task 11/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 11/25] Current/Best: 15.91/ 15.91 GFLOPS | Progress: (4/20) | 5.28 s
[Task 11/25] Current/Best: 11.79/ 18.27 GFLOPS | Progress: (8/20) | 7.95 s
[Task 11/25] Current/Best: 6.18/ 18.27 GFLOPS | Progress: (12/20) | 10.73 s
[Task 11/25] Current/Best: 11.47/ 21.49 GFLOPS | Progress: (16/20) | 14.47 s
[Task 11/25] Current/Best: 12.29/ 21.49 GFLOPS | Progress: (20/20) | 17.17 s Done.
+
[Task 12/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 12/25] Current/Best: 15.06/ 15.06 GFLOPS | Progress: (4/20) | 5.41 s
[Task 12/25] Current/Best: 5.95/ 15.06 GFLOPS | Progress: (8/20) | 10.36 s
[Task 12/25] Current/Best: 14.56/ 15.13 GFLOPS | Progress: (12/20) | 14.90 s
[Task 12/25] Current/Best: 3.24/ 15.13 GFLOPS | Progress: (16/20) | 18.05 s
[Task 12/25] Current/Best: 10.26/ 15.13 GFLOPS | Progress: (20/20) | 20.85 s Done.
+
[Task 13/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 13/25] Current/Best: 8.95/ 12.19 GFLOPS | Progress: (4/20) | 6.68 s
[Task 13/25] Current/Best: 21.15/ 21.15 GFLOPS | Progress: (8/20) | 9.85 s
[Task 13/25] Current/Best: 3.10/ 21.15 GFLOPS | Progress: (12/20) | 13.75 s
[Task 13/25] Current/Best: 12.64/ 21.15 GFLOPS | Progress: (16/20) | 17.58 s
[Task 13/25] Current/Best: 20.61/ 21.15 GFLOPS | Progress: (20/20) | 20.79 s Done.
+
[Task 14/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 14/25] Current/Best: 12.51/ 12.51 GFLOPS | Progress: (4/20) | 5.47 s
[Task 14/25] Current/Best: 15.56/ 15.56 GFLOPS | Progress: (8/20) | 10.19 s
[Task 14/25] Current/Best: 12.29/ 15.98 GFLOPS | Progress: (12/20) | 12.55 s
[Task 14/25] Current/Best: 10.54/ 15.98 GFLOPS | Progress: (16/20) | 19.09 s
[Task 14/25] Current/Best: 8.91/ 15.98 GFLOPS | Progress: (20/20) | 24.75 s
[Task 15/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 15/25] Current/Best: 15.25/ 15.25 GFLOPS | Progress: (4/20) | 4.82 s
[Task 15/25] Current/Best: 12.16/ 15.25 GFLOPS | Progress: (8/20) | 7.55 s
[Task 15/25] Current/Best: 5.91/ 15.25 GFLOPS | Progress: (12/20) | 14.43 s Done.
+
[Task 15/25] Current/Best: 12.13/ 21.45 GFLOPS | Progress: (16/20) | 16.60 s
[Task 15/25] Current/Best: 15.60/ 21.45 GFLOPS | Progress: (20/20) | 19.27 s Done.
+
[Task 16/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 16/25] Current/Best: 16.18/ 16.77 GFLOPS | Progress: (4/20) | 4.58 s
[Task 16/25] Current/Best: 15.47/ 16.77 GFLOPS | Progress: (8/20) | 6.83 s
[Task 16/25] Current/Best: 12.96/ 16.78 GFLOPS | Progress: (12/20) | 10.34 s
[Task 16/25] Current/Best: 13.09/ 16.78 GFLOPS | Progress: (16/20) | 14.49 s
[Task 16/25] Current/Best: 16.17/ 16.78 GFLOPS | Progress: (20/20) | 16.60 s Done.
+
[Task 17/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 17/25] Current/Best: 1.56/ 17.65 GFLOPS | Progress: (4/20) | 6.91 s
[Task 17/25] Current/Best: 9.82/ 19.44 GFLOPS | Progress: (8/20) | 9.60 s
[Task 17/25] Current/Best: 17.32/ 19.44 GFLOPS | Progress: (12/20) | 11.92 s
[Task 17/25] Current/Best: 3.10/ 23.05 GFLOPS | Progress: (16/20) | 14.62 s
[Task 17/25] Current/Best: 17.98/ 23.05 GFLOPS | Progress: (20/20) | 16.94 s Done.
+
[Task 18/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 18/25] Current/Best: 13.08/ 16.87 GFLOPS | Progress: (4/20) | 5.38 s
[Task 18/25] Current/Best: 13.53/ 16.87 GFLOPS | Progress: (8/20) | 8.19 s
[Task 18/25] Current/Best: 5.76/ 16.87 GFLOPS | Progress: (12/20) | 11.10 s
[Task 18/25] Current/Best: 15.49/ 17.78 GFLOPS | Progress: (16/20) | 14.96 s
[Task 18/25] Current/Best: 14.61/ 19.57 GFLOPS | Progress: (20/20) | 18.04 s Done.
+
[Task 19/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 19/25] Current/Best: 10.65/ 20.39 GFLOPS | Progress: (4/20) | 9.40 s
[Task 19/25] Current/Best: 9.39/ 20.39 GFLOPS | Progress: (8/20) | 14.30 s
[Task 19/25] Current/Best: 10.72/ 20.39 GFLOPS | Progress: (12/20) | 18.01 s
[Task 19/25] Current/Best: 10.41/ 20.39 GFLOPS | Progress: (16/20) | 21.46 s
[Task 19/25] Current/Best: 17.19/ 20.39 GFLOPS | Progress: (20/20) | 26.06 s Done.
+
[Task 20/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 20/25] Current/Best: 14.39/ 17.37 GFLOPS | Progress: (4/20) | 4.89 s
[Task 20/25] Current/Best: 13.37/ 17.37 GFLOPS | Progress: (8/20) | 7.79 s
[Task 20/25] Current/Best: 15.94/ 17.37 GFLOPS | Progress: (12/20) | 12.39 s
[Task 20/25] Current/Best: 15.40/ 17.37 GFLOPS | Progress: (16/20) | 15.93 s
[Task 20/25] Current/Best: 6.30/ 17.37 GFLOPS | Progress: (20/20) | 18.17 s
[Task 21/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 21/25] Current/Best: 10.42/ 16.30 GFLOPS | Progress: (4/20) | 5.89 s
[Task 21/25] Current/Best: 6.65/ 16.30 GFLOPS | Progress: (8/20) | 8.36 s
[Task 21/25] Current/Best: 6.74/ 16.30 GFLOPS | Progress: (12/20) | 11.27 s Done.
+
[Task 21/25] Current/Best: 14.30/ 20.07 GFLOPS | Progress: (16/20) | 14.18 s
[Task 21/25] Current/Best: 13.68/ 20.07 GFLOPS | Progress: (20/20) | 16.01 s
[Task 22/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 22/25] Current/Best: 10.46/ 14.45 GFLOPS | Progress: (4/20) | 5.92 s
[Task 22/25] Current/Best: 5.28/ 16.44 GFLOPS | Progress: (8/20) | 9.41 s
[Task 22/25] Current/Best: 15.70/ 16.44 GFLOPS | Progress: (12/20) | 11.66 s
[Task 22/25] Current/Best: 10.48/ 16.44 GFLOPS | Progress: (16/20) | 13.56 s
[Task 22/25] Current/Best: 16.69/ 16.69 GFLOPS | Progress: (20/20) | 15.54 s Done.
+
[Task 23/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 23/25] Current/Best: 11.85/ 19.38 GFLOPS | Progress: (4/20) | 7.03 s
[Task 23/25] Current/Best: 11.70/ 19.38 GFLOPS | Progress: (8/20) | 9.67 s
[Task 23/25] Current/Best: 11.13/ 19.38 GFLOPS | Progress: (12/20) | 14.64 s
[Task 23/25] Current/Best: 22.34/ 22.34 GFLOPS | Progress: (16/20) | 17.54 s
[Task 23/25] Current/Best: 9.80/ 22.34 GFLOPS | Progress: (20/20) | 21.48 s Done.
+
[Task 24/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 24/25] Current/Best: 5.61/ 5.61 GFLOPS | Progress: (4/20) | 4.72 s
[Task 24/25] Current/Best: 6.33/ 7.20 GFLOPS | Progress: (8/20) | 6.36 s
[Task 24/25] Current/Best: 3.01/ 9.70 GFLOPS | Progress: (12/20) | 17.32 s
[Task 24/25] Current/Best: 8.39/ 9.70 GFLOPS | Progress: (16/20) | 21.62 s
[Task 24/25] Current/Best: 8.83/ 9.70 GFLOPS | Progress: (20/20) | 32.61 s
[Task 25/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 25/25] Current/Best: 1.55/ 5.97 GFLOPS | Progress: (4/20) | 15.23 s Done.
Done.
-
[Task 15/25] Current/Best: 14.46/ 19.21 GFLOPS | Progress: (8/20) | 8.51 s
[Task 15/25] Current/Best: 12.59/ 19.21 GFLOPS | Progress: (12/20) | 10.23 s
[Task 15/25] Current/Best: 23.11/ 23.11 GFLOPS | Progress: (16/20) | 11.78 s
[Task 15/25] Current/Best: 13.14/ 23.11 GFLOPS | Progress: (20/20) | 14.13 s Done.
-
[Task 16/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 16/25] Current/Best: 3.06/ 17.34 GFLOPS | Progress: (4/20) | 6.96 s
[Task 16/25] Current/Best: 4.88/ 20.78 GFLOPS | Progress: (8/20) | 9.65 s
[Task 16/25] Current/Best: 15.91/ 20.78 GFLOPS | Progress: (12/20) | 11.44 s
[Task 16/25] Current/Best: 11.49/ 20.78 GFLOPS | Progress: (16/20) | 14.21 s
[Task 16/25] Current/Best: 15.53/ 20.78 GFLOPS | Progress: (20/20) | 16.51 s Done.
-
[Task 17/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 17/25] Current/Best: 7.64/ 14.77 GFLOPS | Progress: (4/20) | 5.92 s
[Task 17/25] Current/Best: 20.64/ 20.64 GFLOPS | Progress: (8/20) | 9.05 s
[Task 17/25] Current/Best: 8.64/ 20.64 GFLOPS | Progress: (12/20) | 12.65 s
[Task 17/25] Current/Best: 10.94/ 20.93 GFLOPS | Progress: (16/20) | 14.91 s
[Task 17/25] Current/Best: 7.31/ 20.93 GFLOPS | Progress: (20/20) | 19.51 s Done.
-
[Task 18/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 18/25] Current/Best: 13.94/ 19.93 GFLOPS | Progress: (4/20) | 6.30 s
[Task 18/25] Current/Best: 13.92/ 19.93 GFLOPS | Progress: (8/20) | 8.72 s
[Task 18/25] Current/Best: 6.05/ 19.93 GFLOPS | Progress: (12/20) | 11.20 s
[Task 18/25] Current/Best: 15.07/ 19.93 GFLOPS | Progress: (16/20) | 13.16 s
[Task 18/25] Current/Best: 12.51/ 19.93 GFLOPS | Progress: (20/20) | 18.29 s Done.
-
[Task 19/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 19/25] Current/Best: 11.30/ 18.40 GFLOPS | Progress: (4/20) | 6.23 s
[Task 19/25] Current/Best: 18.04/ 18.40 GFLOPS | Progress: (8/20) | 8.89 s
[Task 19/25] Current/Best: 9.25/ 19.69 GFLOPS | Progress: (12/20) | 11.99 s
[Task 19/25] Current/Best: 12.16/ 19.69 GFLOPS | Progress: (16/20) | 15.74 s
[Task 19/25] Current/Best: 2.69/ 19.69 GFLOPS | Progress: (20/20) | 20.51 s Done.
-
[Task 20/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 20/25] Current/Best: 16.20/ 16.20 GFLOPS | Progress: (4/20) | 5.12 s
[Task 20/25] Current/Best: 15.82/ 18.43 GFLOPS | Progress: (8/20) | 8.85 s
[Task 20/25] Current/Best: 11.71/ 18.43 GFLOPS | Progress: (12/20) | 11.07 s
[Task 20/25] Current/Best: 13.56/ 18.43 GFLOPS | Progress: (16/20) | 18.57 s
[Task 20/25] Current/Best: 10.62/ 19.11 GFLOPS | Progress: (20/20) | 20.44 s
[Task 21/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 21/25] Current/Best: 16.31/ 18.00 GFLOPS | Progress: (4/20) | 5.62 s
[Task 21/25] Current/Best: 11.31/ 18.00 GFLOPS | Progress: (8/20) | 9.26 s
[Task 21/25] Current/Best: 17.93/ 18.00 GFLOPS | Progress: (12/20) | 11.93 s
[Task 21/25] Current/Best: 19.36/ 19.36 GFLOPS | Progress: (16/20) | 14.59 s
[Task 21/25] Current/Best: 2.72/ 19.36 GFLOPS | Progress: (20/20) | 17.94 s
[Task 22/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
- Done.
-
[Task 22/25] Current/Best: 7.07/ 12.22 GFLOPS | Progress: (4/20) | 6.07 s
[Task 22/25] Current/Best: 9.59/ 19.51 GFLOPS | Progress: (8/20) | 8.29 s
[Task 22/25] Current/Best: 11.66/ 19.51 GFLOPS | Progress: (12/20) | 10.44 s
[Task 22/25] Current/Best: 6.95/ 19.51 GFLOPS | Progress: (16/20) | 12.14 s
[Task 22/25] Current/Best: 18.58/ 19.82 GFLOPS | Progress: (20/20) | 13.85 s Done.
-
[Task 23/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 23/25] Current/Best: 1.55/ 10.01 GFLOPS | Progress: (4/20) | 9.75 s
[Task 23/25] Current/Best: 14.13/ 15.37 GFLOPS | Progress: (8/20) | 13.26 s
[Task 23/25] Current/Best: 21.20/ 21.20 GFLOPS | Progress: (12/20) | 16.11 s
[Task 23/25] Current/Best: 18.90/ 21.37 GFLOPS | Progress: (16/20) | 18.46 s
[Task 23/25] Current/Best: 9.34/ 21.37 GFLOPS | Progress: (20/20) | 21.14 s Done.
-
[Task 24/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 24/25] Current/Best: 3.29/ 3.29 GFLOPS | Progress: (4/20) | 13.47 s
[Task 24/25] Current/Best: 8.98/ 8.98 GFLOPS | Progress: (8/20) | 26.05 s
[Task 24/25] Current/Best: 6.20/ 8.98 GFLOPS | Progress: (12/20) | 36.67 s
[Task 24/25] Current/Best: 6.63/ 8.98 GFLOPS | Progress: (16/20) | 48.89 s
[Task 24/25] Current/Best: 2.22/ 8.98 GFLOPS | Progress: (20/20) | 62.10 s
[Task 25/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
-
[Task 25/25] Current/Best: 3.25/ 3.25 GFLOPS | Progress: (4/20) | 13.18 s
[Task 25/25] Current/Best: 5.81/ 8.76 GFLOPS | Progress: (8/20) | 19.56 s
[Task 25/25] Current/Best: 5.75/ 9.32 GFLOPS | Progress: (12/20) | 26.97 s
[Task 25/25] Current/Best: 8.38/ 9.32 GFLOPS | Progress: (16/20) | 37.91 s
[Task 25/25] Current/Best: 4.62/ 9.32 GFLOPS | Progress: (20/20) | 50.45 s
+
[Task 25/25] Current/Best: 7.97/ 7.97 GFLOPS | Progress: (8/20) | 26.28 s
[Task 25/25] Current/Best: 5.89/ 7.97 GFLOPS | Progress: (12/20) | 32.10 s
[Task 25/25] Current/Best: 7.44/ 7.97 GFLOPS | Progress: (16/20) | 43.08 s
[Task 25/25] Current/Best: 8.04/ 8.04 GFLOPS | Progress: (20/20) | 48.41 s
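Each "Current/Best" line in the tuning log above is a throughput figure: the operator's floating-point operation count divided by the measured time, in GFLOPS. A small sketch of that arithmetic (the matmul size and timing are made-up example values):

```python
def gflops(flop_count, seconds):
    # GFLOPS = floating-point operations / elapsed time / 1e9,
    # the quantity shown in each "Current/Best" tuning line above.
    return flop_count / seconds / 1e9

# e.g. an N x N x N matmul does 2*N^3 FLOPs (one multiply and one add per term)
n = 512
print(gflops(2 * n**3, 0.015))
```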
@@ -665,7 +665,7 @@ Verify that the optimized model runs and produces the same results:
.. code-block:: none
- class='n02123045 tabby, tabby cat' with probability=0.621104
+ class='n02123045 tabby, tabby cat' with probability=0.621105
class='n02123159 tiger cat' with probability=0.356377
class='n02124075 Egyptian cat' with probability=0.019712
class='n02129604 tiger, Panthera tigris' with probability=0.001215
@@ -723,8 +723,8 @@ improvement in comparing the optimized model to the unoptimized model.
.. code-block:: none
- optimized: {'mean': 425.7281788399928, 'median': 424.9334229499823, 'std': 2.240203157743987}
- unoptimized: {'mean': 517.2460499599993, 'median': 517.6602823499934, 'std': 1.9526973936459062}
+ optimized: {'mean': 421.8219032899992, 'median': 419.26141014999985, 'std': 5.033173061045772}
+ unoptimized: {'mean': 520.9648976600033, 'median': 521.4993348500059, 'std': 1.9851776540295982}
@@ -747,7 +747,7 @@ profiling/benchmarking.
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 13 minutes 11.051 seconds)
+ **Total running time of the script:** ( 12 minutes 40.560 seconds)
.. _sphx_glr_download_tutorial_autotvm_relay_x86.py:
diff --git a/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt b/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
index 6d4e3ab01c..3e355f0d70 100644
--- a/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
+++ b/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
@@ -274,7 +274,7 @@ device and returns the measured cost. Network overhead is excluded.
.. code-block:: none
- 1.246e-07 secs/op
+ 1.277e-07 secs/op
diff --git a/docs/_sources/tutorial/intro_topi.rst.txt b/docs/_sources/tutorial/intro_topi.rst.txt
index 12da7d7dcd..3befa59f73 100644
--- a/docs/_sources/tutorial/intro_topi.rst.txt
+++ b/docs/_sources/tutorial/intro_topi.rst.txt
@@ -270,7 +270,7 @@ As you can see, scheduled stages of computation have been accumulated and we can
.. code-block:: none
- [stage(a, placeholder(a, 0x29dbb650)), stage(b, placeholder(b, 0x2308eec0)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T.Range(0, 10), "DataPar", ""), T.iter_var(ax2, T.Range(0, 10), "DataPar", "")], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[a[ax0, ax1, ax2] * b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T [...]
+ [stage(a, placeholder(a, 0x25814070)), stage(b, placeholder(b, 0x253192a0)), stage(T_add, compute(T_add, body=[a[ax0, ax1, ax2] + b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T.Range(0, 10), "DataPar", ""), T.iter_var(ax2, T.Range(0, 10), "DataPar", "")], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[a[ax0, ax1, ax2] * b[ax1, ax2]], axis=[T.iter_var(ax0, T.Range(0, 100), "DataPar", ""), T.iter_var(ax1, T [...]
diff --git a/docs/_sources/tutorial/sg_execution_times.rst.txt b/docs/_sources/tutorial/sg_execution_times.rst.txt
index f9369c478d..37d6a70c9e 100644
--- a/docs/_sources/tutorial/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorial/sg_execution_times.rst.txt
@@ -5,24 +5,24 @@
Computation times
=================
-**17:18.416** total execution time for **tutorial** files:
+**16:26.835** total execution time for **tutorial** files:
+------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_autotvm_relay_x86.py` (``autotvm_relay_x86.py``) | 13:11.051 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_autotvm_relay_x86.py` (``autotvm_relay_x86.py``) | 12:40.560 | 0.0 MB |
+------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_auto_scheduler_matmul_x86.py` (``auto_scheduler_matmul_x86.py``) | 01:42.784 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_auto_scheduler_matmul_x86.py` (``auto_scheduler_matmul_x86.py``) | 01:41.846 | 0.0 MB |
+------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_tensor_expr_get_started.py` (``tensor_expr_get_started.py``) | 01:01.674 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_tensor_expr_get_started.py` (``tensor_expr_get_started.py``) | 00:59.965 | 0.0 MB |
+------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_autotvm_matmul_x86.py` (``autotvm_matmul_x86.py``) | 00:43.323 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_relay_quick_start.py` (``relay_quick_start.py``) | 00:37.657 | 0.0 MB |
+------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_relay_quick_start.py` (``relay_quick_start.py``) | 00:36.815 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_autotvm_matmul_x86.py` (``autotvm_matmul_x86.py``) | 00:24.219 | 0.0 MB |
+------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_tensor_ir_blitz_course.py` (``tensor_ir_blitz_course.py``) | 00:01.742 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_tensor_ir_blitz_course.py` (``tensor_ir_blitz_course.py``) | 00:01.531 | 0.0 MB |
+------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_intro_topi.py` (``intro_topi.py``) | 00:00.854 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_intro_topi.py` (``intro_topi.py``) | 00:00.868 | 0.0 MB |
+------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``) | 00:00.172 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``) | 00:00.189 | 0.0 MB |
+------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorial_uma.py` (``uma.py``) | 00:00.000 | 0.0 MB |
+------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/_sources/tutorial/tensor_expr_get_started.rst.txt b/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
index ccce4f1b8e..c93ea804ac 100644
--- a/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
+++ b/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
@@ -498,10 +498,10 @@ We can now compare the different schedules
.. code-block:: none
Operator Timing Performance
- numpy 7.210239998585166e-06 1.0
- naive 6.7155e-06 0.9313836989223317
- parallel 6.979399999999999e-06 0.9679844223451003
- vector 2.4538e-05 3.4032154276161384
+ numpy 7.092420000844868e-06 1.0
+ naive 6.9421e-06 0.9788055415743905
+ parallel 6.9999000000000005e-06 0.9869550871446071
+ vector 2.46489e-05 3.4753863980226436
@@ -922,7 +922,7 @@ matrix multiplication.
.. code-block:: none
- Numpy running time: 0.019080
+ Numpy running time: 0.020044
@@ -980,7 +980,7 @@ optimizations.
.. code-block:: none
- none: 3.460447
+ none: 3.265856
@@ -1080,7 +1080,7 @@ schedule.
.. code-block:: none
- blocking: 0.296317
+ blocking: 0.318353
@@ -1164,7 +1164,7 @@ already cache friendly from our previous optimizations.
.. code-block:: none
- vectorization: 0.336181
+ vectorization: 0.345539
# from tvm.script import ir as I
# from tvm.script import tir as T
@@ -1230,7 +1230,7 @@ more cache friendly.
.. code-block:: none
- loop permutation: 0.117107
+ loop permutation: 0.130713
# from tvm.script import ir as I
# from tvm.script import tir as T
@@ -1321,7 +1321,7 @@ optimized schedule.
.. code-block:: none
- array packing: 0.108342
+ array packing: 0.110260
# from tvm.script import ir as I
# from tvm.script import tir as T
@@ -1404,7 +1404,7 @@ to `C` when all the block results are ready.
.. code-block:: none
- block caching: 0.110331
+ block caching: 0.111886
# from tvm.script import ir as I
# from tvm.script import tir as T
@@ -1478,7 +1478,7 @@ of thread-level parallelization.
.. code-block:: none
- parallelization: 0.146422
+ parallelization: 0.147335
# from tvm.script import ir as I
# from tvm.script import tir as T
@@ -1548,13 +1548,13 @@ working, we can compare the results.
.. code-block:: none
Operator Timing Performance
- none 3.4604471772 1.0
- blocking 0.29631718139999996 0.0856297369173435
- vectorization 0.33618133370000003 0.0971496793579202
- loop permutation 0.1171069266 0.03384155879378467
- array packing 0.1083415687 0.03130854573184496
- block caching 0.11033055839999999 0.03188332396082789
- parallelization 0.1464217946 0.04231296913437538
+ none 3.2658563150999997 1.0
+ blocking 0.3183525889 0.09747905547101576
+ vectorization 0.345539399 0.10580361340527
+ loop permutation 0.13071338189999998 0.04002422926435377
+ array packing 0.11026016450000001 0.033761486685804754
+ block caching 0.11188624840000001 0.03425939098504831
+ parallelization 0.1473353046 0.04511383551039312
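The right-hand column of the comparison table above is simply each schedule's runtime divided by the unoptimized ("none") baseline. A minimal reproduction of that table, with the timings copied from the `+` lines:

```python
# Runtimes in seconds, taken from the updated table above.
timings = {
    "none": 3.2658563151,
    "blocking": 0.3183525889,
    "vectorization": 0.345539399,
    "loop permutation": 0.1307133819,
    "array packing": 0.1102601645,
    "block caching": 0.1118862484,
    "parallelization": 0.1473353046,
}

baseline = timings["none"]
for name, t in timings.items():
    # Performance column = time relative to the unoptimized schedule.
    print(f"{name:>16}  {t:.10f}  {t / baseline:.10f}")
```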
@@ -1594,11 +1594,6 @@ operations with tunable parameters that allows you to automatically optimize
the computation for specific platforms.
-.. rst-class:: sphx-glr-timing
-
- **Total running time of the script:** ( 1 minutes 1.674 seconds)
-
-
.. _sphx_glr_download_tutorial_tensor_expr_get_started.py:
.. only:: html
diff --git a/docs/commit_hash b/docs/commit_hash
index 67da423c1d..c25e02a764 100644
--- a/docs/commit_hash
+++ b/docs/commit_hash
@@ -1 +1 @@
-d22bdce2bf4c16fab0ed54ca320f07ed48ee85d0
+ce1fa8908f626e58f245966dd0a2e2540b75dace
diff --git a/docs/how_to/compile_models/from_darknet.html b/docs/how_to/compile_models/from_darknet.html
index babdae4fcc..0d0e0a7e9a 100644
--- a/docs/how_to/compile_models/from_darknet.html
+++ b/docs/how_to/compile_models/from_darknet.html
@@ -590,7 +590,7 @@ class:['truck 0.9266'] left:471 top:83 right:689 bottom:169
class:['bicycle 0.9984'] left:111 top:113 right:577 bottom:447
</pre></div>
</div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 21.836 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 23.547 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-darknet-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/7716f96385bd5abb6e822041e285be54/from_darknet.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_darknet.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/from_keras.html b/docs/how_to/compile_models/from_keras.html
index 2b6203de69..7827856a99 100644
--- a/docs/how_to/compile_models/from_keras.html
+++ b/docs/how_to/compile_models/from_keras.html
@@ -511,7 +511,7 @@ Tensorflow is also required since it’s used as the default backend of keras.</
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Relay top-1 id: 285, class name: Egyptian cat
1/1 [==============================] - ETA: 0s
-1/1 [==============================] - 1s 976ms/step
+1/1 [==============================] - 1s 1000ms/step
Keras top-1 id: 285, class name: Egyptian cat
</pre></div>
</div>
diff --git a/docs/how_to/compile_models/from_mxnet.html b/docs/how_to/compile_models/from_mxnet.html
index 26b2fa2db4..7bb4cae549 100644
--- a/docs/how_to/compile_models/from_mxnet.html
+++ b/docs/how_to/compile_models/from_mxnet.html
@@ -444,7 +444,7 @@
<span class="nb">print</span><span class="p">(</span><span class="s2">"x"</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.html#tuple" title="builtins.tuple" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">x</span><span class="o">.</span><span class="n">shape</span></a><span class="p">)</span>
</pre></div>
</div>
-<img src="../../_images/sphx_glr_from_mxnet_001.png" srcset="../../_images/sphx_glr_from_mxnet_001.png" alt="from mxnet" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zipaa07d1d4-10e6-48fb-9062-75e0caff6d92 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
+<img src="../../_images/sphx_glr_from_mxnet_001.png" srcset="../../_images/sphx_glr_from_mxnet_001.png" alt="from mxnet" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip67f53738-4530-4710-85b5-bf0709b5767c from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
x (1, 3, 224, 224)
</pre></div>
</div>
diff --git a/docs/how_to/compile_models/from_oneflow.html b/docs/how_to/compile_models/from_oneflow.html
index 3b9627dc91..d052c69134 100644
--- a/docs/how_to/compile_models/from_oneflow.html
+++ b/docs/how_to/compile_models/from_oneflow.html
@@ -454,12 +454,13 @@ Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdo
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading: "https://oneflow-public.oss-cn-beijing.aliyuncs.com/model_zoo/flowvision/classification/ResNet/resnet18.zip" to /workspace/.oneflow/flowvision_cache/resnet18.zip
0%| | 0.00/41.5M [00:00<?, ?B/s]
- 19%|#9 | 7.99M/41.5M [00:00<00:00, 71.6MB/s]
- 39%|###8 | 16.0M/41.5M [00:00<00:00, 68.1MB/s]
- 58%|#####7 | 24.0M/41.5M [00:00<00:00, 57.6MB/s]
- 77%|#######7 | 32.0M/41.5M [00:00<00:00, 54.4MB/s]
- 90%|########9 | 37.3M/41.5M [00:00<00:00, 45.6MB/s]
-100%|##########| 41.5M/41.5M [00:00<00:00, 52.2MB/s]
+ 19%|#9 | 7.99M/41.5M [00:00<00:00, 50.6MB/s]
+ 35%|###4 | 14.3M/41.5M [00:00<00:00, 56.6MB/s]
+ 48%|####7 | 19.9M/41.5M [00:00<00:00, 50.1MB/s]
+ 60%|#####9 | 24.7M/41.5M [00:00<00:00, 47.5MB/s]
+ 77%|#######7 | 32.0M/41.5M [00:00<00:00, 55.6MB/s]
+ 90%|######### | 37.4M/41.5M [00:00<00:00, 46.3MB/s]
+100%|##########| 41.5M/41.5M [00:00<00:00, 50.1MB/s]
</pre></div>
</div>
</div>
diff --git a/docs/how_to/compile_models/from_pytorch.html b/docs/how_to/compile_models/from_pytorch.html
index b0d0cf1c83..4820f85a7e 100644
--- a/docs/how_to/compile_models/from_pytorch.html
+++ b/docs/how_to/compile_models/from_pytorch.html
@@ -437,12 +437,9 @@ be unstable.</p>
Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
0%| | 0.00/44.7M [00:00<?, ?B/s]
- 18%|#7 | 7.99M/44.7M [00:00<00:00, 69.7MB/s]
- 36%|###5 | 16.0M/44.7M [00:00<00:00, 65.6MB/s]
- 58%|#####8 | 26.1M/44.7M [00:00<00:00, 65.7MB/s]
- 72%|#######2 | 32.3M/44.7M [00:00<00:00, 51.8MB/s]
- 90%|########9 | 40.0M/44.7M [00:00<00:00, 53.2MB/s]
-100%|##########| 44.7M/44.7M [00:00<00:00, 55.3MB/s]
+ 35%|###4 | 15.6M/44.7M [00:00<00:00, 163MB/s]
+ 70%|######9 | 31.1M/44.7M [00:00<00:00, 81.4MB/s]
+100%|##########| 44.7M/44.7M [00:00<00:00, 99.8MB/s]
</pre></div>
</div>
</div>
diff --git a/docs/how_to/compile_models/from_tensorflow.html b/docs/how_to/compile_models/from_tensorflow.html
index 8962d5c755..7724b39c00 100644
--- a/docs/how_to/compile_models/from_tensorflow.html
+++ b/docs/how_to/compile_models/from_tensorflow.html
@@ -654,7 +654,7 @@ banana (score = 0.00022)
desk (score = 0.00019)
</pre></div>
</div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 24.757 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 26.948 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-tensorflow-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/7f1d3d1b878694c201c614c807cdebc8/from_tensorflow.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_tensorflow.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/sg_execution_times.html b/docs/how_to/compile_models/sg_execution_times.html
index 72d45b3642..feda9a5f15 100644
--- a/docs/how_to/compile_models/sg_execution_times.html
+++ b/docs/how_to/compile_models/sg_execution_times.html
@@ -345,7 +345,7 @@
<div class="section" id="computation-times">
<span id="sphx-glr-how-to-compile-models-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>06:43.629</strong> total execution time for <strong>how_to_compile_models</strong> files:</p>
+<p><strong>06:56.376</strong> total execution time for <strong>how_to_compile_models</strong> files:</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 81%" />
@@ -354,43 +354,43 @@
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="from_tensorflow.html#sphx-glr-how-to-compile-models-from-tensorflow-py"><span class="std std-ref">Compile Tensorflow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tensorflow.py</span></code>)</p></td>
-<td><p>01:24.757</p></td>
+<td><p>01:26.948</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="from_darknet.html#sphx-glr-how-to-compile-models-from-darknet-py"><span class="std std-ref">Compile YOLO-V2 and YOLO-V3 in DarkNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_darknet.py</span></code>)</p></td>
-<td><p>01:21.836</p></td>
+<td><p>01:23.547</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="from_paddle.html#sphx-glr-how-to-compile-models-from-paddle-py"><span class="std std-ref">Compile PaddlePaddle Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_paddle.py</span></code>)</p></td>
-<td><p>00:56.161</p></td>
+<td><p>00:59.062</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="from_oneflow.html#sphx-glr-how-to-compile-models-from-oneflow-py"><span class="std std-ref">Compile OneFlow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_oneflow.py</span></code>)</p></td>
-<td><p>00:37.651</p></td>
+<td><p>00:38.822</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="from_coreml.html#sphx-glr-how-to-compile-models-from-coreml-py"><span class="std std-ref">Compile CoreML Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_coreml.py</span></code>)</p></td>
-<td><p>00:32.012</p></td>
+<td><p>00:33.508</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="from_mxnet.html#sphx-glr-how-to-compile-models-from-mxnet-py"><span class="std std-ref">Compile MXNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_mxnet.py</span></code>)</p></td>
-<td><p>00:31.069</p></td>
+<td><p>00:31.827</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="from_tflite.html#sphx-glr-how-to-compile-models-from-tflite-py"><span class="std std-ref">Compile TFLite Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tflite.py</span></code>)</p></td>
-<td><p>00:28.221</p></td>
+<td><p>00:28.270</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="from_pytorch.html#sphx-glr-how-to-compile-models-from-pytorch-py"><span class="std std-ref">Compile PyTorch Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_pytorch.py</span></code>)</p></td>
-<td><p>00:27.074</p></td>
+<td><p>00:27.308</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="from_keras.html#sphx-glr-how-to-compile-models-from-keras-py"><span class="std std-ref">Compile Keras Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_keras.py</span></code>)</p></td>
-<td><p>00:22.145</p></td>
+<td><p>00:24.300</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="from_onnx.html#sphx-glr-how-to-compile-models-from-onnx-py"><span class="std std-ref">Compile ONNX Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_onnx.py</span></code>)</p></td>
-<td><p>00:02.703</p></td>
+<td><p>00:02.784</p></td>
<td><p>0.0 MB</p></td>
</tr>
</tbody>
diff --git a/docs/how_to/deploy_models/deploy_model_on_adreno.html b/docs/how_to/deploy_models/deploy_model_on_adreno.html
index 3d5be79d27..aeb6a3fbd5 100644
--- a/docs/how_to/deploy_models/deploy_model_on_adreno.html
+++ b/docs/how_to/deploy_models/deploy_model_on_adreno.html
@@ -925,10 +925,10 @@ Top5 predictions:
Evaluate inference time cost...
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 3332.6629 3331.4334 3340.3620 3329.4122 3.1287
+ 3336.8734 3336.7715 3341.2942 3333.3142 2.2568
</pre></div>
</div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 4.181 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 5.099 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-model-on-adreno-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/2387d8448da213eb625e6b3d916327d4/deploy_model_on_adreno.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_model_on_adreno.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_model_on_android.html b/docs/how_to/deploy_models/deploy_model_on_android.html
index 7190e47c6b..c064911590 100644
--- a/docs/how_to/deploy_models/deploy_model_on_android.html
+++ b/docs/how_to/deploy_models/deploy_model_on_android.html
@@ -667,7 +667,7 @@ to the remote android device.</p>
Evaluate inference time cost...
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 16.5683 16.7869 16.8908 15.8084 0.3796
+ 16.8231 16.8033 17.2963 16.1574 0.3658
</pre></div>
</div>
</div>
diff --git a/docs/how_to/deploy_models/deploy_object_detection_pytorch.html b/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
index e41984d061..6263dfa910 100644
--- a/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
+++ b/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
@@ -459,30 +459,26 @@ be unstable.</p>
Downloading: "https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth" to /workspace/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
0%| | 0.00/170M [00:00<?, ?B/s]
- 5%|4 | 7.99M/170M [00:00<00:03, 48.5MB/s]
- 8%|8 | 14.3M/170M [00:00<00:03, 47.0MB/s]
- 11%|#1 | 18.8M/170M [00:00<00:03, 40.9MB/s]
- 13%|#3 | 22.6M/170M [00:00<00:04, 37.4MB/s]
- 15%|#5 | 26.2M/170M [00:00<00:04, 34.2MB/s]
- 19%|#8 | 32.0M/170M [00:00<00:04, 34.0MB/s]
- 24%|##3 | 40.0M/170M [00:01<00:03, 42.8MB/s]
- 28%|##8 | 48.0M/170M [00:01<00:02, 42.6MB/s]
- 35%|###4 | 59.1M/170M [00:01<00:01, 58.7MB/s]
- 38%|###8 | 65.3M/170M [00:01<00:02, 46.5MB/s]
- 42%|####2 | 72.0M/170M [00:01<00:02, 47.3MB/s]
- 47%|####7 | 80.0M/170M [00:01<00:01, 48.9MB/s]
- 52%|#####1 | 88.0M/170M [00:02<00:01, 51.4MB/s]
- 57%|#####6 | 96.0M/170M [00:02<00:01, 51.5MB/s]
- 61%|######1 | 104M/170M [00:02<00:01, 52.7MB/s]
- 66%|######5 | 112M/170M [00:02<00:01, 55.1MB/s]
- 71%|####### | 120M/170M [00:02<00:00, 58.0MB/s]
- 75%|#######5 | 128M/170M [00:02<00:00, 60.1MB/s]
- 80%|######## | 136M/170M [00:02<00:00, 59.1MB/s]
- 85%|########4 | 144M/170M [00:03<00:00, 58.6MB/s]
- 89%|########9 | 152M/170M [00:03<00:00, 61.1MB/s]
- 94%|#########4| 160M/170M [00:03<00:00, 56.7MB/s]
- 98%|#########7| 166M/170M [00:03<00:00, 57.1MB/s]
-100%|##########| 170M/170M [00:03<00:00, 50.6MB/s]
+ 7%|7 | 12.1M/170M [00:00<00:01, 126MB/s]
+ 14%|#4 | 24.1M/170M [00:00<00:01, 78.7MB/s]
+ 19%|#9 | 32.6M/170M [00:00<00:01, 80.6MB/s]
+ 26%|##6 | 44.5M/170M [00:00<00:01, 95.2MB/s]
+ 32%|###1 | 54.3M/170M [00:00<00:01, 79.9MB/s]
+ 38%|###7 | 64.0M/170M [00:00<00:01, 83.4MB/s]
+ 43%|####3 | 73.2M/170M [00:00<00:01, 87.0MB/s]
+ 49%|####8 | 82.7M/170M [00:00<00:01, 90.7MB/s]
+ 54%|#####4 | 91.7M/170M [00:01<00:01, 77.2MB/s]
+ 59%|#####8 | 99.6M/170M [00:01<00:00, 75.4MB/s]
+ 63%|######3 | 107M/170M [00:01<00:01, 54.3MB/s]
+ 67%|######7 | 115M/170M [00:01<00:00, 59.3MB/s]
+ 73%|#######3 | 124M/170M [00:01<00:00, 69.2MB/s]
+ 78%|#######7 | 132M/170M [00:01<00:00, 71.5MB/s]
+ 82%|########2 | 139M/170M [00:01<00:00, 68.9MB/s]
+ 87%|########6 | 147M/170M [00:02<00:00, 72.4MB/s]
+ 91%|######### | 155M/170M [00:02<00:00, 69.9MB/s]
+ 95%|#########5| 161M/170M [00:02<00:00, 67.6MB/s]
+ 99%|#########8| 168M/170M [00:02<00:00, 66.3MB/s]
+100%|##########| 170M/170M [00:02<00:00, 73.9MB/s]
/venv/apache-tvm-py3.7/lib/python3.7/site-packages/torch/nn/functional.py:3897: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
for i in range(dim)
/venv/apache-tvm-py3.7/lib/python3.7/site-packages/torchvision/models/detection/anchor_utils.py:124: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode=& [...]
@@ -580,7 +576,7 @@ torchvision rcnn models.</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Get 9 valid boxes
</pre></div>
</div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 35.868 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 47.714 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-object-detection-pytorch-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/7795da4b258c8feff986668b95ef57ad/deploy_object_detection_pytorch.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_object_detection_pytorch.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_prequantized.html b/docs/how_to/deploy_models/deploy_prequantized.html
index 0a0cf1e4f0..ece85fecb9 100644
--- a/docs/how_to/deploy_models/deploy_prequantized.html
+++ b/docs/how_to/deploy_models/deploy_prequantized.html
@@ -500,9 +500,9 @@ training. Other models require a full post training calibration.</p>
Downloading: "https://download.pytorch.org/models/mobilenet_v2-b0353104.pth" to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth
0%| | 0.00/13.6M [00:00<?, ?B/s]
- 47%|####6 | 6.30M/13.6M [00:00<00:00, 31.9MB/s]
- 69%|######8 | 9.34M/13.6M [00:00<00:00, 23.9MB/s]
-100%|##########| 13.6M/13.6M [00:00<00:00, 29.3MB/s]
+ 59%|#####8 | 7.99M/13.6M [00:00<00:00, 48.5MB/s]
+ 93%|#########3| 12.6M/13.6M [00:00<00:00, 48.1MB/s]
+100%|##########| 13.6M/13.6M [00:00<00:00, 50.5MB/s]
</pre></div>
</div>
</div>
@@ -593,7 +593,7 @@ output values are identical out of 1000 outputs from mobilenet v2.</p>
</div>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 90.2462 90.1186 94.2846 89.8878 0.4816
+ 90.4882 90.4073 94.1937 90.0940 0.4522
</pre></div>
</div>
<div class="admonition note">
@@ -632,7 +632,7 @@ This includes support for the VNNI 8 bit dot product instruction (CascadeLake or
<div class="section" id="deploy-a-quantized-tflite-model">
<h2>Deploy a quantized TFLite Model<a class="headerlink" href="#deploy-a-quantized-tflite-model" title="Permalink to this headline">¶</a></h2>
<p>TODO</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 17.732 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 20.294 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-prequantized-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/fb8217c13f4351224c6cf3aacf1a87fc/deploy_prequantized.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_prequantized.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_prequantized_tflite.html b/docs/how_to/deploy_models/deploy_prequantized_tflite.html
index ec2ca9a3e7..00438c582f 100644
--- a/docs/how_to/deploy_models/deploy_prequantized_tflite.html
+++ b/docs/how_to/deploy_models/deploy_prequantized_tflite.html
@@ -585,7 +585,7 @@ TFLite Top-5 labels: [387 102 386 341 349]
</div>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 119.7386 119.8132 122.4225 118.1679 0.5759
+ 119.9493 119.9546 124.7480 118.6410 0.6881
</pre></div>
</div>
<div class="admonition note">
@@ -613,7 +613,7 @@ network for ARM CPU</span></a>.</p></li>
</ul>
</div></blockquote>
</div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 33.741 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 35.280 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-prequantized-tflite-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/56691c7a27d45da61d112276334640d3/deploy_prequantized_tflite.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_prequantized_tflite.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_quantized.html b/docs/how_to/deploy_models/deploy_quantized.html
index d0d9b404e2..36f0b8d488 100644
--- a/docs/how_to/deploy_models/deploy_quantized.html
+++ b/docs/how_to/deploy_models/deploy_quantized.html
@@ -526,7 +526,7 @@ for calibration. But the accuracy might be impacted.</p>
DeprecationWarning,
</pre></div>
</div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 37.166 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 37.038 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-quantized-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/7810ecf51bfc05f7d5e8a400ac3e815d/deploy_quantized.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_quantized.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_ssd_gluoncv.html b/docs/how_to/deploy_models/deploy_ssd_gluoncv.html
index b022c6b5fa..9a57c27910 100644
--- a/docs/how_to/deploy_models/deploy_ssd_gluoncv.html
+++ b/docs/how_to/deploy_models/deploy_ssd_gluoncv.html
@@ -468,24 +468,24 @@ to your device.</p>
Downloading /workspace/.mxnet/models/ssd_512_resnet50_v1_voc-9c8b225a.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/ssd_512_resnet50_v1_voc-9c8b225a.zip...
0%| | 0/132723 [00:00<?, ?KB/s]
- 4%|4 | 5469/132723 [00:00<00:02, 54681.58KB/s]
- 10%|9 | 13073/132723 [00:00<00:01, 67239.03KB/s]
- 15%|#5 | 20394/132723 [00:00<00:01, 69961.51KB/s]
- 21%|##1 | 28175/132723 [00:00<00:01, 73057.17KB/s]
- 27%|##7 | 35895/132723 [00:00<00:01, 74549.23KB/s]
- 33%|###2 | 43753/132723 [00:00<00:01, 75918.15KB/s]
- 39%|###8 | 51357/132723 [00:00<00:01, 75955.82KB/s]
- 45%|####4 | 59122/132723 [00:00<00:00, 76494.00KB/s]
- 50%|##### | 66859/132723 [00:00<00:00, 76765.11KB/s]
- 56%|#####6 | 74685/132723 [00:01<00:00, 77216.04KB/s]
- 62%|######2 | 82474/132723 [00:01<00:00, 77408.07KB/s]
- 68%|######8 | 90334/132723 [00:01<00:00, 77766.10KB/s]
- 74%|#######4 | 98409/132723 [00:01<00:00, 78667.83KB/s]
- 80%|######## | 106585/132723 [00:01<00:00, 79598.77KB/s]
- 86%|########6 | 114672/132723 [00:01<00:00, 79980.55KB/s]
- 93%|#########2| 122773/132723 [00:01<00:00, 80288.91KB/s]
- 99%|#########8| 130960/132723 [00:01<00:00, 80761.89KB/s]
-100%|##########| 132723/132723 [00:01<00:00, 77067.59KB/s]
+ 4%|4 | 5733/132723 [00:00<00:02, 57320.21KB/s]
+ 9%|9 | 12375/132723 [00:00<00:01, 62668.73KB/s]
+ 15%|#5 | 19963/132723 [00:00<00:01, 68698.40KB/s]
+ 21%|## | 27767/132723 [00:00<00:01, 72383.29KB/s]
+ 27%|##6 | 35665/132723 [00:00<00:01, 74759.79KB/s]
+ 33%|###2 | 43512/132723 [00:00<00:01, 76016.28KB/s]
+ 39%|###8 | 51481/132723 [00:00<00:01, 77213.86KB/s]
+ 45%|####4 | 59203/132723 [00:00<00:00, 77114.53KB/s]
+ 50%|##### | 66915/132723 [00:00<00:00, 76839.91KB/s]
+ 56%|#####6 | 74600/132723 [00:01<00:00, 66981.62KB/s]
+ 62%|######2 | 82309/132723 [00:01<00:00, 69780.83KB/s]
+ 68%|######8 | 90276/132723 [00:01<00:00, 72582.17KB/s]
+ 74%|#######3 | 98141/132723 [00:01<00:00, 74326.77KB/s]
+ 80%|#######9 | 106111/132723 [00:01<00:00, 75892.32KB/s]
+ 86%|########5 | 114017/132723 [00:01<00:00, 76818.61KB/s]
+ 92%|#########1| 121979/132723 [00:01<00:00, 77645.29KB/s]
+ 98%|#########7| 129928/132723 [00:01<00:00, 78182.93KB/s]
+100%|##########| 132723/132723 [00:01<00:00, 74309.96KB/s]
</pre></div>
</div>
<p>Create TVM runtime and do inference
@@ -524,7 +524,7 @@ Downloading /workspace/.mxnet/models/ssd_512_resnet50_v1_voc-9c8b225a.zip from h
<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
</pre></div>
</div>
-<img src="../../_images/sphx_glr_deploy_ssd_gluoncv_001.png" srcset="../../_images/sphx_glr_deploy_ssd_gluoncv_001.png" alt="deploy ssd gluoncv" class = "sphx-glr-single-img"/><p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 47.521 seconds)</p>
+<img src="../../_images/sphx_glr_deploy_ssd_gluoncv_001.png" srcset="../../_images/sphx_glr_deploy_ssd_gluoncv_001.png" alt="deploy ssd gluoncv" class = "sphx-glr-single-img"/><p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 56.017 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-ssd-gluoncv-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/cccb17d28e5e8b2e94ea8cd5ec59f6ed/deploy_ssd_gluoncv.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_ssd_gluoncv.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/sg_execution_times.html b/docs/how_to/deploy_models/sg_execution_times.html
index 3c4696f91a..e80cae29ad 100644
--- a/docs/how_to/deploy_models/sg_execution_times.html
+++ b/docs/how_to/deploy_models/sg_execution_times.html
@@ -345,7 +345,7 @@
<div class="section" id="computation-times">
<span id="sphx-glr-how-to-deploy-models-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>15:35.938</strong> total execution time for <strong>how_to_deploy_models</strong> files:</p>
+<p><strong>16:05.026</strong> total execution time for <strong>how_to_deploy_models</strong> files:</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 86%" />
@@ -354,39 +354,39 @@
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="deploy_ssd_gluoncv.html#sphx-glr-how-to-deploy-models-deploy-ssd-gluoncv-py"><span class="std std-ref">Deploy Single Shot Multibox Detector(SSD) model</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_ssd_gluoncv.py</span></code>)</p></td>
-<td><p>03:47.521</p></td>
+<td><p>03:56.017</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="deploy_object_detection_pytorch.html#sphx-glr-how-to-deploy-models-deploy-object-detection-pytorch-py"><span class="std std-ref">Compile PyTorch Object Detection Models</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_object_detection_pytorch.py</span></code>)</p></td>
-<td><p>03:35.868</p></td>
+<td><p>03:47.714</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="deploy_prequantized_tflite.html#sphx-glr-how-to-deploy-models-deploy-prequantized-tflite-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM - Part 3 (TFLite)</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized_tflite.py</span></code>)</p></td>
-<td><p>02:33.741</p></td>
+<td><p>02:35.280</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="deploy_quantized.html#sphx-glr-how-to-deploy-models-deploy-quantized-py"><span class="std std-ref">Deploy a Quantized Model on Cuda</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_quantized.py</span></code>)</p></td>
-<td><p>01:37.166</p></td>
+<td><p>01:37.038</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="deploy_prequantized.html#sphx-glr-how-to-deploy-models-deploy-prequantized-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized.py</span></code>)</p></td>
-<td><p>01:17.732</p></td>
+<td><p>01:20.294</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_adreno.html#sphx-glr-how-to-deploy-models-deploy-model-on-adreno-py"><span class="std std-ref">Deploy the Pretrained Model on Adreno</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_adreno.py</span></code>)</p></td>
-<td><p>01:04.181</p></td>
+<td><p>01:05.099</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="deploy_model_on_android.html#sphx-glr-how-to-deploy-models-deploy-model-on-android-py"><span class="std std-ref">Deploy the Pretrained Model on Android</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_android.py</span></code>)</p></td>
-<td><p>00:42.793</p></td>
+<td><p>00:44.561</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="deploy_model_on_nano.html#sphx-glr-how-to-deploy-models-deploy-model-on-nano-py"><span class="std std-ref">Deploy the Pretrained Model on Jetson Nano</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_nano.py</span></code>)</p></td>
-<td><p>00:28.702</p></td>
+<td><p>00:29.753</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="deploy_model_on_rasp.html#sphx-glr-how-to-deploy-models-deploy-model-on-rasp-py"><span class="std std-ref">Deploy the Pretrained Model on Raspberry Pi</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_rasp.py</span></code>)</p></td>
-<td><p>00:28.228</p></td>
+<td><p>00:29.264</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="deploy_sparse.html#sphx-glr-how-to-deploy-models-deploy-sparse-py"><span class="std std-ref">Deploy a Hugging Face Pruned Model on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_sparse.py</span></code>)</p></td>
diff --git a/docs/how_to/extend_tvm/bring_your_own_datatypes.html b/docs/how_to/extend_tvm/bring_your_own_datatypes.html
index aa17c76c43..e3dc611ef9 100644
--- a/docs/how_to/extend_tvm/bring_your_own_datatypes.html
+++ b/docs/how_to/extend_tvm/bring_your_own_datatypes.html
@@ -624,7 +624,7 @@ In this alpha state of the Bring Your Own Datatypes framework, we have not imple
<span class="n">module</span><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">params</span></a> <span class="o">=</span> <span class="n">get_mobilenet</span><span class="p">()</span>
</pre></div>
</div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip406fc29a-6ba3-4960-9c54-98be35d8950b from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zipee11f25f-0d12-4b48-b125-626fd33d2c3a from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
</pre></div>
</div>
<p>It’s easy to execute MobileNet with native TVM:</p>
diff --git a/docs/how_to/extend_tvm/sg_execution_times.html b/docs/how_to/extend_tvm/sg_execution_times.html
index d873d46de5..8338758d6a 100644
--- a/docs/how_to/extend_tvm/sg_execution_times.html
+++ b/docs/how_to/extend_tvm/sg_execution_times.html
@@ -345,7 +345,7 @@
<div class="section" id="computation-times">
<span id="sphx-glr-how-to-extend-tvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:54.362</strong> total execution time for <strong>how_to_extend_tvm</strong> files:</p>
+<p><strong>00:57.677</strong> total execution time for <strong>how_to_extend_tvm</strong> files:</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 84%" />
@@ -354,15 +354,15 @@
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="bring_your_own_datatypes.html#sphx-glr-how-to-extend-tvm-bring-your-own-datatypes-py"><span class="std std-ref">Bring Your Own Datatypes to TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">bring_your_own_datatypes.py</span></code>)</p></td>
-<td><p>00:50.451</p></td>
+<td><p>00:53.647</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="use_pass_instrument.html#sphx-glr-how-to-extend-tvm-use-pass-instrument-py"><span class="std std-ref">How to Use TVM Pass Instrument</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_instrument.py</span></code>)</p></td>
-<td><p>00:02.810</p></td>
+<td><p>00:02.888</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="use_pass_infra.html#sphx-glr-how-to-extend-tvm-use-pass-infra-py"><span class="std std-ref">How to Use TVM Pass Infra</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_infra.py</span></code>)</p></td>
-<td><p>00:01.093</p></td>
+<td><p>00:01.135</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="low_level_custom_pass.html#sphx-glr-how-to-extend-tvm-low-level-custom-pass-py"><span class="std std-ref">Writing a Customized Pass</span></a> (<code class="docutils literal notranslate"><span class="pre">low_level_custom_pass.py</span></code>)</p></td>
diff --git a/docs/how_to/extend_tvm/use_pass_instrument.html b/docs/how_to/extend_tvm/use_pass_instrument.html
index b841628df2..5ffb5a2086 100644
--- a/docs/how_to/extend_tvm/use_pass_instrument.html
+++ b/docs/how_to/extend_tvm/use_pass_instrument.html
@@ -531,10 +531,10 @@ profile the execution time of each passes.</p>
</pre></div>
</div>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Printing results of timing profile...
-InferType: 22660us [22660us] (48.72%; 48.72%)
-FoldScaleAxis: 23855us [9us] (51.28%; 51.28%)
- FoldConstant: 23846us [1732us] (51.26%; 99.96%)
- InferType: 22114us [22114us] (47.54%; 92.74%)
+InferType: 22847us [22847us] (48.51%; 48.51%)
+FoldScaleAxis: 24249us [10us] (51.49%; 51.49%)
+ FoldConstant: 24239us [1789us] (51.47%; 99.96%)
+ InferType: 22450us [22450us] (47.67%; 92.62%)
</pre></div>
</div>
</div>
@@ -556,10 +556,10 @@ Refer to following sections and <a class="reference internal" href="../../refere
</pre></div>
</div>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Printing results of timing profile...
-InferType: 22553us [22553us] (48.65%; 48.65%)
-FoldScaleAxis: 23806us [8us] (51.35%; 51.35%)
- FoldConstant: 23798us [1760us] (51.33%; 99.97%)
- InferType: 22038us [22038us] (47.54%; 92.60%)
+InferType: 22507us [22507us] (48.10%; 48.10%)
+FoldScaleAxis: 24282us [8us] (51.90%; 51.90%)
+ FoldConstant: 24274us [1806us] (51.88%; 99.97%)
+ InferType: 22467us [22467us] (48.02%; 92.56%)
</pre></div>
</div>
<p>Register empty list to clear existing instruments.</p>
diff --git a/docs/how_to/optimize_operators/opt_conv_cuda.html b/docs/how_to/optimize_operators/opt_conv_cuda.html
index e9c2ec246c..416cadfd9d 100644
--- a/docs/how_to/optimize_operators/opt_conv_cuda.html
+++ b/docs/how_to/optimize_operators/opt_conv_cuda.html
@@ -580,7 +580,7 @@ latency of convolution.</p>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Convolution: </span><span class="si">%f</span><span class="s2"> ms"</span> <span class="o">%</span> <span class="p">(</span><span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">b</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span> <span class="o">*</span> <span cl [...]
</pre></div>
</div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Convolution: 34.219966 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Convolution: 54.310878 ms
</pre></div>
</div>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-optimize-operators-opt-conv-cuda-py">
diff --git a/docs/how_to/optimize_operators/opt_conv_tensorcore.html b/docs/how_to/optimize_operators/opt_conv_tensorcore.html
index ea08af83da..b6d14207c3 100644
--- a/docs/how_to/optimize_operators/opt_conv_tensorcore.html
+++ b/docs/how_to/optimize_operators/opt_conv_tensorcore.html
@@ -862,7 +862,7 @@ be able to run on our build server</p>
<span class="nb">print</span><span class="p">(</span><span class="s2">"conv2d with tensor core: </span><span class="si">%f</span><span class="s2"> ms"</span> <span class="o">%</span> <span class="p">(</span><span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span> <span class="o">* [...]
</pre></div>
</div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>conv2d with tensor core: 13.368025 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>conv2d with tensor core: 7.004806 ms
</pre></div>
</div>
</div>
diff --git a/docs/how_to/optimize_operators/opt_gemm.html b/docs/how_to/optimize_operators/opt_gemm.html
index 1e4b05dbe9..3b103fd5ba 100644
--- a/docs/how_to/optimize_operators/opt_gemm.html
+++ b/docs/how_to/optimize_operators/opt_gemm.html
@@ -477,8 +477,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
<span class="nb">print</span><span class="p">(</span><span class="s2">"Baseline: </span><span class="si">%f</span><span class="s2">"</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
</pre></div>
</div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.018858
-Baseline: 3.458052
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.019331
+Baseline: 3.272484
</pre></div>
</div>
<p>In TVM, we can always inspect lower level IR to debug or optimize our schedule.
@@ -537,7 +537,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
<span class="nb">print</span><span class="p">(</span><span class="s2">"Opt1: </span><span class="si">%f</span><span class="s2">"</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
</pre></div>
</div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt1: 0.306489
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt1: 0.335746
</pre></div>
</div>
<p>Here is the generated IR after blocking.</p>
@@ -594,7 +594,7 @@ vastly.</p>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Opt2: </span><span class="si">%f</span><span class="s2">"</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
</pre></div>
</div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt2: 0.333816
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt2: 0.352065
</pre></div>
</div>
<p>Here is the generated IR after vectorization.</p>
@@ -649,7 +649,7 @@ the access pattern for A matrix is more cache friendly.</p>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Opt3: </span><span class="si">%f</span><span class="s2">"</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
</pre></div>
</div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt3: 0.118924
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt3: 0.120151
</pre></div>
</div>
<p>Here is the generated IR after loop permutation.</p>
@@ -726,7 +726,7 @@ flattening.</p>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Opt4: </span><span class="si">%f</span><span class="s2">"</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
</pre></div>
</div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt4: 0.109810
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt4: 0.109948
</pre></div>
</div>
<p>Here is the generated IR after array packing.</p>
@@ -804,7 +804,7 @@ write to C when all the block results are ready.</p>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Opt5: </span><span class="si">%f</span><span class="s2">"</span> <span class="o">%</span> <span class="n">evaluator</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span><span class="o">.</span><span class="n">mean</span><span class="p">)</span>
</pre></div>
</div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt5: 0.111485
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt5: 0.111054
</pre></div>
</div>
<p>Here is the generated IR after blocking.</p>
@@ -884,7 +884,7 @@ class Module:
<span class="nb">print</span><span class="p">(</span><span class="s2">"Opt6: </span><span class="si">%f</span><span class="s2">"</span> <span class="o">%</span> <span class="n">opt6_time</span><span class="p">)</span>
</pre></div>
</div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt6: 0.147307
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt6: 0.146604
</pre></div>
</div>
<p>Here is the generated IR after parallelization.</p>
diff --git a/docs/how_to/optimize_operators/sg_execution_times.html b/docs/how_to/optimize_operators/sg_execution_times.html
index 2db0123f0d..4be2cd6251 100644
--- a/docs/how_to/optimize_operators/sg_execution_times.html
+++ b/docs/how_to/optimize_operators/sg_execution_times.html
@@ -345,7 +345,7 @@
<div class="section" id="computation-times">
<span id="sphx-glr-how-to-optimize-operators-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:35.258</strong> total execution time for <strong>how_to_optimize_operators</strong> files:</p>
+<p><strong>00:35.163</strong> total execution time for <strong>how_to_optimize_operators</strong> files:</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 83%" />
@@ -354,15 +354,15 @@
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="opt_gemm.html#sphx-glr-how-to-optimize-operators-opt-gemm-py"><span class="std std-ref">How to optimize GEMM on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_gemm.py</span></code>)</p></td>
-<td><p>00:32.625</p></td>
+<td><p>00:32.590</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="opt_conv_tensorcore.html#sphx-glr-how-to-optimize-operators-opt-conv-tensorcore-py"><span class="std std-ref">How to optimize convolution using TensorCores</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_tensorcore.py</span></code>)</p></td>
-<td><p>00:01.568</p></td>
+<td><p>00:01.495</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="opt_conv_cuda.html#sphx-glr-how-to-optimize-operators-opt-conv-cuda-py"><span class="std std-ref">How to optimize convolution on GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_cuda.py</span></code>)</p></td>
-<td><p>00:01.064</p></td>
+<td><p>00:01.078</p></td>
<td><p>0.0 MB</p></td>
</tr>
</tbody>
diff --git a/docs/how_to/tune_with_autoscheduler/sg_execution_times.html b/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
index 1391d94789..8006c3ea3f 100644
--- a/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
+++ b/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
@@ -345,7 +345,7 @@
<div class="section" id="computation-times">
<span id="sphx-glr-how-to-tune-with-autoscheduler-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>09:52.835</strong> total execution time for <strong>how_to_tune_with_autoscheduler</strong> files:</p>
+<p><strong>10:13.691</strong> total execution time for <strong>how_to_tune_with_autoscheduler</strong> files:</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 85%" />
@@ -354,27 +354,27 @@
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="tune_conv2d_layer_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-conv2d-layer-cuda-py"><span class="std std-ref">Auto-scheduling a Convolution Layer for GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_layer_cuda.py</span></code>)</p></td>
-<td><p>06:03.020</p></td>
+<td><p>06:17.931</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="tune_network_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-x86-py"><span class="std std-ref">Auto-scheduling a Neural Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_x86.py</span></code>)</p></td>
-<td><p>01:42.858</p></td>
+<td><p>01:45.180</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="tune_network_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-cuda-py"><span class="std std-ref">Auto-scheduling a Neural Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_cuda.py</span></code>)</p></td>
-<td><p>01:07.318</p></td>
+<td><p>01:09.205</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="tune_sparse_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-sparse-x86-py"><span class="std std-ref">Auto-scheduling Sparse Matrix Multiplication on CPU with Custom Sketch Rule</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_sparse_x86.py</span></code>)</p></td>
-<td><p>00:31.889</p></td>
+<td><p>00:32.432</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="tune_network_arm.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-arm-py"><span class="std std-ref">Auto-scheduling a Neural Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_arm.py</span></code>)</p></td>
-<td><p>00:14.163</p></td>
+<td><p>00:14.737</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="tune_network_mali.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-mali-py"><span class="std std-ref">Auto-scheduling a Neural Network for mali GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_mali.py</span></code>)</p></td>
-<td><p>00:13.587</p></td>
+<td><p>00:14.205</p></td>
<td><p>0.0 MB</p></td>
</tr>
</tbody>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html b/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
index e26b2a76a1..c5c9b84edb 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
@@ -510,12 +510,12 @@ class Module:
@T.prim_func
def main(data: T.Buffer((1, 512, 7, 7), "float32"), kernel: T.Buffer((512, 512, 3, 3), "float32"), bias: T.Buffer((1, 512, 1, 1), "float32"), compute: T.Buffer((1, 512, 7, 7), "float32")):
T.func_attr({"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True})
- blockIdx_x = T.launch_thread("blockIdx.x", 32)
- conv2d_nchw = T.allocate([7], "float32", "local")
- pad_temp_shared = T.allocate([3136], "float32", "shared")
- kernel_shared = T.allocate([1024], "float32", "shared")
- threadIdx_x = T.launch_thread("threadIdx.x", 112)
- conv2d_nchw_1 = T.Buffer((1,), data=conv2d_nchw, scope="local", align=4)
+ blockIdx_x = T.launch_thread("blockIdx.x", 28)
+ conv2d_nchw = T.allocate([14], "float32", "local")
+ pad_temp_shared = T.allocate([72], "float32", "shared")
+ kernel_shared = T.allocate([3072], "float32", "shared")
+ threadIdx_x = T.launch_thread("threadIdx.x", 64)
+ conv2d_nchw_1 = T.Buffer((14,), data=conv2d_nchw, scope="local", align=32)
conv2d_nchw_1[0] = T.float32(0)
conv2d_nchw_1[1] = T.float32(0)
conv2d_nchw_1[2] = T.float32(0)
@@ -523,36 +523,466 @@ class Module:
conv2d_nchw_1[4] = T.float32(0)
conv2d_nchw_1[5] = T.float32(0)
conv2d_nchw_1[6] = T.float32(0)
- for rc_outer_outer, ry_outer_outer, rx_outer_outer in T.grid(8, 3, 3):
- pad_temp_shared_1 = T.Buffer((3136,), data=pad_temp_shared, scope="shared")
- for ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer in range(28):
- cse_var_1: T.int32 = ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 112
- threadIdx_x_1 = T.launch_thread("threadIdx.x", 112)
+ conv2d_nchw_1[7] = T.float32(0)
+ conv2d_nchw_1[8] = T.float32(0)
+ conv2d_nchw_1[9] = T.float32(0)
+ conv2d_nchw_1[10] = T.float32(0)
+ conv2d_nchw_1[11] = T.float32(0)
+ conv2d_nchw_1[12] = T.float32(0)
+ conv2d_nchw_1[13] = T.float32(0)
+ for rc_outer_outer, ry_outer_outer in T.grid(64, 3):
+ cse_var_2: T.int32 = rc_outer_outer * 72
+ cse_var_1: T.int32 = ry_outer_outer * 3
+ pad_temp_shared_1 = T.Buffer((72,), data=pad_temp_shared, scope="shared")
+ with T.launch_thread("threadIdx.x", 64) as threadIdx_x_1:
data_1 = T.Buffer((25088,), data=data.data)
- pad_temp_shared_1[cse_var_1 + threadIdx_x_1] = T.if_then_else(1 <= ry_outer_outer + (ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 2 + threadIdx_x_1 // 7) % 7 and ry_outer_outer + (ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 2 + threadIdx_x_1 // 7) % 7 < 8 and 1 <= rx_outer_outer + threadIdx_x_1 % 7 and rx_outer_outer + threadIdx_x_1 % 7 < 8, data_1[rc_outer_outer * 3136 + cse_var_1 + ry_outer_outer * 7 + threadIdx_x_1 + rx_outer_outer - 8], T.float32(0))
- kernel_shared_1 = T.Buffer((1024,), data=kernel_shared, scope="shared")
- for ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer in range(10):
- threadIdx_x_1 = T.launch_thread("threadIdx.x", 112)
- if T.likely(ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 7 + threadIdx_x_1 // 16 < 64):
- kernel_1 = T.Buffer((2359296,), data=kernel.data)
- kernel_shared_1[ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 112 + threadIdx_x_1] = kernel_1[blockIdx_x * 73728 + (ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 7 + threadIdx_x_1 // 16) // 4 * 4608 + rc_outer_outer * 576 + (ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 48 + threadIdx_x_1) % 64 * 9 + ry_outer_outer * 3 + rx_outer_outer]
- for rc_outer_inner, rc_inner in T.grid(2, 32):
- conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[rc_outer_inner * 1568 + rc_inner * 49 + threadIdx_x % 7 * 7] * kernel_shared_1[threadIdx_x // 7 * 64 + rc_outer_inner * 32 + rc_inner]
- conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[rc_outer_inner * 1568 + rc_inner * 49 + threadIdx_x % 7 * 7 + 1] * kernel_shared_1[threadIdx_x // 7 * 64 + rc_outer_inner * 32 + rc_inner]
- conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[rc_outer_inner * 1568 + rc_inner * 49 + threadIdx_x % 7 * 7 + 2] * kernel_shared_1[threadIdx_x // 7 * 64 + rc_outer_inner * 32 + rc_inner]
- conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[rc_outer_inner * 1568 + rc_inner * 49 + threadIdx_x % 7 * 7 + 3] * kernel_shared_1[threadIdx_x // 7 * 64 + rc_outer_inner * 32 + rc_inner]
- conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[rc_outer_inner * 1568 + rc_inner * 49 + threadIdx_x % 7 * 7 + 4] * kernel_shared_1[threadIdx_x // 7 * 64 + rc_outer_inner * 32 + rc_inner]
- conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[rc_outer_inner * 1568 + rc_inner * 49 + threadIdx_x % 7 * 7 + 5] * kernel_shared_1[threadIdx_x // 7 * 64 + rc_outer_inner * 32 + rc_inner]
- conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[rc_outer_inner * 1568 + rc_inner * 49 + threadIdx_x % 7 * 7 + 6] * kernel_shared_1[threadIdx_x // 7 * 64 + rc_outer_inner * 32 + rc_inner]
- compute_1 = T.Buffer((25088,), data=compute.data)
- bias_1 = T.Buffer((512,), data=bias.data)
- compute_1[blockIdx_x * 784 + threadIdx_x * 7] = T.max(conv2d_nchw_1[0] + bias_1[blockIdx_x * 16 + threadIdx_x // 7], T.float32(0))
- compute_1[blockIdx_x * 784 + threadIdx_x * 7 + 1] = T.max(conv2d_nchw_1[1] + bias_1[blockIdx_x * 16 + threadIdx_x // 7], T.float32(0))
- compute_1[blockIdx_x * 784 + threadIdx_x * 7 + 2] = T.max(conv2d_nchw_1[2] + bias_1[blockIdx_x * 16 + threadIdx_x // 7], T.float32(0))
- compute_1[blockIdx_x * 784 + threadIdx_x * 7 + 3] = T.max(conv2d_nchw_1[3] + bias_1[blockIdx_x * 16 + threadIdx_x // 7], T.float32(0))
- compute_1[blockIdx_x * 784 + threadIdx_x * 7 + 4] = T.max(conv2d_nchw_1[4] + bias_1[blockIdx_x * 16 + threadIdx_x // 7], T.float32(0))
- compute_1[blockIdx_x * 784 + threadIdx_x * 7 + 5] = T.max(conv2d_nchw_1[5] + bias_1[blockIdx_x * 16 + threadIdx_x // 7], T.float32(0))
- compute_1[blockIdx_x * 784 + threadIdx_x * 7 + 6] = T.max(conv2d_nchw_1[6] + bias_1[blockIdx_x * 16 + threadIdx_x // 7], T.float32(0))
+ if T.likely(threadIdx_x_1 < 18):
+ pad_temp_shared_1[threadIdx_x_1 * 4] = T.if_then_else(1 <= ry_outer_outer + blockIdx_x % 7 and ry_outer_outer + blockIdx_x % 7 < 8 and 1 <= threadIdx_x_1 * 4 % 9 and threadIdx_x_1 * 4 % 9 < 8, data_1[rc_outer_outer * 392 + threadIdx_x_1 * 4 // 9 * 49 + ry_outer_outer * 7 + blockIdx_x % 7 * 7 + threadIdx_x_1 * 4 % 9 - 8], T.float32(0))
+ if T.likely(threadIdx_x_1 < 18):
+ pad_temp_shared_1[threadIdx_x_1 * 4 + 1] = T.if_then_else(1 <= ry_outer_outer + blockIdx_x % 7 and ry_outer_outer + blockIdx_x % 7 < 8 and 1 <= (threadIdx_x_1 * 4 + 1) % 9 and (threadIdx_x_1 * 4 + 1) % 9 < 8, data_1[rc_outer_outer * 392 + (threadIdx_x_1 * 4 + 1) // 9 * 49 + ry_outer_outer * 7 + blockIdx_x % 7 * 7 + (threadIdx_x_1 * 4 + 1) % 9 - 8], T.float32(0))
+ if T.likely(threadIdx_x_1 < 18):
+ pad_temp_shared_1[threadIdx_x_1 * 4 + 2] = T.if_then_else(1 <= ry_outer_outer + blockIdx_x % 7 and ry_outer_outer + blockIdx_x % 7 < 8 and 1 <= (threadIdx_x_1 * 4 + 2) % 9 and (threadIdx_x_1 * 4 + 2) % 9 < 8, data_1[rc_outer_outer * 392 + (threadIdx_x_1 * 4 + 2) // 9 * 49 + ry_outer_outer * 7 + blockIdx_x % 7 * 7 + (threadIdx_x_1 * 4 + 2) % 9 - 8], T.float32(0))
+ if T.likely(threadIdx_x_1 < 18):
+ pad_temp_shared_1[threadIdx_x_1 * 4 + 3] = T.if_then_else(1 <= ry_outer_outer + blockIdx_x % 7 and ry_outer_outer + blockIdx_x % 7 < 8 and 1 <= (threadIdx_x_1 * 4 + 3) % 9 and (threadIdx_x_1 * 4 + 3) % 9 < 8, data_1[rc_outer_outer * 392 + (threadIdx_x_1 * 4 + 3) // 9 * 49 + ry_outer_outer * 7 + blockIdx_x % 7 * 7 + (threadIdx_x_1 * 4 + 3) % 9 - 8], T.float32(0))
+ threadIdx_x_1 = T.env_thread("threadIdx.x")
+ kernel_shared_1 = T.Buffer((3072,), data=kernel_shared, scope="shared")
+ kernel_1 = T.Buffer((2359296,), data=kernel.data)
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 64) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 64) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 128) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 128) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 192] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 36864]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 256) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 256) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 320) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 320) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 384] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 73728]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 448) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 448) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 512) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 512) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 576] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 110592]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 640) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 640) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 704) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 704) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 768] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 147456]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 832) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 832) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 896) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 896) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 960] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 184320]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1024) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1024) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1088) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1088) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 1152] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 221184]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1216) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1216) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1280) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1280) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 1344] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 258048]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1408) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1408) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1472) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1472) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 1536] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 294912]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1600) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1600) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1664) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1664) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 1728] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 331776]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1792) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1792) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1856) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1856) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 1920] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 368640]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 1984) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 1984) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2048) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2048) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 2112] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 405504]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2176) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2176) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2240) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2240) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 2304] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 442368]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2368) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2368) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2432) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2432) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 2496] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 479232]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2560) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2560) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2624) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2624) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 2688] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 516096]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2752) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2752) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2816) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2816) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[threadIdx_x_1 + 2880] = kernel_1[blockIdx_x // 7 * 589824 + threadIdx_x_1 // 24 * 4608 + cse_var_2 + threadIdx_x_1 % 24 // 3 * 9 + cse_var_1 + threadIdx_x_1 % 3 + 552960]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 2944) // 24 * 24 + (threadIdx_x_1 + 16) % 24 // 3 * 3 + (threadIdx_x_1 + 1) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 2944) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 16) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 1) % 3]
+ with T.launch_thread(threadIdx_x_1, 64):
+ kernel_shared_1[(threadIdx_x_1 + 3008) // 24 * 24 + (threadIdx_x_1 + 8) % 24 // 3 * 3 + (threadIdx_x_1 + 2) % 3] = kernel_1[blockIdx_x // 7 * 589824 + (threadIdx_x_1 + 3008) // 24 * 4608 + cse_var_2 + (threadIdx_x_1 + 8) % 24 // 3 * 9 + cse_var_1 + (threadIdx_x_1 + 2) % 3]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[0] * kernel_shared_1[threadIdx_x * 48]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[9] * kernel_shared_1[threadIdx_x * 48 + 3]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[1] * kernel_shared_1[threadIdx_x * 48]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[10] * kernel_shared_1[threadIdx_x * 48 + 3]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[2] * kernel_shared_1[threadIdx_x * 48]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[11] * kernel_shared_1[threadIdx_x * 48 + 3]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[3] * kernel_shared_1[threadIdx_x * 48]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[12] * kernel_shared_1[threadIdx_x * 48 + 3]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[4] * kernel_shared_1[threadIdx_x * 48]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[13] * kernel_shared_1[threadIdx_x * 48 + 3]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[5] * kernel_shared_1[threadIdx_x * 48]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[14] * kernel_shared_1[threadIdx_x * 48 + 3]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[6] * kernel_shared_1[threadIdx_x * 48]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[15] * kernel_shared_1[threadIdx_x * 48 + 3]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[0] * kernel_shared_1[threadIdx_x * 48 + 24]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[9] * kernel_shared_1[threadIdx_x * 48 + 27]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[1] * kernel_shared_1[threadIdx_x * 48 + 24]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[10] * kernel_shared_1[threadIdx_x * 48 + 27]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[2] * kernel_shared_1[threadIdx_x * 48 + 24]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[11] * kernel_shared_1[threadIdx_x * 48 + 27]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[3] * kernel_shared_1[threadIdx_x * 48 + 24]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[12] * kernel_shared_1[threadIdx_x * 48 + 27]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[4] * kernel_shared_1[threadIdx_x * 48 + 24]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[13] * kernel_shared_1[threadIdx_x * 48 + 27]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[5] * kernel_shared_1[threadIdx_x * 48 + 24]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[14] * kernel_shared_1[threadIdx_x * 48 + 27]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[6] * kernel_shared_1[threadIdx_x * 48 + 24]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[15] * kernel_shared_1[threadIdx_x * 48 + 27]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[1] * kernel_shared_1[threadIdx_x * 48 + 1]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[10] * kernel_shared_1[threadIdx_x * 48 + 4]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[2] * kernel_shared_1[threadIdx_x * 48 + 1]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[11] * kernel_shared_1[threadIdx_x * 48 + 4]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[3] * kernel_shared_1[threadIdx_x * 48 + 1]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[12] * kernel_shared_1[threadIdx_x * 48 + 4]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[4] * kernel_shared_1[threadIdx_x * 48 + 1]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[13] * kernel_shared_1[threadIdx_x * 48 + 4]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[5] * kernel_shared_1[threadIdx_x * 48 + 1]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[14] * kernel_shared_1[threadIdx_x * 48 + 4]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[6] * kernel_shared_1[threadIdx_x * 48 + 1]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[15] * kernel_shared_1[threadIdx_x * 48 + 4]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[7] * kernel_shared_1[threadIdx_x * 48 + 1]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[16] * kernel_shared_1[threadIdx_x * 48 + 4]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[1] * kernel_shared_1[threadIdx_x * 48 + 25]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[10] * kernel_shared_1[threadIdx_x * 48 + 28]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[2] * kernel_shared_1[threadIdx_x * 48 + 25]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[11] * kernel_shared_1[threadIdx_x * 48 + 28]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[3] * kernel_shared_1[threadIdx_x * 48 + 25]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[12] * kernel_shared_1[threadIdx_x * 48 + 28]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[4] * kernel_shared_1[threadIdx_x * 48 + 25]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[13] * kernel_shared_1[threadIdx_x * 48 + 28]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[5] * kernel_shared_1[threadIdx_x * 48 + 25]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[14] * kernel_shared_1[threadIdx_x * 48 + 28]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[6] * kernel_shared_1[threadIdx_x * 48 + 25]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[15] * kernel_shared_1[threadIdx_x * 48 + 28]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[7] * kernel_shared_1[threadIdx_x * 48 + 25]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[16] * kernel_shared_1[threadIdx_x * 48 + 28]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[2] * kernel_shared_1[threadIdx_x * 48 + 2]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[11] * kernel_shared_1[threadIdx_x * 48 + 5]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[3] * kernel_shared_1[threadIdx_x * 48 + 2]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[12] * kernel_shared_1[threadIdx_x * 48 + 5]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[4] * kernel_shared_1[threadIdx_x * 48 + 2]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[13] * kernel_shared_1[threadIdx_x * 48 + 5]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[5] * kernel_shared_1[threadIdx_x * 48 + 2]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[14] * kernel_shared_1[threadIdx_x * 48 + 5]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[6] * kernel_shared_1[threadIdx_x * 48 + 2]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[15] * kernel_shared_1[threadIdx_x * 48 + 5]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[7] * kernel_shared_1[threadIdx_x * 48 + 2]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[16] * kernel_shared_1[threadIdx_x * 48 + 5]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[8] * kernel_shared_1[threadIdx_x * 48 + 2]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[17] * kernel_shared_1[threadIdx_x * 48 + 5]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[2] * kernel_shared_1[threadIdx_x * 48 + 26]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[11] * kernel_shared_1[threadIdx_x * 48 + 29]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[3] * kernel_shared_1[threadIdx_x * 48 + 26]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[12] * kernel_shared_1[threadIdx_x * 48 + 29]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[4] * kernel_shared_1[threadIdx_x * 48 + 26]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[13] * kernel_shared_1[threadIdx_x * 48 + 29]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[5] * kernel_shared_1[threadIdx_x * 48 + 26]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[14] * kernel_shared_1[threadIdx_x * 48 + 29]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[6] * kernel_shared_1[threadIdx_x * 48 + 26]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[15] * kernel_shared_1[threadIdx_x * 48 + 29]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[7] * kernel_shared_1[threadIdx_x * 48 + 26]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[16] * kernel_shared_1[threadIdx_x * 48 + 29]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[8] * kernel_shared_1[threadIdx_x * 48 + 26]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[17] * kernel_shared_1[threadIdx_x * 48 + 29]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[18] * kernel_shared_1[threadIdx_x * 48 + 6]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[27] * kernel_shared_1[threadIdx_x * 48 + 9]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[19] * kernel_shared_1[threadIdx_x * 48 + 6]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[28] * kernel_shared_1[threadIdx_x * 48 + 9]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[20] * kernel_shared_1[threadIdx_x * 48 + 6]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[29] * kernel_shared_1[threadIdx_x * 48 + 9]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[21] * kernel_shared_1[threadIdx_x * 48 + 6]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[30] * kernel_shared_1[threadIdx_x * 48 + 9]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[22] * kernel_shared_1[threadIdx_x * 48 + 6]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[31] * kernel_shared_1[threadIdx_x * 48 + 9]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[23] * kernel_shared_1[threadIdx_x * 48 + 6]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[32] * kernel_shared_1[threadIdx_x * 48 + 9]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[24] * kernel_shared_1[threadIdx_x * 48 + 6]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[33] * kernel_shared_1[threadIdx_x * 48 + 9]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[18] * kernel_shared_1[threadIdx_x * 48 + 30]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[27] * kernel_shared_1[threadIdx_x * 48 + 33]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[19] * kernel_shared_1[threadIdx_x * 48 + 30]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[28] * kernel_shared_1[threadIdx_x * 48 + 33]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[20] * kernel_shared_1[threadIdx_x * 48 + 30]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[29] * kernel_shared_1[threadIdx_x * 48 + 33]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[21] * kernel_shared_1[threadIdx_x * 48 + 30]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[30] * kernel_shared_1[threadIdx_x * 48 + 33]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[22] * kernel_shared_1[threadIdx_x * 48 + 30]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[31] * kernel_shared_1[threadIdx_x * 48 + 33]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[23] * kernel_shared_1[threadIdx_x * 48 + 30]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[32] * kernel_shared_1[threadIdx_x * 48 + 33]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[24] * kernel_shared_1[threadIdx_x * 48 + 30]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[33] * kernel_shared_1[threadIdx_x * 48 + 33]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[19] * kernel_shared_1[threadIdx_x * 48 + 7]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[28] * kernel_shared_1[threadIdx_x * 48 + 10]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[20] * kernel_shared_1[threadIdx_x * 48 + 7]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[29] * kernel_shared_1[threadIdx_x * 48 + 10]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[21] * kernel_shared_1[threadIdx_x * 48 + 7]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[30] * kernel_shared_1[threadIdx_x * 48 + 10]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[22] * kernel_shared_1[threadIdx_x * 48 + 7]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[31] * kernel_shared_1[threadIdx_x * 48 + 10]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[23] * kernel_shared_1[threadIdx_x * 48 + 7]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[32] * kernel_shared_1[threadIdx_x * 48 + 10]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[24] * kernel_shared_1[threadIdx_x * 48 + 7]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[33] * kernel_shared_1[threadIdx_x * 48 + 10]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[25] * kernel_shared_1[threadIdx_x * 48 + 7]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[34] * kernel_shared_1[threadIdx_x * 48 + 10]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[19] * kernel_shared_1[threadIdx_x * 48 + 31]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[28] * kernel_shared_1[threadIdx_x * 48 + 34]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[20] * kernel_shared_1[threadIdx_x * 48 + 31]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[29] * kernel_shared_1[threadIdx_x * 48 + 34]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[21] * kernel_shared_1[threadIdx_x * 48 + 31]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[30] * kernel_shared_1[threadIdx_x * 48 + 34]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[22] * kernel_shared_1[threadIdx_x * 48 + 31]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[31] * kernel_shared_1[threadIdx_x * 48 + 34]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[23] * kernel_shared_1[threadIdx_x * 48 + 31]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[32] * kernel_shared_1[threadIdx_x * 48 + 34]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[24] * kernel_shared_1[threadIdx_x * 48 + 31]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[33] * kernel_shared_1[threadIdx_x * 48 + 34]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[25] * kernel_shared_1[threadIdx_x * 48 + 31]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[34] * kernel_shared_1[threadIdx_x * 48 + 34]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[20] * kernel_shared_1[threadIdx_x * 48 + 8]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[29] * kernel_shared_1[threadIdx_x * 48 + 11]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[21] * kernel_shared_1[threadIdx_x * 48 + 8]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[30] * kernel_shared_1[threadIdx_x * 48 + 11]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[22] * kernel_shared_1[threadIdx_x * 48 + 8]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[31] * kernel_shared_1[threadIdx_x * 48 + 11]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[23] * kernel_shared_1[threadIdx_x * 48 + 8]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[32] * kernel_shared_1[threadIdx_x * 48 + 11]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[24] * kernel_shared_1[threadIdx_x * 48 + 8]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[33] * kernel_shared_1[threadIdx_x * 48 + 11]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[25] * kernel_shared_1[threadIdx_x * 48 + 8]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[34] * kernel_shared_1[threadIdx_x * 48 + 11]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[26] * kernel_shared_1[threadIdx_x * 48 + 8]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[35] * kernel_shared_1[threadIdx_x * 48 + 11]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[20] * kernel_shared_1[threadIdx_x * 48 + 32]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[29] * kernel_shared_1[threadIdx_x * 48 + 35]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[21] * kernel_shared_1[threadIdx_x * 48 + 32]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[30] * kernel_shared_1[threadIdx_x * 48 + 35]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[22] * kernel_shared_1[threadIdx_x * 48 + 32]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[31] * kernel_shared_1[threadIdx_x * 48 + 35]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[23] * kernel_shared_1[threadIdx_x * 48 + 32]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[32] * kernel_shared_1[threadIdx_x * 48 + 35]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[24] * kernel_shared_1[threadIdx_x * 48 + 32]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[33] * kernel_shared_1[threadIdx_x * 48 + 35]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[25] * kernel_shared_1[threadIdx_x * 48 + 32]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[34] * kernel_shared_1[threadIdx_x * 48 + 35]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[26] * kernel_shared_1[threadIdx_x * 48 + 32]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[35] * kernel_shared_1[threadIdx_x * 48 + 35]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[36] * kernel_shared_1[threadIdx_x * 48 + 12]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[45] * kernel_shared_1[threadIdx_x * 48 + 15]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[37] * kernel_shared_1[threadIdx_x * 48 + 12]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[46] * kernel_shared_1[threadIdx_x * 48 + 15]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[38] * kernel_shared_1[threadIdx_x * 48 + 12]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[47] * kernel_shared_1[threadIdx_x * 48 + 15]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[39] * kernel_shared_1[threadIdx_x * 48 + 12]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[48] * kernel_shared_1[threadIdx_x * 48 + 15]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[40] * kernel_shared_1[threadIdx_x * 48 + 12]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[49] * kernel_shared_1[threadIdx_x * 48 + 15]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[41] * kernel_shared_1[threadIdx_x * 48 + 12]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[50] * kernel_shared_1[threadIdx_x * 48 + 15]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[42] * kernel_shared_1[threadIdx_x * 48 + 12]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[51] * kernel_shared_1[threadIdx_x * 48 + 15]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[36] * kernel_shared_1[threadIdx_x * 48 + 36]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[45] * kernel_shared_1[threadIdx_x * 48 + 39]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[37] * kernel_shared_1[threadIdx_x * 48 + 36]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[46] * kernel_shared_1[threadIdx_x * 48 + 39]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[38] * kernel_shared_1[threadIdx_x * 48 + 36]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[47] * kernel_shared_1[threadIdx_x * 48 + 39]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[39] * kernel_shared_1[threadIdx_x * 48 + 36]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[48] * kernel_shared_1[threadIdx_x * 48 + 39]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[40] * kernel_shared_1[threadIdx_x * 48 + 36]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[49] * kernel_shared_1[threadIdx_x * 48 + 39]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[41] * kernel_shared_1[threadIdx_x * 48 + 36]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[50] * kernel_shared_1[threadIdx_x * 48 + 39]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[42] * kernel_shared_1[threadIdx_x * 48 + 36]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[51] * kernel_shared_1[threadIdx_x * 48 + 39]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[37] * kernel_shared_1[threadIdx_x * 48 + 13]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[46] * kernel_shared_1[threadIdx_x * 48 + 16]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[38] * kernel_shared_1[threadIdx_x * 48 + 13]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[47] * kernel_shared_1[threadIdx_x * 48 + 16]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[39] * kernel_shared_1[threadIdx_x * 48 + 13]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[48] * kernel_shared_1[threadIdx_x * 48 + 16]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[40] * kernel_shared_1[threadIdx_x * 48 + 13]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[49] * kernel_shared_1[threadIdx_x * 48 + 16]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[41] * kernel_shared_1[threadIdx_x * 48 + 13]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[50] * kernel_shared_1[threadIdx_x * 48 + 16]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[42] * kernel_shared_1[threadIdx_x * 48 + 13]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[51] * kernel_shared_1[threadIdx_x * 48 + 16]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[43] * kernel_shared_1[threadIdx_x * 48 + 13]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[52] * kernel_shared_1[threadIdx_x * 48 + 16]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[37] * kernel_shared_1[threadIdx_x * 48 + 37]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[46] * kernel_shared_1[threadIdx_x * 48 + 40]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[38] * kernel_shared_1[threadIdx_x * 48 + 37]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[47] * kernel_shared_1[threadIdx_x * 48 + 40]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[39] * kernel_shared_1[threadIdx_x * 48 + 37]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[48] * kernel_shared_1[threadIdx_x * 48 + 40]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[40] * kernel_shared_1[threadIdx_x * 48 + 37]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[49] * kernel_shared_1[threadIdx_x * 48 + 40]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[41] * kernel_shared_1[threadIdx_x * 48 + 37]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[50] * kernel_shared_1[threadIdx_x * 48 + 40]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[42] * kernel_shared_1[threadIdx_x * 48 + 37]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[51] * kernel_shared_1[threadIdx_x * 48 + 40]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[43] * kernel_shared_1[threadIdx_x * 48 + 37]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[52] * kernel_shared_1[threadIdx_x * 48 + 40]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[38] * kernel_shared_1[threadIdx_x * 48 + 14]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[47] * kernel_shared_1[threadIdx_x * 48 + 17]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[39] * kernel_shared_1[threadIdx_x * 48 + 14]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[48] * kernel_shared_1[threadIdx_x * 48 + 17]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[40] * kernel_shared_1[threadIdx_x * 48 + 14]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[49] * kernel_shared_1[threadIdx_x * 48 + 17]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[41] * kernel_shared_1[threadIdx_x * 48 + 14]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[50] * kernel_shared_1[threadIdx_x * 48 + 17]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[42] * kernel_shared_1[threadIdx_x * 48 + 14]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[51] * kernel_shared_1[threadIdx_x * 48 + 17]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[43] * kernel_shared_1[threadIdx_x * 48 + 14]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[52] * kernel_shared_1[threadIdx_x * 48 + 17]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[44] * kernel_shared_1[threadIdx_x * 48 + 14]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[53] * kernel_shared_1[threadIdx_x * 48 + 17]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[38] * kernel_shared_1[threadIdx_x * 48 + 38]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[47] * kernel_shared_1[threadIdx_x * 48 + 41]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[39] * kernel_shared_1[threadIdx_x * 48 + 38]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[48] * kernel_shared_1[threadIdx_x * 48 + 41]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[40] * kernel_shared_1[threadIdx_x * 48 + 38]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[49] * kernel_shared_1[threadIdx_x * 48 + 41]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[41] * kernel_shared_1[threadIdx_x * 48 + 38]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[50] * kernel_shared_1[threadIdx_x * 48 + 41]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[42] * kernel_shared_1[threadIdx_x * 48 + 38]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[51] * kernel_shared_1[threadIdx_x * 48 + 41]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[43] * kernel_shared_1[threadIdx_x * 48 + 38]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[52] * kernel_shared_1[threadIdx_x * 48 + 41]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[44] * kernel_shared_1[threadIdx_x * 48 + 38]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[53] * kernel_shared_1[threadIdx_x * 48 + 41]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[54] * kernel_shared_1[threadIdx_x * 48 + 18]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[63] * kernel_shared_1[threadIdx_x * 48 + 21]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[55] * kernel_shared_1[threadIdx_x * 48 + 18]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[64] * kernel_shared_1[threadIdx_x * 48 + 21]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[56] * kernel_shared_1[threadIdx_x * 48 + 18]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[65] * kernel_shared_1[threadIdx_x * 48 + 21]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[57] * kernel_shared_1[threadIdx_x * 48 + 18]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[66] * kernel_shared_1[threadIdx_x * 48 + 21]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[58] * kernel_shared_1[threadIdx_x * 48 + 18]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[67] * kernel_shared_1[threadIdx_x * 48 + 21]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[59] * kernel_shared_1[threadIdx_x * 48 + 18]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[68] * kernel_shared_1[threadIdx_x * 48 + 21]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[60] * kernel_shared_1[threadIdx_x * 48 + 18]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[69] * kernel_shared_1[threadIdx_x * 48 + 21]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[54] * kernel_shared_1[threadIdx_x * 48 + 42]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[63] * kernel_shared_1[threadIdx_x * 48 + 45]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[55] * kernel_shared_1[threadIdx_x * 48 + 42]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[64] * kernel_shared_1[threadIdx_x * 48 + 45]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[56] * kernel_shared_1[threadIdx_x * 48 + 42]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[65] * kernel_shared_1[threadIdx_x * 48 + 45]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[57] * kernel_shared_1[threadIdx_x * 48 + 42]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[66] * kernel_shared_1[threadIdx_x * 48 + 45]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[58] * kernel_shared_1[threadIdx_x * 48 + 42]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[67] * kernel_shared_1[threadIdx_x * 48 + 45]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[59] * kernel_shared_1[threadIdx_x * 48 + 42]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[68] * kernel_shared_1[threadIdx_x * 48 + 45]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[60] * kernel_shared_1[threadIdx_x * 48 + 42]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[69] * kernel_shared_1[threadIdx_x * 48 + 45]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[55] * kernel_shared_1[threadIdx_x * 48 + 19]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[64] * kernel_shared_1[threadIdx_x * 48 + 22]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[56] * kernel_shared_1[threadIdx_x * 48 + 19]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[65] * kernel_shared_1[threadIdx_x * 48 + 22]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[57] * kernel_shared_1[threadIdx_x * 48 + 19]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[66] * kernel_shared_1[threadIdx_x * 48 + 22]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[58] * kernel_shared_1[threadIdx_x * 48 + 19]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[67] * kernel_shared_1[threadIdx_x * 48 + 22]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[59] * kernel_shared_1[threadIdx_x * 48 + 19]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[68] * kernel_shared_1[threadIdx_x * 48 + 22]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[60] * kernel_shared_1[threadIdx_x * 48 + 19]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[69] * kernel_shared_1[threadIdx_x * 48 + 22]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[61] * kernel_shared_1[threadIdx_x * 48 + 19]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[70] * kernel_shared_1[threadIdx_x * 48 + 22]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[55] * kernel_shared_1[threadIdx_x * 48 + 43]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[64] * kernel_shared_1[threadIdx_x * 48 + 46]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[56] * kernel_shared_1[threadIdx_x * 48 + 43]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[65] * kernel_shared_1[threadIdx_x * 48 + 46]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[57] * kernel_shared_1[threadIdx_x * 48 + 43]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[66] * kernel_shared_1[threadIdx_x * 48 + 46]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[58] * kernel_shared_1[threadIdx_x * 48 + 43]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[67] * kernel_shared_1[threadIdx_x * 48 + 46]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[59] * kernel_shared_1[threadIdx_x * 48 + 43]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[68] * kernel_shared_1[threadIdx_x * 48 + 46]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[60] * kernel_shared_1[threadIdx_x * 48 + 43]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[69] * kernel_shared_1[threadIdx_x * 48 + 46]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[61] * kernel_shared_1[threadIdx_x * 48 + 43]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[70] * kernel_shared_1[threadIdx_x * 48 + 46]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[56] * kernel_shared_1[threadIdx_x * 48 + 20]
+ conv2d_nchw_1[0] = conv2d_nchw_1[0] + pad_temp_shared_1[65] * kernel_shared_1[threadIdx_x * 48 + 23]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[57] * kernel_shared_1[threadIdx_x * 48 + 20]
+ conv2d_nchw_1[1] = conv2d_nchw_1[1] + pad_temp_shared_1[66] * kernel_shared_1[threadIdx_x * 48 + 23]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[58] * kernel_shared_1[threadIdx_x * 48 + 20]
+ conv2d_nchw_1[2] = conv2d_nchw_1[2] + pad_temp_shared_1[67] * kernel_shared_1[threadIdx_x * 48 + 23]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[59] * kernel_shared_1[threadIdx_x * 48 + 20]
+ conv2d_nchw_1[3] = conv2d_nchw_1[3] + pad_temp_shared_1[68] * kernel_shared_1[threadIdx_x * 48 + 23]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[60] * kernel_shared_1[threadIdx_x * 48 + 20]
+ conv2d_nchw_1[4] = conv2d_nchw_1[4] + pad_temp_shared_1[69] * kernel_shared_1[threadIdx_x * 48 + 23]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[61] * kernel_shared_1[threadIdx_x * 48 + 20]
+ conv2d_nchw_1[5] = conv2d_nchw_1[5] + pad_temp_shared_1[70] * kernel_shared_1[threadIdx_x * 48 + 23]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[62] * kernel_shared_1[threadIdx_x * 48 + 20]
+ conv2d_nchw_1[6] = conv2d_nchw_1[6] + pad_temp_shared_1[71] * kernel_shared_1[threadIdx_x * 48 + 23]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[56] * kernel_shared_1[threadIdx_x * 48 + 44]
+ conv2d_nchw_1[7] = conv2d_nchw_1[7] + pad_temp_shared_1[65] * kernel_shared_1[threadIdx_x * 48 + 47]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[57] * kernel_shared_1[threadIdx_x * 48 + 44]
+ conv2d_nchw_1[8] = conv2d_nchw_1[8] + pad_temp_shared_1[66] * kernel_shared_1[threadIdx_x * 48 + 47]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[58] * kernel_shared_1[threadIdx_x * 48 + 44]
+ conv2d_nchw_1[9] = conv2d_nchw_1[9] + pad_temp_shared_1[67] * kernel_shared_1[threadIdx_x * 48 + 47]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[59] * kernel_shared_1[threadIdx_x * 48 + 44]
+ conv2d_nchw_1[10] = conv2d_nchw_1[10] + pad_temp_shared_1[68] * kernel_shared_1[threadIdx_x * 48 + 47]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[60] * kernel_shared_1[threadIdx_x * 48 + 44]
+ conv2d_nchw_1[11] = conv2d_nchw_1[11] + pad_temp_shared_1[69] * kernel_shared_1[threadIdx_x * 48 + 47]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[61] * kernel_shared_1[threadIdx_x * 48 + 44]
+ conv2d_nchw_1[12] = conv2d_nchw_1[12] + pad_temp_shared_1[70] * kernel_shared_1[threadIdx_x * 48 + 47]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[62] * kernel_shared_1[threadIdx_x * 48 + 44]
+ conv2d_nchw_1[13] = conv2d_nchw_1[13] + pad_temp_shared_1[71] * kernel_shared_1[threadIdx_x * 48 + 47]
+ for i1_inner, i3_inner in T.grid(2, 7):
+ compute_1 = T.Buffer((25088,), data=compute.data)
+ bias_1 = T.Buffer((512,), data=bias.data)
+ compute_1[blockIdx_x // 7 * 6272 + threadIdx_x * 98 + i1_inner * 49 + blockIdx_x % 7 * 7 + i3_inner] = T.max(conv2d_nchw_1[i1_inner * 7 + i3_inner] + bias_1[blockIdx_x // 7 * 128 + threadIdx_x * 2 + i1_inner], T.float32(0))
</pre></div>
</div>
</div>
@@ -586,7 +1016,7 @@ class Module:
<span class="p">)</span>
</pre></div>
</div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 0.375 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 0.359 ms
</pre></div>
</div>
</div>
@@ -616,36 +1046,36 @@ conv2d_nchw_nn_o_o_i, conv2d_nchw_nn_o_i = s[conv2d_nchw].split(conv2d_nchw_nn_o
conv2d_nchw_nn_o_o_o_i, conv2d_nchw_nn_o_o_i = s[conv2d_nchw].split(conv2d_nchw_nn_o_o_i, factor=1)
conv2d_nchw_nn_o_o_o_o, conv2d_nchw_nn_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_nn_o_o_o_i, factor=1)
conv2d_nchw_ff_o_i, conv2d_nchw_ff_i = s[conv2d_nchw].split(conv2d_nchw_ff, factor=1)
-conv2d_nchw_ff_o_o_i, conv2d_nchw_ff_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_i, factor=1)
-conv2d_nchw_ff_o_o_o_i, conv2d_nchw_ff_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_i, factor=16)
+conv2d_nchw_ff_o_o_i, conv2d_nchw_ff_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_i, factor=2)
+conv2d_nchw_ff_o_o_o_i, conv2d_nchw_ff_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_i, factor=64)
conv2d_nchw_ff_o_o_o_o, conv2d_nchw_ff_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_o_i, factor=1)
conv2d_nchw_yy_o_i, conv2d_nchw_yy_i = s[conv2d_nchw].split(conv2d_nchw_yy, factor=1)
conv2d_nchw_yy_o_o_i, conv2d_nchw_yy_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_i, factor=1)
-conv2d_nchw_yy_o_o_o_i, conv2d_nchw_yy_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_i, factor=7)
+conv2d_nchw_yy_o_o_o_i, conv2d_nchw_yy_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_i, factor=1)
conv2d_nchw_yy_o_o_o_o, conv2d_nchw_yy_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_o_i, factor=1)
conv2d_nchw_xx_o_i, conv2d_nchw_xx_i = s[conv2d_nchw].split(conv2d_nchw_xx, factor=1)
-conv2d_nchw_xx_o_o_i, conv2d_nchw_xx_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_i, factor=1)
+conv2d_nchw_xx_o_o_i, conv2d_nchw_xx_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_i, factor=7)
conv2d_nchw_xx_o_o_o_i, conv2d_nchw_xx_o_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_o_i, factor=1)
-conv2d_nchw_xx_o_o_o_o, conv2d_nchw_xx_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_o_o_i, factor=7)
-conv2d_nchw_rc_o_i, conv2d_nchw_rc_i = s[conv2d_nchw].split(conv2d_nchw_rc, factor=32)
-conv2d_nchw_rc_o_o, conv2d_nchw_rc_o_i = s[conv2d_nchw].split(conv2d_nchw_rc_o_i, factor=2)
+conv2d_nchw_xx_o_o_o_o, conv2d_nchw_xx_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_o_o_i, factor=1)
+conv2d_nchw_rc_o_i, conv2d_nchw_rc_i = s[conv2d_nchw].split(conv2d_nchw_rc, factor=2)
+conv2d_nchw_rc_o_o, conv2d_nchw_rc_o_i = s[conv2d_nchw].split(conv2d_nchw_rc_o_i, factor=4)
conv2d_nchw_ry_o_i, conv2d_nchw_ry_i = s[conv2d_nchw].split(conv2d_nchw_ry, factor=1)
conv2d_nchw_ry_o_o, conv2d_nchw_ry_o_i = s[conv2d_nchw].split(conv2d_nchw_ry_o_i, factor=1)
conv2d_nchw_rx_o_i, conv2d_nchw_rx_i = s[conv2d_nchw].split(conv2d_nchw_rx, factor=1)
-conv2d_nchw_rx_o_o, conv2d_nchw_rx_o_i = s[conv2d_nchw].split(conv2d_nchw_rx_o_i, factor=1)
+conv2d_nchw_rx_o_o, conv2d_nchw_rx_o_i = s[conv2d_nchw].split(conv2d_nchw_rx_o_i, factor=3)
s[conv2d_nchw].reorder(conv2d_nchw_nn_o_o_o_o, conv2d_nchw_ff_o_o_o_o, conv2d_nchw_yy_o_o_o_o, conv2d_nchw_xx_o_o_o_o, conv2d_nchw_nn_o_o_o_i, conv2d_nchw_ff_o_o_o_i, conv2d_nchw_yy_o_o_o_i, conv2d_nchw_xx_o_o_o_i, conv2d_nchw_nn_o_o_i, conv2d_nchw_ff_o_o_i, conv2d_nchw_yy_o_o_i, conv2d_nchw_xx_o_o_i, conv2d_nchw_rc_o_o, conv2d_nchw_ry_o_o, conv2d_nchw_rx_o_o, conv2d_nchw_rc_o_i, conv2d_nchw_ry_o_i, conv2d_nchw_rx_o_i, conv2d_nchw_nn_o_i, conv2d_nchw_ff_o_i, conv2d_nchw_yy_o_i, conv2d_nc [...]
compute_i0_o_i, compute_i0_i = s[compute].split(compute_i0, factor=1)
compute_i0_o_o_i, compute_i0_o_i = s[compute].split(compute_i0_o_i, factor=1)
compute_i0_o_o_o, compute_i0_o_o_i = s[compute].split(compute_i0_o_o_i, factor=1)
-compute_i1_o_i, compute_i1_i = s[compute].split(compute_i1, factor=1)
-compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=16)
+compute_i1_o_i, compute_i1_i = s[compute].split(compute_i1, factor=2)
+compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=64)
compute_i1_o_o_o, compute_i1_o_o_i = s[compute].split(compute_i1_o_o_i, factor=1)
compute_i2_o_i, compute_i2_i = s[compute].split(compute_i2, factor=1)
-compute_i2_o_o_i, compute_i2_o_i = s[compute].split(compute_i2_o_i, factor=7)
+compute_i2_o_o_i, compute_i2_o_i = s[compute].split(compute_i2_o_i, factor=1)
compute_i2_o_o_o, compute_i2_o_o_i = s[compute].split(compute_i2_o_o_i, factor=1)
-compute_i3_o_i, compute_i3_i = s[compute].split(compute_i3, factor=1)
+compute_i3_o_i, compute_i3_i = s[compute].split(compute_i3, factor=7)
compute_i3_o_o_i, compute_i3_o_i = s[compute].split(compute_i3_o_i, factor=1)
-compute_i3_o_o_o, compute_i3_o_o_i = s[compute].split(compute_i3_o_o_i, factor=7)
+compute_i3_o_o_o, compute_i3_o_o_i = s[compute].split(compute_i3_o_o_i, factor=1)
s[compute].reorder(compute_i0_o_o_o, compute_i1_o_o_o, compute_i2_o_o_o, compute_i3_o_o_o, compute_i0_o_o_i, compute_i1_o_o_i, compute_i2_o_o_i, compute_i3_o_o_i, compute_i0_o_i, compute_i1_o_i, compute_i2_o_i, compute_i3_o_i, compute_i0_i, compute_i1_i, compute_i2_i, compute_i3_i)
s[conv2d_nchw].compute_at(s[compute], compute_i3_o_i)
kernel_shared = s.cache_read(kernel, "shared", [conv2d_nchw])
@@ -664,14 +1094,14 @@ s[compute].bind(compute_i0_o_i_i1_o_i_fused_i2_o_i_fused_i3_o_i_fused, te.thread
kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[kernel_shared].fuse(kernel_shared_ax0, kernel_shared_ax1, kernel_shared_ax2, kernel_shared_ax3)
kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
s[kernel_shared].vectorize(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
-kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=112)
+kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=64)
s[kernel_shared].bind(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis("threadIdx.x"))
pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[pad_temp_shared].fuse(pad_temp_shared_ax0, pad_temp_shared_ax1, pad_temp_shared_ax2, pad_temp_shared_ax3)
-pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
+pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=4)
s[pad_temp_shared].vectorize(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
-pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=112)
+pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=64)
s[pad_temp_shared].bind(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis("threadIdx.x"))
-s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, "auto_unroll_max_step", 0)
+s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, "auto_unroll_max_step", 512)
s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, "unroll_explicit", True)
CUDA source code:
@@ -689,10 +1119,10 @@ CUDA source code:
#define int64_t long long
#define uint64_t unsigned long long
#endif
-extern "C" __global__ void __launch_bounds__(112) default_function_kernel0(float* __restrict__ data, float* __restrict__ kernel, float* __restrict__ compute, float* __restrict__ bias) {
- float conv2d_nchw[7];
- __shared__ float pad_temp_shared[3136];
- __shared__ float kernel_shared[1024];
+extern "C" __global__ void __launch_bounds__(64) default_function_kernel0(float* __restrict__ data, float* __restrict__ kernel, float* __restrict__ compute, float* __restrict__ bias) {
+ float conv2d_nchw[14];
+ __shared__ float pad_temp_shared[72];
+ __shared__ float kernel_shared[3072];
conv2d_nchw[0] = 0.000000e+00f;
conv2d_nchw[1] = 0.000000e+00f;
conv2d_nchw[2] = 0.000000e+00f;
@@ -700,40 +1130,420 @@ extern "C" __global__ void __launch_bounds__(112) default_function_ker
conv2d_nchw[4] = 0.000000e+00f;
conv2d_nchw[5] = 0.000000e+00f;
conv2d_nchw[6] = 0.000000e+00f;
- for (int rc_outer_outer = 0; rc_outer_outer < 8; ++rc_outer_outer) {
+ conv2d_nchw[7] = 0.000000e+00f;
+ conv2d_nchw[8] = 0.000000e+00f;
+ conv2d_nchw[9] = 0.000000e+00f;
+ conv2d_nchw[10] = 0.000000e+00f;
+ conv2d_nchw[11] = 0.000000e+00f;
+ conv2d_nchw[12] = 0.000000e+00f;
+ conv2d_nchw[13] = 0.000000e+00f;
+ for (int rc_outer_outer = 0; rc_outer_outer < 64; ++rc_outer_outer) {
for (int ry_outer_outer = 0; ry_outer_outer < 3; ++ry_outer_outer) {
- for (int rx_outer_outer = 0; rx_outer_outer < 3; ++rx_outer_outer) {
- __syncthreads();
- for (int ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer = 0; ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer < 28; ++ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer) {
- pad_temp_shared[((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 112) + ((int)threadIdx.x))] = (((((1 <= (ry_outer_outer + (((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 2) + (((int)threadIdx.x) / 7)) % 7))) && ((ry_outer_outer + (((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer * 2) + (((int)threadIdx.x) / 7)) % 7)) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[( [...]
- }
- for (int ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer_1 = 0; ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer_1 < 10; ++ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer_1) {
- if (((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer_1 * 7) + (((int)threadIdx.x) >> 4)) < 64) {
- kernel_shared[((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer_1 * 112) + ((int)threadIdx.x))] = kernel[((((((((int)blockIdx.x) * 73728) + ((((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer_1 * 7) + (((int)threadIdx.x) >> 4)) >> 2) * 4608)) + (rc_outer_outer * 576)) + ((((ax0_ax1_fused_ax2_fused_ax3_fused_outer_outer_1 * 48) + ((int)threadIdx.x)) & 63) * 9)) + (ry_outer_outer * 3)) + rx_outer_outer)];
- }
- }
- __syncthreads();
- for (int rc_outer_inner = 0; rc_outer_inner < 2; ++rc_outer_inner) {
- for (int rc_inner = 0; rc_inner < 32; ++rc_inner) {
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[(((rc_outer_inner * 1568) + (rc_inner * 49)) + ((((int)threadIdx.x) % 7) * 7))] * kernel_shared[((((((int)threadIdx.x) / 7) * 64) + (rc_outer_inner * 32)) + rc_inner)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((rc_outer_inner * 1568) + (rc_inner * 49)) + ((((int)threadIdx.x) % 7) * 7)) + 1)] * kernel_shared[((((((int)threadIdx.x) / 7) * 64) + (rc_outer_inner * 32)) + rc_inner)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((rc_outer_inner * 1568) + (rc_inner * 49)) + ((((int)threadIdx.x) % 7) * 7)) + 2)] * kernel_shared[((((((int)threadIdx.x) / 7) * 64) + (rc_outer_inner * 32)) + rc_inner)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((rc_outer_inner * 1568) + (rc_inner * 49)) + ((((int)threadIdx.x) % 7) * 7)) + 3)] * kernel_shared[((((((int)threadIdx.x) / 7) * 64) + (rc_outer_inner * 32)) + rc_inner)]));
- conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[((((rc_outer_inner * 1568) + (rc_inner * 49)) + ((((int)threadIdx.x) % 7) * 7)) + 4)] * kernel_shared[((((((int)threadIdx.x) / 7) * 64) + (rc_outer_inner * 32)) + rc_inner)]));
- conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[((((rc_outer_inner * 1568) + (rc_inner * 49)) + ((((int)threadIdx.x) % 7) * 7)) + 5)] * kernel_shared[((((((int)threadIdx.x) / 7) * 64) + (rc_outer_inner * 32)) + rc_inner)]));
- conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[((((rc_outer_inner * 1568) + (rc_inner * 49)) + ((((int)threadIdx.x) % 7) * 7)) + 6)] * kernel_shared[((((((int)threadIdx.x) / 7) * 64) + (rc_outer_inner * 32)) + rc_inner)]));
- }
- }
+ __syncthreads();
+ if (((int)threadIdx.x) < 18) {
+ pad_temp_shared[(((int)threadIdx.x) * 4)] = (((((1 <= (ry_outer_outer + (((int)blockIdx.x) % 7))) && ((ry_outer_outer + (((int)blockIdx.x) % 7)) < 8)) && (1 <= ((((int)threadIdx.x) * 4) % 9))) && (((((int)threadIdx.x) * 4) % 9) < 8)) ? data[((((((rc_outer_outer * 392) + (((((int)threadIdx.x) * 4) / 9) * 49)) + (ry_outer_outer * 7)) + ((((int)blockIdx.x) % 7) * 7)) + ((((int)threadIdx.x) * 4) % 9)) - 8)] : 0.000000e+00f);
}
+ if (((int)threadIdx.x) < 18) {
+ pad_temp_shared[((((int)threadIdx.x) * 4) + 1)] = (((((1 <= (ry_outer_outer + (((int)blockIdx.x) % 7))) && ((ry_outer_outer + (((int)blockIdx.x) % 7)) < 8)) && (1 <= (((((int)threadIdx.x) * 4) + 1) % 9))) && ((((((int)threadIdx.x) * 4) + 1) % 9) < 8)) ? data[((((((rc_outer_outer * 392) + ((((((int)threadIdx.x) * 4) + 1) / 9) * 49)) + (ry_outer_outer * 7)) + ((((int)blockIdx.x) % 7) * 7)) + (((((int)threadIdx.x) * 4) + 1) % 9)) - 8)] : 0.000000e+00f);
+ }
+ if (((int)threadIdx.x) < 18) {
+ pad_temp_shared[((((int)threadIdx.x) * 4) + 2)] = (((((1 <= (ry_outer_outer + (((int)blockIdx.x) % 7))) && ((ry_outer_outer + (((int)blockIdx.x) % 7)) < 8)) && (1 <= (((((int)threadIdx.x) * 4) + 2) % 9))) && ((((((int)threadIdx.x) * 4) + 2) % 9) < 8)) ? data[((((((rc_outer_outer * 392) + ((((((int)threadIdx.x) * 4) + 2) / 9) * 49)) + (ry_outer_outer * 7)) + ((((int)blockIdx.x) % 7) * 7)) + (((((int)threadIdx.x) * 4) + 2) % 9)) - 8)] : 0.000000e+00f);
+ }
+ if (((int)threadIdx.x) < 18) {
+ pad_temp_shared[((((int)threadIdx.x) * 4) + 3)] = (((((1 <= (ry_outer_outer + (((int)blockIdx.x) % 7))) && ((ry_outer_outer + (((int)blockIdx.x) % 7)) < 8)) && (1 <= (((((int)threadIdx.x) * 4) + 3) % 9))) && ((((((int)threadIdx.x) * 4) + 3) % 9) < 8)) ? data[((((((rc_outer_outer * 392) + ((((((int)threadIdx.x) * 4) + 3) / 9) * 49)) + (ry_outer_outer * 7)) + ((((int)blockIdx.x) % 7) * 7)) + (((((int)threadIdx.x) * 4) + 3) % 9)) - 8)] : 0.000000e+00f);
+ }
+ kernel_shared[((int)threadIdx.x)] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 64) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 64) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 128) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 128) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 192)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 36864)];
+ kernel_shared[(((((((int)threadIdx.x) + 256) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 256) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 320) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 320) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 384)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 73728)];
+ kernel_shared[(((((((int)threadIdx.x) + 448) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 448) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 512) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 512) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 576)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 110592)];
+ kernel_shared[(((((((int)threadIdx.x) + 640) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 640) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 704) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 704) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 768)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 147456)];
+ kernel_shared[(((((((int)threadIdx.x) + 832) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 832) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 896) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 896) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 960)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 184320)];
+ kernel_shared[(((((((int)threadIdx.x) + 1024) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1024) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 1088) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1088) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 1152)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 221184)];
+ kernel_shared[(((((((int)threadIdx.x) + 1216) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1216) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 1280) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1280) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 1344)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 258048)];
+ kernel_shared[(((((((int)threadIdx.x) + 1408) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1408) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 1472) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1472) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 1536)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 294912)];
+ kernel_shared[(((((((int)threadIdx.x) + 1600) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1600) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 1664) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1664) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 1728)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 331776)];
+ kernel_shared[(((((((int)threadIdx.x) + 1792) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1792) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 1856) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1856) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 1920)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 368640)];
+ kernel_shared[(((((((int)threadIdx.x) + 1984) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1984) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 2048) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2048) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 2112)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 405504)];
+ kernel_shared[(((((((int)threadIdx.x) + 2176) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2176) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 2240) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2240) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 2304)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 442368)];
+ kernel_shared[(((((((int)threadIdx.x) + 2368) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2368) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 2432) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2432) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 2496)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 479232)];
+ kernel_shared[(((((((int)threadIdx.x) + 2560) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2560) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 2624) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2624) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 2688)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 516096)];
+ kernel_shared[(((((((int)threadIdx.x) + 2752) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2752) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 2816) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2816) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ kernel_shared[(((int)threadIdx.x) + 2880)] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 552960)];
+ kernel_shared[(((((((int)threadIdx.x) + 2944) / 24) * 24) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 1) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2944) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3))];
+ kernel_shared[(((((((int)threadIdx.x) + 3008) / 24) * 24) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 3)) + ((((int)threadIdx.x) + 2) % 3))] = kernel[(((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 3008) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3))];
+ __syncthreads();
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[0] * kernel_shared[(((int)threadIdx.x) * 48)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[9] * kernel_shared[((((int)threadIdx.x) * 48) + 3)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[1] * kernel_shared[(((int)threadIdx.x) * 48)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[10] * kernel_shared[((((int)threadIdx.x) * 48) + 3)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[2] * kernel_shared[(((int)threadIdx.x) * 48)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[11] * kernel_shared[((((int)threadIdx.x) * 48) + 3)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[3] * kernel_shared[(((int)threadIdx.x) * 48)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[12] * kernel_shared[((((int)threadIdx.x) * 48) + 3)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[4] * kernel_shared[(((int)threadIdx.x) * 48)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[13] * kernel_shared[((((int)threadIdx.x) * 48) + 3)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[5] * kernel_shared[(((int)threadIdx.x) * 48)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[14] * kernel_shared[((((int)threadIdx.x) * 48) + 3)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[6] * kernel_shared[(((int)threadIdx.x) * 48)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[15] * kernel_shared[((((int)threadIdx.x) * 48) + 3)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[0] * kernel_shared[((((int)threadIdx.x) * 48) + 24)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[9] * kernel_shared[((((int)threadIdx.x) * 48) + 27)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[1] * kernel_shared[((((int)threadIdx.x) * 48) + 24)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[10] * kernel_shared[((((int)threadIdx.x) * 48) + 27)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[2] * kernel_shared[((((int)threadIdx.x) * 48) + 24)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[11] * kernel_shared[((((int)threadIdx.x) * 48) + 27)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[3] * kernel_shared[((((int)threadIdx.x) * 48) + 24)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[12] * kernel_shared[((((int)threadIdx.x) * 48) + 27)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[4] * kernel_shared[((((int)threadIdx.x) * 48) + 24)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[13] * kernel_shared[((((int)threadIdx.x) * 48) + 27)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[5] * kernel_shared[((((int)threadIdx.x) * 48) + 24)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[14] * kernel_shared[((((int)threadIdx.x) * 48) + 27)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[6] * kernel_shared[((((int)threadIdx.x) * 48) + 24)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[15] * kernel_shared[((((int)threadIdx.x) * 48) + 27)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[1] * kernel_shared[((((int)threadIdx.x) * 48) + 1)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[10] * kernel_shared[((((int)threadIdx.x) * 48) + 4)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[2] * kernel_shared[((((int)threadIdx.x) * 48) + 1)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[11] * kernel_shared[((((int)threadIdx.x) * 48) + 4)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[3] * kernel_shared[((((int)threadIdx.x) * 48) + 1)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[12] * kernel_shared[((((int)threadIdx.x) * 48) + 4)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[4] * kernel_shared[((((int)threadIdx.x) * 48) + 1)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[13] * kernel_shared[((((int)threadIdx.x) * 48) + 4)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[5] * kernel_shared[((((int)threadIdx.x) * 48) + 1)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[14] * kernel_shared[((((int)threadIdx.x) * 48) + 4)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[6] * kernel_shared[((((int)threadIdx.x) * 48) + 1)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[15] * kernel_shared[((((int)threadIdx.x) * 48) + 4)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[7] * kernel_shared[((((int)threadIdx.x) * 48) + 1)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[16] * kernel_shared[((((int)threadIdx.x) * 48) + 4)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[1] * kernel_shared[((((int)threadIdx.x) * 48) + 25)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[10] * kernel_shared[((((int)threadIdx.x) * 48) + 28)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[2] * kernel_shared[((((int)threadIdx.x) * 48) + 25)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[11] * kernel_shared[((((int)threadIdx.x) * 48) + 28)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[3] * kernel_shared[((((int)threadIdx.x) * 48) + 25)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[12] * kernel_shared[((((int)threadIdx.x) * 48) + 28)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[4] * kernel_shared[((((int)threadIdx.x) * 48) + 25)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[13] * kernel_shared[((((int)threadIdx.x) * 48) + 28)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[5] * kernel_shared[((((int)threadIdx.x) * 48) + 25)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[14] * kernel_shared[((((int)threadIdx.x) * 48) + 28)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[6] * kernel_shared[((((int)threadIdx.x) * 48) + 25)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[15] * kernel_shared[((((int)threadIdx.x) * 48) + 28)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[7] * kernel_shared[((((int)threadIdx.x) * 48) + 25)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[16] * kernel_shared[((((int)threadIdx.x) * 48) + 28)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[2] * kernel_shared[((((int)threadIdx.x) * 48) + 2)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[11] * kernel_shared[((((int)threadIdx.x) * 48) + 5)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[3] * kernel_shared[((((int)threadIdx.x) * 48) + 2)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[12] * kernel_shared[((((int)threadIdx.x) * 48) + 5)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[4] * kernel_shared[((((int)threadIdx.x) * 48) + 2)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[13] * kernel_shared[((((int)threadIdx.x) * 48) + 5)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[5] * kernel_shared[((((int)threadIdx.x) * 48) + 2)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[14] * kernel_shared[((((int)threadIdx.x) * 48) + 5)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[6] * kernel_shared[((((int)threadIdx.x) * 48) + 2)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[15] * kernel_shared[((((int)threadIdx.x) * 48) + 5)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[7] * kernel_shared[((((int)threadIdx.x) * 48) + 2)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[16] * kernel_shared[((((int)threadIdx.x) * 48) + 5)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[8] * kernel_shared[((((int)threadIdx.x) * 48) + 2)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[17] * kernel_shared[((((int)threadIdx.x) * 48) + 5)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[2] * kernel_shared[((((int)threadIdx.x) * 48) + 26)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[11] * kernel_shared[((((int)threadIdx.x) * 48) + 29)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[3] * kernel_shared[((((int)threadIdx.x) * 48) + 26)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[12] * kernel_shared[((((int)threadIdx.x) * 48) + 29)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[4] * kernel_shared[((((int)threadIdx.x) * 48) + 26)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[13] * kernel_shared[((((int)threadIdx.x) * 48) + 29)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[5] * kernel_shared[((((int)threadIdx.x) * 48) + 26)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[14] * kernel_shared[((((int)threadIdx.x) * 48) + 29)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[6] * kernel_shared[((((int)threadIdx.x) * 48) + 26)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[15] * kernel_shared[((((int)threadIdx.x) * 48) + 29)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[7] * kernel_shared[((((int)threadIdx.x) * 48) + 26)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[16] * kernel_shared[((((int)threadIdx.x) * 48) + 29)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[8] * kernel_shared[((((int)threadIdx.x) * 48) + 26)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[17] * kernel_shared[((((int)threadIdx.x) * 48) + 29)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[18] * kernel_shared[((((int)threadIdx.x) * 48) + 6)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[27] * kernel_shared[((((int)threadIdx.x) * 48) + 9)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[19] * kernel_shared[((((int)threadIdx.x) * 48) + 6)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[28] * kernel_shared[((((int)threadIdx.x) * 48) + 9)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[20] * kernel_shared[((((int)threadIdx.x) * 48) + 6)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[29] * kernel_shared[((((int)threadIdx.x) * 48) + 9)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[21] * kernel_shared[((((int)threadIdx.x) * 48) + 6)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[30] * kernel_shared[((((int)threadIdx.x) * 48) + 9)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[22] * kernel_shared[((((int)threadIdx.x) * 48) + 6)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[31] * kernel_shared[((((int)threadIdx.x) * 48) + 9)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[23] * kernel_shared[((((int)threadIdx.x) * 48) + 6)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[32] * kernel_shared[((((int)threadIdx.x) * 48) + 9)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[24] * kernel_shared[((((int)threadIdx.x) * 48) + 6)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[33] * kernel_shared[((((int)threadIdx.x) * 48) + 9)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[18] * kernel_shared[((((int)threadIdx.x) * 48) + 30)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[27] * kernel_shared[((((int)threadIdx.x) * 48) + 33)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[19] * kernel_shared[((((int)threadIdx.x) * 48) + 30)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[28] * kernel_shared[((((int)threadIdx.x) * 48) + 33)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[20] * kernel_shared[((((int)threadIdx.x) * 48) + 30)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[29] * kernel_shared[((((int)threadIdx.x) * 48) + 33)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[21] * kernel_shared[((((int)threadIdx.x) * 48) + 30)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[30] * kernel_shared[((((int)threadIdx.x) * 48) + 33)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[22] * kernel_shared[((((int)threadIdx.x) * 48) + 30)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[31] * kernel_shared[((((int)threadIdx.x) * 48) + 33)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[23] * kernel_shared[((((int)threadIdx.x) * 48) + 30)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[32] * kernel_shared[((((int)threadIdx.x) * 48) + 33)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[24] * kernel_shared[((((int)threadIdx.x) * 48) + 30)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[33] * kernel_shared[((((int)threadIdx.x) * 48) + 33)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[19] * kernel_shared[((((int)threadIdx.x) * 48) + 7)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[28] * kernel_shared[((((int)threadIdx.x) * 48) + 10)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[20] * kernel_shared[((((int)threadIdx.x) * 48) + 7)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[29] * kernel_shared[((((int)threadIdx.x) * 48) + 10)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[21] * kernel_shared[((((int)threadIdx.x) * 48) + 7)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[30] * kernel_shared[((((int)threadIdx.x) * 48) + 10)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[22] * kernel_shared[((((int)threadIdx.x) * 48) + 7)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[31] * kernel_shared[((((int)threadIdx.x) * 48) + 10)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[23] * kernel_shared[((((int)threadIdx.x) * 48) + 7)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[32] * kernel_shared[((((int)threadIdx.x) * 48) + 10)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[24] * kernel_shared[((((int)threadIdx.x) * 48) + 7)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[33] * kernel_shared[((((int)threadIdx.x) * 48) + 10)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[25] * kernel_shared[((((int)threadIdx.x) * 48) + 7)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[34] * kernel_shared[((((int)threadIdx.x) * 48) + 10)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[19] * kernel_shared[((((int)threadIdx.x) * 48) + 31)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[28] * kernel_shared[((((int)threadIdx.x) * 48) + 34)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[20] * kernel_shared[((((int)threadIdx.x) * 48) + 31)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[29] * kernel_shared[((((int)threadIdx.x) * 48) + 34)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[21] * kernel_shared[((((int)threadIdx.x) * 48) + 31)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[30] * kernel_shared[((((int)threadIdx.x) * 48) + 34)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[22] * kernel_shared[((((int)threadIdx.x) * 48) + 31)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[31] * kernel_shared[((((int)threadIdx.x) * 48) + 34)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[23] * kernel_shared[((((int)threadIdx.x) * 48) + 31)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[32] * kernel_shared[((((int)threadIdx.x) * 48) + 34)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[24] * kernel_shared[((((int)threadIdx.x) * 48) + 31)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[33] * kernel_shared[((((int)threadIdx.x) * 48) + 34)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[25] * kernel_shared[((((int)threadIdx.x) * 48) + 31)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[34] * kernel_shared[((((int)threadIdx.x) * 48) + 34)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[20] * kernel_shared[((((int)threadIdx.x) * 48) + 8)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[29] * kernel_shared[((((int)threadIdx.x) * 48) + 11)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[21] * kernel_shared[((((int)threadIdx.x) * 48) + 8)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[30] * kernel_shared[((((int)threadIdx.x) * 48) + 11)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[22] * kernel_shared[((((int)threadIdx.x) * 48) + 8)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[31] * kernel_shared[((((int)threadIdx.x) * 48) + 11)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[23] * kernel_shared[((((int)threadIdx.x) * 48) + 8)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[32] * kernel_shared[((((int)threadIdx.x) * 48) + 11)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[24] * kernel_shared[((((int)threadIdx.x) * 48) + 8)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[33] * kernel_shared[((((int)threadIdx.x) * 48) + 11)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[25] * kernel_shared[((((int)threadIdx.x) * 48) + 8)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[34] * kernel_shared[((((int)threadIdx.x) * 48) + 11)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[26] * kernel_shared[((((int)threadIdx.x) * 48) + 8)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[35] * kernel_shared[((((int)threadIdx.x) * 48) + 11)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[20] * kernel_shared[((((int)threadIdx.x) * 48) + 32)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[29] * kernel_shared[((((int)threadIdx.x) * 48) + 35)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[21] * kernel_shared[((((int)threadIdx.x) * 48) + 32)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[30] * kernel_shared[((((int)threadIdx.x) * 48) + 35)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[22] * kernel_shared[((((int)threadIdx.x) * 48) + 32)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[31] * kernel_shared[((((int)threadIdx.x) * 48) + 35)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[23] * kernel_shared[((((int)threadIdx.x) * 48) + 32)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[32] * kernel_shared[((((int)threadIdx.x) * 48) + 35)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[24] * kernel_shared[((((int)threadIdx.x) * 48) + 32)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[33] * kernel_shared[((((int)threadIdx.x) * 48) + 35)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[25] * kernel_shared[((((int)threadIdx.x) * 48) + 32)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[34] * kernel_shared[((((int)threadIdx.x) * 48) + 35)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[26] * kernel_shared[((((int)threadIdx.x) * 48) + 32)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[35] * kernel_shared[((((int)threadIdx.x) * 48) + 35)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[36] * kernel_shared[((((int)threadIdx.x) * 48) + 12)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[45] * kernel_shared[((((int)threadIdx.x) * 48) + 15)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[37] * kernel_shared[((((int)threadIdx.x) * 48) + 12)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[46] * kernel_shared[((((int)threadIdx.x) * 48) + 15)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[38] * kernel_shared[((((int)threadIdx.x) * 48) + 12)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[47] * kernel_shared[((((int)threadIdx.x) * 48) + 15)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[39] * kernel_shared[((((int)threadIdx.x) * 48) + 12)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[48] * kernel_shared[((((int)threadIdx.x) * 48) + 15)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[40] * kernel_shared[((((int)threadIdx.x) * 48) + 12)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[49] * kernel_shared[((((int)threadIdx.x) * 48) + 15)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[41] * kernel_shared[((((int)threadIdx.x) * 48) + 12)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[50] * kernel_shared[((((int)threadIdx.x) * 48) + 15)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[42] * kernel_shared[((((int)threadIdx.x) * 48) + 12)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[51] * kernel_shared[((((int)threadIdx.x) * 48) + 15)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[36] * kernel_shared[((((int)threadIdx.x) * 48) + 36)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[45] * kernel_shared[((((int)threadIdx.x) * 48) + 39)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[37] * kernel_shared[((((int)threadIdx.x) * 48) + 36)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[46] * kernel_shared[((((int)threadIdx.x) * 48) + 39)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[38] * kernel_shared[((((int)threadIdx.x) * 48) + 36)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[47] * kernel_shared[((((int)threadIdx.x) * 48) + 39)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[39] * kernel_shared[((((int)threadIdx.x) * 48) + 36)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[48] * kernel_shared[((((int)threadIdx.x) * 48) + 39)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[40] * kernel_shared[((((int)threadIdx.x) * 48) + 36)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[49] * kernel_shared[((((int)threadIdx.x) * 48) + 39)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[41] * kernel_shared[((((int)threadIdx.x) * 48) + 36)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[50] * kernel_shared[((((int)threadIdx.x) * 48) + 39)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[42] * kernel_shared[((((int)threadIdx.x) * 48) + 36)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[51] * kernel_shared[((((int)threadIdx.x) * 48) + 39)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[37] * kernel_shared[((((int)threadIdx.x) * 48) + 13)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[46] * kernel_shared[((((int)threadIdx.x) * 48) + 16)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[38] * kernel_shared[((((int)threadIdx.x) * 48) + 13)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[47] * kernel_shared[((((int)threadIdx.x) * 48) + 16)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[39] * kernel_shared[((((int)threadIdx.x) * 48) + 13)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[48] * kernel_shared[((((int)threadIdx.x) * 48) + 16)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[40] * kernel_shared[((((int)threadIdx.x) * 48) + 13)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[49] * kernel_shared[((((int)threadIdx.x) * 48) + 16)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[41] * kernel_shared[((((int)threadIdx.x) * 48) + 13)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[50] * kernel_shared[((((int)threadIdx.x) * 48) + 16)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[42] * kernel_shared[((((int)threadIdx.x) * 48) + 13)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[51] * kernel_shared[((((int)threadIdx.x) * 48) + 16)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[43] * kernel_shared[((((int)threadIdx.x) * 48) + 13)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[52] * kernel_shared[((((int)threadIdx.x) * 48) + 16)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[37] * kernel_shared[((((int)threadIdx.x) * 48) + 37)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[46] * kernel_shared[((((int)threadIdx.x) * 48) + 40)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[38] * kernel_shared[((((int)threadIdx.x) * 48) + 37)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[47] * kernel_shared[((((int)threadIdx.x) * 48) + 40)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[39] * kernel_shared[((((int)threadIdx.x) * 48) + 37)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[48] * kernel_shared[((((int)threadIdx.x) * 48) + 40)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[40] * kernel_shared[((((int)threadIdx.x) * 48) + 37)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[49] * kernel_shared[((((int)threadIdx.x) * 48) + 40)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[41] * kernel_shared[((((int)threadIdx.x) * 48) + 37)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[50] * kernel_shared[((((int)threadIdx.x) * 48) + 40)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[42] * kernel_shared[((((int)threadIdx.x) * 48) + 37)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[51] * kernel_shared[((((int)threadIdx.x) * 48) + 40)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[43] * kernel_shared[((((int)threadIdx.x) * 48) + 37)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[52] * kernel_shared[((((int)threadIdx.x) * 48) + 40)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[38] * kernel_shared[((((int)threadIdx.x) * 48) + 14)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[47] * kernel_shared[((((int)threadIdx.x) * 48) + 17)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[39] * kernel_shared[((((int)threadIdx.x) * 48) + 14)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[48] * kernel_shared[((((int)threadIdx.x) * 48) + 17)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[40] * kernel_shared[((((int)threadIdx.x) * 48) + 14)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[49] * kernel_shared[((((int)threadIdx.x) * 48) + 17)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[41] * kernel_shared[((((int)threadIdx.x) * 48) + 14)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[50] * kernel_shared[((((int)threadIdx.x) * 48) + 17)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[42] * kernel_shared[((((int)threadIdx.x) * 48) + 14)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[51] * kernel_shared[((((int)threadIdx.x) * 48) + 17)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[43] * kernel_shared[((((int)threadIdx.x) * 48) + 14)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[52] * kernel_shared[((((int)threadIdx.x) * 48) + 17)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[44] * kernel_shared[((((int)threadIdx.x) * 48) + 14)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[53] * kernel_shared[((((int)threadIdx.x) * 48) + 17)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[38] * kernel_shared[((((int)threadIdx.x) * 48) + 38)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[47] * kernel_shared[((((int)threadIdx.x) * 48) + 41)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[39] * kernel_shared[((((int)threadIdx.x) * 48) + 38)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[48] * kernel_shared[((((int)threadIdx.x) * 48) + 41)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[40] * kernel_shared[((((int)threadIdx.x) * 48) + 38)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[49] * kernel_shared[((((int)threadIdx.x) * 48) + 41)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[41] * kernel_shared[((((int)threadIdx.x) * 48) + 38)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[50] * kernel_shared[((((int)threadIdx.x) * 48) + 41)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[42] * kernel_shared[((((int)threadIdx.x) * 48) + 38)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[51] * kernel_shared[((((int)threadIdx.x) * 48) + 41)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[43] * kernel_shared[((((int)threadIdx.x) * 48) + 38)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[52] * kernel_shared[((((int)threadIdx.x) * 48) + 41)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[44] * kernel_shared[((((int)threadIdx.x) * 48) + 38)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[53] * kernel_shared[((((int)threadIdx.x) * 48) + 41)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[54] * kernel_shared[((((int)threadIdx.x) * 48) + 18)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[63] * kernel_shared[((((int)threadIdx.x) * 48) + 21)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[55] * kernel_shared[((((int)threadIdx.x) * 48) + 18)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[64] * kernel_shared[((((int)threadIdx.x) * 48) + 21)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[56] * kernel_shared[((((int)threadIdx.x) * 48) + 18)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[65] * kernel_shared[((((int)threadIdx.x) * 48) + 21)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[57] * kernel_shared[((((int)threadIdx.x) * 48) + 18)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[66] * kernel_shared[((((int)threadIdx.x) * 48) + 21)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[58] * kernel_shared[((((int)threadIdx.x) * 48) + 18)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[67] * kernel_shared[((((int)threadIdx.x) * 48) + 21)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[59] * kernel_shared[((((int)threadIdx.x) * 48) + 18)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[68] * kernel_shared[((((int)threadIdx.x) * 48) + 21)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[60] * kernel_shared[((((int)threadIdx.x) * 48) + 18)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[69] * kernel_shared[((((int)threadIdx.x) * 48) + 21)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[54] * kernel_shared[((((int)threadIdx.x) * 48) + 42)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[63] * kernel_shared[((((int)threadIdx.x) * 48) + 45)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[55] * kernel_shared[((((int)threadIdx.x) * 48) + 42)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[64] * kernel_shared[((((int)threadIdx.x) * 48) + 45)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[56] * kernel_shared[((((int)threadIdx.x) * 48) + 42)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[65] * kernel_shared[((((int)threadIdx.x) * 48) + 45)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[57] * kernel_shared[((((int)threadIdx.x) * 48) + 42)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[66] * kernel_shared[((((int)threadIdx.x) * 48) + 45)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[58] * kernel_shared[((((int)threadIdx.x) * 48) + 42)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[67] * kernel_shared[((((int)threadIdx.x) * 48) + 45)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[59] * kernel_shared[((((int)threadIdx.x) * 48) + 42)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[68] * kernel_shared[((((int)threadIdx.x) * 48) + 45)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[60] * kernel_shared[((((int)threadIdx.x) * 48) + 42)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[69] * kernel_shared[((((int)threadIdx.x) * 48) + 45)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[55] * kernel_shared[((((int)threadIdx.x) * 48) + 19)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[64] * kernel_shared[((((int)threadIdx.x) * 48) + 22)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[56] * kernel_shared[((((int)threadIdx.x) * 48) + 19)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[65] * kernel_shared[((((int)threadIdx.x) * 48) + 22)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[57] * kernel_shared[((((int)threadIdx.x) * 48) + 19)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[66] * kernel_shared[((((int)threadIdx.x) * 48) + 22)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[58] * kernel_shared[((((int)threadIdx.x) * 48) + 19)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[67] * kernel_shared[((((int)threadIdx.x) * 48) + 22)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[59] * kernel_shared[((((int)threadIdx.x) * 48) + 19)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[68] * kernel_shared[((((int)threadIdx.x) * 48) + 22)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[60] * kernel_shared[((((int)threadIdx.x) * 48) + 19)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[69] * kernel_shared[((((int)threadIdx.x) * 48) + 22)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[61] * kernel_shared[((((int)threadIdx.x) * 48) + 19)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[70] * kernel_shared[((((int)threadIdx.x) * 48) + 22)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[55] * kernel_shared[((((int)threadIdx.x) * 48) + 43)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[64] * kernel_shared[((((int)threadIdx.x) * 48) + 46)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[56] * kernel_shared[((((int)threadIdx.x) * 48) + 43)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[65] * kernel_shared[((((int)threadIdx.x) * 48) + 46)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[57] * kernel_shared[((((int)threadIdx.x) * 48) + 43)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[66] * kernel_shared[((((int)threadIdx.x) * 48) + 46)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[58] * kernel_shared[((((int)threadIdx.x) * 48) + 43)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[67] * kernel_shared[((((int)threadIdx.x) * 48) + 46)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[59] * kernel_shared[((((int)threadIdx.x) * 48) + 43)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[68] * kernel_shared[((((int)threadIdx.x) * 48) + 46)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[60] * kernel_shared[((((int)threadIdx.x) * 48) + 43)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[69] * kernel_shared[((((int)threadIdx.x) * 48) + 46)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[61] * kernel_shared[((((int)threadIdx.x) * 48) + 43)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[70] * kernel_shared[((((int)threadIdx.x) * 48) + 46)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[56] * kernel_shared[((((int)threadIdx.x) * 48) + 20)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[65] * kernel_shared[((((int)threadIdx.x) * 48) + 23)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[57] * kernel_shared[((((int)threadIdx.x) * 48) + 20)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[66] * kernel_shared[((((int)threadIdx.x) * 48) + 23)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[58] * kernel_shared[((((int)threadIdx.x) * 48) + 20)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[67] * kernel_shared[((((int)threadIdx.x) * 48) + 23)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[59] * kernel_shared[((((int)threadIdx.x) * 48) + 20)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[68] * kernel_shared[((((int)threadIdx.x) * 48) + 23)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[60] * kernel_shared[((((int)threadIdx.x) * 48) + 20)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[69] * kernel_shared[((((int)threadIdx.x) * 48) + 23)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[61] * kernel_shared[((((int)threadIdx.x) * 48) + 20)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[70] * kernel_shared[((((int)threadIdx.x) * 48) + 23)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[62] * kernel_shared[((((int)threadIdx.x) * 48) + 20)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[71] * kernel_shared[((((int)threadIdx.x) * 48) + 23)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[56] * kernel_shared[((((int)threadIdx.x) * 48) + 44)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[65] * kernel_shared[((((int)threadIdx.x) * 48) + 47)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[57] * kernel_shared[((((int)threadIdx.x) * 48) + 44)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[66] * kernel_shared[((((int)threadIdx.x) * 48) + 47)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[58] * kernel_shared[((((int)threadIdx.x) * 48) + 44)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[67] * kernel_shared[((((int)threadIdx.x) * 48) + 47)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[59] * kernel_shared[((((int)threadIdx.x) * 48) + 44)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[68] * kernel_shared[((((int)threadIdx.x) * 48) + 47)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[60] * kernel_shared[((((int)threadIdx.x) * 48) + 44)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[69] * kernel_shared[((((int)threadIdx.x) * 48) + 47)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[61] * kernel_shared[((((int)threadIdx.x) * 48) + 44)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[70] * kernel_shared[((((int)threadIdx.x) * 48) + 47)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[62] * kernel_shared[((((int)threadIdx.x) * 48) + 44)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[71] * kernel_shared[((((int)threadIdx.x) * 48) + 47)]));
+ }
+ }
+ for (int i1_inner = 0; i1_inner < 2; ++i1_inner) {
+ for (int i3_inner = 0; i3_inner < 7; ++i3_inner) {
+ compute[((((((((int)blockIdx.x) / 7) * 6272) + (((int)threadIdx.x) * 98)) + (i1_inner * 49)) + ((((int)blockIdx.x) % 7) * 7)) + i3_inner)] = max((conv2d_nchw[((i1_inner * 7) + i3_inner)] + bias[((((((int)blockIdx.x) / 7) * 128) + (((int)threadIdx.x) * 2)) + i1_inner)]), 0.000000e+00f);
}
}
- compute[((((int)blockIdx.x) * 784) + (((int)threadIdx.x) * 7))] = max((conv2d_nchw[0] + bias[((((int)blockIdx.x) * 16) + (((int)threadIdx.x) / 7))]), 0.000000e+00f);
- compute[(((((int)blockIdx.x) * 784) + (((int)threadIdx.x) * 7)) + 1)] = max((conv2d_nchw[1] + bias[((((int)blockIdx.x) * 16) + (((int)threadIdx.x) / 7))]), 0.000000e+00f);
- compute[(((((int)blockIdx.x) * 784) + (((int)threadIdx.x) * 7)) + 2)] = max((conv2d_nchw[2] + bias[((((int)blockIdx.x) * 16) + (((int)threadIdx.x) / 7))]), 0.000000e+00f);
- compute[(((((int)blockIdx.x) * 784) + (((int)threadIdx.x) * 7)) + 3)] = max((conv2d_nchw[3] + bias[((((int)blockIdx.x) * 16) + (((int)threadIdx.x) / 7))]), 0.000000e+00f);
- compute[(((((int)blockIdx.x) * 784) + (((int)threadIdx.x) * 7)) + 4)] = max((conv2d_nchw[4] + bias[((((int)blockIdx.x) * 16) + (((int)threadIdx.x) / 7))]), 0.000000e+00f);
- compute[(((((int)blockIdx.x) * 784) + (((int)threadIdx.x) * 7)) + 5)] = max((conv2d_nchw[5] + bias[((((int)blockIdx.x) * 16) + (((int)threadIdx.x) / 7))]), 0.000000e+00f);
- compute[(((((int)blockIdx.x) * 784) + (((int)threadIdx.x) * 7)) + 6)] = max((conv2d_nchw[6] + bias[((((int)blockIdx.x) * 16) + (((int)threadIdx.x) / 7))]), 0.000000e+00f);
}
</pre></div>
</div>
@@ -767,7 +1577,7 @@ In the example below we resume the status and do more 5 trials.</p>
Get devices for measurement successfully!
</pre></div>
</div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 6 minutes 3.020 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 6 minutes 17.931 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-conv2d-layer-cuda-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/e3e540f3b477c0c52d8eb73e674e8ffd/tune_conv2d_layer_cuda.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_conv2d_layer_cuda.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html b/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
index c307aca6cd..34634fd23d 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
@@ -921,7 +921,7 @@ so we can read the log file and load the best schedules.</p>
Evaluate inference time cost...
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 7.9057 7.9119 7.9122 7.8930 0.0090
+ 7.8962 7.8949 7.9029 7.8909 0.0050
</pre></div>
</div>
</div>
@@ -943,7 +943,7 @@ to learn how to use the RPC Tracker and RPC Server.
To use the RPC Tracker in auto-scheduler, replace the runner in <code class="code docutils literal notranslate"><span class="pre">TuningOptions</span></code>
with <a class="reference internal" href="../../reference/api/python/auto_scheduler.html#tvm.auto_scheduler.RPCRunner" title="tvm.auto_scheduler.RPCRunner"><code class="xref any py py-class docutils literal notranslate"><span class="pre">auto_scheduler.RPCRunner</span></code></a>.</p></li>
</ol>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 7.318 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 9.205 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-network-cuda-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/eafe360d52540634c9eea0fa89e804bd/tune_network_cuda.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_network_cuda.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_network_x86.html b/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
index 15fafcf3ce..5eb247037e 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
@@ -940,7 +940,7 @@ so we can read the log file and load the best schedules.</p>
Evaluate inference time cost...
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 756.2868 756.3433 756.3818 756.1353 0.1083
+ 758.8034 759.8965 760.0964 756.4172 1.6892
</pre></div>
</div>
</div>
@@ -962,7 +962,7 @@ to learn how to use the RPC Tracker and RPC Server.
To use the RPC Tracker in auto-scheduler, replace the runner in <code class="code docutils literal notranslate"><span class="pre">TuningOptions</span></code>
with <a class="reference internal" href="../../reference/api/python/auto_scheduler.html#tvm.auto_scheduler.RPCRunner" title="tvm.auto_scheduler.RPCRunner"><code class="xref any py py-class docutils literal notranslate"><span class="pre">auto_scheduler.RPCRunner</span></code></a>.</p></li>
</ol>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 42.858 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 45.180 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-network-x86-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/e416b94ca1090b0897c0f6e0df95b911/tune_network_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_network_x86.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_sparse_x86.html b/docs/how_to/tune_with_autoscheduler/tune_sparse_x86.html
index 5db5681e61..84a36f7e6e 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_sparse_x86.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_sparse_x86.html
@@ -637,26 +637,74 @@ class Module:
@T.prim_func
def main(placeholder: T.Buffer((128, 256), "float32"), placeholder_1: T.Buffer((4916, 16, 1), "float32"), placeholder_2: T.Buffer((4916,), "int32"), placeholder_3: T.Buffer((33,), "int32"), placeholder_4: T.Buffer((128, 512), "float32"), compute: T.Buffer((128, 512), "float32")):
T.func_attr({"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True})
- for i0_outer_i1_outer_fused in T.parallel(128):
- compute_1 = T.allocate([512], "float32", "global")
- compute_2 = T.Buffer((512,), data=compute_1)
- for i_outer_inner, nb_j_inner in T.grid(4, 2):
- for i_inner_init, j_init in T.grid(4, 16):
- compute_2[i_outer_inner * 128 + i_inner_init * 32 + nb_j_inner * 16 + j_init] = T.float32(0)
- for elem_idx, i_inner, j in T.grid(T.Let(placeholder_5[cse_var_1 + 1] - placeholder_5[cse_var_1], where={cse_var_1: i0_outer_i1_outer_fused % 16 * 2 + nb_j_inner}), 4, 16):
- cse_var_1 = T.int32()
+ for i0_outer_i1_outer_fused in T.parallel(16):
+ compute_1 = T.allocate([4096], "float32", "global")
+ compute_2 = T.Buffer((4096,), data=compute_1)
+ for i_outer_inner, nb_j_inner in T.grid(2, 2):
+ for i_inner_init in range(64):
+ cse_var_1: T.int32 = i_outer_inner * 2048 + i_inner_init * 32 + nb_j_inner * 16
+ compute_2[cse_var_1] = T.float32(0)
+ compute_2[cse_var_1 + 1] = T.float32(0)
+ compute_2[cse_var_1 + 2] = T.float32(0)
+ compute_2[cse_var_1 + 3] = T.float32(0)
+ compute_2[cse_var_1 + 4] = T.float32(0)
+ compute_2[cse_var_1 + 5] = T.float32(0)
+ compute_2[cse_var_1 + 6] = T.float32(0)
+ compute_2[cse_var_1 + 7] = T.float32(0)
+ compute_2[cse_var_1 + 8] = T.float32(0)
+ compute_2[cse_var_1 + 9] = T.float32(0)
+ compute_2[cse_var_1 + 10] = T.float32(0)
+ compute_2[cse_var_1 + 11] = T.float32(0)
+ compute_2[cse_var_1 + 12] = T.float32(0)
+ compute_2[cse_var_1 + 13] = T.float32(0)
+ compute_2[cse_var_1 + 14] = T.float32(0)
+ compute_2[cse_var_1 + 15] = T.float32(0)
+ for elem_idx, i_inner in T.grid(T.Let(placeholder_5[cse_var_2 + 1] - placeholder_5[cse_var_2], where={cse_var_2: i0_outer_i1_outer_fused * 2 + nb_j_inner}), 64):
+ cse_var_2 = T.int32()
placeholder_5 = T.Buffer((33,), "int32", data=placeholder_3.data)
- cse_var_3: T.int32 = i0_outer_i1_outer_fused % 16 * 2 + nb_j_inner
- cse_var_2: T.int32 = i_outer_inner * 128 + i_inner * 32 + nb_j_inner * 16 + j
+ cse_var_21: T.int32 = elem_idx * 16
+ cse_var_20: T.int32 = i0_outer_i1_outer_fused * 2 + nb_j_inner
+ cse_var_19: T.int32 = i_outer_inner * 16384 + i_inner * 256
+ cse_var_18: T.int32 = i_outer_inner * 2048 + i_inner * 32 + nb_j_inner * 16
+ cse_var_17: T.int32 = cse_var_18 + 9
+ cse_var_16: T.int32 = cse_var_18 + 8
+ cse_var_15: T.int32 = cse_var_18 + 7
+ cse_var_14: T.int32 = cse_var_18 + 6
+ cse_var_13: T.int32 = cse_var_18 + 5
+ cse_var_12: T.int32 = cse_var_18 + 4
+ cse_var_11: T.int32 = cse_var_18 + 3
+ cse_var_10: T.int32 = cse_var_18 + 2
+ cse_var_9: T.int32 = cse_var_18 + 15
+ cse_var_8: T.int32 = cse_var_18 + 14
+ cse_var_7: T.int32 = cse_var_18 + 13
+ cse_var_6: T.int32 = cse_var_18 + 12
+ cse_var_5: T.int32 = cse_var_18 + 11
+ cse_var_4: T.int32 = cse_var_18 + 10
+ cse_var_3: T.int32 = cse_var_18 + 1
placeholder_6 = T.Buffer((78656,), data=placeholder_1.data)
placeholder_7 = T.Buffer((32768,), data=placeholder.data)
placeholder_8 = T.Buffer((4916,), "int32", data=placeholder_2.data)
- compute_2[cse_var_2] = compute_2[cse_var_2] + placeholder_6[placeholder_5[cse_var_3] * 16 + elem_idx * 16 + j] * T.max(placeholder_7[i0_outer_i1_outer_fused // 16 * 4096 + i_outer_inner * 1024 + i_inner * 256 + placeholder_8[placeholder_5[cse_var_3] + elem_idx]], T.float32(0))
- for i0_inner in range(16):
- cse_var_4: T.int32 = i0_outer_i1_outer_fused // 16 * 8192 + i0_inner * 512 + i0_outer_i1_outer_fused % 16 * 32
+ compute_2[cse_var_18] = compute_2[cse_var_18] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_3] = compute_2[cse_var_3] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 1] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_10] = compute_2[cse_var_10] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 2] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_11] = compute_2[cse_var_11] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 3] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_12] = compute_2[cse_var_12] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 4] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_13] = compute_2[cse_var_13] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 5] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_14] = compute_2[cse_var_14] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 6] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_15] = compute_2[cse_var_15] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 7] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_16] = compute_2[cse_var_16] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 8] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_17] = compute_2[cse_var_17] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 9] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_4] = compute_2[cse_var_4] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 10] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_5] = compute_2[cse_var_5] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 11] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_6] = compute_2[cse_var_6] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 12] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_7] = compute_2[cse_var_7] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 13] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_8] = compute_2[cse_var_8] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 14] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ compute_2[cse_var_9] = compute_2[cse_var_9] + placeholder_6[placeholder_5[cse_var_20] * 16 + cse_var_21 + 15] * T.max(placeholder_7[cse_var_19 + placeholder_8[placeholder_5[cse_var_20] + elem_idx]], T.float32(0))
+ for i0_inner in range(128):
+ cse_var_22: T.int32 = i0_inner * 512 + i0_outer_i1_outer_fused * 32
compute_3 = T.Buffer((65536,), data=compute.data)
placeholder_5 = T.Buffer((65536,), data=placeholder_4.data)
- compute_3[cse_var_4:cse_var_4 + 32] = T.max(compute_2[i0_inner * 32:i0_inner * 32 + 32] + placeholder_5[cse_var_4:cse_var_4 + 32], T.Broadcast(T.float32(0), 32))
+ compute_3[cse_var_22:cse_var_22 + 32] = T.max(compute_2[i0_inner * 32:i0_inner * 32 + 32] + placeholder_5[cse_var_22:cse_var_22 + 32], T.Broadcast(T.float32(0), 32))
</pre></div>
</div>
</div>
@@ -690,7 +738,7 @@ class Module:
<span class="p">)</span>
</pre></div>
</div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 1.496 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 1.870 ms
</pre></div>
</div>
<div class="admonition note">
diff --git a/docs/how_to/tune_with_autotvm/sg_execution_times.html b/docs/how_to/tune_with_autotvm/sg_execution_times.html
index b785aecfeb..1250a3a0e0 100644
--- a/docs/how_to/tune_with_autotvm/sg_execution_times.html
+++ b/docs/how_to/tune_with_autotvm/sg_execution_times.html
@@ -345,7 +345,7 @@
<div class="section" id="computation-times">
<span id="sphx-glr-how-to-tune-with-autotvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:28.687</strong> total execution time for <strong>how_to_tune_with_autotvm</strong> files:</p>
+<p><strong>00:44.950</strong> total execution time for <strong>how_to_tune_with_autotvm</strong> files:</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 84%" />
@@ -354,7 +354,7 @@
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="tune_conv2d_cuda.html#sphx-glr-how-to-tune-with-autotvm-tune-conv2d-cuda-py"><span class="std std-ref">Tuning High Performance Convolution on NVIDIA GPUs</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_cuda.py</span></code>)</p></td>
-<td><p>00:28.652</p></td>
+<td><p>00:44.915</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="tune_relay_x86.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-x86-py"><span class="std std-ref">Auto-tuning a Convolutional Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_x86.py</span></code>)</p></td>
@@ -366,7 +366,7 @@
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="tune_relay_arm.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-arm-py"><span class="std std-ref">Auto-tuning a Convolutional Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_arm.py</span></code>)</p></td>
-<td><p>00:00.004</p></td>
+<td><p>00:00.005</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="tune_relay_mobile_gpu.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-mobile-gpu-py"><span class="std std-ref">Auto-tuning a Convolutional Network for Mobile GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_mobile_gpu.py</span></code>)</p></td>
diff --git a/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html b/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
index f431cd76cd..def95d37d0 100644
--- a/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
+++ b/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
@@ -695,7 +695,7 @@ Traceback (most recent call last):
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 4, 1, 16]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 8, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1270006
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 16, 8]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 1, 64]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1128978
No: 2 GFLOPS: 0.00/0.00 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
@@ -818,8 +818,10 @@ Traceback (most recent call last):
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 1, 32]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 128, 4]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3579805
-No: 3 GFLOPS: 0.00/0.00 result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 8, 2, 4]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 64, 4]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,5896331
+No: 3 GFLOPS: 17.53/17.53 result: MeasureResult(costs=(0.01320753811111111,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.656529664993286, timestamp=1678838216.3901742) [('tile_f', [-1, 4, 2, 8]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 2, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2714505
+No: 4 GFLOPS: 58.24/58.24 result: MeasureResult(costs=(0.003975269448275862,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.8088016510009766, timestamp=1678838218.727577) [('tile_f', [-1, 2, 1, 16]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 1, 8]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2614205
+No: 5 GFLOPS: 0.00/58.24 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -941,8 +943,8 @@ Traceback (most recent call last):
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 2, 8]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 256]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,186484
-No: 4 GFLOPS: 0.00/0.00 result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 16, 16, 1]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 128, 2]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9742958
+No: 6 GFLOPS: 0.00/58.24 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -1064,501 +1066,161 @@ Traceback (most recent call last):
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 64, 4, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 2, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1260458
-No: 5 GFLOPS: 25.12/25.12 result: MeasureResult(costs=(0.009214138363636363,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.087787389755249, timestamp=1678816859.364998) [('tile_f', [-1, 8, 2, 1]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 1, 16]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,5153733
-No: 6 GFLOPS: 0.00/25.12 result: Traceback (most recent call last):
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
- func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
- func = build(s, args, target=target, runtime=runtime)
- File "/workspace/python/tvm/driver/build_module.py", line 227, in build
- input_mod = lower(inputs, args, name=name, binds=binds)
- File "/workspace/python/tvm/driver/build_module.py", line 134, in lower
- return ffi.lower_schedule(inp, args, name, binds, simple_mode)
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 2, 256]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 16, 8]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6307618
+No: 7 GFLOPS: 0.00/58.24 result: Traceback (most recent call last):
+ File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 742, in __call__
+ yield remote, remote.load_module(os.path.split(build_result.filename)[1])
+ File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 706, in run_through_rpc
+ costs = time_f(*args).results
+ File "/workspace/python/tvm/runtime/module.py", line 357, in evaluator
+ blob = feval(*args)
File "tvm/_ffi/_cython/./packed_func.pxi", line 331, in tvm._ffi._cy3.core.PackedFuncBase.__call__
- File "tvm/_ffi/_cython/./packed_func.pxi", line 276, in tvm._ffi._cy3.core.FuncCall
+ File "tvm/_ffi/_cython/./packed_func.pxi", line 262, in tvm._ffi._cy3.core.FuncCall
+ File "tvm/_ffi/_cython/./packed_func.pxi", line 251, in tvm._ffi._cy3.core.FuncCall3
File "tvm/_ffi/_cython/./base.pxi", line 181, in tvm._ffi._cy3.core.CHECK_CALL
tvm._ffi.base.TVMError: Traceback (most recent call last):
- 24: TVMFuncCall
+ 4: TVMFuncCall
at ../src/runtime/c_runtime_api.cc:477
- 23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
- at ../include/tvm/runtime/packed_func.h:1217
- 22: Call
- at ../include/tvm/runtime/packed_func.h:1213
- 21: operator()
- at ../include/tvm/runtime/packed_func.h:1734
- 20: unpack_call<tvm::IRModule, 5, tvm::<lambda(tvm::te::Schedule, const tvm::runtime::Array<tvm::runtime::ObjectRef>&, const tvm::runtime::String&, const tvm::runtime::Map<tvm::te::Tensor, tvm::tir::Buffer>&, bool)> >
- at ../include/tvm/runtime/packed_func.h:1674
- 19: run<>
- at ../include/tvm/runtime/packed_func.h:1634
- 18: run<tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 17: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 16: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 15: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 14: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1649
- 13: operator()
- at ../src/driver/driver_api.cc:402
- 12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array<tvm::runtime::ObjectRef, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<tvm::te::Tensor, tvm::tir::Buffer, std::hash<tvm::te::Tensor>, std::equal_to<tvm::te::Tensor>, std::allocator<std::pair<tvm::te::Tensor const, tvm::tir::Buffer> > > const&, tvm::GlobalVarSupply, bool)
- at ../src/driver/driver_api.cc:388
- 11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array<tvm::transform::Pass, void>)
- at ../src/driver/driver_api.cc:283
- 10: tvm::transform::Pass::operator()(tvm::IRModule) const
- at ../src/ir/transform.cc:258
- 9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:451
- 7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/tir/ir/transform.cc:100
- 5: tvm::runtime::TypedPackedFunc<tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)>::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
- at ../include/tvm/runtime/packed_func.h:1753
- 4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher<tvm::tir::PrimFunc>::run<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::runtime::PackedFunc const&, tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&)
- at ../include/tvm/runtime/packed_func.h:1697
- 3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&) const
- at ../include/tvm/runtime/packed_func.h:1621
- 2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
- at ../include/tvm/runtime/packed_func.h:1217
- 1: Call
- at ../include/tvm/runtime/packed_func.h:1213
- 0: operator()
- at ../src/runtime/c_runtime_api.cc:534
- File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
- raise InstantiationError("Skipped because of invalid gpu kernel")
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel
+ 3: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+ at ../include/tvm/runtime/packed_func.h:1217
+ 2: tvm::runtime::RPCWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
+ at ../src/runtime/rpc/rpc_module.cc:129
+ 1: tvm::runtime::RPCClientSession::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)> const&)
+ at ../src/runtime/rpc/rpc_endpoint.cc:1012
+ 0: tvm::runtime::RPCEndpoint::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)>)
+ at ../src/runtime/rpc/rpc_endpoint.cc:804
+ File "../src/runtime/rpc/rpc_endpoint.cc", line 804
+TVMError:
+---------------------------------------------------------------
+An error occurred during the execution of TVM.
+For more information, please see: https://tvm.apache.org/docs/errors.html
+---------------------------------------------------------------
+ Check failed: (code == RPCCode::kReturn) is false: code=kShutdown
+
+During handling of the above exception, another exception occurred:
Traceback (most recent call last):
- 24: TVMFuncCall
- at ../src/runtime/c_runtime_api.cc:477
- 23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
- at ../include/tvm/runtime/packed_func.h:1217
- 22: Call
- at ../include/tvm/runtime/packed_func.h:1213
- 21: operator()
- at ../include/tvm/runtime/packed_func.h:1734
- 20: unpack_call<tvm::IRModule, 5, tvm::<lambda(tvm::te::Schedule, const tvm::runtime::Array<tvm::runtime::ObjectRef>&, const tvm::runtime::String&, const tvm::runtime::Map<tvm::te::Tensor, tvm::tir::Buffer>&, bool)> >
- at ../include/tvm/runtime/packed_func.h:1674
- 19: run<>
- at ../include/tvm/runtime/packed_func.h:1634
- 18: run<tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 17: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 16: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 15: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 14: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1649
- 13: operator()
- at ../src/driver/driver_api.cc:402
- 12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array<tvm::runtime::ObjectRef, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<tvm::te::Tensor, tvm::tir::Buffer, std::hash<tvm::te::Tensor>, std::equal_to<tvm::te::Tensor>, std::allocator<std::pair<tvm::te::Tensor const, tvm::tir::Buffer> > > const&, tvm::GlobalVarSupply, bool)
- at ../src/driver/driver_api.cc:388
- 11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array<tvm::transform::Pass, void>)
- at ../src/driver/driver_api.cc:283
- 10: tvm::transform::Pass::operator()(tvm::IRModule) const
- at ../src/ir/transform.cc:258
- 9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:451
- 7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/tir/ir/transform.cc:100
- 5: tvm::runtime::TypedPackedFunc<tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)>::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
- at ../include/tvm/runtime/packed_func.h:1753
- 4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher<tvm::tir::PrimFunc>::run<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::runtime::PackedFunc const&, tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&)
- at ../include/tvm/runtime/packed_func.h:1697
- 3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&) const
- at ../include/tvm/runtime/packed_func.h:1621
- 2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
- at ../include/tvm/runtime/packed_func.h:1217
- 1: Call
- at ../include/tvm/runtime/packed_func.h:1213
- 0: operator()
- at ../src/runtime/c_runtime_api.cc:534
- File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
- raise InstantiationError("Skipped because of invalid gpu kernel")
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 16, 4, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 2, 16]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7869863
-No: 7 GFLOPS: 0.00/25.12 result: Traceback (most recent call last):
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
- func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
- func = build(s, args, target=target, runtime=runtime)
- File "/workspace/python/tvm/driver/build_module.py", line 227, in build
- input_mod = lower(inputs, args, name=name, binds=binds)
- File "/workspace/python/tvm/driver/build_module.py", line 134, in lower
- return ffi.lower_schedule(inp, args, name, binds, simple_mode)
- File "tvm/_ffi/_cython/./packed_func.pxi", line 331, in tvm._ffi._cy3.core.PackedFuncBase.__call__
- File "tvm/_ffi/_cython/./packed_func.pxi", line 276, in tvm._ffi._cy3.core.FuncCall
- File "tvm/_ffi/_cython/./base.pxi", line 181, in tvm._ffi._cy3.core.CHECK_CALL
+ File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 706, in run_through_rpc
+ costs = time_f(*args).results
+ File "/usr/lib/python3.7/contextlib.py", line 130, in __exit__
+ self.gen.throw(type, value, traceback)
+ File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 746, in __call__
+ remote.remove(build_result.filename)
+ File "/workspace/python/tvm/rpc/client.py", line 144, in remove
+ self._remote_funcs["remove"] = self.get_function("tvm.rpc.server.remove")
+ File "/workspace/python/tvm/rpc/client.py", line 72, in get_function
+ return self._sess.get_function(name)
+ File "/workspace/python/tvm/runtime/module.py", line 171, in get_function
+ self.handle, c_str(name), ctypes.c_int(query_imports), ctypes.byref(ret_handle)
+ File "/workspace/python/tvm/_ffi/base.py", line 348, in check_call
+ raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
- 24: TVMFuncCall
- at ../src/runtime/c_runtime_api.cc:477
- 23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
- at ../include/tvm/runtime/packed_func.h:1217
- 22: Call
- at ../include/tvm/runtime/packed_func.h:1213
- 21: operator()
- at ../include/tvm/runtime/packed_func.h:1734
- 20: unpack_call<tvm::IRModule, 5, tvm::<lambda(tvm::te::Schedule, const tvm::runtime::Array<tvm::runtime::ObjectRef>&, const tvm::runtime::String&, const tvm::runtime::Map<tvm::te::Tensor, tvm::tir::Buffer>&, bool)> >
- at ../include/tvm/runtime/packed_func.h:1674
- 19: run<>
- at ../include/tvm/runtime/packed_func.h:1634
- 18: run<tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 17: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 16: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 15: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 14: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1649
- 13: operator()
- at ../src/driver/driver_api.cc:402
- 12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array<tvm::runtime::ObjectRef, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<tvm::te::Tensor, tvm::tir::Buffer, std::hash<tvm::te::Tensor>, std::equal_to<tvm::te::Tensor>, std::allocator<std::pair<tvm::te::Tensor const, tvm::tir::Buffer> > > const&, tvm::GlobalVarSupply, bool)
- at ../src/driver/driver_api.cc:388
- 11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array<tvm::transform::Pass, void>)
- at ../src/driver/driver_api.cc:283
- 10: tvm::transform::Pass::operator()(tvm::IRModule) const
- at ../src/ir/transform.cc:258
- 9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:451
- 7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/tir/ir/transform.cc:100
- 5: tvm::runtime::TypedPackedFunc<tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)>::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
- at ../include/tvm/runtime/packed_func.h:1753
- 4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher<tvm::tir::PrimFunc>::run<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::runtime::PackedFunc const&, tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&)
- at ../include/tvm/runtime/packed_func.h:1697
- 3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&) const
+ 52: 0xffffffffffffffff
+ 51: _start
+ 50: __libc_start_main
+ 49: _Py_UnixMain
+ 48: 0x0000000000650da0
+ 47: 0x0000000000650afa
+ 46: _PyFunction_FastCallDict
+ 45: _PyEval_EvalCodeWithName
+ 44: _PyEval_EvalFrameDefault
+ 43: _PyFunction_FastCallKeywords
+ 42: _PyEval_EvalCodeWithName
+ 41: _PyEval_EvalFrameDefault
+ 40: _PyMethodDef_RawFastCallKeywords
+ 39: 0x0000000000546369
+ 38: _PyEval_EvalCodeWithName
+ 37: _PyEval_EvalFrameDefault
+ 36: _PyFunction_FastCallKeywords
+ 35: _PyEval_EvalCodeWithName
+ 34: _PyEval_EvalFrameDefault
+ 33: _PyFunction_FastCallDict
+ 32: _PyEval_EvalCodeWithName
+ 31: _PyEval_EvalFrameDefault
+ 30: _PyObject_FastCallDict
+ 29: 0x00000000004c06e1
+ 28: _PyFunction_FastCallDict
+ 27: _PyEval_EvalFrameDefault
+ 26: _PyMethodDescr_FastCallKeywords
+ 25: 0x00000000005dcb58
+ 24: 0x00000000005dc83f
+ 23: 0x00000000004ba127
+ 22: _PyEval_EvalFrameDefault
+ 21: _PyFunction_FastCallKeywords
+ 20: _PyEval_EvalFrameDefault
+ 19: _PyFunction_FastCallKeywords
+ 18: _PyEval_EvalFrameDefault
+ 17: _PyFunction_FastCallKeywords
+ 16: _PyEval_EvalCodeWithName
+ 15: _PyEval_EvalFrameDefault
+ 14: 0x0000000000537c30
+ 13: _PyObject_FastCallKeywords
+ 12: 0x00007f670fd0bfa2
+ 11: _ctypes_callproc
+ 10: ffi_call
+ 9: ffi_call_unix64
+ 8: TVMModGetFunction
+ at ../src/runtime/c_runtime_api.cc:408
+ 7: tvm::runtime::ModuleNode::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool)
+ at ../src/runtime/module.cc:66
+ 6: tvm::runtime::RPCModuleNode::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)
+ at ../src/runtime/rpc/rpc_module.cc:185
+ 5: tvm::runtime::RPCClientSession::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
+ at ../src/runtime/rpc/rpc_endpoint.cc:1007
+ 4: tvm::runtime::TVMRetValue tvm::runtime::RPCEndpoint::SysCallRemote<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>(tvm::runtime::RPCCode, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
+ at ../src/runtime/rpc/rpc_endpoint.h:223
+ 3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>(int&&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
at ../include/tvm/runtime/packed_func.h:1621
2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
at ../include/tvm/runtime/packed_func.h:1217
1: Call
at ../include/tvm/runtime/packed_func.h:1213
0: operator()
- at ../src/runtime/c_runtime_api.cc:534
- File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
- raise InstantiationError("Skipped because of invalid gpu kernel")
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel
+ at ../src/runtime/rpc/rpc_endpoint.cc:684
+ File "../src/runtime/rpc/rpc_endpoint.cc", line 684
+TVMError:
+---------------------------------------------------------------
+An error occurred during the execution of TVM.
+For more information, please see: https://tvm.apache.org/docs/errors.html
+---------------------------------------------------------------
+ Check failed: (code == RPCCode::kReturn) is false: code=1
Traceback (most recent call last):
- 24: TVMFuncCall
- at ../src/runtime/c_runtime_api.cc:477
- 23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
- at ../include/tvm/runtime/packed_func.h:1217
- 22: Call
- at ../include/tvm/runtime/packed_func.h:1213
- 21: operator()
- at ../include/tvm/runtime/packed_func.h:1734
- 20: unpack_call<tvm::IRModule, 5, tvm::<lambda(tvm::te::Schedule, const tvm::runtime::Array<tvm::runtime::ObjectRef>&, const tvm::runtime::String&, const tvm::runtime::Map<tvm::te::Tensor, tvm::tir::Buffer>&, bool)> >
- at ../include/tvm/runtime/packed_func.h:1674
- 19: run<>
- at ../include/tvm/runtime/packed_func.h:1634
- 18: run<tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 17: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 16: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 15: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 14: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1649
- 13: operator()
- at ../src/driver/driver_api.cc:402
- 12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array<tvm::runtime::ObjectRef, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<tvm::te::Tensor, tvm::tir::Buffer, std::hash<tvm::te::Tensor>, std::equal_to<tvm::te::Tensor>, std::allocator<std::pair<tvm::te::Tensor const, tvm::tir::Buffer> > > const&, tvm::GlobalVarSupply, bool)
- at ../src/driver/driver_api.cc:388
- 11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array<tvm::transform::Pass, void>)
- at ../src/driver/driver_api.cc:283
- 10: tvm::transform::Pass::operator()(tvm::IRModule) const
- at ../src/ir/transform.cc:258
- 9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:451
- 7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/tir/ir/transform.cc:100
- 5: tvm::runtime::TypedPackedFunc<tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)>::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
- at ../include/tvm/runtime/packed_func.h:1753
- 4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher<tvm::tir::PrimFunc>::run<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::runtime::PackedFunc const&, tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&)
- at ../include/tvm/runtime/packed_func.h:1697
- 3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&) const
- at ../include/tvm/runtime/packed_func.h:1621
- 2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
- at ../include/tvm/runtime/packed_func.h:1217
- 1: Call
- at ../include/tvm/runtime/packed_func.h:1213
- 0: operator()
- at ../src/runtime/c_runtime_api.cc:534
- File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
- raise InstantiationError("Skipped because of invalid gpu kernel")
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 4, 64]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 4, 4]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,8011727
-No: 8 GFLOPS: 0.00/25.12 result: Traceback (most recent call last):
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
- func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
- func = build(s, args, target=target, runtime=runtime)
- File "/workspace/python/tvm/driver/build_module.py", line 227, in build
- input_mod = lower(inputs, args, name=name, binds=binds)
- File "/workspace/python/tvm/driver/build_module.py", line 134, in lower
- return ffi.lower_schedule(inp, args, name, binds, simple_mode)
- File "tvm/_ffi/_cython/./packed_func.pxi", line 331, in tvm._ffi._cy3.core.PackedFuncBase.__call__
- File "tvm/_ffi/_cython/./packed_func.pxi", line 276, in tvm._ffi._cy3.core.FuncCall
- File "tvm/_ffi/_cython/./base.pxi", line 181, in tvm._ffi._cy3.core.CHECK_CALL
-tvm._ffi.base.TVMError: Traceback (most recent call last):
- 24: TVMFuncCall
- at ../src/runtime/c_runtime_api.cc:477
- 23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
- at ../include/tvm/runtime/packed_func.h:1217
- 22: Call
- at ../include/tvm/runtime/packed_func.h:1213
- 21: operator()
- at ../include/tvm/runtime/packed_func.h:1734
- 20: unpack_call<tvm::IRModule, 5, tvm::<lambda(tvm::te::Schedule, const tvm::runtime::Array<tvm::runtime::ObjectRef>&, const tvm::runtime::String&, const tvm::runtime::Map<tvm::te::Tensor, tvm::tir::Buffer>&, bool)> >
- at ../include/tvm/runtime/packed_func.h:1674
- 19: run<>
- at ../include/tvm/runtime/packed_func.h:1634
- 18: run<tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 17: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 16: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 15: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1634
- 14: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
- at ../include/tvm/runtime/packed_func.h:1649
- 13: operator()
- at ../src/driver/driver_api.cc:402
- 12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array<tvm::runtime::ObjectRef, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<tvm::te::Tensor, tvm::tir::Buffer, std::hash<tvm::te::Tensor>, std::equal_to<tvm::te::Tensor>, std::allocator<std::pair<tvm::te::Tensor const, tvm::tir::Buffer> > > const&, tvm::GlobalVarSupply, bool)
- at ../src/driver/driver_api.cc:388
- 11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array<tvm::transform::Pass, void>)
- at ../src/driver/driver_api.cc:283
- 10: tvm::transform::Pass::operator()(tvm::IRModule) const
- at ../src/ir/transform.cc:258
- 9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:451
- 7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/ir/transform.cc:274
- 6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
- at ../src/tir/ir/transform.cc:100
- 5: tvm::runtime::TypedPackedFunc<tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)>::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
- at ../include/tvm/runtime/packed_func.h:1753
- 4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher<tvm::tir::PrimFunc>::run<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::runtime::PackedFunc const&, tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&)
- at ../include/tvm/runtime/packed_func.h:1697
- 3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&) const
- at ../include/tvm/runtime/packed_func.h:1621
- 2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
- at ../include/tvm/runtime/packed_func.h:1217
- 1: Call
- at ../include/tvm/runtime/packed_func.h:1213
- 0: operator()
- at ../src/runtime/c_runtime_api.cc:534
- File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
- raise InstantiationError("Skipped because of invalid gpu kernel")
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel
-
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 4, 16, 8]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 128, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,413980
-No: 9 GFLOPS: 0.00/25.12 result: Traceback (most recent call last):
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
- func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
- File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
- func = build(s, args, target=target, runtime=runtime)
- File "/workspace/python/tvm/driver/build_module.py", line 227, in build
- input_mod = lower(inputs, args, name=name, binds=binds)
- File "/workspace/python/tvm/driver/build_module.py", line 134, in lower
- return ffi.lower_schedule(inp, args, name, binds, simple_mode)
- File "tvm/_ffi/_cython/./packed_func.pxi", line 331, in tvm._ffi._cy3.core.PackedFuncBase.__call__
- File "tvm/_ffi/_cython/./packed_func.pxi", line 276, in tvm._ffi._cy3.core.FuncCall
- File "tvm/_ffi/_cython/./base.pxi", line 181, in tvm._ffi._cy3.core.CHECK_CALL
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel
-
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 16, 8, 2]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 8, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,8532123
-No: 10 GFLOPS: 0.00/25.12 result: Traceback (most recent call last):
+ 52: 0xffffffffffffffff
+ 51: _start
+ 50: __libc_start_main
+ 49: _Py_UnixMain
+ 48: 0x0000000000650da0
+ 47: 0x0000000000650afa
+ 46: _PyFunction_FastCallDict
+ 45: _PyEval_EvalCodeWithName
+ 44: _PyEval_EvalFrameDefault
+ 43: _PyFunction_FastCallKeywords
+ 42: _PyEval_EvalCodeWithName
+ 41: _PyEval_EvalFrameDefault
+ 40: _PyMethodDef_RawFastCallKeywords
+ 39: 0x0000000000546369
+ 38: _PyEval_EvalCodeWithName
+ 37: _PyEval_EvalFrameDefault
+ 36: _PyFunction_FastCallKeywords
+ 35: _PyEval_EvalCodeWithName
+ 34: _PyEval_EvalFrameDefault
+ 33: _PyFunction_FastCallDict
+ 32: _PyEval_EvalCodeWithName
+ 31: _PyEval_EvalFrameDefault
+ 30: _PyObject_FastCallDict
+ 29: 0x00000000004c06e1
+ 28: _PyFunction_FastCallDict
+ 27: _PyEval_EvalFrameDefault
+ 26: _PyMethodDescr_FastCallKeywords
+ 25: 0x00000000005dcb58
+ 24: 0x00000000005dc83f
+ 23: 0x00000000004ba127
+ 22: _PyEval_EvalFrameDefault
+ 21: _PyFunction_FastCallKeywords
+ 20: _PyEval_EvalFrameDefault
+ 19: _PyFunction_FastCall [('tile_f', [-1, 16, 1, 8]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1259640
+No: 8 GFLOPS: 0.00/58.24 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -1680,8 +1342,8 @@ Traceback (most recent call last):
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 4, 16, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 64, 4]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9769187
-No: 11 GFLOPS: 0.00/25.12 result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 4, 1, 8]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 16, 16]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7686938
+No: 9 GFLOPS: 0.00/58.24 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -1803,8 +1465,9 @@ Traceback (most recent call last):
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 1, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 128, 2]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,8192195
-No: 12 GFLOPS: 0.00/25.12 result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 64, 2, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 16, 16]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,8846050
+No: 10 GFLOPS: 0.86/58.24 result: MeasureResult(costs=(0.26825750725,), error_no=MeasureErrorNo.NO_ERROR, all_cost=5.713873624801636, timestamp=1678838232.9064837) [('tile_f', [-1, 64, 2, 2]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 4]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,5297450
+No: 11 GFLOPS: 0.00/58.24 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -1926,9 +1589,10 @@ Traceback (most recent call last):
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 8, 2]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 16, 16]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1491239
-No: 13 GFLOPS: 273.35/273.35 result: MeasureResult(costs=(0.0008468909682539682,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.3628065586090088, timestamp=1678816861.156233) [('tile_f', [-1, 4, 2, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 8, 1]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,206592
-No: 14 GFLOPS: 0.00/273.35 result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 8, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 4, 32]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,8279340
+No: 12 GFLOPS: 160.00/160.00 result: MeasureResult(costs=(0.0014468562027027028,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.133188247680664, timestamp=1678838233.686128) [('tile_f', [-1, 1, 8, 1]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 64, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,21367
+No: 13 GFLOPS: 8.73/160.00 result: MeasureResult(costs=(0.026505223999999997,), error_no=MeasureErrorNo.NO_ERROR, all_cost=3.9389443397521973, timestamp=1678838237.7996368) [('tile_f', [-1, 8, 2, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 16, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3889393
+No: 14 GFLOPS: 0.00/160.00 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -2050,8 +1714,9 @@ Traceback (most recent call last):
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 8, 4, 16]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 8, 32]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,10219178
-No: 15 GFLOPS: 0.00/273.35 result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 4, 4]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 128, 2]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4514515
+No: 15 GFLOPS: 765.53/765.53 result: MeasureResult(costs=(0.00030240633888888887,), error_no=MeasureErrorNo.NO_ERROR, all_cost=2.8589601516723633, timestamp=1678838238.8367987) [('tile_f', [-1, 2, 2, 1]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 8, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7950151
+No: 16 GFLOPS: 0.00/765.53 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -2173,8 +1838,8 @@ Traceback (most recent call last):
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 512, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 16, 32]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4026934
-No: 16 GFLOPS: 0.00/273.35 result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 1, 512]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 4, 32]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4408799
+No: 17 GFLOPS: 0.00/765.53 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -2296,9 +1961,8 @@ Traceback (most recent call last):
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 32, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 512, 1]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6617031
-No: 17 GFLOPS: 121.12/273.35 result: MeasureResult(costs=(0.0019113225283018869,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.4811546802520752, timestamp=1678816862.8299277) [('tile_f', [-1, 2, 1, 1]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 2, 16]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2835801
-No: 18 GFLOPS: 0.00/273.35 result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 8, 2, 8]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 64, 8]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1667746
+No: 18 GFLOPS: 0.00/765.53 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -2420,8 +2084,8 @@ Traceback (most recent call last):
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 16, 4, 8]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 128, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2735413
-No: 19 GFLOPS: 0.00/273.35 result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 1, 256]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 128, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7963557
+No: 19 GFLOPS: 0.00/765.53 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -2543,8 +2207,8 @@ Traceback (most recent call last):
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 32, 2]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 2, 256]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9094891
-No: 20 GFLOPS: 0.00/273.35 result: Traceback (most recent call last):
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 4, 32]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 4, 128]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7731434
+No: 20 GFLOPS: 0.00/765.53 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 592, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 544, in _build_func_common
@@ -2666,7 +2330,7 @@ Traceback (most recent call last):
File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 875, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
-tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 8, 2, 16]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 8, 32]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4217353
+tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 4, 16]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 4, 32]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9443896
</pre></div>
</div>
<p>Finally we can inspect the best config from log file, check correctness,
@@ -2705,9 +2369,9 @@ and measure running time.</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Finish loading 20 records
Best config:
-[('tile_f', [-1, 4, 2, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 8, 1]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,206592
+[('tile_f', [-1, 2, 2, 1]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 8, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7950151
Finish loading 20 records
-Time cost of this operator: 0.001257
+Time cost of this operator: 0.000597
</pre></div>
</div>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autotvm-tune-conv2d-cuda-py">
diff --git a/docs/how_to/work_with_microtvm/micro_autotune.html b/docs/how_to/work_with_microtvm/micro_autotune.html
index de06665860..67150332f0 100644
--- a/docs/how_to/work_with_microtvm/micro_autotune.html
+++ b/docs/how_to/work_with_microtvm/micro_autotune.html
@@ -649,10 +649,10 @@ the tuned operator.</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>########## Build without Autotuning ##########
Node Name Ops Time(us) Time(%) Shape Inputs Outputs Measurements(us)
--------- --- -------- ------- ----- ------ ------- ----------------
-tvmgen_default_fused_nn_contrib_conv2d_NCHWc tvmgen_default_fused_nn_contrib_conv2d_NCHWc 317.7 98.752 (1, 2, 10, 10, 3) 2 1 [317.7]
-tvmgen_default_fused_layout_transform_1 tvmgen_default_fused_layout_transform_1 3.048 0.947 (1, 6, 10, 10) 1 1 [3.048]
-tvmgen_default_fused_layout_transform tvmgen_default_fused_layout_transform 0.966 0.3 (1, 1, 10, 10, 3) 1 1 [0.966]
-Total_time - 321.714 - - - - -
+tvmgen_default_fused_nn_contrib_conv2d_NCHWc tvmgen_default_fused_nn_contrib_conv2d_NCHWc 315.7 98.73 (1, 2, 10, 10, 3) 2 1 [315.7]
+tvmgen_default_fused_layout_transform_1 tvmgen_default_fused_layout_transform_1 3.094 0.968 (1, 6, 10, 10) 1 1 [3.094]
+tvmgen_default_fused_layout_transform tvmgen_default_fused_layout_transform 0.967 0.303 (1, 1, 10, 10, 3) 1 1 [0.967]
+Total_time - 319.762 - - - - -
</pre></div>
</div>
</div>
@@ -704,13 +704,13 @@ Total_time -
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>########## Build with Autotuning ##########
Node Name Ops Time(us) Time(%) Shape Inputs Outputs Measurements(us)
--------- --- -------- ------- ----- ------ ------- ----------------
-tvmgen_default_fused_nn_contrib_conv2d_NCHWc tvmgen_default_fused_nn_contrib_conv2d_NCHWc 136.3 98.043 (1, 6, 10, 10, 1) 2 1 [136.3]
-tvmgen_default_fused_layout_transform_1 tvmgen_default_fused_layout_transform_1 1.763 1.268 (1, 6, 10, 10) 1 1 [1.763]
-tvmgen_default_fused_layout_transform tvmgen_default_fused_layout_transform 0.957 0.688 (1, 1, 10, 10, 3) 1 1 [0.957]
-Total_time - 139.02 - - - - -
+tvmgen_default_fused_nn_contrib_conv2d_NCHWc tvmgen_default_fused_nn_contrib_conv2d_NCHWc 101.3 97.386 (1, 6, 10, 10, 1) 2 1 [101.3]
+tvmgen_default_fused_layout_transform_1 tvmgen_default_fused_layout_transform_1 1.765 1.697 (1, 6, 10, 10) 1 1 [1.765]
+tvmgen_default_fused_layout_transform tvmgen_default_fused_layout_transform 0.954 0.917 (1, 1, 10, 10, 3) 1 1 [0.954]
+Total_time - 104.019 - - - - -
</pre></div>
</div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 22.178 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 24.008 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-autotune-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/9ccca8fd489a1486ac71b55a55c320c5/micro_autotune.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_autotune.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/micro_pytorch.html b/docs/how_to/work_with_microtvm/micro_pytorch.html
index 12f117aedd..9b7cceebce 100644
--- a/docs/how_to/work_with_microtvm/micro_pytorch.html
+++ b/docs/how_to/work_with_microtvm/micro_pytorch.html
@@ -460,8 +460,8 @@ download a cat image and preprocess it to use as the model input.</p>
Downloading: "https://download.pytorch.org/models/quantized/mobilenet_v2_qnnpack_37f702c5.pth" to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2_qnnpack_37f702c5.pth
0%| | 0.00/3.42M [00:00<?, ?B/s]
- 61%|###### | 2.09M/3.42M [00:00<00:00, 17.2MB/s]
-100%|##########| 3.42M/3.42M [00:00<00:00, 26.3MB/s]
+ 61%|###### | 2.09M/3.42M [00:00<00:00, 19.7MB/s]
+100%|##########| 3.42M/3.42M [00:00<00:00, 30.6MB/s]
/workspace/python/tvm/relay/frontend/pytorch_utils.py:47: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
return LooseVersion(torch_ver) > ver
/venv/apache-tvm-py3.7/lib/python3.7/site-packages/setuptools/_distutils/version.py:346: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
@@ -587,7 +587,7 @@ via the host <cite>main.cc`</cite> or if a Zephyr emulated board is selected as
Torch top-1 id: 282, class name: tiger cat
</pre></div>
</div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 18.102 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 21.449 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-pytorch-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/12b9ecc04c41abaa12022061771821d1/micro_pytorch.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_pytorch.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/micro_train.html b/docs/how_to/work_with_microtvm/micro_train.html
index 0445100fa7..cdc81205d0 100644
--- a/docs/how_to/work_with_microtvm/micro_train.html
+++ b/docs/how_to/work_with_microtvm/micro_train.html
@@ -529,7 +529,7 @@ take about <strong>2 minutes</strong> to download the Stanford Cars, while COCO
<a href="https://docs.python.org/3/library/shutil.html#shutil.move" title="shutil.move" class="sphx-glr-backref-module-shutil sphx-glr-backref-type-py-function"><span class="n">shutil</span><span class="o">.</span><span class="n">move</span></a><span class="p">(</span><span class="sa">f</span><span class="s2">"</span><span class="si">{</span><a href="https://docs.python.org/3/library/stdtypes.html#str" title="builtins.str" class="sphx-glr-backref-module-builtins sphx-glr-backref-typ [...]
</pre></div>
</div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>'/tmp/tmpr78_oloq/images/random'
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>'/tmp/tmp_w894do9/images/random'
</pre></div>
</div>
</div>
@@ -589,8 +589,8 @@ objects to other stuff? We can display some examples from our datasets using <co
<span class="n">plt</span><span class="o">.</span><span class="n">axis</span><span class="p">(</span><span class="s2">"off"</span><span class="p">)</span>
</pre></div>
</div>
-<img src="../../_images/sphx_glr_micro_train_001.png" srcset="../../_images/sphx_glr_micro_train_001.png" alt="[0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [1.0, 0.0]" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/tmp/tmpr78_oloq/images/target contains 8144 images
-/tmp/tmpr78_oloq/images/random contains 5000 images
+<img src="../../_images/sphx_glr_micro_train_001.png" srcset="../../_images/sphx_glr_micro_train_001.png" alt="[1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [1.0, 0.0]" class = "sphx-glr-single-img"/><div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/tmp/tmp_w894do9/images/target contains 8144 images
+/tmp/tmp_w894do9/images/random contains 5000 images
</pre></div>
</div>
</div>
@@ -702,13 +702,13 @@ the time on our validation set).</p>
</pre></div>
</div>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Epoch 1/3
-328/328 - 48s - loss: 0.2333 - accuracy: 0.9183 - val_loss: 0.1236 - val_accuracy: 0.9592 - 48s/epoch - 145ms/step
+328/328 - 48s - loss: 0.2109 - accuracy: 0.9269 - val_loss: 0.1022 - val_accuracy: 0.9641 - 48s/epoch - 148ms/step
Epoch 2/3
-328/328 - 43s - loss: 0.0977 - accuracy: 0.9638 - val_loss: 0.1380 - val_accuracy: 0.9517 - 43s/epoch - 132ms/step
+328/328 - 44s - loss: 0.0939 - accuracy: 0.9648 - val_loss: 0.0982 - val_accuracy: 0.9641 - 44s/epoch - 134ms/step
Epoch 3/3
-328/328 - 43s - loss: 0.0666 - accuracy: 0.9756 - val_loss: 0.1027 - val_accuracy: 0.9622 - 43s/epoch - 132ms/step
+328/328 - 44s - loss: 0.0745 - accuracy: 0.9721 - val_loss: 0.1042 - val_accuracy: 0.9656 - 44s/epoch - 134ms/step
-<keras.callbacks.History object at 0x7f915a584d50>
+<keras.callbacks.History object at 0x7fdd8856d6d0>
</pre></div>
</div>
</div>
@@ -972,7 +972,7 @@ as intended.</p>
<p>From here, we could modify the model to read live images from the camera - we have another
Arduino tutorial for how to do that <a class="reference external" href="https://github.com/guberti/tvm-arduino-demos/tree/master/examples/person_detection">on GitHub</a>. Alternatively, we could also
<a class="reference external" href="https://tvm.apache.org/docs/how_to/work_with_microtvm/micro_autotune.html">use TVM’s autotuning capabilities</a> to dramatically improve the model’s performance.</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 4 minutes 43.389 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 5 minutes 5.128 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-train-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/b52cec46baf4f78d6bcd94cbe269c8a6/micro_train.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">micro_train.py</span></code></a></p>
diff --git a/docs/how_to/work_with_microtvm/sg_execution_times.html b/docs/how_to/work_with_microtvm/sg_execution_times.html
index 7cb47b53d8..d41d9c92ce 100644
--- a/docs/how_to/work_with_microtvm/sg_execution_times.html
+++ b/docs/how_to/work_with_microtvm/sg_execution_times.html
@@ -345,7 +345,7 @@
<div class="section" id="computation-times">
<span id="sphx-glr-how-to-work-with-microtvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>07:49.439</strong> total execution time for <strong>how_to_work_with_microtvm</strong> files:</p>
+<p><strong>08:17.309</strong> total execution time for <strong>how_to_work_with_microtvm</strong> files:</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 82%" />
@@ -354,27 +354,27 @@
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="micro_train.html#sphx-glr-how-to-work-with-microtvm-micro-train-py"><span class="std std-ref">5. Training Vision Models for microTVM on Arduino</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_train.py</span></code>)</p></td>
-<td><p>04:43.389</p></td>
+<td><p>05:05.128</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="micro_autotune.html#sphx-glr-how-to-work-with-microtvm-micro-autotune-py"><span class="std std-ref">6. Model Tuning with microTVM</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_autotune.py</span></code>)</p></td>
-<td><p>01:22.178</p></td>
+<td><p>01:24.008</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="micro_pytorch.html#sphx-glr-how-to-work-with-microtvm-micro-pytorch-py"><span class="std std-ref">4. microTVM PyTorch Tutorial</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_pytorch.py</span></code>)</p></td>
-<td><p>01:18.102</p></td>
+<td><p>01:21.449</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="micro_aot.html#sphx-glr-how-to-work-with-microtvm-micro-aot-py"><span class="std std-ref">3. microTVM Ahead-of-Time (AOT) Compilation</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_aot.py</span></code>)</p></td>
-<td><p>00:10.188</p></td>
+<td><p>00:10.568</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="micro_custom_ide.html#sphx-glr-how-to-work-with-microtvm-micro-custom-ide-py"><span class="std std-ref">9. Bring microTVM to your own development environment</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_custom_ide.py</span></code>)</p></td>
-<td><p>00:08.224</p></td>
+<td><p>00:08.380</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="micro_tflite.html#sphx-glr-how-to-work-with-microtvm-micro-tflite-py"><span class="std std-ref">2. microTVM TFLite Tutorial</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_tflite.py</span></code>)</p></td>
-<td><p>00:07.359</p></td>
+<td><p>00:07.776</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="micro_ethosu.html#sphx-glr-how-to-work-with-microtvm-micro-ethosu-py"><span class="std std-ref">7. Running TVM on bare metal Arm(R) Cortex(R)-M55 CPU and Ethos(TM)-U55 NPU with CMSIS-NN</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_ethosu.py</span></code>)</p></td>
diff --git a/docs/how_to/work_with_relay/sg_execution_times.html b/docs/how_to/work_with_relay/sg_execution_times.html
index b2e492dea6..f0bf7a17c8 100644
--- a/docs/how_to/work_with_relay/sg_execution_times.html
+++ b/docs/how_to/work_with_relay/sg_execution_times.html
@@ -345,7 +345,7 @@
<div class="section" id="computation-times">
<span id="sphx-glr-how-to-work-with-relay-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:45.897</strong> total execution time for <strong>how_to_work_with_relay</strong> files:</p>
+<p><strong>00:47.014</strong> total execution time for <strong>how_to_work_with_relay</strong> files:</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 84%" />
@@ -354,15 +354,15 @@
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="using_pipeline_executor.html#sphx-glr-how-to-work-with-relay-using-pipeline-executor-py"><span class="std std-ref">Using Pipeline Executor in Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_pipeline_executor.py</span></code>)</p></td>
-<td><p>00:33.664</p></td>
+<td><p>00:34.614</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="using_external_lib.html#sphx-glr-how-to-work-with-relay-using-external-lib-py"><span class="std std-ref">Using External Libraries in Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_external_lib.py</span></code>)</p></td>
-<td><p>00:10.582</p></td>
+<td><p>00:10.721</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="build_gcn.html#sphx-glr-how-to-work-with-relay-build-gcn-py"><span class="std std-ref">Building a Graph Convolutional Network</span></a> (<code class="docutils literal notranslate"><span class="pre">build_gcn.py</span></code>)</p></td>
-<td><p>00:01.646</p></td>
+<td><p>00:01.673</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="using_relay_viz.html#sphx-glr-how-to-work-with-relay-using-relay-viz-py"><span class="std std-ref">Use Relay Visualizer to Visualize Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_relay_viz.py</span></code>)</p></td>
diff --git a/docs/how_to/work_with_schedules/intrin_math.html b/docs/how_to/work_with_schedules/intrin_math.html
index c7ba950ade..580061295e 100644
--- a/docs/how_to/work_with_schedules/intrin_math.html
+++ b/docs/how_to/work_with_schedules/intrin_math.html
@@ -540,7 +540,7 @@ The following example customizes CUDA lowering rule for <code class="code docuti
<a href="../../reference/api/python/ir.html#tvm.ir.register_intrin_lowering" title="tvm.ir.register_intrin_lowering" class="sphx-glr-backref-module-tvm-ir sphx-glr-backref-type-py-function"><span class="n">register_intrin_lowering</span></a><span class="p">(</span><span class="s2">"tir.exp"</span><span class="p">,</span> <span class="n">target</span><span class="o">=</span><span class="s2">"cuda"</span><span class="p">,</span> <span class="n">f</span><span class="o">= [...]
</pre></div>
</div>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span><function my_cuda_math_rule at 0x7f900eaaab90>
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span><function my_cuda_math_rule at 0x7fdc323297a0>
</pre></div>
</div>
<p>Register the rule to TVM with override option to override existing rule.
diff --git a/docs/how_to/work_with_schedules/sg_execution_times.html b/docs/how_to/work_with_schedules/sg_execution_times.html
index f72a29aac9..7dc3c9bf9e 100644
--- a/docs/how_to/work_with_schedules/sg_execution_times.html
+++ b/docs/how_to/work_with_schedules/sg_execution_times.html
@@ -345,7 +345,7 @@
<div class="section" id="computation-times">
<span id="sphx-glr-how-to-work-with-schedules-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:08.791</strong> total execution time for <strong>how_to_work_with_schedules</strong> files:</p>
+<p><strong>00:07.455</strong> total execution time for <strong>how_to_work_with_schedules</strong> files:</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 83%" />
@@ -354,35 +354,35 @@
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="intrin_math.html#sphx-glr-how-to-work-with-schedules-intrin-math-py"><span class="std std-ref">Intrinsics and Math Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">intrin_math.py</span></code>)</p></td>
-<td><p>00:06.208</p></td>
+<td><p>00:04.846</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="tensorize.html#sphx-glr-how-to-work-with-schedules-tensorize-py"><span class="std std-ref">Use Tensorize to Leverage Hardware Intrinsics</span></a> (<code class="docutils literal notranslate"><span class="pre">tensorize.py</span></code>)</p></td>
-<td><p>00:01.223</p></td>
+<td><p>00:01.178</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="reduction.html#sphx-glr-how-to-work-with-schedules-reduction-py"><span class="std std-ref">Reduction</span></a> (<code class="docutils literal notranslate"><span class="pre">reduction.py</span></code>)</p></td>
-<td><p>00:00.575</p></td>
+<td><p>00:00.605</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="scan.html#sphx-glr-how-to-work-with-schedules-scan-py"><span class="std std-ref">Scan and Recurrent Kernel</span></a> (<code class="docutils literal notranslate"><span class="pre">scan.py</span></code>)</p></td>
-<td><p>00:00.557</p></td>
+<td><p>00:00.560</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="extern_op.html#sphx-glr-how-to-work-with-schedules-extern-op-py"><span class="std std-ref">External Tensor Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">extern_op.py</span></code>)</p></td>
-<td><p>00:00.118</p></td>
+<td><p>00:00.123</p></td>
<td><p>0.0 MB</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal" href="schedule_primitives.html#sphx-glr-how-to-work-with-schedules-schedule-primitives-py"><span class="std std-ref">Schedule Primitives in TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">schedule_primitives.py</span></code>)</p></td>
-<td><p>00:00.051</p></td>
+<tr class="row-even"><td><p><a class="reference internal" href="tedd.html#sphx-glr-how-to-work-with-schedules-tedd-py"><span class="std std-ref">Use Tensor Expression Debug Display (TEDD) for Visualization</span></a> (<code class="docutils literal notranslate"><span class="pre">tedd.py</span></code>)</p></td>
+<td><p>00:00.062</p></td>
<td><p>0.0 MB</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="tedd.html#sphx-glr-how-to-work-with-schedules-tedd-py"><span class="std std-ref">Use Tensor Expression Debug Display (TEDD) for Visualization</span></a> (<code class="docutils literal notranslate"><span class="pre">tedd.py</span></code>)</p></td>
-<td><p>00:00.033</p></td>
+<tr class="row-odd"><td><p><a class="reference internal" href="schedule_primitives.html#sphx-glr-how-to-work-with-schedules-schedule-primitives-py"><span class="std std-ref">Schedule Primitives in TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">schedule_primitives.py</span></code>)</p></td>
+<td><p>00:00.053</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="tuple_inputs.html#sphx-glr-how-to-work-with-schedules-tuple-inputs-py"><span class="std std-ref">Compute and Reduce with Tuple Inputs</span></a> (<code class="docutils literal notranslate"><span class="pre">tuple_inputs.py</span></code>)</p></td>
-<td><p>00:00.026</p></td>
+<td><p>00:00.028</p></td>
<td><p>0.0 MB</p></td>
</tr>
</tbody>
diff --git a/docs/install/nnpack.html b/docs/install/nnpack.html
index e285b427a5..1515c4b747 100644
--- a/docs/install/nnpack.html
+++ b/docs/install/nnpack.html
@@ -234,17 +234,7 @@
<p class="caption" role="heading"><span class="caption-text">Getting Started</span></p>
<ul class="current">
<li class="toctree-l1 current"><a class="reference internal" href="index.html">Installing TVM</a><ul class="current">
-<li class="toctree-l2 current"><a class="reference internal" href="from_source.html">Install from Source</a><ul class="current">
-<li class="toctree-l3"><a class="reference internal" href="from_source.html#developers-get-source-from-github">Developers: Get Source from Github</a></li>
-<li class="toctree-l3"><a class="reference internal" href="from_source.html#build-the-shared-library">Build the Shared Library</a></li>
-<li class="toctree-l3"><a class="reference internal" href="from_source.html#python-package-installation">Python Package Installation</a></li>
-<li class="toctree-l3 current"><a class="reference internal" href="from_source.html#install-contrib-libraries">Install Contrib Libraries</a><ul class="current">
-<li class="toctree-l4 current"><a class="current reference internal" href="#">NNPACK Contrib Installation</a></li>
-</ul>
-</li>
-<li class="toctree-l3"><a class="reference internal" href="from_source.html#enable-c-tests">Enable C++ Tests</a></li>
-</ul>
-</li>
+<li class="toctree-l2"><a class="reference internal" href="from_source.html">Install from Source</a></li>
<li class="toctree-l2"><a class="reference internal" href="docker.html">Docker Images</a></li>
<li class="toctree-l2 current"><a class="current reference internal" href="#">NNPACK Contrib Installation</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#conditions">Conditions</a></li>
diff --git a/docs/reference/api/doxygen/annotated.html b/docs/reference/api/doxygen/annotated.html
index 922e594d11..d3dfaec958 100644
--- a/docs/reference/api/doxygen/annotated.html
+++ b/docs/reference/api/doxygen/annotated.html
@@ -809,27 +809,28 @@ $(function() {
<tr id="row_1_9_19_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1ScanOp.html" target="_self">ScanOp</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1te_1_1ScanOpNode.html" title="Symbolic scan. ">ScanOpNode</a> </td></tr>
<tr id="row_1_9_20_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1ScanOpNode.html" target="_self">ScanOpNode</a></td><td class="desc">Symbolic scan </td></tr>
<tr id="row_1_9_21_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1Schedule.html" target="_self">Schedule</a></td><td class="desc">Global schedule container For operations and all the operations they depend on. The schedule per <a class="el" href="classtvm_1_1te_1_1Operation.html" title="Operation that produces tensors. ">Operation [...]
-<tr id="row_1_9_22_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html" target="_self">ScheduleNode</a></td><td class="desc">Node container for schedule </td></tr>
-<tr id="row_1_9_23_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1Singleton.html" target="_self">Singleton</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1te_1_1SingletonNode.html" title="Singleton iterator [0, 1) ">SingletonNode</a> </td></tr>
-<tr id="row_1_9_24_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1SingletonNode.html" target="_self">SingletonNode</a></td><td class="desc"><a class="el" href="classtvm_1_1te_1_1Singleton.html" title="Managed reference to SingletonNode. ">Singleton</a> iterator [0, 1) </td></tr>
-<tr id="row_1_9_25_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1SpecializedCondition.html" target="_self">SpecializedCondition</a></td><td class="desc">Specialized condition to enable op specialization </td></tr>
-<tr id="row_1_9_26_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1SpecializedConditionNode.html" target="_self">SpecializedConditionNode</a></td><td class="desc">Container for specialization conditions </td></tr>
-<tr id="row_1_9_27_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1Split.html" target="_self">Split</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1te_1_1SplitNode.html" title="Split the parent domain into product of outer and iter. ">SplitNode</a> </td></tr>
-<tr id="row_1_9_28_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1SplitNode.html" target="_self">SplitNode</a></td><td class="desc"><a class="el" href="classtvm_1_1te_1_1Split.html" title="Managed reference to SplitNode. ">Split</a> the parent domain into product of outer and iter </td></tr>
-<tr id="row_1_9_29_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1Stage.html" target="_self">Stage</a></td><td class="desc"><a class="el" href="classtvm_1_1te_1_1Stage.html" title="Stage, contains scheduling for a stage of computation. ">Stage</a>, contains scheduling for a stage of computation </td></tr>
-<tr id="row_1_9_30_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1StageNode.html" target="_self">StageNode</a></td><td class="desc"><a class="el" href="classtvm_1_1te_1_1Stage.html" title="Stage, contains scheduling for a stage of computation. ">Stage</a> </td></tr>
-<tr id="row_1_9_31_" class="even" style="display:none;"><td class="entry"><span style="width:32px;display:inline-block;"> </span><span id="arr_1_9_31_" class="arrow" onclick="toggleFolder('1_9_31_')">►</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1Tensor.html" target="_self">Tensor</a></td><td class="desc"><a class="el" href="classtvm_1_1te_1_1Tensor.html" title="Tensor structure representing a possible input, or intermediate [...]
-<tr id="row_1_9_31_0_" class="even" style="display:none;"><td class="entry"><span style="width:64px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1Tensor_1_1Slice.html" target="_self">Slice</a></td><td class="desc">Data structure to represent a slice that fixes first k coordinates. This is used to enable syntax sugar of <a class="el" href="classtvm_1_1te_1_1Tensor.html" title="Tensor structure representing a [...]
-<tr id="row_1_9_32_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1TensorComputeOp.html" target="_self">TensorComputeOp</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1te_1_1TensorComputeOpNode.html" title="A TensorCompute op that computes a tensor with a tensor intrinsic. ">TensorComputeOpNode</a> </td></tr>
-<tr id="row_1_9_33_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1TensorComputeOpNode.html" target="_self">TensorComputeOpNode</a></td><td class="desc">A TensorCompute op that computes a tensor with a tensor intrinsic </td></tr>
-<tr id="row_1_9_34_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1te_1_1TensorDom.html" target="_self">TensorDom</a></td><td class="desc">Temporary data structure to store union of bounds of each axis of <a class="el" href="classtvm_1_1te_1_1Tensor.html" title="Tensor structure representing a possible input, or intermediate computation [...]
-<tr id="row_1_9_35_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1TensorIntrin.html" target="_self">TensorIntrin</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1te_1_1TensorIntrinNode.html" title="Node to represent a Tensor intrinsic operator. ">TensorIntrinNode</a> </td></tr>
-<tr id="row_1_9_36_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1TensorIntrinCall.html" target="_self">TensorIntrinCall</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1te_1_1TensorIntrinCallNode.html">TensorIntrinCallNode</a> </td></tr>
-<tr id="row_1_9_37_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1TensorIntrinCallNode.html" target="_self">TensorIntrinCallNode</a></td><td class="desc"></td></tr>
-<tr id="row_1_9_38_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1TensorIntrinNode.html" target="_self">TensorIntrinNode</a></td><td class="desc">Node to represent a <a class="el" href="classtvm_1_1te_1_1Tensor.html" title="Tensor structure representing a possible input, or intermediate computation result. ">Tensor</a> intrinsic o [...]
-<tr id="row_1_9_39_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1TensorNode.html" target="_self">TensorNode</a></td><td class="desc">Node to represent a tensor </td></tr>
-<tr id="row_1_9_40_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1Transform.html" target="_self">Transform</a></td><td class="desc"></td></tr>
-<tr id="row_1_9_41_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1TransformNode.html" target="_self">TransformNode</a></td><td class="desc"><a class="el" href="classtvm_1_1te_1_1Transform.html">Transform</a> iterator according to some arbitrary expression </td></tr>
+<tr id="row_1_9_22_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1ScheduleContext.html" target="_self">ScheduleContext</a></td><td class="desc">Context helper to collect debug information of <a class="el" href="classtvm_1_1te_1_1Schedule.html" title="Global schedule container For operations and all the operations they depend on. T [...]
+<tr id="row_1_9_23_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html" target="_self">ScheduleNode</a></td><td class="desc">Node container for schedule </td></tr>
+<tr id="row_1_9_24_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1Singleton.html" target="_self">Singleton</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1te_1_1SingletonNode.html" title="Singleton iterator [0, 1) ">SingletonNode</a> </td></tr>
+<tr id="row_1_9_25_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1SingletonNode.html" target="_self">SingletonNode</a></td><td class="desc"><a class="el" href="classtvm_1_1te_1_1Singleton.html" title="Managed reference to SingletonNode. ">Singleton</a> iterator [0, 1) </td></tr>
+<tr id="row_1_9_26_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1SpecializedCondition.html" target="_self">SpecializedCondition</a></td><td class="desc">Specialized condition to enable op specialization </td></tr>
+<tr id="row_1_9_27_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1SpecializedConditionNode.html" target="_self">SpecializedConditionNode</a></td><td class="desc">Container for specialization conditions </td></tr>
+<tr id="row_1_9_28_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1Split.html" target="_self">Split</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1te_1_1SplitNode.html" title="Split the parent domain into product of outer and iter. ">SplitNode</a> </td></tr>
+<tr id="row_1_9_29_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1SplitNode.html" target="_self">SplitNode</a></td><td class="desc"><a class="el" href="classtvm_1_1te_1_1Split.html" title="Managed reference to SplitNode. ">Split</a> the parent domain into product of outer and iter </td></tr>
+<tr id="row_1_9_30_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1Stage.html" target="_self">Stage</a></td><td class="desc"><a class="el" href="classtvm_1_1te_1_1Stage.html" title="Stage, contains scheduling for a stage of computation. ">Stage</a>, contains scheduling for a stage of computation </td></tr>
+<tr id="row_1_9_31_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1StageNode.html" target="_self">StageNode</a></td><td class="desc"><a class="el" href="classtvm_1_1te_1_1Stage.html" title="Stage, contains scheduling for a stage of computation. ">Stage</a> </td></tr>
+<tr id="row_1_9_32_" class="even" style="display:none;"><td class="entry"><span style="width:32px;display:inline-block;"> </span><span id="arr_1_9_32_" class="arrow" onclick="toggleFolder('1_9_32_')">►</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1Tensor.html" target="_self">Tensor</a></td><td class="desc"><a class="el" href="classtvm_1_1te_1_1Tensor.html" title="Tensor structure representing a possible input, or intermediate [...]
+<tr id="row_1_9_32_0_" class="even" style="display:none;"><td class="entry"><span style="width:64px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1Tensor_1_1Slice.html" target="_self">Slice</a></td><td class="desc">Data structure to represent a slice that fixes first k coordinates. This is used to enable syntax sugar of <a class="el" href="classtvm_1_1te_1_1Tensor.html" title="Tensor structure representing a [...]
+<tr id="row_1_9_33_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1TensorComputeOp.html" target="_self">TensorComputeOp</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1te_1_1TensorComputeOpNode.html" title="A TensorCompute op that computes a tensor with a tensor intrinsic. ">TensorComputeOpNode</a> </td></tr>
+<tr id="row_1_9_34_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1TensorComputeOpNode.html" target="_self">TensorComputeOpNode</a></td><td class="desc">A TensorCompute op that computes a tensor with a tensor intrinsic </td></tr>
+<tr id="row_1_9_35_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1te_1_1TensorDom.html" target="_self">TensorDom</a></td><td class="desc">Temporary data structure to store union of bounds of each axis of <a class="el" href="classtvm_1_1te_1_1Tensor.html" title="Tensor structure representing a possible input, or intermediate computation [...]
+<tr id="row_1_9_36_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1TensorIntrin.html" target="_self">TensorIntrin</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1te_1_1TensorIntrinNode.html" title="Node to represent a Tensor intrinsic operator. ">TensorIntrinNode</a> </td></tr>
+<tr id="row_1_9_37_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1TensorIntrinCall.html" target="_self">TensorIntrinCall</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1te_1_1TensorIntrinCallNode.html">TensorIntrinCallNode</a> </td></tr>
+<tr id="row_1_9_38_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1TensorIntrinCallNode.html" target="_self">TensorIntrinCallNode</a></td><td class="desc"></td></tr>
+<tr id="row_1_9_39_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1TensorIntrinNode.html" target="_self">TensorIntrinNode</a></td><td class="desc">Node to represent a <a class="el" href="classtvm_1_1te_1_1Tensor.html" title="Tensor structure representing a possible input, or intermediate computation result. ">Tensor</a> intrinsic o [...]
+<tr id="row_1_9_40_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1TensorNode.html" target="_self">TensorNode</a></td><td class="desc">Node to represent a tensor </td></tr>
+<tr id="row_1_9_41_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1Transform.html" target="_self">Transform</a></td><td class="desc"></td></tr>
+<tr id="row_1_9_42_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1te_1_1TransformNode.html" target="_self">TransformNode</a></td><td class="desc"><a class="el" href="classtvm_1_1te_1_1Transform.html">Transform</a> iterator according to some arbitrary expression </td></tr>
<tr id="row_1_10_" class="even" style="display:none;"><td class="entry"><span style="width:16px;display:inline-block;"> </span><span id="arr_1_10_" class="arrow" onclick="toggleFolder('1_10_')">►</span><span class="icona"><span class="icon">N</span></span><a class="el" href="namespacetvm_1_1tir.html" target="_self">tir</a></td><td class="desc"></td></tr>
<tr id="row_1_10_0_" class="even" style="display:none;"><td class="entry"><span style="width:32px;display:inline-block;"> </span><span id="arr_1_10_0_" class="arrow" onclick="toggleFolder('1_10_0_')">►</span><span class="icona"><span class="icon">N</span></span><a class="el" href="namespacetvm_1_1tir_1_1usmp.html" target="_self">usmp</a></td><td class="desc"></td></tr>
<tr id="row_1_10_0_0_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;"> </span><span id="arr_1_10_0_0_" class="arrow" onclick="toggleFolder('1_10_0_0_')">►</span><span class="icona"><span class="icon">N</span></span><a class="el" href="namespacetvm_1_1tir_1_1usmp_1_1algo.html" target="_self">algo</a></td><td class="desc"></td></tr>
diff --git a/docs/reference/api/doxygen/classes.html b/docs/reference/api/doxygen/classes.html
index 66f5d546e1..23142af82a 100644
--- a/docs/reference/api/doxygen/classes.html
+++ b/docs/reference/api/doxygen/classes.html
@@ -65,260 +65,260 @@ $(function() {
<div class="qindex"><a class="qindex" href="#letter_a">a</a> | <a class="qindex" href="#letter_b">b</a> | <a class="qindex" href="#letter_c">c</a> | <a class="qindex" href="#letter_d">d</a> | <a class="qindex" href="#letter_e">e</a> | <a class="qindex" href="#letter_f">f</a> | <a class="qindex" href="#letter_g">g</a> | <a class="qindex" href="#letter_h">h</a> | <a class="qindex" href="#letter_i">i</a> |& [...]
<table class="classindex">
<tr><td rowspan="2" valign="bottom"><a name="letter_a"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">  a  </div></td></tr></table>
-</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Conv3DTransposeAttrs.html">Conv3DTransposeAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1IntImm.html">IntImm</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1PatternVar.html">PatternVar</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::r [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Conv3DWinogradAttrs.html">Conv3DWinogradAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1IntImmNode.html">IntImmNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1PatternVarNode.html">PatternVarNode</a> (<a class="el" href="namespacetvm_1_1relay [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AccessAnalyzer.html">AccessAnalyzer</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ConvGemmWeightTransformAttrs.html">ConvGemmWeightTransformAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1In [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AccessAnalyzerNode.html">AccessAnalyzerNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ConvWinogradWeightTransformAttrs.html">ConvWinogradWeightTransformAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtv [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AdaptivePool1DAttrs.html">AdaptivePool1DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1CorrelationAttrs.html">CorrelationAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IntSetNode.html">IntSetNode</a> (<a class="e [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AdaptivePool2DAttrs.html">AdaptivePool2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1CostModel.html">CostModel</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1IRBuilder.html">I [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AdaptivePool3DAttrs.html">AdaptivePool3DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CostModel.html">CostModel</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1IRBuilderFrame [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Add.html">Add</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1CostModelNode.html">CostModelNode</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1IRBuilderFrameNode.html">IRBuilderFrameNode</a> [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AddNode.html">AddNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CostModelNode.html">CostModelNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1IRBuilderNode.html">IRBuilderNode</a> [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ADT.html">ADT</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1profiling_1_1CountNode.html">CountNode</a> (<a class="el" href="namespacetvm_1_1runtime_1_1profiling.html">tvm::runtime::profiling</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1IRDocsifier.html">IRDocsifier</ [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ADTObj.html">ADTObj</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1CropAndResizeAttrs.html">CropAndResizeAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1IRDocsifierFunctor.html">IRDocsifierFunctor</a> ( [...]
+</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Conv3DTransposeAttrs.html">Conv3DTransposeAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1IntImm.html">IntImm</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1PatternVar.html">PatternVar</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::r [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Conv3DWinogradAttrs.html">Conv3DWinogradAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1IntImmNode.html">IntImmNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1PatternVarNode.html">PatternVarNode</a> (<a class="el" href="namespacetvm_1_1relay [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AccessAnalyzer.html">AccessAnalyzer</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ConvGemmWeightTransformAttrs.html">ConvGemmWeightTransformAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1In [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AccessAnalyzerNode.html">AccessAnalyzerNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ConvWinogradWeightTransformAttrs.html">ConvWinogradWeightTransformAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtv [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AdaptivePool1DAttrs.html">AdaptivePool1DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1CorrelationAttrs.html">CorrelationAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IntSetNode.html">IntSetNode</a> (<a class="e [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AdaptivePool2DAttrs.html">AdaptivePool2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1CostModel.html">CostModel</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1IRBuilder.html">I [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AdaptivePool3DAttrs.html">AdaptivePool3DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CostModel.html">CostModel</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1IRBuilderFrame [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Add.html">Add</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1CostModelNode.html">CostModelNode</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1IRBuilderFrameNode.html">IRBuilderFrameNode</a> [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AddNode.html">AddNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CostModelNode.html">CostModelNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1IRBuilderNode.html">IRBuilderNode</a> [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ADT.html">ADT</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1profiling_1_1CountNode.html">CountNode</a> (<a class="el" href="namespacetvm_1_1runtime_1_1profiling.html">tvm::runtime::profiling</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1IRDocsifier.html">IRDocsifier</ [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ADTObj.html">ADTObj</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1CropAndResizeAttrs.html">CropAndResizeAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1IRDocsifierFunctor.html">IRDocsifierFunctor</a> ( [...]
<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AffineGridAttrs.html">AffineGridAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td rowspan="2" valign="bottom"><a name="letter_d"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">  d  </div></td></tr></table>
-</td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1IRDocsifierNode.html">IRDocsifierNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1tir_1_1usmp_1_1PoolAllocationNode.html">PoolAllocationNode</a> (<a class="el" href="namespacetvm_1_1tir_1_1usmp.html">tvm::tir::usmp</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1auto__ [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1AffineType.html">AffineType</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1IRModule.html">IRModule</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1PoolInfo.html">PoolInfo</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="cla [...]
+</td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1IRDocsifierNode.html">IRDocsifierNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1tir_1_1usmp_1_1PoolAllocationNode.html">PoolAllocationNode</a> (<a class="el" href="namespacetvm_1_1tir_1_1usmp.html">tvm::tir::usmp</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1S [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1AffineType.html">AffineType</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1IRModule.html">IRModule</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1PoolInfo.html">PoolInfo</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="str [...]
<tr><td valign="top"><a class="el" href="classtvm_1_1AffineTypeNode.html">AffineTypeNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1Database.html">Database</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1ir_1_1IRModuleFrame.html">IRModuleFrame</a> (<a class=" [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AllClassNonMaximumSuppressionAttrs.html">AllClassNonMaximumSuppressionAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1DatabaseNode.html">DatabaseNode</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Allocate.html">Allocate</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1DataProducer.html">DataProducer</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1IRModuleNode.html">IRModuleNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)  & [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AllocateConst.html">AllocateConst</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1DataProducerNode.html">DataProducerNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1is__specialized.html">is_specialized</a> (<a class="el" href="namesp [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1AllocateConstFrame.html">AllocateConstFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1DataType.html">DataType</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1de [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1AllocateConstFrameNode.html">AllocateConstFrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1DataTypeLegalizer.html">DataTypeLegalizer</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="s [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AllocateConstNode.html">AllocateConstNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DataTypePattern.html">DataTypePattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1is__valid__iterator_3_01Optional_3_01T_01_4_00_01IterTy [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1usmp_1_1AllocatedPoolInfo.html">AllocatedPoolInfo</a> (<a class="el" href="namespacetvm_1_1tir_1_1usmp.html">tvm::tir::usmp</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DataTypePatternNode.html">DataTypePatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1IterAdapter.html">IterAdap [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1tir_1_1usmp_1_1AllocatedPoolInfoNode.html">AllocatedPoolInfoNode</a> (<a class="el" href="namespacetvm_1_1tir_1_1usmp.html">tvm::tir::usmp</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DebugAttrs.html">DebugAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1MapNode_1_1iterator.html">MapNode: [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1AllocateFrame.html">AllocateFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1DeclBuffer.html">DeclBuffer</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Map_1_1ite [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1AllocateFrameNode.html">AllocateFrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1DeclBufferFrame.html">DeclBufferFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)  [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AllocateNode.html">AllocateNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1DeclBufferFrameNode.html">DeclBufferFrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1suppo [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1vm_1_1Allocator.html">Allocator</a> (<a class="el" href="namespacetvm_1_1runtime_1_1vm.html">tvm::runtime::vm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1DeclBufferNode.html">DeclBufferNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1IteratorNode.html">IteratorNode</a> (<a clas [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AllocStorageAttrs.html">AllocStorageAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DeformableConv2DAttrs.html">DeformableConv2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1auto__scheduler_1_1AttachMapNode_1_1IterKeyHas [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AllocTensorAttrs.html">AllocTensorAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DenseAttrs.html">DenseAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMapExpr.html">IterMapExpr</a> (<a class="el" href="namespa [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1AltPattern.html">AltPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1DenseMapNode.html">DenseMapNode</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMapExprNode.html">IterMapExprNode</a> (<a class="el" href="nam [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1AltPatternNode.html">AltPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DensePackAttrs.html">DensePackAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMapResult.html">IterMapResult</a> (<a class="el" href=" [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1Analyzer.html">Analyzer</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Dependency.html">Dependency</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMapResultNode.html">IterMapResultNode</a> (<a class="el" href="namespacetvm_1_1ari [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1And.html">And</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1DependencyNode.html">DependencyNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMark.html">IterMark</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)&# [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AndNode.html">AndNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1qnn_1_1DequantizeAttrs.html">DequantizeAttrs</a> (<a class="el" href="namespacetvm_1_1relay_1_1qnn.html">tvm::relay::qnn</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMarkNode.html">IterMarkNode</a> (<a class="el" href="n [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AnnotationStep.html">AnnotationStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1DeviceAPI.html">DeviceAPI</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterSplitExpr.html">IterSplitExpr</ [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AnnotationStepNode.html">AnnotationStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DeviceCopyAttrs.html">DeviceCopyAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterSplitExprNode.ht [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Any.html">Any</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1profiling_1_1DeviceWrapper.html">DeviceWrapper</a> (<a class="el" href="namespacetvm_1_1runtime_1_1profiling.html">tvm::runtime::profiling</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterSumExpr.html">IterSumExpr</a> (<a class="el [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AnyNode.html">AnyNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1profiling_1_1DeviceWrapperNode.html">DeviceWrapperNode</a> (<a class="el" href="namespacetvm_1_1runtime_1_1profiling.html">tvm::runtime::profiling</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterSumExprNode.html">IterSumE [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ArangeAttrs.html">ArangeAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPattern.html">DFPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IterVar.html">IterVar</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm: [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1ArgInfo.html">ArgInfo</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternCallback.html">DFPatternCallback</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1IterVarAttr.html">IterVarAttr</a> (<a class=" [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1ArgInfoNode.html">ArgInfoNode</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternCallbackNode.html">DFPatternCallbackNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1IterVarAttrNode.html">IterVar [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ArgReduceAttrs.html">ArgReduceAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternFunctor.html">DFPatternFunctor</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IterVarNode.html">IterVarNode</a> (<a class="el" href="na [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ArgsortAttrs.html">ArgsortAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternFunctor_3_01R_07const_01DFPattern_01_6n_00_01Args_8_8_8_08_4.html">DFPatternFunctor< R(const DFPattern &n, Args...)></a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a c [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternNode.html">DFPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1IterVarRelationNode.html">IterVarRelationNode</a> (<a class="el" href="namesp [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AllClassNonMaximumSuppressionAttrs.html">AllClassNonMaximumSuppressionAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1DatabaseNode.html">DatabaseNode</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Allocate.html">Allocate</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1DataProducer.html">DataProducer</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1IRModuleNode.html">IRModuleNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)  & [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AllocateConst.html">AllocateConst</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1DataProducerNode.html">DataProducerNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1is__specialized.html">is_specialized</a> (<a class="el" href="namesp [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1AllocateConstFrame.html">AllocateConstFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1DataType.html">DataType</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1de [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1AllocateConstFrameNode.html">AllocateConstFrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1DataTypeLegalizer.html">DataTypeLegalizer</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="s [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AllocateConstNode.html">AllocateConstNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DataTypePattern.html">DataTypePattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1is__valid__iterator_3_01Optional_3_01T_01_4_00_01IterTy [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1usmp_1_1AllocatedPoolInfo.html">AllocatedPoolInfo</a> (<a class="el" href="namespacetvm_1_1tir_1_1usmp.html">tvm::tir::usmp</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DataTypePatternNode.html">DataTypePatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1IterAdapter.html">IterAdap [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1tir_1_1usmp_1_1AllocatedPoolInfoNode.html">AllocatedPoolInfoNode</a> (<a class="el" href="namespacetvm_1_1tir_1_1usmp.html">tvm::tir::usmp</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DebugAttrs.html">DebugAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Map_1_1iterator.html">Map::iterato [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1AllocateFrame.html">AllocateFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1DeclBuffer.html">DeclBuffer</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1MapNode_1_ [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1AllocateFrameNode.html">AllocateFrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1DeclBufferFrame.html">DeclBufferFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)  [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AllocateNode.html">AllocateNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1DeclBufferFrameNode.html">DeclBufferFrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1suppo [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1vm_1_1Allocator.html">Allocator</a> (<a class="el" href="namespacetvm_1_1runtime_1_1vm.html">tvm::runtime::vm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1DeclBufferNode.html">DeclBufferNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1IteratorNode.html">IteratorNode</a> (<a clas [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AllocStorageAttrs.html">AllocStorageAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DeformableConv2DAttrs.html">DeformableConv2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1auto__scheduler_1_1AttachMapNode_1_1IterKeyHas [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AllocTensorAttrs.html">AllocTensorAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DenseAttrs.html">DenseAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMapExpr.html">IterMapExpr</a> (<a class="el" href="namespa [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1AltPattern.html">AltPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1DenseMapNode.html">DenseMapNode</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMapExprNode.html">IterMapExprNode</a> (<a class="el" href="nam [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1AltPatternNode.html">AltPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DensePackAttrs.html">DensePackAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMapResult.html">IterMapResult</a> (<a class="el" href=" [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1Analyzer.html">Analyzer</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Dependency.html">Dependency</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMapResultNode.html">IterMapResultNode</a> (<a class="el" href="namespacetvm_1_1ari [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1And.html">And</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1DependencyNode.html">DependencyNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMark.html">IterMark</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)&# [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AndNode.html">AndNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1qnn_1_1DequantizeAttrs.html">DequantizeAttrs</a> (<a class="el" href="namespacetvm_1_1relay_1_1qnn.html">tvm::relay::qnn</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMarkNode.html">IterMarkNode</a> (<a class="el" href="n [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AnnotationStep.html">AnnotationStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1DeviceAPI.html">DeviceAPI</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterSplitExpr.html">IterSplitExpr</ [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AnnotationStepNode.html">AnnotationStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DeviceCopyAttrs.html">DeviceCopyAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterSplitExprNode.ht [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Any.html">Any</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1profiling_1_1DeviceWrapper.html">DeviceWrapper</a> (<a class="el" href="namespacetvm_1_1runtime_1_1profiling.html">tvm::runtime::profiling</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterSumExpr.html">IterSumExpr</a> (<a class="el [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AnyNode.html">AnyNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1profiling_1_1DeviceWrapperNode.html">DeviceWrapperNode</a> (<a class="el" href="namespacetvm_1_1runtime_1_1profiling.html">tvm::runtime::profiling</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterSumExprNode.html">IterSumE [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ArangeAttrs.html">ArangeAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPattern.html">DFPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IterVar.html">IterVar</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm: [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1ArgInfo.html">ArgInfo</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternCallback.html">DFPatternCallback</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1IterVarAttr.html">IterVarAttr</a> (<a class=" [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1ArgInfoNode.html">ArgInfoNode</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternCallbackNode.html">DFPatternCallbackNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1IterVarAttrNode.html">IterVar [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ArgReduceAttrs.html">ArgReduceAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternFunctor.html">DFPatternFunctor</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IterVarNode.html">IterVarNode</a> (<a class="el" href="na [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ArgsortAttrs.html">ArgsortAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternFunctor_3_01R_07const_01DFPattern_01_6n_00_01Args_8_8_8_08_4.html">DFPatternFunctor< R(const DFPattern &n, Args...)></a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a c [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternNode.html">DFPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1IterVarRelationNode.html">IterVarRelationNode</a> (<a class="el" href="namesp [...]
<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1metadata_1_1ArrayAccessor.html">ArrayAccessor</a> (<a class="el" href="namespacetvm_1_1runtime_1_1metadata.html">tvm::runtime::metadata</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternVisitor.html">DFPatternVisitor</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td rowspan="2" valign="bottom"><a name="letter_l"></a><table border="0" cel [...]
-</td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1ProfilerNode.html">ProfilerNode</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1StringObj.html">StringObj</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td></tr>
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1metadata_1_1ArrayAccessor_3_01const_01char_01_5_00_01_1_1tvm_1_1runtime_1_1String_01_4.html">ArrayAccessor< const char *, ::tvm::runtime::String ></a> (<a class="el" href="namespacetvm_1_1runtime_1_1metadata.html">tvm::runtime::metadata</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DFTAttrs.html">DFTAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>) & [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1SimpleObjAllocator_1_1ArrayHandler.html">SimpleObjAllocator::ArrayHandler</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1Diagnostic.html">Diagnostic</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1L2NormalizeAttrs.html">L2NormalizeAttrs</a> (<a [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1ArrayIndexPath.html">ArrayIndexPath</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticBuilder.html">DiagnosticBuilder</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1LambdaDoc.html">LambdaDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">t [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1ArrayIndexPathNode.html">ArrayIndexPathNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticContext.html">DiagnosticContext</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1LambdaDocNode.html">LambdaDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_ [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1metadata_1_1ArrayIterator.html">ArrayIterator</a> (<a class="el" href="namespacetvm_1_1runtime_1_1metadata.html">tvm::runtime::metadata</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticContextNode.html">DiagnosticContextNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1LaunchThre [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ArrayNode.html">ArrayNode</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticNode.html">DiagnosticNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1LaunchThreadFrameNode.html">LaunchThreadFrameNode</a> (<a class="e [...]
+</td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1ProfilerNode.html">ProfilerNode</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1StringImmNode.html">StringImmNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td></tr>
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1metadata_1_1ArrayAccessor_3_01const_01char_01_5_00_01_1_1tvm_1_1runtime_1_1String_01_4.html">ArrayAccessor< const char *, ::tvm::runtime::String ></a> (<a class="el" href="namespacetvm_1_1runtime_1_1metadata.html">tvm::runtime::metadata</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DFTAttrs.html">DFTAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>) & [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1SimpleObjAllocator_1_1ArrayHandler.html">SimpleObjAllocator::ArrayHandler</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1Diagnostic.html">Diagnostic</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1L2NormalizeAttrs.html">L2NormalizeAttrs</a> (<a [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1ArrayIndexPath.html">ArrayIndexPath</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticBuilder.html">DiagnosticBuilder</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1LambdaDoc.html">LambdaDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">t [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1ArrayIndexPathNode.html">ArrayIndexPathNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticContext.html">DiagnosticContext</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1LambdaDocNode.html">LambdaDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_ [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1metadata_1_1ArrayIterator.html">ArrayIterator</a> (<a class="el" href="namespacetvm_1_1runtime_1_1metadata.html">tvm::runtime::metadata</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticContextNode.html">DiagnosticContextNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1LaunchThre [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ArrayNode.html">ArrayNode</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticNode.html">DiagnosticNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1LaunchThreadFrameNode.html">LaunchThreadFrameNode</a> (<a class="e [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1AssertDoc.html">AssertDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticRenderer.html">DiagnosticRenderer</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1LayerNormAttrs.html">LayerNormAttrs</a> (<a class="e [...]
</td></tr>
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1AssertDoc.html">AssertDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticRenderer.html">DiagnosticRenderer</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1LayerNormAttrs.html">LayerNormAttrs</a> (<a class="e [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1AssertDocNode.html">AssertDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticRendererNode.html">DiagnosticRendererNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Layout.html">Layout</a> (<a class="el" [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1AssertFrame.html">AssertFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1DictAttrs.html">DictAttrs</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LayoutAxis.html">LayoutAxis</a> (<a cla [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1AssertFrameNode.html">AssertFrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1DictAttrsNode.html">DictAttrsNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LayoutNode.html">Layout [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AssertStmt.html">AssertStmt</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1DictDoc.html">DictDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1LayoutTransformAttrs.html">LayoutTransformAttrs</a> ( [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AssertStmtNode.html">AssertStmtNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1DictDocNode.html">DictDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LE.html">LE</a> (<a class="el" href="nam [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1AssignDoc.html">AssignDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DilateAttrs.html">DilateAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1LeakyReluAttrs.html">LeakyReluAttrs</a> [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1AssignDocNode.html">AssignDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Dilation2DAttrs.html">Dilation2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1tir_1_1LENode.html">LENode</a> (< [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AttachMap.html">AttachMap</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Div.html">Div</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Let.html">Let</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::t [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AttachMapNode.html">AttachMapNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1DivNode.html">DivNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Let.html">Let</a> (<a class="el" href="namespacetvm_1 [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1AttrAccessDoc.html">AttrAccessDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1Doc.html">Doc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1 [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1AttrAccessDocNode.html">AttrAccessDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1DocNode.html">DocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_ [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1detail_1_1AttrDocEntry.html">AttrDocEntry</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1DocStringDoc.html">DocStringDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LetNode.html">LetNode</a> (<a cla [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1detail_1_1AttrDocVisitor.html">AttrDocVisitor</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1DocStringDocNode.html">DocStringDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1LetNode.html">LetNo [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1AttrError.html">AttrError</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DominatorPattern.html">DominatorPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1LetPattern.html">LetPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::rela [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1detail_1_1AttrExistVisitor.html">AttrExistVisitor</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DominatorPatternNode.html">DominatorPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1LetPatternNode.html">LetPatternNode</a> ( [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1AttrFieldInfo.html">AttrFieldInfo</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DropoutAttrs.html">DropoutAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LetStmt.html">LetStmt</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)  [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1AssertDocNode.html">AssertDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticRendererNode.html">DiagnosticRendererNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Layout.html">Layout</a> (<a class="el" [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1AssertFrame.html">AssertFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1DictAttrs.html">DictAttrs</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LayoutAxis.html">LayoutAxis</a> (<a cla [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1AssertFrameNode.html">AssertFrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1DictAttrsNode.html">DictAttrsNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LayoutNode.html">Layout [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AssertStmt.html">AssertStmt</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1DictDoc.html">DictDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1LayoutTransformAttrs.html">LayoutTransformAttrs</a> ( [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AssertStmtNode.html">AssertStmtNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1DictDocNode.html">DictDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LE.html">LE</a> (<a class="el" href="nam [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1AssignDoc.html">AssignDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DilateAttrs.html">DilateAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1LeakyReluAttrs.html">LeakyReluAttrs</a> [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1AssignDocNode.html">AssignDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Dilation2DAttrs.html">Dilation2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1tir_1_1LENode.html">LENode</a> (< [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AttachMap.html">AttachMap</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Div.html">Div</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Let.html">Let</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::t [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AttachMapNode.html">AttachMapNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1DivNode.html">DivNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Let.html">Let</a> (<a class="el" href="namespacetvm_1 [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1AttrAccessDoc.html">AttrAccessDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1Doc.html">Doc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1 [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1AttrAccessDocNode.html">AttrAccessDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1DocNode.html">DocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_ [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1detail_1_1AttrDocEntry.html">AttrDocEntry</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1DocStringDoc.html">DocStringDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LetNode.html">LetNode</a> (<a cla [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1detail_1_1AttrDocVisitor.html">AttrDocVisitor</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1DocStringDocNode.html">DocStringDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1LetNode.html">LetNo [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1AttrError.html">AttrError</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DominatorPattern.html">DominatorPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1LetPattern.html">LetPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::rela [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1detail_1_1AttrExistVisitor.html">AttrExistVisitor</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DominatorPatternNode.html">DominatorPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1LetPatternNode.html">LetPatternNode</a> ( [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1AttrFieldInfo.html">AttrFieldInfo</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DropoutAttrs.html">DropoutAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LetStmt.html">LetStmt</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)  [...]
<tr><td valign="top"><a class="el" href="classtvm_1_1AttrFieldInfoNode.html">AttrFieldInfoNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1profiling_1_1DurationNode.html">DurationNode</a> (<a class="el" href="namespacetvm_1_1runtime_1_1profiling.html">tvm::runtime::profiling</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LetStmtNode.html">LetStmtNode</a> (<a clas [...]
-</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1Tensor.html">Tensor</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td></tr>
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1AttrFrame.html">AttrFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DynExpandDimsAttrs.html">DynExpandDimsAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1suppor [...]
+</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1TempExprNode.html">TempExprNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td></tr>
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1AttrFrame.html">AttrFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DynExpandDimsAttrs.html">DynExpandDimsAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1suppor [...]
<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1AttrFrameNode.html">AttrFrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td rowspan="2" valign="bottom"><a name="letter_e"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">  e  </div></td></tr></table>
-</td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1ListDoc.html">ListDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1qnn_1_1QuantizeAttrs.html">QuantizeAttrs</a> (<a class="el" href="namespacetvm_1_1relay_1_1qnn.html">tvm::relay::qnn</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1TensorAffineTypeNode.html">Ten [...]
+</td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1ListDoc.html">ListDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1qnn_1_1QuantizeAttrs.html">QuantizeAttrs</a> (<a class="el" href="namespacetvm_1_1relay_1_1qnn.html">tvm::relay::qnn</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1TensorAffineType.html">TensorA [...]
<tr><td valign="top"><a class="el" href="classtvm_1_1AttributeAccessPath.html">AttributeAccessPath</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1ListDocNode.html">ListDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td rowspan="2" valign="bottom"><a name="letter_r"></a><table border="0" cellspacing="0" cellpadding="0"> [...]
-</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1TensorComputeOp.html">TensorComputeOp</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td></tr>
-<tr><td valign="top"><a class="el" href="classtvm_1_1AttributeAccessPathNode.html">AttributeAccessPathNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1EinsumAttrs.html">EinsumAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1LiteralDoc.html">LiteralDoc</a> (<a class="el" href="namespa [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1detail_1_1AttrInitEntry.html">AttrInitEntry</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1topi_1_1EinsumEquation.html">EinsumEquation</a> (<a class="el" href="namespacetvm_1_1topi.html">tvm::topi</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1LiteralDocNode.html">LiteralDocNode</a> (<a clas [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1detail_1_1AttrInitVisitor.html">AttrInitVisitor</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1ElseFrame.html">ElseFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Loa [...]
+</td><td valign="top"><a class="el" href="classtvm_1_1TensorAffineTypeNode.html">TensorAffineTypeNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td></tr>
+<tr><td valign="top"><a class="el" href="classtvm_1_1AttributeAccessPathNode.html">AttributeAccessPathNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1EinsumAttrs.html">EinsumAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1LiteralDoc.html">LiteralDoc</a> (<a class="el" href="namespa [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1detail_1_1AttrInitEntry.html">AttrInitEntry</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1topi_1_1EinsumEquation.html">EinsumEquation</a> (<a class="el" href="namespacetvm_1_1topi.html">tvm::topi</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1LiteralDocNode.html">LiteralDocNode</a> (<a clas [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1detail_1_1AttrInitVisitor.html">AttrInitVisitor</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1ElseFrame.html">ElseFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Loa [...]
<tr><td valign="top"><a class="el" href="classtvm_1_1detail_1_1AttrNonDefaultVisitor.html">AttrNonDefaultVisitor</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1ElseFrameNode.html">ElseFrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="cl [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1detail_1_1AttrNopEntry.html">AttrNopEntry</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1EnvFunc.html">EnvFunc</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1LocalBuilder.html">LocalBuilder</a> (<a class="el" href="namespacetvm_1_1auto__scheduler. [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1detail_1_1AttrNopEntry.html">AttrNopEntry</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1EnvFunc.html">EnvFunc</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1LocalBuilder.html">LocalBuilder</a> (<a class="el" href="namespacetvm_1_1auto__scheduler. [...]
<tr><td valign="top"><a class="el" href="classtvm_1_1detail_1_1AttrNormalVisitor.html">AttrNormalVisitor</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1EnvFuncNode.html">EnvFuncNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1LocalBuilderNode.html">LocalBuilderNode</a> (<a class="el" href="namespa [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1AttrPattern.html">AttrPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1EQ.html">EQ</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1LocalRunner.html">LocalRunner</a> (<a class="el" href="namespacetvm_1_1auto__scheduler [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1AttrPattern.html">AttrPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1EQ.html">EQ</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1LocalRunner.html">LocalRunner</a> (<a class="el" href="namespacetvm_1_1auto__scheduler [...]
<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1AttrPatternNode.html">AttrPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1EQNode.html">EQNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1LocalRunnerNode.html">LocalRunnerNode</a> (<a class="el" href="namespa [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1AttrRegistry.html">AttrRegistry</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ErrorBuilder.html">ErrorBuilder</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LoopRV.html">LoopRV</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)  [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1AttrRegistryMap.html">AttrRegistryMap</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ErrorReporter.html">ErrorReporter</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LoopRVNode.html">LoopRVNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1AttrRegistryMapContainerMap.html">AttrRegistryMapContainerMap</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Evaluate.html">Evaluate</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1LRNAttrs.html">LRNAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html" [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1AttrRegistry.html">AttrRegistry</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ErrorBuilder.html">ErrorBuilder</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LoopRV.html">LoopRV</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)  [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1AttrRegistryMap.html">AttrRegistryMap</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ErrorReporter.html">ErrorReporter</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LoopRVNode.html">LoopRVNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1AttrRegistryMapContainerMap.html">AttrRegistryMapContainerMap</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Evaluate.html">Evaluate</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1LRNAttrs.html">LRNAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html" [...]
<tr><td valign="top"><a class="el" href="classtvm_1_1Attrs.html">Attrs</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1EvaluateNode.html">EvaluateNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LT.html">LT</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top" [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1AttrsNode.html">AttrsNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1vm_1_1Executable.html">Executable</a> (<a class="el" href="namespacetvm_1_1runtime_1_1vm.html">tvm::runtime::vm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LTNode.html">LTNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a> [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1AttrsNode.html">AttrsNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1vm_1_1Executable.html">Executable</a> (<a class="el" href="namespacetvm_1_1runtime_1_1vm.html">tvm::runtime::vm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LTNode.html">LTNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a> [...]
<tr><td valign="top"><a class="el" href="classtvm_1_1detail_1_1AttrsSEqualVisitor.html">AttrsSEqualVisitor</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Executor.html">Executor</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td rowspan="2" valign="bottom"><a name="letter_m"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div c [...]
-</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1RecClosureObj.html">RecClosureObj</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1TensorType.html">TensorType</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td></tr>
-<tr><td valign="top"><a class="el" href="classtvm_1_1detail_1_1AttrsSHashVisitor.html">AttrsSHashVisitor</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ExecutorNode.html">ExecutorNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1RecordReader.html">RecordReader</a> (<a class [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AttrStmt.html">AttrStmt</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ExecutorRegEntry.html">ExecutorRegEntry</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Map.html">Map</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AttrStmtNode.html">AttrStmtNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ExpandDimsAttrs.html">ExpandDimsAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1MapNode.html">MapNode</a> (<a class="el" href="namespacetvm_1_1ru [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1detail_1_1AttrTriggerNonDefaultEntry.html">AttrTriggerNonDefaultEntry</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1tir_1_1ExprDeepEqual.html">ExprDeepEqual</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1MapValuePath.html">MapValuePath</a> (<a class="e [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1AttrVisitor.html">AttrVisitor</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1ExprDoc.html">ExprDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1MapValuePathNode.html">MapValuePathNode</a> (<a class="el" href="namespacetvm.ht [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AutoSchedulerLayoutTransformAttrs.html">AutoSchedulerLayoutTransformAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1ExprDocNode.html">ExprDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1re [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AvgPool1DAttrs.html">AvgPool1DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1ExprFunctor.html">ExprFunctor</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1MatchBufferRegion.html">MatchBufferRegion</a> (<a class="el" href="namesp [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AvgPool2DAttrs.html">AvgPool2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ExprFunctor.html">ExprFunctor</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1MatchBufferRegionNode.html">MatchBufferRegionNode</a> (<a class="el [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AvgPool3DAttrs.html">AvgPool3DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ExprFunctor_3_01R_07const_01Expr_01_6n_00_01Args_8_8_8_08_4.html">ExprFunctor< R(const Expr &n, Args...)></a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href=" [...]
+</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1RecClosureObj.html">RecClosureObj</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1TensorNode.html">TensorNode</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td></tr>
+<tr><td valign="top"><a class="el" href="classtvm_1_1detail_1_1AttrsSHashVisitor.html">AttrsSHashVisitor</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ExecutorNode.html">ExecutorNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1RecordReader.html">RecordReader</a> (<a class [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AttrStmt.html">AttrStmt</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ExecutorRegEntry.html">ExecutorRegEntry</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Map.html">Map</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AttrStmtNode.html">AttrStmtNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ExpandDimsAttrs.html">ExpandDimsAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1MapNode.html">MapNode</a> (<a class="el" href="namespacetvm_1_1ru [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1detail_1_1AttrTriggerNonDefaultEntry.html">AttrTriggerNonDefaultEntry</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1tir_1_1ExprDeepEqual.html">ExprDeepEqual</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1MapValuePath.html">MapValuePath</a> (<a class="e [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1AttrVisitor.html">AttrVisitor</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1ExprDoc.html">ExprDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1MapValuePathNode.html">MapValuePathNode</a> (<a class="el" href="namespacetvm.ht [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AutoSchedulerLayoutTransformAttrs.html">AutoSchedulerLayoutTransformAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1ExprDocNode.html">ExprDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1re [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AvgPool1DAttrs.html">AvgPool1DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1ExprFunctor.html">ExprFunctor</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1MatchBufferRegion.html">MatchBufferRegion</a> (<a class="el" href="namesp [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AvgPool2DAttrs.html">AvgPool2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ExprFunctor.html">ExprFunctor</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1MatchBufferRegionNode.html">MatchBufferRegionNode</a> (<a class="el [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AvgPool3DAttrs.html">AvgPool3DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ExprFunctor_3_01R_07const_01Expr_01_6n_00_01Args_8_8_8_08_4.html">ExprFunctor< R(const Expr &n, Args...)></a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href=" [...]
<tr><td rowspan="2" valign="bottom"><a name="letter_b"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">  b  </div></td></tr></table>
-</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1ExprFunctor_3_01R_07const_01PrimExpr_01_6n_00_01Args_8_8_8_08_4.html">ExprFunctor< R(const PrimExpr &n, Args...)></a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MatmulAttrs.html">MatmulAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="str [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1ExprMutator.html">ExprMutator</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MatrixSetDiagAttrs.html">MatrixSetDiagAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1ReflectionVTable.html">ReflectionVTable</a> (<a class="el" href="namespa [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1BaseAttrsNode.html">BaseAttrsNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ExprMutator.html">ExprMutator</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Max.html">Max</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)    [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1te_1_1BaseComputeOpNode.html">BaseComputeOpNode</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ExprPattern.html">ExprPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1MaxNode.html">MaxNode</a> (<a class="el" href="namespacetvm_1_1tir.html [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1BaseExpr.html">BaseExpr</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ExprPatternNode.html">ExprPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MaxPool1DAttrs.html">MaxPool1DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm:: [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1BaseExprNode.html">BaseExprNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ExprRewriter.html">ExprRewriter</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MaxPool2DAttrs.html">MaxPool2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1BaseFunc.html">BaseFunc</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1ExprStmtDoc.html">ExprStmtDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MaxPool3DAttrs.html">MaxPool3DAttrs</a> (<a class="el" href="namespa [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1BaseFuncNode.html">BaseFuncNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1ExprStmtDocNode.html">ExprStmtDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1MeasureCallback.html">MeasureCallback</a> (< [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1BaseTensorType.html">BaseTensorType</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1ExprVisitor.html">ExprVisitor</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1MeasureCallback.html">MeasureCallback</a> (<a class="el" href="namespacetvm_1_1auto__sc [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1BaseTensorTypeNode.html">BaseTensorTypeNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ExprVisitor.html">ExprVisitor</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1MeasureCallbackNode.html">MeasureCallbackNode</a> (<a class="el" href="nam [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1BaseValueEqual.html">BaseValueEqual</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1ExternOp.html">ExternOp</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1MeasureCallbackNode.html">MeasureCallbackNode</a> (<a class="el" href="namespacetvm_1_1auto__sch [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1BaseValueHash.html">BaseValueHash</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1ExternOpNode.html">ExternOpNode</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1MeasureCandidate.html">MeasureCandidate</a> (<a class="el" href="namespacetvm_1_1meta__sche [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1BatchMatmulAttrs.html">BatchMatmulAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1ExtractedTask.html">ExtractedTask</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1MeasureCandidateNode.h [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1BatchNormAttrs.html">BatchNormAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1ExtractedTaskNode.html">ExtractedTaskNode</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1MeasureInput.html [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1BatchToSpaceNDAttrs.html">BatchToSpaceNDAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncObj_1_1Extractor.html">PackedFuncObj::Extractor</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1MeasureInp [...]
+</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1ExprFunctor_3_01R_07const_01PrimExpr_01_6n_00_01Args_8_8_8_08_4.html">ExprFunctor< R(const PrimExpr &n, Args...)></a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MatmulAttrs.html">MatmulAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="str [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1ExprMutator.html">ExprMutator</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MatrixSetDiagAttrs.html">MatrixSetDiagAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1ReflectionVTable.html">ReflectionVTable</a> (<a class="el" href="namespa [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1BaseAttrsNode.html">BaseAttrsNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ExprMutator.html">ExprMutator</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Max.html">Max</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)    [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1te_1_1BaseComputeOpNode.html">BaseComputeOpNode</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ExprPattern.html">ExprPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1MaxNode.html">MaxNode</a> (<a class="el" href="namespacetvm_1_1tir.html [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1BaseExpr.html">BaseExpr</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ExprPatternNode.html">ExprPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MaxPool1DAttrs.html">MaxPool1DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm:: [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1BaseExprNode.html">BaseExprNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ExprRewriter.html">ExprRewriter</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MaxPool2DAttrs.html">MaxPool2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1BaseFunc.html">BaseFunc</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1ExprStmtDoc.html">ExprStmtDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MaxPool3DAttrs.html">MaxPool3DAttrs</a> (<a class="el" href="namespa [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1BaseFuncNode.html">BaseFuncNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1ExprStmtDocNode.html">ExprStmtDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1MeasureCallback.html">MeasureCallback</a> (< [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1BaseTensorType.html">BaseTensorType</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ExprVisitor.html">ExprVisitor</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1MeasureCallback.html">MeasureCallback</a> (<a class="el" href="namespacetvm_1_1au [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1BaseTensorTypeNode.html">BaseTensorTypeNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1ExprVisitor.html">ExprVisitor</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1MeasureCallbackNode.html">MeasureCallbackNode</a> (<a class="el" href="namespace [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1BaseValueEqual.html">BaseValueEqual</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1ExternOp.html">ExternOp</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1MeasureCallbackNode.html">MeasureCallbackNode</a> (<a class="el" href="namespacetvm_1_1auto__sch [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1BaseValueHash.html">BaseValueHash</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1ExternOpNode.html">ExternOpNode</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1MeasureCandidate.html">MeasureCandidate</a> (<a class="el" href="namespacetvm_1_1meta__sche [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1BatchMatmulAttrs.html">BatchMatmulAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1ExtractedTask.html">ExtractedTask</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1MeasureCandidateNode.h [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1BatchNormAttrs.html">BatchNormAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1ExtractedTaskNode.html">ExtractedTaskNode</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1MeasureInput.html [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1BatchToSpaceNDAttrs.html">BatchToSpaceNDAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncObj_1_1Extractor.html">PackedFuncObj::Extractor</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1MeasureInp [...]
<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1BiasAddAttrs.html">BiasAddAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td rowspan="2" valign="bottom"><a name="letter_f"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">  f  </div></td></tr></table>
-</td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1MeasureResult.html">MeasureResult</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ReorderStep.html">ReorderStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Tuple.htm [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BijectiveLayout.html">BijectiveLayout</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1MeasureResultNode.html">MeasureResultNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ReorderStepNode.htm [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BijectiveLayoutNode.html">BijectiveLayoutNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1FeatureExtractor.html">FeatureExtractor</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1tir_1_1MemCpyDetails.html">MemCpyD [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1BinaryConv2DAttrs.html">BinaryConv2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1FeatureExtractorNode.html">FeatureExtractorNode</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1MemoryInfo.html">MemoryI [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1BinaryDenseAttrs.html">BinaryDenseAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1FeatureSet.html">FeatureSet</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1MemoryInfoNode.html">MemoryInfoNode</a> (<a class="el" href="namespacetv [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BinaryOpNode.html">BinaryOpNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1FIFOBufferAttrs.html">FIFOBufferAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1vm_1_1MemoryManager.html">MemoryManager</a> (<a class="el" href=" [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1BitPackAttrs.html">BitPackAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1FixedPointMultiplyAttrs.html">FixedPointMultiplyAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structMemoryManagerInterface.html">MemoryManagerInterface</a> [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Block.html">Block</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1FixedPointMultiplyPerAxisAttrs.html">FixedPointMultiplyPerAxisAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MeshgridAttrs.html">MeshgridAttrs</a> (<a class="e [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1BlockFrame.html">BlockFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1SeqStmt_1_1Flattener.html">SeqStmt::Flattener</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1 [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1BlockFrameNode.html">BlockFrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1FloatImm.html">FloatImm</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1metadata_1_1MetadataArray.html" [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1tir_1_1BlockInfo.html">BlockInfo</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1FloatImmNode.html">FloatImmNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1metadata_1_1MetadataArrayNode.html">MetadataArrayNode</a> (<a class="el" href="namespacetvm_1_1runtime [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1BlockInitFrame.html">BlockInitFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1FloorDiv.html">FloorDiv</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1metadata_1_1 [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1BlockInitFrameNode.html">BlockInitFrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1FloorDivNode.html">FloorDivNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BlockNode.html">BlockNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1FloorMod.html">FloorMod</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1metadata_1_1MetadataNode.html">MetadataNode</a> (<a class="el" href="namespacetvm_1_1runtime [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BlockRealize.html">BlockRealize</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1FloorModNode.html">FloorModNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MetaScheduleLayoutTransformAttrs.html">MetaScheduleLayoutTransformAttrs</a> (<a [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BlockRealizeNode.html">BlockRealizeNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1FollowFusedSplitStep.html">FollowFusedSplitStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1profiling_1_1Metric [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BlockRV.html">BlockRV</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1FollowFusedSplitStepNode.html">FollowFusedSplitStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1profiling_1_1MetricCollectorN [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BlockRVNode.html">BlockRVNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1FollowSplitStep.html">FollowSplitStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Min.html">Min</a> (<a class="el" href="name [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BlockScope.html">BlockScope</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1FollowSplitStepNode.html">FollowSplitStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1MinNode.html">MinNode</a> (<a class=" [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BlockScopeNode.html">BlockScopeNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1For.html">For</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MirrorPadAttrs.html">MirrorPadAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm: [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1Bool.html">Bool</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1ForDoc.html">ForDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1MissingArrayElementPath.html">MissingArrayElementPath</a> (<a class="el" href="namespacetvm.html [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Broadcast.html">Broadcast</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1ForDocNode.html">ForDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1MissingArrayElementPathNode.html">MissingArrayElementPathNo [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1qnn_1_1BroadcastAttrs.html">BroadcastAttrs</a> (<a class="el" href="namespacetvm_1_1relay_1_1qnn.html">tvm::relay::qnn</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1ForFrame.html">ForFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_ [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BroadcastNode.html">BroadcastNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1ForFrameNode.html">ForFrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1MissingMapEntryPa [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Buffer.html">Buffer</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1ForNode.html">ForNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1MixedModeMutator.html">MixedModeMutator</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::rela [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1vm_1_1Buffer.html">Buffer</a> (<a class="el" href="namespacetvm_1_1runtime_1_1vm.html">tvm::runtime::vm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1Frame.html">Frame</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1MixedModeVisitor.html">MixedModeVisit [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1usmp_1_1BufferInfo.html">BufferInfo</a> (<a class="el" href="namespacetvm_1_1tir_1_1usmp.html">tvm::tir::usmp</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1micro__rpc_1_1FrameBuffer.html">FrameBuffer</a> (<a class="el" href="namespacetvm_1_1runtime_1_1micro__rpc.html">tvm::runtime::micro_rpc</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Mod.html">Mod</ [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1usmp_1_1BufferInfoAnalysis.html">BufferInfoAnalysis</a> (<a class="el" href="namespacetvm_1_1tir_1_1usmp.html">tvm::tir::usmp</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1FrameNode.html">FrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1ModNode.html"> [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1tir_1_1usmp_1_1BufferInfoAnalysisNode.html">BufferInfoAnalysisNode</a> (<a class="el" href="namespacetvm_1_1tir_1_1usmp.html">tvm::tir::usmp</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1micro__rpc_1_1Framer.html">Framer</a> (<a class="el" href="namespacetvm_1_1runtime_1_1micro__rpc.html">tvm::runtime::micro_rpc</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1 [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1tir_1_1usmp_1_1BufferInfoNode.html">BufferInfoNode</a> (<a class="el" href="namespacetvm_1_1tir_1_1usmp.html">tvm::tir::usmp</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ShapeTupleObj_1_1FromStd.html">ShapeTupleObj::FromStd</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ModularSetAnalyzer. [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BufferLoad.html">BufferLoad</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1StringObj_1_1FromStd.html">StringObj::FromStd</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ModularSetNode.html">ModularSetNode</a> (<a class="el" hre [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BufferLoadNode.html">BufferLoadNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Function.html">Function</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Module.html">Module</a> (<a class="el" href="namespacetvm_1_1runtime.html">t [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BufferNode.html">BufferNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1FunctionDoc.html">FunctionDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ModuleNode.html">ModuleNode</a> (<a class="e [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BufferRealize.html">BufferRealize</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1FunctionDocNode.html">FunctionDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Mul.html">Mul</a> (<a class="el" h [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BufferRealizeNode.html">BufferRealizeNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1FunctionNode.html">FunctionNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1MulNode.html">MulNode</a> (<a class="el" href="namespacetvm_1_1tir [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BufferRegion.html">BufferRegion</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1FunctionPattern.html">FunctionPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MultiBoxPriorAttrs.html">MultiBoxPriorAttrs</a> (<a class="el" href [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BufferRegionNode.html">BufferRegionNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1FunctionPatternNode.html">FunctionPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MultiBoxTransformLocAttrs.html">MultiBoxTransformLo [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BufferStore.html">BufferStore</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1FuncType.html">FuncType</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MultinomialAttrs.html">MultinomialAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>) [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BufferStoreNode.html">BufferStoreNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1FuncTypeNode.html">FuncTypeNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">Mutator</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html"> [...]
+</td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1MeasureResult.html">MeasureResult</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ReorderStep.html">ReorderStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1 [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BijectiveLayout.html">BijectiveLayout</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1MeasureResultNode.html">MeasureResultNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ReorderStepNode.htm [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BijectiveLayoutNode.html">BijectiveLayoutNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1FeatureExtractor.html">FeatureExtractor</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1tir_1_1MemCpyDetails.html">MemCpyD [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1BinaryConv2DAttrs.html">BinaryConv2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1FeatureExtractorNode.html">FeatureExtractorNode</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1MemoryInfo.html">MemoryI [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1BinaryDenseAttrs.html">BinaryDenseAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1FeatureSet.html">FeatureSet</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1MemoryInfoNode.html">MemoryInfoNode</a> (<a class="el" href="namespacetv [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BinaryOpNode.html">BinaryOpNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1FIFOBufferAttrs.html">FIFOBufferAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1vm_1_1MemoryManager.html">MemoryManager</a> (<a class="el" href=" [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1BitPackAttrs.html">BitPackAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1FixedPointMultiplyAttrs.html">FixedPointMultiplyAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structMemoryManagerInterface.html">MemoryManagerInterface</a> [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Block.html">Block</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1FixedPointMultiplyPerAxisAttrs.html">FixedPointMultiplyPerAxisAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MeshgridAttrs.html">MeshgridAttrs</a> (<a class="e [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1BlockFrame.html">BlockFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1SeqStmt_1_1Flattener.html">SeqStmt::Flattener</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1 [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1BlockFrameNode.html">BlockFrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1FloatImm.html">FloatImm</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1metadata_1_1MetadataArray.html" [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1tir_1_1BlockInfo.html">BlockInfo</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1FloatImmNode.html">FloatImmNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1metadata_1_1MetadataArrayNode.html">MetadataArrayNode</a> (<a class="el" href="namespacetvm_1_1runtime [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1BlockInitFrame.html">BlockInitFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1FloorDiv.html">FloorDiv</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1metadata_1_1 [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1BlockInitFrameNode.html">BlockInitFrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1FloorDivNode.html">FloorDivNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BlockNode.html">BlockNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1FloorMod.html">FloorMod</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1metadata_1_1MetadataNode.html">MetadataNode</a> (<a class="el" href="namespacetvm_1_1runtime [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BlockRealize.html">BlockRealize</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1FloorModNode.html">FloorModNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MetaScheduleLayoutTransformAttrs.html">MetaScheduleLayoutTransformAttrs</a> (<a [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BlockRealizeNode.html">BlockRealizeNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1FollowFusedSplitStep.html">FollowFusedSplitStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1profiling_1_1Metric [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BlockRV.html">BlockRV</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1FollowFusedSplitStepNode.html">FollowFusedSplitStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1profiling_1_1MetricCollectorN [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BlockRVNode.html">BlockRVNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1FollowSplitStep.html">FollowSplitStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Min.html">Min</a> (<a class="el" href="name [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BlockScope.html">BlockScope</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1FollowSplitStepNode.html">FollowSplitStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1MinNode.html">MinNode</a> (<a class=" [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BlockScopeNode.html">BlockScopeNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1For.html">For</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MirrorPadAttrs.html">MirrorPadAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm: [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1Bool.html">Bool</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1ForDoc.html">ForDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1MissingArrayElementPath.html">MissingArrayElementPath</a> (<a class="el" href="namespacetvm.html [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Broadcast.html">Broadcast</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1ForDocNode.html">ForDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1MissingArrayElementPathNode.html">MissingArrayElementPathNo [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1qnn_1_1BroadcastAttrs.html">BroadcastAttrs</a> (<a class="el" href="namespacetvm_1_1relay_1_1qnn.html">tvm::relay::qnn</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1ForFrame.html">ForFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_ [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BroadcastNode.html">BroadcastNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1ForFrameNode.html">ForFrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1MissingMapEntryPa [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Buffer.html">Buffer</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1ForNode.html">ForNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1MixedModeMutator.html">MixedModeMutator</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::rela [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1vm_1_1Buffer.html">Buffer</a> (<a class="el" href="namespacetvm_1_1runtime_1_1vm.html">tvm::runtime::vm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1Frame.html">Frame</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1MixedModeVisitor.html">MixedModeVisit [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1usmp_1_1BufferInfo.html">BufferInfo</a> (<a class="el" href="namespacetvm_1_1tir_1_1usmp.html">tvm::tir::usmp</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1micro__rpc_1_1FrameBuffer.html">FrameBuffer</a> (<a class="el" href="namespacetvm_1_1runtime_1_1micro__rpc.html">tvm::runtime::micro_rpc</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Mod.html">Mod</ [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1usmp_1_1BufferInfoAnalysis.html">BufferInfoAnalysis</a> (<a class="el" href="namespacetvm_1_1tir_1_1usmp.html">tvm::tir::usmp</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1FrameNode.html">FrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1ModNode.html"> [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1tir_1_1usmp_1_1BufferInfoAnalysisNode.html">BufferInfoAnalysisNode</a> (<a class="el" href="namespacetvm_1_1tir_1_1usmp.html">tvm::tir::usmp</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1micro__rpc_1_1Framer.html">Framer</a> (<a class="el" href="namespacetvm_1_1runtime_1_1micro__rpc.html">tvm::runtime::micro_rpc</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1 [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1tir_1_1usmp_1_1BufferInfoNode.html">BufferInfoNode</a> (<a class="el" href="namespacetvm_1_1tir_1_1usmp.html">tvm::tir::usmp</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ShapeTupleObj_1_1FromStd.html">ShapeTupleObj::FromStd</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ModularSetAnalyzer. [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BufferLoad.html">BufferLoad</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1StringObj_1_1FromStd.html">StringObj::FromStd</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ModularSetNode.html">ModularSetNode</a> (<a class="el" hre [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BufferLoadNode.html">BufferLoadNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Function.html">Function</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Module.html">Module</a> (<a class="el" href="namespacetvm_1_1runtime.html">t [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BufferNode.html">BufferNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1FunctionDoc.html">FunctionDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ModuleNode.html">ModuleNode</a> (<a class="e [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BufferRealize.html">BufferRealize</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1FunctionDocNode.html">FunctionDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Mul.html">Mul</a> (<a class="el" h [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BufferRealizeNode.html">BufferRealizeNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1FunctionNode.html">FunctionNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1MulNode.html">MulNode</a> (<a class="el" href="namespacetvm_1_1tir [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BufferRegion.html">BufferRegion</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1FunctionPattern.html">FunctionPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MultiBoxPriorAttrs.html">MultiBoxPriorAttrs</a> (<a class="el" href [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BufferRegionNode.html">BufferRegionNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1FunctionPatternNode.html">FunctionPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MultiBoxTransformLocAttrs.html">MultiBoxTransformLo [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BufferStore.html">BufferStore</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1FuncType.html">FuncType</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1MultinomialAttrs.html">MultinomialAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>) [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1BufferStoreNode.html">BufferStoreNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1FuncTypeNode.html">FuncTypeNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">Mutator</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html"> [...]
<tr><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1Builder.html">Builder</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1Fuse.html">Fuse</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1MutatorNode.html">MutatorNode</a> (<a class="el" href="namespacetvm_ [...]
-</td><td valign="top"><a class="el" href="classtvm_1_1TypeCallNode.html">TypeCallNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td></tr>
+</td><td valign="top"><a class="el" href="classtvm_1_1TypeCall.html">TypeCall</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td></tr>
<tr><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1BuilderInput.html">BuilderInput</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1FuseNode.html">FuseNode</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td><td rowspan="2" valign="bottom"><a name="letter_n"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div [...]
-</td><td valign="top"><a class="el" href="classtvm_1_1TypeConstraint.html">TypeConstraint</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td></tr>
-<tr><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1BuilderInputNode.html">BuilderInputNode</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1FuseStep.html">FuseStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1ScanOp.html">Sca [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1BuilderNode.html">BuilderNode</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1FuseStepNode.html">FuseStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_ [...]
+</td><td valign="top"><a class="el" href="classtvm_1_1TypeCallNode.html">TypeCallNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td></tr>
+<tr><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1BuilderInputNode.html">BuilderInputNode</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1FuseStep.html">FuseStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1ScanOp.html">Sca [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1BuilderNode.html">BuilderNode</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1FuseStepNode.html">FuseStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_ [...]
<tr><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1BuilderResult.html">BuilderResult</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td rowspan="2" valign="bottom"><a name="letter_g"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">  g  </div></td></tr></table>
-</td><td valign="top"><a class="el" href="classtvm_1_1NameSupply.html">NameSupply</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1ScanOpNode.html">ScanOpNode</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1TypeDataNode.html">TypeDataNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td></tr>
-<tr><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1BuilderResultNode.html">BuilderResultNode</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1NameSupplyNode.html">NameSupplyNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ScatterElementsAttrs.html">ScatterElementsAttrs</a> [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1BuildResult.html">BuildResult</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GatherAttrs.html">GatherAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1NDArray.html">NDArray</a> (<a class="el" [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1BuildResultNode.html">BuildResultNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GatherNDAttrs.html">GatherNDAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1NDArrayContainerTrait.html">NDArrayCon [...]
+</td><td valign="top"><a class="el" href="classtvm_1_1NameSupply.html">NameSupply</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1ScanOpNode.html">ScanOpNode</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1TypeData.html">TypeData</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td></tr>
+<tr><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1BuilderResultNode.html">BuilderResultNode</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1NameSupplyNode.html">NameSupplyNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ScatterElementsAttrs.html">ScatterElementsAttrs</a> [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1BuildResult.html">BuildResult</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GatherAttrs.html">GatherAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1NDArray.html">NDArray</a> (<a class="el" [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1BuildResultNode.html">BuildResultNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GatherNDAttrs.html">GatherNDAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1NDArrayContainerTrait.html">NDArrayCon [...]
<tr><td rowspan="2" valign="bottom"><a name="letter_c"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">  c  </div></td></tr></table>
-</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1GE.html">GE</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1NdarraySizeAttrs.html">NdarraySizeAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1Schedule.html">Schedule</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)& [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1GenericFunc.html">GenericFunc</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1NE.html">NE</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1ScheduleNode.html">ScheduleNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CacheReadStep.html">CacheReadStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1GenericFuncNode.html">GenericFuncNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1NENode.html">NENode</a> (<a class="el" href="namespacetvm_ [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CacheReadStepNode.html">CacheReadStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1GENode.html">GENode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1NLLLossAttrs.html">NLLLossAttrs</a> (<a class= [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CacheWriteStep.html">CacheWriteStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GetValidCountsAttrs.html">GetValidCountsAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1NodeFunctor.html">NodeFunctor [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CacheWriteStepNode.html">CacheWriteStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GlobalPool2DAttrs.html">GlobalPool2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1NodeFunctor_3_01R_07const [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Call.html">Call</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1GlobalTypeVar.html">GlobalTypeVar</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1NonMaximumSuppressionAttrs.html">NonMaximumSuppressionAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html" [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Call.html">Call</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1GlobalTypeVarNode.html">GlobalTypeVarNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1NormalAttrs.html">NormalAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>) [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1CallDoc.html">CallDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classGlobalVar.html">GlobalVar</a>   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Not.html">Not</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" h [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1CallDocNode.html">CallDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1GlobalVar.html">GlobalVar</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1NotNode.html">NotNode</a> (<a class="el" href="namespacetvm_1_1tir.ht [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1profiling_1_1CallFrame.html">CallFrame</a> (<a class="el" href="namespacetvm_1_1runtime_1_1profiling.html">tvm::runtime::profiling</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1GlobalVarNode.html">GlobalVarNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1NullOptType.html">NullOptType</a> (<a class="el" h [...]
+</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1GE.html">GE</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1NdarraySizeAttrs.html">NdarraySizeAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1Schedule.html">Schedule</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)& [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1GenericFunc.html">GenericFunc</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1NE.html">NE</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1ScheduleContext.html">ScheduleContext</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td>< [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CacheReadStep.html">CacheReadStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1GenericFuncNode.html">GenericFuncNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1NENode.html">NENode</a> (<a class="el" href="namespacetvm_ [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CacheReadStepNode.html">CacheReadStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1GENode.html">GENode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1NLLLossAttrs.html">NLLLossAttrs</a> (<a class= [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CacheWriteStep.html">CacheWriteStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GetValidCountsAttrs.html">GetValidCountsAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1NodeFunctor.html">NodeFunctor [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CacheWriteStepNode.html">CacheWriteStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GlobalPool2DAttrs.html">GlobalPool2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1NodeFunctor_3_01R_07const [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Call.html">Call</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1GlobalTypeVar.html">GlobalTypeVar</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1NonMaximumSuppressionAttrs.html">NonMaximumSuppressionAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html" [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Call.html">Call</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1GlobalTypeVarNode.html">GlobalTypeVarNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1NormalAttrs.html">NormalAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>) [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1CallDoc.html">CallDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classGlobalVar.html">GlobalVar</a>   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Not.html">Not</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" h [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1CallDocNode.html">CallDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1GlobalVar.html">GlobalVar</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1NotNode.html">NotNode</a> (<a class="el" href="namespacetvm_1_1tir.ht [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1profiling_1_1CallFrame.html">CallFrame</a> (<a class="el" href="namespacetvm_1_1runtime_1_1profiling.html">tvm::runtime::profiling</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1GlobalVarNode.html">GlobalVarNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1NullOptType.html">NullOptType</a> (<a class="el" h [...]
<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1CallLoweredAttrs.html">CallLoweredAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1GlobalVarSupply.html">GlobalVarSupply</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td rowspan="2" valign="bottom"><a name="letter_o"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah"> & [...]
-</td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1SearchCallbackNode.html">SearchCallbackNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1TypeName_3_01uint64__t_01_4.html">TypeName< uint64_t ></a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td></tr>
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CallNode.html">CallNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1GlobalVarSupplyNode.html">GlobalVarSupplyNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1SearchPolicy.html">SearchPolicy</a> (<a class="el" href="namespacetvm_1_1auto__sche [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1CallNode.html">CallNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1usmp_1_1algo_1_1GreedyBase.html">GreedyBase</a> (<a class="el" href="namespacetvm_1_1tir_1_1usmp_1_1algo.html">tvm::tir::usmp::algo</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ObjAllocatorBase.html">ObjAllocatorBase< [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1CallPattern.html">CallPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GridSampleAttrs.html">GridSampleAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Object.html">Object</a> (<a class="el" href="namespacetvm_1_1 [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1CallPatternNode.html">CallPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GroupNormAttrs.html">GroupNormAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectEqual.html">ObjectEqual</a> (<a class="el" href= [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1CanonicalSimplifier.html">CanonicalSimplifier</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1GT.html">GT</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectHash.html">ObjectHash</a> (<a class="el" href="namespacetvm_1_1runtime. [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Cast.html">Cast</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1GTNode.html">GTNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1ObjectPath.html">ObjectPath</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="to [...]
+</td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1SearchCallback.html">SearchCallback</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1TypeName_3_01int64__t_01_4.html">TypeName< int64_t ></a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td></tr>
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CallNode.html">CallNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1GlobalVarSupplyNode.html">GlobalVarSupplyNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1SearchCallbackNode.html">SearchCallbackNode</a> (<a class="el" href="namespacetvm_1 [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1CallNode.html">CallNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1usmp_1_1algo_1_1GreedyBase.html">GreedyBase</a> (<a class="el" href="namespacetvm_1_1tir_1_1usmp_1_1algo.html">tvm::tir::usmp::algo</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ObjAllocatorBase.html">ObjAllocatorBase< [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1CallPattern.html">CallPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GridSampleAttrs.html">GridSampleAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Object.html">Object</a> (<a class="el" href="namespacetvm_1_1 [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1CallPatternNode.html">CallPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GroupNormAttrs.html">GroupNormAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectEqual.html">ObjectEqual</a> (<a class="el" href= [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1CanonicalSimplifier.html">CanonicalSimplifier</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1GT.html">GT</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectHash.html">ObjectHash</a> (<a class="el" href="namespacetvm_1_1runtime. [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Cast.html">Cast</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1GTNode.html">GTNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1ObjectPath.html">ObjectPath</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="to [...]
<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1CastAttrs.html">CastAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td rowspan="2" valign="bottom"><a name="letter_h"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">  h  </div></td></tr></table>
-</td><td valign="top"><a class="el" href="classtvm_1_1ObjectPathNode.html">ObjectPathNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1SearchTask.html">SearchTask</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1TypeReporter.html">TypeReporter</a> (<a class="el" href="namespacetvm.ht [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1CastHintAttrs.html">CastHintAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1ObjectPathPair.html">ObjectPathPair</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1SearchTaskNode.html">SearchTaskNode</a> (<a class="el" href="namespacetvm_1_ [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CastNode.html">CastNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1SimpleObjAllocator_1_1Handler.html">SimpleObjAllocator::Handler</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1ObjectPathPairNode.html">ObjectPathPairNode</a> (<a c [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1ClassDoc.html">ClassDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1SHashReducer_1_1Handler.html">SHashReducer::Handler</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ObjectPtr.html">ObjectPtr</a> (<a class="el" [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1ClassDocNode.html">ClassDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1SEqualReducer_1_1Handler.html">SEqualReducer::Handler</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectPtrEqual.html">ObjectPtrEqua [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Clause.html">Clause</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structdmlc_1_1serializer_1_1Handler_3_01DLDataType_01_4.html">Handler< DLDataType ></a> (<a class="el" href="namespacedmlc_1_1serializer.html">dmlc::serializer</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectPtrHash.html">ObjectPt [...]
+</td><td valign="top"><a class="el" href="classtvm_1_1ObjectPathNode.html">ObjectPathNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1meta__schedule_1_1SearchStrategyNode.html">SearchStrategyNode</a> (<a class="el" href="namespacetvm_1_1meta__schedule.html">tvm::meta_schedule</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1TypeRelationNode.html">TypeRelationNode</a> (<a class="el" [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1CastHintAttrs.html">CastHintAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1ObjectPathPair.html">ObjectPathPair</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1SearchTask.html">SearchTask</a> (<a class="el" href="namespacetvm_1_1auto__s [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CastNode.html">CastNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1SimpleObjAllocator_1_1Handler.html">SimpleObjAllocator::Handler</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1ObjectPathPairNode.html">ObjectPathPairNode</a> (<a c [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1ClassDoc.html">ClassDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1SEqualReducer_1_1Handler.html">SEqualReducer::Handler</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ObjectPtr.html">ObjectPtr</a> (<a class="el [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1ClassDocNode.html">ClassDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1SHashReducer_1_1Handler.html">SHashReducer::Handler</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectPtrEqual.html">ObjectPtrEqual< [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Clause.html">Clause</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structdmlc_1_1serializer_1_1Handler_3_01DLDataType_01_4.html">Handler< DLDataType ></a> (<a class="el" href="namespacedmlc_1_1serializer.html">dmlc::serializer</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectPtrHash.html">ObjectPt [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ClauseNode.html">ClauseNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structdmlc_1_1serializer_1_1Handler_3_01DLDevice_01_4.html">Handler< DLDevice ></a> (<a class="el" href="namespacedmlc_1_1serializer.html">dmlc::serializer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">ObjectRef [...]
</td></tr>
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ClauseNode.html">ClauseNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structdmlc_1_1serializer_1_1Handler_3_01DLDevice_01_4.html">Handler< DLDevice ></a> (<a class="el" href="namespacedmlc_1_1serializer.html">dmlc::serializer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">ObjectRef [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ClipAttrs.html">ClipAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1HardwareParams.html">HardwareParams</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectTypeChecker.html">ObjectTypeChe [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Closure.html">Closure</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1HardwareParamsNode.html">HardwareParamsNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectTypeChecker_3_01Array_ [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ClosureObj.html">ClosureObj</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1HybridOp.html">HybridOp</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectTypeChecker_3_01Map_3_01K_00_01V_01_4_01_4.html">ObjectTypeChecker< Map [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CmpOpNode.html">CmpOpNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1HybridOpNode.html">HybridOpNode</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1OnDeviceAttrs.html">OnDeviceAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">t [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ClipAttrs.html">ClipAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1HardwareParams.html">HardwareParams</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectTypeChecker.html">ObjectTypeChe [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Closure.html">Closure</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1HardwareParamsNode.html">HardwareParamsNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectTypeChecker_3_01Array_ [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ClosureObj.html">ClosureObj</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1HybridOp.html">HybridOp</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectTypeChecker_3_01Map_3_01K_00_01V_01_4_01_4.html">ObjectTypeChecker< Map [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CmpOpNode.html">CmpOpNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1HybridOpNode.html">HybridOpNode</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1OnDeviceAttrs.html">OnDeviceAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">t [...]
<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1CommentDoc.html">CommentDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td rowspan="2" valign="bottom"><a name="letter_i"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">  i  </div></td></tr></table>
-</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1OneHotAttrs.html">OneHotAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1SeqStmtNode.html">SeqStmtNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1UnknownAttributeAccessPathNode.html">UnknownAttributeAccessPathNode</a> (<a class="el [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1CommentDocNode.html">CommentDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1Op.html">Op</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1SEqualHandlerDefault.html">SEqualHandlerDefault</a> (<a class="el" href="namespacetv [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CommReducer.html">CommReducer</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Id.html">Id</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1OpAttrMap.html">OpAttrMap</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CommReducerNode.html">CommReducerNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1IdDoc.html">IdDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1Operation.html">Operation</a> (<a class="el" href=" [...]
+</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1OneHotAttrs.html">OneHotAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1SeqStmt.html">SeqStmt</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1UnknownAttributeAccessPath.html">UnknownAttributeAccessPath</a> (<a class="el" href="namespac [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1CommentDocNode.html">CommentDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1Op.html">Op</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1SeqStmtNode.html">SeqStmtNode</a> (<a class="el" href="namespacetvm_1_1tir.ht [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CommReducer.html">CommReducer</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Id.html">Id</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1OpAttrMap.html">OpAttrMap</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CommReducerNode.html">CommReducerNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1IdDoc.html">IdDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1Operation.html">Operation</a> (<a class="el" href=" [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1CompilationConfig.html">CompilationConfig</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1IdDocNode.html">IdDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1OperationDoc.html">OperationDoc</a> (<a cla [...]
</td></tr>
-<tr><td valign="top"><a class="el" href="classtvm_1_1CompilationConfig.html">CompilationConfig</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1IdDocNode.html">IdDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1OperationDoc.html">OperationDoc</a> (<a cla [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1CompilationConfigNode.html">CompilationConfigNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1IdNode.html">IdNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1OperationDocNode.html">OperationDocNode</a> (<a class="el" href="namespacet [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1CompileError.html">CompileError</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1If.html">If</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1OperationNode.html">OperationNode</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1CompilerAttrs.html">CompilerAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1IfDoc.html">IfDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpImplementation.html">OpImplementation</a> ( [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeAtStep.html">ComputeAtStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1IfDocNode.html">IfDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpImpleme [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeAtStepNode.html">ComputeAtStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1IfFrame.html">IfFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" hr [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeDAG.html">ComputeDAG</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1IfFrameNode.html">IfFrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="cl [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeDAGNode.html">ComputeDAGNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1IfNode.html">IfNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpSpecialization.html">OpSpecialization</a> (<a [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeInlineStep.html">ComputeInlineStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1IfPattern.html">IfPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpSpecializationNode.html">OpSpecia [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeInlineStepNode.html">ComputeInlineStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1IfPatternNode.html">IfPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpStrategy.html">Op [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1te_1_1ComputeOp.html">ComputeOp</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IfThenElse.html">IfThenElse</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpStrategyNode.html">OpStrategyNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm: [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1te_1_1ComputeOpNode.html">ComputeOpNode</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IfThenElseNode.html">IfThenElseNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Optional.html">Optional</a> (<a class="el" href="namespacetvm_1_1runtime.ht [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeRootStep.html">ComputeRootStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplSEqualReduce.html">ImplSEqualReduce</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Or.html">Or</a> (<a cla [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeRootStepNode.html">ComputeRootStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplSEqualReduce_3_01T_00_01true_01_4.html">ImplSEqualReduce< T, true ></a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" hr [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1CompilationConfigNode.html">CompilationConfigNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1IdNode.html">IdNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1OperationDocNode.html">OperationDocNode</a> (<a class="el" href="namespacet [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1CompileError.html">CompileError</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1If.html">If</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1te_1_1OperationNode.html">OperationNode</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1CompilerAttrs.html">CompilerAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1IfDoc.html">IfDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpImplementation.html">OpImplementation</a> ( [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeAtStep.html">ComputeAtStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1IfDocNode.html">IfDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpImpleme [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeAtStepNode.html">ComputeAtStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1IfFrame.html">IfFrame</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" hr [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeDAG.html">ComputeDAG</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1ir__builder_1_1tir_1_1IfFrameNode.html">IfFrameNode</a> (<a class="el" href="namespacetvm_1_1script_1_1ir__builder_1_1tir.html">tvm::script::ir_builder::tir</a>)   </td><td valign="top"><a class="el" href="cl [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeDAGNode.html">ComputeDAGNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1IfNode.html">IfNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpSpecialization.html">OpSpecialization</a> (<a [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeInlineStep.html">ComputeInlineStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1IfPattern.html">IfPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpSpecializationNode.html">OpSpecia [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeInlineStepNode.html">ComputeInlineStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1IfPatternNode.html">IfPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpStrategy.html">Op [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1te_1_1ComputeOp.html">ComputeOp</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IfThenElse.html">IfThenElse</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpStrategyNode.html">OpStrategyNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm: [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1te_1_1ComputeOpNode.html">ComputeOpNode</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IfThenElseNode.html">IfThenElseNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Optional.html">Optional</a> (<a class="el" href="namespacetvm_1_1runtime.ht [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeRootStep.html">ComputeRootStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplSEqualReduce.html">ImplSEqualReduce</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Or.html">Or</a> (<a cla [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeRootStepNode.html">ComputeRootStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplSEqualReduce_3_01T_00_01true_01_4.html">ImplSEqualReduce< T, true ></a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" hr [...]
<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ConcatenateAttrs.html">ConcatenateAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplSHashReduce.html">ImplSHashReduce</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td rowspan="2" valign="bottom"><a name="letter_p"></a><table border="0" cellspacing="0" cellpadding="0"><t [...]
-</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1SignaturePrinter.html">SignaturePrinter</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1VirtualDeviceNode.html">VirtualDeviceNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td></tr>
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Constant.html">Constant</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplSHashReduce_3_01T_00_01true_01_4.html">ImplSHashReduce< T, true ></a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1SimpleObjAllocator.html">S [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1ConstantInfo.html">ConstantInfo</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplVisitAttrs.html">ImplVisitAttrs</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1PackedFunc.html">PackedFunc</a> (<a class="el" href="namespacetvm_1_1runtime.html"> [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1metadata_1_1ConstantInfoMetadata.html">ConstantInfoMetadata</a> (<a class="el" href="namespacetvm_1_1runtime_1_1metadata.html">tvm::runtime::metadata</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplVisitAttrs_3_01T_00_01true_01_4.html">ImplVisitAttrs< T, true ></a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a clas [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1metadata_1_1ConstantInfoMetadataNode.html">ConstantInfoMetadataNode</a> (<a class="el" href="namespacetvm_1_1runtime_1_1metadata.html">tvm::runtime::metadata</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1IncompleteType.html">IncompleteType</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1PackedFuncSubObj.html"> [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1ConstantInfoNode.html">ConstantInfoNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1IncompleteTypeNode.html">IncompleteTypeNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter.html">PackedFuncValueConverter</a> (<a class="el" href="namespacetvm_ [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1ConstantMemoryPools.html">ConstantMemoryPools</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IndexDataTypeNormalizer.html">IndexDataTypeNormalizer</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter_3_01Optional_3_01T_01_4_01_4.html"> [...]
+</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1ShuffleNode.html">ShuffleNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1VirtualDeviceCache.html">VirtualDeviceCache</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td></tr>
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Constant.html">Constant</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplSHashReduce_3_01T_00_01true_01_4.html">ImplSHashReduce< T, true ></a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1SignaturePrinter.html">Si [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1ConstantInfo.html">ConstantInfo</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplVisitAttrs.html">ImplVisitAttrs</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1PackedFunc.html">PackedFunc</a> (<a class="el" href="namespacetvm_1_1runtime.html"> [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1metadata_1_1ConstantInfoMetadata.html">ConstantInfoMetadata</a> (<a class="el" href="namespacetvm_1_1runtime_1_1metadata.html">tvm::runtime::metadata</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplVisitAttrs_3_01T_00_01true_01_4.html">ImplVisitAttrs< T, true ></a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)   </td><td valign="top"><a clas [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1metadata_1_1ConstantInfoMetadataNode.html">ConstantInfoMetadataNode</a> (<a class="el" href="namespacetvm_1_1runtime_1_1metadata.html">tvm::runtime::metadata</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1IncompleteType.html">IncompleteType</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1PackedFuncSubObj.html"> [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1ConstantInfoNode.html">ConstantInfoNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1IncompleteTypeNode.html">IncompleteTypeNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter.html">PackedFuncValueConverter</a> (<a class="el" href="namespacetvm_ [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1ConstantMemoryPools.html">ConstantMemoryPools</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IndexDataTypeNormalizer.html">IndexDataTypeNormalizer</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter_3_01Optional_3_01T_01_4_01_4.html"> [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1ConstantMemoryPoolsNode.html">ConstantMemoryPoolsNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IndexDataTypeRewriter.html">IndexDataTypeRewriter</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter_3_01PrimExpr_01_4.html">Packed [...]
</td></tr>
-<tr><td valign="top"><a class="el" href="structtvm_1_1ConstantMemoryPoolsNode.html">ConstantMemoryPoolsNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IndexDataTypeRewriter.html">IndexDataTypeRewriter</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter_3_01PrimExpr_01_4.html">Packed [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ConstantNode.html">ConstantNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1IndexDoc.html">IndexDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter_3_01tvm_1_1Boo [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ConstantPattern.html">ConstantPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1IndexDocNode.html">IndexDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter_ [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ConstantPatternNode.html">ConstantPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IndexMap.html">IndexMap</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter_3_1_1tvm_1_1runtime_1_1String_01_4.html" [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1ConstantPoolInfo.html">ConstantPoolInfo</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IndexMapNode.html">IndexMapNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1micro__rpc_1_1PacketFieldSizeBytes.html">PacketFieldSizeBytes</a> (<a class="el" href="na [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1ConstantPoolInfoNode.html">ConstantPoolInfoNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1InitOpAttrs.html">InitOpAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1PadAttrs.html">PadAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html" [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ConstIntBound.html">ConstIntBound</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1InplaceArrayBase.html">InplaceArrayBase</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1Pass.html">Pass</a> (<a class="el" href="namespa [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ConstIntBoundAnalyzer.html">ConstIntBoundAnalyzer</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1InstanceNormAttrs.html">InstanceNormAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassContext.html">PassContext</a> ( [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ConstIntBoundNode.html">ConstIntBoundNode</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Instruction.html">Instruction</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassContextNode.html">PassContextNode</a> (<a class="el" href= [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ConstraintContext.html">ConstraintContext</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1vm_1_1Instruction.html">Instruction</a> (<a class="el" href="namespacetvm_1_1runtime_1_1vm.html">tvm::runtime::vm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassInfo.html">PassInfo</a> (<a c [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1Constructor.html">Constructor</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1InstructionKind.html">InstructionKind</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassInfoNode.html">PassInfoNode</a> (<a class="el" href="namespacetvm_1_1transform.html">tv [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1ConstructorNode.html">ConstructorNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1InstructionKindNode.html">InstructionKindNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1instrument_1_1PassInstrument.html">PassInstrument</a> (<a class="el" href="namespacetvm_ [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ConstructorValue.html">ConstructorValue</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1InstructionKindRegEntry.html">InstructionKindRegEntry</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1instrument_1_1PassInstrumentNode.html">PassInstrument [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ConstructorValueObj.html">ConstructorValueObj</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1InstructionNode.html">InstructionNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassNode.html">PassNode</a> (<a class="el" href=" [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1NDArray_1_1Container.html">NDArray::Container</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IntConstraints.html">IntConstraints</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Pattern.html">Pattern</a> (<a class="el" hre [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1NDArray_1_1ContainerBase.html">NDArray::ContainerBase</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IntConstraintsNode.html">IntConstraintsNode</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1PatternConstructor.html">Pat [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Conv1DAttrs.html">Conv1DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IntConstraintsTransform.html">IntConstraintsTransform</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1PatternConstructorNode.html">PatternConstructor [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Conv1DTransposeAttrs.html">Conv1DTransposeAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IntConstraintsTransformNode.html">IntConstraintsTransformNode</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1PatternFunctor.html"> [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Conv2DAttrs.html">Conv2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1Integer.html">Integer</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1PatternFunctor_3_01R_07const_01Pattern_01_6n_00_01Args_8_8_8_08_4.html">PatternFunctor< R(const Patte [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Conv2DTransposeAttrs.html">Conv2DTransposeAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1InterpreterClosure.html">InterpreterClosure</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1PatternMutator.html">PatternMutator</a> [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ConstantNode.html">ConstantNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1IndexDoc.html">IndexDoc</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter_3_01tvm_1_1Boo [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ConstantPattern.html">ConstantPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1script_1_1printer_1_1IndexDocNode.html">IndexDocNode</a> (<a class="el" href="namespacetvm_1_1script_1_1printer.html">tvm::script::printer</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter_ [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ConstantPatternNode.html">ConstantPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IndexMap.html">IndexMap</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter_3_1_1tvm_1_1runtime_1_1String_01_4.html" [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1ConstantPoolInfo.html">ConstantPoolInfo</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IndexMapNode.html">IndexMapNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1micro__rpc_1_1PacketFieldSizeBytes.html">PacketFieldSizeBytes</a> (<a class="el" href="na [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1ConstantPoolInfoNode.html">ConstantPoolInfoNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1InitOpAttrs.html">InitOpAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1PadAttrs.html">PadAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html" [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ConstIntBound.html">ConstIntBound</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1InplaceArrayBase.html">InplaceArrayBase</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1Pass.html">Pass</a> (<a class="el" href="namespa [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ConstIntBoundAnalyzer.html">ConstIntBoundAnalyzer</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1InstanceNormAttrs.html">InstanceNormAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassContext.html">PassContext</a> ( [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ConstIntBoundNode.html">ConstIntBoundNode</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Instruction.html">Instruction</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassContextNode.html">PassContextNode</a> (<a class="el" href= [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ConstraintContext.html">ConstraintContext</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1vm_1_1Instruction.html">Instruction</a> (<a class="el" href="namespacetvm_1_1runtime_1_1vm.html">tvm::runtime::vm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassInfo.html">PassInfo</a> (<a c [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1Constructor.html">Constructor</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1InstructionKind.html">InstructionKind</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassInfoNode.html">PassInfoNode</a> (<a class="el" href="namespacetvm_1_1transform.html">tv [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1ConstructorNode.html">ConstructorNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1InstructionKindNode.html">InstructionKindNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1instrument_1_1PassInstrument.html">PassInstrument</a> (<a class="el" href="namespacetvm_ [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ConstructorValue.html">ConstructorValue</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1InstructionKindRegEntry.html">InstructionKindRegEntry</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1instrument_1_1PassInstrumentNode.html">PassInstrument [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ConstructorValueObj.html">ConstructorValueObj</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1InstructionNode.html">InstructionNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassNode.html">PassNode</a> (<a class="el" href=" [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1NDArray_1_1Container.html">NDArray::Container</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IntConstraints.html">IntConstraints</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Pattern.html">Pattern</a> (<a class="el" hre [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1NDArray_1_1ContainerBase.html">NDArray::ContainerBase</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IntConstraintsNode.html">IntConstraintsNode</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1PatternConstructor.html">Pat [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Conv1DAttrs.html">Conv1DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IntConstraintsTransform.html">IntConstraintsTransform</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1PatternConstructorNode.html">PatternConstructor [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Conv1DTransposeAttrs.html">Conv1DTransposeAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IntConstraintsTransformNode.html">IntConstraintsTransformNode</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1PatternFunctor.html"> [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Conv2DAttrs.html">Conv2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1Integer.html">Integer</a> (<a class="el" href="namespacetvm.html">tvm</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1PatternFunctor_3_01R_07const_01Pattern_01_6n_00_01Args_8_8_8_08_4.html">PatternFunctor< R(const Patte [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Conv2DTransposeAttrs.html">Conv2DTransposeAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1InterpreterClosure.html">InterpreterClosure</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1PatternMutator.html">PatternMutator</a> [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Conv2DWinogradAttrs.html">Conv2DWinogradAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1InterpreterClosureObj.html">InterpreterClosureObj</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1PatternNode.html">PatternNode</a> ( [...]
</td></tr>
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Conv2DWinogradAttrs.html">Conv2DWinogradAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1InterpreterClosureObj.html">InterpreterClosureObj</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1PatternNode.html">PatternNode</a> ( [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Conv2DWinogradNNPACKWeightTransformAttrs.html">Conv2DWinogradNNPACKWeightTransformAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IntGroupBounds.html">IntGroupBounds</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Pattern [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Conv3DAttrs.html">Conv3DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IntGroupBoundsNode.html">IntGroupBoundsNode</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1PatternTupleNode.html">PatternTupleNode</a> (<a class="el [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Conv2DWinogradNNPACKWeightTransformAttrs.html">Conv2DWinogradNNPACKWeightTransformAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IntGroupBounds.html">IntGroupBounds</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Pattern [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Conv3DAttrs.html">Conv3DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IntGroupBoundsNode.html">IntGroupBoundsNode</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)   </td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1PatternTupleNode.html">PatternTupleNode</a> (<a class="el [...]
<tr><td></td><td></td><td></td><td></td><td></td></tr>
</table>
<div class="qindex"><a class="qindex" href="#letter_a">a</a> | <a class="qindex" href="#letter_b">b</a> | <a class="qindex" href="#letter_c">c</a> | <a class="qindex" href="#letter_d">d</a> | <a class="qindex" href="#letter_e">e</a> | <a class="qindex" href="#letter_f">f</a> | <a class="qindex" href="#letter_g">g</a> | <a class="qindex" href="#letter_h">h</a> | <a class="qindex" href="#letter_i">i</a> |& [...]
diff --git a/docs/reference/api/doxygen/classtvm_1_1runtime_1_1ObjectRef.html b/docs/reference/api/doxygen/classtvm_1_1runtime_1_1ObjectRef.html
index 710bb47462..5845b67bc2 100644
--- a/docs/reference/api/doxygen/classtvm_1_1runtime_1_1ObjectRef.html
+++ b/docs/reference/api/doxygen/classtvm_1_1runtime_1_1ObjectRef.html
@@ -81,7 +81,7 @@ $(function() {
<p><code>#include <<a class="el" href="object_8h_source.html">object.h</a>></code></p>
-<p>Inherited by <a class="el" href="classtvm_1_1runtime_1_1Array.html">tvm::runtime::Array< Range ></a>, <a class="el" href="classtvm_1_1runtime_1_1Array.html">tvm::runtime::Array< Region ></a>, <a class="el" href="classtvm_1_1runtime_1_1Array.html">tvm::runtime::Array< tvm::arith::IterSplitExpr ></a>, <a class="el" href="classtvm_1_1runtime_1_1Array.html">tvm::runtime::Array< tvm::arith::IterSumExpr ></a>, <a class="el" href="classtvm_1_1runtime_1_1Array.html">tv [...]
+<p>Inherited by <a class="el" href="classtvm_1_1runtime_1_1Array.html">tvm::runtime::Array< Range ></a>, <a class="el" href="classtvm_1_1runtime_1_1Array.html">tvm::runtime::Array< Region ></a>, <a class="el" href="classtvm_1_1runtime_1_1Array.html">tvm::runtime::Array< tvm::arith::IterSplitExpr ></a>, <a class="el" href="classtvm_1_1runtime_1_1Array.html">tvm::runtime::Array< tvm::arith::IterSumExpr ></a>, <a class="el" href="classtvm_1_1runtime_1_1Array.html">tv [...]
<div class="dynheader">
Collaboration diagram for tvm::runtime::ObjectRef:</div>
<div class="dyncontent">
diff --git a/docs/reference/api/doxygen/classtvm_1_1runtime_1_1ObjectRef__coll__graph.svg b/docs/reference/api/doxygen/classtvm_1_1runtime_1_1ObjectRef__coll__graph.svg
index 44286f6163..aa6113fd16 100644
--- a/docs/reference/api/doxygen/classtvm_1_1runtime_1_1ObjectRef__coll__graph.svg
+++ b/docs/reference/api/doxygen/classtvm_1_1runtime_1_1ObjectRef__coll__graph.svg
@@ -9,9 +9,9 @@
<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 453)">
<title>tvm::runtime::ObjectRef</title>
<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-453 144,-453 144,4 -4,4"/>
-<!-- Node486 -->
+<!-- Node488 -->
<g id="node1" class="node">
-<title>Node486</title>
+<title>Node488</title>
<polygon fill="#bfbfbf" stroke="#000000" points="3,-.5 3,-222.5 137,-222.5 137,-.5 3,-.5"/>
<text text-anchor="middle" x="70" y="-210.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectRef</text>
<polyline fill="none" stroke="#000000" points="3,-203.5 137,-203.5 "/>
@@ -34,9 +34,9 @@
<text text-anchor="start" x="11" y="-18.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># FFIClearAfterMove()</text>
<text text-anchor="start" x="11" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetDataPtr()</text>
</g>
-<!-- Node487 -->
+<!-- Node489 -->
<g id="node2" class="node">
-<title>Node487</title>
+<title>Node489</title>
<g id="a_node2"><a xlink:href="classtvm_1_1runtime_1_1ObjectPtr.html" target="_top" xlink:title="{tvm::runtime::ObjectPtr\l\< tvm::runtime::Object \>\n||+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ~ObjectPtr()\l+ swap()\l+ get()\l+ operator-\>()\land 11 more...\l}">
<polygon fill="#ffffff" stroke="#000000" points="0,-270.5 0,-448.5 140,-448.5 140,-270.5 0,-270.5"/>
<text text-anchor="start" x="8" y="-436.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectPtr</text>
@@ -58,9 +58,9 @@
</a>
</g>
</g>
-<!-- Node487->Node486 -->
+<!-- Node489->Node488 -->
<g id="edge1" class="edge">
-<title>Node487->Node486</title>
+<title>Node489->Node488</title>
<path fill="none" stroke="#404040" d="M70,-270.3167C70,-258.8765 70,-247.0062 70,-235.1402"/>
<polygon fill="none" stroke="#404040" points="70.0001,-234.7944 66,-228.7944 70,-222.7944 74,-228.7943 70.0001,-234.7944"/>
<text text-anchor="middle" x="89.5" y="-244" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> #data_</text>
diff --git a/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleContext-members.html b/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleContext-members.html
new file mode 100644
index 0000000000..f252e9eeaf
--- /dev/null
+++ b/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleContext-members.html
@@ -0,0 +1,81 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
+<meta http-equiv="X-UA-Compatible" content="IE=9"/>
+<meta name="generator" content="Doxygen 1.8.13"/>
+<meta name="viewport" content="width=device-width, initial-scale=1"/>
+<title>tvm: Member List</title>
+<link href="tabs.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="jquery.js"></script>
+<script type="text/javascript" src="dynsections.js"></script>
+<link href="search/search.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="search/searchdata.js"></script>
+<script type="text/javascript" src="search/search.js"></script>
+<link href="doxygen.css" rel="stylesheet" type="text/css" />
+</head>
+<body>
+<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
+<div id="titlearea">
+<table cellspacing="0" cellpadding="0">
+ <tbody>
+ <tr style="height: 56px;">
+ <td id="projectalign" style="padding-left: 0.5em;">
+ <div id="projectname">tvm
+ </div>
+ </td>
+ </tr>
+ </tbody>
+</table>
+</div>
+<!-- end header part -->
+<!-- Generated by Doxygen 1.8.13 -->
+<script type="text/javascript">
+var searchBox = new SearchBox("searchBox", "search",false,'Search');
+</script>
+<script type="text/javascript" src="menudata.js"></script>
+<script type="text/javascript" src="menu.js"></script>
+<script type="text/javascript">
+$(function() {
+ initMenu('',true,false,'search.php','Search');
+ $(document).ready(function() { init_search(); });
+});
+</script>
+<div id="main-nav"></div>
+<!-- window showing the filter options -->
+<div id="MSearchSelectWindow"
+ onmouseover="return searchBox.OnSearchSelectShow()"
+ onmouseout="return searchBox.OnSearchSelectHide()"
+ onkeydown="return searchBox.OnSearchSelectKey(event)">
+</div>
+
+<!-- iframe showing the search results (closed by default) -->
+<div id="MSearchResultsWindow">
+<iframe src="javascript:void(0)" frameborder="0"
+ name="MSearchResults" id="MSearchResults">
+</iframe>
+</div>
+
+<div id="nav-path" class="navpath">
+ <ul>
+<li class="navelem"><a class="el" href="namespacetvm.html">tvm</a></li><li class="navelem"><a class="el" href="namespacetvm_1_1te.html">te</a></li><li class="navelem"><a class="el" href="classtvm_1_1te_1_1ScheduleContext.html">ScheduleContext</a></li> </ul>
+</div>
+</div><!-- top -->
+<div class="header">
+ <div class="headertitle">
+<div class="title">tvm::te::ScheduleContext Member List</div> </div>
+</div><!--header-->
+<div class="contents">
+
+<p>This is the complete list of members for <a class="el" href="classtvm_1_1te_1_1ScheduleContext.html">tvm::te::ScheduleContext</a>, including all inherited members.</p>
+<table class="directory">
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleContext.html#a10080b05885425a75e7f7281d3defb68">With< ScheduleContext ></a> class</td><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleContext.html">tvm::te::ScheduleContext</a></td><td class="entry"><span class="mlabel">friend</span></td></tr>
+</table></div><!-- contents -->
+<!-- start footer part -->
+<hr class="footer"/><address class="footer"><small>
+Generated by  <a href="http://www.doxygen.org/index.html">
+<img class="footer" src="doxygen.png" alt="doxygen"/>
+</a> 1.8.13
+</small></address>
+</body>
+</html>
diff --git a/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleContext.html b/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleContext.html
new file mode 100644
index 0000000000..3a36a7abe8
--- /dev/null
+++ b/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleContext.html
@@ -0,0 +1,126 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
+<meta http-equiv="X-UA-Compatible" content="IE=9"/>
+<meta name="generator" content="Doxygen 1.8.13"/>
+<meta name="viewport" content="width=device-width, initial-scale=1"/>
+<title>tvm: tvm::te::ScheduleContext Class Reference</title>
+<link href="tabs.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="jquery.js"></script>
+<script type="text/javascript" src="dynsections.js"></script>
+<link href="search/search.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="search/searchdata.js"></script>
+<script type="text/javascript" src="search/search.js"></script>
+<link href="doxygen.css" rel="stylesheet" type="text/css" />
+</head>
+<body>
+<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
+<div id="titlearea">
+<table cellspacing="0" cellpadding="0">
+ <tbody>
+ <tr style="height: 56px;">
+ <td id="projectalign" style="padding-left: 0.5em;">
+ <div id="projectname">tvm
+ </div>
+ </td>
+ </tr>
+ </tbody>
+</table>
+</div>
+<!-- end header part -->
+<!-- Generated by Doxygen 1.8.13 -->
+<script type="text/javascript">
+var searchBox = new SearchBox("searchBox", "search",false,'Search');
+</script>
+<script type="text/javascript" src="menudata.js"></script>
+<script type="text/javascript" src="menu.js"></script>
+<script type="text/javascript">
+$(function() {
+ initMenu('',true,false,'search.php','Search');
+ $(document).ready(function() { init_search(); });
+});
+</script>
+<div id="main-nav"></div>
+<!-- window showing the filter options -->
+<div id="MSearchSelectWindow"
+ onmouseover="return searchBox.OnSearchSelectShow()"
+ onmouseout="return searchBox.OnSearchSelectHide()"
+ onkeydown="return searchBox.OnSearchSelectKey(event)">
+</div>
+
+<!-- iframe showing the search results (closed by default) -->
+<div id="MSearchResultsWindow">
+<iframe src="javascript:void(0)" frameborder="0"
+ name="MSearchResults" id="MSearchResults">
+</iframe>
+</div>
+
+<div id="nav-path" class="navpath">
+ <ul>
+<li class="navelem"><a class="el" href="namespacetvm.html">tvm</a></li><li class="navelem"><a class="el" href="namespacetvm_1_1te.html">te</a></li><li class="navelem"><a class="el" href="classtvm_1_1te_1_1ScheduleContext.html">ScheduleContext</a></li> </ul>
+</div>
+</div><!-- top -->
+<div class="header">
+ <div class="summary">
+<a href="#friends">Friends</a> |
+<a href="classtvm_1_1te_1_1ScheduleContext-members.html">List of all members</a> </div>
+ <div class="headertitle">
+<div class="title">tvm::te::ScheduleContext Class Reference</div> </div>
+</div><!--header-->
+<div class="contents">
+
+<p>Context helper to collect debug information of <a class="el" href="classtvm_1_1te_1_1Schedule.html" title="Global schedule container For operations and all the operations they depend on. The schedule per Oper...">Schedule</a>.
+ <a href="classtvm_1_1te_1_1ScheduleContext.html#details">More...</a></p>
+
+<p><code>#include <<a class="el" href="te_2schedule_8h_source.html">schedule.h</a>></code></p>
+<div class="dynheader">
+Collaboration diagram for tvm::te::ScheduleContext:</div>
+<div class="dyncontent">
+<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1te_1_1ScheduleContext__coll__graph.svg" width="198" height="88"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
+</div>
+</div>
+<table class="memberdecls">
+<tr class="heading"><td colspan="2"><h2 class="groupheader"><a name="friends"></a>
+Friends</h2></td></tr>
+<tr class="memitem:a10080b05885425a75e7f7281d3defb68"><td class="memItemLeft" align="right" valign="top">class </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1te_1_1ScheduleContext.html#a10080b05885425a75e7f7281d3defb68">With< ScheduleContext ></a></td></tr>
+<tr class="separator:a10080b05885425a75e7f7281d3defb68"><td class="memSeparator" colspan="2"> </td></tr>
+</table>
+<a name="details" id="details"></a><h2 class="groupheader">Detailed Description</h2>
+<div class="textblock"><p>Context helper to collect debug information of <a class="el" href="classtvm_1_1te_1_1Schedule.html" title="Global schedule container For operations and all the operations they depend on. The schedule per Oper...">Schedule</a>. </p>
+<p>Attach With&lt;ScheduleContext&gt;(schedule_instance, primitive_name) inside the function body of schedule primitives to collect a snapshot of the schedule status and the corresponding primitive name. </p>
+</div><h2 class="groupheader">Friends And Related Function Documentation</h2>
+<a id="a10080b05885425a75e7f7281d3defb68"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#a10080b05885425a75e7f7281d3defb68">◆ </a></span>With< ScheduleContext ></h2>
+
+<div class="memitem">
+<div class="memproto">
+<table class="mlabels">
+ <tr>
+ <td class="mlabels-left">
+ <table class="memname">
+ <tr>
+ <td class="memname">friend class <a class="el" href="classtvm_1_1With.html">With</a>< <a class="el" href="classtvm_1_1te_1_1ScheduleContext.html">ScheduleContext</a> ></td>
+ </tr>
+ </table>
+ </td>
+ <td class="mlabels-right">
+<span class="mlabels"><span class="mlabel">friend</span></span> </td>
+ </tr>
+</table>
+</div><div class="memdoc">
+
+</div>
+</div>
+<hr/>The documentation for this class was generated from the following file:<ul>
+<li>include/tvm/te/<a class="el" href="te_2schedule_8h_source.html">schedule.h</a></li>
+</ul>
+</div><!-- contents -->
+<!-- start footer part -->
+<hr class="footer"/><address class="footer"><small>
+Generated by  <a href="http://www.doxygen.org/index.html">
+<img class="footer" src="doxygen.png" alt="doxygen"/>
+</a> 1.8.13
+</small></address>
+</body>
+</html>
diff --git a/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleContext__coll__graph.svg b/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleContext__coll__graph.svg
new file mode 100644
index 0000000000..886364b5fe
--- /dev/null
+++ b/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleContext__coll__graph.svg
@@ -0,0 +1,23 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
+ "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by graphviz version 2.40.1 (20161225.0304)
+ -->
+<!-- Title: tvm::te::ScheduleContext Pages: 1 -->
+<svg width="148pt" height="66pt"
+ viewBox="0.00 0.00 148.00 66.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 62)">
+<title>tvm::te::ScheduleContext</title>
+<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-62 144,-62 144,4 -4,4"/>
+<!-- Node1 -->
+<g id="node1" class="node">
+<title>Node1</title>
+<polygon fill="#bfbfbf" stroke="#000000" points="0,-.5 0,-57.5 140,-57.5 140,-.5 0,-.5"/>
+<text text-anchor="middle" x="70" y="-45.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::te::ScheduleContext</text>
+<polyline fill="none" stroke="#000000" points="0,-38.5 140,-38.5 "/>
+<text text-anchor="middle" x="70" y="-26.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="0,-19.5 140,-19.5 "/>
+<text text-anchor="middle" x="70" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+</g>
+</g>
+</svg>
diff --git a/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleNode-members.html b/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleNode-members.html
index 62533d43b5..169880671f 100644
--- a/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleNode-members.html
+++ b/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleNode-members.html
@@ -91,26 +91,29 @@ $(function() {
<tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#a2c2472f9cbb1d42bec661149fe801179">InitCache</a>()</td><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">tvm::te::ScheduleNode</a></td><td class="entry"></td></tr>
<tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#a2b6a92ec4b1c295604b55ff8e8c365e7">InvalidateCache</a>()</td><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">tvm::te::ScheduleNode</a></td><td class="entry"></td></tr>
<tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a90e90b3f4ba8a590baff78c75807bbc7">IsInstance</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a133436a9ec5c4a768b94102bf95a660b">Object</a>()</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ab7968feb6ad38ecaffc320e13819d826">Object</a>(const Object &other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#aa1612f69ea5b4225d4cda759cd517323">Object</a>(Object &&other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#adbc8bfb6812add2173dcc7a6adb85d5c">op2stage_cache_</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">tvm::te::ScheduleNode</a></td><td class="entry"></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a69c32fbd96181f5c21d2c878ab285e4f">operator=</a>(const Object &other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ae341e561272ff43cdcbc927bc29ac50d">operator=</a>(Object &&other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#a27b0f687f7b20fcc6416a49e041712d8">outputs</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">tvm::te::ScheduleNode</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#ab27491f6d746b79bf94d9736566224c6">keep_schedule_record</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">tvm::te::ScheduleNode</a></td><td class="entry"></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a133436a9ec5c4a768b94102bf95a660b">Object</a>()</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ab7968feb6ad38ecaffc320e13819d826">Object</a>(const Object &other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#aa1612f69ea5b4225d4cda759cd517323">Object</a>(Object &&other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#adbc8bfb6812add2173dcc7a6adb85d5c">op2stage_cache_</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">tvm::te::ScheduleNode</a></td><td class="entry"></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a69c32fbd96181f5c21d2c878ab285e4f">operator=</a>(const Object &other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ae341e561272ff43cdcbc927bc29ac50d">operator=</a>(Object &&other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#a27b0f687f7b20fcc6416a49e041712d8">outputs</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">tvm::te::ScheduleNode</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#aeddb87ac8fb45a6059e8ebb9659003f2">primitive_record</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">tvm::te::ScheduleNode</a></td><td class="entry"></td></tr>
<tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a0d492efee331e2239a093f4b2017c10f">ref_counter_</a></td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">protected</span></td></tr>
<tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a55549a6c23987890246248682560a03d">RefCounterType</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"></td></tr>
<tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ad94d79729ac85aa7c976e23d39066383">RuntimeTypeIndex</a>()</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">static</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#a612223aec2751cbd035a18c9e5453085">stage_map</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">tvm::te::ScheduleNode</a></td><td class="entry"></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#ab5649969db603d6b7b4d155c0d09cdd5">stages</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">tvm::te::ScheduleNode</a></td><td class="entry"></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#a6ded7c6c5dfc7b8525c8048fdd9475ad">TVM_DECLARE_FINAL_OBJECT_INFO</a>(ScheduleNode, Object)</td><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">tvm::te::ScheduleNode</a></td><td class="entry"></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a481f01923b14e1851ebd38506e9c66ea">type_index</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a4bfc2586cb55f2af47728187b3256255">type_index_</a></td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">protected</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a817ba6c23b7ee1821c48a75edf255a30">TypeIndex2Key</a>(uint32_t tindex)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a6ee32a02dd44257da105fbbe5d9c8622">TypeIndex2KeyHash</a>(uint32_t tindex)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a6841f97e06e6614dd7e82c6dd41b818a">TypeKey2Index</a>(const std::string &key)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#afd548730a6139d19fe24473ad66026d7">unique</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#a77fbc73cef9265d8ae817903564a6e44">VisitAttrs</a>(AttrVisitor *v)</td><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">tvm::te::ScheduleNode</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#a52983b1afd658ec3b885b3b076c6203d">schedule_record</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">tvm::te::ScheduleNode</a></td><td class="entry"></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#a612223aec2751cbd035a18c9e5453085">stage_map</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">tvm::te::ScheduleNode</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#ab5649969db603d6b7b4d155c0d09cdd5">stages</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">tvm::te::ScheduleNode</a></td><td class="entry"></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#a6ded7c6c5dfc7b8525c8048fdd9475ad">TVM_DECLARE_FINAL_OBJECT_INFO</a>(ScheduleNode, Object)</td><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">tvm::te::ScheduleNode</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a481f01923b14e1851ebd38506e9c66ea">type_index</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a4bfc2586cb55f2af47728187b3256255">type_index_</a></td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">protected</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a817ba6c23b7ee1821c48a75edf255a30">TypeIndex2Key</a>(uint32_t tindex)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a6ee32a02dd44257da105fbbe5d9c8622">TypeIndex2KeyHash</a>(uint32_t tindex)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a6841f97e06e6614dd7e82c6dd41b818a">TypeKey2Index</a>(const std::string &key)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#afd548730a6139d19fe24473ad66026d7">unique</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#a77fbc73cef9265d8ae817903564a6e44">VisitAttrs</a>(AttrVisitor *v)</td><td class="entry"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">tvm::te::ScheduleNode</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
</table></div><!-- contents -->
<!-- start footer part -->
<hr class="footer"/><address class="footer"><small>
diff --git a/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleNode.html b/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleNode.html
index 62549566b8..275c764853 100644
--- a/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleNode.html
+++ b/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleNode.html
@@ -79,13 +79,13 @@ $(function() {
<div class="dynheader">
Inheritance diagram for tvm::te::ScheduleNode:</div>
<div class="dyncontent">
-<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1te_1_1ScheduleNode__inherit__graph.svg" width="290" height="815"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
+<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1te_1_1ScheduleNode__inherit__graph.svg" width="290" height="859"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
</div>
</div>
<div class="dynheader">
Collaboration diagram for tvm::te::ScheduleNode:</div>
<div class="dyncontent">
-<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1te_1_1ScheduleNode__coll__graph.svg" width="896" height="1419"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
+<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1te_1_1ScheduleNode__coll__graph.svg" width="1490" height="1419"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
</div>
</div>
<table class="memberdecls">
@@ -147,6 +147,15 @@ Public Attributes</h2></td></tr>
<tr class="memitem:adbc8bfb6812add2173dcc7a6adb85d5c"><td class="memItemLeft" align="right" valign="top">std::unordered_map< const <a class="el" href="classtvm_1_1runtime_1_1Object.html">Object</a> *, <a class="el" href="classtvm_1_1te_1_1Stage.html">Stage</a> > </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#adbc8bfb6812add2173dcc7a6adb85d5c">op2stage_cache_</a></td></tr>
<tr class="memdesc:adbc8bfb6812add2173dcc7a6adb85d5c"><td class="mdescLeft"> </td><td class="mdescRight">Internal stage map to map internal ops to stages. This is created on demand and can be invalidated. <a href="#adbc8bfb6812add2173dcc7a6adb85d5c">More...</a><br /></td></tr>
<tr class="separator:adbc8bfb6812add2173dcc7a6adb85d5c"><td class="memSeparator" colspan="2"> </td></tr>
+<tr class="memitem:a52983b1afd658ec3b885b3b076c6203d"><td class="memItemLeft" align="right" valign="top"><a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>< <a class="el" href="classtvm_1_1te_1_1Schedule.html">Schedule</a> > </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#a52983b1afd658ec3b885b3b076c6203d">schedule_record</a></td></tr>
+<tr class="memdesc:a52983b1afd658ec3b885b3b076c6203d"><td class="mdescLeft"> </td><td class="mdescRight">List of all transformed schedules. Users can display the optimization strategy step by step via TEDD to check the order and effect of primitives. Set "te.keep_schedule_record" to true in the PassContext config to enable recording. <a href="#a52983b1afd658ec3b885b3b076c6203d">More...</a><br /></td></tr>
+<tr class="separator:a52983b1afd658ec3b885b3b076c6203d"><td class="memSeparator" colspan="2"> </td></tr>
+<tr class="memitem:aeddb87ac8fb45a6059e8ebb9659003f2"><td class="memItemLeft" align="right" valign="top"><a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>< <a class="el" href="classtvm_1_1runtime_1_1String.html">String</a> > </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#aeddb87ac8fb45a6059e8ebb9659003f2">primitive_record</a></td></tr>
+<tr class="memdesc:aeddb87ac8fb45a6059e8ebb9659003f2"><td class="mdescLeft"> </td><td class="mdescRight">List of all applied primitive names. <a href="#aeddb87ac8fb45a6059e8ebb9659003f2">More...</a><br /></td></tr>
+<tr class="separator:aeddb87ac8fb45a6059e8ebb9659003f2"><td class="memSeparator" colspan="2"> </td></tr>
+<tr class="memitem:ab27491f6d746b79bf94d9736566224c6"><td class="memItemLeft" align="right" valign="top"><a class="el" href="classtvm_1_1runtime_1_1Optional.html">Optional</a>< <a class="el" href="classtvm_1_1Bool.html">Bool</a> > </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#ab27491f6d746b79bf94d9736566224c6">keep_schedule_record</a></td></tr>
+<tr class="memdesc:ab27491f6d746b79bf94d9736566224c6"><td class="mdescLeft"> </td><td class="mdescRight">Flag to keep schedule record or not. <a href="#ab27491f6d746b79bf94d9736566224c6">More...</a><br /></td></tr>
+<tr class="separator:ab27491f6d746b79bf94d9736566224c6"><td class="memSeparator" colspan="2"> </td></tr>
</table><table class="memberdecls">
<tr class="heading"><td colspan="2"><h2 class="groupheader"><a name="pub-static-attribs"></a>
Static Public Attributes</h2></td></tr>
@@ -408,6 +417,22 @@ Additional Inherited Members</h2></td></tr>
<p>List of all stage groups. </p>
+</div>
+</div>
+<a id="ab27491f6d746b79bf94d9736566224c6"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#ab27491f6d746b79bf94d9736566224c6">◆ </a></span>keep_schedule_record</h2>
+
+<div class="memitem">
+<div class="memproto">
+ <table class="memname">
+ <tr>
+ <td class="memname"><a class="el" href="classtvm_1_1runtime_1_1Optional.html">Optional</a><<a class="el" href="classtvm_1_1Bool.html">Bool</a>> tvm::te::ScheduleNode::keep_schedule_record</td>
+ </tr>
+ </table>
+</div><div class="memdoc">
+
+<p>Flag to keep schedule record or not. </p>
+
</div>
</div>
<a id="adbc8bfb6812add2173dcc7a6adb85d5c"></a>
@@ -440,6 +465,38 @@ Additional Inherited Members</h2></td></tr>
<p>The output operations in original data flow graph. </p>
+</div>
+</div>
+<a id="aeddb87ac8fb45a6059e8ebb9659003f2"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#aeddb87ac8fb45a6059e8ebb9659003f2">◆ </a></span>primitive_record</h2>
+
+<div class="memitem">
+<div class="memproto">
+ <table class="memname">
+ <tr>
+ <td class="memname"><a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a><<a class="el" href="classtvm_1_1runtime_1_1String.html">String</a>> tvm::te::ScheduleNode::primitive_record</td>
+ </tr>
+ </table>
+</div><div class="memdoc">
+
+<p>List of all applied primitive names. </p>
+
+</div>
+</div>
+<a id="a52983b1afd658ec3b885b3b076c6203d"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#a52983b1afd658ec3b885b3b076c6203d">◆ </a></span>schedule_record</h2>
+
+<div class="memitem">
+<div class="memproto">
+ <table class="memname">
+ <tr>
+ <td class="memname"><a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a><<a class="el" href="classtvm_1_1te_1_1Schedule.html">Schedule</a>> tvm::te::ScheduleNode::schedule_record</td>
+ </tr>
+ </table>
+</div><div class="memdoc">
+
+<p>List of all transformed schedules. Users can display the optimization strategy step by step via TEDD to check the order and effect of primitives. Set "te.keep_schedule_record" to true in the PassContext config to enable recording. </p>
+
</div>
</div>
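The `schedule_record` / `keep_schedule_record` description above implies a usage pattern from Python; a minimal sketch, assuming the standard `tvm.te` schedule API and that the "te.keep_schedule_record" PassContext key enables recording as documented (the attribute accesses at the end are illustrative, relying on these fields being exposed through `VisitAttrs`):

```python
# Hedged sketch: enable TE schedule recording via PassContext.
# "te.keep_schedule_record" is the config key named in the docs above;
# whether schedules are snapshotted per primitive is assumed, not verified here.
import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")

with tvm.transform.PassContext(config={"te.keep_schedule_record": True}):
    s = te.create_schedule(B.op)
    # Apply a couple of primitives; each should be captured in
    # primitive_record, with the corresponding snapshots in schedule_record.
    xo, xi = s[B].split(B.op.axis[0], factor=4)
    s[B].unroll(xi)
    # Inspect the recorded history (attribute names taken from the
    # ScheduleNode members documented above).
    print(s.primitive_record)
    print(len(s.schedule_record))
```

The recorded snapshots can then be fed to TEDD one by one to visualize how each primitive reshaped the schedule.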
<a id="a612223aec2751cbd035a18c9e5453085"></a>
diff --git a/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleNode__coll__graph.svg b/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleNode__coll__graph.svg
index 5f95b599b8..e77046517b 100644
--- a/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleNode__coll__graph.svg
+++ b/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleNode__coll__graph.svg
@@ -4,26 +4,26 @@
<!-- Generated by graphviz version 2.40.1 (20161225.0304)
-->
<!-- Title: tvm::te::ScheduleNode Pages: 1 -->
-<svg width="672pt" height="1064pt"
- viewBox="0.00 0.00 671.50 1064.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<svg width="1117pt" height="1064pt"
+ viewBox="0.00 0.00 1116.50 1064.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 1060)">
<title>tvm::te::ScheduleNode</title>
-<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-1060 667.5,-1060 667.5,4 -4,4"/>
+<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-1060 1112.5,-1060 1112.5,4 -4,4"/>
<!-- Node2 -->
<g id="node1" class="node">
<title>Node2</title>
-<polygon fill="#bfbfbf" stroke="#000000" points="289,-.5 289,-123.5 498,-123.5 498,-.5 289,-.5"/>
-<text text-anchor="middle" x="393.5" y="-111.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::te::ScheduleNode</text>
-<polyline fill="none" stroke="#000000" points="289,-104.5 498,-104.5 "/>
-<text text-anchor="start" x="297" y="-92.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ op2stage_cache_</text>
-<text text-anchor="start" x="297" y="-81.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
-<polyline fill="none" stroke="#000000" points="289,-74.5 498,-74.5 "/>
-<text text-anchor="start" x="297" y="-62.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ VisitAttrs()</text>
-<text text-anchor="start" x="297" y="-51.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ InitCache()</text>
-<text text-anchor="start" x="297" y="-40.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ InvalidateCache()</text>
-<text text-anchor="start" x="297" y="-29.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Contain()</text>
-<text text-anchor="start" x="297" y="-18.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Contain()</text>
-<text text-anchor="start" x="297" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DECLARE_FINAL_OBJECT_INFO()</text>
+<polygon fill="#bfbfbf" stroke="#000000" points="495,-.5 495,-123.5 704,-123.5 704,-.5 495,-.5"/>
+<text text-anchor="middle" x="599.5" y="-111.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::te::ScheduleNode</text>
+<polyline fill="none" stroke="#000000" points="495,-104.5 704,-104.5 "/>
+<text text-anchor="start" x="503" y="-92.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ op2stage_cache_</text>
+<text text-anchor="start" x="503" y="-81.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
+<polyline fill="none" stroke="#000000" points="495,-74.5 704,-74.5 "/>
+<text text-anchor="start" x="503" y="-62.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ VisitAttrs()</text>
+<text text-anchor="start" x="503" y="-51.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ InitCache()</text>
+<text text-anchor="start" x="503" y="-40.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ InvalidateCache()</text>
+<text text-anchor="start" x="503" y="-29.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Contain()</text>
+<text text-anchor="start" x="503" y="-18.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Contain()</text>
+<text text-anchor="start" x="503" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DECLARE_FINAL_OBJECT_INFO()</text>
</g>
<!-- Node3 -->
<g id="node2" class="node">
@@ -71,8 +71,8 @@
<!-- Node3->Node2 -->
<g id="edge1" class="edge">
<title>Node3->Node2</title>
-<path fill="none" stroke="#191970" d="M189.9694,-260.1968C213.2333,-233.9962 238.3188,-206.6484 262.5,-182 281.7235,-162.4051 303.414,-141.931 323.5327,-123.5573"/>
-<polygon fill="none" stroke="#191970" points="187.1421,-258.1103 183.1359,-267.9185 192.3842,-262.7494 187.1421,-258.1103"/>
+<path fill="none" stroke="#191970" d="M190.1504,-244.4858C211.7749,-221.4597 235.9202,-199.3575 261.5,-182 332.3487,-133.9247 424.5989,-102.8744 494.8217,-84.4296"/>
+<polygon fill="none" stroke="#191970" points="187.4335,-242.2688 183.2193,-251.9895 192.5756,-247.0185 187.4335,-242.2688"/>
</g>
<!-- Node3->Node3 -->
<g id="edge2" class="edge">
@@ -108,44 +108,44 @@
<!-- Node4->Node2 -->
<g id="edge3" class="edge">
<title>Node4->Node2</title>
-<path fill="none" stroke="#404040" d="M346.1104,-286.8162C355.2377,-243.1564 366.4121,-189.821 376.5,-142 376.9529,-139.8531 377.4131,-137.6738 377.878,-135.474"/>
-<polygon fill="none" stroke="#404040" points="377.9105,-135.3198 375.2395,-128.6216 380.3951,-123.5798 383.0661,-130.278 377.9105,-135.3198"/>
-<text text-anchor="middle" x="399" y="-156" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +groups</text>
-<text text-anchor="middle" x="399" y="-145" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+stages</text>
+<path fill="none" stroke="#404040" d="M343.6423,-286.7612C353.4811,-251.2775 369.06,-211.8775 393.5,-182 417.6789,-152.4417 451.1874,-128.6023 484.16,-110.1988"/>
+<polygon fill="none" stroke="#404040" points="484.2837,-110.1318 487.653,-103.7563 494.834,-104.4143 491.4647,-110.7899 484.2837,-110.1318"/>
+<text text-anchor="middle" x="453" y="-156" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +groups</text>
+<text text-anchor="middle" x="453" y="-145" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+stages</text>
</g>
<!-- Node5 -->
<g id="node4" class="node">
<title>Node5</title>
<g id="a_node4"><a xlink:href="classtvm_1_1runtime_1_1ObjectRef.html" target="_top" xlink:title="Base class of all object reference. ">
-<polygon fill="#ffffff" stroke="#000000" points="392.5,-607.5 392.5,-829.5 526.5,-829.5 526.5,-607.5 392.5,-607.5"/>
-<text text-anchor="middle" x="459.5" y="-817.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectRef</text>
-<polyline fill="none" stroke="#000000" points="392.5,-810.5 526.5,-810.5 "/>
-<text text-anchor="start" x="400.5" y="-798.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
-<polyline fill="none" stroke="#000000" points="392.5,-791.5 526.5,-791.5 "/>
-<text text-anchor="start" x="400.5" y="-779.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
-<text text-anchor="start" x="400.5" y="-768.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
-<text text-anchor="start" x="400.5" y="-757.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ same_as()</text>
-<text text-anchor="start" x="400.5" y="-746.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator==()</text>
-<text text-anchor="start" x="400.5" y="-735.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator!=()</text>
-<text text-anchor="start" x="400.5" y="-724.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator<()</text>
-<text text-anchor="start" x="400.5" y="-713.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ defined()</text>
-<text text-anchor="start" x="400.5" y="-702.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
-<text text-anchor="start" x="400.5" y="-691.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
-<text text-anchor="start" x="400.5" y="-680.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
-<text text-anchor="start" x="400.5" y="-669.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ use_count()</text>
-<text text-anchor="start" x="400.5" y="-658.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ as()</text>
-<text text-anchor="start" x="400.5" y="-647.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># get_mutable()</text>
-<text text-anchor="start" x="400.5" y="-636.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DowncastNoCheck()</text>
-<text text-anchor="start" x="400.5" y="-625.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># FFIClearAfterMove()</text>
-<text text-anchor="start" x="400.5" y="-614.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetDataPtr()</text>
+<polygon fill="#ffffff" stroke="#000000" points="607.5,-607.5 607.5,-829.5 741.5,-829.5 741.5,-607.5 607.5,-607.5"/>
+<text text-anchor="middle" x="674.5" y="-817.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectRef</text>
+<polyline fill="none" stroke="#000000" points="607.5,-810.5 741.5,-810.5 "/>
+<text text-anchor="start" x="615.5" y="-798.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
+<polyline fill="none" stroke="#000000" points="607.5,-791.5 741.5,-791.5 "/>
+<text text-anchor="start" x="615.5" y="-779.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
+<text text-anchor="start" x="615.5" y="-768.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
+<text text-anchor="start" x="615.5" y="-757.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ same_as()</text>
+<text text-anchor="start" x="615.5" y="-746.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator==()</text>
+<text text-anchor="start" x="615.5" y="-735.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator!=()</text>
+<text text-anchor="start" x="615.5" y="-724.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator<()</text>
+<text text-anchor="start" x="615.5" y="-713.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ defined()</text>
+<text text-anchor="start" x="615.5" y="-702.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
+<text text-anchor="start" x="615.5" y="-691.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
+<text text-anchor="start" x="615.5" y="-680.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
+<text text-anchor="start" x="615.5" y="-669.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ use_count()</text>
+<text text-anchor="start" x="615.5" y="-658.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ as()</text>
+<text text-anchor="start" x="615.5" y="-647.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># get_mutable()</text>
+<text text-anchor="start" x="615.5" y="-636.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DowncastNoCheck()</text>
+<text text-anchor="start" x="615.5" y="-625.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># FFIClearAfterMove()</text>
+<text text-anchor="start" x="615.5" y="-614.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetDataPtr()</text>
</a>
</g>
</g>
<!-- Node5->Node4 -->
<g id="edge4" class="edge">
<title>Node5->Node4</title>
-<path fill="none" stroke="#191970" d="M404.8679,-598.0252C400.9202,-588.5844 397.0862,-579.1624 393.5,-570 380.2738,-536.2084 367.1096,-498.5465 356.0576,-465.4629"/>
-<polygon fill="none" stroke="#191970" points="401.7374,-599.6095 408.8485,-607.4633 408.1872,-596.8891 401.7374,-599.6095"/>
+<path fill="none" stroke="#191970" d="M597.8539,-695.7191C534.7894,-673.2812 447.3186,-633.0875 393.5,-570 368.3327,-540.4982 352.6318,-500.9489 342.8987,-465.2564"/>
+<polygon fill="none" stroke="#191970" points="596.7705,-699.0479 607.3653,-699.0349 599.0748,-692.438 596.7705,-699.0479"/>
</g>
<!-- Node7 -->
<g id="node6" class="node">
@@ -175,8 +175,8 @@
<!-- Node5->Node7 -->
<g id="edge7" class="edge">
<title>Node5->Node7</title>
-<path fill="none" stroke="#191970" d="M459.5,-597.1125C459.5,-555.5481 459.5,-509.6904 459.5,-470.5637"/>
-<polygon fill="none" stroke="#191970" points="456.0001,-597.3 459.5,-607.3001 463.0001,-597.3001 456.0001,-597.3"/>
+<path fill="none" stroke="#191970" d="M599.5614,-656.0824C573.6178,-631.4278 546.1502,-601.4736 526.5,-570 507.6291,-539.7746 493.2777,-503.5608 482.7944,-470.649"/>
+<polygon fill="none" stroke="#191970" points="597.4986,-658.9457 607.1908,-663.225 602.2827,-653.8356 597.4986,-658.9457"/>
</g>
<!-- Node8 -->
<g id="node7" class="node">
@@ -205,53 +205,164 @@
<!-- Node5->Node8 -->
<g id="edge9" class="edge">
<title>Node5->Node8</title>
-<path fill="none" stroke="#191970" d="M514.6764,-597.9219C518.7486,-588.5047 522.7328,-579.1163 526.5,-570 540.5185,-536.0766 554.9549,-498.3879 567.2508,-465.3198"/>
-<polygon fill="none" stroke="#191970" points="511.3591,-596.7736 510.5796,-607.3397 517.7781,-599.5659 511.3591,-596.7736"/>
+<path fill="none" stroke="#191970" d="M648.0046,-597.5043C638.4664,-553.9464 627.886,-505.6293 619.0472,-465.2656"/>
+<polygon fill="none" stroke="#191970" points="644.5915,-598.2802 650.1496,-607.3001 651.4295,-596.7828 644.5915,-598.2802"/>
+</g>
+<!-- Node9 -->
+<g id="node8" class="node">
+<title>Node9</title>
+<g id="a_node8"><a xlink:href="classtvm_1_1runtime_1_1Array.html" target="_top" xlink:title="{tvm::runtime::Array\l\< tvm::runtime::String \>\n||+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ operator=()\l+ operator=()\land 25 more...\l}">
+<polygon fill="#ffffff" stroke="#000000" points="682,-287 682,-465 819,-465 819,-287 682,-287"/>
+<text text-anchor="start" x="690" y="-453" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Array</text>
+<text text-anchor="middle" x="750.5" y="-442" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::runtime::String ></text>
+<polyline fill="none" stroke="#000000" points="682,-435 819,-435 "/>
+<text text-anchor="middle" x="750.5" y="-423" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="682,-416 819,-416 "/>
+<text text-anchor="start" x="690" y="-404" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="690" y="-393" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="690" y="-382" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="690" y="-371" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="690" y="-360" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="690" y="-349" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="690" y="-338" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="690" y="-327" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="690" y="-316" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="690" y="-305" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="690" y="-294" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 25 more...</text>
+</a>
+</g>
+</g>
+<!-- Node5->Node9 -->
+<g id="edge11" class="edge">
+<title>Node5->Node9</title>
+<path fill="none" stroke="#191970" d="M701.3487,-597.5043C711.0141,-553.9464 721.7355,-505.6293 730.6922,-465.2656"/>
+<polygon fill="none" stroke="#191970" points="697.9245,-596.7793 699.175,-607.3001 704.7583,-598.2957 697.9245,-596.7793"/>
+</g>
+<!-- Node10 -->
+<g id="node9" class="node">
+<title>Node10</title>
+<g id="a_node9"><a xlink:href="classtvm_1_1runtime_1_1Array.html" target="_top" xlink:title="{tvm::runtime::Array\l\< tvm::te::Schedule \>\n||+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ operator=()\l+ operator=()\land 25 more...\l}">
+<polygon fill="#ffffff" stroke="#000000" points="837,-287 837,-465 962,-465 962,-287 837,-287"/>
+<text text-anchor="start" x="845" y="-453" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Array</text>
+<text text-anchor="middle" x="899.5" y="-442" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::te::Schedule ></text>
+<polyline fill="none" stroke="#000000" points="837,-435 962,-435 "/>
+<text text-anchor="middle" x="899.5" y="-423" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="837,-416 962,-416 "/>
+<text text-anchor="start" x="845" y="-404" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="845" y="-393" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="845" y="-382" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="845" y="-371" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="845" y="-360" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="845" y="-349" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="845" y="-338" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="845" y="-327" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="845" y="-316" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="845" y="-305" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="845" y="-294" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 25 more...</text>
+</a>
+</g>
+</g>
+<!-- Node5->Node10 -->
+<g id="edge13" class="edge">
+<title>Node5->Node10</title>
+<path fill="none" stroke="#191970" d="M749.3648,-658.9729C777.1625,-633.872 807.1026,-602.8591 828.5,-570 849.1476,-538.2923 864.8353,-499.7708 876.1445,-465.4367"/>
+<polygon fill="none" stroke="#191970" points="746.7331,-656.6296 741.5901,-665.8925 751.387,-661.8586 746.7331,-656.6296"/>
+</g>
+<!-- Node11 -->
+<g id="node10" class="node">
+<title>Node11</title>
+<g id="a_node10"><a xlink:href="classtvm_1_1runtime_1_1Optional.html" target="_top" xlink:title="{tvm::runtime::Optional\l\< tvm::Bool \>\n|+ _type_is_nullable\l|+ Optional()\l+ Optional()\l+ Optional()\l+ Optional()\l+ Optional()\l+ Optional()\l+ Optional()\l+ Optional()\l+ operator=()\l+ operator=()\land 15 more...\l}">
+<polygon fill="#ffffff" stroke="#000000" points="980.5,-287 980.5,-465 1108.5,-465 1108.5,-287 980.5,-287"/>
+<text text-anchor="start" x="988.5" y="-453" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Optional</text>
+<text text-anchor="middle" x="1044.5" y="-442" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::Bool ></text>
+<polyline fill="none" stroke="#000000" points="980.5,-435 1108.5,-435 "/>
+<text text-anchor="start" x="988.5" y="-423" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
+<polyline fill="none" stroke="#000000" points="980.5,-416 1108.5,-416 "/>
+<text text-anchor="start" x="988.5" y="-404" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
+<text text-anchor="start" x="988.5" y="-393" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
+<text text-anchor="start" x="988.5" y="-382" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
+<text text-anchor="start" x="988.5" y="-371" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
+<text text-anchor="start" x="988.5" y="-360" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
+<text text-anchor="start" x="988.5" y="-349" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
+<text text-anchor="start" x="988.5" y="-338" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
+<text text-anchor="start" x="988.5" y="-327" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
+<text text-anchor="start" x="988.5" y="-316" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="988.5" y="-305" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="988.5" y="-294" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 15 more...</text>
+</a>
+</g>
+</g>
+<!-- Node5->Node11 -->
+<g id="edge15" class="edge">
+<title>Node5->Node11</title>
+<path fill="none" stroke="#191970" d="M751.4004,-697.5348C817.9252,-675.7527 912.4583,-635.5084 971.5,-570 997.8812,-540.7294 1015.0499,-501.0052 1026.0606,-465.1255"/>
+<polygon fill="none" stroke="#191970" points="750.2492,-694.2283 741.7961,-700.6154 752.3872,-700.8939 750.2492,-694.2283"/>
</g>
<!-- Node6 -->
<g id="node5" class="node">
<title>Node6</title>
<g id="a_node5"><a xlink:href="classtvm_1_1runtime_1_1ObjectPtr.html" target="_top" xlink:title="{tvm::runtime::ObjectPtr\l\< tvm::runtime::Object \>\n||+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ~ObjectPtr()\l+ swap()\l+ get()\l+ operator-\>()\land 11 more...\l}">
-<polygon fill="#ffffff" stroke="#000000" points="389.5,-877.5 389.5,-1055.5 529.5,-1055.5 529.5,-877.5 389.5,-877.5"/>
-<text text-anchor="start" x="397.5" y="-1043.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectPtr</text>
-<text text-anchor="middle" x="459.5" y="-1032.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::runtime::Object ></text>
-<polyline fill="none" stroke="#000000" points="389.5,-1025.5 529.5,-1025.5 "/>
-<text text-anchor="middle" x="459.5" y="-1013.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="389.5,-1006.5 529.5,-1006.5 "/>
-<text text-anchor="start" x="397.5" y="-994.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
-<text text-anchor="start" x="397.5" y="-983.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
-<text text-anchor="start" x="397.5" y="-972.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
-<text text-anchor="start" x="397.5" y="-961.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
-<text text-anchor="start" x="397.5" y="-950.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
-<text text-anchor="start" x="397.5" y="-939.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
-<text text-anchor="start" x="397.5" y="-928.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ~ObjectPtr()</text>
-<text text-anchor="start" x="397.5" y="-917.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ swap()</text>
-<text text-anchor="start" x="397.5" y="-906.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
-<text text-anchor="start" x="397.5" y="-895.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
-<text text-anchor="start" x="397.5" y="-884.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 11 more...</text>
+<polygon fill="#ffffff" stroke="#000000" points="604.5,-877.5 604.5,-1055.5 744.5,-1055.5 744.5,-877.5 604.5,-877.5"/>
+<text text-anchor="start" x="612.5" y="-1043.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectPtr</text>
+<text text-anchor="middle" x="674.5" y="-1032.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::runtime::Object ></text>
+<polyline fill="none" stroke="#000000" points="604.5,-1025.5 744.5,-1025.5 "/>
+<text text-anchor="middle" x="674.5" y="-1013.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="604.5,-1006.5 744.5,-1006.5 "/>
+<text text-anchor="start" x="612.5" y="-994.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
+<text text-anchor="start" x="612.5" y="-983.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
+<text text-anchor="start" x="612.5" y="-972.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
+<text text-anchor="start" x="612.5" y="-961.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
+<text text-anchor="start" x="612.5" y="-950.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
+<text text-anchor="start" x="612.5" y="-939.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
+<text text-anchor="start" x="612.5" y="-928.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ~ObjectPtr()</text>
+<text text-anchor="start" x="612.5" y="-917.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ swap()</text>
+<text text-anchor="start" x="612.5" y="-906.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
+<text text-anchor="start" x="612.5" y="-895.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
+<text text-anchor="start" x="612.5" y="-884.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 11 more...</text>
</a>
</g>
</g>
<!-- Node6->Node5 -->
<g id="edge5" class="edge">
<title>Node6->Node5</title>
-<path fill="none" stroke="#404040" d="M459.5,-877.3167C459.5,-865.8765 459.5,-854.0062 459.5,-842.1402"/>
-<polygon fill="none" stroke="#404040" points="459.5001,-841.7944 455.5,-835.7944 459.5,-829.7944 463.5,-835.7943 459.5001,-841.7944"/>
-<text text-anchor="middle" x="479" y="-851" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> #data_</text>
+<path fill="none" stroke="#404040" d="M674.5,-877.3167C674.5,-865.8765 674.5,-854.0062 674.5,-842.1402"/>
+<polygon fill="none" stroke="#404040" points="674.5001,-841.7944 670.5,-835.7944 674.5,-829.7944 678.5,-835.7943 674.5001,-841.7944"/>
+<text text-anchor="middle" x="694" y="-851" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> #data_</text>
</g>
<!-- Node7->Node2 -->
<g id="edge6" class="edge">
<title>Node7->Node2</title>
-<path fill="none" stroke="#404040" d="M447.979,-281.2891C441.777,-238.4003 433.0042,-187.2747 421.5,-142 420.953,-139.8474 420.3732,-137.6705 419.7663,-135.4796"/>
-<polygon fill="none" stroke="#404040" points="419.6844,-135.201 414.1528,-130.5748 416.2956,-123.6894 421.8272,-128.3156 419.6844,-135.201"/>
-<text text-anchor="middle" x="458.5" y="-150.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +stage_map</text>
+<path fill="none" stroke="#404040" d="M478.0654,-281.2242C489.1104,-237.1361 505.713,-185.0543 529.5,-142 530.9837,-139.3145 532.57,-136.6403 534.2391,-133.9874"/>
+<polygon fill="none" stroke="#404040" points="534.3426,-133.8338 534.3774,-126.6227 541.0471,-123.8813 541.0123,-131.0923 534.3426,-133.8338"/>
+<text text-anchor="middle" x="562.5" y="-150.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +stage_map</text>
</g>
<!-- Node8->Node2 -->
<g id="edge8" class="edge">
<title>Node8->Node2</title>
-<path fill="none" stroke="#404040" d="M576.8347,-286.9889C560.8049,-239.0302 534.994,-181.7234 495.5,-142 492.1158,-138.5961 488.5875,-135.2041 484.9625,-131.8456"/>
-<polygon fill="none" stroke="#404040" points="484.6984,-131.6079 477.563,-130.5661 475.7802,-123.5789 482.9156,-124.6206 484.6984,-131.6079"/>
-<text text-anchor="middle" x="538" y="-150.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +outputs</text>
+<path fill="none" stroke="#404040" d="M599.5,-286.8601C599.5,-239.1936 599.5,-181.1931 599.5,-136.0383"/>
+<polygon fill="none" stroke="#404040" points="599.5001,-135.9601 595.5,-129.9602 599.5,-123.9601 603.5,-129.9601 599.5001,-135.9601"/>
+<text text-anchor="middle" x="624" y="-150.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +outputs</text>
+</g>
+<!-- Node9->Node2 -->
+<g id="edge10" class="edge">
+<title>Node9->Node2</title>
+<path fill="none" stroke="#404040" d="M719.3171,-286.782C702.1492,-242.0265 678.9621,-187.7511 652.5,-142 650.9673,-139.35 649.3815,-136.6683 647.7582,-133.973"/>
+<polygon fill="none" stroke="#404040" points="647.6326,-133.7688 641.0815,-130.7549 641.3439,-123.5486 647.895,-126.5625 647.6326,-133.7688"/>
+<text text-anchor="middle" x="707.5" y="-150.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +primitive_record</text>
+</g>
+<!-- Node10->Node2 -->
+<g id="edge12" class="edge">
+<title>Node10->Node2</title>
+<path fill="none" stroke="#404040" d="M881.0441,-286.5397C870.2887,-251.0212 853.6964,-211.6552 828.5,-182 798.6171,-146.829 755.7961,-120.5521 715.251,-101.6986"/>
+<polygon fill="none" stroke="#404040" points="715.1065,-101.6335 707.9924,-102.8124 704.168,-96.6989 711.2821,-95.52 715.1065,-101.6335"/>
+<text text-anchor="middle" x="854" y="-150.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +schedule_record</text>
+</g>
+<!-- Node11->Node2 -->
+<g id="edge14" class="edge">
+<title>Node11->Node2</title>
+<path fill="none" stroke="#404040" d="M1024.7593,-286.7598C1013.5807,-251.2758 996.6381,-211.8761 971.5,-182 938.157,-142.3725 812.3334,-107.3406 716.1608,-85.5113"/>
+<polygon fill="none" stroke="#404040" points="715.9998,-85.4753 709.2699,-88.0654 704.2906,-82.8494 711.0205,-80.2593 715.9998,-85.4753"/>
+<text text-anchor="middle" x="1007" y="-150.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +keep_schedule_record</text>
</g>
</g>
</svg>
diff --git a/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleNode__inherit__graph.svg b/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleNode__inherit__graph.svg
index 701c5707bd..bafae5417b 100644
--- a/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleNode__inherit__graph.svg
+++ b/docs/reference/api/doxygen/classtvm_1_1te_1_1ScheduleNode__inherit__graph.svg
@@ -4,22 +4,25 @@
<!-- Generated by graphviz version 2.40.1 (20161225.0304)
-->
<!-- Title: tvm::te::ScheduleNode Pages: 1 -->
-<svg width="217pt" height="611pt"
- viewBox="0.00 0.00 217.00 611.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
-<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 607)">
+<svg width="217pt" height="644pt"
+ viewBox="0.00 0.00 217.00 644.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 640)">
<title>tvm::te::ScheduleNode</title>
-<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-607 213,-607 213,4 -4,4"/>
+<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-640 213,-640 213,4 -4,4"/>
<!-- Node0 -->
<g id="node1" class="node">
<title>Node0</title>
-<polygon fill="#bfbfbf" stroke="#000000" points="0,-.5 0,-167.5 209,-167.5 209,-.5 0,-.5"/>
-<text text-anchor="middle" x="104.5" y="-155.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::te::ScheduleNode</text>
-<polyline fill="none" stroke="#000000" points="0,-148.5 209,-148.5 "/>
-<text text-anchor="start" x="8" y="-136.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ outputs</text>
-<text text-anchor="start" x="8" y="-125.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ stages</text>
-<text text-anchor="start" x="8" y="-114.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ groups</text>
-<text text-anchor="start" x="8" y="-103.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ stage_map</text>
-<text text-anchor="start" x="8" y="-92.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ op2stage_cache_</text>
+<polygon fill="#bfbfbf" stroke="#000000" points="0,-.5 0,-200.5 209,-200.5 209,-.5 0,-.5"/>
+<text text-anchor="middle" x="104.5" y="-188.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::te::ScheduleNode</text>
+<polyline fill="none" stroke="#000000" points="0,-181.5 209,-181.5 "/>
+<text text-anchor="start" x="8" y="-169.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ outputs</text>
+<text text-anchor="start" x="8" y="-158.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ stages</text>
+<text text-anchor="start" x="8" y="-147.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ groups</text>
+<text text-anchor="start" x="8" y="-136.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ stage_map</text>
+<text text-anchor="start" x="8" y="-125.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ op2stage_cache_</text>
+<text text-anchor="start" x="8" y="-114.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ schedule_record</text>
+<text text-anchor="start" x="8" y="-103.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ primitive_record</text>
+<text text-anchor="start" x="8" y="-92.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ keep_schedule_record</text>
<text text-anchor="start" x="8" y="-81.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
<polyline fill="none" stroke="#000000" points="0,-74.5 209,-74.5 "/>
<text text-anchor="start" x="8" y="-62.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ VisitAttrs()</text>
@@ -33,51 +36,51 @@
<g id="node2" class="node">
<title>Node1</title>
<g id="a_node2"><a xlink:href="classtvm_1_1runtime_1_1Object.html" target="_top" xlink:title="base class of all object containers. ">
-<polygon fill="#ffffff" stroke="#000000" points="13,-204.5 13,-602.5 196,-602.5 196,-204.5 13,-204.5"/>
-<text text-anchor="middle" x="104.5" y="-590.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Object</text>
-<polyline fill="none" stroke="#000000" points="13,-583.5 196,-583.5 "/>
-<text text-anchor="start" x="21" y="-571.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
-<text text-anchor="start" x="21" y="-560.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_final</text>
-<text text-anchor="start" x="21" y="-549.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_child_slots</text>
-<text text-anchor="start" x="21" y="-538.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_child_slots_can</text>
-<text text-anchor="start" x="21" y="-527.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_overflow</text>
-<text text-anchor="start" x="21" y="-516.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_visit</text>
-<text text-anchor="start" x="21" y="-505.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_attrs</text>
-<text text-anchor="start" x="21" y="-494.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_sequal</text>
-<text text-anchor="start" x="21" y="-483.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
-<text text-anchor="start" x="21" y="-472.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_shash</text>
-<text text-anchor="start" x="21" y="-461.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
-<text text-anchor="start" x="21" y="-450.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_index</text>
-<text text-anchor="start" x="21" y="-439.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># type_index_</text>
-<text text-anchor="start" x="21" y="-428.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># ref_counter_</text>
-<text text-anchor="start" x="21" y="-417.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># deleter_</text>
-<polyline fill="none" stroke="#000000" points="13,-410.5 196,-410.5 "/>
-<text text-anchor="start" x="21" y="-398.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ type_index()</text>
-<text text-anchor="start" x="21" y="-387.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetTypeKey()</text>
-<text text-anchor="start" x="21" y="-376.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetTypeKeyHash()</text>
-<text text-anchor="start" x="21" y="-365.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ IsInstance()</text>
-<text text-anchor="start" x="21" y="-354.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
-<text text-anchor="start" x="21" y="-343.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
-<text text-anchor="start" x="21" y="-332.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
-<text text-anchor="start" x="21" y="-321.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
-<text text-anchor="start" x="21" y="-310.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="21" y="-299.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="21" y="-288.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeIndex2Key()</text>
-<text text-anchor="start" x="21" y="-277.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeIndex2KeyHash()</text>
-<text text-anchor="start" x="21" y="-266.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeKey2Index()</text>
-<text text-anchor="start" x="21" y="-255.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _GetOrAllocRuntimeTypeIndex()</text>
-<text text-anchor="start" x="21" y="-244.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ RuntimeTypeIndex()</text>
-<text text-anchor="start" x="21" y="-233.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># IncRef()</text>
-<text text-anchor="start" x="21" y="-222.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DecRef()</text>
-<text text-anchor="start" x="21" y="-211.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetOrAllocRuntimeTypeIndex()</text>
+<polygon fill="#ffffff" stroke="#000000" points="13,-237.5 13,-635.5 196,-635.5 196,-237.5 13,-237.5"/>
+<text text-anchor="middle" x="104.5" y="-623.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Object</text>
+<polyline fill="none" stroke="#000000" points="13,-616.5 196,-616.5 "/>
+<text text-anchor="start" x="21" y="-604.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
+<text text-anchor="start" x="21" y="-593.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_final</text>
+<text text-anchor="start" x="21" y="-582.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_child_slots</text>
+<text text-anchor="start" x="21" y="-571.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_child_slots_can</text>
+<text text-anchor="start" x="21" y="-560.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_overflow</text>
+<text text-anchor="start" x="21" y="-549.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_visit</text>
+<text text-anchor="start" x="21" y="-538.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_attrs</text>
+<text text-anchor="start" x="21" y="-527.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_sequal</text>
+<text text-anchor="start" x="21" y="-516.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
+<text text-anchor="start" x="21" y="-505.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_shash</text>
+<text text-anchor="start" x="21" y="-494.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
+<text text-anchor="start" x="21" y="-483.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_index</text>
+<text text-anchor="start" x="21" y="-472.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># type_index_</text>
+<text text-anchor="start" x="21" y="-461.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># ref_counter_</text>
+<text text-anchor="start" x="21" y="-450.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># deleter_</text>
+<polyline fill="none" stroke="#000000" points="13,-443.5 196,-443.5 "/>
+<text text-anchor="start" x="21" y="-431.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ type_index()</text>
+<text text-anchor="start" x="21" y="-420.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetTypeKey()</text>
+<text text-anchor="start" x="21" y="-409.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetTypeKeyHash()</text>
+<text text-anchor="start" x="21" y="-398.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ IsInstance()</text>
+<text text-anchor="start" x="21" y="-387.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
+<text text-anchor="start" x="21" y="-376.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
+<text text-anchor="start" x="21" y="-365.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
+<text text-anchor="start" x="21" y="-354.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
+<text text-anchor="start" x="21" y="-343.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="21" y="-332.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="21" y="-321.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeIndex2Key()</text>
+<text text-anchor="start" x="21" y="-310.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeIndex2KeyHash()</text>
+<text text-anchor="start" x="21" y="-299.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeKey2Index()</text>
+<text text-anchor="start" x="21" y="-288.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _GetOrAllocRuntimeTypeIndex()</text>
+<text text-anchor="start" x="21" y="-277.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ RuntimeTypeIndex()</text>
+<text text-anchor="start" x="21" y="-266.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># IncRef()</text>
+<text text-anchor="start" x="21" y="-255.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DecRef()</text>
+<text text-anchor="start" x="21" y="-244.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetOrAllocRuntimeTypeIndex()</text>
</a>
</g>
</g>
<!-- Node1->Node0 -->
<g id="edge1" class="edge">
<title>Node1->Node0</title>
-<path fill="none" stroke="#191970" d="M104.5,-194.3328C104.5,-185.1966 104.5,-176.2996 104.5,-167.7785"/>
-<polygon fill="none" stroke="#191970" points="101.0001,-194.3339 104.5,-204.334 108.0001,-194.334 101.0001,-194.3339"/>
+<path fill="none" stroke="#191970" d="M104.5,-227.0222C104.5,-218.097 104.5,-209.3384 104.5,-200.856"/>
+<polygon fill="none" stroke="#191970" points="101.0001,-227.164 104.5,-237.1641 108.0001,-227.1641 101.0001,-227.164"/>
</g>
</g>
</svg>
diff --git a/docs/reference/api/doxygen/classtvm_1_1te_1_1Stage-members.html b/docs/reference/api/doxygen/classtvm_1_1te_1_1Stage-members.html
index aea79434b9..f2b2ddb1ce 100644
--- a/docs/reference/api/doxygen/classtvm_1_1te_1_1Stage-members.html
+++ b/docs/reference/api/doxygen/classtvm_1_1te_1_1Stage-members.html
@@ -109,7 +109,7 @@ $(function() {
<tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1Stage.html#a51432f38d9ec4792a2525023179ae604">split_by_nparts</a>(IterVar parent, PrimExpr nparts, IterVar *p_outer, IterVar *p_inner)</td><td class="entry"><a class="el" href="classtvm_1_1te_1_1Stage.html">tvm::te::Stage</a></td><td class="entry"></td></tr>
<tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1Stage.html#afec82602b9321c489b88632a005335f8">Stage</a>()</td><td class="entry"><a class="el" href="classtvm_1_1te_1_1Stage.html">tvm::te::Stage</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
<tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1Stage.html#aa6ace38b6312e42aaf9389c8749ae0a4">Stage</a>(ObjectPtr< Object > n)</td><td class="entry"><a class="el" href="classtvm_1_1te_1_1Stage.html">tvm::te::Stage</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">explicit</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1Stage.html#a1ecdc9a000be62c9cc26a96d4c33e36e">Stage</a>(Operation op)</td><td class="entry"><a class="el" href="classtvm_1_1te_1_1Stage.html">tvm::te::Stage</a></td><td class="entry"><span class="mlabel">explicit</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1Stage.html#a510049e03f2152d5934cd3bd75033bab">Stage</a>(Operation op, const ScheduleNode *sch)</td><td class="entry"><a class="el" href="classtvm_1_1te_1_1Stage.html">tvm::te::Stage</a></td><td class="entry"><span class="mlabel">explicit</span></td></tr>
<tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1Stage.html#aa73e3a269d84c3b4f0a1994371d67bab">storage_align</a>(IterVar axis, int factor, int offset)</td><td class="entry"><a class="el" href="classtvm_1_1te_1_1Stage.html">tvm::te::Stage</a></td><td class="entry"></td></tr>
<tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1Stage.html#ab5fe485e1d730c36b096c060b8d2ef9d">tensorize</a>(IterVar var, TensorIntrin f)</td><td class="entry"><a class="el" href="classtvm_1_1te_1_1Stage.html">tvm::te::Stage</a></td><td class="entry"></td></tr>
<tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1Stage.html#a7a42ba3166c506fcacf596ac13553b67">tile</a>(IterVar x_parent, IterVar y_parent, PrimExpr x_factor, PrimExpr y_factor, IterVar *p_x_outer, IterVar *p_y_outer, IterVar *p_x_inner, IterVar *p_y_inner)</td><td class="entry"><a class="el" href="classtvm_1_1te_1_1Stage.html">tvm::te::Stage</a></td><td class="entry"></td></tr>
diff --git a/docs/reference/api/doxygen/classtvm_1_1te_1_1Stage.html b/docs/reference/api/doxygen/classtvm_1_1te_1_1Stage.html
index bb8de5558a..190ed996e4 100644
--- a/docs/reference/api/doxygen/classtvm_1_1te_1_1Stage.html
+++ b/docs/reference/api/doxygen/classtvm_1_1te_1_1Stage.html
@@ -103,9 +103,9 @@ Public Member Functions</h2></td></tr>
<tr class="separator:afec82602b9321c489b88632a005335f8"><td class="memSeparator" colspan="2"> </td></tr>
<tr class="memitem:aa6ace38b6312e42aaf9389c8749ae0a4"><td class="memItemLeft" align="right" valign="top"> </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1te_1_1Stage.html#aa6ace38b6312e42aaf9389c8749ae0a4">Stage</a> (<a class="el" href="classtvm_1_1runtime_1_1ObjectPtr.html">ObjectPtr</a>< <a class="el" href="classtvm_1_1runtime_1_1Object.html">Object</a> > n)</td></tr>
<tr class="separator:aa6ace38b6312e42aaf9389c8749ae0a4"><td class="memSeparator" colspan="2"> </td></tr>
-<tr class="memitem:a1ecdc9a000be62c9cc26a96d4c33e36e"><td class="memItemLeft" align="right" valign="top"> </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1te_1_1Stage.html#a1ecdc9a000be62c9cc26a96d4c33e36e">Stage</a> (<a class="el" href="classtvm_1_1te_1_1Operation.html">Operation</a> op)</td></tr>
-<tr class="memdesc:a1ecdc9a000be62c9cc26a96d4c33e36e"><td class="mdescLeft"> </td><td class="mdescRight">create a new schedule for op. <a href="#a1ecdc9a000be62c9cc26a96d4c33e36e">More...</a><br /></td></tr>
-<tr class="separator:a1ecdc9a000be62c9cc26a96d4c33e36e"><td class="memSeparator" colspan="2"> </td></tr>
+<tr class="memitem:a510049e03f2152d5934cd3bd75033bab"><td class="memItemLeft" align="right" valign="top"> </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1te_1_1Stage.html#a510049e03f2152d5934cd3bd75033bab">Stage</a> (<a class="el" href="classtvm_1_1te_1_1Operation.html">Operation</a> op, const <a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">ScheduleNode</a> *sch)</td></tr>
+<tr class="memdesc:a510049e03f2152d5934cd3bd75033bab"><td class="mdescLeft"> </td><td class="mdescRight">create a new schedule for op. <a href="#a510049e03f2152d5934cd3bd75033bab">More...</a><br /></td></tr>
+<tr class="separator:a510049e03f2152d5934cd3bd75033bab"><td class="memSeparator" colspan="2"> </td></tr>
<tr class="memitem:a7a5aeafe44281a6fca4b401139407241"><td class="memItemLeft" align="right" valign="top">const <a class="el" href="classtvm_1_1te_1_1StageNode.html">StageNode</a> * </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1te_1_1Stage.html#a7a5aeafe44281a6fca4b401139407241">operator-></a> () const</td></tr>
<tr class="memdesc:a7a5aeafe44281a6fca4b401139407241"><td class="mdescLeft"> </td><td class="mdescRight">access the internal node container <a href="#a7a5aeafe44281a6fca4b401139407241">More...</a><br /></td></tr>
<tr class="separator:a7a5aeafe44281a6fca4b401139407241"><td class="memSeparator" colspan="2"> </td></tr>
@@ -318,8 +318,8 @@ Additional Inherited Members</h2></td></tr>
</div>
</div>
-<a id="a1ecdc9a000be62c9cc26a96d4c33e36e"></a>
-<h2 class="memtitle"><span class="permalink"><a href="#a1ecdc9a000be62c9cc26a96d4c33e36e">◆ </a></span>Stage() <span class="overload">[3/3]</span></h2>
+<a id="a510049e03f2152d5934cd3bd75033bab"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#a510049e03f2152d5934cd3bd75033bab">◆ </a></span>Stage() <span class="overload">[3/3]</span></h2>
<div class="memitem">
<div class="memproto">
@@ -331,8 +331,18 @@ Additional Inherited Members</h2></td></tr>
<td class="memname">tvm::te::Stage::Stage </td>
<td>(</td>
<td class="paramtype"><a class="el" href="classtvm_1_1te_1_1Operation.html">Operation</a> </td>
- <td class="paramname"><em>op</em></td><td>)</td>
+ <td class="paramname"><em>op</em>, </td>
+ </tr>
+ <tr>
+ <td class="paramkey"></td>
+ <td></td>
+ <td class="paramtype">const <a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">ScheduleNode</a> * </td>
+ <td class="paramname"><em>sch</em> </td>
+ </tr>
+ <tr>
<td></td>
+ <td>)</td>
+ <td></td><td></td>
</tr>
</table>
</td>
@@ -346,6 +356,7 @@ Additional Inherited Members</h2></td></tr>
<dl class="params"><dt>Parameters</dt><dd>
<table class="params">
<tr><td class="paramname">op</td><td>The operator in the schedule </td></tr>
+ <tr><td class="paramname">sch</td><td>The schedule which the current stage belongs to </td></tr>
</table>
</dd>
</dl>
diff --git a/docs/reference/api/doxygen/classtvm_1_1te_1_1StageNode-members.html b/docs/reference/api/doxygen/classtvm_1_1te_1_1StageNode-members.html
index 35966bd75b..66cd7e4397 100644
--- a/docs/reference/api/doxygen/classtvm_1_1te_1_1StageNode-members.html
+++ b/docs/reference/api/doxygen/classtvm_1_1te_1_1StageNode-members.html
@@ -80,47 +80,48 @@ $(function() {
<tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#af06a8dd105358f2c3aa5f65c8014f13f">_type_key</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
<tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#ae4592502d1c99f2515be61a6503bb7a6">all_iter_vars</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
<tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#ad8a1f14b199103ecf22e7bf021eff8d4">attach_ivar</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a75c8cf7d913a913e34abcaf6797540a5">attach_stage</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a1e77f0ad8149a5aabaa8a98907ff3bed">attach_type</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a5525978c4fe9d26848934f8e096d887c">axis_separators</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a70fb5361147634605d6595bb89381f03">DecRef</a>()</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#af4407d2b59132e803ff791482dbe0145">deleter_</a></td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">protected</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#af5cb8c43f82eac4021fd06ab7c475f82">double_buffer</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#ac6bfe27a0802f257d467667522d0cbee">env_threads</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a9e84841ca982bff376a978ade0132631">FDeleter</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a726972ff315c446192df94027ddea032">GetOrAllocRuntimeTypeIndex</a>(const std::string &key, uint32_t static_tindex, uint32_t parent_tindex, uint32_t type_child_slots, bool type_child_slots_can_overflow)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">protected</span><span class="mlabel">static</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a4d951e51832081b85875669eac90e940">GetTypeKey</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a5693cbadcc1168b96db7b1cc5c200b86">GetTypeKeyHash</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a02935c5eeeaa3ae794e971d449b5e377">group</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ac9e5eed7719e322117bde996a171e33a">IncRef</a>()</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#af5cc9e6c2276cf8abde8f437f8bdbda4">is_output</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a90e90b3f4ba8a590baff78c75807bbc7">IsInstance</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a1d1f5c5e99f0c0c5d09a497b5c05443f">iter_var_attrs</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a99d637a0da3b9f5d688f62410c884bea">layout_transforms</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a80162bcc647f01efa9ab97da3ca57410">leaf_iter_vars</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a98769dd08ea20c6d72f9abfe80d20090">num_child_stages</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a133436a9ec5c4a768b94102bf95a660b">Object</a>()</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ab7968feb6ad38ecaffc320e13819d826">Object</a>(const Object &other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#aa1612f69ea5b4225d4cda759cd517323">Object</a>(Object &&other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a1e98ce6b9c48fd7ec5077c06f35d2ae1">op</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a69c32fbd96181f5c21d2c878ab285e4f">operator=</a>(const Object &other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ae341e561272ff43cdcbc927bc29ac50d">operator=</a>(Object &&other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a3e7c2fb80404a12a9e843fcb38accd78">origin_op</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a0d492efee331e2239a093f4b2017c10f">ref_counter_</a></td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">protected</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a55549a6c23987890246248682560a03d">RefCounterType</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#ad1c0f8dc1f0f406a2abcd05fdad8fad5">relations</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a8a709edc806b64c606a12c703fab22e4">rolling_buffer</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ad94d79729ac85aa7c976e23d39066383">RuntimeTypeIndex</a>()</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">static</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a55acf027a39738cd1ddd063b27086038">scope</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a8f4ba7f2931b3541c12734af511600a7">store_predicate</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a51b9748fd004dc2f3fcb23163eb78e0f">TVM_DECLARE_FINAL_OBJECT_INFO</a>(StageNode, Object)</td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a481f01923b14e1851ebd38506e9c66ea">type_index</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a4bfc2586cb55f2af47728187b3256255">type_index_</a></td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">protected</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a817ba6c23b7ee1821c48a75edf255a30">TypeIndex2Key</a>(uint32_t tindex)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a6ee32a02dd44257da105fbbe5d9c8622">TypeIndex2KeyHash</a>(uint32_t tindex)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a6841f97e06e6614dd7e82c6dd41b818a">TypeKey2Index</a>(const std::string &key)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#afd548730a6139d19fe24473ad66026d7">unique</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#ae86cbe717f924c4e30cef2a1a086308a">VisitAttrs</a>(AttrVisitor *v)</td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a0627160a0f180921c11b3ffcda1ab2c8">attach_sch</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a75c8cf7d913a913e34abcaf6797540a5">attach_stage</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a1e77f0ad8149a5aabaa8a98907ff3bed">attach_type</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a5525978c4fe9d26848934f8e096d887c">axis_separators</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a70fb5361147634605d6595bb89381f03">DecRef</a>()</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#af4407d2b59132e803ff791482dbe0145">deleter_</a></td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">protected</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#af5cb8c43f82eac4021fd06ab7c475f82">double_buffer</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#ac6bfe27a0802f257d467667522d0cbee">env_threads</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a9e84841ca982bff376a978ade0132631">FDeleter</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a726972ff315c446192df94027ddea032">GetOrAllocRuntimeTypeIndex</a>(const std::string &key, uint32_t static_tindex, uint32_t parent_tindex, uint32_t type_child_slots, bool type_child_slots_can_overflow)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">protected</span><span class="mlabel">static</span [...]
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a4d951e51832081b85875669eac90e940">GetTypeKey</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a5693cbadcc1168b96db7b1cc5c200b86">GetTypeKeyHash</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a02935c5eeeaa3ae794e971d449b5e377">group</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ac9e5eed7719e322117bde996a171e33a">IncRef</a>()</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#af5cc9e6c2276cf8abde8f437f8bdbda4">is_output</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a90e90b3f4ba8a590baff78c75807bbc7">IsInstance</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a1d1f5c5e99f0c0c5d09a497b5c05443f">iter_var_attrs</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a99d637a0da3b9f5d688f62410c884bea">layout_transforms</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a80162bcc647f01efa9ab97da3ca57410">leaf_iter_vars</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a98769dd08ea20c6d72f9abfe80d20090">num_child_stages</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a133436a9ec5c4a768b94102bf95a660b">Object</a>()</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ab7968feb6ad38ecaffc320e13819d826">Object</a>(const Object &other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#aa1612f69ea5b4225d4cda759cd517323">Object</a>(Object &&other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a1e98ce6b9c48fd7ec5077c06f35d2ae1">op</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a69c32fbd96181f5c21d2c878ab285e4f">operator=</a>(const Object &other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ae341e561272ff43cdcbc927bc29ac50d">operator=</a>(Object &&other)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a3e7c2fb80404a12a9e843fcb38accd78">origin_op</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a0d492efee331e2239a093f4b2017c10f">ref_counter_</a></td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">protected</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a55549a6c23987890246248682560a03d">RefCounterType</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#ad1c0f8dc1f0f406a2abcd05fdad8fad5">relations</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a8a709edc806b64c606a12c703fab22e4">rolling_buffer</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#ad94d79729ac85aa7c976e23d39066383">RuntimeTypeIndex</a>()</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">static</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a55acf027a39738cd1ddd063b27086038">scope</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a8f4ba7f2931b3541c12734af511600a7">store_predicate</a></td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a51b9748fd004dc2f3fcb23163eb78e0f">TVM_DECLARE_FINAL_OBJECT_INFO</a>(StageNode, Object)</td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a481f01923b14e1851ebd38506e9c66ea">type_index</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a4bfc2586cb55f2af47728187b3256255">type_index_</a></td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">protected</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a817ba6c23b7ee1821c48a75edf255a30">TypeIndex2Key</a>(uint32_t tindex)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a6ee32a02dd44257da105fbbe5d9c8622">TypeIndex2KeyHash</a>(uint32_t tindex)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#a6841f97e06e6614dd7e82c6dd41b818a">TypeKey2Index</a>(const std::string &key)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html#afd548730a6139d19fe24473ad66026d7">unique</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1Object.html">tvm::runtime::Object</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html#ae86cbe717f924c4e30cef2a1a086308a">VisitAttrs</a>(AttrVisitor *v)</td><td class="entry"><a class="el" href="classtvm_1_1te_1_1StageNode.html">tvm::te::StageNode</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
</table></div><!-- contents -->
<!-- start footer part -->
<hr class="footer"/><address class="footer"><small>
diff --git a/docs/reference/api/doxygen/classtvm_1_1te_1_1StageNode.html b/docs/reference/api/doxygen/classtvm_1_1te_1_1StageNode.html
index 3b37734efc..e2acb6c73d 100644
--- a/docs/reference/api/doxygen/classtvm_1_1te_1_1StageNode.html
+++ b/docs/reference/api/doxygen/classtvm_1_1te_1_1StageNode.html
@@ -85,7 +85,7 @@ Inheritance diagram for tvm::te::StageNode:</div>
<div class="dynheader">
Collaboration diagram for tvm::te::StageNode:</div>
<div class="dyncontent">
-<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1te_1_1StageNode__coll__graph.svg" width="1990" height="1930"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
+<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1te_1_1StageNode__coll__graph.svg" width="2566" height="1974"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
</div>
</div>
<table class="memberdecls">
@@ -153,6 +153,9 @@ Public Attributes</h2></td></tr>
<tr class="memitem:a75c8cf7d913a913e34abcaf6797540a5"><td class="memItemLeft" align="right" valign="top"><a class="el" href="classtvm_1_1te_1_1Stage.html">Stage</a> </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a75c8cf7d913a913e34abcaf6797540a5">attach_stage</a></td></tr>
<tr class="memdesc:a75c8cf7d913a913e34abcaf6797540a5"><td class="mdescLeft"> </td><td class="mdescRight">The stage this node attaches to. <a href="#a75c8cf7d913a913e34abcaf6797540a5">More...</a><br /></td></tr>
<tr class="separator:a75c8cf7d913a913e34abcaf6797540a5"><td class="memSeparator" colspan="2"> </td></tr>
+<tr class="memitem:a0627160a0f180921c11b3ffcda1ab2c8"><td class="memItemLeft" align="right" valign="top">const <a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">ScheduleNode</a> * </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a0627160a0f180921c11b3ffcda1ab2c8">attach_sch</a></td></tr>
+<tr class="memdesc:a0627160a0f180921c11b3ffcda1ab2c8"><td class="mdescLeft"> </td><td class="mdescRight">The schedule the current stage is attached to. <a href="#a0627160a0f180921c11b3ffcda1ab2c8">More...</a><br /></td></tr>
+<tr class="separator:a0627160a0f180921c11b3ffcda1ab2c8"><td class="memSeparator" colspan="2"> </td></tr>
<tr class="memitem:a55acf027a39738cd1ddd063b27086038"><td class="memItemLeft" align="right" valign="top">std::string </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1te_1_1StageNode.html#a55acf027a39738cd1ddd063b27086038">scope</a></td></tr>
<tr class="memdesc:a55acf027a39738cd1ddd063b27086038"><td class="mdescLeft"> </td><td class="mdescRight">The thread storage scope level of the stage. <a href="#a55acf027a39738cd1ddd063b27086038">More...</a><br /></td></tr>
<tr class="separator:a55acf027a39738cd1ddd063b27086038"><td class="memSeparator" colspan="2"> </td></tr>
@@ -358,6 +361,22 @@ Additional Inherited Members</h2></td></tr>
<p>The attach point of this schedule. </p>
+</div>
+</div>
+<a id="a0627160a0f180921c11b3ffcda1ab2c8"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#a0627160a0f180921c11b3ffcda1ab2c8">◆ </a></span>attach_sch</h2>
+
+<div class="memitem">
+<div class="memproto">
+ <table class="memname">
+ <tr>
+ <td class="memname">const <a class="el" href="classtvm_1_1te_1_1ScheduleNode.html">ScheduleNode</a>* tvm::te::StageNode::attach_sch</td>
+ </tr>
+ </table>
+</div><div class="memdoc">
+
+<p>The schedule the current stage is attached to. </p>
+
</div>
</div>
<a id="a75c8cf7d913a913e34abcaf6797540a5"></a>
diff --git a/docs/reference/api/doxygen/classtvm_1_1te_1_1StageNode__coll__graph.svg b/docs/reference/api/doxygen/classtvm_1_1te_1_1StageNode__coll__graph.svg
index 152d669eab..c798b26fa5 100644
--- a/docs/reference/api/doxygen/classtvm_1_1te_1_1StageNode__coll__graph.svg
+++ b/docs/reference/api/doxygen/classtvm_1_1te_1_1StageNode__coll__graph.svg
@@ -4,478 +4,734 @@
<!-- Generated by graphviz version 2.40.1 (20161225.0304)
-->
<!-- Title: tvm::te::StageNode Pages: 1 -->
-<svg width="1492pt" height="1447pt"
- viewBox="0.00 0.00 1491.50 1447.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
-<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 1443)">
+<svg width="1924pt" height="1480pt"
+ viewBox="0.00 0.00 1923.72 1480.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 1476)">
<title>tvm::te::StageNode</title>
-<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-1443 1487.5,-1443 1487.5,4 -4,4"/>
+<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-1476 1919.7173,-1476 1919.7173,4 -4,4"/>
<!-- Node2 -->
<g id="node1" class="node">
<title>Node2</title>
-<polygon fill="#bfbfbf" stroke="#000000" points="643,-.5 643,-134.5 852,-134.5 852,-.5 643,-.5"/>
-<text text-anchor="middle" x="747.5" y="-122.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::te::StageNode</text>
-<polyline fill="none" stroke="#000000" points="643,-115.5 852,-115.5 "/>
-<text text-anchor="start" x="651" y="-103.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ attach_type</text>
-<text text-anchor="start" x="651" y="-92.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ scope</text>
-<text text-anchor="start" x="651" y="-81.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ is_output</text>
-<text text-anchor="start" x="651" y="-70.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ double_buffer</text>
-<text text-anchor="start" x="651" y="-59.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ rolling_buffer</text>
-<text text-anchor="start" x="651" y="-48.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ num_child_stages</text>
-<text text-anchor="start" x="651" y="-37.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
-<polyline fill="none" stroke="#000000" points="643,-30.5 852,-30.5 "/>
-<text text-anchor="start" x="651" y="-18.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ VisitAttrs()</text>
-<text text-anchor="start" x="651" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DECLARE_FINAL_OBJECT_INFO()</text>
+<polygon fill="#bfbfbf" stroke="#000000" points="686.5,-.5 686.5,-134.5 895.5,-134.5 895.5,-.5 686.5,-.5"/>
+<text text-anchor="middle" x="791" y="-122.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::te::StageNode</text>
+<polyline fill="none" stroke="#000000" points="686.5,-115.5 895.5,-115.5 "/>
+<text text-anchor="start" x="694.5" y="-103.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ attach_type</text>
+<text text-anchor="start" x="694.5" y="-92.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ scope</text>
+<text text-anchor="start" x="694.5" y="-81.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ is_output</text>
+<text text-anchor="start" x="694.5" y="-70.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ double_buffer</text>
+<text text-anchor="start" x="694.5" y="-59.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ rolling_buffer</text>
+<text text-anchor="start" x="694.5" y="-48.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ num_child_stages</text>
+<text text-anchor="start" x="694.5" y="-37.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
+<polyline fill="none" stroke="#000000" points="686.5,-30.5 895.5,-30.5 "/>
+<text text-anchor="start" x="694.5" y="-18.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ VisitAttrs()</text>
+<text text-anchor="start" x="694.5" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DECLARE_FINAL_OBJECT_INFO()</text>
</g>
<!-- Node3 -->
<g id="node2" class="node">
<title>Node3</title>
<g id="a_node2"><a xlink:href="classtvm_1_1runtime_1_1Object.html" target="_top" xlink:title="base class of all object containers. ">
-<polygon fill="#ffffff" stroke="#000000" points="0,-182.5 0,-569.5 183,-569.5 183,-182.5 0,-182.5"/>
-<text text-anchor="middle" x="91.5" y="-557.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Object</text>
-<polyline fill="none" stroke="#000000" points="0,-550.5 183,-550.5 "/>
-<text text-anchor="start" x="8" y="-538.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
-<text text-anchor="start" x="8" y="-527.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_final</text>
-<text text-anchor="start" x="8" y="-516.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_child_slots</text>
-<text text-anchor="start" x="8" y="-505.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_child_slots_can</text>
-<text text-anchor="start" x="8" y="-494.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_overflow</text>
-<text text-anchor="start" x="8" y="-483.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_visit</text>
-<text text-anchor="start" x="8" y="-472.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_attrs</text>
-<text text-anchor="start" x="8" y="-461.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_sequal</text>
-<text text-anchor="start" x="8" y="-450.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
-<text text-anchor="start" x="8" y="-439.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_shash</text>
-<text text-anchor="start" x="8" y="-428.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
-<text text-anchor="start" x="8" y="-417.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_index</text>
-<text text-anchor="start" x="8" y="-406.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># type_index_</text>
-<text text-anchor="start" x="8" y="-395.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># ref_counter_</text>
-<polyline fill="none" stroke="#000000" points="0,-388.5 183,-388.5 "/>
-<text text-anchor="start" x="8" y="-376.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ type_index()</text>
-<text text-anchor="start" x="8" y="-365.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetTypeKey()</text>
-<text text-anchor="start" x="8" y="-354.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetTypeKeyHash()</text>
-<text text-anchor="start" x="8" y="-343.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ IsInstance()</text>
-<text text-anchor="start" x="8" y="-332.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
-<text text-anchor="start" x="8" y="-321.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
-<text text-anchor="start" x="8" y="-310.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
-<text text-anchor="start" x="8" y="-299.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
-<text text-anchor="start" x="8" y="-288.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="8" y="-277.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="8" y="-266.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeIndex2Key()</text>
-<text text-anchor="start" x="8" y="-255.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeIndex2KeyHash()</text>
-<text text-anchor="start" x="8" y="-244.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeKey2Index()</text>
-<text text-anchor="start" x="8" y="-233.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _GetOrAllocRuntimeTypeIndex()</text>
-<text text-anchor="start" x="8" y="-222.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ RuntimeTypeIndex()</text>
-<text text-anchor="start" x="8" y="-211.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># IncRef()</text>
-<text text-anchor="start" x="8" y="-200.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DecRef()</text>
-<text text-anchor="start" x="8" y="-189.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetOrAllocRuntimeTypeIndex()</text>
+<polygon fill="#ffffff" stroke="#000000" points="525.5,-598.5 525.5,-985.5 708.5,-985.5 708.5,-598.5 525.5,-598.5"/>
+<text text-anchor="middle" x="617" y="-973.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Object</text>
+<polyline fill="none" stroke="#000000" points="525.5,-966.5 708.5,-966.5 "/>
+<text text-anchor="start" x="533.5" y="-954.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
+<text text-anchor="start" x="533.5" y="-943.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_final</text>
+<text text-anchor="start" x="533.5" y="-932.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_child_slots</text>
+<text text-anchor="start" x="533.5" y="-921.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_child_slots_can</text>
+<text text-anchor="start" x="533.5" y="-910.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_overflow</text>
+<text text-anchor="start" x="533.5" y="-899.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_visit</text>
+<text text-anchor="start" x="533.5" y="-888.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_attrs</text>
+<text text-anchor="start" x="533.5" y="-877.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_sequal</text>
+<text text-anchor="start" x="533.5" y="-866.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
+<text text-anchor="start" x="533.5" y="-855.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_has_method_shash</text>
+<text text-anchor="start" x="533.5" y="-844.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_reduce</text>
+<text text-anchor="start" x="533.5" y="-833.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_index</text>
+<text text-anchor="start" x="533.5" y="-822.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># type_index_</text>
+<text text-anchor="start" x="533.5" y="-811.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># ref_counter_</text>
+<polyline fill="none" stroke="#000000" points="525.5,-804.5 708.5,-804.5 "/>
+<text text-anchor="start" x="533.5" y="-792.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ type_index()</text>
+<text text-anchor="start" x="533.5" y="-781.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetTypeKey()</text>
+<text text-anchor="start" x="533.5" y="-770.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ GetTypeKeyHash()</text>
+<text text-anchor="start" x="533.5" y="-759.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ IsInstance()</text>
+<text text-anchor="start" x="533.5" y="-748.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
+<text text-anchor="start" x="533.5" y="-737.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
+<text text-anchor="start" x="533.5" y="-726.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
+<text text-anchor="start" x="533.5" y="-715.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Object()</text>
+<text text-anchor="start" x="533.5" y="-704.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="533.5" y="-693.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="533.5" y="-682.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeIndex2Key()</text>
+<text text-anchor="start" x="533.5" y="-671.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeIndex2KeyHash()</text>
+<text text-anchor="start" x="533.5" y="-660.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TypeKey2Index()</text>
+<text text-anchor="start" x="533.5" y="-649.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _GetOrAllocRuntimeTypeIndex()</text>
+<text text-anchor="start" x="533.5" y="-638.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ RuntimeTypeIndex()</text>
+<text text-anchor="start" x="533.5" y="-627.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># IncRef()</text>
+<text text-anchor="start" x="533.5" y="-616.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DecRef()</text>
+<text text-anchor="start" x="533.5" y="-605.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetOrAllocRuntimeTypeIndex()</text>
</a>
</g>
</g>
<!-- Node3->Node2 -->
<g id="edge1" class="edge">
<title>Node3->Node2</title>
-<path fill="none" stroke="#191970" d="M189.7785,-226.4832C214.6413,-198.2603 243.4991,-171.7527 275.5,-153 336.3481,-117.3427 523.2939,-91.4417 642.8438,-77.9957"/>
-<polygon fill="none" stroke="#191970" points="187.0989,-224.2311 183.2094,-234.0862 192.3957,-228.8076 187.0989,-224.2311"/>
+<path fill="none" stroke="#191970" d="M715.2414,-676.5964C774.5212,-599.1077 844.4926,-491.647 876,-383 899.5197,-301.8971 900.0004,-273.962 876,-193 869.9419,-172.5637 859.2078,-152.4955 847.3633,-134.6085"/>
+<polygon fill="none" stroke="#191970" points="712.1385,-674.8889 708.8063,-684.9461 717.683,-679.162 712.1385,-674.8889"/>
</g>
<!-- Node3->Node3 -->
<g id="edge2" class="edge">
<title>Node3->Node3</title>
-<path fill="none" stroke="#404040" d="M183.3625,-410.5649C194.0482,-404.1857 201,-392.6641 201,-376 201,-364.8038 197.8618,-355.929 192.5615,-349.3756"/>
-<polygon fill="none" stroke="#404040" points="192.4464,-349.2763 185.2907,-348.3836 183.3625,-341.4351 190.5182,-342.3277 192.4464,-349.2763"/>
-<text text-anchor="middle" x="227" y="-373.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> #deleter_</text>
+<path fill="none" stroke="#404040" d="M708.8625,-825.9248C719.5482,-819.6637 726.5,-808.3555 726.5,-792 726.5,-781.0112 723.3618,-772.3007 718.0615,-765.8687"/>
+<polygon fill="none" stroke="#404040" points="718.0184,-765.8322 710.8548,-765.0056 708.8625,-758.0752 716.0261,-758.9017 718.0184,-765.8322"/>
+<text text-anchor="middle" x="752.5" y="-789.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> #deleter_</text>
+</g>
+<!-- Node12 -->
+<g id="node11" class="node">
+<title>Node12</title>
+<g id="a_node11"><a xlink:href="classtvm_1_1te_1_1ScheduleNode.html" target="_top" xlink:title="node container for schedule ">
+<polygon fill="#ffffff" stroke="#000000" points="955.5,-226.5 955.5,-349.5 1164.5,-349.5 1164.5,-226.5 955.5,-226.5"/>
+<text text-anchor="middle" x="1060" y="-337.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::te::ScheduleNode</text>
+<polyline fill="none" stroke="#000000" points="955.5,-330.5 1164.5,-330.5 "/>
+<text text-anchor="start" x="963.5" y="-318.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ op2stage_cache_</text>
+<text text-anchor="start" x="963.5" y="-307.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
+<polyline fill="none" stroke="#000000" points="955.5,-300.5 1164.5,-300.5 "/>
+<text text-anchor="start" x="963.5" y="-288.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ VisitAttrs()</text>
+<text text-anchor="start" x="963.5" y="-277.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ InitCache()</text>
+<text text-anchor="start" x="963.5" y="-266.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ InvalidateCache()</text>
+<text text-anchor="start" x="963.5" y="-255.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Contain()</text>
+<text text-anchor="start" x="963.5" y="-244.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Contain()</text>
+<text text-anchor="start" x="963.5" y="-233.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DECLARE_FINAL_OBJECT_INFO()</text>
+</a>
+</g>
+</g>
+<!-- Node3->Node12 -->
+<g id="edge17" class="edge">
+<title>Node3->Node12</title>
+<path fill="none" stroke="#191970" d="M715.8571,-654.3661C737.3267,-632.4885 761.6077,-612.3706 788,-598 835.3639,-572.2104 869.7462,-618.0144 908,-580 965.1168,-523.2406 897.8573,-471.916 936,-401 946.1404,-382.1466 960.6796,-364.8044 976.2557,-349.6597"/>
+<polygon fill="none" stroke="#191970" points="713.079,-652.2064 708.685,-661.8471 718.132,-657.0508 713.079,-652.2064"/>
</g>
<!-- Node4 -->
<g id="node3" class="node">
<title>Node4</title>
<g id="a_node3"><a xlink:href="classtvm_1_1tir_1_1IterVar.html" target="_top" xlink:title="Iteration Variable, represents an iteration over an integer interval. ">
-<polygon fill="#ffffff" stroke="#000000" points="218.5,-621.5 218.5,-733.5 372.5,-733.5 372.5,-621.5 218.5,-621.5"/>
-<text text-anchor="middle" x="295.5" y="-721.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::tir::IterVar</text>
-<polyline fill="none" stroke="#000000" points="218.5,-714.5 372.5,-714.5 "/>
-<text text-anchor="middle" x="295.5" y="-702.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="218.5,-695.5 372.5,-695.5 "/>
-<text text-anchor="start" x="226.5" y="-683.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ IterVar()</text>
-<text text-anchor="start" x="226.5" y="-672.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator PrimExpr()</text>
-<text text-anchor="start" x="226.5" y="-661.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_OBJECT_REF</text>
-<text text-anchor="start" x="226.5" y="-650.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_METHODS()</text>
-<text text-anchor="start" x="226.5" y="-639.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_OBJECT_REF</text>
-<text text-anchor="start" x="226.5" y="-628.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_COW_METHOD()</text>
+<polygon fill="#ffffff" stroke="#000000" points="0,-434.5 0,-546.5 154,-546.5 154,-434.5 0,-434.5"/>
+<text text-anchor="middle" x="77" y="-534.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::tir::IterVar</text>
+<polyline fill="none" stroke="#000000" points="0,-527.5 154,-527.5 "/>
+<text text-anchor="middle" x="77" y="-515.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="0,-508.5 154,-508.5 "/>
+<text text-anchor="start" x="8" y="-496.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ IterVar()</text>
+<text text-anchor="start" x="8" y="-485.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator PrimExpr()</text>
+<text text-anchor="start" x="8" y="-474.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_OBJECT_REF</text>
+<text text-anchor="start" x="8" y="-463.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_METHODS()</text>
+<text text-anchor="start" x="8" y="-452.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_OBJECT_REF</text>
+<text text-anchor="start" x="8" y="-441.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_COW_METHOD()</text>
</a>
</g>
</g>
<!-- Node4->Node2 -->
<g id="edge3" class="edge">
<title>Node4->Node2</title>
-<path fill="none" stroke="#404040" d="M288.4755,-621.2706C275.3577,-506.0747 252.9539,-248.4956 305.5,-182 345.5506,-131.3171 513.7572,-98.7593 630.7056,-81.7908"/>
-<polygon fill="none" stroke="#404040" points="631.0416,-81.7429 636.4163,-76.9355 642.9212,-80.0478 637.5465,-84.8552 631.0416,-81.7429"/>
-<text text-anchor="middle" x="339" y="-373.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +attach_ivar</text>
+<path fill="none" stroke="#404040" d="M66.9629,-434.2345C58.6578,-368.6716 55.961,-260.7065 112,-193 181.9672,-108.4654 499.1164,-80.6551 674.1273,-71.6706"/>
+<polygon fill="none" stroke="#404040" points="674.2339,-71.6654 680.0274,-67.3717 686.2191,-71.0682 680.4256,-75.3618 674.2339,-71.6654"/>
+<text text-anchor="middle" x="145.5" y="-285.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +attach_ivar</text>
</g>
<!-- Node5 -->
<g id="node4" class="node">
<title>Node5</title>
<g id="a_node4"><a xlink:href="classtvm_1_1runtime_1_1ObjectRef.html" target="_top" xlink:title="Base class of all object reference. ">
-<polygon fill="#ffffff" stroke="#000000" points="727.5,-990.5 727.5,-1212.5 861.5,-1212.5 861.5,-990.5 727.5,-990.5"/>
-<text text-anchor="middle" x="794.5" y="-1200.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectRef</text>
-<polyline fill="none" stroke="#000000" points="727.5,-1193.5 861.5,-1193.5 "/>
-<text text-anchor="start" x="735.5" y="-1181.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
-<polyline fill="none" stroke="#000000" points="727.5,-1174.5 861.5,-1174.5 "/>
-<text text-anchor="start" x="735.5" y="-1162.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
-<text text-anchor="start" x="735.5" y="-1151.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
-<text text-anchor="start" x="735.5" y="-1140.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ same_as()</text>
-<text text-anchor="start" x="735.5" y="-1129.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator==()</text>
-<text text-anchor="start" x="735.5" y="-1118.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator!=()</text>
-<text text-anchor="start" x="735.5" y="-1107.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator<()</text>
-<text text-anchor="start" x="735.5" y="-1096.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ defined()</text>
-<text text-anchor="start" x="735.5" y="-1085.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
-<text text-anchor="start" x="735.5" y="-1074.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
-<text text-anchor="start" x="735.5" y="-1063.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
-<text text-anchor="start" x="735.5" y="-1052.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ use_count()</text>
-<text text-anchor="start" x="735.5" y="-1041.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ as()</text>
-<text text-anchor="start" x="735.5" y="-1030.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># get_mutable()</text>
-<text text-anchor="start" x="735.5" y="-1019.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DowncastNoCheck()</text>
-<text text-anchor="start" x="735.5" y="-1008.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># FFIClearAfterMove()</text>
-<text text-anchor="start" x="735.5" y="-997.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetDataPtr()</text>
+<polygon fill="#ffffff" stroke="#000000" points="945,-1023.5 945,-1245.5 1079,-1245.5 1079,-1023.5 945,-1023.5"/>
+<text text-anchor="middle" x="1012" y="-1233.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectRef</text>
+<polyline fill="none" stroke="#000000" points="945,-1226.5 1079,-1226.5 "/>
+<text text-anchor="start" x="953" y="-1214.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
+<polyline fill="none" stroke="#000000" points="945,-1207.5 1079,-1207.5 "/>
+<text text-anchor="start" x="953" y="-1195.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
+<text text-anchor="start" x="953" y="-1184.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
+<text text-anchor="start" x="953" y="-1173.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ same_as()</text>
+<text text-anchor="start" x="953" y="-1162.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator==()</text>
+<text text-anchor="start" x="953" y="-1151.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator!=()</text>
+<text text-anchor="start" x="953" y="-1140.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator<()</text>
+<text text-anchor="start" x="953" y="-1129.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ defined()</text>
+<text text-anchor="start" x="953" y="-1118.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
+<text text-anchor="start" x="953" y="-1107.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
+<text text-anchor="start" x="953" y="-1096.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
+<text text-anchor="start" x="953" y="-1085.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ use_count()</text>
+<text text-anchor="start" x="953" y="-1074.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ as()</text>
+<text text-anchor="start" x="953" y="-1063.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># get_mutable()</text>
+<text text-anchor="start" x="953" y="-1052.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DowncastNoCheck()</text>
+<text text-anchor="start" x="953" y="-1041.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># FFIClearAfterMove()</text>
+<text text-anchor="start" x="953" y="-1030.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetDataPtr()</text>
</a>
</g>
</g>
<!-- Node5->Node4 -->
<g id="edge4" class="edge">
<title>Node5->Node4</title>
-<path fill="none" stroke="#191970" d="M718.1534,-1067.7485C662.6241,-1041.131 587.3546,-1000.6254 529.5,-953 450.1633,-887.6907 376.5807,-793.2105 333.8516,-733.599"/>
-<polygon fill="none" stroke="#191970" points="716.703,-1070.9343 727.2375,-1072.0636 719.7065,-1064.6114 716.703,-1070.9343"/>
+<path fill="none" stroke="#191970" d="M934.8483,-1130.7697C753.3739,-1120.4144 306.8001,-1085.2252 201,-986 77.1344,-869.832 69.1962,-650.1932 73.0591,-546.8903"/>
+<polygon fill="none" stroke="#191970" points="934.6722,-1134.2653 944.8525,-1131.3307 935.0642,-1127.2762 934.6722,-1134.2653"/>
</g>
<!-- Node7 -->
<g id="node6" class="node">
<title>Node7</title>
<g id="a_node6"><a xlink:href="classtvm_1_1runtime_1_1Array.html" target="_top" xlink:title="{tvm::runtime::Array\l\< tvm::IntImm \>\n||+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ operator=()\l+ operator=()\land 25 more...\l}">
-<polygon fill="#ffffff" stroke="#000000" points="422,-588.5 422,-766.5 535,-766.5 535,-588.5 422,-588.5"/>
-<text text-anchor="start" x="430" y="-754.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Array</text>
-<text text-anchor="middle" x="478.5" y="-743.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::IntImm ></text>
-<polyline fill="none" stroke="#000000" points="422,-736.5 535,-736.5 "/>
-<text text-anchor="middle" x="478.5" y="-724.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="422,-717.5 535,-717.5 "/>
-<text text-anchor="start" x="430" y="-705.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="430" y="-694.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="430" y="-683.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="430" y="-672.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="430" y="-661.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="430" y="-650.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="430" y="-639.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="430" y="-628.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="430" y="-617.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="430" y="-606.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="430" y="-595.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 25 more...</text>
+<polygon fill="#ffffff" stroke="#000000" points="159.5,-401.5 159.5,-579.5 272.5,-579.5 272.5,-401.5 159.5,-401.5"/>
+<text text-anchor="start" x="167.5" y="-567.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Array</text>
+<text text-anchor="middle" x="216" y="-556.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::IntImm ></text>
+<polyline fill="none" stroke="#000000" points="159.5,-549.5 272.5,-549.5 "/>
+<text text-anchor="middle" x="216" y="-537.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="159.5,-530.5 272.5,-530.5 "/>
+<text text-anchor="start" x="167.5" y="-518.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="167.5" y="-507.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="167.5" y="-496.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="167.5" y="-485.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="167.5" y="-474.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="167.5" y="-463.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="167.5" y="-452.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="167.5" y="-441.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="167.5" y="-430.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="167.5" y="-419.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="167.5" y="-408.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 25 more...</text>
</a>
</g>
</g>
<!-- Node5->Node7 -->
<g id="edge7" class="edge">
<title>Node5->Node7</title>
-<path fill="none" stroke="#191970" d="M719.7593,-1037.1774C693.1184,-1012.322 663.9346,-982.7277 640.5,-953 594.7281,-894.9366 552.7151,-822.6773 522.9488,-766.7314"/>
-<polygon fill="none" stroke="#191970" points="717.4811,-1039.8379 727.199,-1044.0584 722.2341,-1034.6989 717.4811,-1039.8379"/>
+<path fill="none" stroke="#191970" d="M934.9237,-1128.6955C771.5654,-1114.7806 398.3997,-1073.9401 312,-986 206.0684,-878.1797 199.3079,-690.6851 206.1676,-579.845"/>
+<polygon fill="none" stroke="#191970" points="934.646,-1132.1844 944.904,-1129.5342 935.2323,-1125.209 934.646,-1132.1844"/>
</g>
<!-- Node8 -->
<g id="node7" class="node">
<title>Node8</title>
<g id="a_node7"><a xlink:href="classtvm_1_1runtime_1_1Array.html" target="_top" xlink:title="{tvm::runtime::Array\l\< tvm::tir::IndexMap \>\n||+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ operator=()\l+ operator=()\land 25 more...\l}">
-<polygon fill="#ffffff" stroke="#000000" points="562.5,-588.5 562.5,-766.5 690.5,-766.5 690.5,-588.5 562.5,-588.5"/>
-<text text-anchor="start" x="570.5" y="-754.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Array</text>
-<text text-anchor="middle" x="626.5" y="-743.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::tir::IndexMap ></text>
-<polyline fill="none" stroke="#000000" points="562.5,-736.5 690.5,-736.5 "/>
-<text text-anchor="middle" x="626.5" y="-724.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="562.5,-717.5 690.5,-717.5 "/>
-<text text-anchor="start" x="570.5" y="-705.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="570.5" y="-694.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="570.5" y="-683.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="570.5" y="-672.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="570.5" y="-661.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="570.5" y="-650.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="570.5" y="-639.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="570.5" y="-628.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="570.5" y="-617.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="570.5" y="-606.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="570.5" y="-595.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 25 more...</text>
+<polygon fill="#ffffff" stroke="#000000" points="278,-401.5 278,-579.5 406,-579.5 406,-401.5 278,-401.5"/>
+<text text-anchor="start" x="286" y="-567.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Array</text>
+<text text-anchor="middle" x="342" y="-556.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::tir::IndexMap ></text>
+<polyline fill="none" stroke="#000000" points="278,-549.5 406,-549.5 "/>
+<text text-anchor="middle" x="342" y="-537.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="278,-530.5 406,-530.5 "/>
+<text text-anchor="start" x="286" y="-518.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="286" y="-507.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="286" y="-496.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="286" y="-485.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="286" y="-474.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="286" y="-463.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="286" y="-452.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="286" y="-441.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="286" y="-430.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="286" y="-419.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="286" y="-408.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 25 more...</text>
</a>
</g>
</g>
<!-- Node5->Node8 -->
<g id="edge9" class="edge">
<title>Node5->Node8</title>
-<path fill="none" stroke="#191970" d="M721.7763,-1007.2976C714.6379,-995.7653 708.0119,-983.8577 702.5,-972 672.0069,-906.3999 652.3851,-826.6699 640.7319,-766.7208"/>
-<polygon fill="none" stroke="#191970" points="719.0099,-1009.4703 727.325,-1016.0362 724.9193,-1005.718 719.0099,-1009.4703"/>
+<path fill="none" stroke="#191970" d="M934.5075,-1128.8437C787.6937,-1116.4442 477.3706,-1081.6366 407,-1005 301.0444,-889.6097 309.0938,-693.3919 325.1215,-579.5289"/>
+<polygon fill="none" stroke="#191970" points="934.5345,-1132.3579 944.79,-1129.698 935.1142,-1125.3819 934.5345,-1132.3579"/>
</g>
<!-- Node9 -->
<g id="node8" class="node">
<title>Node9</title>
<g id="a_node8"><a xlink:href="classtvm_1_1runtime_1_1Map.html" target="_top" xlink:title="{tvm::runtime::Map\<\l tvm::tir::IterVar,\l tvm::te::IterVarAttr \>\n||+ Map()\l+ Map()\l+ Map()\l+ Map()\l+ Map()\l+ Map()\l+ Map()\l+ operator=()\l+ operator=()\l+ at()\land 12 more...\l}">
-<polygon fill="#ffffff" stroke="#000000" points="636.5,-281.5 636.5,-470.5 758.5,-470.5 758.5,-281.5 636.5,-281.5"/>
-<text text-anchor="start" x="644.5" y="-458.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Map<</text>
-<text text-anchor="start" x="644.5" y="-447.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> tvm::tir::IterVar,</text>
-<text text-anchor="middle" x="697.5" y="-436.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> tvm::te::IterVarAttr ></text>
-<polyline fill="none" stroke="#000000" points="636.5,-429.5 758.5,-429.5 "/>
-<text text-anchor="middle" x="697.5" y="-417.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="636.5,-410.5 758.5,-410.5 "/>
-<text text-anchor="start" x="644.5" y="-398.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
-<text text-anchor="start" x="644.5" y="-387.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
-<text text-anchor="start" x="644.5" y="-376.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
-<text text-anchor="start" x="644.5" y="-365.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
-<text text-anchor="start" x="644.5" y="-354.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
-<text text-anchor="start" x="644.5" y="-343.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
-<text text-anchor="start" x="644.5" y="-332.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
-<text text-anchor="start" x="644.5" y="-321.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="644.5" y="-310.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="644.5" y="-299.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ at()</text>
-<text text-anchor="start" x="644.5" y="-288.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 12 more...</text>
+<polygon fill="#ffffff" stroke="#000000" points="460,-193.5 460,-382.5 582,-382.5 582,-193.5 460,-193.5"/>
+<text text-anchor="start" x="468" y="-370.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Map<</text>
+<text text-anchor="start" x="468" y="-359.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> tvm::tir::IterVar,</text>
+<text text-anchor="middle" x="521" y="-348.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> tvm::te::IterVarAttr ></text>
+<polyline fill="none" stroke="#000000" points="460,-341.5 582,-341.5 "/>
+<text text-anchor="middle" x="521" y="-329.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="460,-322.5 582,-322.5 "/>
+<text text-anchor="start" x="468" y="-310.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
+<text text-anchor="start" x="468" y="-299.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
+<text text-anchor="start" x="468" y="-288.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
+<text text-anchor="start" x="468" y="-277.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
+<text text-anchor="start" x="468" y="-266.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
+<text text-anchor="start" x="468" y="-255.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
+<text text-anchor="start" x="468" y="-244.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
+<text text-anchor="start" x="468" y="-233.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="468" y="-222.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="468" y="-211.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ at()</text>
+<text text-anchor="start" x="468" y="-200.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 12 more...</text>
</a>
</g>
</g>
<!-- Node5->Node9 -->
<g id="edge11" class="edge">
<title>Node5->Node9</title>
-<path fill="none" stroke="#191970" d="M768.167,-980.4567C755.1974,-917.1306 740.2842,-838.169 730.5,-767 716.6522,-666.2734 707.7345,-550.0955 702.7052,-470.7194"/>
-<polygon fill="none" stroke="#191970" points="764.7863,-981.3934 770.2308,-990.4823 771.6426,-979.982 764.7863,-981.3934"/>
+<path fill="none" stroke="#191970" d="M934.8172,-1127.3291C792.3264,-1112.1293 498.1458,-1070.8836 451,-986 419.4274,-929.1552 408.5456,-460.3997 435,-401 439.8653,-390.0757 447.0914,-391.9703 455,-383 456.5937,-381.1923 458.1831,-379.3454 459.765,-377.4668"/>
+<polygon fill="none" stroke="#191970" points="934.498,-1130.8147 944.8089,-1128.3787 935.2294,-1123.853 934.498,-1130.8147"/>
</g>
<!-- Node10 -->
<g id="node9" class="node">
<title>Node10</title>
<g id="a_node9"><a xlink:href="classtvm_1_1runtime_1_1Array.html" target="_top" xlink:title="{tvm::runtime::Array\l\< tvm::tir::IterVar \>\n||+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ operator=()\l+ operator=()\land 25 more...\l}">
-<polygon fill="#ffffff" stroke="#000000" points="738,-588.5 738,-766.5 851,-766.5 851,-588.5 738,-588.5"/>
-<text text-anchor="start" x="746" y="-754.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Array</text>
-<text text-anchor="middle" x="794.5" y="-743.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::tir::IterVar ></text>
-<polyline fill="none" stroke="#000000" points="738,-736.5 851,-736.5 "/>
-<text text-anchor="middle" x="794.5" y="-724.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="738,-717.5 851,-717.5 "/>
-<text text-anchor="start" x="746" y="-705.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="746" y="-694.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="746" y="-683.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="746" y="-672.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="746" y="-661.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="746" y="-650.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="746" y="-639.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="746" y="-628.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="746" y="-617.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="746" y="-606.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="746" y="-595.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 25 more...</text>
+<polygon fill="#ffffff" stroke="#000000" points="436.5,-401.5 436.5,-579.5 549.5,-579.5 549.5,-401.5 436.5,-401.5"/>
+<text text-anchor="start" x="444.5" y="-567.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Array</text>
+<text text-anchor="middle" x="493" y="-556.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::tir::IterVar ></text>
+<polyline fill="none" stroke="#000000" points="436.5,-549.5 549.5,-549.5 "/>
+<text text-anchor="middle" x="493" y="-537.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="436.5,-530.5 549.5,-530.5 "/>
+<text text-anchor="start" x="444.5" y="-518.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="444.5" y="-507.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="444.5" y="-496.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="444.5" y="-485.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="444.5" y="-474.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="444.5" y="-463.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="444.5" y="-452.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="444.5" y="-441.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="444.5" y="-430.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="444.5" y="-419.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="444.5" y="-408.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 25 more...</text>
</a>
</g>
</g>
<!-- Node5->Node10 -->
<g id="edge13" class="edge">
<title>Node5->Node10</title>
-<path fill="none" stroke="#191970" d="M794.5,-980.0194C794.5,-911.9516 794.5,-828.5555 794.5,-766.5758"/>
-<polygon fill="none" stroke="#191970" points="791.0001,-980.376 794.5,-990.376 798.0001,-980.376 791.0001,-980.376"/>
+<path fill="none" stroke="#191970" d="M934.7828,-1127.106C801.7683,-1112.5991 538.3244,-1075.5015 483,-1005 432.9486,-941.218 458.8951,-709.2589 478.2356,-579.6532"/>
+<polygon fill="none" stroke="#191970" points="934.5904,-1130.6054 944.9071,-1128.1941 935.3384,-1123.6455 934.5904,-1130.6054"/>
</g>
<!-- Node11 -->
<g id="node10" class="node">
<title>Node11</title>
<g id="a_node10"><a xlink:href="classtvm_1_1runtime_1_1Array.html" target="_top" xlink:title="{tvm::runtime::Array\l\< tvm::te::IterVarRelation \>\n||+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ operator=()\l+ operator=()\land 25 more...\l}">
-<polygon fill="#ffffff" stroke="#000000" points="891.5,-287 891.5,-465 1043.5,-465 1043.5,-287 891.5,-287"/>
-<text text-anchor="start" x="899.5" y="-453" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Array</text>
-<text text-anchor="middle" x="967.5" y="-442" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::te::IterVarRelation ></text>
-<polyline fill="none" stroke="#000000" points="891.5,-435 1043.5,-435 "/>
-<text text-anchor="middle" x="967.5" y="-423" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="891.5,-416 1043.5,-416 "/>
-<text text-anchor="start" x="899.5" y="-404" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="899.5" y="-393" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="899.5" y="-382" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="899.5" y="-371" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="899.5" y="-360" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="899.5" y="-349" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="899.5" y="-338" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="899.5" y="-327" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
-<text text-anchor="start" x="899.5" y="-316" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="899.5" y="-305" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
-<text text-anchor="start" x="899.5" y="-294" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 25 more...</text>
+<polygon fill="#ffffff" stroke="#000000" points="715,-199 715,-377 867,-377 867,-199 715,-199"/>
+<text text-anchor="start" x="723" y="-365" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Array</text>
+<text text-anchor="middle" x="791" y="-354" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::te::IterVarRelation ></text>
+<polyline fill="none" stroke="#000000" points="715,-347 867,-347 "/>
+<text text-anchor="middle" x="791" y="-335" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="715,-328 867,-328 "/>
+<text text-anchor="start" x="723" y="-316" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="723" y="-305" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="723" y="-294" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="723" y="-283" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="723" y="-272" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="723" y="-261" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="723" y="-250" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="723" y="-239" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="723" y="-228" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="723" y="-217" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="723" y="-206" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 25 more...</text>
</a>
</g>
</g>
<!-- Node5->Node11 -->
<g id="edge15" class="edge">
<title>Node5->Node11</title>
-<path fill="none" stroke="#191970" d="M823.4115,-980.2557C858.0872,-834.8379 915.3889,-594.5355 946.2474,-465.1259"/>
-<polygon fill="none" stroke="#191970" points="819.9727,-979.5876 821.0576,-990.1268 826.7818,-981.2114 819.9727,-979.5876"/>
+<path fill="none" stroke="#191970" d="M934.6301,-1122.7047C808.1798,-1101.6713 566.2363,-1053.48 516,-986 464.5121,-916.8388 471.0877,-671.6014 516,-598 525.1115,-583.0683 537.3867,-590.9843 551,-580 622.7521,-522.1047 688.036,-439.8045 732.339,-377.3264"/>
+<polygon fill="none" stroke="#191970" points="934.2073,-1126.1821 944.6432,-1124.3536 935.3448,-1119.2752 934.2073,-1126.1821"/>
</g>
<!-- Node13 -->
<g id="node12" class="node">
<title>Node13</title>
-<g id="a_node12"><a xlink:href="classtvm_1_1BaseExpr.html" target="_top" xlink:title="Managed reference to BaseExprNode. ">
-<polygon fill="#ffffff" stroke="#000000" points="952.5,-835 952.5,-903 1106.5,-903 1106.5,-835 952.5,-835"/>
-<text text-anchor="middle" x="1029.5" y="-891" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::BaseExpr</text>
-<polyline fill="none" stroke="#000000" points="952.5,-884 1106.5,-884 "/>
-<text text-anchor="middle" x="1029.5" y="-872" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="952.5,-865 1106.5,-865 "/>
-<text text-anchor="start" x="960.5" y="-853" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_OBJECT_REF</text>
-<text text-anchor="start" x="960.5" y="-842" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_METHODS()</text>
+<g id="a_node12"><a xlink:href="classtvm_1_1runtime_1_1Array.html" target="_top" xlink:title="{tvm::runtime::Array\l\< tvm::te::Stage \>\n||+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ operator=()\l+ operator=()\land 25 more...\l}">
+<polygon fill="#ffffff" stroke="#000000" points="1388.5,-703 1388.5,-881 1501.5,-881 1501.5,-703 1388.5,-703"/>
+<text text-anchor="start" x="1396.5" y="-869" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Array</text>
+<text text-anchor="middle" x="1445" y="-858" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::te::Stage ></text>
+<polyline fill="none" stroke="#000000" points="1388.5,-851 1501.5,-851 "/>
+<text text-anchor="middle" x="1445" y="-839" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="1388.5,-832 1501.5,-832 "/>
+<text text-anchor="start" x="1396.5" y="-820" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="1396.5" y="-809" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="1396.5" y="-798" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="1396.5" y="-787" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="1396.5" y="-776" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="1396.5" y="-765" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="1396.5" y="-754" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="1396.5" y="-743" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="1396.5" y="-732" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="1396.5" y="-721" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="1396.5" y="-710" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 25 more...</text>
</a>
</g>
</g>
<!-- Node5->Node13 -->
-<g id="edge18" class="edge">
+<g id="edge19" class="edge">
<title>Node5->Node13</title>
-<path fill="none" stroke="#191970" d="M869.1468,-1031.5547C889.2391,-1012.4209 910.8459,-991.5624 930.5,-972 953.1771,-949.4288 978.1381,-923.4512 997.3232,-903.2315"/>
-<polygon fill="none" stroke="#191970" points="866.6556,-1029.0938 861.8193,-1038.5204 871.4785,-1034.1672 866.6556,-1029.0938"/>
+<path fill="none" stroke="#191970" d="M1088.9235,-1122.7346C1171.4771,-1106.2005 1301.2008,-1068.3297 1379,-986 1406.0004,-957.4273 1421.9885,-917.3761 1431.4406,-881.0789"/>
+<polygon fill="none" stroke="#191970" points="1088.2438,-1119.3011 1079.0964,-1124.6467 1089.5809,-1126.1722 1088.2438,-1119.3011"/>
</g>
<!-- Node14 -->
<g id="node13" class="node">
<title>Node14</title>
-<g id="a_node13"><a xlink:href="classtvm_1_1te_1_1Stage.html" target="_top" xlink:title="Stage, contains scheduling for a stage of computation. ">
-<polygon fill="#ffffff" stroke="#000000" points="1155.5,-785.5 1155.5,-952.5 1265.5,-952.5 1265.5,-785.5 1155.5,-785.5"/>
-<text text-anchor="middle" x="1210.5" y="-940.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::te::Stage</text>
-<polyline fill="none" stroke="#000000" points="1155.5,-933.5 1265.5,-933.5 "/>
-<text text-anchor="middle" x="1210.5" y="-921.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="1155.5,-914.5 1265.5,-914.5 "/>
-<text text-anchor="start" x="1163.5" y="-902.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Stage()</text>
-<text text-anchor="start" x="1163.5" y="-891.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Stage()</text>
-<text text-anchor="start" x="1163.5" y="-880.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Stage()</text>
-<text text-anchor="start" x="1163.5" y="-869.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
-<text text-anchor="start" x="1163.5" y="-858.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
-<text text-anchor="start" x="1163.5" y="-847.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ set_scope()</text>
-<text text-anchor="start" x="1163.5" y="-836.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ compute_at()</text>
-<text text-anchor="start" x="1163.5" y="-825.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ compute_inline()</text>
-<text text-anchor="start" x="1163.5" y="-814.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ compute_root()</text>
-<text text-anchor="start" x="1163.5" y="-803.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ bind()</text>
-<text text-anchor="start" x="1163.5" y="-792.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 21 more...</text>
+<g id="a_node13"><a xlink:href="classtvm_1_1runtime_1_1Map.html" target="_top" xlink:title="{tvm::runtime::Map\<\l tvm::te::Operation,\l tvm::te::Stage \>\n||+ Map()\l+ Map()\l+ Map()\l+ Map()\l+ Map()\l+ Map()\l+ Map()\l+ operator=()\l+ operator=()\l+ at()\land 12 more...\l}">
+<polygon fill="#ffffff" stroke="#000000" points="1519.5,-697.5 1519.5,-886.5 1634.5,-886.5 1634.5,-697.5 1519.5,-697.5"/>
+<text text-anchor="start" x="1527.5" y="-874.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Map<</text>
+<text text-anchor="start" x="1527.5" y="-863.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> tvm::te::Operation,</text>
+<text text-anchor="middle" x="1577" y="-852.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> tvm::te::Stage ></text>
+<polyline fill="none" stroke="#000000" points="1519.5,-845.5 1634.5,-845.5 "/>
+<text text-anchor="middle" x="1577" y="-833.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="1519.5,-826.5 1634.5,-826.5 "/>
+<text text-anchor="start" x="1527.5" y="-814.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
+<text text-anchor="start" x="1527.5" y="-803.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
+<text text-anchor="start" x="1527.5" y="-792.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
+<text text-anchor="start" x="1527.5" y="-781.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
+<text text-anchor="start" x="1527.5" y="-770.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
+<text text-anchor="start" x="1527.5" y="-759.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
+<text text-anchor="start" x="1527.5" y="-748.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Map()</text>
+<text text-anchor="start" x="1527.5" y="-737.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="1527.5" y="-726.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="1527.5" y="-715.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ at()</text>
+<text text-anchor="start" x="1527.5" y="-704.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 12 more...</text>
</a>
</g>
</g>
<!-- Node5->Node14 -->
-<g id="edge20" class="edge">
+<g id="edge21" class="edge">
<title>Node5->Node14</title>
-<path fill="none" stroke="#191970" d="M871.3541,-1073.2969C938.5081,-1046.9729 1037.2269,-1004.1322 1115.5,-953 1129.1393,-944.09 1142.8314,-933.3124 1155.4059,-922.4912"/>
-<polygon fill="none" stroke="#191970" points="869.7319,-1070.1724 861.6813,-1077.06 872.2699,-1076.6961 869.7319,-1070.1724"/>
+<path fill="none" stroke="#191970" d="M1088.8506,-1120.4584C1212.8354,-1096.3898 1449.2693,-1044.1891 1510,-986 1537.2724,-959.869 1553.4468,-922.1168 1563.038,-886.8474"/>
+<polygon fill="none" stroke="#191970" points="1088.1874,-1117.0217 1079.0307,-1122.3514 1089.5125,-1123.8951 1088.1874,-1117.0217"/>
</g>
<!-- Node15 -->
<g id="node14" class="node">
<title>Node15</title>
-<g id="a_node14"><a xlink:href="classtvm_1_1te_1_1Operation.html" target="_top" xlink:title="Operation that produces tensors. ">
-<polygon fill="#ffffff" stroke="#000000" points="1298.5,-824 1298.5,-914 1404.5,-914 1404.5,-824 1298.5,-824"/>
-<text text-anchor="middle" x="1351.5" y="-902" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::te::Operation</text>
-<polyline fill="none" stroke="#000000" points="1298.5,-895 1404.5,-895 "/>
-<text text-anchor="middle" x="1351.5" y="-883" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="1298.5,-876 1404.5,-876 "/>
-<text text-anchor="start" x="1306.5" y="-864" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Operation()</text>
-<text text-anchor="start" x="1306.5" y="-853" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Operation()</text>
-<text text-anchor="start" x="1306.5" y="-842" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
-<text text-anchor="start" x="1306.5" y="-831" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ output()</text>
+<g id="a_node14"><a xlink:href="classtvm_1_1runtime_1_1Array.html" target="_top" xlink:title="{tvm::runtime::Array\l\< tvm::te::Operation \>\n||+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ operator=()\l+ operator=()\land 25 more...\l}">
+<polygon fill="#ffffff" stroke="#000000" points="797,-703 797,-881 925,-881 925,-703 797,-703"/>
+<text text-anchor="start" x="805" y="-869" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Array</text>
+<text text-anchor="middle" x="861" y="-858" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::te::Operation ></text>
+<polyline fill="none" stroke="#000000" points="797,-851 925,-851 "/>
+<text text-anchor="middle" x="861" y="-839" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="797,-832 925,-832 "/>
+<text text-anchor="start" x="805" y="-820" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="805" y="-809" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="805" y="-798" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="805" y="-787" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="805" y="-776" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="805" y="-765" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="805" y="-754" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="805" y="-743" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="805" y="-732" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="805" y="-721" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="805" y="-710" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 25 more...</text>
</a>
</g>
</g>
<!-- Node5->Node15 -->
-<g id="edge22" class="edge">
+<g id="edge23" class="edge">
<title>Node5->Node15</title>
-<path fill="none" stroke="#191970" d="M871.5974,-1090.4801C971.0404,-1073.5194 1146.4233,-1034.0924 1274.5,-953 1290.7132,-942.7345 1305.6947,-928.1885 1318.0502,-914.0795"/>
-<polygon fill="none" stroke="#191970" points="870.9106,-1087.0463 861.6254,-1092.1489 872.066,-1093.9503 870.9106,-1087.0463"/>
+<path fill="none" stroke="#191970" d="M947.2482,-1014.2443C942.6261,-1004.7549 938.1558,-995.2619 934,-986 918.8998,-952.3467 904.1887,-914.5417 891.9759,-881.3046"/>
+<polygon fill="none" stroke="#191970" points="944.1653,-1015.9065 951.7209,-1023.3337 950.446,-1012.8159 944.1653,-1015.9065"/>
+</g>
+<!-- Node16 -->
+<g id="node15" class="node">
+<title>Node16</title>
+<g id="a_node15"><a xlink:href="classtvm_1_1runtime_1_1Array.html" target="_top" xlink:title="{tvm::runtime::Array\l\< tvm::runtime::String \>\n||+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ operator=()\l+ operator=()\land 25 more...\l}">
+<polygon fill="#ffffff" stroke="#000000" points="943.5,-703 943.5,-881 1080.5,-881 1080.5,-703 943.5,-703"/>
+<text text-anchor="start" x="951.5" y="-869" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Array</text>
+<text text-anchor="middle" x="1012" y="-858" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::runtime::String ></text>
+<polyline fill="none" stroke="#000000" points="943.5,-851 1080.5,-851 "/>
+<text text-anchor="middle" x="1012" y="-839" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="943.5,-832 1080.5,-832 "/>
+<text text-anchor="start" x="951.5" y="-820" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="951.5" y="-809" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="951.5" y="-798" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="951.5" y="-787" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="951.5" y="-776" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="951.5" y="-765" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="951.5" y="-754" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="951.5" y="-743" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="951.5" y="-732" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="951.5" y="-721" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="951.5" y="-710" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 25 more...</text>
+</a>
+</g>
+</g>
+<!-- Node5->Node16 -->
+<g id="edge25" class="edge">
+<title>Node5->Node16</title>
+<path fill="none" stroke="#191970" d="M1012,-1013.2298C1012,-969.7434 1012,-921.5445 1012,-881.2656"/>
+<polygon fill="none" stroke="#191970" points="1008.5001,-1013.3 1012,-1023.3001 1015.5001,-1013.3001 1008.5001,-1013.3"/>
+</g>
+<!-- Node17 -->
+<g id="node16" class="node">
+<title>Node17</title>
+<g id="a_node16"><a xlink:href="classtvm_1_1runtime_1_1Array.html" target="_top" xlink:title="{tvm::runtime::Array\l\< tvm::te::Schedule \>\n||+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ Array()\l+ operator=()\l+ operator=()\land 25 more...\l}">
+<polygon fill="#ffffff" stroke="#000000" points="1098.5,-703 1098.5,-881 1223.5,-881 1223.5,-703 1098.5,-703"/>
+<text text-anchor="start" x="1106.5" y="-869" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Array</text>
+<text text-anchor="middle" x="1161" y="-858" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::te::Schedule ></text>
+<polyline fill="none" stroke="#000000" points="1098.5,-851 1223.5,-851 "/>
+<text text-anchor="middle" x="1161" y="-839" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="1098.5,-832 1223.5,-832 "/>
+<text text-anchor="start" x="1106.5" y="-820" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="1106.5" y="-809" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="1106.5" y="-798" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="1106.5" y="-787" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="1106.5" y="-776" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="1106.5" y="-765" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="1106.5" y="-754" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="1106.5" y="-743" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Array()</text>
+<text text-anchor="start" x="1106.5" y="-732" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="1106.5" y="-721" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="1106.5" y="-710" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 25 more...</text>
+</a>
+</g>
+</g>
+<!-- Node5->Node17 -->
+<g id="edge27" class="edge">
+<title>Node5->Node17</title>
+<path fill="none" stroke="#191970" d="M1075.9296,-1014.2324C1080.4906,-1004.7457 1084.901,-995.2566 1089,-986 1103.9097,-952.33 1118.4225,-914.5216 1130.466,-881.2866"/>
+<polygon fill="none" stroke="#191970" points="1072.7367,-1012.7953 1071.5158,-1023.3196 1079.0332,-1015.8537 1072.7367,-1012.7953"/>
+</g>
+<!-- Node18 -->
+<g id="node17" class="node">
+<title>Node18</title>
+<g id="a_node17"><a xlink:href="classtvm_1_1runtime_1_1Optional.html" target="_top" xlink:title="{tvm::runtime::Optional\l\< tvm::Bool \>\n|+ _type_is_nullable\l|+ Optional()\l+ Optional()\l+ Optional()\l+ Optional()\l+ Optional()\l+ Optional()\l+ Optional()\l+ Optional()\l+ operator=()\l+ operator=()\land 15 more...\l}">
+<polygon fill="#ffffff" stroke="#000000" points="1242,-703 1242,-881 1370,-881 1370,-703 1242,-703"/>
+<text text-anchor="start" x="1250" y="-869" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::Optional</text>
+<text text-anchor="middle" x="1306" y="-858" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::Bool ></text>
+<polyline fill="none" stroke="#000000" points="1242,-851 1370,-851 "/>
+<text text-anchor="start" x="1250" y="-839" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
+<polyline fill="none" stroke="#000000" points="1242,-832 1370,-832 "/>
+<text text-anchor="start" x="1250" y="-820" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
+<text text-anchor="start" x="1250" y="-809" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
+<text text-anchor="start" x="1250" y="-798" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
+<text text-anchor="start" x="1250" y="-787" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
+<text text-anchor="start" x="1250" y="-776" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
+<text text-anchor="start" x="1250" y="-765" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
+<text text-anchor="start" x="1250" y="-754" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
+<text text-anchor="start" x="1250" y="-743" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Optional()</text>
+<text text-anchor="start" x="1250" y="-732" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="1250" y="-721" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator=()</text>
+<text text-anchor="start" x="1250" y="-710" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 15 more...</text>
+</a>
+</g>
+</g>
+<!-- Node5->Node18 -->
+<g id="edge29" class="edge">
+<title>Node5->Node18</title>
+<path fill="none" stroke="#191970" d="M1088.0088,-1098.486C1135.0482,-1072.8532 1193.5889,-1034.4109 1232,-986 1256.086,-955.6435 1272.9923,-916.5728 1284.5015,-881.4594"/>
+<polygon fill="none" stroke="#191970" points="1086.2245,-1095.4713 1079.0656,-1103.2816 1089.5326,-1101.6403 1086.2245,-1095.4713"/>
+</g>
+<!-- Node20 -->
+<g id="node19" class="node">
+<title>Node20</title>
+<g id="a_node19"><a xlink:href="classtvm_1_1BaseExpr.html" target="_top" xlink:title="Managed reference to BaseExprNode. ">
+<polygon fill="#ffffff" stroke="#000000" points="1653,-758 1653,-826 1807,-826 1807,-758 1653,-758"/>
+<text text-anchor="middle" x="1730" y="-814" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::BaseExpr</text>
+<polyline fill="none" stroke="#000000" points="1653,-807 1807,-807 "/>
+<text text-anchor="middle" x="1730" y="-795" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="1653,-788 1807,-788 "/>
+<text text-anchor="start" x="1661" y="-776" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_OBJECT_REF</text>
+<text text-anchor="start" x="1661" y="-765" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_METHODS()</text>
+</a>
+</g>
+</g>
+<!-- Node5->Node20 -->
+<g id="edge32" class="edge">
+<title>Node5->Node20</title>
+<path fill="none" stroke="#191970" d="M1089.218,-1125.0276C1238.537,-1105.3465 1559.6005,-1055.9643 1644,-986 1693.3521,-945.0887 1715.3461,-870.1681 1724.3724,-826.3695"/>
+<polygon fill="none" stroke="#191970" points="1088.6641,-1121.5701 1079.2023,-1126.3371 1089.5717,-1128.5111 1088.6641,-1121.5701"/>
+</g>
+<!-- Node21 -->
+<g id="node20" class="node">
+<title>Node21</title>
+<g id="a_node20"><a xlink:href="classtvm_1_1te_1_1Stage.html" target="_top" xlink:title="Stage, contains scheduling for a stage of computation. ">
+<polygon fill="#ffffff" stroke="#000000" points="1719,-407 1719,-574 1829,-574 1829,-407 1719,-407"/>
+<text text-anchor="middle" x="1774" y="-562" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::te::Stage</text>
+<polyline fill="none" stroke="#000000" points="1719,-555 1829,-555 "/>
+<text text-anchor="middle" x="1774" y="-543" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="1719,-536 1829,-536 "/>
+<text text-anchor="start" x="1727" y="-524" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Stage()</text>
+<text text-anchor="start" x="1727" y="-513" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Stage()</text>
+<text text-anchor="start" x="1727" y="-502" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Stage()</text>
+<text text-anchor="start" x="1727" y="-491" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
+<text text-anchor="start" x="1727" y="-480" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
+<text text-anchor="start" x="1727" y="-469" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ set_scope()</text>
+<text text-anchor="start" x="1727" y="-458" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ compute_at()</text>
+<text text-anchor="start" x="1727" y="-447" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ compute_inline()</text>
+<text text-anchor="start" x="1727" y="-436" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ compute_root()</text>
+<text text-anchor="start" x="1727" y="-425" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ bind()</text>
+<text text-anchor="start" x="1727" y="-414" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 21 more...</text>
+</a>
+</g>
+</g>
+<!-- Node5->Node21 -->
+<g id="edge34" class="edge">
+<title>Node5->Node21</title>
+<path fill="none" stroke="#191970" d="M1089.1648,-1132.2841C1272.1227,-1125.2408 1723.8231,-1096.7612 1816,-986 1871.1543,-919.7257 1831.9646,-682.7314 1816,-598 1814.516,-590.1238 1812.4704,-582.0843 1810.0748,-574.125"/>
+<polygon fill="none" stroke="#191970" points="1088.9427,-1128.7898 1079.0805,-1132.661 1089.2042,-1135.7849 1088.9427,-1128.7898"/>
+</g>
+<!-- Node22 -->
+<g id="node21" class="node">
+<title>Node22</title>
+<g id="a_node21"><a xlink:href="classtvm_1_1te_1_1Operation.html" target="_top" xlink:title="Operation that produces tensors. ">
+<polygon fill="#ffffff" stroke="#000000" points="1795,-243 1795,-333 1901,-333 1901,-243 1795,-243"/>
+<text text-anchor="middle" x="1848" y="-321" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::te::Operation</text>
+<polyline fill="none" stroke="#000000" points="1795,-314 1901,-314 "/>
+<text text-anchor="middle" x="1848" y="-302" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="1795,-295 1901,-295 "/>
+<text text-anchor="start" x="1803" y="-283" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Operation()</text>
+<text text-anchor="start" x="1803" y="-272" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ Operation()</text>
+<text text-anchor="start" x="1803" y="-261" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
+<text text-anchor="start" x="1803" y="-250" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ output()</text>
+</a>
+</g>
+</g>
+<!-- Node5->Node22 -->
+<g id="edge36" class="edge">
+<title>Node5->Node22</title>
+<path fill="none" stroke="#191970" d="M1089.2269,-1127.2813C1288.1926,-1107.7985 1805.4971,-1051.1946 1844,-986 1970.7913,-771.312 1896.3593,-450.8442 1862.2537,-333.3073"/>
+<polygon fill="none" stroke="#191970" points="1088.755,-1123.8106 1079.1415,-1128.2637 1089.4338,-1130.7776 1088.755,-1123.8106"/>
</g>
<!-- Node6 -->
<g id="node5" class="node">
<title>Node6</title>
<g id="a_node5"><a xlink:href="classtvm_1_1runtime_1_1ObjectPtr.html" target="_top" xlink:title="{tvm::runtime::ObjectPtr\l\< tvm::runtime::Object \>\n||+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ~ObjectPtr()\l+ swap()\l+ get()\l+ operator-\>()\land 11 more...\l}">
-<polygon fill="#ffffff" stroke="#000000" points="724.5,-1260.5 724.5,-1438.5 864.5,-1438.5 864.5,-1260.5 724.5,-1260.5"/>
-<text text-anchor="start" x="732.5" y="-1426.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectPtr</text>
-<text text-anchor="middle" x="794.5" y="-1415.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::runtime::Object ></text>
-<polyline fill="none" stroke="#000000" points="724.5,-1408.5 864.5,-1408.5 "/>
-<text text-anchor="middle" x="794.5" y="-1396.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="724.5,-1389.5 864.5,-1389.5 "/>
-<text text-anchor="start" x="732.5" y="-1377.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
-<text text-anchor="start" x="732.5" y="-1366.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
-<text text-anchor="start" x="732.5" y="-1355.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
-<text text-anchor="start" x="732.5" y="-1344.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
-<text text-anchor="start" x="732.5" y="-1333.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
-<text text-anchor="start" x="732.5" y="-1322.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
-<text text-anchor="start" x="732.5" y="-1311.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ~ObjectPtr()</text>
-<text text-anchor="start" x="732.5" y="-1300.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ swap()</text>
-<text text-anchor="start" x="732.5" y="-1289.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
-<text text-anchor="start" x="732.5" y="-1278.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
-<text text-anchor="start" x="732.5" y="-1267.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 11 more...</text>
+<polygon fill="#ffffff" stroke="#000000" points="942,-1293.5 942,-1471.5 1082,-1471.5 1082,-1293.5 942,-1293.5"/>
+<text text-anchor="start" x="950" y="-1459.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectPtr</text>
+<text text-anchor="middle" x="1012" y="-1448.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::runtime::Object ></text>
+<polyline fill="none" stroke="#000000" points="942,-1441.5 1082,-1441.5 "/>
+<text text-anchor="middle" x="1012" y="-1429.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="942,-1422.5 1082,-1422.5 "/>
+<text text-anchor="start" x="950" y="-1410.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
+<text text-anchor="start" x="950" y="-1399.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
+<text text-anchor="start" x="950" y="-1388.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
+<text text-anchor="start" x="950" y="-1377.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
+<text text-anchor="start" x="950" y="-1366.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
+<text text-anchor="start" x="950" y="-1355.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
+<text text-anchor="start" x="950" y="-1344.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ~ObjectPtr()</text>
+<text text-anchor="start" x="950" y="-1333.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ swap()</text>
+<text text-anchor="start" x="950" y="-1322.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
+<text text-anchor="start" x="950" y="-1311.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
+<text text-anchor="start" x="950" y="-1300.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 11 more...</text>
</a>
</g>
</g>
<!-- Node6->Node5 -->
<g id="edge5" class="edge">
<title>Node6->Node5</title>
-<path fill="none" stroke="#404040" d="M794.5,-1260.3167C794.5,-1248.8765 794.5,-1237.0062 794.5,-1225.1402"/>
-<polygon fill="none" stroke="#404040" points="794.5001,-1224.7944 790.5,-1218.7944 794.5,-1212.7944 798.5,-1218.7943 794.5001,-1224.7944"/>
-<text text-anchor="middle" x="814" y="-1234" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> #data_</text>
+<path fill="none" stroke="#404040" d="M1012,-1293.3167C1012,-1281.8765 1012,-1270.0062 1012,-1258.1402"/>
+<polygon fill="none" stroke="#404040" points="1012.0001,-1257.7944 1008,-1251.7944 1012,-1245.7944 1016,-1251.7943 1012.0001,-1257.7944"/>
+<text text-anchor="middle" x="1031.5" y="-1267" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> #data_</text>
</g>
<!-- Node7->Node2 -->
<g id="edge6" class="edge">
<title>Node7->Node2</title>
-<path fill="none" stroke="#404040" d="M421.944,-606.4987C415.1951,-594.8645 409.3451,-582.5068 405.5,-570 380.1621,-487.5848 355.131,-251.9802 405.5,-182 432.586,-144.3681 542.7941,-111.9565 630.8838,-91.3198"/>
-<polygon fill="none" stroke="#404040" points="630.9136,-91.313 635.8593,-86.0651 642.6057,-88.612 637.66,-93.8598 630.9136,-91.313"/>
-<text text-anchor="middle" x="450" y="-373.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +axis_separators</text>
+<path fill="none" stroke="#404040" d="M190.6294,-401.2033C178.0476,-336.7341 173.8641,-251.1403 219,-193 273.8589,-122.3353 523.6043,-89.5181 674.3666,-75.9069"/>
+<polygon fill="none" stroke="#404040" points="674.4382,-75.9007 680.0623,-71.3874 686.3915,-74.843 680.7674,-79.3563 674.4382,-75.9007"/>
+<text text-anchor="middle" x="263.5" y="-285.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +axis_separators</text>
</g>
<!-- Node8->Node2 -->
<g id="edge8" class="edge">
<title>Node8->Node2</title>
-<path fill="none" stroke="#404040" d="M562.2187,-608.0598C555.9019,-601.2736 549.5817,-594.4984 543.5,-588 535.9809,-579.9657 530.5654,-580.2254 526.5,-570 494.6452,-489.878 481.6986,-255.6689 526.5,-182 549.9982,-143.3609 591.2182,-117.1781 631.7434,-99.7246"/>
-<polygon fill="none" stroke="#404040" points="631.7644,-99.716 635.7864,-93.7306 642.8582,-95.1412 638.8362,-101.1265 631.7644,-99.716"/>
-<text text-anchor="middle" x="577" y="-373.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +layout_transforms</text>
+<path fill="none" stroke="#404040" d="M318.6956,-401.2119C307.482,-336.7471 304.8456,-251.155 350,-193 389.7939,-141.7488 557.7481,-104.8779 674.4943,-84.8564"/>
+<polygon fill="none" stroke="#404040" points="674.5023,-84.8551 679.751,-79.9103 686.3342,-82.8533 681.0855,-87.7982 674.5023,-84.8551"/>
+<text text-anchor="middle" x="400.5" y="-285.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +layout_transforms</text>
</g>
<!-- Node9->Node2 -->
<g id="edge10" class="edge">
<title>Node9->Node2</title>
-<path fill="none" stroke="#404040" d="M690.8003,-281.4019C690.3189,-241.2422 692.9152,-194.2663 703.5,-153 704.0911,-150.6957 704.7526,-148.383 705.4751,-146.07"/>
-<polygon fill="none" stroke="#404040" points="705.5513,-145.8541 703.7789,-138.8642 709.5495,-134.5398 711.3218,-141.5297 705.5513,-145.8541"/>
-<text text-anchor="middle" x="742" y="-156" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +iter_var_attrs</text>
+<path fill="none" stroke="#404040" d="M510.8723,-193.2026C513.21,-178.6793 518.1145,-164.664 527,-153 545.8218,-128.2927 612.6564,-106.7841 674.3637,-91.5198"/>
+<polygon fill="none" stroke="#404040" points="674.5541,-91.4737 679.4431,-86.173 686.2165,-88.6471 681.3275,-93.9479 674.5541,-91.4737"/>
+<text text-anchor="middle" x="565.5" y="-161.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +iter_var_attrs</text>
</g>
<!-- Node10->Node2 -->
<g id="edge12" class="edge">
<title>Node10->Node2</title>
-<path fill="none" stroke="#404040" d="M795.7772,-588.3176C796.9751,-460.9804 796.6214,-233.5558 780.5,-153 780.0624,-150.8133 779.5738,-148.6098 779.0411,-146.3979"/>
-<polygon fill="none" stroke="#404040" points="778.9639,-146.1141 773.5279,-141.376 775.8109,-134.5358 781.2468,-139.2739 778.9639,-146.1141"/>
-<text text-anchor="middle" x="835" y="-384.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +leaf_iter_vars</text>
-<text text-anchor="middle" x="835" y="-373.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+env_threads</text>
-<text text-anchor="middle" x="835" y="-362.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+all_iter_vars</text>
+<path fill="none" stroke="#404040" d="M549.6362,-404.0073C550.745,-402.9803 551.8663,-401.977 553,-401 567.1563,-388.8003 579.5741,-397.788 591,-383 643.5486,-314.9889 581.8986,-266.1623 627,-193 639.9345,-172.0179 657.7497,-153.2543 676.8913,-137.0307"/>
+<polygon fill="none" stroke="#404040" points="677.0897,-136.8685 679.2022,-129.9738 686.3791,-129.2719 684.2666,-136.1667 677.0897,-136.8685"/>
+<text text-anchor="middle" x="666.5" y="-296.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +leaf_iter_vars</text>
+<text text-anchor="middle" x="666.5" y="-285.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+env_threads</text>
+<text text-anchor="middle" x="666.5" y="-274.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+all_iter_vars</text>
</g>
<!-- Node11->Node2 -->
<g id="edge14" class="edge">
<title>Node11->Node2</title>
-<path fill="none" stroke="#404040" d="M935.2703,-286.7146C920.5301,-252.2115 901.0518,-213.5312 877.5,-182 867.3368,-168.3934 855.3319,-155.2184 842.7923,-142.9467"/>
-<polygon fill="none" stroke="#404040" points="842.7255,-142.883 835.6229,-141.6366 834.0417,-134.601 841.1443,-135.8474 842.7255,-142.883"/>
-<text text-anchor="middle" x="890" y="-156" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +relations</text>
-</g>
-<!-- Node12 -->
-<g id="node11" class="node">
-<title>Node12</title>
-<g id="a_node11"><a xlink:href="classtvm_1_1PrimExpr.html" target="_top" xlink:title="Reference to PrimExprNode. ">
-<polygon fill="#ffffff" stroke="#000000" points="1061.5,-325.5 1061.5,-426.5 1215.5,-426.5 1215.5,-325.5 1061.5,-325.5"/>
-<text text-anchor="middle" x="1138.5" y="-414.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::PrimExpr</text>
-<polyline fill="none" stroke="#000000" points="1061.5,-407.5 1215.5,-407.5 "/>
-<text text-anchor="middle" x="1138.5" y="-395.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="1061.5,-388.5 1215.5,-388.5 "/>
-<text text-anchor="start" x="1069.5" y="-376.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ PrimExpr()</text>
-<text text-anchor="start" x="1069.5" y="-365.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ PrimExpr()</text>
-<text text-anchor="start" x="1069.5" y="-354.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ dtype()</text>
-<text text-anchor="start" x="1069.5" y="-343.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_OBJECT_REF</text>
-<text text-anchor="start" x="1069.5" y="-332.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_METHODS()</text>
-</a>
-</g>
+<path fill="none" stroke="#404040" d="M791,-198.9498C791,-181.7335 791,-163.8005 791,-146.9166"/>
+<polygon fill="none" stroke="#404040" points="791.0001,-146.8331 787,-140.8331 791,-134.8331 795,-140.8331 791.0001,-146.8331"/>
+<text text-anchor="middle" x="818.5" y="-161.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +relations</text>
</g>
<!-- Node12->Node2 -->
<g id="edge16" class="edge">
<title>Node12->Node2</title>
-<path fill="none" stroke="#404040" d="M1127.7341,-325.4162C1116.0618,-282.0196 1093.3655,-220.5562 1052.5,-182 1000.8077,-133.2289 926.3814,-104.7498 863.9912,-88.3925"/>
-<polygon fill="none" stroke="#404040" points="863.7205,-88.324 856.9223,-90.729 852.0876,-85.3787 858.8858,-82.9737 863.7205,-88.324"/>
-<text text-anchor="middle" x="1069.5" y="-156" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +store_predicate</text>
+<path fill="none" stroke="#404040" d="M994.0049,-226.3452C968.0278,-202.785 937.6135,-176.0569 909,-153 904.4969,-149.3714 899.8572,-145.7091 895.1411,-142.0481"/>
+<polygon fill="none" stroke="#404040" points="894.8901,-141.8553 887.6951,-141.3732 885.3728,-134.5462 892.5678,-135.0283 894.8901,-141.8553"/>
+<text text-anchor="middle" x="967" y="-161.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +attach_sch</text>
</g>
<!-- Node13->Node12 -->
-<g id="edge17" class="edge">
+<g id="edge18" class="edge">
<title>Node13->Node12</title>
-<path fill="none" stroke="#191970" d="M1039.2375,-824.9579C1059.4437,-733.5668 1105.8269,-523.7782 1127.2614,-426.8313"/>
-<polygon fill="none" stroke="#191970" points="1035.8196,-824.2048 1037.0781,-834.7246 1042.6545,-825.716 1035.8196,-824.2048"/>
+<path fill="none" stroke="#404040" d="M1461.6711,-702.9481C1472.551,-616.2972 1474.0803,-485.6603 1406,-401 1402.1491,-396.2113 1275.1714,-355.5028 1176.1553,-324.3054"/>
+<polygon fill="none" stroke="#404040" points="1175.9811,-324.2507 1169.0569,-326.2644 1164.5349,-320.6473 1171.4591,-318.6336 1175.9811,-324.2507"/>
+<text text-anchor="middle" x="1488.5" y="-493.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +groups</text>
+<text text-anchor="middle" x="1488.5" y="-482.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+stages</text>
</g>
-<!-- Node14->Node2 -->
-<g id="edge19" class="edge">
-<title>Node14->Node2</title>
-<path fill="none" stroke="#404040" d="M1224.6915,-785.3701C1250.2309,-623.2862 1295.0386,-274.2013 1224.5,-182 1181.2307,-125.4425 990.9431,-93.9164 864.179,-78.8531"/>
-<polygon fill="none" stroke="#404040" points="864.0362,-78.8365 857.6132,-82.1145 852.1169,-77.4464 858.5399,-74.1684 864.0362,-78.8365"/>
-<text text-anchor="middle" x="1297.5" y="-379" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +group</text>
-<text text-anchor="middle" x="1297.5" y="-368" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+attach_stage</text>
+<!-- Node14->Node12 -->
+<g id="edge20" class="edge">
+<title>Node14->Node12</title>
+<path fill="none" stroke="#404040" d="M1557.7761,-697.2444C1550.7148,-660.695 1542.9901,-618.4898 1537,-580 1530.8371,-540.3996 1543.3904,-429.2871 1515,-401 1491.0611,-377.1483 1397.0453,-390.0698 1364,-383 1300.8938,-369.4989 1232.1436,-348.5631 1176.0116,-329.7775"/>
+<polygon fill="none" stroke="#404040" points="1175.9669,-329.7625 1169.0034,-331.636 1164.5958,-325.9287 1171.5593,-324.0552 1175.9669,-329.7625"/>
+<text text-anchor="middle" x="1570" y="-488" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +stage_map</text>
</g>
-<!-- Node15->Node2 -->
-<g id="edge21" class="edge">
-<title>Node15->Node2</title>
-<path fill="none" stroke="#404040" d="M1367.4358,-823.7401C1406.5498,-703.6848 1493.227,-373.2647 1337.5,-182 1279.1703,-110.3592 1019.2374,-83.0436 864.515,-73.0104"/>
-<polygon fill="none" stroke="#404040" points="864.1586,-72.988 857.9187,-76.6024 852.1824,-72.2327 858.4222,-68.6183 864.1586,-72.988"/>
-<text text-anchor="middle" x="1455" y="-379" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +origin_op</text>
-<text text-anchor="middle" x="1455" y="-368" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+op</text>
+<!-- Node15->Node12 -->
+<g id="edge22" class="edge">
+<title>Node15->Node12</title>
+<path fill="none" stroke="#404040" d="M896.0145,-702.9797C909.8539,-665.4431 925.0371,-621.0172 936,-580 956.8884,-501.8471 935.2664,-474.5272 969,-401 975.5705,-386.6787 984.3634,-372.6092 993.9004,-359.5236"/>
+<polygon fill="none" stroke="#404040" points="994.0755,-359.292 994.5043,-352.0936 1001.3139,-349.7208 1000.8851,-356.9192 994.0755,-359.292"/>
+<text text-anchor="middle" x="993.5" y="-488" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +outputs</text>
+</g>
+<!-- Node16->Node12 -->
+<g id="edge24" class="edge">
+<title>Node16->Node12</title>
+<path fill="none" stroke="#404040" d="M1014.7355,-702.7849C1017.9314,-623.3312 1024.5964,-504.0506 1038,-401 1039.6678,-388.1777 1041.8976,-374.6195 1044.299,-361.5161"/>
+<polygon fill="none" stroke="#404040" points="1044.314,-361.4359 1041.4972,-354.7978 1046.5413,-349.6444 1049.3582,-356.2826 1044.314,-361.4359"/>
+<text text-anchor="middle" x="1084" y="-488" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +primitive_record</text>
+</g>
+<!-- Node17->Node12 -->
+<g id="edge26" class="edge">
+<title>Node17->Node12</title>
+<path fill="none" stroke="#404040" d="M1158.7888,-702.6752C1155.5108,-604.9578 1147.7168,-455.0287 1130,-401 1125.4778,-387.2091 1118.9528,-373.3744 1111.6821,-360.3496"/>
+<polygon fill="none" stroke="#404040" points="1111.4699,-359.9855 1104.9926,-356.8161 1105.4268,-349.6181 1111.9042,-352.7874 1111.4699,-359.9855"/>
+<text text-anchor="middle" x="1198.5" y="-488" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +schedule_record</text>
+</g>
+<!-- Node18->Node12 -->
+<g id="edge28" class="edge">
+<title>Node18->Node12</title>
+<path fill="none" stroke="#404040" d="M1296.8074,-702.6672C1285.1101,-595.0802 1264.4503,-425.6527 1248,-401 1230.0071,-374.0355 1203.1931,-352.5092 1175.4139,-335.7858"/>
+<polygon fill="none" stroke="#404040" points="1175.1241,-335.6182 1167.9276,-336.0768 1164.7364,-329.6102 1171.9329,-329.1517 1175.1241,-335.6182"/>
+<text text-anchor="middle" x="1342.5" y="-488" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +keep_schedule_record</text>
+</g>
+<!-- Node19 -->
+<g id="node18" class="node">
+<title>Node19</title>
+<g id="a_node18"><a xlink:href="classtvm_1_1PrimExpr.html" target="_top" xlink:title="Reference to PrimExprNode. ">
+<polygon fill="#ffffff" stroke="#000000" points="1373,-237.5 1373,-338.5 1527,-338.5 1527,-237.5 1373,-237.5"/>
+<text text-anchor="middle" x="1450" y="-326.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::PrimExpr</text>
+<polyline fill="none" stroke="#000000" points="1373,-319.5 1527,-319.5 "/>
+<text text-anchor="middle" x="1450" y="-307.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="1373,-300.5 1527,-300.5 "/>
+<text text-anchor="start" x="1381" y="-288.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ PrimExpr()</text>
+<text text-anchor="start" x="1381" y="-277.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ PrimExpr()</text>
+<text text-anchor="start" x="1381" y="-266.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ dtype()</text>
+<text text-anchor="start" x="1381" y="-255.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_OBJECT_REF</text>
+<text text-anchor="start" x="1381" y="-244.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_METHODS()</text>
+</a>
+</g>
+</g>
+<!-- Node19->Node2 -->
+<g id="edge30" class="edge">
+<title>Node19->Node2</title>
+<path fill="none" stroke="#404040" d="M1372.7787,-261.2165C1317.2652,-242.0354 1240.5951,-215.6938 1173,-193 1083.9321,-163.0971 982.9303,-129.9476 907.1621,-105.2307"/>
+<polygon fill="none" stroke="#404040" points="907.1068,-105.2127 900.1623,-107.1555 895.6981,-101.4924 902.6425,-99.5496 907.1068,-105.2127"/>
+<text text-anchor="middle" x="1155" y="-161.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +store_predicate</text>
+</g>
+<!-- Node20->Node19 -->
+<g id="edge31" class="edge">
+<title>Node20->Node19</title>
+<path fill="none" stroke="#191970" d="M1725.265,-747.6161C1715.1657,-670.5401 1685.6314,-509.7885 1607,-401 1586.147,-372.1494 1555.6456,-347.8171 1527.0611,-329.2077"/>
+<polygon fill="none" stroke="#191970" points="1721.8409,-748.4387 1726.5642,-757.9224 1728.786,-747.5632 1721.8409,-748.4387"/>
+</g>
+<!-- Node21->Node2 -->
+<g id="edge33" class="edge">
+<title>Node21->Node2</title>
+<path fill="none" stroke="#404040" d="M1718.963,-409.1483C1664.2015,-330.696 1581.9008,-219.985 1536,-193 1431.5655,-131.6033 1089.9403,-93.4667 907.8064,-76.9901"/>
+<polygon fill="none" stroke="#404040" points="907.7289,-76.9833 901.3958,-80.4319 895.7767,-75.9124 902.1098,-72.4638 907.7289,-76.9833"/>
+<text text-anchor="middle" x="1735" y="-291" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +group</text>
+<text text-anchor="middle" x="1735" y="-280" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+attach_stage</text>
+</g>
+<!-- Node22->Node2 -->
+<g id="edge35" class="edge">
+<title>Node22->Node2</title>
+<path fill="none" stroke="#404040" d="M1805.7985,-242.8891C1772.8314,-211.1001 1724.0095,-171.0972 1672,-153 1533.8799,-104.9398 1114.1445,-81.0817 907.9,-72.0042"/>
+<polygon fill="none" stroke="#404040" points="907.8214,-72.0009 901.6532,-75.7363 895.8327,-71.4793 902.0009,-67.7439 907.8214,-72.0009"/>
+<text text-anchor="middle" x="1743.5" y="-167" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> +origin_op</text>
+<text text-anchor="middle" x="1743.5" y="-156" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+op</text>
</g>
</g>
</svg>
diff --git a/docs/reference/api/doxygen/classtvm_1_1te_1_1StageNode__inherit__graph.svg b/docs/reference/api/doxygen/classtvm_1_1te_1_1StageNode__inherit__graph.svg
index bc3e09c852..983d5c3fe8 100644
--- a/docs/reference/api/doxygen/classtvm_1_1te_1_1StageNode__inherit__graph.svg
+++ b/docs/reference/api/doxygen/classtvm_1_1te_1_1StageNode__inherit__graph.svg
@@ -25,7 +25,7 @@
<text text-anchor="start" x="8" y="-81.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ iter_var_attrs</text>
<text text-anchor="start" x="8" y="-70.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ attach_type</text>
<text text-anchor="start" x="8" y="-59.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ attach_ivar</text>
-<text text-anchor="start" x="8" y="-48.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 9 more...</text>
+<text text-anchor="start" x="8" y="-48.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 10 more...</text>
<text text-anchor="start" x="8" y="-37.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_key</text>
<polyline fill="none" stroke="#000000" points="0,-30.5 209,-30.5 "/>
<text text-anchor="start" x="8" y="-18.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ VisitAttrs()</text>
diff --git a/docs/reference/api/doxygen/compute__dag_8h_source.html b/docs/reference/api/doxygen/compute__dag_8h_source.html
index 8ab5a2cefc..adb00c9dd4 100644
--- a/docs/reference/api/doxygen/compute__dag_8h_source.html
+++ b/docs/reference/api/doxygen/compute__dag_8h_source.html
@@ -67,7 +67,7 @@ $(function() {
</div><!--header-->
<div class="contents">
<a href="compute__dag_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno"> 1</span> <span class="comment">/*r</span></div><div class="line"><a name="l00002"></a><span class="lineno"> 2</span> <span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno"> 3</span> <span class="comment"> * or mo [...]
-<div class="ttc" id="classtvm_1_1te_1_1Schedule_html"><div class="ttname"><a href="classtvm_1_1te_1_1Schedule.html">tvm::te::Schedule</a></div><div class="ttdoc">Global schedule container For operations and all the operations they depend on. The schedule per Oper...</div><div class="ttdef"><b>Definition:</b> schedule.h:317</div></div>
+<div class="ttc" id="classtvm_1_1te_1_1Schedule_html"><div class="ttname"><a href="classtvm_1_1te_1_1Schedule.html">tvm::te::Schedule</a></div><div class="ttdoc">Global schedule container For operations and all the operations they depend on. The schedule per Oper...</div><div class="ttdef"><b>Definition:</b> schedule.h:318</div></div>
<div class="ttc" id="classtvm_1_1auto__scheduler_1_1AccessAnalyzerNode_html_a7707d940b81b5932c7487fae025be3c8"><div class="ttname"><a href="classtvm_1_1auto__scheduler_1_1AccessAnalyzerNode.html#a7707d940b81b5932c7487fae025be3c8">tvm::auto_scheduler::AccessAnalyzerNode::ops_topo_order</a></div><div class="ttdeci">Array< te::Operation > ops_topo_order</div><div class="ttdoc">Store the topological order of operations. </div><div class="ttdef"><b>Definition:</b> compute_dag.h:79</div></div>
<div class="ttc" id="classtvm_1_1auto__scheduler_1_1ComputeDAGNode_html_a284eaa79b5d1fa15f4ad38bfbff9a41b"><div class="ttname"><a href="classtvm_1_1auto__scheduler_1_1ComputeDAGNode.html#a284eaa79b5d1fa15f4ad38bfbff9a41b">tvm::auto_scheduler::ComputeDAGNode::init_state</a></div><div class="ttdeci">State init_state</div><div class="ttdoc">The initial state without any transform steps. </div><div class="ttdef"><b>Definition:</b> compute_dag.h:181</div></div>
<div class="ttc" id="namespacetvm_1_1auto__scheduler_html_af17f33579675323a67964d6653562b58"><div class="ttname"><a href="namespacetvm_1_1auto__scheduler.html#af17f33579675323a67964d6653562b58">tvm::auto_scheduler::GetShapeFromRewrittenLayout</a></div><div class="ttdeci">Array< PrimExpr > GetShapeFromRewrittenLayout(String rewritten_layout, Array< String > axis_names)</div><div class="ttdoc">Get the orginal shape from a rewritten layout string. </div></div>
diff --git a/docs/reference/api/doxygen/cuda_2dense_8h_source.html b/docs/reference/api/doxygen/cuda_2dense_8h_source.html
index e425078e3e..e59f307997 100644
--- a/docs/reference/api/doxygen/cuda_2dense_8h_source.html
+++ b/docs/reference/api/doxygen/cuda_2dense_8h_source.html
@@ -68,9 +68,9 @@ $(function() {
<div class="contents">
<a href="cuda_2dense_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno"> 1</span> <span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno"> 2</span> <span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno"> 3</span> <span class="comment"> * or more [...]
<div class="ttc" id="generic_2extern_8h_html"><div class="ttname"><a href="generic_2extern_8h.html">extern.h</a></div><div class="ttdoc">Schedule for extern followed by injective ops. </div></div>
-<div class="ttc" id="classtvm_1_1te_1_1Schedule_html"><div class="ttname"><a href="classtvm_1_1te_1_1Schedule.html">tvm::te::Schedule</a></div><div class="ttdoc">Global schedule container For operations and all the operations they depend on. The schedule per Oper...</div><div class="ttdef"><b>Definition:</b> schedule.h:317</div></div>
+<div class="ttc" id="classtvm_1_1te_1_1Schedule_html"><div class="ttname"><a href="classtvm_1_1te_1_1Schedule.html">tvm::te::Schedule</a></div><div class="ttdoc">Global schedule container For operations and all the operations they depend on. The schedule per Oper...</div><div class="ttdef"><b>Definition:</b> schedule.h:318</div></div>
<div class="ttc" id="namespacetvm_1_1topi_1_1rocm_html_abe13cfee88cd67a15c064d16f4af46ad"><div class="ttname"><a href="namespacetvm_1_1topi_1_1rocm.html#abe13cfee88cd67a15c064d16f4af46ad">tvm::topi::rocm::schedule_dense</a></div><div class="ttdeci">Schedule schedule_dense(const Target &target, const Array< Tensor > &outs)</div><div class="ttdoc">Create a rocm schedule for dense. </div><div class="ttdef"><b>Definition:</b> dense.h:88</div></div>
-<div class="ttc" id="namespacetvm_1_1te_html_a485034766309df280239e0994913b34b"><div class="ttname"><a href="namespacetvm_1_1te.html#a485034766309df280239e0994913b34b">tvm::te::create_schedule</a></div><div class="ttdeci">Schedule create_schedule(Array< Operation > ops)</div><div class="ttdoc">Create a schedule for array of ops(and their dependencies). </div><div class="ttdef"><b>Definition:</b> schedule.h:654</div></div>
+<div class="ttc" id="namespacetvm_1_1te_html_a485034766309df280239e0994913b34b"><div class="ttname"><a href="namespacetvm_1_1te.html#a485034766309df280239e0994913b34b">tvm::te::create_schedule</a></div><div class="ttdeci">Schedule create_schedule(Array< Operation > ops)</div><div class="ttdoc">Create a schedule for array of ops(and their dependencies). </div><div class="ttdef"><b>Definition:</b> schedule.h:695</div></div>
<div class="ttc" id="array__utils_8h_html"><div class="ttname"><a href="array__utils_8h.html">array_utils.h</a></div><div class="ttdoc">Utility functions for handling arrays. </div></div>
<div class="ttc" id="namespacetvm_html"><div class="ttname"><a href="namespacetvm.html">tvm</a></div><div class="ttdoc">runtime implementation for LibTorch/TorchScript. </div><div class="ttdef"><b>Definition:</b> analyzer.h:36</div></div>
<div class="ttc" id="namespacetvm_1_1te_html"><div class="ttname"><a href="namespacetvm_1_1te.html">tvm::te</a></div><div class="ttdoc">Tensor expression language DSL. </div><div class="ttdef"><b>Definition:</b> extracted_task.h:33</div></div>
diff --git a/docs/reference/api/doxygen/cuda_2injective_8h_source.html b/docs/reference/api/doxygen/cuda_2injective_8h_source.html
index da673528d6..ab8d5a1e52 100644
--- a/docs/reference/api/doxygen/cuda_2injective_8h_source.html
+++ b/docs/reference/api/doxygen/cuda_2injective_8h_source.html
@@ -67,8 +67,8 @@ $(function() {
</div><!--header-->
<div class="contents">
<a href="cuda_2injective_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno"> 1</span> <span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno"> 2</span> <span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno"> 3</span> <span class="comment"> * or [...]
-<div class="ttc" id="classtvm_1_1te_1_1Schedule_html"><div class="ttname"><a href="classtvm_1_1te_1_1Schedule.html">tvm::te::Schedule</a></div><div class="ttdoc">Global schedule container For operations and all the operations they depend on. The schedule per Oper...</div><div class="ttdef"><b>Definition:</b> schedule.h:317</div></div>
-<div class="ttc" id="namespacetvm_1_1te_html_a485034766309df280239e0994913b34b"><div class="ttname"><a href="namespacetvm_1_1te.html#a485034766309df280239e0994913b34b">tvm::te::create_schedule</a></div><div class="ttdeci">Schedule create_schedule(Array< Operation > ops)</div><div class="ttdoc">Create a schedule for array of ops(and their dependencies). </div><div class="ttdef"><b>Definition:</b> schedule.h:654</div></div>
+<div class="ttc" id="classtvm_1_1te_1_1Schedule_html"><div class="ttname"><a href="classtvm_1_1te_1_1Schedule.html">tvm::te::Schedule</a></div><div class="ttdoc">Global schedule container For operations and all the operations they depend on. The schedule per Oper...</div><div class="ttdef"><b>Definition:</b> schedule.h:318</div></div>
+<div class="ttc" id="namespacetvm_1_1te_html_a485034766309df280239e0994913b34b"><div class="ttname"><a href="namespacetvm_1_1te.html#a485034766309df280239e0994913b34b">tvm::te::create_schedule</a></div><div class="ttdeci">Schedule create_schedule(Array< Operation > ops)</div><div class="ttdoc">Create a schedule for array of ops(and their dependencies). </div><div class="ttdef"><b>Definition:</b> schedule.h:695</div></div>
<div class="ttc" id="namespacetvm_html"><div class="ttname"><a href="namespacetvm.html">tvm</a></div><div class="ttdoc">runtime implementation for LibTorch/TorchScript. </div><div class="ttdef"><b>Definition:</b> analyzer.h:36</div></div>
<div class="ttc" id="namespacetvm_1_1te_html"><div class="ttname"><a href="namespacetvm_1_1te.html">tvm::te</a></div><div class="ttdoc">Tensor expression language DSL. </div><div class="ttdef"><b>Definition:</b> extracted_task.h:33</div></div>
<div class="ttc" id="classtvm_1_1Target_html_abed5e5cfb5d36e70ea5eaadef9fb63b2"><div class="ttname"><a href="classtvm_1_1Target.html#abed5e5cfb5d36e70ea5eaadef9fb63b2">tvm::Target::Current</a></div><div class="ttdeci">static tvm::Target Current(bool allow_not_defined=true)</div><div class="ttdoc">Get the current target context from thread local storage. </div></div>
@@ -85,7 +85,7 @@ $(function() {
<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
<div class="ttc" id="classtvm_1_1te_1_1Tensor_html"><div class="ttname"><a href="classtvm_1_1te_1_1Tensor.html">tvm::te::Tensor</a></div><div class="ttdoc">Tensor structure representing a possible input, or intermediate computation result. </div><div class="ttdef"><b>Definition:</b> tensor.h:102</div></div>
<div class="ttc" id="operation_8h_html"><div class="ttname"><a href="operation_8h.html">operation.h</a></div><div class="ttdoc">Operation node can generate one or multiple Tensors. </div></div>
-<div class="ttc" id="classtvm_1_1te_1_1Fuse_html"><div class="ttname"><a href="classtvm_1_1te_1_1Fuse.html">tvm::te::Fuse</a></div><div class="ttdoc">Managed reference to FuseNode. </div><div class="ttdef"><b>Definition:</b> schedule.h:774</div></div>
+<div class="ttc" id="classtvm_1_1te_1_1Fuse_html"><div class="ttname"><a href="classtvm_1_1te_1_1Fuse.html">tvm::te::Fuse</a></div><div class="ttdoc">Managed reference to FuseNode. </div><div class="ttdef"><b>Definition:</b> schedule.h:815</div></div>
<div class="ttc" id="tags_8h_html"><div class="ttname"><a href="tags_8h.html">tags.h</a></div><div class="ttdoc">External function interface to rocBLAS libraries. </div></div>
<div class="ttc" id="namespacetvm_1_1te_html_a9872626811f38606b4e934faa13b5b9f"><div class="ttname"><a href="namespacetvm_1_1te.html#a9872626811f38606b4e934faa13b5b9f">tvm::te::AutoInlineInjective</a></div><div class="ttdeci">void AutoInlineInjective(Schedule sch)</div><div class="ttdoc">To automatically inline operations with injective writes (i.e. writes without reduction or sequential...</div></div>
<div class="ttc" id="classtvm_1_1runtime_1_1ObjectRef_html_a2d76fa1fb628ff276a284e61123589c5"><div class="ttname"><a href="classtvm_1_1runtime_1_1ObjectRef.html#a2d76fa1fb628ff276a284e61123589c5">tvm::runtime::ObjectRef::as</a></div><div class="ttdeci">const ObjectType * as() const</div><div class="ttdoc">Try to downcast the internal Object to a raw pointer of a corresponding type. </div><div class="ttdef"><b>Definition:</b> object.h:865</div></div>
diff --git a/docs/reference/api/doxygen/cuda_2pooling_8h_source.html b/docs/reference/api/doxygen/cuda_2pooling_8h_source.html
index 892ef1a10a..8b95b3ae4c 100644
--- a/docs/reference/api/doxygen/cuda_2pooling_8h_source.html
+++ b/docs/reference/api/doxygen/cuda_2pooling_8h_source.html
@@ -67,8 +67,8 @@ $(function() {
</div><!--header-->
<div class="contents">
<a href="cuda_2pooling_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno"> 1</span> <span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno"> 2</span> <span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno"> 3</span> <span class="comment"> * or mo [...]
-<div class="ttc" id="classtvm_1_1te_1_1Schedule_html"><div class="ttname"><a href="classtvm_1_1te_1_1Schedule.html">tvm::te::Schedule</a></div><div class="ttdoc">Global schedule container For operations and all the operations they depend on. The schedule per Oper...</div><div class="ttdef"><b>Definition:</b> schedule.h:317</div></div>
-<div class="ttc" id="namespacetvm_1_1te_html_a485034766309df280239e0994913b34b"><div class="ttname"><a href="namespacetvm_1_1te.html#a485034766309df280239e0994913b34b">tvm::te::create_schedule</a></div><div class="ttdeci">Schedule create_schedule(Array< Operation > ops)</div><div class="ttdoc">Create a schedule for array of ops(and their dependencies). </div><div class="ttdef"><b>Definition:</b> schedule.h:654</div></div>
+<div class="ttc" id="classtvm_1_1te_1_1Schedule_html"><div class="ttname"><a href="classtvm_1_1te_1_1Schedule.html">tvm::te::Schedule</a></div><div class="ttdoc">Global schedule container For operations and all the operations they depend on. The schedule per Oper...</div><div class="ttdef"><b>Definition:</b> schedule.h:318</div></div>
+<div class="ttc" id="namespacetvm_1_1te_html_a485034766309df280239e0994913b34b"><div class="ttname"><a href="namespacetvm_1_1te.html#a485034766309df280239e0994913b34b">tvm::te::create_schedule</a></div><div class="ttdeci">Schedule create_schedule(Array< Operation > ops)</div><div class="ttdoc">Create a schedule for array of ops(and their dependencies). </div><div class="ttdef"><b>Definition:</b> schedule.h:695</div></div>
<div class="ttc" id="array__utils_8h_html"><div class="ttname"><a href="array__utils_8h.html">array_utils.h</a></div><div class="ttdoc">Utility functions for handling arrays. </div></div>
<div class="ttc" id="namespacetvm_html"><div class="ttname"><a href="namespacetvm.html">tvm</a></div><div class="ttdoc">runtime implementation for LibTorch/TorchScript. </div><div class="ttdef"><b>Definition:</b> analyzer.h:36</div></div>
<div class="ttc" id="namespacetvm_1_1te_html"><div class="ttname"><a href="namespacetvm_1_1te.html">tvm::te</a></div><div class="ttdoc">Tensor expression language DSL. </div><div class="ttdef"><b>Definition:</b> extracted_task.h:33</div></div>
@@ -87,7 +87,7 @@ $(function() {
<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
<div class="ttc" id="classtvm_1_1te_1_1Tensor_html"><div class="ttname"><a href="classtvm_1_1te_1_1Tensor.html">tvm::te::Tensor</a></div><div class="ttdoc">Tensor structure representing a possible input, or intermediate computation result. </div><div class="ttdef"><b>Definition:</b> tensor.h:102</div></div>
<div class="ttc" id="operation_8h_html"><div class="ttname"><a href="operation_8h.html">operation.h</a></div><div class="ttdoc">Operation node can generate one or multiple Tensors. </div></div>
-<div class="ttc" id="classtvm_1_1te_1_1Fuse_html"><div class="ttname"><a href="classtvm_1_1te_1_1Fuse.html">tvm::te::Fuse</a></div><div class="ttdoc">Managed reference to FuseNode. </div><div class="ttdef"><b>Definition:</b> schedule.h:774</div></div>
+<div class="ttc" id="classtvm_1_1te_1_1Fuse_html"><div class="ttname"><a href="classtvm_1_1te_1_1Fuse.html">tvm::te::Fuse</a></div><div class="ttdoc">Managed reference to FuseNode. </div><div class="ttdef"><b>Definition:</b> schedule.h:815</div></div>
<div class="ttc" id="namespacetvm_1_1topi_html_ad5dcb2721aae4c9b84552b85db6e6cae"><div class="ttname"><a href="namespacetvm_1_1topi.html#ad5dcb2721aae4c9b84552b85db6e6cae">tvm::topi::is_broadcast</a></div><div class="ttdeci">bool is_broadcast(std::string tag)</div><div class="ttdef"><b>Definition:</b> tags.h:47</div></div>
<div class="ttc" id="tags_8h_html"><div class="ttname"><a href="tags_8h.html">tags.h</a></div><div class="ttdoc">External function interface to rocBLAS libraries. </div></div>
<div class="ttc" id="classtvm_1_1runtime_1_1ObjectRef_html_a2d76fa1fb628ff276a284e61123589c5"><div class="ttname"><a href="classtvm_1_1runtime_1_1ObjectRef.html#a2d76fa1fb628ff276a284e61123589c5">tvm::runtime::ObjectRef::as</a></div><div class="ttdeci">const ObjectType * as() const</div><div class="ttdoc">Try to downcast the internal Object to a raw pointer of a corresponding type. </div><div class="ttdef"><b>Definition:</b> object.h:865</div></div>
diff --git a/docs/reference/api/doxygen/cuda_2reduction_8h_source.html b/docs/reference/api/doxygen/cuda_2reduction_8h_source.html
index 54aa57eeef..5e0a8db1e7 100644
--- a/docs/reference/api/doxygen/cuda_2reduction_8h_source.html
+++ b/docs/reference/api/doxygen/cuda_2reduction_8h_source.html
@@ -69,10 +69,10 @@ $(function() {
<a href="cuda_2reduction_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno"> 1</span> <span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno"> 2</span> <span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno"> 3</span> <span class="comment"> * or [...]
<div class="ttc" id="namespacetvm_1_1topi_1_1cuda_html_a9d51320c5b9bd9147018689b1b5f1279"><div class="ttname"><a href="namespacetvm_1_1topi_1_1cuda.html#a9d51320c5b9bd9147018689b1b5f1279">tvm::topi::cuda::TraverseBeforeReduce</a></div><div class="ttdeci">void TraverseBeforeReduce(Schedule s, Operation op)</div><div class="ttdoc">Recursively traverse operator inputs, setting injective inputs to be computed inline. </div><div class="ttdef"><b>Definition:</b> reduction.h:138</div></div>
<div class="ttc" id="classtvm_1_1te_1_1Operation_html_a00b67945c799a2022d3164ab63dd3b82"><div class="ttname"><a href="classtvm_1_1te_1_1Operation.html#a00b67945c799a2022d3164ab63dd3b82">tvm::te::Operation::output</a></div><div class="ttdeci">Tensor output(size_t i) const</div><div class="ttdoc">get the i-th output of the operation. </div></div>
-<div class="ttc" id="classtvm_1_1te_1_1Schedule_html"><div class="ttname"><a href="classtvm_1_1te_1_1Schedule.html">tvm::te::Schedule</a></div><div class="ttdoc">Global schedule container For operations and all the operations they depend on. The schedule per Oper...</div><div class="ttdef"><b>Definition:</b> schedule.h:317</div></div>
+<div class="ttc" id="classtvm_1_1te_1_1Schedule_html"><div class="ttname"><a href="classtvm_1_1te_1_1Schedule.html">tvm::te::Schedule</a></div><div class="ttdoc">Global schedule container For operations and all the operations they depend on. The schedule per Oper...</div><div class="ttdef"><b>Definition:</b> schedule.h:318</div></div>
<div class="ttc" id="namespacetvm_1_1topi_html_a938350880b154670bea57cd9ce69d490"><div class="ttname"><a href="namespacetvm_1_1topi.html#a938350880b154670bea57cd9ce69d490">tvm::topi::kCommReduceIdx</a></div><div class="ttdeci">constexpr auto kCommReduceIdx</div><div class="ttdef"><b>Definition:</b> tags.h:35</div></div>
<div class="ttc" id="namespacetvm_1_1topi_1_1cuda_html_a3dbbf8bdb78533c15e62ab0e874eb360"><div class="ttname"><a href="namespacetvm_1_1topi_1_1cuda.html#a3dbbf8bdb78533c15e62ab0e874eb360">tvm::topi::cuda::ScheduleReduce</a></div><div class="ttdeci">Schedule ScheduleReduce(const Target &target, Operation op, Schedule sch, bool is_idx_reduce=false)</div><div class="ttdoc">Schedule a given reduce operation. </div><div class="ttdef"><b>Definition:</b> reduction.h:50</div></div>
-<div class="ttc" id="namespacetvm_1_1te_html_a485034766309df280239e0994913b34b"><div class="ttname"><a href="namespacetvm_1_1te.html#a485034766309df280239e0994913b34b">tvm::te::create_schedule</a></div><div class="ttdeci">Schedule create_schedule(Array< Operation > ops)</div><div class="ttdoc">Create a schedule for array of ops(and their dependencies). </div><div class="ttdef"><b>Definition:</b> schedule.h:654</div></div>
+<div class="ttc" id="namespacetvm_1_1te_html_a485034766309df280239e0994913b34b"><div class="ttname"><a href="namespacetvm_1_1te.html#a485034766309df280239e0994913b34b">tvm::te::create_schedule</a></div><div class="ttdeci">Schedule create_schedule(Array< Operation > ops)</div><div class="ttdoc">Create a schedule for array of ops(and their dependencies). </div><div class="ttdef"><b>Definition:</b> schedule.h:695</div></div>
<div class="ttc" id="classtvm_1_1te_1_1OperationNode_html_ae6ac4336e7dc2df84f128fc97a6cdb9b"><div class="ttname"><a href="classtvm_1_1te_1_1OperationNode.html#ae6ac4336e7dc2df84f128fc97a6cdb9b">tvm::te::OperationNode::tag</a></div><div class="ttdeci">std::string tag</div><div class="ttdoc">optional tag of the operation </div><div class="ttdef"><b>Definition:</b> operation.h:61</div></div>
<div class="ttc" id="namespacetvm_html"><div class="ttname"><a href="namespacetvm.html">tvm</a></div><div class="ttdoc">runtime implementation for LibTorch/TorchScript. </div><div class="ttdef"><b>Definition:</b> analyzer.h:36</div></div>
<div class="ttc" id="namespacetvm_1_1te_html"><div class="ttname"><a href="namespacetvm_1_1te.html">tvm::te</a></div><div class="ttdoc">Tensor expression language DSL. </div><div class="ttdef"><b>Definition:</b> extracted_task.h:33</div></div>
@@ -96,7 +96,7 @@ $(function() {
<div class="ttc" id="classtvm_1_1Target_html"><div class="ttname"><a href="classtvm_1_1Target.html">tvm::Target</a></div><div class="ttdoc">Managed reference class to TargetNode. </div><div class="ttdef"><b>Definition:</b> target.h:183</div></div>
<div class="ttc" id="classtvm_1_1te_1_1Tensor_html"><div class="ttname"><a href="classtvm_1_1te_1_1Tensor.html">tvm::te::Tensor</a></div><div class="ttdoc">Tensor structure representing a possible input, or intermediate computation result. </div><div class="ttdef"><b>Definition:</b> tensor.h:102</div></div>
<div class="ttc" id="operation_8h_html"><div class="ttname"><a href="operation_8h.html">operation.h</a></div><div class="ttdoc">Operation node can generate one or multiple Tensors. </div></div>
-<div class="ttc" id="classtvm_1_1te_1_1Fuse_html"><div class="ttname"><a href="classtvm_1_1te_1_1Fuse.html">tvm::te::Fuse</a></div><div class="ttdoc">Managed reference to FuseNode. </div><div class="ttdef"><b>Definition:</b> schedule.h:774</div></div>
+<div class="ttc" id="classtvm_1_1te_1_1Fuse_html"><div class="ttname"><a href="classtvm_1_1te_1_1Fuse.html">tvm::te::Fuse</a></div><div class="ttdoc">Managed reference to FuseNode. </div><div class="ttdef"><b>Definition:</b> schedule.h:815</div></div>
<div class="ttc" id="classtvm_1_1te_1_1Schedule_html_a34ae85add41bbed0140726d024d08862"><div class="ttname"><a href="classtvm_1_1te_1_1Schedule.html#a34ae85add41bbed0140726d024d08862">tvm::te::Schedule::rfactor</a></div><div class="ttdeci">Array< Tensor > rfactor(const Tensor &tensor, const IterVar &axis, int factor_axis=0)</div><div class="ttdoc">Factor a reduction axis in tensor&#39;s schedule to be an explicit axis. This will create a new stage tha...</div></div>
<div class="ttc" id="namespacetvm_1_1topi_html_ad5dcb2721aae4c9b84552b85db6e6cae"><div class="ttname"><a href="namespacetvm_1_1topi.html#ad5dcb2721aae4c9b84552b85db6e6cae">tvm::topi::is_broadcast</a></div><div class="ttdeci">bool is_broadcast(std::string tag)</div><div class="ttdef"><b>Definition:</b> tags.h:47</div></div>
<div class="ttc" id="tags_8h_html"><div class="ttname"><a href="tags_8h.html">tags.h</a></div><div class="ttdoc">External function interface to rocBLAS libraries. </div></div>
diff --git a/docs/reference/api/doxygen/cuda_2softmax_8h_source.html b/docs/reference/api/doxygen/cuda_2softmax_8h_source.html
index f4693efdb7..5bcbc4c5b8 100644
--- a/docs/reference/api/doxygen/cuda_2softmax_8h_source.html
+++ b/docs/reference/api/doxygen/cuda_2softmax_8h_source.html
@@ -67,8 +67,8 @@ $(function() {
</div><!--header-->
<div class="contents">
<a href="cuda_2softmax_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno"> 1</span> <span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno"> 2</span> <span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno"> 3</span> <span class="comment"> * or mo [...]
-<div class="ttc" id="classtvm_1_1te_1_1Schedule_html"><div class="ttname"><a href="classtvm_1_1te_1_1Schedule.html">tvm::te::Schedule</a></div><div class="ttdoc">Global schedule container For operations and all the operations they depend on. The schedule per Oper...</div><div class="ttdef"><b>Definition:</b> schedule.h:317</div></div>
-<div class="ttc" id="namespacetvm_1_1te_html_a485034766309df280239e0994913b34b"><div class="ttname"><a href="namespacetvm_1_1te.html#a485034766309df280239e0994913b34b">tvm::te::create_schedule</a></div><div class="ttdeci">Schedule create_schedule(Array< Operation > ops)</div><div class="ttdoc">Create a schedule for array of ops(and their dependencies). </div><div class="ttdef"><b>Definition:</b> schedule.h:654</div></div>
+<div class="ttc" id="classtvm_1_1te_1_1Schedule_html"><div class="ttname"><a href="classtvm_1_1te_1_1Schedule.html">tvm::te::Schedule</a></div><div class="ttdoc">Global schedule container For operations and all the operations they depend on. The schedule per Oper...</div><div class="ttdef"><b>Definition:</b> schedule.h:318</div></div>
+<div class="ttc" id="namespacetvm_1_1te_html_a485034766309df280239e0994913b34b"><div class="ttname"><a href="namespacetvm_1_1te.html#a485034766309df280239e0994913b34b">tvm::te::create_schedule</a></div><div class="ttdeci">Schedule create_schedule(Array< Operation > ops)</div><div class="ttdoc">Create a schedule for array of ops(and their dependencies). </div><div class="ttdef"><b>Definition:</b> schedule.h:695</div></div>
<div class="ttc" id="namespacetvm_html"><div class="ttname"><a href="namespacetvm.html">tvm</a></div><div class="ttdoc">runtime implementation for LibTorch/TorchScript. </div><div class="ttdef"><b>Definition:</b> analyzer.h:36</div></div>
<div class="ttc" id="namespacetvm_1_1te_html"><div class="ttname"><a href="namespacetvm_1_1te.html">tvm::te</a></div><div class="ttdoc">Tensor expression language DSL. </div><div class="ttdef"><b>Definition:</b> extracted_task.h:33</div></div>
<div class="ttc" id="classtvm_1_1tir_1_1IterVar_html"><div class="ttname"><a href="classtvm_1_1tir_1_1IterVar.html">tvm::tir::IterVar</a></div><div class="ttdoc">Iteration Variable, represents an iteration over an integer interval. </div><div class="ttdef"><b>Definition:</b> var.h:308</div></div>
diff --git a/docs/reference/api/doxygen/functions_a.html b/docs/reference/api/doxygen/functions_a.html
index f681a975ed..b037416993 100644
--- a/docs/reference/api/doxygen/functions_a.html
+++ b/docs/reference/api/doxygen/functions_a.html
@@ -478,6 +478,9 @@ $(function() {
<li>attach_map
: <a class="el" href="classtvm_1_1auto__scheduler_1_1StateNode.html#afeeed629e92cb00c36c4da224ffa5022">tvm::auto_scheduler::StateNode</a>
</li>
+<li>attach_sch
+: <a class="el" href="classtvm_1_1te_1_1StageNode.html#a0627160a0f180921c11b3ffcda1ab2c8">tvm::te::StageNode</a>
+</li>
<li>attach_stage
: <a class="el" href="classtvm_1_1te_1_1StageNode.html#a75c8cf7d913a913e34abcaf6797540a5">tvm::te::StageNode</a>
</li>
diff --git a/docs/reference/api/doxygen/functions_func_s.html b/docs/reference/api/doxygen/functions_func_s.html
index 5d244e21d7..2f2922190e 100644
--- a/docs/reference/api/doxygen/functions_func_s.html
+++ b/docs/reference/api/doxygen/functions_func_s.html
@@ -775,7 +775,7 @@ $(function() {
: <a class="el" href="classtvm_1_1runtime_1_1DeviceAPI.html#ac29b9295c432a87658392872c644864f">tvm::runtime::DeviceAPI</a>
</li>
<li>String()
-: <a class="el" href="classtvm_1_1runtime_1_1String.html#a68df7bab89fca339e3918438dd80300d">tvm::runtime::String</a>
+: <a class="el" href="classtvm_1_1runtime_1_1String.html#a02fca36e3ff55cc1e83635b02a11fca3">tvm::runtime::String</a>
</li>
<li>StringImm()
: <a class="el" href="classtvm_1_1tir_1_1StringImm.html#a0f2830290e055f677c5d5dea98aab726">tvm::tir::StringImm</a>
diff --git a/docs/reference/api/doxygen/functions_func_t.html b/docs/reference/api/doxygen/functions_func_t.html
index c5d13e46f0..8b0c45ee46 100644
--- a/docs/reference/api/doxygen/functions_func_t.html
+++ b/docs/reference/api/doxygen/functions_func_t.html
@@ -1249,7 +1249,7 @@ $(function() {
: <a class="el" href="classtvm_1_1TypedEnvFunc_3_01R_07Args_8_8_8_08_4.html#a0d72a6fa7263821c14bcd37837998ed9">tvm::TypedEnvFunc< R(Args...)></a>
</li>
<li>TypedPackedFunc()
-: <a class="el" href="classtvm_1_1runtime_1_1TypedPackedFunc_3_01R_07Args_8_8_8_08_4.html#afd8ee9dd9648c19b468bb4b0b00e8e4e">tvm::runtime::TypedPackedFunc< R(Args...)></a>
+: <a class="el" href="classtvm_1_1runtime_1_1TypedPackedFunc_3_01R_07Args_8_8_8_08_4.html#a36ca0d1876544463ee848766e70e5e96">tvm::runtime::TypedPackedFunc< R(Args...)></a>
</li>
<li>TypeIndex2Key()
: <a class="el" href="classtvm_1_1runtime_1_1Object.html#a817ba6c23b7ee1821c48a75edf255a30">tvm::runtime::Object</a>
@@ -1272,7 +1272,7 @@ $(function() {
: <a class="el" href="classtvm_1_1TypeRelation.html#ac26b1897eab8197ed26606ab81b7403b">tvm::TypeRelation</a>
</li>
<li>TypeReporter()
-: <a class="el" href="classtvm_1_1TypeReporter.html#aa3dc38a3c84d324d0b3a9f358460a091">tvm::TypeReporter</a>
+: <a class="el" href="classtvm_1_1TypeReporter.html#a8e7e05a07f9f7ad9bea91f27afac9051">tvm::TypeReporter</a>
</li>
<li>TypeVar()
: <a class="el" href="classtvm_1_1TypeVar.html#adf5ef8e89d162735519b5d125c89e3e3">tvm::TypeVar</a>
diff --git a/docs/reference/api/doxygen/functions_func_u.html b/docs/reference/api/doxygen/functions_func_u.html
index 4b4e0f203d..611cae9ff1 100644
--- a/docs/reference/api/doxygen/functions_func_u.html
+++ b/docs/reference/api/doxygen/functions_func_u.html
@@ -106,7 +106,7 @@ $(function() {
, <a class="el" href="classtvm_1_1auto__scheduler_1_1CostModelNode.html#ae35b2b678760b8da57a43d3ae9c24da5">tvm::auto_scheduler::CostModelNode</a>
, <a class="el" href="classtvm_1_1auto__scheduler_1_1PythonBasedModelNode.html#a2d7849df6c7dbe93bf363c1d9f860a26">tvm::auto_scheduler::PythonBasedModelNode</a>
, <a class="el" href="classtvm_1_1auto__scheduler_1_1RandomModelNode.html#a7febac6c05d8e2d407f466467769ee32">tvm::auto_scheduler::RandomModelNode</a>
-, <a class="el" href="classtvm_1_1IRModuleNode.html#a94a93385e64ce844299729af6a573015">tvm::IRModuleNode</a>
+, <a class="el" href="classtvm_1_1IRModuleNode.html#abdd8936c6fca33ef9b7c086f8fd58f84">tvm::IRModuleNode</a>
, <a class="el" href="classtvm_1_1meta__schedule_1_1CostModelNode.html#a1bba32eba84db583fe90d1a5bce085f1">tvm::meta_schedule::CostModelNode</a>
, <a class="el" href="classtvm_1_1meta__schedule_1_1PyCostModelNode.html#a970b00b0eb1bf6b88eea2711b58c4d1d">tvm::meta_schedule::PyCostModelNode</a>
</li>
diff --git a/docs/reference/api/doxygen/functions_k.html b/docs/reference/api/doxygen/functions_k.html
index b87ac95440..8e6afae395 100644
--- a/docs/reference/api/doxygen/functions_k.html
+++ b/docs/reference/api/doxygen/functions_k.html
@@ -91,6 +91,9 @@ $(function() {
<li>kDynamic
: <a class="el" href="structtvm_1_1runtime_1_1TypeIndex.html#aed93c7318efc8052201d4c404b21a40da83fed6b80a5bcb3247430922fd85ea47">tvm::runtime::TypeIndex</a>
</li>
+<li>keep_schedule_record
+: <a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#ab27491f6d746b79bf94d9736566224c6">tvm::te::ScheduleNode</a>
+</li>
<li>keepdims
: <a class="el" href="structtvm_1_1relay_1_1ArgReduceAttrs.html#a69a41c9cc211fe0a503ac89485517f35">tvm::relay::ArgReduceAttrs</a>
, <a class="el" href="structtvm_1_1relay_1_1ReduceAttrs.html#afa8f7f2b60bcb5c44f6cd3338d80143a">tvm::relay::ReduceAttrs</a>
diff --git a/docs/reference/api/doxygen/functions_o.html b/docs/reference/api/doxygen/functions_o.html
index bcbf01e28f..3ea8a620de 100644
--- a/docs/reference/api/doxygen/functions_o.html
+++ b/docs/reference/api/doxygen/functions_o.html
@@ -75,7 +75,7 @@ $(function() {
, <a class="el" href="classtvm_1_1runtime_1_1ObjectPtr.html#a8fae619f3bd1a2b2f7273d8d6525032a">tvm::runtime::ObjectPtr< T ></a>
</li>
<li>Object()
-: <a class="el" href="classtvm_1_1runtime_1_1Object.html#ab7968feb6ad38ecaffc320e13819d826">tvm::runtime::Object</a>
+: <a class="el" href="classtvm_1_1runtime_1_1Object.html#a133436a9ec5c4a768b94102bf95a660b">tvm::runtime::Object</a>
, <a class="el" href="classtvm_1_1runtime_1_1ObjectPtr.html#a0720b5f434e636e22a3ed34f847eec57">tvm::runtime::ObjectPtr< T ></a>
</li>
<li>object
diff --git a/docs/reference/api/doxygen/functions_p.html b/docs/reference/api/doxygen/functions_p.html
index 665a8f6a45..88ac5927b0 100644
--- a/docs/reference/api/doxygen/functions_p.html
+++ b/docs/reference/api/doxygen/functions_p.html
@@ -399,7 +399,7 @@ $(function() {
: <a class="el" href="classtvm_1_1te_1_1IterVarAttrNode.html#a2a4a8e201e6caefeecffd4a7647866fd">tvm::te::IterVarAttrNode</a>
</li>
<li>PrefetchNode()
-: <a class="el" href="classtvm_1_1tir_1_1PrefetchNode.html#a73ef244c364b9c7efaee36e6bec746e7">tvm::tir::PrefetchNode</a>
+: <a class="el" href="classtvm_1_1tir_1_1PrefetchNode.html#acaaa5e89462c7edf3019df4283ec74db">tvm::tir::PrefetchNode</a>
</li>
<li>prefix_
: <a class="el" href="classtvm_1_1NameSupplyNode.html#aa14405ac3611e27389632477779fb6ad">tvm::NameSupplyNode</a>
@@ -427,6 +427,9 @@ $(function() {
<li>primitive_map
: <a class="el" href="classtvm_1_1runtime_1_1vm_1_1Executable.html#ab5a31e8670a4f20564abc48610a90e8c">tvm::runtime::vm::Executable</a>
</li>
+<li>primitive_record
+: <a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#aeddb87ac8fb45a6059e8ebb9659003f2">tvm::te::ScheduleNode</a>
+</li>
<li>primitive_targets
: <a class="el" href="classtvm_1_1CompilationConfigNode.html#aaf237580f1684eaf97e1852c6b69ecbd">tvm::CompilationConfigNode</a>
</li>
diff --git a/docs/reference/api/doxygen/functions_rela.html b/docs/reference/api/doxygen/functions_rela.html
index ced91ab6ca..ae4f7e87d2 100644
--- a/docs/reference/api/doxygen/functions_rela.html
+++ b/docs/reference/api/doxygen/functions_rela.html
@@ -399,6 +399,9 @@ $(function() {
<li>With< PassContext >
: <a class="el" href="classtvm_1_1transform_1_1PassContext.html#a5f399608a6da56a5c91ea6ead8489f69">tvm::transform::PassContext</a>
</li>
+<li>With< ScheduleContext >
+: <a class="el" href="classtvm_1_1te_1_1ScheduleContext.html#a10080b05885425a75e7f7281d3defb68">tvm::te::ScheduleContext</a>
+</li>
<li>With< SpecializedCondition >
: <a class="el" href="classtvm_1_1te_1_1SpecializedCondition.html#ae2aff9f2ce7debae1cb1648450f6b3fe">tvm::te::SpecializedCondition</a>
</li>
diff --git a/docs/reference/api/doxygen/functions_s.html b/docs/reference/api/doxygen/functions_s.html
index 1649079561..05ccfffa8e 100644
--- a/docs/reference/api/doxygen/functions_s.html
+++ b/docs/reference/api/doxygen/functions_s.html
@@ -149,6 +149,9 @@ $(function() {
, <a class="el" href="classtvm_1_1te_1_1Schedule.html#a1eb19ccb06835a11edc39ed1410f01af">tvm::te::Schedule</a>
, <a class="el" href="classtvm_1_1tir_1_1ScheduleNode.html#aae5808dc2e987bf17ef42196457a654d">tvm::tir::ScheduleNode</a>
</li>
+<li>schedule_record
+: <a class="el" href="classtvm_1_1te_1_1ScheduleNode.html#a52983b1afd658ec3b885b3b076c6203d">tvm::te::ScheduleNode</a>
+</li>
<li>ScheduleFn()
: <a class="el" href="classtvm_1_1meta__schedule_1_1SpaceGenerator.html#a4a7bf04c99138534f38508157baf602c">tvm::meta_schedule::SpaceGenerator</a>
</li>
@@ -805,7 +808,7 @@ $(function() {
: <a class="el" href="structtvm_1_1relay_1_1MultiBoxPriorAttrs.html#ad6d089344fa741021584222ffa70a451">tvm::relay::MultiBoxPriorAttrs</a>
</li>
<li>SizeVar()
-: <a class="el" href="classtvm_1_1tir_1_1SizeVar.html#ac470249315d9e395ad581d35dd5dcb05">tvm::tir::SizeVar</a>
+: <a class="el" href="classtvm_1_1tir_1_1SizeVar.html#ab089bab85206d8e306cc61e879e525be">tvm::tir::SizeVar</a>
</li>
<li>Slice()
: <a class="el" href="classtvm_1_1te_1_1Tensor_1_1Slice.html#ab314819e8bcca6421e9a4f33e48578c3">tvm::te::Tensor::Slice</a>
@@ -851,7 +854,7 @@ $(function() {
: <a class="el" href="classtvm_1_1script_1_1printer_1_1DocNode.html#a29e21c8f39639d1d30697971267847a8">tvm::script::printer::DocNode</a>
</li>
<li>SourceMap()
-: <a class="el" href="classtvm_1_1SourceMap.html#a9f10049893326844c3f01daad7c121e9">tvm::SourceMap</a>
+: <a class="el" href="classtvm_1_1SourceMap.html#ad4517cedaea581d34c28cb9903205eeb">tvm::SourceMap</a>
</li>
<li>space_generator
: <a class="el" href="classtvm_1_1meta__schedule_1_1TuneContextNode.html#a7bdfdd48530bfe380c5f6c143158a07f">tvm::meta_schedule::TuneContextNode</a>
@@ -873,7 +876,7 @@ $(function() {
</li>
<li>Span()
... 18610 lines suppressed ...