Posted to commits@tvm.apache.org by tq...@apache.org on 2022/05/21 04:58:16 UTC
[tvm-site] branch asf-site updated: deploying docs (apache/tvm@d0999bbd3b40b9466cc3b5c01f2b4b7fb09b478d)
This is an automated email from the ASF dual-hosted git repository.
tqchen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/tvm-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 8e2bcee3a deploying docs (apache/tvm@d0999bbd3b40b9466cc3b5c01f2b4b7fb09b478d)
8e2bcee3a is described below
commit 8e2bcee3a4450c7fc9c40246767eac9678082367
Author: tvm-bot <95...@users.noreply.github.com>
AuthorDate: Sat May 21 04:58:10 2022 +0000
deploying docs (apache/tvm@d0999bbd3b40b9466cc3b5c01f2b4b7fb09b478d)
---
.../how_to/compile_models/from_mxnet.rst.txt | 2 +-
.../how_to/compile_models/from_oneflow.rst.txt | 2 +-
.../how_to/compile_models/from_paddle.rst.txt | 2 +-
.../how_to/compile_models/from_pytorch.rst.txt | 2 +-
.../how_to/compile_models/from_tensorflow.rst.txt | 2 +-
.../compile_models/sg_execution_times.rst.txt | 22 +-
.../deploy_models/deploy_model_on_android.rst.txt | 2 +-
.../deploy_object_detection_pytorch.rst.txt | 4 +-
.../deploy_models/deploy_prequantized.rst.txt | 6 +-
.../deploy_prequantized_tflite.rst.txt | 4 +-
.../how_to/deploy_models/deploy_quantized.rst.txt | 2 +-
.../deploy_models/deploy_ssd_gluoncv.rst.txt | 4 +-
.../deploy_models/sg_execution_times.rst.txt | 18 +-
.../extend_tvm/bring_your_own_datatypes.rst.txt | 2 +-
.../how_to/extend_tvm/sg_execution_times.rst.txt | 10 +-
.../how_to/extend_tvm/use_pass_instrument.rst.txt | 16 +-
.../optimize_operators/opt_conv_cuda.rst.txt | 2 +-
.../optimize_operators/opt_conv_tensorcore.rst.txt | 2 +-
.../how_to/optimize_operators/opt_gemm.rst.txt | 16 +-
.../optimize_operators/sg_execution_times.rst.txt | 8 +-
.../sg_execution_times.rst.txt | 16 +-
.../tune_conv2d_layer_cuda.rst.txt | 1334 +++++++-------------
.../tune_network_cuda.rst.txt | 2 +-
.../tune_network_x86.rst.txt | 4 +-
.../tune_sparse_x86.rst.txt | 84 +-
.../tune_with_autotvm/sg_execution_times.rst.txt | 12 +-
.../tune_with_autotvm/tune_conv2d_cuda.rst.txt | 34 +-
.../work_with_microtvm/micro_autotune.rst.txt | 16 +-
.../work_with_microtvm/sg_execution_times.rst.txt | 12 +-
.../work_with_relay/sg_execution_times.rst.txt | 8 +-
.../work_with_schedules/sg_execution_times.rst.txt | 18 +-
.../how_to/work_with_schedules/tensorize.rst.txt | 2 +-
.../tutorials/autotvm/sg_execution_times.rst.txt | 6 +-
.../frontend/deploy_classification.rst.txt | 2 +-
.../tutorials/frontend/deploy_detection.rst.txt | 2 +-
.../tutorials/frontend/sg_execution_times.rst.txt | 6 +-
.../tutorials/optimize/sg_execution_times.rst.txt | 6 +-
.../topic/vta/tutorials/sg_execution_times.rst.txt | 6 +-
.../tutorial/auto_scheduler_matmul_x86.rst.txt | 9 +-
docs/_sources/tutorial/autotvm_relay_x86.rst.txt | 56 +-
.../tutorial/cross_compilation_and_rpc.rst.txt | 2 +-
docs/_sources/tutorial/intro_topi.rst.txt | 2 +-
docs/_sources/tutorial/sg_execution_times.rst.txt | 26 +-
.../tutorial/tensor_expr_get_started.rst.txt | 46 +-
docs/commit_hash | 2 +-
docs/how_to/compile_models/from_mxnet.html | 2 +-
docs/how_to/compile_models/from_oneflow.html | 80 +-
docs/how_to/compile_models/from_paddle.html | 2 +-
docs/how_to/compile_models/from_pytorch.html | 37 +-
docs/how_to/compile_models/from_tensorflow.html | 2 +-
docs/how_to/compile_models/sg_execution_times.html | 22 +-
.../deploy_models/deploy_model_on_android.html | 2 +-
.../deploy_object_detection_pytorch.html | 123 +-
docs/how_to/deploy_models/deploy_prequantized.html | 14 +-
.../deploy_models/deploy_prequantized_tflite.html | 4 +-
docs/how_to/deploy_models/deploy_quantized.html | 2 +-
docs/how_to/deploy_models/deploy_ssd_gluoncv.html | 41 +-
docs/how_to/deploy_models/sg_execution_times.html | 18 +-
.../extend_tvm/bring_your_own_datatypes.html | 2 +-
docs/how_to/extend_tvm/sg_execution_times.html | 10 +-
docs/how_to/extend_tvm/use_pass_instrument.html | 16 +-
docs/how_to/optimize_operators/opt_conv_cuda.html | 2 +-
.../optimize_operators/opt_conv_tensorcore.html | 2 +-
docs/how_to/optimize_operators/opt_gemm.html | 16 +-
.../optimize_operators/sg_execution_times.html | 8 +-
.../sg_execution_times.html | 14 +-
.../tune_conv2d_layer_cuda.html | 1334 +++++++-------------
.../tune_with_autoscheduler/tune_network_cuda.html | 2 +-
.../tune_with_autoscheduler/tune_network_x86.html | 4 +-
.../tune_with_autoscheduler/tune_sparse_x86.html | 84 +-
.../tune_with_autotvm/sg_execution_times.html | 12 +-
.../how_to/tune_with_autotvm/tune_conv2d_cuda.html | 34 +-
docs/how_to/work_with_microtvm/micro_autotune.html | 16 +-
.../work_with_microtvm/sg_execution_times.html | 12 +-
.../how_to/work_with_relay/sg_execution_times.html | 8 +-
.../work_with_schedules/sg_execution_times.html | 18 +-
docs/how_to/work_with_schedules/tensorize.html | 2 +-
...sstvm_1_1meta__schedule_1_1Mutator-members.html | 27 +-
.../classtvm_1_1meta__schedule_1_1Mutator.html | 36 +-
...m_1_1meta__schedule_1_1Mutator__coll__graph.svg | 107 +-
..._1meta__schedule_1_1Mutator__inherit__graph.svg | 79 +-
...stvm_1_1meta__schedule_1_1Postproc-members.html | 2 +-
.../classtvm_1_1meta__schedule_1_1Postproc.html | 14 +-
..._1_1meta__schedule_1_1ScheduleRule-members.html | 51 +-
...classtvm_1_1meta__schedule_1_1ScheduleRule.html | 53 +-
...meta__schedule_1_1ScheduleRule__coll__graph.svg | 113 +-
...a__schedule_1_1ScheduleRule__inherit__graph.svg | 85 +-
docs/reference/api/doxygen/functions_a.html | 5 +-
docs/reference/api/doxygen/functions_func_a.html | 11 +-
docs/reference/api/doxygen/functions_func_m.html | 5 +-
docs/reference/api/doxygen/functions_func_r.html | 2 +-
docs/reference/api/doxygen/functions_func_s.html | 2 +-
docs/reference/api/doxygen/functions_func_v.html | 2 +-
docs/reference/api/doxygen/functions_m.html | 5 +-
docs/reference/api/doxygen/functions_r.html | 2 +-
docs/reference/api/doxygen/functions_s.html | 6 +-
docs/reference/api/doxygen/functions_t.html | 4 +-
docs/reference/api/doxygen/functions_v.html | 10 +-
docs/reference/api/doxygen/ir_2attrs_8h.html | 6 +-
.../reference/api/doxygen/ir_2attrs_8h_source.html | 2 +-
docs/reference/api/doxygen/mutator_8h_source.html | 2 +-
docs/reference/api/doxygen/postproc_8h_source.html | 2 +-
.../api/doxygen/schedule__rule_8h_source.html | 2 +-
docs/reference/api/doxygen/search/all_11.js | 4 +-
docs/reference/api/doxygen/search/all_13.js | 12 +-
docs/reference/api/doxygen/search/all_14.js | 10 +-
docs/reference/api/doxygen/search/all_15.js | 6 +-
docs/reference/api/doxygen/search/all_16.js | 6 +-
docs/reference/api/doxygen/search/all_17.js | 2 +-
docs/reference/api/doxygen/search/all_18.js | 2 +-
docs/reference/api/doxygen/search/all_2.js | 1 +
docs/reference/api/doxygen/search/all_e.js | 5 +-
docs/reference/api/doxygen/search/functions_1.js | 1 +
docs/reference/api/doxygen/search/functions_10.js | 4 +-
docs/reference/api/doxygen/search/functions_12.js | 8 +-
docs/reference/api/doxygen/search/functions_13.js | 4 +-
docs/reference/api/doxygen/search/functions_14.js | 2 +-
docs/reference/api/doxygen/search/functions_15.js | 4 +-
docs/reference/api/doxygen/search/functions_16.js | 2 +-
docs/reference/api/doxygen/search/functions_d.js | 5 +-
docs/reference/api/python/auto_scheduler.html | 4 +-
.../api/typedoc/classes/bytestreamreader.html | 12 +-
.../api/typedoc/classes/cachedcallstack.html | 34 +-
docs/reference/api/typedoc/classes/dldatatype.html | 12 +-
docs/reference/api/typedoc/classes/dldevice.html | 10 +-
.../reference/api/typedoc/classes/environment.html | 12 +-
docs/reference/api/typedoc/classes/ffilibrary.html | 20 +-
.../api/typedoc/classes/graphexecutor.html | 16 +-
docs/reference/api/typedoc/classes/instance.html | 40 +-
docs/reference/api/typedoc/classes/memory.html | 34 +-
docs/reference/api/typedoc/classes/module.html | 10 +-
docs/reference/api/typedoc/classes/ndarray.html | 22 +-
.../api/typedoc/classes/packedfunccell.html | 6 +-
docs/reference/api/typedoc/classes/rpcserver.html | 14 +-
docs/reference/api/typedoc/classes/scalar.html | 6 +-
.../api/typedoc/classes/webgpucontext.html | 12 +-
docs/reference/api/typedoc/enums/argtypecode.html | 30 +-
.../api/typedoc/enums/aynccallbackcode.html | 4 +-
.../api/typedoc/enums/dldatatypecode.html | 8 +-
.../api/typedoc/enums/rpcserverstate.html | 12 +-
docs/reference/api/typedoc/enums/sizeof.html | 18 +-
docs/reference/api/typedoc/index.html | 112 +-
.../api/typedoc/interfaces/disposable.html | 2 +-
.../api/typedoc/interfaces/functioninfo.html | 6 +-
.../api/typedoc/interfaces/libraryprovider.html | 4 +-
docs/searchindex.js | 2 +-
.../vta/tutorials/autotvm/sg_execution_times.html | 6 +-
.../tutorials/frontend/deploy_classification.html | 2 +-
.../vta/tutorials/frontend/deploy_detection.html | 2 +-
.../vta/tutorials/frontend/sg_execution_times.html | 6 +-
.../vta/tutorials/optimize/sg_execution_times.html | 6 +-
docs/topic/vta/tutorials/sg_execution_times.html | 6 +-
docs/tutorial/auto_scheduler_matmul_x86.html | 5 +-
docs/tutorial/autotvm_relay_x86.html | 258 ++--
docs/tutorial/cross_compilation_and_rpc.html | 2 +-
docs/tutorial/intro_topi.html | 2 +-
docs/tutorial/sg_execution_times.html | 26 +-
docs/tutorial/tensor_expr_get_started.html | 46 +-
158 files changed, 2254 insertions(+), 3074 deletions(-)
diff --git a/docs/_sources/how_to/compile_models/from_mxnet.rst.txt b/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
index e6351db34..a9bf50dcf 100644
--- a/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_mxnet.rst.txt
@@ -98,7 +98,7 @@ In this section, we download a pretrained imagenet model and classify an image.
.. code-block:: none
- Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip51e829dd-1d99-4b44-9836-8d4391dba25d from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
+ Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip25d94653-141a-4692-983a-bb4507787197 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
x (1, 3, 224, 224)
diff --git a/docs/_sources/how_to/compile_models/from_oneflow.rst.txt b/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
index b3c16665c..e180ac5ca 100644
--- a/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_oneflow.rst.txt
@@ -100,7 +100,7 @@ Load a pretrained OneFlow model and save model
.. code-block:: none
Downloading: "https://oneflow-public.oss-cn-beijing.aliyuncs.com/model_zoo/flowvision/classification/ResNet/resnet18.zip" to /workspace/.oneflow/flowvision_cache/resnet18.zip
-
0%| | 0.00/41.5M [00:00<?, ?B/s]
0%| | 16.0k/41.5M [00:00<08:46, 82.6kB/s]
0%| | 48.0k/41.5M [00:00<05:32, 131kB/s]
0%| | 96.0k/41.5M [00:00<03:56, 184kB/s]
0%| | 160k/41.5M [00:00<02:59, 241kB/s]
1%| | 288k/41.5M [00:00<01:50, 391kB/s]
1%|1 | 552k/41.5M [00:01<00:59, 719kB/s]
3%|2 | 1.05M/41.5M [00:01<00:31, 1.36MB/s]
5%|4 | 2.06M/41.5M [00:01<00:15, 2.63MB/s]
9%|8 | 3.54M/41.5M [00:01<00:09, 4.23MB/s]
12%|#2 | 5.00M/41.5M [00:01<00:07, 5.30MB/s]
16%|#5 | 6.48M/41.5M [00:02<00:06, 6.05MB/s]
19%|#9 | 7.95M/41.5M [00:02<00:05, 6.57MB/s]
23%|##2 | 9.43M/41.5M [00:02<00:04, 6.93MB/s]
26%|##6 | 10.9M/41.5M [00:02<00:04, 7.17MB/s]
30%|##9 | 12.4M/41.5M [00:02<00:04, 7.35MB/s]
33%|###3 | 13.9M/41.5M [00:03<00:03, 7.48MB/s]
37%|###6 | 15.3M/41.5M [00:03<00:03, 7.55MB/s]
41%|#### | 16.8M/41.5M [00:03<00:03, 7.61MB/s]
44%|####4 | 18.3M/41.5M [00:03<00:03, 7.65MB/s]
48%|####7 | 19.7M/41.5M [00:03<00:02, 8.98MB/s]
50%|####9 | 20.7M/41.5M [00:03<00:02, 9.07MB/s]
52%|#####2 | 21.6M/41.5M [00:04<00:02, 7.63MB/s]
55%|#####4 | 22.7M/41.5M [00:04<00:02, 7.01MB/s]
58%|#####8 | 24.1M/41.5M [00:04<00:02, 8.59MB/s]
60%|###### | 25.1M/41.5M [00:04<00:01, 8.76MB/s]
63%|######2 | 26.0M/41.5M [00:04<00:02, 7.33MB/s]
65%|######5 | 27.1M/41.5M [00:04<00:02, 6.81MB/s]
69%|######8 | 28.6M/41.5M [00:05<00:01, 8.51MB/s]
71%|#######1 | 29.5M/41.5M [00:05<00:01, 8.69MB/s]
73%|#######3 | 30.4M/41.5M [00:05<00:01, 7.27MB/s]
76%|#######5 | 31.5M/41.5M [00:05<00:01, 6.77MB/s]
79%|#######9 | 33.0M/41.5M [00:05<00:01, 8.49MB/s]
82%|########1 | 33.9M/41.5M [00:05<00:00, 8.67MB/s]
84%|########3 | 34.8M/41.5M [00:05<00:00, 7.25MB/s]
87%|########6 | 35.9M/41.5M [00:06<00:00, 6.77MB/s]
90%|######### | 37.4M/41.5M [00:06<00:00, 7.10MB/s]
94%|#########3| 38.9M/41.5M [00:06<00:00, 7.32MB/s]
97%|#########7| 40.3M/41.5M [00:06<00:00, 7.43MB/s]
100%|##########| 41.5M/41.5M [00:06<00:00, 6.37MB/s]
+
0%| | 0.00/41.5M [00:00<?, ?B/s]
0%| | 16.0k/41.5M [00:00<08:03, 90.0kB/s]
0%| | 40.0k/41.5M [00:00<06:14, 116kB/s]
0%| | 88.0k/41.5M [00:00<03:53, 186kB/s]
0%| | 144k/41.5M [00:00<03:03, 236kB/s]
1%| | 280k/41.5M [00:00<01:41, 425kB/s]
1%|1 | 488k/41.5M [00:01<01:03, 677kB/s]
2%|2 | 920k/41.5M [00:01<00:34, 1.25MB/s]
4%|4 | 1.75M/41.5M [00:01<00:17, 2.39MB/s]
8%|7 | 3.22M/41.5M [00:01<00:09, 4.29MB/s]
11%|#1 | 4.69M/41.5M [00:01<00:06, 5.57MB/s]
15%|#4 | 6.16M/41.5M [00:02<00:05, 6.44MB/s]
18%|#8 | 7.62M/41.5M [00:02<00:05, 7.04MB/s]
22%|##1 | 9.09M/41.5M [00:02<00:04, 7.46MB/s]
25%|##5 | 10.6M/41.5M [00:02<00:04, 7.74MB/s]
29%|##8 | 12.0M/41.5M [00:02<00:03, 7.95MB/s]
33%|###2 | 13.5M/41.5M [00:02<00:03, 8.09MB/s]
36%|###6 | 15.0M/41.5M [00:03<00:03, 8.18MB/s]
40%|###9 | 16.4M/41.5M [00:03<00:03, 8.24MB/s]
43%|####3 | 17.9M/41.5M [00:03<00:02, 8.29MB/s]
47%|####6 | 19.4M/41.5M [00:03<00:02, 8.33MB/s]
50%|##### | 20.8M/41.5M [00:03<00:02, 8.49MB/s]
54%|#####3 | 22.3M/41.5M [00:03<00:02, 9.62MB/s]
56%|#####6 | 23.2M/41.5M [00:04<00:02, 9.47MB/s]
58%|#####8 | 24.2M/41.5M [00:04<00:02, 8.40MB/s]
61%|###### | 25.2M/41.5M [00:04<00:02, 8.26MB/s]
64%|######4 | 26.6M/41.5M [00:04<00:01, 9.78MB/s]
67%|######6 | 27.6M/41.5M [00:04<00:01, 9.10MB/s]
69%|######8 | 28.5M/41.5M [00:04<00:01, 7.83MB/s]
71%|#######1 | 29.6M/41.5M [00:04<00:01, 8.58MB/s]
74%|#######3 | 30.6M/41.5M [00:04<00:01, 8.79MB/s]
76%|#######5 | 31.5M/41.5M [00:05<00:01, 7.77MB/s]
78%|#######8 | 32.5M/41.5M [00:05<00:01, 8.61MB/s]
81%|######## | 33.5M/41.5M [00:05<00:00, 8.81MB/s]
83%|########2 | 34.4M/41.5M [00:05<00:00, 7.75MB/s]
86%|########5 | 35.5M/41.5M [00:05<00:00, 8.57MB/s]
88%|########7 | 36.4M/41.5M [00:05<00:00, 8.81MB/s]
90%|########9 | 37.3M/41.5M [00:05<00:00, 7.72MB/s]
93%|#########2| 38.4M/41.5M [00:06<00:00, 7.57MB/s]
96%|#########6| 39.9M/41.5M [00:06<00:00, 9.32MB/s]
98%|#########8| 40.8M/41.5M [00:06<00:00, 8.87MB/s]
100%|##########| 41.5M/41.5M [00:06<00:00, 6.79MB/s]
diff --git a/docs/_sources/how_to/compile_models/from_paddle.rst.txt b/docs/_sources/how_to/compile_models/from_paddle.rst.txt
index d9702342c..aae3386cd 100644
--- a/docs/_sources/how_to/compile_models/from_paddle.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_paddle.rst.txt
@@ -201,7 +201,7 @@ Look up prediction top 1 index in 1000 class synset.
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 7.692 seconds)
+ **Total running time of the script:** ( 1 minutes 4.138 seconds)
.. _sphx_glr_download_how_to_compile_models_from_paddle.py:
diff --git a/docs/_sources/how_to/compile_models/from_pytorch.rst.txt b/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
index 6c0705b66..c9d074ff5 100644
--- a/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_pytorch.rst.txt
@@ -79,7 +79,7 @@ Load a pretrained PyTorch model
.. code-block:: none
Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
-
0%| | 0.00/44.7M [00:00<?, ?B/s]
12%|#1 | 5.20M/44.7M [00:00<00:00, 53.9MB/s]
31%|###1 | 13.9M/44.7M [00:00<00:00, 75.8MB/s]
47%|####7 | 21.1M/44.7M [00:00<00:00, 73.2MB/s]
71%|####### | 31.5M/44.7M [00:00<00:00, 86.2MB/s]
93%|#########2| 41.5M/44.7M [00:00<00:00, 92.8MB/s]
100%|##########| 44.7M/44.7M [00:00<00:00, 87.6MB/s]
+
0%| | 0.00/44.7M [00:00<?, ?B/s]
3%|3 | 1.50M/44.7M [00:00<00:02, 15.4MB/s]
7%|7 | 3.19M/44.7M [00:00<00:02, 16.6MB/s]
11%|# | 4.78M/44.7M [00:00<00:02, 16.1MB/s]
14%|#4 | 6.38M/44.7M [00:00<00:02, 16.3MB/s]
18%|#8 | 8.12M/44.7M [00:00<00:02, 17.0MB/s]
22%|##1 | 9.75M/44.7M [00:00<00:02, 16.7MB/s]
26%|##5 | 11.6M/44.7M [00:00<00:02, 17.3MB/s]
30%|##9 | 13.2M/44.7M [00:00<00:02, 16.3MB/s]
33%|###3 | 14.8M/44.7M [00:00<00:01, 16.0MB/s]
37%|###6 | 16.5M/44.7M [00:01<00:01, 16.1MB/s]
40%|#### | 18.0M/44.7M [00:01<00:01, 15.4MB/s]
44%|####4 | 19.7M/44.7M [00:01<00:01, 15.9MB/s]
48%|####7 | 21.2M/44.7M [00:01<00:01, 15.1MB/s]
51%|##### | 22.7M/44.7M [00:01<00:01, 14.0MB/s]
54%|#####3 | 24.0M/44.7M [00:01<00:01, 14.0MB/s]
57%|#####6 | 25.4M/44.7M [00:01<00:01, 14.0MB/s]
60%|#####9 | 26.7M/44.7M [00:01<00:01, 13.9MB/s]
63%|######2 | 28.1M/44.7M [00:01<00:01, 13.4MB/s]
67%|######7 | 30.0M/44.7M [00:02<00:01, 15.1MB/s]
70%|####### | 31.4M/44.7M [00:02<00:00, 14.8MB/s]
74%|#######3 | 32.9M/44.7M [00:02<00:00, 13.5MB/s]
77%|#######6 | 34.2M/44.7M [00:02<00:00, 12.7MB/s]
79%|#######9 | 35.4M/44.7M [00:02<00:00, 10.4MB/s]
82%|########1 | 36.5M/44.7M [00:02<00:00, 10.0MB/s]
84%|########3 | 37.5M/44.7M [00:02<00:00, 9.47MB/s]
87%|########6 | 38.7M/44.7M [00:02<00:00, 10.3MB/s]
90%|########9 | 40.2M/44.7M [00:03<00:00, 11.4MB/s]
92%|#########2| 41.3M/44.7M [00:03<00:00, 11.0MB/s]
95%|#########4| 42.4M/44.7M [00:03<00:00, 11.0MB/s]
98%|#########8| 43.8M/44.7M [00:03<00:00, 11.9MB/s]
100%|##########| 44.7M/44.7M [00:03<00:00, 13.6MB/s]
diff --git a/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt b/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
index ca09f049a..49fa5a4e0 100644
--- a/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
+++ b/docs/_sources/how_to/compile_models/from_tensorflow.rst.txt
@@ -372,7 +372,7 @@ Run the corresponding model on tensorflow
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 6.408 seconds)
+ **Total running time of the script:** ( 1 minutes 4.194 seconds)
.. _sphx_glr_download_how_to_compile_models_from_tensorflow.py:
diff --git a/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt b/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
index 2dc788395..73d545abf 100644
--- a/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/compile_models/sg_execution_times.rst.txt
@@ -5,15 +5,15 @@
Computation times
=================
-**05:30.548** total execution time for **how_to_compile_models** files:
+**05:20.193** total execution time for **how_to_compile_models** files:
-- **01:07.692**: :ref:`sphx_glr_how_to_compile_models_from_paddle.py` (``from_paddle.py``)
-- **01:06.408**: :ref:`sphx_glr_how_to_compile_models_from_tensorflow.py` (``from_tensorflow.py``)
-- **00:57.800**: :ref:`sphx_glr_how_to_compile_models_from_darknet.py` (``from_darknet.py``)
-- **00:32.301**: :ref:`sphx_glr_how_to_compile_models_from_oneflow.py` (``from_oneflow.py``)
-- **00:25.078**: :ref:`sphx_glr_how_to_compile_models_from_tflite.py` (``from_tflite.py``)
-- **00:22.836**: :ref:`sphx_glr_how_to_compile_models_from_coreml.py` (``from_coreml.py``)
-- **00:21.658**: :ref:`sphx_glr_how_to_compile_models_from_mxnet.py` (``from_mxnet.py``)
-- **00:20.176**: :ref:`sphx_glr_how_to_compile_models_from_pytorch.py` (``from_pytorch.py``)
-- **00:13.909**: :ref:`sphx_glr_how_to_compile_models_from_keras.py` (``from_keras.py``)
-- **00:02.691**: :ref:`sphx_glr_how_to_compile_models_from_onnx.py` (``from_onnx.py``)
+- **01:04.194**: :ref:`sphx_glr_how_to_compile_models_from_tensorflow.py` (``from_tensorflow.py``)
+- **01:04.138**: :ref:`sphx_glr_how_to_compile_models_from_paddle.py` (``from_paddle.py``)
+- **00:56.314**: :ref:`sphx_glr_how_to_compile_models_from_darknet.py` (``from_darknet.py``)
+- **00:30.639**: :ref:`sphx_glr_how_to_compile_models_from_oneflow.py` (``from_oneflow.py``)
+- **00:24.464**: :ref:`sphx_glr_how_to_compile_models_from_tflite.py` (``from_tflite.py``)
+- **00:22.029**: :ref:`sphx_glr_how_to_compile_models_from_pytorch.py` (``from_pytorch.py``)
+- **00:21.340**: :ref:`sphx_glr_how_to_compile_models_from_coreml.py` (``from_coreml.py``)
+- **00:21.037**: :ref:`sphx_glr_how_to_compile_models_from_mxnet.py` (``from_mxnet.py``)
+- **00:13.535**: :ref:`sphx_glr_how_to_compile_models_from_keras.py` (``from_keras.py``)
+- **00:02.503**: :ref:`sphx_glr_how_to_compile_models_from_onnx.py` (``from_onnx.py``)
diff --git a/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt b/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
index 6ebb24c47..2121a4aba 100644
--- a/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_model_on_android.rst.txt
@@ -393,7 +393,7 @@ Execute on TVM
Evaluate inference time cost...
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 16.6092 16.5794 17.4391 16.3050 0.3147
+ 15.8462 15.8339 15.9624 15.7438 0.0632
diff --git a/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt b/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
index 8f44302fe..7926b0dc7 100644
--- a/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_object_detection_pytorch.rst.txt
@@ -108,7 +108,7 @@ Load pre-trained maskrcnn from torchvision and do tracing
.. code-block:: none
Downloading: "https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth" to /workspace/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
-
0%| | 0.00/170M [00:00<?, ?B/s]
4%|3 | 6.31M/170M [00:00<00:02, 65.9MB/s]
12%|#1 | 20.3M/170M [00:00<00:01, 113MB/s]
23%|##3 | 39.5M/170M [00:00<00:00, 153MB/s]
32%|###2 | 54.5M/170M [00:00<00:00, 155MB/s]
41%|#### | 69.3M/170M [00:00<00:00, 149MB/s]
49%|####9 | 83.6M/170M [00:00<00:00, 148MB/s]
58%|#####7 | 97.7M/170M [00:00<00:00, 147MB/s]
66%|######5 | 112M/170M [00:00<00:00, 145MB/s]
74%|#######4 | 126M/170M [00:00<00:00, 147MB/s]
83%|########2 | 141M/170M [00:01<00:00, 149MB/s]
91%|#########1| 155M/170M [00:01<00:00, 150MB/s]
100%|##########| 170M/170M [00:01<00:00, 146MB/s]
+
0%| | 0.00/170M [00:00<?, ?B/s]
1%| | 1.12M/170M [00:00<00:15, 11.7MB/s]
1%|1 | 2.42M/170M [00:00<00:13, 12.7MB/s]
2%|2 | 4.01M/170M [00:00<00:12, 13.9MB/s]
3%|3 | 5.34M/170M [00:00<00:13, 12.9MB/s]
4%|3 | 6.58M/170M [00:00<00:13, 12.8MB/s]
5%|4 | 7.94M/170M [00:00<00:12, 13.1MB/s]
5%|5 | 9.20M/170M [00:00<00:13, 12.3MB/s]
6%|6 | 10.5M/170M [00:00<00:13, 12.5MB/s]
8%|7 | 12.9M/170M [00:00<00:10, 16.3MB/s]
9%|8 | 14.5M/170M [00:01<00:10, 15.5MB/s]
10%|# | 17.1M/170M [00:01<00:09, 17.6MB/s]
11%|#1 | 18.7M/170M [00:01<00:09, 17.4MB/s]
12%|#2 | 20.4M/170M [00:01<00:09, 17.3MB/s]
13%|#3 | 22.1M/170M [00:01<00:09, 16.3MB/s]
14%|#3 | 23.7M/170M [00:01<00:09, 16.3MB/s]
15%|#4 | 25.2M/170M [00:01<00:10, 14.8MB/s]
16%|#5 | 26.7M/170M [00:01<00:10, 14.7MB/s]
17%|#6 | 28.4M/170M [00:01<00:09, 15.0MB/s]
18%|#7 | 29.8M/170M [00:02<00:09, 15.0MB/s]
18%|#8 | 31.4M/170M [00:02<00:10, 14.1MB/s]
19%|#9 | 32.7M/170M [00:02<00:10, 13.8MB/s]
20%|## | 34.1M/170M [00:02<00:11, 12.2MB/s]
21%|## | 35.3M/170M [00:02<00:12, 11.6MB/s]
21%|##1 | 36.4M/170M [00:02<00:12, 11.2MB/s]
22%|##2 | 37.9M/170M [00:02<00:11, 12.2MB/s]
23%|##3 | 39.6M/170M [00:02<00:10, 13.7MB/s]
24%|##4 | 41.4M/170M [00:03<00:08, 15.2MB/s]
25%|##5 | 43.1M/170M [00:03<00:08, 16.0MB/s]
26%|##6 | 44.7M/170M [00:03<00:08, 16.3MB/s]
27%|##7 | 46.4M/170M [00:03<00:07, 16.5MB/s]
28%|##8 | 48.0M/170M [00:03<00:07, 16.5MB/s]
30%|##9 | 50.4M/170M [00:03<00:06, 18.8MB/s]
31%|### | 52.2M/170M [00:03<00:06, 18.6MB/s]
32%|###1 | 54.0M/170M [00:03<00:07, 16.6MB/s]
33%|###2 | 55.6M/170M [00:03<00:07, 16.2MB/s]
34%|###3 | 57.7M/170M [00:03<00:06, 17.6MB/s]
35%|###4 | 59.4M/170M [00:04<00:06, 17.4MB/s]
36%|###5 | 61.1M/170M [00:04<00:07, 14.5MB/s]
37%|###6 | 62.6M/170M [00:04<00:08, 13.2MB/s]
38%|###7 | 64.2M/170M [00:04<00:07, 14.2MB/s]
39%|###8 | 65.6M/170M [00:04<00:08, 12.5MB/s]
39%|###9 | 66.9M/170M [00:04<00:09, 11.4MB/s]
40%|#### | 68.4M/170M [00:04<00:08, 12.4MB/s]
41%|####1 | 69.7M/170M [00:05<00:08, 11.9MB/s]
42%|####1 | 71.0M/170M [00:05<00:08, 12.0MB/s]
43%|####2 | 72.2M/170M [00:05<00:08, 12.0MB/s]
43%|####3 | 73.9M/170M [00:05<00:08, 12.3MB/s]
44%|####4 | 75.4M/170M [00:05<00:07, 13.1MB/s]
45%|####5 | 77.1M/170M [00:05<00:06, 14.1MB/s]
46%|####6 | 78.4M/170M [00:05<00:06, 13.7MB/s]
47%|####7 | 79.9M/170M [00:05<00:07, 11.8MB/s]
48%|####7 | 81.1M/170M [00:06<00:09, 10.0MB/s]
49%|####8 | 82.7M/170M [00:06<00:07, 11.6MB/s]
49%|####9 | 84.1M/170M [00:06<00:07, 12.1MB/s]
50%|##### | 85.3M/170M [00:06<00:08, 11.0MB/s]
51%|##### | 86.4M/170M [00:06<00:08, 10.8MB/s]
52%|#####1 | 87.6M/170M [00:06<00:07, 11.1MB/s]
52%|#####2 | 88.7M/170M [00:06<00:08, 10.5MB/s]
53%|#####3 | 90.3M/170M [00:06<00:06, 12.2MB/s]
54%|#####4 | 92.1M/170M [00:06<00:05, 13.7MB/s]
55%|#####5 | 93.5M/170M [00:07<00:05, 13.7MB/s]
56%|#####6 | 95.5M/170M [00:07<00:04, 15.7MB/s]
57%|#####7 | 97.0M/170M [00:07<00:05, 14.4MB/s]
58%|#####7 | 98.5M/170M [00:07<00:05, 14.5MB/s]
59%|#####8 | 99.9M/170M [00:07<00:05, 13.5MB/s]
60%|#####9 | 101M/170M [00:07<00:05, 12.9MB/s]
60%|###### | 102M/170M [00:07<00:05, 12.0MB/s]
61%|######1 | 104M/170M [00:07<00:05, 12.2MB/s]
62%|######1 | 105M/170M [00:08<00:05, 12.5MB/s]
63%|######2 | 106M/170M [00:08<00:05, 12.7MB/s]
63%|######3 | 108M/170M [00:08<00:05, 11.8MB/s]
64%|######4 | 109M/170M [00:08<00:05, 11.9MB/s]
65%|######4 | 110M/170M [00:08<00:05, 11.8MB/s]
65%|######5 | 111M/170M [00:08<00:06, 9.54MB/s]
66%|######6 | 112M/170M [00:08<00:05, 10.4MB/s]
67%|######6 | 113M/170M [00:08<00:05, 10.6MB/s]
68%|######7 | 115M/170M [00:08<00:04, 12.5MB/s]
69%|######8 | 117M/170M [00:09<00:04, 13.0MB/s]
69%|######9 | 118M/170M [00:09<00:04, 12.9MB/s]
70%|####### | 119M/170M [00:09<00:03, 13.8MB/s]
71%|#######1 | 121M/170M [00:09<00:03, 14.3MB/s]
72%|#######2 | 122M/170M [00:09<00:04, 11.9MB/s]
73%|#######2 | 124M/170M [00:09<00:03, 12.8MB/s]
74%|#######3 | 125M/170M [00:09<00:03, 13.9MB/s]
75%|#######4 | 127M/170M [00:09<00:03, 14.4MB/s]
76%|#######5 | 128M/170M [00:09<00:03, 14.4MB/s]
76%|#######6 | 130M/170M [00:10<00:02, 14.5MB/s]
77%|#######7 | 131M/170M [00:10<00:02, 14.1MB/s]
78%|#######8 | 133M/170M [00:10<00:02, 13.6MB/s]
79%|#######8 | 134M/170M [00:10<00:02, 14.2MB/s]
80%|#######9 | 135M/170M [00:10<00:02, 12.9MB/s]
81%|########1 | 138M/170M [00:10<00:02, 15.5MB/s]
82%|########2 | 140M/170M [00:10<00:01, 17.2MB/s]
84%|########3 | 142M/170M [00:10<00:01, 18.4MB/s]
85%|########4 | 144M/170M [00:10<00:01, 19.3MB/s]
86%|########5 | 146M/170M [00:11<00:01, 19.4MB/s]
87%|########7 | 148M/170M [00:11<00:01, 19.0MB/s]
89%|########8 | 151M/170M [00:11<00:00, 21.8MB/s]
90%|######### | 153M/170M [00:11<00:00, 20.6MB/s]
91%|#########1| 155M/170M [00:11<00:00, 16.5MB/s]
92%|#########2| 157M/170M [00:11<00:00, 17.4MB/s]
93%|#########3| 159M/170M [00:11<00:00, 17.3MB/s]
94%|#########4| 160M/170M [00:11<00:00, 17.2MB/s]
96%|#########5| 163M/170M [00:11<00:00, 17.8MB/s]
97%|#########6| 164M/170M [00:12<00:00, 17.0MB/s]
98%|#########7| 166M/170M [00:12<00:00, 15.0MB/s]
99%|#########8| 168M/170M [00:12<00:00, 14.8MB/s]
100%|#########9| 169M/170M [00:12<00:00, 16.0MB/s]
100%|##########| 170M/170M [00:12<00:00, 14.3MB/s]
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:3878: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
for i in range(dim)
/usr/local/lib/python3.7/dist-packages/torchvision/models/detection/anchor_utils.py:127: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
@@ -253,7 +253,7 @@ Get boxes with score larger than 0.9
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 3 minutes 14.969 seconds)
+ **Total running time of the script:** ( 3 minutes 14.476 seconds)
.. _sphx_glr_download_how_to_deploy_models_deploy_object_detection_pytorch.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt b/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
index f5a6c11af..bf7d00453 100644
--- a/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_prequantized.rst.txt
@@ -187,7 +187,7 @@ training. Other models require a full post training calibration.
.. code-block:: none
Downloading: "https://download.pytorch.org/models/mobilenet_v2-b0353104.pth" to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth
-
0%| | 0.00/13.6M [00:00<?, ?B/s]
19%|#9 | 2.62M/13.6M [00:00<00:00, 27.1MB/s]
40%|###9 | 5.38M/13.6M [00:00<00:00, 28.0MB/s]
59%|#####9 | 8.05M/13.6M [00:00<00:00, 23.4MB/s]
76%|#######6 | 10.4M/13.6M [00:00<00:00, 22.3MB/s]
94%|#########3| 12.7M/13.6M [00:00<00:00, 22.9MB/s]
100%|##########| 13.6M/13.6M [00:00<00:00, 23.2MB/s]
+
0%| | 0.00/13.6M [00:00<?, ?B/s]
11%|# | 1.44M/13.6M [00:00<00:00, 15.1MB/s]
31%|### | 4.17M/13.6M [00:00<00:00, 23.1MB/s]
68%|######7 | 9.20M/13.6M [00:00<00:00, 36.4MB/s]
100%|##########| 13.6M/13.6M [00:00<00:00, 39.9MB/s]
@@ -344,7 +344,7 @@ Here we give an example of how to measure performance of TVM compiled models.
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 90.7040 90.8425 93.0880 90.1547 0.4057
+ 90.4965 90.4885 91.1011 90.1071 0.2119
@@ -384,7 +384,7 @@ TODO
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 7.981 seconds)
+ **Total running time of the script:** ( 1 minutes 4.896 seconds)
.. _sphx_glr_download_how_to_deploy_models_deploy_prequantized.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt b/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
index 62996de2b..f19b2bbd8 100644
--- a/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_prequantized_tflite.rst.txt
@@ -351,7 +351,7 @@ Here we give an example of how to measure performance of TVM compiled models.
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 119.6547 119.6150 120.5882 118.8113 0.3526
+ 118.7466 118.6988 124.2167 117.9157 0.6235
@@ -385,7 +385,7 @@ Here we give an example of how to measure performance of TVM compiled models.
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 2 minutes 0.512 seconds)
+ **Total running time of the script:** ( 2 minutes 1.474 seconds)
.. _sphx_glr_download_how_to_deploy_models_deploy_prequantized_tflite.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt b/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
index e5847a41e..1413fe051 100644
--- a/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_quantized.rst.txt
@@ -221,7 +221,7 @@ We create a Relay VM to build and execute the model.
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 18.771 seconds)
+ **Total running time of the script:** ( 1 minutes 16.612 seconds)
.. _sphx_glr_download_how_to_deploy_models_deploy_quantized.py:
diff --git a/docs/_sources/how_to/deploy_models/deploy_ssd_gluoncv.rst.txt b/docs/_sources/how_to/deploy_models/deploy_ssd_gluoncv.rst.txt
index 1fe50af85..0069a4939 100644
--- a/docs/_sources/how_to/deploy_models/deploy_ssd_gluoncv.rst.txt
+++ b/docs/_sources/how_to/deploy_models/deploy_ssd_gluoncv.rst.txt
@@ -137,7 +137,7 @@ Convert and compile model for CPU.
data: None
input_sym_arg_type = in_param.infer_type()[0]
Downloading /workspace/.mxnet/models/ssd_512_resnet50_v1_voc-9c8b225a.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/ssd_512_resnet50_v1_voc-9c8b225a.zip...
-
0%| | 0/132723 [00:00<?, ?KB/s]
4%|4 | 5753/132723 [00:00<00:02, 57520.04KB/s]
10%|9 | 12722/132723 [00:00<00:01, 64675.74KB/s]
15%|#5 | 20193/132723 [00:00<00:01, 69247.47KB/s]
22%|##1 | 28684/132723 [00:00<00:01, 75427.22KB/s]
27%|##7 | 36227/132723 [00:00<00:01, 60414.75KB/s]
34%|###3 | 44625/132723 [00:00<00:01, 67295.27KB/s]
40%|###9 | 53074/132723 [00:00<00:01, 72355.01KB/s]
46%|####6 | 61536/132723 [00:00<00:00, 75984.80KB/s]
53%|#####2 | 69958/132723 [00:00<00:00, 78430.89KB/s]
59%|#####9 | 78345/132723 [00:01<00:00, 80050.08KB/s]
65%|######5 | 86648/132723 [00:01<00:00, 80938.42KB/s]
72%|#######1 | 95046/132723 [00:01<00:00, 81845.87KB/s]
78%|#######7 | 103464/132723 [00:01<00:00, 82543.00KB/s]
84%|########4 | 111766/132723 [00:01<00:00, 82684.72KB/s]
91%|######### | 120173/132723 [00:01<00:00, 83096.59KB/s]
 97%|#########6| 128708/132723 [00:01<00:00, 83769.14KB/s]
100%|##########| 132723/132723 [00:01<00:00, 77266.43KB/s]
+
0%| | 0/132723 [00:00<?, ?KB/s]
1%|1 | 1629/132723 [00:00<00:08, 15108.34KB/s]
3%|3 | 4430/132723 [00:00<00:05, 22460.26KB/s]
7%|6 | 9070/132723 [00:00<00:03, 33093.30KB/s]
10%|# | 13726/132723 [00:00<00:03, 38319.29KB/s]
15%|#4 | 19870/132723 [00:00<00:02, 46503.08KB/s]
21%|## | 27445/132723 [00:00<00:01, 56376.95KB/s]
27%|##6 | 35593/132723 [00:00<00:01, 64543.51KB/s]
32%|###1 | 42398/132723 [00:00<00:01, 64878.02KB/s]
37%|###6 | 48897/132723 [00:00<00:01, 61712.49KB/s]
42%|####2 | 55859/132723 [00:01<00:01, 64045.03KB/s]
47%|####6 | 62299/132723 [00:01<00:01, 63333.29KB/s]
52%|#####2 | 69418/132723 [00:01<00:00, 65650.66KB/s]
57%|#####7 | 76198/132723 [00:01<00:00, 66286.39KB/s]
63%|######2 | 83422/132723 [00:01<00:00, 68052.47KB/s]
68%|######7 | 90243/132723 [00:01<00:00, 63112.72KB/s]
 74%|#######3 | 97621/132723 [00:01<00:00, 66131.07KB/s]
80%|#######9 | 105781/132723 [00:01<00:00, 70583.46KB/s]
85%|########5 | 112911/132723 [00:01<00:00, 66947.25KB/s]
90%|######### | 119689/132723 [00:02<00:00, 61302.31KB/s]
95%|#########4| 125951/132723 [00:02<00:00, 54798.91KB/s]
100%|#########9| 132411/132723 [00:02<00:00, 57294.21KB/s]
100%|##########| 132723/132723 [00:02<00:00, 58546.79KB/s]
@@ -202,7 +202,7 @@ Display result
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 2 minutes 30.048 seconds)
+ **Total running time of the script:** ( 2 minutes 22.199 seconds)
.. _sphx_glr_download_how_to_deploy_models_deploy_ssd_gluoncv.py:
diff --git a/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt b/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
index 394a56739..8447cef6d 100644
--- a/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/deploy_models/sg_execution_times.rst.txt
@@ -5,13 +5,13 @@
Computation times
=================
-**11:05.635** total execution time for **how_to_deploy_models** files:
+**10:49.724** total execution time for **how_to_deploy_models** files:
-- **03:14.969**: :ref:`sphx_glr_how_to_deploy_models_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``)
-- **02:30.048**: :ref:`sphx_glr_how_to_deploy_models_deploy_ssd_gluoncv.py` (``deploy_ssd_gluoncv.py``)
-- **02:00.512**: :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)
-- **01:18.771**: :ref:`sphx_glr_how_to_deploy_models_deploy_quantized.py` (``deploy_quantized.py``)
-- **01:07.981**: :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized.py` (``deploy_prequantized.py``)
-- **00:30.029**: :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_android.py` (``deploy_model_on_android.py``)
-- **00:23.119**: :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)
-- **00:00.207**: :ref:`sphx_glr_how_to_deploy_models_deploy_sparse.py` (``deploy_sparse.py``)
+- **03:14.476**: :ref:`sphx_glr_how_to_deploy_models_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``)
+- **02:22.199**: :ref:`sphx_glr_how_to_deploy_models_deploy_ssd_gluoncv.py` (``deploy_ssd_gluoncv.py``)
+- **02:01.474**: :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)
+- **01:16.612**: :ref:`sphx_glr_how_to_deploy_models_deploy_quantized.py` (``deploy_quantized.py``)
+- **01:04.896**: :ref:`sphx_glr_how_to_deploy_models_deploy_prequantized.py` (``deploy_prequantized.py``)
+- **00:28.366**: :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_android.py` (``deploy_model_on_android.py``)
+- **00:21.500**: :ref:`sphx_glr_how_to_deploy_models_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)
+- **00:00.200**: :ref:`sphx_glr_how_to_deploy_models_deploy_sparse.py` (``deploy_sparse.py``)
diff --git a/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt b/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
index 4c683f9a0..37d43d69d 100644
--- a/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/bring_your_own_datatypes.rst.txt
@@ -423,7 +423,7 @@ First let us define two helper functions to get the mobilenet model and a cat im
.. code-block:: none
- Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip1f6ea228-ff1b-4af1-9183-cf78189f33d5 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
+ Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zipd3f3452b-7645-428a-b305-d7484200241d from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
diff --git a/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt b/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
index 864adf26f..b9b2f5132 100644
--- a/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/sg_execution_times.rst.txt
@@ -5,9 +5,9 @@
Computation times
=================
-**00:39.153** total execution time for **how_to_extend_tvm** files:
+**00:37.975** total execution time for **how_to_extend_tvm** files:
-- **00:35.506**: :ref:`sphx_glr_how_to_extend_tvm_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``)
-- **00:02.343**: :ref:`sphx_glr_how_to_extend_tvm_use_pass_instrument.py` (``use_pass_instrument.py``)
-- **00:01.089**: :ref:`sphx_glr_how_to_extend_tvm_use_pass_infra.py` (``use_pass_infra.py``)
-- **00:00.214**: :ref:`sphx_glr_how_to_extend_tvm_low_level_custom_pass.py` (``low_level_custom_pass.py``)
+- **00:34.444**: :ref:`sphx_glr_how_to_extend_tvm_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``)
+- **00:02.280**: :ref:`sphx_glr_how_to_extend_tvm_use_pass_instrument.py` (``use_pass_instrument.py``)
+- **00:01.053**: :ref:`sphx_glr_how_to_extend_tvm_use_pass_infra.py` (``use_pass_infra.py``)
+- **00:00.199**: :ref:`sphx_glr_how_to_extend_tvm_low_level_custom_pass.py` (``low_level_custom_pass.py``)
diff --git a/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt b/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
index 9ef73ad39..b64109253 100644
--- a/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
+++ b/docs/_sources/how_to/extend_tvm/use_pass_instrument.rst.txt
@@ -199,10 +199,10 @@ profile the execution time of each passes.
.. code-block:: none
Printing results of timing profile...
- InferType: 6158us [6158us] (45.74%; 45.74%)
- FoldScaleAxis: 7306us [2us] (54.26%; 54.26%)
- FoldConstant: 7303us [1510us] (54.25%; 99.97%)
- InferType: 5794us [5794us] (43.03%; 79.33%)
+ InferType: 6241us [6241us] (45.74%; 45.74%)
+ FoldScaleAxis: 7403us [2us] (54.26%; 54.26%)
+ FoldConstant: 7401us [1519us] (54.25%; 99.97%)
+ InferType: 5882us [5882us] (43.11%; 79.48%)
@@ -239,10 +239,10 @@ Refer to following sections and :py:func:`tvm.instrument.pass_instrument` for th
.. code-block:: none
Printing results of timing profile...
- InferType: 5874us [5874us] (44.88%; 44.88%)
- FoldScaleAxis: 7213us [2us] (55.12%; 55.12%)
- FoldConstant: 7211us [1508us] (55.10%; 99.97%)
- InferType: 5703us [5703us] (43.58%; 79.09%)
+ InferType: 6028us [6028us] (44.55%; 44.55%)
+ FoldScaleAxis: 7503us [2us] (55.45%; 55.45%)
+ FoldConstant: 7501us [1571us] (55.43%; 99.97%)
+ InferType: 5930us [5930us] (43.83%; 79.06%)
diff --git a/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt b/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
index b43ecf01f..526750a6d 100644
--- a/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_conv_cuda.rst.txt
@@ -295,7 +295,7 @@ latency of convolution.
.. code-block:: none
- Convolution: 52.730581 ms
+ Convolution: 35.315314 ms
diff --git a/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt b/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
index 82c5436ee..3ed267fb8 100644
--- a/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_conv_tensorcore.rst.txt
@@ -628,7 +628,7 @@ be able to run on our build server
.. code-block:: none
- conv2d with tensor core: 6.616086 ms
+ conv2d with tensor core: 8.942790 ms
diff --git a/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt b/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
index 7a29716ab..82c93d57a 100644
--- a/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/opt_gemm.rst.txt
@@ -118,8 +118,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
.. code-block:: none
- Numpy running time: 0.019359
- Baseline: 3.477377
+ Numpy running time: 0.018525
+ Baseline: 3.480135
@@ -210,7 +210,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
.. code-block:: none
- Opt1: 0.309022
+ Opt1: 0.294867
@@ -309,7 +309,7 @@ In this tutorial, we chose to vectorize the inner loop row data since it is cach
.. code-block:: none
- Opt2: 0.340348
+ Opt2: 0.330186
@@ -401,7 +401,7 @@ the access pattern for A matrix is more cache friendly.
.. code-block:: none
- Opt3: 0.122857
+ Opt3: 0.117279
@@ -520,7 +520,7 @@ flattening.
.. code-block:: none
- Opt4: 0.112846
+ Opt4: 0.112383
@@ -638,7 +638,7 @@ write to C when all the block results are ready.
.. code-block:: none
- Opt5: 0.113382
+ Opt5: 0.111201
@@ -759,7 +759,7 @@ Futhermore, we can also utilize multi-core processors to do the thread-level par
.. code-block:: none
- Opt6: 0.146819
+ Opt6: 0.145077
diff --git a/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt b/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
index 4ae81a164..1d780dc7b 100644
--- a/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/optimize_operators/sg_execution_times.rst.txt
@@ -5,8 +5,8 @@
Computation times
=================
-**00:35.754** total execution time for **how_to_optimize_operators** files:
+**00:35.131** total execution time for **how_to_optimize_operators** files:
-- **00:33.065**: :ref:`sphx_glr_how_to_optimize_operators_opt_gemm.py` (``opt_gemm.py``)
-- **00:01.422**: :ref:`sphx_glr_how_to_optimize_operators_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``)
-- **00:01.267**: :ref:`sphx_glr_how_to_optimize_operators_opt_conv_cuda.py` (``opt_conv_cuda.py``)
+- **00:32.474**: :ref:`sphx_glr_how_to_optimize_operators_opt_gemm.py` (``opt_gemm.py``)
+- **00:01.458**: :ref:`sphx_glr_how_to_optimize_operators_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``)
+- **00:01.199**: :ref:`sphx_glr_how_to_optimize_operators_opt_conv_cuda.py` (``opt_conv_cuda.py``)
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
index a65e8d8ff..ddb2a667d 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/sg_execution_times.rst.txt
@@ -5,11 +5,11 @@
Computation times
=================
-**05:09.442** total execution time for **how_to_tune_with_autoscheduler** files:
-
-- **02:27.217**: :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``)
-- **01:21.204**: :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_x86.py` (``tune_network_x86.py``)
-- **00:41.339**: :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_cuda.py` (``tune_network_cuda.py``)
-- **00:21.812**: :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_sparse_x86.py` (``tune_sparse_x86.py``)
-- **00:08.989**: :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_mali.py` (``tune_network_mali.py``)
-- **00:08.881**: :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_arm.py` (``tune_network_arm.py``)
+**04:54.695** total execution time for **how_to_tune_with_autoscheduler** files:
+
+- **02:20.941**: :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``)
+- **01:18.949**: :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_x86.py` (``tune_network_x86.py``)
+- **00:40.281**: :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_cuda.py` (``tune_network_cuda.py``)
+- **00:17.236**: :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_sparse_x86.py` (``tune_sparse_x86.py``)
+- **00:08.934**: :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_mali.py` (``tune_network_mali.py``)
+- **00:08.355**: :ref:`sphx_glr_how_to_tune_with_autoscheduler_tune_network_arm.py` (``tune_network_arm.py``)
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
index b25205b6a..5faa03a5b 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.rst.txt
@@ -222,472 +222,232 @@ cooperative fetching, unrolling and operator fusion.
compute: Buffer(compute_2: Pointer(float32), float32, [25088], [])}
buffer_map = {data_1: data, kernel_1: kernel, bias_1: bias, compute_1: compute}
preflattened_buffer_map = {data_1: data_3: Buffer(data_2, float32, [1, 512, 7, 7], []), kernel_1: kernel_3: Buffer(kernel_2, float32, [512, 512, 3, 3], []), bias_1: bias_3: Buffer(bias_2, float32, [1, 512, 1, 1], []), compute_1: compute_3: Buffer(compute_2, float32, [1, 512, 7, 7], [])} {
- attr [IterVar(blockIdx.x: int32, (nullptr), "ThreadIndex", "blockIdx.x")] "thread_extent" = 64;
- allocate(conv2d_nchw: Pointer(local float32), float32, [4]), storage_scope = local;
- allocate(pad_temp.shared: Pointer(shared float32), float32, [2016]), storage_scope = shared;
- allocate(kernel.shared: Pointer(shared float32), float32, [768]), storage_scope = shared;
- attr [IterVar(threadIdx.x: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98 {
- conv2d_nchw_1: Buffer(conv2d_nchw, float32, [4], [], scope="local", align=8)[0] = 0f32
- conv2d_nchw_1[2] = 0f32
+ attr [IterVar(blockIdx.x: int32, (nullptr), "ThreadIndex", "blockIdx.x")] "thread_extent" = 32;
+ allocate(conv2d_nchw: Pointer(local float32), float32, [14]), storage_scope = local;
+ allocate(pad_temp.shared: Pointer(shared float32), float32, [1296]), storage_scope = shared;
+ allocate(kernel.shared: Pointer(shared float32), float32, [2304]), storage_scope = shared;
+ attr [IterVar(threadIdx.x: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56 {
+ conv2d_nchw_1: Buffer(conv2d_nchw, float32, [14], [], scope="local", align=32)[0] = 0f32
conv2d_nchw_1[1] = 0f32
+ conv2d_nchw_1[2] = 0f32
conv2d_nchw_1[3] = 0f32
- for (rc.outer.outer: int32, 0, 16) {
- for (rx.outer.outer: int32, 0, 3) {
- let cse_var_2: int32 = (rc.outer.outer*1568)
- let cse_var_1: int32 = (rc.outer.outer*288)
- {
- attr [IterVar(threadIdx.x_1: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1: Buffer(pad_temp.shared, float32, [2016], [], scope="shared")[threadIdx.x_1] = @tir.if_then_else(((((7 <= floormod(threadIdx.x_1, 63)) && (floormod(threadIdx.x_1, 63) < 56)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[((((cse_var_2 + (floordiv(threadIdx.x_1, 63)*49)) + rx.outer.outer) + floormod(threadIdx.x_1, 63)) - 8)], 0f32, dtype=float32)
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 98)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 5), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 5), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 14), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 5), 9)*7)) + rx.outer.outer) + floormod(threadIdx.x_1, 7)) - 8)], 0f32, dty [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 196)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 1), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 1), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 28), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 1), 9)*7)) + rx.outer.outer) + floormod(threadIdx.x_1, 7)) - 8)], 0f32, dt [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 294)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 6), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 6), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 42), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 6), 9)*7)) + rx.outer.outer) + floormod(threadIdx.x_1, 7)) - 8)], 0f32, dt [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 392)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 2), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 2), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 56), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 2), 9)*7)) + rx.outer.outer) + floormod(threadIdx.x_1, 7)) - 8)], 0f32, dt [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 490)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 7), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 7), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 70), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 7), 9)*7)) + rx.outer.outer) + floormod(threadIdx.x_1, 7)) - 8)], 0f32, dt [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 588)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 3), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 3), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 84), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 3), 9)*7)) + rx.outer.outer) + floormod(threadIdx.x_1, 7)) - 8)], 0f32, dt [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 686)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 8), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 8), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 98), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 8), 9)*7)) + rx.outer.outer) + floormod(threadIdx.x_1, 7)) - 8)], 0f32, dt [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 784)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 4), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 4), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 112), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 4), 9)*7)) + rx.outer.outer) + floormod(threadIdx.x_1, 7)) - 8)], 0f32, d [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 882)] = @tir.if_then_else(((((7 <= floormod(threadIdx.x_1, 63)) && (floormod(threadIdx.x_1, 63) < 56)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[((((cse_var_2 + (floordiv(floordiv(threadIdx.x_1, 7), 9)*49)) + rx.outer.outer) + floormod(threadIdx.x_1, 63)) + 678)], 0f32, dtype=float32)
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 980)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 5), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 5), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 140), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 5), 9)*7)) + rx.outer.outer) + floormod(threadIdx.x_1, 7)) - 8)], 0f32, d [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 1078)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 1), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 1), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 154), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 1), 9)*7)) + rx.outer.outer) + floormod(threadIdx.x_1, 7)) - 8)], 0f32, [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 1176)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 6), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 6), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 168), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 6), 9)*7)) + rx.outer.outer) + floormod(threadIdx.x_1, 7)) - 8)], 0f32, [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 1274)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 2), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 2), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 182), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 2), 9)*7)) + rx.outer.outer) + floormod(threadIdx.x_1, 7)) - 8)], 0f32, [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 1372)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 7), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 7), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 196), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 7), 9)*7)) + rx.outer.outer) + floormod(threadIdx.x_1, 7)) - 8)], 0f32, [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 1470)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 3), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 3), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 210), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 3), 9)*7)) + rx.outer.outer) + floormod(threadIdx.x_1, 7)) - 8)], 0f32, [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 1568)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 8), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 8), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 224), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 8), 9)*7)) + rx.outer.outer) + floormod(threadIdx.x_1, 7)) - 8)], 0f32, [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 1666)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 4), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 4), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 238), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 4), 9)*7)) + rx.outer.outer) + floormod(threadIdx.x_1, 7)) - 8)], 0f32, [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 1764)] = @tir.if_then_else(((((7 <= floormod(threadIdx.x_1, 63)) && (floormod(threadIdx.x_1, 63) < 56)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[((((cse_var_2 + (floordiv(floordiv(threadIdx.x_1, 7), 9)*49)) + rx.outer.outer) + floormod(threadIdx.x_1, 63)) + 1364)], 0f32, dtype=float32)
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 1862)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 5), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 5), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 266), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 5), 9)*7)) + rx.outer.outer) + floormod(threadIdx.x_1, 7)) - 8)], 0f32, [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- if @tir.likely((threadIdx.x_1 < 56), dtype=bool) {
- pad_temp.shared_1[(threadIdx.x_1 + 1960)] = @tir.if_then_else((((floormod((floordiv(threadIdx.x_1, 7) + 1), 9) < 8) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 280), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 1), 9)*7)) + rx.outer.outer) + floormod(threadIdx.x_1, 7)) - 8)], 0f32, dtype=float32)
+ conv2d_nchw_1[4] = 0f32
+ conv2d_nchw_1[5] = 0f32
+ conv2d_nchw_1[6] = 0f32
+ conv2d_nchw_1[7] = 0f32
+ conv2d_nchw_1[8] = 0f32
+ conv2d_nchw_1[9] = 0f32
+ conv2d_nchw_1[10] = 0f32
+ conv2d_nchw_1[11] = 0f32
+ conv2d_nchw_1[12] = 0f32
+ conv2d_nchw_1[13] = 0f32
+ for (rc.outer.outer: int32, 0, 32) {
+ let cse_var_1: int32 = (rc.outer.outer*784)
+ {
+ attr [IterVar(threadIdx.x_1: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1: Buffer(pad_temp.shared, float32, [1296], [], scope="shared")[threadIdx.x_1] = @tir.if_then_else((((9 <= threadIdx.x_1) && (1 <= floormod(threadIdx.x_1, 9))) && (floormod(threadIdx.x_1, 9) < 8)), data[(((cse_var_1 + (floordiv(threadIdx.x_1, 9)*7)) + floormod(threadIdx.x_1, 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 56)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 56), 81)) && (floormod((threadIdx.x_1 + 56), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 2), 9))) && (floormod((threadIdx.x_1 + 2), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 56), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 56), 81), 9)*7)) + floormod((threadIdx.x_1 + 2), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 112)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 112), 81)) && (floormod((threadIdx.x_1 + 31), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 4), 9))) && (floormod((threadIdx.x_1 + 4), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 112), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 112), 81), 9)*7)) + floormod((threadIdx.x_1 + 4), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 168)] = @tir.if_then_else((((9 <= floormod((threadIdx.x_1 + 168), 81)) && (1 <= floormod((threadIdx.x_1 + 6), 9))) && (floormod((threadIdx.x_1 + 6), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 168), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 168), 81), 9)*7)) + floormod((threadIdx.x_1 + 6), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 224)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 224), 81)) && (floormod((threadIdx.x_1 + 62), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 8), 9))) && (floormod((threadIdx.x_1 + 8), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 224), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 224), 81), 9)*7)) + floormod((threadIdx.x_1 + 8), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 280)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 280), 81)) && (floormod((threadIdx.x_1 + 37), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 1), 9))) && (floormod((threadIdx.x_1 + 1), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 280), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 280), 81), 9)*7)) + floormod((threadIdx.x_1 + 1), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 336)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 3), 9)) && (floormod((threadIdx.x_1 + 3), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 336), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 336), 81), 9)*7)) + floormod((threadIdx.x_1 + 3), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 392)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 392), 81)) && (floormod((threadIdx.x_1 + 68), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 5), 9))) && (floormod((threadIdx.x_1 + 5), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 392), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 392), 81), 9)*7)) + floormod((threadIdx.x_1 + 5), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 448)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 448), 81)) && (floormod((threadIdx.x_1 + 43), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 7), 9))) && (floormod((threadIdx.x_1 + 7), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 448), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 448), 81), 9)*7)) + floormod((threadIdx.x_1 + 7), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 504)] = @tir.if_then_else((((floormod((threadIdx.x_1 + 18), 81) < 72) && (1 <= floormod(threadIdx.x_1, 9))) && (floormod(threadIdx.x_1, 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 504), 81)*49)) + (floormod((floordiv(threadIdx.x_1, 9) + 2), 9)*7)) + floormod(threadIdx.x_1, 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 560)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 560), 81)) && (floormod((threadIdx.x_1 + 74), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 2), 9))) && (floormod((threadIdx.x_1 + 2), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 560), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 560), 81), 9)*7)) + floormod((threadIdx.x_1 + 2), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 616)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 616), 81)) && (floormod((threadIdx.x_1 + 49), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 4), 9))) && (floormod((threadIdx.x_1 + 4), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 616), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 616), 81), 9)*7)) + floormod((threadIdx.x_1 + 4), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 672)] = @tir.if_then_else((((floormod((threadIdx.x_1 + 24), 81) < 72) && (1 <= floormod((threadIdx.x_1 + 6), 9))) && (floormod((threadIdx.x_1 + 6), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 672), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 672), 81), 9)*7)) + floormod((threadIdx.x_1 + 6), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 728)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 728), 81)) && (floormod((threadIdx.x_1 + 80), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 8), 9))) && (floormod((threadIdx.x_1 + 8), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 728), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 728), 81), 9)*7)) + floormod((threadIdx.x_1 + 8), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 784)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 784), 81)) && (floormod((threadIdx.x_1 + 55), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 1), 9))) && (floormod((threadIdx.x_1 + 1), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 784), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 784), 81), 9)*7)) + floormod((threadIdx.x_1 + 1), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 840)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 840), 81)) && (floormod((threadIdx.x_1 + 30), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 3), 9))) && (floormod((threadIdx.x_1 + 3), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 840), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 840), 81), 9)*7)) + floormod((threadIdx.x_1 + 3), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 896)] = @tir.if_then_else((((9 <= floormod((threadIdx.x_1 + 896), 81)) && (1 <= floormod((threadIdx.x_1 + 5), 9))) && (floormod((threadIdx.x_1 + 5), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 896), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 896), 81), 9)*7)) + floormod((threadIdx.x_1 + 5), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 952)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 952), 81)) && (floormod((threadIdx.x_1 + 61), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 7), 9))) && (floormod((threadIdx.x_1 + 7), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 952), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 952), 81), 9)*7)) + floormod((threadIdx.x_1 + 7), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 1008)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 9) + 4), 9)) && (floormod((threadIdx.x_1 + 36), 81) < 72)) && (1 <= floormod(threadIdx.x_1, 9))) && (floormod(threadIdx.x_1, 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 1008), 81)*49)) + (floormod((floordiv(threadIdx.x_1, 9) + 4), 9)*7)) + floormod(threadIdx.x_1, 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 1064)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 2), 9)) && (floormod((threadIdx.x_1 + 2), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 1064), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 1064), 81), 9)*7)) + floormod((threadIdx.x_1 + 2), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 1120)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 1120), 81)) && (floormod((threadIdx.x_1 + 67), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 4), 9))) && (floormod((threadIdx.x_1 + 4), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 1120), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 1120), 81), 9)*7)) + floormod((threadIdx.x_1 + 4), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 1176)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 1176), 81)) && (floormod((threadIdx.x_1 + 42), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 6), 9))) && (floormod((threadIdx.x_1 + 6), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 1176), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 1176), 81), 9)*7)) + floormod((threadIdx.x_1 + 6), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 1232)] = @tir.if_then_else((((floormod((threadIdx.x_1 + 17), 81) < 72) && (1 <= floormod((threadIdx.x_1 + 8), 9))) && (floormod((threadIdx.x_1 + 8), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 1232), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 1232), 81), 9)*7)) + floormod((threadIdx.x_1 + 8), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ if @tir.likely((threadIdx.x_1 < 8), dtype=bool) {
+ pad_temp.shared_1[(threadIdx.x_1 + 1288)] = 0f32
+ }
+ attr [IterVar(threadIdx.x_2: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56 {
+ kernel.shared_1: Buffer(kernel.shared, float32, [2304], [], scope="shared")[(threadIdx.x_2*24)] = kernel[((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24))]
+ kernel.shared_1[((threadIdx.x_2*24) + 1)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 1)]
+ kernel.shared_1[((threadIdx.x_2*24) + 2)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 2)]
+ kernel.shared_1[((threadIdx.x_2*24) + 3)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 3)]
+ kernel.shared_1[((threadIdx.x_2*24) + 4)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 4)]
+ kernel.shared_1[((threadIdx.x_2*24) + 5)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 5)]
+ kernel.shared_1[((threadIdx.x_2*24) + 6)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 6)]
+ kernel.shared_1[((threadIdx.x_2*24) + 7)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 7)]
+ kernel.shared_1[((threadIdx.x_2*24) + 8)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 8)]
+ kernel.shared_1[((threadIdx.x_2*24) + 9)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 9)]
+ kernel.shared_1[((threadIdx.x_2*24) + 10)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 10)]
+ kernel.shared_1[((threadIdx.x_2*24) + 11)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 11)]
+ kernel.shared_1[((threadIdx.x_2*24) + 12)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 12)]
+ kernel.shared_1[((threadIdx.x_2*24) + 13)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 13)]
+ kernel.shared_1[((threadIdx.x_2*24) + 14)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 14)]
+ kernel.shared_1[((threadIdx.x_2*24) + 15)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 15)]
+ kernel.shared_1[((threadIdx.x_2*24) + 16)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 16)]
+ kernel.shared_1[((threadIdx.x_2*24) + 17)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 17)]
+ kernel.shared_1[((threadIdx.x_2*24) + 18)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 18)]
+ kernel.shared_1[((threadIdx.x_2*24) + 19)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 19)]
+ kernel.shared_1[((threadIdx.x_2*24) + 20)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 20)]
+ kernel.shared_1[((threadIdx.x_2*24) + 21)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 21)]
+ kernel.shared_1[((threadIdx.x_2*24) + 22)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 22)]
+ kernel.shared_1[((threadIdx.x_2*24) + 23)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 23)]
+ }
+ attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56 {
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1344)] = kernel[((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 16), 48)*3))]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1345)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 16), 48)*3)) + 1)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1346)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 16), 48)*3)) + 2)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1347)] = kernel[((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 17), 48)*3))]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1348)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 17), 48)*3)) + 1)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1349)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 17), 48)*3)) + 2)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1350)] = kernel[((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 18), 48)*3))]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1351)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 18), 48)*3)) + 1)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1352)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 18), 48)*3)) + 2)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1353)] = kernel[((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 19), 48)*3))]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1354)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 19), 48)*3)) + 1)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1355)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 19), 48)*3)) + 2)]
}
- attr [IterVar(threadIdx.x_2: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- kernel.shared_1: Buffer(kernel.shared, float32, [768], [], scope="shared")[threadIdx.x_2] = kernel[(((((blockIdx.x*36864) + (floordiv(threadIdx.x_2, 96)*4608)) + cse_var_1) + (floormod(threadIdx.x_2, 96)*3)) + rx.outer.outer)]
- attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- kernel.shared_1[(threadIdx.x_2 + 98)] = kernel[(((((blockIdx.x*36864) + (floordiv((floordiv(threadIdx.x_2, 2) + 49), 48)*4608)) + cse_var_1) + (floormod((threadIdx.x_2 + 2), 96)*3)) + rx.outer.outer)]
- attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- kernel.shared_1[(threadIdx.x_2 + 196)] = kernel[(((((blockIdx.x*36864) + (floordiv((floordiv(threadIdx.x_2, 2) + 98), 48)*4608)) + cse_var_1) + (floormod((threadIdx.x_2 + 4), 96)*3)) + rx.outer.outer)]
- attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- kernel.shared_1[(threadIdx.x_2 + 294)] = kernel[(((((blockIdx.x*36864) + (floordiv((floordiv(threadIdx.x_2, 2) + 147), 48)*4608)) + cse_var_1) + (floormod((threadIdx.x_2 + 6), 96)*3)) + rx.outer.outer)]
- attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- kernel.shared_1[(threadIdx.x_2 + 392)] = kernel[(((((blockIdx.x*36864) + (floordiv((floordiv(threadIdx.x_2, 2) + 196), 48)*4608)) + cse_var_1) + (floormod((threadIdx.x_2 + 8), 96)*3)) + rx.outer.outer)]
- attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- kernel.shared_1[(threadIdx.x_2 + 490)] = kernel[(((((blockIdx.x*36864) + (floordiv((floordiv(threadIdx.x_2, 2) + 245), 48)*4608)) + cse_var_1) + (floormod((threadIdx.x_2 + 10), 96)*3)) + rx.outer.outer)]
- attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- kernel.shared_1[(threadIdx.x_2 + 588)] = kernel[(((((blockIdx.x*36864) + (floordiv((floordiv(threadIdx.x_2, 2) + 294), 48)*4608)) + cse_var_1) + (floormod((threadIdx.x_2 + 12), 96)*3)) + rx.outer.outer)]
- attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- if @tir.likely((threadIdx.x_2 < 82), dtype=bool) {
- kernel.shared_1[(threadIdx.x_2 + 686)] = kernel[(((((blockIdx.x*36864) + (floordiv((floordiv(threadIdx.x_2, 2) + 343), 48)*4608)) + cse_var_1) + (floormod((threadIdx.x_2 + 14), 96)*3)) + rx.outer.outer)]
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1356)] = kernel[((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 20), 48)*3))]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1357)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 20), 48)*3)) + 1)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1358)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 20), 48)*3)) + 2)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1359)] = kernel[((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 21), 48)*3))]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1360)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 21), 48)*3)) + 1)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1361)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 21), 48)*3)) + 2)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1362)] = kernel[((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 22), 48)*3))]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1363)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 22), 48)*3)) + 1)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1364)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 22), 48)*3)) + 2)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1365)] = kernel[((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 23), 48)*3))]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1366)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 23), 48)*3)) + 1)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1367)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 23), 48)*3)) + 2)]
+ }
+ }
+ for (rc.outer.inner: int32, 0, 2) {
+ for (rx.outer.inner: int32, 0, 3) {
+ for (rc.inner: int32, 0, 8) {
+ conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner)]))
+ conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 1)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner)]))
+ conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 2)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner)]))
+ conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 3)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner)]))
+ conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 4)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner)]))
+ conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 5)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner)]))
+ conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 6)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner)]))
+ conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 144)]))
+ conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 1)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 144)]))
+ conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 2)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 144)]))
+ conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 3)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 144)]))
+ conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 4)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 144)]))
+ conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 5)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 144)]))
+ conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 6)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 144)]))
+ conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 9)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 3)]))
+ conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 10)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 3)]))
+ conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 11)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 3)]))
+ conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 12)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 3)]))
+ conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 13)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 3)]))
+ conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 14)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 3)]))
+ conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 15)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 3)]))
+ conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 9)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 147)]))
+ conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 10)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 147)]))
+ conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 11)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 147)]))
+ conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 12)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 147)]))
+ conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 13)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 147)]))
+ conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 14)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 147)]))
+ conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 15)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 147)]))
+ conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 18)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 6)]))
+ conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 19)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 6)]))
+ conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 20)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 6)]))
+ conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 21)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 6)]))
+ conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 22)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 6)]))
+ conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 23)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 6)]))
+ conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 24)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 6)]))
+ conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 18)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 150)]))
+ conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 19)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 150)]))
+ conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 20)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 150)]))
+ conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 21)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 150)]))
+ conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 22)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 150)]))
+ conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 23)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 150)]))
+ conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 24)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 150)]))
+ }
}
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[floormod(threadIdx.x, 49)]*kernel.shared_1[(floordiv(threadIdx.x, 49)*192)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[floormod(threadIdx.x, 49)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 384)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[floormod(threadIdx.x, 49)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 96)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[floormod(threadIdx.x, 49)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 480)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 7)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 1)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 7)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 385)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 7)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 97)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 7)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 481)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 14)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 2)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 14)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 386)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 14)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 98)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 14)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 482)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 63)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 3)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 63)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 387)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 63)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 99)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 63)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 483)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 70)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 4)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 70)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 388)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 70)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 100)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 70)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 484)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 77)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 5)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 77)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 389)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 77)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 101)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 77)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 485)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 6)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 390)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 102)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 486)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 133)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 7)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 133)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 391)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 133)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 103)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 133)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 487)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 140)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 8)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 140)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 392)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 140)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 104)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 140)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 488)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 9)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 393)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 105)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 489)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 196)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 10)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 196)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 394)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 196)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 106)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 196)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 490)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 203)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 11)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 203)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 395)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 203)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 107)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 203)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 491)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 252)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 12)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 252)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 396)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 252)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 108)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 252)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 492)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 259)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 13)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 259)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 397)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 259)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 109)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 259)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 493)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 266)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 14)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 266)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 398)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 266)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 110)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 266)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 494)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 315)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 15)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 315)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 399)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 315)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 111)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 315)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 495)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 322)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 16)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 322)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 400)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 322)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 112)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 322)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 496)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 329)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 17)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 329)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 401)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 329)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 113)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 329)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 497)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 378)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 18)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 378)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 402)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 378)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 114)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 378)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 498)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 385)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 19)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 385)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 403)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 385)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 115)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 385)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 499)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 392)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 20)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 392)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 404)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 392)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 116)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 392)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 500)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 441)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 21)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 441)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 405)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 441)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 117)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 441)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 501)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 448)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 22)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 448)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 406)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 448)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 118)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 448)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 502)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 455)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 23)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 455)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 407)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 455)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 119)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 455)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 503)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 504)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 24)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 504)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 408)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 504)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 120)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 504)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 504)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 511)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 25)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 511)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 409)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 511)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 121)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 511)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 505)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 518)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 26)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 518)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 410)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 518)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 122)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 518)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 506)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 567)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 27)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 567)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 411)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 567)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 123)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 567)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 507)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 574)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 28)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 574)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 412)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 574)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 124)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 574)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 508)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 581)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 29)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 581)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 413)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 581)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 125)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 581)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 509)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 630)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 30)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 630)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 414)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 630)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 126)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 630)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 510)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 637)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 31)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 637)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 415)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 637)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 127)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 637)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 511)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 644)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 32)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 644)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 416)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 644)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 128)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 644)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 512)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 693)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 33)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 693)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 417)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 693)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 129)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 693)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 513)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 700)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 34)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 700)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 418)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 700)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 130)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 700)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 514)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 707)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 35)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 707)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 419)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 707)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 131)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 707)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 515)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 756)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 36)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 756)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 420)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 756)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 132)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 756)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 516)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 763)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 37)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 763)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 421)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 763)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 133)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 763)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 517)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 770)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 38)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 770)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 422)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 770)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 134)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 770)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 518)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 819)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 39)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 819)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 423)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 819)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 135)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 819)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 519)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 826)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 40)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 826)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 424)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 826)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 136)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 826)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 520)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 833)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 41)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 833)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 425)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 833)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 137)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 833)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 521)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 882)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 42)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 882)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 426)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 882)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 138)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 882)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 522)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 889)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 43)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 889)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 427)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 889)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 139)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 889)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 523)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 896)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 44)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 896)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 428)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 896)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 140)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 896)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 524)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 945)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 45)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 945)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 429)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 945)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 141)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 945)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 525)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 952)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 46)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 952)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 430)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 952)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 142)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 952)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 526)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 959)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 47)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 959)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 431)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 959)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 143)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 959)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 527)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1008)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 48)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1008)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 432)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1008)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 144)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1008)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 528)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1015)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 49)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1015)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 433)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1015)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 145)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1015)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 529)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1022)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 50)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1022)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 434)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1022)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 146)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1022)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 530)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1071)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 51)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1071)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 435)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1071)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 147)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1071)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 531)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1078)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 52)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1078)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 436)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1078)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 148)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1078)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 532)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1085)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 53)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1085)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 437)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1085)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 149)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1085)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 533)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1134)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 54)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1134)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 438)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1134)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 150)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1134)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 534)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1141)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 55)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1141)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 439)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1141)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 151)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1141)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 535)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1148)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 56)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1148)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 440)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1148)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 152)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1148)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 536)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1197)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 57)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1197)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 441)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1197)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 153)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1197)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 537)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1204)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 58)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1204)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 442)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1204)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 154)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1204)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 538)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1211)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 59)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1211)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 443)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1211)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 155)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1211)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 539)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1260)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 60)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1260)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 444)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1260)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 156)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1260)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 540)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1267)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 61)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1267)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 445)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1267)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 157)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1267)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 541)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1274)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 62)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1274)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 446)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1274)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 158)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1274)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 542)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1323)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 63)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1323)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 447)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1323)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 159)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1323)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 543)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1330)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 64)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1330)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 448)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1330)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 160)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1330)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 544)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1337)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 65)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1337)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 449)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1337)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 161)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1337)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 545)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1386)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 66)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1386)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 450)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1386)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 162)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1386)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 546)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1393)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 67)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1393)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 451)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1393)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 163)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1393)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 547)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1400)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 68)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1400)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 452)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1400)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 164)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1400)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 548)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1449)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 69)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1449)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 453)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1449)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 165)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1449)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 549)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1456)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 70)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1456)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 454)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1456)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 166)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1456)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 550)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1463)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 71)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1463)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 455)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1463)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 167)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1463)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 551)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1512)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 72)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1512)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 456)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1512)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 168)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1512)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 552)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1519)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 73)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1519)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 457)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1519)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 169)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1519)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 553)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1526)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 74)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1526)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 458)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1526)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 170)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1526)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 554)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1575)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 75)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1575)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 459)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1575)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 171)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1575)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 555)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1582)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 76)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1582)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 460)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1582)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 172)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1582)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 556)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1589)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 77)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1589)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 461)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1589)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 173)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1589)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 557)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1638)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 78)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1638)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 462)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1638)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 174)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1638)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 558)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1645)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 79)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1645)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 463)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1645)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 175)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1645)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 559)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1652)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 80)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1652)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 464)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1652)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 176)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1652)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 560)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1701)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 81)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1701)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 465)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1701)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 177)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1701)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 561)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1708)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 82)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1708)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 466)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1708)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 178)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1708)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 562)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1715)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 83)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1715)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 467)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1715)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 179)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1715)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 563)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1764)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 84)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1764)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 468)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1764)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 180)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1764)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 564)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1771)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 85)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1771)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 469)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1771)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 181)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1771)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 565)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1778)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 86)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1778)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 470)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1778)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 182)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1778)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 566)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1827)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 87)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1827)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 471)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1827)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 183)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1827)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 567)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1834)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 88)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1834)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 472)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1834)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 184)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1834)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 568)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1841)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 89)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1841)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 473)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1841)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 185)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1841)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 569)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1890)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 90)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1890)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 474)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1890)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 186)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1890)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 570)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1897)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 91)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1897)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 475)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1897)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 187)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1897)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 571)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1904)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 92)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1904)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 476)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1904)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 188)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1904)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 572)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1953)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 93)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1953)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 477)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1953)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 189)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1953)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 573)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1960)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 94)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1960)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 478)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1960)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 190)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1960)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 574)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1967)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 95)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1967)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 479)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1967)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 191)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1967)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 575)]))
}
}
}
for (i1.inner: int32, 0, 2) {
- compute[((((blockIdx.x*392) + (floordiv(threadIdx.x, 49)*98)) + (i1.inner*49)) + floormod(threadIdx.x, 49))] = max((conv2d_nchw_1[i1.inner] + bias[(((blockIdx.x*8) + (floordiv(threadIdx.x, 49)*2)) + i1.inner)]), 0f32)
- compute[(((((blockIdx.x*392) + (floordiv(threadIdx.x, 49)*98)) + (i1.inner*49)) + floormod(threadIdx.x, 49)) + 196)] = max((conv2d_nchw_1[(i1.inner + 2)] + bias[((((blockIdx.x*8) + (floordiv(threadIdx.x, 49)*2)) + i1.inner) + 4)]), 0f32)
+ for (i3.inner: int32, 0, 7) {
+ compute[(((((blockIdx.x*784) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(threadIdx.x, 7)*7)) + i3.inner)] = max((conv2d_nchw_1[((i1.inner*7) + i3.inner)] + bias[(((blockIdx.x*16) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
+ }
}
}
}
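The epilogue loop above fuses the bias add and ReLU into the output write, with each CUDA thread mapping its small accumulator array to flat output indices via `floordiv`/`floormod` arithmetic on `threadIdx.x`. A minimal plain-Python sketch of that index mapping follows; it is not TVM API, and the tile sizes (2 channels x 7 columns per thread) and the helper name `relu_bias_epilogue` are illustrative assumptions taken from the new (`+`) side of the diff.

```python
# Hypothetical sketch of the fused bias-add + ReLU epilogue emitted by the
# new schedule (index arithmetic only, not TVM code).

def relu_bias_epilogue(conv_acc, bias, block_idx, thread_idx):
    """Map one thread's 2x7 accumulator tile to flat output elements.

    conv_acc: list of 14 partial sums held by this thread.
    bias:     flat per-output-channel bias vector.
    Returns {flat_output_index: value} for this thread.
    """
    out = {}
    for i1_inner in range(2):          # two output channels per thread
        for i3_inner in range(7):      # seven output columns per thread
            dst = (block_idx * 784
                   + (thread_idx // 7) * 98    # floordiv(threadIdx.x, 7)
                   + i1_inner * 49
                   + (thread_idx % 7) * 7      # floormod(threadIdx.x, 7)
                   + i3_inner)
            ch = block_idx * 16 + (thread_idx // 7) * 2 + i1_inner
            val = conv_acc[i1_inner * 7 + i3_inner] + bias[ch]
            out[dst] = max(val, 0.0)           # ReLU clamps negatives to 0
    return out
```

Each thread writes 14 contiguous-per-row elements; the `max(..., 0.0)` corresponds to the `max(..., 0f32)` in the generated TIR.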
@@ -740,7 +500,7 @@ We build the binary and check its correctness and performance.
.. code-block:: none
- Execution time of this operator: 0.298 ms
+ Execution time of this operator: 0.236 ms
@@ -784,36 +544,36 @@ They can be used for debugging and learning the behavior of the auto-scheduler.
conv2d_nchw_nn_o_o_i, conv2d_nchw_nn_o_i = s[conv2d_nchw].split(conv2d_nchw_nn_o_i, factor=1)
conv2d_nchw_nn_o_o_o_i, conv2d_nchw_nn_o_o_i = s[conv2d_nchw].split(conv2d_nchw_nn_o_o_i, factor=1)
conv2d_nchw_nn_o_o_o_o, conv2d_nchw_nn_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_nn_o_o_o_i, factor=1)
- conv2d_nchw_ff_o_i, conv2d_nchw_ff_i = s[conv2d_nchw].split(conv2d_nchw_ff, factor=1)
- conv2d_nchw_ff_o_o_i, conv2d_nchw_ff_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_i, factor=2)
- conv2d_nchw_ff_o_o_o_i, conv2d_nchw_ff_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_i, factor=2)
- conv2d_nchw_ff_o_o_o_o, conv2d_nchw_ff_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_o_i, factor=2)
+ conv2d_nchw_ff_o_i, conv2d_nchw_ff_i = s[conv2d_nchw].split(conv2d_nchw_ff, factor=2)
+ conv2d_nchw_ff_o_o_i, conv2d_nchw_ff_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_i, factor=1)
+ conv2d_nchw_ff_o_o_o_i, conv2d_nchw_ff_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_i, factor=8)
+ conv2d_nchw_ff_o_o_o_o, conv2d_nchw_ff_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_o_i, factor=1)
conv2d_nchw_yy_o_i, conv2d_nchw_yy_i = s[conv2d_nchw].split(conv2d_nchw_yy, factor=1)
conv2d_nchw_yy_o_o_i, conv2d_nchw_yy_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_i, factor=1)
conv2d_nchw_yy_o_o_o_i, conv2d_nchw_yy_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_i, factor=7)
conv2d_nchw_yy_o_o_o_o, conv2d_nchw_yy_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_o_i, factor=1)
- conv2d_nchw_xx_o_i, conv2d_nchw_xx_i = s[conv2d_nchw].split(conv2d_nchw_xx, factor=1)
+ conv2d_nchw_xx_o_i, conv2d_nchw_xx_i = s[conv2d_nchw].split(conv2d_nchw_xx, factor=7)
conv2d_nchw_xx_o_o_i, conv2d_nchw_xx_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_i, factor=1)
- conv2d_nchw_xx_o_o_o_i, conv2d_nchw_xx_o_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_o_i, factor=7)
+ conv2d_nchw_xx_o_o_o_i, conv2d_nchw_xx_o_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_o_i, factor=1)
conv2d_nchw_xx_o_o_o_o, conv2d_nchw_xx_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_o_o_i, factor=1)
- conv2d_nchw_rc_o_i, conv2d_nchw_rc_i = s[conv2d_nchw].split(conv2d_nchw_rc, factor=1)
- conv2d_nchw_rc_o_o, conv2d_nchw_rc_o_i = s[conv2d_nchw].split(conv2d_nchw_rc_o_i, factor=32)
- conv2d_nchw_ry_o_i, conv2d_nchw_ry_i = s[conv2d_nchw].split(conv2d_nchw_ry, factor=1)
- conv2d_nchw_ry_o_o, conv2d_nchw_ry_o_i = s[conv2d_nchw].split(conv2d_nchw_ry_o_i, factor=3)
+ conv2d_nchw_rc_o_i, conv2d_nchw_rc_i = s[conv2d_nchw].split(conv2d_nchw_rc, factor=8)
+ conv2d_nchw_rc_o_o, conv2d_nchw_rc_o_i = s[conv2d_nchw].split(conv2d_nchw_rc_o_i, factor=2)
+ conv2d_nchw_ry_o_i, conv2d_nchw_ry_i = s[conv2d_nchw].split(conv2d_nchw_ry, factor=3)
+ conv2d_nchw_ry_o_o, conv2d_nchw_ry_o_i = s[conv2d_nchw].split(conv2d_nchw_ry_o_i, factor=1)
conv2d_nchw_rx_o_i, conv2d_nchw_rx_i = s[conv2d_nchw].split(conv2d_nchw_rx, factor=1)
- conv2d_nchw_rx_o_o, conv2d_nchw_rx_o_i = s[conv2d_nchw].split(conv2d_nchw_rx_o_i, factor=1)
+ conv2d_nchw_rx_o_o, conv2d_nchw_rx_o_i = s[conv2d_nchw].split(conv2d_nchw_rx_o_i, factor=3)
s[conv2d_nchw].reorder(conv2d_nchw_nn_o_o_o_o, conv2d_nchw_ff_o_o_o_o, conv2d_nchw_yy_o_o_o_o, conv2d_nchw_xx_o_o_o_o, conv2d_nchw_nn_o_o_o_i, conv2d_nchw_ff_o_o_o_i, conv2d_nchw_yy_o_o_o_i, conv2d_nchw_xx_o_o_o_i, conv2d_nchw_nn_o_o_i, conv2d_nchw_ff_o_o_i, conv2d_nchw_yy_o_o_i, conv2d_nchw_xx_o_o_i, conv2d_nchw_rc_o_o, conv2d_nchw_ry_o_o, conv2d_nchw_rx_o_o, conv2d_nchw_rc_o_i, conv2d_nchw_ry_o_i, conv2d_nchw_rx_o_i, conv2d_nchw_nn_o_i, conv2d_nchw_ff_o_i, conv2d_nchw_yy_o_i, conv2 [...]
compute_i0_o_i, compute_i0_i = s[compute].split(compute_i0, factor=1)
compute_i0_o_o_i, compute_i0_o_i = s[compute].split(compute_i0_o_i, factor=1)
compute_i0_o_o_o, compute_i0_o_o_i = s[compute].split(compute_i0_o_o_i, factor=1)
compute_i1_o_i, compute_i1_i = s[compute].split(compute_i1, factor=2)
- compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=2)
- compute_i1_o_o_o, compute_i1_o_o_i = s[compute].split(compute_i1_o_o_i, factor=2)
+ compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=8)
+ compute_i1_o_o_o, compute_i1_o_o_i = s[compute].split(compute_i1_o_o_i, factor=1)
compute_i2_o_i, compute_i2_i = s[compute].split(compute_i2, factor=1)
compute_i2_o_o_i, compute_i2_o_i = s[compute].split(compute_i2_o_i, factor=7)
compute_i2_o_o_o, compute_i2_o_o_i = s[compute].split(compute_i2_o_o_i, factor=1)
- compute_i3_o_i, compute_i3_i = s[compute].split(compute_i3, factor=1)
- compute_i3_o_o_i, compute_i3_o_i = s[compute].split(compute_i3_o_i, factor=7)
+ compute_i3_o_i, compute_i3_i = s[compute].split(compute_i3, factor=7)
+ compute_i3_o_o_i, compute_i3_o_i = s[compute].split(compute_i3_o_i, factor=1)
compute_i3_o_o_o, compute_i3_o_o_i = s[compute].split(compute_i3_o_o_i, factor=1)
s[compute].reorder(compute_i0_o_o_o, compute_i1_o_o_o, compute_i2_o_o_o, compute_i3_o_o_o, compute_i0_o_o_i, compute_i1_o_o_i, compute_i2_o_o_i, compute_i3_o_o_i, compute_i0_o_i, compute_i1_o_i, compute_i2_o_i, compute_i3_o_i, compute_i0_i, compute_i1_i, compute_i2_i, compute_i3_i)
s[conv2d_nchw].compute_at(s[compute], compute_i3_o_i)
@@ -831,16 +591,16 @@ They can be used for debugging and learning the behavior of the auto-scheduler.
compute_i0_o_i_i1_o_i_fused_i2_o_i_fused_i3_o_i_fused = s[compute].fuse(compute_i0_o_i, compute_i1_o_i, compute_i2_o_i, compute_i3_o_i)
s[compute].bind(compute_i0_o_i_i1_o_i_fused_i2_o_i_fused_i3_o_i_fused, te.thread_axis("threadIdx.x"))
kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[kernel_shared].fuse(kernel_shared_ax0, kernel_shared_ax1, kernel_shared_ax2, kernel_shared_ax3)
- kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
+ kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=24)
s[kernel_shared].vectorize(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
- kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=98)
+ kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=56)
s[kernel_shared].bind(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis("threadIdx.x"))
pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[pad_temp_shared].fuse(pad_temp_shared_ax0, pad_temp_shared_ax1, pad_temp_shared_ax2, pad_temp_shared_ax3)
pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
s[pad_temp_shared].vectorize(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
- pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=98)
+ pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=56)
s[pad_temp_shared].bind(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis("threadIdx.x"))
- s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, "auto_unroll_max_step", 512)
+ s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, "auto_unroll_max_step", 64)
s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, "unroll_explicit", True)
CUDA source code:
@@ -858,440 +618,202 @@ They can be used for debugging and learning the behavior of the auto-scheduler.
#define int64_t long long
#define uint64_t unsigned long long
#endif
- extern "C" __global__ void __launch_bounds__(98) default_function_kernel0(float* __restrict__ data, float* __restrict__ kernel, float* __restrict__ compute, float* __restrict__ bias) {
- float conv2d_nchw[4];
- __shared__ float pad_temp_shared[2016];
- __shared__ float kernel_shared[768];
+ extern "C" __global__ void __launch_bounds__(56) default_function_kernel0(float* __restrict__ data, float* __restrict__ kernel, float* __restrict__ compute, float* __restrict__ bias) {
+ float conv2d_nchw[14];
+ __shared__ float pad_temp_shared[1296];
+ __shared__ float kernel_shared[2304];
conv2d_nchw[0] = 0.000000e+00f;
- conv2d_nchw[2] = 0.000000e+00f;
conv2d_nchw[1] = 0.000000e+00f;
+ conv2d_nchw[2] = 0.000000e+00f;
conv2d_nchw[3] = 0.000000e+00f;
- for (int rc_outer_outer = 0; rc_outer_outer < 16; ++rc_outer_outer) {
- for (int rx_outer_outer = 0; rx_outer_outer < 3; ++rx_outer_outer) {
- __syncthreads();
- pad_temp_shared[((int)threadIdx.x)] = (((((7 <= (((int)threadIdx.x) % 63)) && ((((int)threadIdx.x) % 63) < 56)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[(((((rc_outer_outer * 1568) + ((((int)threadIdx.x) / 63) * 49)) + rx_outer_outer) + (((int)threadIdx.x) % 63)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 98)] = (((((1 <= (((((int)threadIdx.x) / 7) + 5) % 9)) && ((((((int)threadIdx.x) / 7) + 5) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 98) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 5) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 196)] = (((((1 <= (((((int)threadIdx.x) / 7) + 1) % 9)) && ((((((int)threadIdx.x) / 7) + 1) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 196) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 1) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 294)] = (((((1 <= (((((int)threadIdx.x) / 7) + 6) % 9)) && ((((((int)threadIdx.x) / 7) + 6) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 294) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 6) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 392)] = (((((1 <= (((((int)threadIdx.x) / 7) + 2) % 9)) && ((((((int)threadIdx.x) / 7) + 2) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 392) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 2) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 490)] = (((((1 <= (((((int)threadIdx.x) / 7) + 7) % 9)) && ((((((int)threadIdx.x) / 7) + 7) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 490) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 7) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 588)] = (((((1 <= (((((int)threadIdx.x) / 7) + 3) % 9)) && ((((((int)threadIdx.x) / 7) + 3) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 588) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 3) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 686)] = (((((1 <= (((((int)threadIdx.x) / 7) + 8) % 9)) && ((((((int)threadIdx.x) / 7) + 8) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 686) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 8) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 784)] = (((((1 <= (((((int)threadIdx.x) / 7) + 4) % 9)) && ((((((int)threadIdx.x) / 7) + 4) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 784) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 4) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 882)] = (((((7 <= (((int)threadIdx.x) % 63)) && ((((int)threadIdx.x) % 63) < 56)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[(((((rc_outer_outer * 1568) + ((((int)threadIdx.x) / 63) * 49)) + rx_outer_outer) + (((int)threadIdx.x) % 63)) + 678)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 980)] = (((((1 <= (((((int)threadIdx.x) / 7) + 5) % 9)) && ((((((int)threadIdx.x) / 7) + 5) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 980) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 5) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 1078)] = (((((1 <= (((((int)threadIdx.x) / 7) + 1) % 9)) && ((((((int)threadIdx.x) / 7) + 1) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 1078) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 1) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 1176)] = (((((1 <= (((((int)threadIdx.x) / 7) + 6) % 9)) && ((((((int)threadIdx.x) / 7) + 6) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 1176) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 6) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 1274)] = (((((1 <= (((((int)threadIdx.x) / 7) + 2) % 9)) && ((((((int)threadIdx.x) / 7) + 2) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 1274) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 2) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 1372)] = (((((1 <= (((((int)threadIdx.x) / 7) + 7) % 9)) && ((((((int)threadIdx.x) / 7) + 7) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 1372) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 7) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 1470)] = (((((1 <= (((((int)threadIdx.x) / 7) + 3) % 9)) && ((((((int)threadIdx.x) / 7) + 3) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 1470) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 3) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 1568)] = (((((1 <= (((((int)threadIdx.x) / 7) + 8) % 9)) && ((((((int)threadIdx.x) / 7) + 8) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 1568) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 8) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 1666)] = (((((1 <= (((((int)threadIdx.x) / 7) + 4) % 9)) && ((((((int)threadIdx.x) / 7) + 4) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 1666) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 4) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 1764)] = (((((7 <= (((int)threadIdx.x) % 63)) && ((((int)threadIdx.x) % 63) < 56)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[(((((rc_outer_outer * 1568) + ((((int)threadIdx.x) / 63) * 49)) + rx_outer_outer) + (((int)threadIdx.x) % 63)) + 1364)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 1862)] = (((((1 <= (((((int)threadIdx.x) / 7) + 5) % 9)) && ((((((int)threadIdx.x) / 7) + 5) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 1862) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 5) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- if (((int)threadIdx.x) < 56) {
- pad_temp_shared[(((int)threadIdx.x) + 1960)] = ((((((int)threadIdx.x) < 49) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 1960) / 63) * 49)) + (((((int)threadIdx.x) / 7) + 1) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- }
- kernel_shared[((int)threadIdx.x)] = kernel[(((((((int)blockIdx.x) * 36864) + ((((int)threadIdx.x) / 96) * 4608)) + (rc_outer_outer * 288)) + ((((int)threadIdx.x) % 96) * 3)) + rx_outer_outer)];
- kernel_shared[(((int)threadIdx.x) + 98)] = kernel[(((((((int)blockIdx.x) * 36864) + (((((int)threadIdx.x) + 98) / 96) * 4608)) + (rc_outer_outer * 288)) + (((((int)threadIdx.x) + 2) % 96) * 3)) + rx_outer_outer)];
- kernel_shared[(((int)threadIdx.x) + 196)] = kernel[(((((((int)blockIdx.x) * 36864) + (((((int)threadIdx.x) + 196) / 96) * 4608)) + (rc_outer_outer * 288)) + (((((int)threadIdx.x) + 4) % 96) * 3)) + rx_outer_outer)];
- kernel_shared[(((int)threadIdx.x) + 294)] = kernel[(((((((int)blockIdx.x) * 36864) + (((((int)threadIdx.x) + 294) / 96) * 4608)) + (rc_outer_outer * 288)) + (((((int)threadIdx.x) + 6) % 96) * 3)) + rx_outer_outer)];
- kernel_shared[(((int)threadIdx.x) + 392)] = kernel[(((((((int)blockIdx.x) * 36864) + (((((int)threadIdx.x) + 392) / 96) * 4608)) + (rc_outer_outer * 288)) + (((((int)threadIdx.x) + 8) % 96) * 3)) + rx_outer_outer)];
- kernel_shared[(((int)threadIdx.x) + 490)] = kernel[(((((((int)blockIdx.x) * 36864) + (((((int)threadIdx.x) + 490) / 96) * 4608)) + (rc_outer_outer * 288)) + (((((int)threadIdx.x) + 10) % 96) * 3)) + rx_outer_outer)];
- kernel_shared[(((int)threadIdx.x) + 588)] = kernel[(((((((int)blockIdx.x) * 36864) + (((((int)threadIdx.x) + 588) / 96) * 4608)) + (rc_outer_outer * 288)) + (((((int)threadIdx.x) + 12) % 96) * 3)) + rx_outer_outer)];
- if (((int)threadIdx.x) < 82) {
- kernel_shared[(((int)threadIdx.x) + 686)] = kernel[(((((((int)blockIdx.x) * 36864) + (((((int)threadIdx.x) + 686) / 96) * 4608)) + (rc_outer_outer * 288)) + ((((int)threadIdx.x) + 14) * 3)) + rx_outer_outer)];
+ conv2d_nchw[4] = 0.000000e+00f;
+ conv2d_nchw[5] = 0.000000e+00f;
+ conv2d_nchw[6] = 0.000000e+00f;
+ conv2d_nchw[7] = 0.000000e+00f;
+ conv2d_nchw[8] = 0.000000e+00f;
+ conv2d_nchw[9] = 0.000000e+00f;
+ conv2d_nchw[10] = 0.000000e+00f;
+ conv2d_nchw[11] = 0.000000e+00f;
+ conv2d_nchw[12] = 0.000000e+00f;
+ conv2d_nchw[13] = 0.000000e+00f;
+ for (int rc_outer_outer = 0; rc_outer_outer < 32; ++rc_outer_outer) {
+ __syncthreads();
+ pad_temp_shared[((int)threadIdx.x)] = ((((9 <= ((int)threadIdx.x)) && (1 <= (((int)threadIdx.x) % 9))) && ((((int)threadIdx.x) % 9) < 8)) ? data[((((rc_outer_outer * 784) + ((((int)threadIdx.x) / 9) * 7)) + (((int)threadIdx.x) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 56)] = (((((9 <= ((((int)threadIdx.x) + 56) % 81)) && (((((int)threadIdx.x) + 56) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 2) % 9))) && (((((int)threadIdx.x) + 2) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 56) / 81) * 49)) + ((((((int)threadIdx.x) + 56) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 2) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 112)] = (((((9 <= ((((int)threadIdx.x) + 31) % 81)) && (((((int)threadIdx.x) + 31) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 4) % 9))) && (((((int)threadIdx.x) + 4) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 112) / 81) * 49)) + ((((((int)threadIdx.x) + 31) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 4) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 168)] = ((((9 <= ((((int)threadIdx.x) + 6) % 81)) && (1 <= ((((int)threadIdx.x) + 6) % 9))) && (((((int)threadIdx.x) + 6) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 168) / 81) * 49)) + ((((((int)threadIdx.x) + 6) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 6) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 224)] = (((((9 <= ((((int)threadIdx.x) + 62) % 81)) && (((((int)threadIdx.x) + 62) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 8) % 9))) && (((((int)threadIdx.x) + 8) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 224) / 81) * 49)) + ((((((int)threadIdx.x) + 62) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 8) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 280)] = (((((9 <= ((((int)threadIdx.x) + 37) % 81)) && (((((int)threadIdx.x) + 37) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 1) % 9))) && (((((int)threadIdx.x) + 1) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 280) / 81) * 49)) + ((((((int)threadIdx.x) + 37) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 1) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 336)] = (((1 <= ((((int)threadIdx.x) + 3) % 9)) && (((((int)threadIdx.x) + 3) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 336) / 81) * 49)) + ((((((int)threadIdx.x) + 12) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 3) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 392)] = (((((9 <= ((((int)threadIdx.x) + 68) % 81)) && (((((int)threadIdx.x) + 68) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 5) % 9))) && (((((int)threadIdx.x) + 5) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 392) / 81) * 49)) + ((((((int)threadIdx.x) + 68) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 5) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 448)] = (((((9 <= ((((int)threadIdx.x) + 43) % 81)) && (((((int)threadIdx.x) + 43) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 7) % 9))) && (((((int)threadIdx.x) + 7) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 448) / 81) * 49)) + ((((((int)threadIdx.x) + 43) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 7) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 504)] = ((((((int)threadIdx.x) < 54) && (1 <= (((int)threadIdx.x) % 9))) && ((((int)threadIdx.x) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 504) / 81) * 49)) + (((((int)threadIdx.x) / 9) + 2) * 7)) + (((int)threadIdx.x) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 560)] = (((((9 <= ((((int)threadIdx.x) + 74) % 81)) && (((((int)threadIdx.x) + 74) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 2) % 9))) && (((((int)threadIdx.x) + 2) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 560) / 81) * 49)) + ((((((int)threadIdx.x) + 74) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 2) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 616)] = (((((9 <= ((((int)threadIdx.x) + 49) % 81)) && (((((int)threadIdx.x) + 49) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 4) % 9))) && (((((int)threadIdx.x) + 4) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 616) / 81) * 49)) + ((((((int)threadIdx.x) + 49) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 4) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 672)] = ((((((int)threadIdx.x) < 48) && (1 <= ((((int)threadIdx.x) + 6) % 9))) && (((((int)threadIdx.x) + 6) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 672) / 81) * 49)) + ((((((int)threadIdx.x) + 24) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 6) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 728)] = (((((9 <= ((((int)threadIdx.x) + 80) % 81)) && (((((int)threadIdx.x) + 80) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 8) % 9))) && (((((int)threadIdx.x) + 8) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 728) / 81) * 49)) + ((((((int)threadIdx.x) + 80) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 8) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 784)] = (((((9 <= ((((int)threadIdx.x) + 55) % 81)) && (((((int)threadIdx.x) + 55) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 1) % 9))) && (((((int)threadIdx.x) + 1) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 784) / 81) * 49)) + ((((((int)threadIdx.x) + 55) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 1) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 840)] = (((((9 <= ((((int)threadIdx.x) + 30) % 81)) && (((((int)threadIdx.x) + 30) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 3) % 9))) && (((((int)threadIdx.x) + 3) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 840) / 81) * 49)) + ((((((int)threadIdx.x) + 30) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 3) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 896)] = ((((9 <= ((((int)threadIdx.x) + 5) % 81)) && (1 <= ((((int)threadIdx.x) + 5) % 9))) && (((((int)threadIdx.x) + 5) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 896) / 81) * 49)) + ((((((int)threadIdx.x) + 5) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 5) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 952)] = (((((9 <= ((((int)threadIdx.x) + 61) % 81)) && (((((int)threadIdx.x) + 61) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 7) % 9))) && (((((int)threadIdx.x) + 7) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 952) / 81) * 49)) + ((((((int)threadIdx.x) + 61) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 7) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 1008)] = (((((1 <= (((((int)threadIdx.x) / 9) + 4) % 9)) && (((((int)threadIdx.x) + 36) % 81) < 72)) && (1 <= (((int)threadIdx.x) % 9))) && ((((int)threadIdx.x) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 1008) / 81) * 49)) + ((((((int)threadIdx.x) / 9) + 4) % 9) * 7)) + (((int)threadIdx.x) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 1064)] = (((1 <= ((((int)threadIdx.x) + 2) % 9)) && (((((int)threadIdx.x) + 2) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 1064) / 81) * 49)) + ((((((int)threadIdx.x) + 11) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 2) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 1120)] = (((((9 <= ((((int)threadIdx.x) + 67) % 81)) && (((((int)threadIdx.x) + 67) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 4) % 9))) && (((((int)threadIdx.x) + 4) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 1120) / 81) * 49)) + ((((((int)threadIdx.x) + 67) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 4) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 1176)] = (((((9 <= ((((int)threadIdx.x) + 42) % 81)) && (((((int)threadIdx.x) + 42) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 6) % 9))) && (((((int)threadIdx.x) + 6) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 1176) / 81) * 49)) + ((((((int)threadIdx.x) + 42) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 6) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 1232)] = ((((((int)threadIdx.x) < 55) && (1 <= ((((int)threadIdx.x) + 8) % 9))) && (((((int)threadIdx.x) + 8) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 1232) / 81) * 49)) + ((((((int)threadIdx.x) + 17) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 8) % 9)) - 8)] : 0.000000e+00f);
+ if (((int)threadIdx.x) < 8) {
+ pad_temp_shared[(((int)threadIdx.x) + 1288)] = 0.000000e+00f;
+ }
+ kernel_shared[(((int)threadIdx.x) * 24)] = kernel[((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24))];
+ kernel_shared[((((int)threadIdx.x) * 24) + 1)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 1)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 2)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 2)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 3)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 3)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 4)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 4)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 5)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 5)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 6)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 6)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 7)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 7)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 8)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 8)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 9)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 9)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 10)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 10)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 11)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 11)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 12)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 12)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 13)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 13)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 14)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 14)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 15)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 15)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 16)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 16)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 17)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 17)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 18)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 18)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 19)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 19)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 20)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 20)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 21)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 21)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 22)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 22)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 23)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 23)];
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1344)] = kernel[((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 16) % 48) * 3))];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1345)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 16) % 48) * 3)) + 1)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1346)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 16) % 48) * 3)) + 2)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1347)] = kernel[((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 17) % 48) * 3))];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1348)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 17) % 48) * 3)) + 1)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1349)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 17) % 48) * 3)) + 2)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1350)] = kernel[((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 18) % 48) * 3))];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1351)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 18) % 48) * 3)) + 1)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1352)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 18) % 48) * 3)) + 2)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1353)] = kernel[((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 19) % 48) * 3))];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1354)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 19) % 48) * 3)) + 1)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1355)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 19) % 48) * 3)) + 2)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1356)] = kernel[((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 20) % 48) * 3))];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1357)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 20) % 48) * 3)) + 1)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1358)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 20) % 48) * 3)) + 2)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1359)] = kernel[((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 21) % 48) * 3))];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1360)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 21) % 48) * 3)) + 1)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1361)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 21) % 48) * 3)) + 2)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1362)] = kernel[((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 22) % 48) * 3))];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1363)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 22) % 48) * 3)) + 1)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1364)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 22) % 48) * 3)) + 2)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1365)] = kernel[((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 23) % 48) * 3))];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1366)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 23) % 48) * 3)) + 1)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1367)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 23) % 48) * 3)) + 2)];
+ }
+ __syncthreads();
+ for (int rc_outer_inner = 0; rc_outer_inner < 2; ++rc_outer_inner) {
+ for (int rx_outer_inner = 0; rx_outer_inner < 3; ++rx_outer_inner) {
+ for (int rc_inner = 0; rc_inner < 8; ++rc_inner) {
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 1)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 2)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 3)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 4)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 5)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 6)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 144)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 1)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 144)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 2)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 144)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 3)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 144)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 4)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 144)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 5)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 144)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 6)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 144)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 9)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 3)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 10)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 3)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 11)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 3)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 12)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 3)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 13)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 3)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 14)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 3)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 15)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 3)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 9)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 147)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 10)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 147)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 11)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 147)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 12)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 147)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 13)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 147)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 14)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 147)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 15)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 147)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 18)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 6)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 19)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 6)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 20)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 6)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 21)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 6)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 22)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 6)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 23)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 6)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 24)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 6)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 18)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 150)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 19)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 150)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 20)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 150)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 21)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 150)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 22)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 150)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 23)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 150)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 24)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 150)]));
+ }
}
- __syncthreads();
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[(((int)threadIdx.x) % 49)] * kernel_shared[((((int)threadIdx.x) / 49) * 192)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[(((int)threadIdx.x) % 49)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 384)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[(((int)threadIdx.x) % 49)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 96)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[(((int)threadIdx.x) % 49)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 480)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 7)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 1)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 7)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 385)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 7)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 97)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 7)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 481)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 14)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 2)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 14)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 386)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 14)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 98)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 14)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 482)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 63)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 3)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 63)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 387)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 63)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 99)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 63)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 483)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 70)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 4)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 70)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 388)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 70)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 100)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 70)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 484)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 77)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 5)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 77)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 389)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 77)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 101)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 77)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 485)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 126)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 6)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 126)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 390)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 126)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 102)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 126)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 486)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 133)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 7)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 133)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 391)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 133)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 103)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 133)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 487)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 140)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 8)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 140)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 392)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 140)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 104)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 140)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 488)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 189)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 9)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 189)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 393)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 189)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 105)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 189)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 489)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 196)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 10)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 196)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 394)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 196)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 106)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 196)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 490)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 203)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 11)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 203)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 395)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 203)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 107)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 203)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 491)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 252)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 12)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 252)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 396)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 252)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 108)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 252)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 492)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 259)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 13)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 259)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 397)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 259)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 109)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 259)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 493)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 266)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 14)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 266)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 398)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 266)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 110)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 266)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 494)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 315)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 15)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 315)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 399)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 315)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 111)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 315)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 495)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 322)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 16)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 322)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 400)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 322)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 112)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 322)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 496)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 329)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 17)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 329)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 401)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 329)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 113)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 329)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 497)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 378)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 18)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 378)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 402)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 378)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 114)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 378)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 498)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 385)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 19)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 385)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 403)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 385)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 115)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 385)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 499)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 392)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 20)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 392)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 404)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 392)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 116)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 392)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 500)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 441)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 21)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 441)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 405)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 441)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 117)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 441)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 501)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 448)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 22)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 448)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 406)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 448)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 118)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 448)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 502)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 455)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 23)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 455)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 407)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 455)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 119)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 455)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 503)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 504)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 24)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 504)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 408)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 504)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 120)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 504)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 504)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 511)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 25)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 511)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 409)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 511)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 121)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 511)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 505)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 518)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 26)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 518)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 410)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 518)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 122)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 518)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 506)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 567)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 27)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 567)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 411)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 567)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 123)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 567)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 507)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 574)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 28)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 574)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 412)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 574)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 124)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 574)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 508)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 581)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 29)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 581)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 413)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 581)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 125)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 581)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 509)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 630)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 30)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 630)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 414)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 630)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 126)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 630)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 510)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 637)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 31)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 637)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 415)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 637)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 127)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 637)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 511)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 644)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 32)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 644)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 416)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 644)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 128)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 644)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 512)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 693)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 33)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 693)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 417)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 693)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 129)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 693)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 513)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 700)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 34)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 700)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 418)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 700)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 130)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 700)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 514)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 707)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 35)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 707)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 419)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 707)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 131)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 707)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 515)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 756)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 36)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 756)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 420)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 756)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 132)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 756)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 516)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 763)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 37)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 763)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 421)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 763)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 133)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 763)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 517)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 770)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 38)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 770)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 422)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 770)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 134)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 770)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 518)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 819)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 39)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 819)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 423)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 819)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 135)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 819)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 519)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 826)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 40)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 826)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 424)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 826)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 136)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 826)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 520)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 833)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 41)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 833)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 425)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 833)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 137)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 833)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 521)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 882)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 42)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 882)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 426)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 882)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 138)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 882)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 522)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 889)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 43)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 889)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 427)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 889)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 139)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 889)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 523)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 896)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 44)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 896)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 428)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 896)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 140)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 896)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 524)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 945)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 45)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 945)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 429)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 945)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 141)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 945)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 525)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 952)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 46)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 952)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 430)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 952)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 142)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 952)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 526)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 959)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 47)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 959)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 431)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 959)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 143)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 959)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 527)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1008)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 48)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1008)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 432)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1008)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 144)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1008)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 528)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1015)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 49)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1015)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 433)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1015)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 145)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1015)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 529)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1022)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 50)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1022)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 434)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1022)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 146)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1022)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 530)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1071)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 51)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1071)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 435)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1071)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 147)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1071)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 531)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1078)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 52)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1078)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 436)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1078)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 148)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1078)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 532)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1085)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 53)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1085)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 437)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1085)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 149)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1085)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 533)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1134)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 54)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1134)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 438)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1134)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 150)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1134)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 534)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1141)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 55)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1141)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 439)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1141)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 151)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1141)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 535)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1148)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 56)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1148)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 440)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1148)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 152)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1148)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 536)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1197)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 57)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1197)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 441)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1197)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 153)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1197)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 537)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1204)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 58)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1204)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 442)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1204)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 154)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1204)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 538)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1211)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 59)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1211)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 443)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1211)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 155)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1211)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 539)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1260)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 60)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1260)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 444)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1260)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 156)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1260)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 540)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1267)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 61)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1267)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 445)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1267)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 157)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1267)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 541)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1274)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 62)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1274)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 446)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1274)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 158)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1274)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 542)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1323)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 63)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1323)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 447)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1323)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 159)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1323)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 543)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1330)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 64)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1330)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 448)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1330)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 160)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1330)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 544)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1337)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 65)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1337)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 449)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1337)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 161)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1337)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 545)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1386)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 66)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1386)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 450)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1386)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 162)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1386)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 546)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1393)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 67)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1393)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 451)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1393)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 163)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1393)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 547)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1400)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 68)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1400)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 452)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1400)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 164)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1400)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 548)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1449)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 69)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1449)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 453)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1449)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 165)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1449)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 549)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1456)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 70)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1456)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 454)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1456)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 166)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1456)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 550)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1463)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 71)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1463)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 455)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1463)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 167)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1463)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 551)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1512)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 72)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1512)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 456)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1512)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 168)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1512)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 552)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1519)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 73)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1519)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 457)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1519)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 169)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1519)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 553)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1526)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 74)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1526)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 458)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1526)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 170)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1526)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 554)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1575)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 75)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1575)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 459)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1575)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 171)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1575)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 555)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1582)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 76)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1582)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 460)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1582)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 172)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1582)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 556)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1589)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 77)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1589)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 461)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1589)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 173)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1589)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 557)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1638)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 78)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1638)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 462)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1638)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 174)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1638)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 558)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1645)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 79)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1645)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 463)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1645)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 175)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1645)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 559)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1652)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 80)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1652)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 464)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1652)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 176)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1652)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 560)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1701)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 81)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1701)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 465)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1701)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 177)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1701)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 561)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1708)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 82)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1708)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 466)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1708)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 178)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1708)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 562)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1715)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 83)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1715)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 467)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1715)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 179)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1715)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 563)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1764)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 84)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1764)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 468)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1764)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 180)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1764)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 564)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1771)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 85)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1771)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 469)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1771)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 181)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1771)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 565)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1778)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 86)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1778)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 470)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1778)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 182)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1778)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 566)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1827)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 87)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1827)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 471)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1827)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 183)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1827)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 567)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1834)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 88)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1834)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 472)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1834)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 184)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1834)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 568)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1841)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 89)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1841)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 473)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1841)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 185)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1841)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 569)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1890)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 90)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1890)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 474)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1890)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 186)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1890)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 570)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1897)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 91)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1897)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 475)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1897)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 187)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1897)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 571)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1904)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 92)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1904)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 476)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1904)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 188)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1904)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 572)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1953)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 93)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1953)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 477)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1953)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 189)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1953)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 573)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1960)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 94)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1960)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 478)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1960)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 190)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1960)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 574)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1967)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 95)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1967)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 479)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1967)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 191)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1967)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 575)]));
}
}
for (int i1_inner = 0; i1_inner < 2; ++i1_inner) {
- compute[((((((int)blockIdx.x) * 392) + ((((int)threadIdx.x) / 49) * 98)) + (i1_inner * 49)) + (((int)threadIdx.x) % 49))] = max((conv2d_nchw[i1_inner] + bias[(((((int)blockIdx.x) * 8) + ((((int)threadIdx.x) / 49) * 2)) + i1_inner)]), 0.000000e+00f);
- compute[(((((((int)blockIdx.x) * 392) + ((((int)threadIdx.x) / 49) * 98)) + (i1_inner * 49)) + (((int)threadIdx.x) % 49)) + 196)] = max((conv2d_nchw[(i1_inner + 2)] + bias[((((((int)blockIdx.x) * 8) + ((((int)threadIdx.x) / 49) * 2)) + i1_inner) + 4)]), 0.000000e+00f);
+ for (int i3_inner = 0; i3_inner < 7; ++i3_inner) {
+ compute[(((((((int)blockIdx.x) * 784) + ((((int)threadIdx.x) / 7) * 98)) + (i1_inner * 49)) + ((((int)threadIdx.x) % 7) * 7)) + i3_inner)] = max((conv2d_nchw[((i1_inner * 7) + i3_inner)] + bias[(((((int)blockIdx.x) * 16) + ((((int)threadIdx.x) / 7) * 2)) + i1_inner)]), 0.000000e+00f);
+ }
}
}
@@ -1350,7 +872,7 @@ In the example below we resume the status and do 5 more trials.
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 2 minutes 27.217 seconds)
+ **Total running time of the script:** ( 2 minutes 20.941 seconds)
.. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_conv2d_layer_cuda.py:
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
index 068b58b27..934f437a6 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_cuda.rst.txt
@@ -614,7 +614,7 @@ so we can read the log file and load the best schedules.
Evaluate inference time cost...
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 9.7397 9.7346 9.7896 9.6948 0.0389
+ 9.6835 9.6888 9.7216 9.6402 0.0334
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
index e4c7a6bba..d067161e1 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_network_x86.rst.txt
@@ -633,7 +633,7 @@ so we can read the log file and load the best schedules.
Evaluate inference time cost...
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 766.0562 767.0835 768.9867 762.0984 2.9045
+ 790.2151 788.5246 795.5004 786.6204 3.8172
@@ -658,7 +658,7 @@ Other Tips
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 21.204 seconds)
+ **Total running time of the script:** ( 1 minutes 18.949 seconds)
.. _sphx_glr_download_how_to_tune_with_autoscheduler_tune_network_x86.py:
diff --git a/docs/_sources/how_to/tune_with_autoscheduler/tune_sparse_x86.rst.txt b/docs/_sources/how_to/tune_with_autoscheduler/tune_sparse_x86.rst.txt
index eabe220a4..54a083c62 100644
--- a/docs/_sources/how_to/tune_with_autoscheduler/tune_sparse_x86.rst.txt
+++ b/docs/_sources/how_to/tune_with_autoscheduler/tune_sparse_x86.rst.txt
@@ -362,80 +362,30 @@ layout transformation, parallelization, vectorization, unrolling, and operator f
placeholder_4: Buffer(placeholder_14: Pointer(float32), float32, [65536], []),
compute: Buffer(compute_2: Pointer(float32), float32, [65536], [])}
buffer_map = {placeholder_5: placeholder, placeholder_6: placeholder_1, placeholder_7: placeholder_2, placeholder_8: placeholder_3, placeholder_9: placeholder_4, compute_1: compute}
- preflattened_buffer_map = {placeholder_9: placeholder_15: Buffer(placeholder_14, float32, [128, 512], []), compute_1: compute_3: Buffer(compute_2, float32, [128, 512], []), placeholder_7: placeholder_16: Buffer(placeholder_12, int32, [4916], []), placeholder_8: placeholder_17: Buffer(placeholder_13, int32, [33], []), placeholder_6: placeholder_18: Buffer(placeholder_11, float32, [4916, 16, 1], []), placeholder_5: placeholder_19: Buffer(placeholder_10, float32, [128, 256], [])} {
- for (i0.outer.i1.outer.fused: int32, 0, 32) "parallel" {
- allocate(compute_4: Pointer(global float32), float32, [2048]), storage_scope = global {
- for (i.outer.inner: int32, 0, 2) {
+ preflattened_buffer_map = {placeholder_7: placeholder_15: Buffer(placeholder_12, int32, [4916], []), placeholder_5: placeholder_16: Buffer(placeholder_10, float32, [128, 256], []), placeholder_8: placeholder_17: Buffer(placeholder_13, int32, [33], []), placeholder_6: placeholder_18: Buffer(placeholder_11, float32, [4916, 16, 1], []), compute_1: compute_3: Buffer(compute_2, float32, [128, 512], []), placeholder_9: placeholder_19: Buffer(placeholder_14, float32, [128, 512], [])} {
+ for (i0.outer.i1.outer.fused: int32, 0, 16) "parallel" {
+ allocate(compute_4: Pointer(global float32), float32, [4096]), storage_scope = global {
+ for (i.outer.inner: int32, 0, 64) {
for (nb_j.inner: int32, 0, 2) {
- for (i.inner.init: int32, 0, 32) {
- let cse_var_1: int32 = (((i.outer.inner*1024) + (i.inner.init*32)) + (nb_j.inner*16))
- {
- compute_5: Buffer(compute_4, float32, [2048], [])[cse_var_1] = 0f32
- compute_5[(cse_var_1 + 1)] = 0f32
- compute_5[(cse_var_1 + 2)] = 0f32
- compute_5[(cse_var_1 + 3)] = 0f32
- compute_5[(cse_var_1 + 4)] = 0f32
- compute_5[(cse_var_1 + 5)] = 0f32
- compute_5[(cse_var_1 + 6)] = 0f32
- compute_5[(cse_var_1 + 7)] = 0f32
- compute_5[(cse_var_1 + 8)] = 0f32
- compute_5[(cse_var_1 + 9)] = 0f32
- compute_5[(cse_var_1 + 10)] = 0f32
- compute_5[(cse_var_1 + 11)] = 0f32
- compute_5[(cse_var_1 + 12)] = 0f32
- compute_5[(cse_var_1 + 13)] = 0f32
- compute_5[(cse_var_1 + 14)] = 0f32
- compute_5[(cse_var_1 + 15)] = 0f32
+ for (i.inner.init: int32, 0, 2) {
+ for (j.init: int32, 0, 16) {
+ compute_5: Buffer(compute_4, float32, [4096], [])[((((i.outer.inner*64) + (i.inner.init*32)) + (nb_j.inner*16)) + j.init)] = 0f32
}
}
- for (elem_idx: int32, 0, let cse_var_2: int32 = ((floormod(i0.outer.i1.outer.fused, 16)*2) + nb_j.inner) in (placeholder_3[(cse_var_2 + 1)] - placeholder_3[cse_var_2])) {
- for (i.inner: int32, 0, 32) {
- let cse_var_21: int32 = (elem_idx*16)
- let cse_var_20: int32 = ((floormod(i0.outer.i1.outer.fused, 16)*2) + nb_j.inner)
- let cse_var_19: int32 = (((i.outer.inner*1024) + (i.inner*32)) + (nb_j.inner*16))
- let cse_var_18: int32 = (cse_var_19 + 1)
- let cse_var_17: int32 = (cse_var_19 + 11)
- let cse_var_16: int32 = (cse_var_19 + 12)
- let cse_var_15: int32 = (cse_var_19 + 13)
- let cse_var_14: int32 = (cse_var_19 + 14)
- let cse_var_13: int32 = (cse_var_19 + 15)
- let cse_var_12: int32 = (cse_var_19 + 2)
- let cse_var_11: int32 = (cse_var_19 + 3)
- let cse_var_10: int32 = (cse_var_19 + 4)
- let cse_var_9: int32 = (cse_var_19 + 5)
- let cse_var_8: int32 = (cse_var_19 + 6)
- let cse_var_7: int32 = (cse_var_19 + 7)
- let cse_var_6: int32 = (cse_var_19 + 8)
- let cse_var_5: int32 = (cse_var_19 + 9)
- let cse_var_4: int32 = (((floordiv(i0.outer.i1.outer.fused, 16)*16384) + (i.outer.inner*8192)) + (i.inner*256))
- let cse_var_3: int32 = (cse_var_19 + 10)
- {
- compute_5[cse_var_19] = (compute_5[cse_var_19] + (placeholder_1[((placeholder_3[cse_var_20]*16) + cse_var_21)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_18] = (compute_5[cse_var_18] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 1)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_12] = (compute_5[cse_var_12] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 2)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_11] = (compute_5[cse_var_11] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 3)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_10] = (compute_5[cse_var_10] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 4)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_9] = (compute_5[cse_var_9] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 5)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_8] = (compute_5[cse_var_8] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 6)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_7] = (compute_5[cse_var_7] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 7)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_6] = (compute_5[cse_var_6] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 8)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_5] = (compute_5[cse_var_5] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 9)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_3] = (compute_5[cse_var_3] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 10)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_17] = (compute_5[cse_var_17] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 11)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_16] = (compute_5[cse_var_16] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 12)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_15] = (compute_5[cse_var_15] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 13)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_14] = (compute_5[cse_var_14] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 14)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_13] = (compute_5[cse_var_13] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 15)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
+ for (elem_idx: int32, 0, let cse_var_1: int32 = ((i0.outer.i1.outer.fused*2) + nb_j.inner) in (placeholder_3[(cse_var_1 + 1)] - placeholder_3[cse_var_1])) {
+ for (i.inner: int32, 0, 2) {
+ for (j: int32, 0, 16) {
+ let cse_var_3: int32 = ((i0.outer.i1.outer.fused*2) + nb_j.inner)
+ let cse_var_2: int32 = ((((i.outer.inner*64) + (i.inner*32)) + (nb_j.inner*16)) + j)
+ compute_5[cse_var_2] = (compute_5[cse_var_2] + (placeholder_1[(((placeholder_3[cse_var_3]*16) + (elem_idx*16)) + j)]*max(placeholder[(((i.outer.inner*512) + (i.inner*256)) + placeholder_2[(placeholder_3[cse_var_3] + elem_idx)])], 0f32)))
}
}
}
}
}
- for (i0.inner: int32, 0, 64) {
- for (i1.inner: int32, 0, 32) {
- let cse_var_22: int32 = ((((floordiv(i0.outer.i1.outer.fused, 16)*32768) + (i0.inner*512)) + (floormod(i0.outer.i1.outer.fused, 16)*32)) + i1.inner)
- compute[cse_var_22] = max((compute_5[((i0.inner*32) + i1.inner)] + placeholder_4[cse_var_22]), 0f32)
- }
+ for (i0.inner: int32, 0, 128) {
+ let cse_var_4: int32 = ((i0.inner*512) + (i0.outer.i1.outer.fused*32))
+ compute[ramp(cse_var_4, 1, 32)] = max((compute_5[ramp((i0.inner*32), 1, 32)] + placeholder_4[ramp(cse_var_4, 1, 32)]), broadcast(0f32, 32))
}
}
}
@@ -489,7 +439,7 @@ We build the binary and check its correctness and performance.
.. code-block:: none
- Execution time of this operator: 1.723 ms
+ Execution time of this operator: 2.110 ms
diff --git a/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt b/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
index 232369037..8053616b2 100644
--- a/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/tune_with_autotvm/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
Computation times
=================
-**00:46.224** total execution time for **how_to_tune_with_autotvm** files:
+**00:45.441** total execution time for **how_to_tune_with_autotvm** files:
-- **00:45.298**: :ref:`sphx_glr_how_to_tune_with_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)
-- **00:00.245**: :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``)
-- **00:00.227**: :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_mobile_gpu.py` (``tune_relay_mobile_gpu.py``)
-- **00:00.227**: :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_arm.py` (``tune_relay_arm.py``)
-- **00:00.227**: :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_cuda.py` (``tune_relay_cuda.py``)
+- **00:44.572**: :ref:`sphx_glr_how_to_tune_with_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)
+- **00:00.225**: :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``)
+- **00:00.217**: :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_arm.py` (``tune_relay_arm.py``)
+- **00:00.214**: :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_mobile_gpu.py` (``tune_relay_mobile_gpu.py``)
+- **00:00.212**: :ref:`sphx_glr_how_to_tune_with_autotvm_tune_relay_cuda.py` (``tune_relay_cuda.py``)
diff --git a/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt b/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
index 13297923c..b5070df00 100644
--- a/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
+++ b/docs/_sources/how_to/tune_with_autotvm/tune_conv2d_cuda.rst.txt
@@ -859,8 +859,8 @@ for this template
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 4, 4, 32]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 1, 128]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2885496
- No: 6 GFLOPS: 95.31/95.31 result: MeasureResult(costs=(0.0024288898541666667,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.794248342514038, timestamp=1653092258.3639417) [('tile_f', [-1, 1, 1, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 4, 4]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3754080
- No: 7 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+ No: 6 GFLOPS: 93.66/93.66 result: MeasureResult(costs=(0.002471689604166667,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.7587053775787354, timestamp=1653105959.8730922) [('tile_f', [-1, 1, 1, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 4, 4]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3754080
+ No: 7 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -983,7 +983,7 @@ for this template
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 16, 32]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 256, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6225319
- No: 8 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+ No: 8 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -1106,7 +1106,7 @@ for this template
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 1, 32]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 8, 64]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,943546
- No: 9 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+ No: 9 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -1229,7 +1229,7 @@ for this template
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 4, 16, 4]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 16, 32]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2868708
- No: 10 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+ No: 10 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 142, in build
res = future.result()
File "/usr/lib/python3.7/concurrent/futures/_base.py", line 435, in result
@@ -1247,7 +1247,7 @@ for this template
TimeoutError
[('tile_f', [-1, 32, 2, 4]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 4, 2]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4691833
- No: 11 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+ No: 11 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -1370,7 +1370,7 @@ for this template
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 2, 64]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 4, 4]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1042124
- No: 12 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+ No: 12 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -1493,7 +1493,7 @@ for this template
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 32, 1, 4]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 32, 16]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,10013405
- No: 13 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+ No: 13 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -1616,7 +1616,7 @@ for this template
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 8, 8, 2]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 4, 32]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6732082
- No: 14 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+ No: 14 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -1739,7 +1739,7 @@ for this template
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 4, 32]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 4, 128]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7536735
- No: 15 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+ No: 15 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -1862,7 +1862,7 @@ for this template
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 1, 4]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 128, 4]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,482121
- No: 16 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+ No: 16 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -1985,7 +1985,7 @@ for this template
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 1, 16]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 32, 8]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2824525
- No: 17 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+ No: 17 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -2108,7 +2108,7 @@ for this template
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 64, 1, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 8, 8]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4559286
- No: 18 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+ No: 18 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -2231,7 +2231,7 @@ for this template
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 32, 16]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 1, 512]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9677544
- No: 19 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+ No: 19 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 721, in __call__
yield remote, remote.load_module(os.path.split(build_result.filename)[1])
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 685, in run_through_rpc
@@ -2319,7 +2319,7 @@ for this template
15: _PyEval_EvalFrameDefault
14: 0x0000000000537c30
13: _PyObject_FastCallKeywords
- 12: 0x00007f4ec66d1fa2
+ 12: 0x00007fe866cb3fa2
11: _ctypes_callproc
10: ffi_call
9: ffi_call_unix64
@@ -2384,7 +2384,7 @@ for this template
21: _PyFunction_FastCallKeywords
20: _PyEval_EvalFrameDefault
19: _PyFunction_FastCall [('tile_f', [-1, 8, 2, 16]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 1, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6390073
- No: 20 GFLOPS: 143.85/143.85 result: MeasureResult(costs=(0.0016093728999999998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.4286088943481445, timestamp=1653092284.9035137) [('tile_f', [-1, 1, 4, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 4, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9881539
+ No: 20 GFLOPS: 144.60/144.60 result: MeasureResult(costs=(0.0016010082399999998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.4342105388641357, timestamp=1653105986.3184521) [('tile_f', [-1, 1, 4, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 4, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9881539
@@ -2437,7 +2437,7 @@ and measure running time.
Best config:
[('tile_f', [-1, 1, 4, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 4, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9881539
- Time cost of this operator: 0.001960
+ Time cost of this operator: 0.002024
diff --git a/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt b/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
index bf20cb0a7..e7b0edda6 100644
--- a/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/micro_autotune.rst.txt
@@ -292,10 +292,10 @@ Timing the untuned program
########## Build without Autotuning ##########
Node Name Ops Time(us) Time(%) Shape Inputs Outputs
--------- --- -------- ------- ----- ------ -------
- tvmgen_default_fused_nn_contrib_conv2d_NCHWc tvmgen_default_fused_nn_contrib_conv2d_NCHWc 313.4 98.71 (1, 2, 10, 10, 3) 2 1
- tvmgen_default_fused_layout_transform_1 tvmgen_default_fused_layout_transform_1 3.178 1.001 (1, 6, 10, 10) 1 1
- tvmgen_default_fused_layout_transform tvmgen_default_fused_layout_transform 0.917 0.289 (1, 1, 10, 10, 3) 1 1
- Total_time - 317.495 - - - -
+ tvmgen_default_fused_nn_contrib_conv2d_NCHWc tvmgen_default_fused_nn_contrib_conv2d_NCHWc 312.6 98.732 (1, 2, 10, 10, 3) 2 1
+ tvmgen_default_fused_layout_transform_1 tvmgen_default_fused_layout_transform_1 3.091 0.976 (1, 6, 10, 10) 1 1
+ tvmgen_default_fused_layout_transform tvmgen_default_fused_layout_transform 0.925 0.292 (1, 1, 10, 10, 3) 1 1
+ Total_time - 316.616 - - - -
@@ -357,10 +357,10 @@ Timing the tuned program
########## Build with Autotuning ##########
Node Name Ops Time(us) Time(%) Shape Inputs Outputs
--------- --- -------- ------- ----- ------ -------
- tvmgen_default_fused_nn_contrib_conv2d_NCHWc tvmgen_default_fused_nn_contrib_conv2d_NCHWc 225.7 98.764 (1, 1, 10, 10, 6) 2 1
- tvmgen_default_fused_layout_transform_1 tvmgen_default_fused_layout_transform_1 1.99 0.871 (1, 6, 10, 10) 1 1
- tvmgen_default_fused_layout_transform tvmgen_default_fused_layout_transform 0.834 0.365 (1, 3, 10, 10, 1) 1 1
- Total_time - 228.524 - - - -
+ tvmgen_default_fused_nn_contrib_conv2d_NCHWc tvmgen_default_fused_nn_contrib_conv2d_NCHWc 131.2 98.007 (1, 6, 10, 10, 1) 2 1
+ tvmgen_default_fused_layout_transform_1 tvmgen_default_fused_layout_transform_1 1.767 1.32 (1, 6, 10, 10) 1 1
+ tvmgen_default_fused_layout_transform tvmgen_default_fused_layout_transform 0.901 0.673 (1, 1, 10, 10, 3) 1 1
+ Total_time - 133.868 - - - -
diff --git a/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
index a2b719959..722f7c5b5 100644
--- a/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_microtvm/sg_execution_times.rst.txt
@@ -5,10 +5,10 @@
Computation times
=================
-**00:47.748** total execution time for **how_to_work_with_microtvm** files:
+**00:46.345** total execution time for **how_to_work_with_microtvm** files:
-- **00:43.362**: :ref:`sphx_glr_how_to_work_with_microtvm_micro_autotune.py` (``micro_autotune.py``)
-- **00:03.758**: :ref:`sphx_glr_how_to_work_with_microtvm_micro_tflite.py` (``micro_tflite.py``)
-- **00:00.212**: :ref:`sphx_glr_how_to_work_with_microtvm_micro_ethosu.py` (``micro_ethosu.py``)
-- **00:00.208**: :ref:`sphx_glr_how_to_work_with_microtvm_micro_reference_vm.py` (``micro_reference_vm.py``)
-- **00:00.207**: :ref:`sphx_glr_how_to_work_with_microtvm_micro_tvmc.py` (``micro_tvmc.py``)
+- **00:42.090**: :ref:`sphx_glr_how_to_work_with_microtvm_micro_autotune.py` (``micro_autotune.py``)
+- **00:03.627**: :ref:`sphx_glr_how_to_work_with_microtvm_micro_tflite.py` (``micro_tflite.py``)
+- **00:00.222**: :ref:`sphx_glr_how_to_work_with_microtvm_micro_reference_vm.py` (``micro_reference_vm.py``)
+- **00:00.204**: :ref:`sphx_glr_how_to_work_with_microtvm_micro_tvmc.py` (``micro_tvmc.py``)
+- **00:00.202**: :ref:`sphx_glr_how_to_work_with_microtvm_micro_ethosu.py` (``micro_ethosu.py``)
diff --git a/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
index b3b29e217..58057ede2 100644
--- a/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_relay/sg_execution_times.rst.txt
@@ -5,8 +5,8 @@
Computation times
=================
-**00:09.549** total execution time for **how_to_work_with_relay** files:
+**00:09.710** total execution time for **how_to_work_with_relay** files:
-- **00:07.543**: :ref:`sphx_glr_how_to_work_with_relay_using_external_lib.py` (``using_external_lib.py``)
-- **00:01.781**: :ref:`sphx_glr_how_to_work_with_relay_build_gcn.py` (``build_gcn.py``)
-- **00:00.225**: :ref:`sphx_glr_how_to_work_with_relay_using_relay_viz.py` (``using_relay_viz.py``)
+- **00:07.104**: :ref:`sphx_glr_how_to_work_with_relay_using_external_lib.py` (``using_external_lib.py``)
+- **00:02.394**: :ref:`sphx_glr_how_to_work_with_relay_build_gcn.py` (``build_gcn.py``)
+- **00:00.212**: :ref:`sphx_glr_how_to_work_with_relay_using_relay_viz.py` (``using_relay_viz.py``)
diff --git a/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt b/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
index dc1877a83..261698f0c 100644
--- a/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/sg_execution_times.rst.txt
@@ -5,13 +5,13 @@
Computation times
=================
-**00:05.957** total execution time for **how_to_work_with_schedules** files:
+**00:05.807** total execution time for **how_to_work_with_schedules** files:
-- **00:02.147**: :ref:`sphx_glr_how_to_work_with_schedules_intrin_math.py` (``intrin_math.py``)
-- **00:01.244**: :ref:`sphx_glr_how_to_work_with_schedules_tensorize.py` (``tensorize.py``)
-- **00:00.759**: :ref:`sphx_glr_how_to_work_with_schedules_reduction.py` (``reduction.py``)
-- **00:00.752**: :ref:`sphx_glr_how_to_work_with_schedules_scan.py` (``scan.py``)
-- **00:00.321**: :ref:`sphx_glr_how_to_work_with_schedules_extern_op.py` (``extern_op.py``)
-- **00:00.250**: :ref:`sphx_glr_how_to_work_with_schedules_tedd.py` (``tedd.py``)
-- **00:00.250**: :ref:`sphx_glr_how_to_work_with_schedules_schedule_primitives.py` (``schedule_primitives.py``)
-- **00:00.234**: :ref:`sphx_glr_how_to_work_with_schedules_tuple_inputs.py` (``tuple_inputs.py``)
+- **00:02.132**: :ref:`sphx_glr_how_to_work_with_schedules_intrin_math.py` (``intrin_math.py``)
+- **00:01.196**: :ref:`sphx_glr_how_to_work_with_schedules_tensorize.py` (``tensorize.py``)
+- **00:00.740**: :ref:`sphx_glr_how_to_work_with_schedules_reduction.py` (``reduction.py``)
+- **00:00.726**: :ref:`sphx_glr_how_to_work_with_schedules_scan.py` (``scan.py``)
+- **00:00.315**: :ref:`sphx_glr_how_to_work_with_schedules_extern_op.py` (``extern_op.py``)
+- **00:00.245**: :ref:`sphx_glr_how_to_work_with_schedules_schedule_primitives.py` (``schedule_primitives.py``)
+- **00:00.230**: :ref:`sphx_glr_how_to_work_with_schedules_tedd.py` (``tedd.py``)
+- **00:00.223**: :ref:`sphx_glr_how_to_work_with_schedules_tuple_inputs.py` (``tuple_inputs.py``)
diff --git a/docs/_sources/how_to/work_with_schedules/tensorize.rst.txt b/docs/_sources/how_to/work_with_schedules/tensorize.rst.txt
index 646ef9425..4c2e72a71 100644
--- a/docs/_sources/how_to/work_with_schedules/tensorize.rst.txt
+++ b/docs/_sources/how_to/work_with_schedules/tensorize.rst.txt
@@ -318,7 +318,7 @@ The importing needs to happen before the tensorized GEMV being executed.
C: Buffer(C_2: Pointer(float32), float32, [524288], [])}
buffer_map = {A_1: A, B_1: B, C_1: C}
preflattened_buffer_map = {A_1: A_3: Buffer(A_2, float32, [1024, 64], []), B_1: B_3: Buffer(B_2, float32, [512, 64], []), C_1: C_3: Buffer(C_2, float32, [1024, 512], [])} {
- attr [IterVar(i: int32, (nullptr), "DataPar", "")] "pragma_import_llvm" = "; ModuleID = '/tmp/tmpbxn_r8qb/input0.cc'\nsource_filename = \"/tmp/tmpbxn_r8qb/input0.cc\"\ntarget datalayout = \"e-m:e-i64:64-f80:128-n8:16:32:64-S128\"\ntarget triple = \"x86_64-pc-linux-gnu\"\n\n; Function Attrs: noinline nounwind optnone uwtable\ndefine dso_local i32 @gemv_update(float*, float*, float*, i32, i32, i32) #0 {\n %7 = alloca float*, align 8\n %8 = alloca float*, align 8\n %9 = alloca floa [...]
+ attr [IterVar(i: int32, (nullptr), "DataPar", "")] "pragma_import_llvm" = "; ModuleID = '/tmp/tmp4i8vuuix/input0.cc'\nsource_filename = \"/tmp/tmp4i8vuuix/input0.cc\"\ntarget datalayout = \"e-m:e-i64:64-f80:128-n8:16:32:64-S128\"\ntarget triple = \"x86_64-pc-linux-gnu\"\n\n; Function Attrs: noinline nounwind optnone uwtable\ndefine dso_local i32 @gemv_update(float*, float*, float*, i32, i32, i32) #0 {\n %7 = alloca float*, align 8\n %8 = alloca float*, align 8\n %9 = alloca floa [...]
for (i, 0, 1024) {
for (j.outer: int32, 0, 32) {
@tir.call_extern("gemv_update", @tir.tvm_access_ptr(@tir.type_annotation(, dtype=float32), C_2, ((i*512) + (j.outer*16)), 16, 2, dtype=handle), @tir.tvm_access_ptr(@tir.type_annotation(, dtype=float32), A_2, (i*64), 64, 1, dtype=handle), @tir.tvm_access_ptr(@tir.type_annotation(, dtype=float32), B_2, (j.outer*1024), 1024, 1, dtype=handle), 16, 64, 64, dtype=int32)
diff --git a/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
index fcd7c8358..6afde9bdc 100644
--- a/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/autotvm/sg_execution_times.rst.txt
@@ -5,7 +5,7 @@
Computation times
=================
-**00:21.185** total execution time for **topic_vta_tutorials_autotvm** files:
+**00:20.393** total execution time for **topic_vta_tutorials_autotvm** files:
-- **00:20.970**: :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``)
-- **00:00.215**: :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_alu_vta.py` (``tune_alu_vta.py``)
+- **00:20.191**: :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``)
+- **00:00.202**: :ref:`sphx_glr_topic_vta_tutorials_autotvm_tune_alu_vta.py` (``tune_alu_vta.py``)
diff --git a/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
index 176625f4c..4f6f72b4a 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/deploy_classification.rst.txt
@@ -265,7 +265,7 @@ The compilation steps are:
DeprecationWarning,
/workspace/vta/tutorials/frontend/deploy_classification.py:213: DeprecationWarning: legacy graph executor behavior of producing json / lib / params will be removed in the next release. Please see documents of tvm.contrib.graph_executor.GraphModule for the new recommended usage.
relay_prog, target=tvm.target.Target(target, host=env.target_host), params=params
- resnet18_v1 inference graph built in 22.43s!
+ resnet18_v1 inference graph built in 21.33s!
diff --git a/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
index 909420185..c0561c4c6 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/deploy_detection.rst.txt
@@ -301,7 +301,7 @@ The compilation steps are:
/workspace/python/tvm/relay/build_module.py:431: DeprecationWarning: Please use input parameter mod (tvm.IRModule) instead of deprecated parameter mod (tvm.relay.function.Function)
DeprecationWarning,
- yolov3-tiny inference graph built in 15.50s!
+ yolov3-tiny inference graph built in 14.80s!
diff --git a/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
index 4dc8766df..82dfecd8a 100644
--- a/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/frontend/sg_execution_times.rst.txt
@@ -5,7 +5,7 @@
Computation times
=================
-**01:30.179** total execution time for **topic_vta_tutorials_frontend** files:
+**01:28.651** total execution time for **topic_vta_tutorials_frontend** files:
-- **00:47.506**: :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_detection.py` (``deploy_detection.py``)
-- **00:42.673**: :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``)
+- **00:47.025**: :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_detection.py` (``deploy_detection.py``)
+- **00:41.626**: :ref:`sphx_glr_topic_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``)
diff --git a/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
index 0a5d9d7b3..859c5f5f0 100644
--- a/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/optimize/sg_execution_times.rst.txt
@@ -5,7 +5,7 @@
Computation times
=================
-**00:03.561** total execution time for **topic_vta_tutorials_optimize** files:
+**00:03.542** total execution time for **topic_vta_tutorials_optimize** files:
-- **00:02.992**: :ref:`sphx_glr_topic_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)
-- **00:00.569**: :ref:`sphx_glr_topic_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``)
+- **00:02.990**: :ref:`sphx_glr_topic_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)
+- **00:00.553**: :ref:`sphx_glr_topic_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``)
diff --git a/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt b/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
index 33a1b37ba..9c26127d0 100644
--- a/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
+++ b/docs/_sources/topic/vta/tutorials/sg_execution_times.rst.txt
@@ -5,7 +5,7 @@
Computation times
=================
-**00:01.032** total execution time for **topic_vta_tutorials** files:
+**00:01.022** total execution time for **topic_vta_tutorials** files:
-- **00:00.526**: :ref:`sphx_glr_topic_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``)
-- **00:00.506**: :ref:`sphx_glr_topic_vta_tutorials_vta_get_started.py` (``vta_get_started.py``)
+- **00:00.519**: :ref:`sphx_glr_topic_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``)
+- **00:00.503**: :ref:`sphx_glr_topic_vta_tutorials_vta_get_started.py` (``vta_get_started.py``)
diff --git a/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt b/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
index 5fa2da1ee..9118ba1d4 100644
--- a/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
+++ b/docs/_sources/tutorial/auto_scheduler_matmul_x86.rst.txt
@@ -184,7 +184,7 @@ trials, we can load the best schedule from the log file and apply it.
.. code-block:: none
-
+ *E
@@ -306,7 +306,7 @@ We build the binary and check its correctness and performance.
.. code-block:: none
- Execution time of this operator: 93.811 ms
+ Execution time of this operator: 93.507 ms
@@ -415,6 +415,11 @@ Expression (TE) language that demonstrates how TVM can optimize computational
operations.
+.. rst-class:: sphx-glr-timing
+
+ **Total running time of the script:** ( 1 minutes 6.694 seconds)
+
+
.. _sphx_glr_download_tutorial_auto_scheduler_matmul_x86.py:
diff --git a/docs/_sources/tutorial/autotvm_relay_x86.rst.txt b/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
index 5f2188218..1bee31cd8 100644
--- a/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
+++ b/docs/_sources/tutorial/autotvm_relay_x86.rst.txt
@@ -271,7 +271,7 @@ standard deviation.
.. code-block:: none
- {'mean': 496.7864342000008, 'median': 496.7793454000031, 'std': 0.5719564614245756}
+ {'mean': 496.2918664199986, 'median': 496.27590769999586, 'std': 0.634099203437212}
@@ -485,31 +485,31 @@ the tuning data to.
.. code-block:: none
-
[Task 1/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 1/25] Current/Best: 17.37/ 17.37 GFLOPS | Progress: (4/20) | 6.10 s
[Task 1/25] Current/Best: 6.11/ 17.37 GFLOPS | Progress: (8/20) | 8.99 s
[Task 1/25] Current/Best: 11.52/ 22.73 GFLOPS | Progress: (12/20) | 11.45 s
[Task 1/25] Current/Best: 16.67/ 22.73 GFLOPS | Progress: (16/20) | 13.13 s
[Task 1/25] Current/Best: 11.60/ 23.83 GFLOPS | Progress: (20/20) | 14.88 s Done.
-
[Task 2/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 2/25] Current/Best: 12.27/ 13.24 GFLOPS | Progress: (4/20) | 3.87 s
[Task 2/25] Current/Best: 13.75/ 18.16 GFLOPS | Progress: (8/20) | 5.17 s
[Task 2/25] Current/Best: 21.02/ 21.02 GFLOPS | Progress: (12/20) | 6.48 s
[Task 2/25] Current/Best: 12.49/ 21.02 GFLOPS | Progress: (16/20) | 7.74 s
[Task 2/25] Current/Best: 18.82/ 21.02 GFLOPS | Progress: (20/20) | 9.32 s Done.
-
[Task 3/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 3/25] Current/Best: 1.62/ 10.58 GFLOPS | Progress: (4/20) | 5.81 s
[Task 3/25] Current/Best: 15.48/ 16.87 GFLOPS | Progress: (8/20) | 7.73 s
[Task 3/25] Current/Best: 14.85/ 16.87 GFLOPS | Progress: (12/20) | 9.46 s
[Task 3/25] Current/Best: 7.20/ 23.76 GFLOPS | Progress: (16/20) | 11.38 s
[Task 3/25] Current/Best: 12.49/ 23.76 GFLOPS | Progress: (20/20) | 15.95 s Done.
-
[Task 4/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 4/25] Current/Best: 9.55/ 20.35 GFLOPS | Progress: (4/20) | 2.34 s
[Task 4/25] Current/Best: 6.88/ 20.35 GFLOPS | Progress: (8/20) | 7.09 s
[Task 4/25] Current/Best: 21.70/ 21.70 GFLOPS | Progress: (12/20) | 12.15 s
[Task 4/25] Current/Best: 15.36/ 21.70 GFLOPS | Progress: (16/20) | 14.60 s
[Task 4/25] Current/Best: 13.18/ 21.70 GFLOPS | Progress: (20/20) | 16.61 s Done.
-
[Task 5/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 5/25] Current/Best: 9.41/ 10.12 GFLOPS | Progress: (4/20) | 2.55 s
[Task 5/25] Current/Best: 11.61/ 12.22 GFLOPS | Progress: (8/20) | 4.62 s
[Task 5/25] Current/Best: 9.83/ 17.83 GFLOPS | Progress: (12/20) | 7.72 s
[Task 5/25] Current/Best: 11.68/ 22.69 GFLOPS | Progress: (16/20) | 9.15 s
[Task 5/25] Current/Best: 11.40/ 22.69 GFLOPS | Progress: (20/20) | 11.08 s Done.
-
[Task 6/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 6/25] Current/Best: 12.21/ 20.64 GFLOPS | Progress: (4/20) | 4.06 s
[Task 6/25] Current/Best: 18.93/ 20.64 GFLOPS | Progress: (8/20) | 5.82 s
[Task 6/25] Current/Best: 13.28/ 20.64 GFLOPS | Progress: (12/20) | 7.76 s
[Task 6/25] Current/Best: 19.91/ 20.64 GFLOPS | Progress: (16/20) | 10.05 s
[Task 6/25] Current/Best: 3.72/ 20.64 GFLOPS | Progress: (20/20) | 12.59 s Done.
-
[Task 7/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 7/25] Current/Best: 11.22/ 12.15 GFLOPS | Progress: (4/20) | 3.54 s
[Task 7/25] Current/Best: 20.26/ 20.81 GFLOPS | Progress: (8/20) | 5.06 s
[Task 7/25] Current/Best: 15.29/ 20.81 GFLOPS | Progress: (12/20) | 6.96 s
[Task 7/25] Current/Best: 12.22/ 20.81 GFLOPS | Progress: (16/20) | 9.01 s
[Task 7/25] Current/Best: 6.36/ 21.75 GFLOPS | Progress: (20/20) | 11.46 s Done.
-
[Task 8/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 8/25] Current/Best: 10.04/ 14.18 GFLOPS | Progress: (4/20) | 2.85 s
[Task 8/25] Current/Best: 9.56/ 14.18 GFLOPS | Progress: (8/20) | 8.11 s
[Task 8/25] Current/Best: 12.97/ 14.18 GFLOPS | Progress: (12/20) | 14.70 s
[Task 8/25] Current/Best: 18.77/ 18.77 GFLOPS | Progress: (16/20) | 16.83 s
[Task 8/25] Current/Best: 20.05/ 20.05 GFLOPS | Progress: (20/20) | 23.92 s Done.
-
[Task 9/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 9/25] Current/Best: 14.32/ 15.81 GFLOPS | Progress: (4/20) | 19.14 s
[Task 9/25] Current/Best: 23.28/ 23.28 GFLOPS | Progress: (8/20) | 20.93 s
[Task 9/25] Current/Best: 8.22/ 23.28 GFLOPS | Progress: (12/20) | 23.45 s
[Task 9/25] Current/Best: 17.80/ 23.28 GFLOPS | Progress: (16/20) | 26.32 s
[Task 9/25] Current/Best: 9.04/ 23.28 GFLOPS | Progress: (20/20) | 35.11 s
[Task 10/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 10/25] Current/Best: 18.21/ 18.21 GFLOPS | Progress: (4/20) | 2.54 s
[Task 10/25] Current/Best: 15.57/ 18.21 GFLOPS | Progress: (8/20) | 4.21 s
[Task 10/25] Current/Best: 12.02/ 18.81 GFLOPS | Progress: (12/20) | 5.76 s
[Task 10/25] Current/Best: 19.02/ 20.33 GFLOPS | Progress: (16/20) | 6.87 s
[Task 10/25] Current/Best: 8.90/ 20.33 GFLOPS | Progress: (20/20) | 8.43 s Done.
-
[Task 11/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 11/25] Current/Best: 12.24/ 18.06 GFLOPS | Progress: (4/20) | 3.32 s
[Task 11/25] Current/Best: 16.80/ 18.06 GFLOPS | Progress: (8/20) | 6.13 s
[Task 11/25] Current/Best: 18.19/ 18.19 GFLOPS | Progress: (12/20) | 8.20 s
[Task 11/25] Current/Best: 13.46/ 21.13 GFLOPS | Progress: (16/20) | 11.16 s
[Task 11/25] Current/Best: 19.42/ 21.35 GFLOPS | Progress: (20/20) | 13.26 s Done.
-
[Task 12/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 12/25] Current/Best: 7.78/ 18.01 GFLOPS | Progress: (4/20) | 5.64 s
[Task 12/25] Current/Best: 5.17/ 18.01 GFLOPS | Progress: (8/20) | 9.59 s
[Task 12/25] Current/Best: 18.90/ 18.90 GFLOPS | Progress: (12/20) | 11.61 s
[Task 12/25] Current/Best: 14.88/ 18.90 GFLOPS | Progress: (16/20) | 14.55 s
[Task 12/25] Current/Best: 15.13/ 18.90 GFLOPS | Progress: (20/20) | 16.48 s Done.
-
[Task 13/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 13/25] Current/Best: 8.80/ 17.22 GFLOPS | Progress: (4/20) | 3.67 s
[Task 13/25] Current/Best: 16.04/ 20.89 GFLOPS | Progress: (8/20) | 6.25 s
[Task 13/25] Current/Best: 19.47/ 21.57 GFLOPS | Progress: (12/20) | 9.41 s
[Task 13/25] Current/Best: 12.19/ 21.57 GFLOPS | Progress: (16/20) | 12.89 s
[Task 13/25] Current/Best: 18.56/ 21.57 GFLOPS | Progress: (20/20) | 15.20 s Done.
-
[Task 14/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 14/25] Current/Best: 13.55/ 13.55 GFLOPS | Progress: (4/20) | 3.37 s
[Task 14/25] Current/Best: 6.08/ 13.55 GFLOPS | Progress: (8/20) | 5.54 s
[Task 14/25] Current/Best: 20.68/ 20.68 GFLOPS | Progress: (12/20) | 8.23 s
[Task 14/25] Current/Best: 16.08/ 20.68 GFLOPS | Progress: (16/20) | 10.14 s
[Task 14/25] Current/Best: 17.16/ 20.68 GFLOPS | Progress: (20/20) | 11.87 s
[Task 15/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
+
[Task 1/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 1/25] Current/Best: 17.26/ 17.26 GFLOPS | Progress: (4/20) | 5.99 s
[Task 1/25] Current/Best: 6.16/ 17.26 GFLOPS | Progress: (8/20) | 8.92 s
[Task 1/25] Current/Best: 11.57/ 22.77 GFLOPS | Progress: (12/20) | 11.34 s
[Task 1/25] Current/Best: 16.81/ 22.87 GFLOPS | Progress: (16/20) | 13.01 s
[Task 1/25] Current/Best: 11.64/ 23.88 GFLOPS | Progress: (20/20) | 14.74 s Done.
+
[Task 2/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 2/25] Current/Best: 12.24/ 13.09 GFLOPS | Progress: (4/20) | 3.69 s
[Task 2/25] Current/Best: 14.01/ 18.49 GFLOPS | Progress: (8/20) | 4.98 s
[Task 2/25] Current/Best: 21.02/ 21.02 GFLOPS | Progress: (12/20) | 6.29 s
[Task 2/25] Current/Best: 12.32/ 21.02 GFLOPS | Progress: (16/20) | 7.57 s
[Task 2/25] Current/Best: 18.78/ 21.02 GFLOPS | Progress: (20/20) | 9.11 s Done.
+
[Task 3/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 3/25] Current/Best: 1.63/ 10.57 GFLOPS | Progress: (4/20) | 5.79 s
[Task 3/25] Current/Best: 15.58/ 16.91 GFLOPS | Progress: (8/20) | 7.69 s
[Task 3/25] Current/Best: 14.91/ 16.91 GFLOPS | Progress: (12/20) | 9.39 s
[Task 3/25] Current/Best: 7.14/ 23.80 GFLOPS | Progress: (16/20) | 11.31 s
[Task 3/25] Current/Best: 12.16/ 23.80 GFLOPS | Progress: (20/20) | 15.81 s Done.
+
[Task 4/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 4/25] Current/Best: 9.55/ 20.47 GFLOPS | Progress: (4/20) | 2.29 s
[Task 4/25] Current/Best: 6.68/ 20.47 GFLOPS | Progress: (8/20) | 6.62 s
[Task 4/25] Current/Best: 22.17/ 22.17 GFLOPS | Progress: (12/20) | 11.06 s
[Task 4/25] Current/Best: 17.36/ 22.17 GFLOPS | Progress: (16/20) | 13.28 s
[Task 4/25] Current/Best: 13.00/ 22.17 GFLOPS | Progress: (20/20) | 15.17 s Done.
+
[Task 5/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 5/25] Current/Best: 9.51/ 10.37 GFLOPS | Progress: (4/20) | 2.50 s
[Task 5/25] Current/Best: 11.65/ 12.55 GFLOPS | Progress: (8/20) | 4.57 s
[Task 5/25] Current/Best: 11.65/ 18.03 GFLOPS | Progress: (12/20) | 7.69 s
[Task 5/25] Current/Best: 11.71/ 22.63 GFLOPS | Progress: (16/20) | 9.09 s
[Task 5/25] Current/Best: 11.97/ 22.63 GFLOPS | Progress: (20/20) | 10.93 s Done.
+
[Task 6/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 6/25] Current/Best: 12.20/ 20.79 GFLOPS | Progress: (4/20) | 3.92 s
[Task 6/25] Current/Best: 18.97/ 20.79 GFLOPS | Progress: (8/20) | 5.67 s
[Task 6/25] Current/Best: 13.24/ 20.79 GFLOPS | Progress: (12/20) | 7.59 s
[Task 6/25] Current/Best: 19.99/ 20.79 GFLOPS | Progress: (16/20) | 9.84 s
[Task 6/25] Current/Best: 3.70/ 20.79 GFLOPS | Progress: (20/20) | 12.35 s Done.
+
[Task 7/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 7/25] Current/Best: 11.15/ 12.91 GFLOPS | Progress: (4/20) | 3.49 s
[Task 7/25] Current/Best: 20.35/ 20.97 GFLOPS | Progress: (8/20) | 4.99 s
[Task 7/25] Current/Best: 16.04/ 20.97 GFLOPS | Progress: (12/20) | 6.88 s
[Task 7/25] Current/Best: 12.24/ 20.97 GFLOPS | Progress: (16/20) | 8.91 s
[Task 7/25] Current/Best: 6.35/ 21.73 GFLOPS | Progress: (20/20) | 11.36 s Done.
+
[Task 8/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 8/25] Current/Best: 9.65/ 13.75 GFLOPS | Progress: (4/20) | 2.83 s
[Task 8/25] Current/Best: 9.58/ 13.75 GFLOPS | Progress: (8/20) | 7.67 s
[Task 8/25] Current/Best: 12.71/ 13.75 GFLOPS | Progress: (12/20) | 13.77 s
[Task 8/25] Current/Best: 18.62/ 18.62 GFLOPS | Progress: (16/20) | 15.88 s
[Task 8/25] Current/Best: 19.59/ 19.59 GFLOPS | Progress: (20/20) | 22.45 s Done.
+
[Task 9/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 9/25] Current/Best: 14.35/ 15.82 GFLOPS | Progress: (4/20) | 17.56 s
[Task 9/25] Current/Best: 23.52/ 23.52 GFLOPS | Progress: (8/20) | 19.25 s
[Task 9/25] Current/Best: 8.27/ 23.52 GFLOPS | Progress: (12/20) | 21.64 s
[Task 9/25] Current/Best: 17.83/ 23.52 GFLOPS | Progress: (16/20) | 24.32 s
[Task 9/25] Current/Best: 9.09/ 23.52 GFLOPS | Progress: (20/20) | 32.07 s
[Task 10/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 10/25] Current/Best: 17.98/ 17.98 GFLOPS | Progress: (4/20) | 2.47 s
[Task 10/25] Current/Best: 15.40/ 17.98 GFLOPS | Progress: (8/20) | 4.05 s
[Task 10/25] Current/Best: 12.81/ 18.56 GFLOPS | Progress: (12/20) | 5.56 s
[Task 10/25] Current/Best: 19.07/ 20.34 GFLOPS | Progress: (16/20) | 6.67 s
[Task 10/25] Current/Best: 8.88/ 20.34 GFLOPS | Progress: (20/20) | 8.18 s Done.
+
[Task 11/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 11/25] Current/Best: 12.26/ 18.11 GFLOPS | Progress: (4/20) | 3.20 s
[Task 11/25] Current/Best: 16.90/ 18.11 GFLOPS | Progress: (8/20) | 5.90 s
[Task 11/25] Current/Best: 18.10/ 18.11 GFLOPS | Progress: (12/20) | 7.94 s
[Task 11/25] Current/Best: 13.33/ 21.19 GFLOPS | Progress: (16/20) | 10.66 s
[Task 11/25] Current/Best: 19.22/ 21.62 GFLOPS | Progress: (20/20) | 12.68 s Done.
+
[Task 12/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 12/25] Current/Best: 7.82/ 18.11 GFLOPS | Progress: (4/20) | 5.34 s
[Task 12/25] Current/Best: 5.16/ 18.11 GFLOPS | Progress: (8/20) | 9.03 s
[Task 12/25] Current/Best: 18.93/ 18.93 GFLOPS | Progress: (12/20) | 11.03 s
[Task 12/25] Current/Best: 15.33/ 18.93 GFLOPS | Progress: (16/20) | 13.85 s
[Task 12/25] Current/Best: 15.16/ 18.93 GFLOPS | Progress: (20/20) | 15.80 s Done.
+
[Task 13/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 13/25] Current/Best: 8.62/ 17.28 GFLOPS | Progress: (4/20) | 3.62 s
[Task 13/25] Current/Best: 15.91/ 21.03 GFLOPS | Progress: (8/20) | 6.04 s
[Task 13/25] Current/Best: 19.59/ 21.43 GFLOPS | Progress: (12/20) | 8.96 s
[Task 13/25] Current/Best: 12.28/ 21.43 GFLOPS | Progress: (16/20) | 12.37 s
[Task 13/25] Current/Best: 18.79/ 21.43 GFLOPS | Progress: (20/20) | 14.66 s Done.
+
[Task 14/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 14/25] Current/Best: 13.62/ 13.62 GFLOPS | Progress: (4/20) | 3.25 s
[Task 14/25] Current/Best: 6.12/ 13.62 GFLOPS | Progress: (8/20) | 5.46 s
[Task 14/25] Current/Best: 19.64/ 19.64 GFLOPS | Progress: (12/20) | 8.00 s
[Task 14/25] Current/Best: 16.46/ 19.64 GFLOPS | Progress: (16/20) | 9.86 s
[Task 14/25] Current/Best: 16.77/ 19.64 GFLOPS | Progress: (20/20) | 11.55 s
[Task 15/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s Done.
Done.
-
[Task 15/25] Current/Best: 16.14/ 17.60 GFLOPS | Progress: (4/20) | 2.64 s
[Task 15/25] Current/Best: 14.41/ 17.99 GFLOPS | Progress: (8/20) | 4.16 s
[Task 15/25] Current/Best: 10.36/ 22.29 GFLOPS | Progress: (12/20) | 6.50 s
[Task 15/25] Current/Best: 20.18/ 22.29 GFLOPS | Progress: (16/20) | 9.67 s
[Task 15/25] Current/Best: 9.65/ 22.29 GFLOPS | Progress: (20/20) | 10.86 s
[Task 16/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 16/25] Current/Best: 20.42/ 20.42 GFLOPS | Progress: (4/20) | 3.01 s
[Task 16/25] Current/Best: 3.03/ 20.42 GFLOPS | Progress: (8/20) | 4.63 s
[Task 16/25] Current/Best: 18.90/ 20.42 GFLOPS | Progress: (12/20) | 5.84 s
[Task 16/25] Current/Best: 17.50/ 20.42 GFLOPS | Progress: (16/20) | 7.23 s
[Task 16/25] Current/Best: 9.99/ 22.20 GFLOPS | Progress: (20/20) | 9.41 s Done.
-
[Task 17/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 17/25] Current/Best: 13.40/ 18.76 GFLOPS | Progress: (4/20) | 4.79 s
[Task 17/25] Current/Best: 14.43/ 22.99 GFLOPS | Progress: (8/20) | 7.72 s
[Task 17/25] Current/Best: 16.64/ 22.99 GFLOPS | Progress: (12/20) | 9.79 s
[Task 17/25] Current/Best: 16.51/ 22.99 GFLOPS | Progress: (16/20) | 12.00 s
[Task 17/25] Current/Best: 10.04/ 22.99 GFLOPS | Progress: (20/20) | 14.15 s Done.
-
[Task 18/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 18/25] Current/Best: 11.36/ 18.19 GFLOPS | Progress: (4/20) | 3.75 s
[Task 18/25] Current/Best: 10.53/ 19.57 GFLOPS | Progress: (8/20) | 7.45 s
[Task 18/25] Current/Best: 19.36/ 19.57 GFLOPS | Progress: (12/20) | 9.38 s
[Task 18/25] Current/Best: 10.06/ 19.57 GFLOPS | Progress: (16/20) | 13.26 s
[Task 18/25] Current/Best: 20.63/ 20.63 GFLOPS | Progress: (20/20) | 14.77 s Done.
-
[Task 19/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 19/25] Current/Best: 6.87/ 20.22 GFLOPS | Progress: (4/20) | 6.25 s
[Task 19/25] Current/Best: 2.61/ 20.22 GFLOPS | Progress: (8/20) | 9.58 s
[Task 19/25] Current/Best: 19.32/ 20.84 GFLOPS | Progress: (12/20) | 12.51 s
[Task 19/25] Current/Best: 15.05/ 21.62 GFLOPS | Progress: (16/20) | 15.48 s
[Task 19/25] Current/Best: 2.70/ 23.20 GFLOPS | Progress: (20/20) | 18.24 s Done.
-
[Task 20/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 20/25] Current/Best: 8.74/ 14.31 GFLOPS | Progress: (4/20) | 3.30 s
[Task 20/25] Current/Best: 9.94/ 14.31 GFLOPS | Progress: (8/20) | 6.90 s
[Task 20/25] Current/Best: 2.32/ 16.34 GFLOPS | Progress: (12/20) | 10.87 s Done.
-
[Task 20/25] Current/Best: 12.39/ 16.34 GFLOPS | Progress: (16/20) | 14.86 s
[Task 20/25] Current/Best: 13.16/ 21.76 GFLOPS | Progress: (20/20) | 16.96 s Done.
-
[Task 21/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 21/25] Current/Best: 6.39/ 17.60 GFLOPS | Progress: (4/20) | 3.24 s
[Task 21/25] Current/Best: 14.49/ 17.60 GFLOPS | Progress: (8/20) | 4.85 s
[Task 21/25] Current/Best: 1.61/ 17.60 GFLOPS | Progress: (12/20) | 7.02 s
[Task 21/25] Current/Best: 17.90/ 17.90 GFLOPS | Progress: (16/20) | 10.53 s
[Task 21/25] Current/Best: 4.47/ 17.90 GFLOPS | Progress: (20/20) | 17.94 s
[Task 22/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 22/25] Current/Best: 2.70/ 17.00 GFLOPS | Progress: (4/20) | 2.64 s
[Task 22/25] Current/Best: 8.72/ 21.83 GFLOPS | Progress: (8/20) | 4.61 s
[Task 22/25] Current/Best: 19.80/ 21.83 GFLOPS | Progress: (12/20) | 7.02 s
[Task 22/25] Current/Best: 15.07/ 21.83 GFLOPS | Progress: (16/20) | 9.15 s
[Task 22/25] Current/Best: 14.51/ 21.83 GFLOPS | Progress: (20/20) | 10.91 s Done.
-
[Task 23/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 23/25] Current/Best: 17.37/ 20.39 GFLOPS | Progress: (4/20) | 3.19 s
[Task 23/25] Current/Best: 15.79/ 20.39 GFLOPS | Progress: (8/20) | 6.67 s
[Task 23/25] Current/Best: 20.82/ 21.32 GFLOPS | Progress: (12/20) | 8.52 s
[Task 23/25] Current/Best: 6.25/ 21.32 GFLOPS | Progress: (16/20) | 15.75 s
[Task 23/25] Current/Best: 7.67/ 21.32 GFLOPS | Progress: (20/20) | 19.99 s Done.
-
[Task 24/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 24/25] Current/Best: 8.34/ 8.34 GFLOPS | Progress: (4/20) | 13.53 s
[Task 24/25] Current/Best: 3.35/ 8.34 GFLOPS | Progress: (8/20) | 29.47 s
[Task 24/25] Current/Best: 4.00/ 8.34 GFLOPS | Progress: (12/20) | 53.44 s
[Task 24/25] Current/Best: 6.84/ 8.80 GFLOPS | Progress: (16/20) | 59.14 s Done.
-
[Task 24/25] Current/Best: 3.16/ 8.87 GFLOPS | Progress: (20/20) | 65.29 s Done.
-
[Task 25/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 25/25] Current/Best: 1.55/ 2.89 GFLOPS | Progress: (4/20) | 29.63 s
[Task 25/25] Current/Best: 5.41/ 7.82 GFLOPS | Progress: (8/20) | 350.91 s
[Task 25/25] Current/Best: 5.85/ 7.82 GFLOPS | Progress: (12/20) | 379.60 s
[Task 25/25] Current/Best: 5.76/ 8.76 GFLOPS | Progress: (16/20) | 381.31 s
[Task 25/25] Current/Best: 2.92/ 8.76 GFLOPS | Progress: (20/20) | 401.62 s
+
[Task 15/25] Current/Best: 16.17/ 17.59 GFLOPS | Progress: (4/20) | 2.59 s
[Task 15/25] Current/Best: 14.36/ 18.10 GFLOPS | Progress: (8/20) | 4.09 s
[Task 15/25] Current/Best: 10.37/ 22.23 GFLOPS | Progress: (12/20) | 6.17 s
[Task 15/25] Current/Best: 20.32/ 22.23 GFLOPS | Progress: (16/20) | 9.06 s
[Task 15/25] Current/Best: 9.69/ 22.23 GFLOPS | Progress: (20/20) | 10.20 s
[Task 16/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 16/25] Current/Best: 20.64/ 20.64 GFLOPS | Progress: (4/20) | 2.83 s
[Task 16/25] Current/Best: 3.04/ 20.64 GFLOPS | Progress: (8/20) | 4.43 s
[Task 16/25] Current/Best: 19.47/ 20.64 GFLOPS | Progress: (12/20) | 5.64 s
[Task 16/25] Current/Best: 17.83/ 20.64 GFLOPS | Progress: (16/20) | 6.97 s
[Task 16/25] Current/Best: 10.02/ 21.88 GFLOPS | Progress: (20/20) | 8.98 s Done.
+
[Task 17/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 17/25] Current/Best: 13.05/ 18.88 GFLOPS | Progress: (4/20) | 4.62 s
[Task 17/25] Current/Best: 14.34/ 23.37 GFLOPS | Progress: (8/20) | 7.45 s
[Task 17/25] Current/Best: 16.79/ 23.37 GFLOPS | Progress: (12/20) | 9.48 s
[Task 17/25] Current/Best: 16.51/ 23.37 GFLOPS | Progress: (16/20) | 11.64 s
[Task 17/25] Current/Best: 10.03/ 23.37 GFLOPS | Progress: (20/20) | 13.75 s Done.
+
[Task 18/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 18/25] Current/Best: 11.31/ 17.76 GFLOPS | Progress: (4/20) | 3.62 s
[Task 18/25] Current/Best: 10.62/ 19.84 GFLOPS | Progress: (8/20) | 7.09 s
[Task 18/25] Current/Best: 18.92/ 19.84 GFLOPS | Progress: (12/20) | 9.02 s
[Task 18/25] Current/Best: 10.03/ 19.84 GFLOPS | Progress: (16/20) | 12.64 s
[Task 18/25] Current/Best: 20.63/ 20.63 GFLOPS | Progress: (20/20) | 14.16 s Done.
+
[Task 19/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 19/25] Current/Best: 7.16/ 20.40 GFLOPS | Progress: (4/20) | 5.90 s
[Task 19/25] Current/Best: 2.61/ 20.40 GFLOPS | Progress: (8/20) | 9.19 s
[Task 19/25] Current/Best: 19.93/ 21.90 GFLOPS | Progress: (12/20) | 12.00 s
[Task 19/25] Current/Best: 14.24/ 21.90 GFLOPS | Progress: (16/20) | 14.89 s
[Task 19/25] Current/Best: 2.70/ 23.52 GFLOPS | Progress: (20/20) | 17.75 s Done.
+
[Task 20/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 20/25] Current/Best: 9.65/ 15.18 GFLOPS | Progress: (4/20) | 3.24 s
[Task 20/25] Current/Best: 9.95/ 15.18 GFLOPS | Progress: (8/20) | 6.62 s
[Task 20/25] Current/Best: 2.32/ 16.74 GFLOPS | Progress: (12/20) | 10.52 s Done.
+
[Task 20/25] Current/Best: 12.29/ 16.74 GFLOPS | Progress: (16/20) | 14.08 s
[Task 20/25] Current/Best: 12.44/ 22.17 GFLOPS | Progress: (20/20) | 16.15 s Done.
+
[Task 21/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 21/25] Current/Best: 6.42/ 17.68 GFLOPS | Progress: (4/20) | 3.16 s
[Task 21/25] Current/Best: 14.64/ 17.68 GFLOPS | Progress: (8/20) | 4.68 s
[Task 21/25] Current/Best: 1.61/ 17.68 GFLOPS | Progress: (12/20) | 6.79 s
[Task 21/25] Current/Best: 18.13/ 18.13 GFLOPS | Progress: (16/20) | 10.19 s
[Task 21/25] Current/Best: 4.47/ 18.13 GFLOPS | Progress: (20/20) | 17.31 s
[Task 22/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 22/25] Current/Best: 2.70/ 16.90 GFLOPS | Progress: (4/20) | 2.59 s
[Task 22/25] Current/Best: 8.80/ 21.96 GFLOPS | Progress: (8/20) | 4.57 s
[Task 22/25] Current/Best: 19.90/ 21.96 GFLOPS | Progress: (12/20) | 6.86 s
[Task 22/25] Current/Best: 15.27/ 21.96 GFLOPS | Progress: (16/20) | 8.90 s
[Task 22/25] Current/Best: 13.80/ 21.96 GFLOPS | Progress: (20/20) | 10.61 s Done.
+
[Task 23/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 23/25] Current/Best: 17.60/ 20.44 GFLOPS | Progress: (4/20) | 3.15 s
[Task 23/25] Current/Best: 13.95/ 20.44 GFLOPS | Progress: (8/20) | 6.53 s
[Task 23/25] Current/Best: 21.00/ 21.78 GFLOPS | Progress: (12/20) | 8.30 s
[Task 23/25] Current/Best: 6.38/ 21.78 GFLOPS | Progress: (16/20) | 15.32 s
[Task 23/25] Current/Best: 7.75/ 21.78 GFLOPS | Progress: (20/20) | 19.51 s Done.
+
[Task 24/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 24/25] Current/Best: 8.36/ 8.36 GFLOPS | Progress: (4/20) | 13.47 s
[Task 24/25] Current/Best: 2.08/ 8.36 GFLOPS | Progress: (8/20) | 30.30 s
[Task 24/25] Current/Best: 4.30/ 8.36 GFLOPS | Progress: (12/20) | 53.72 s
[Task 24/25] Current/Best: 6.01/ 8.43 GFLOPS | Progress: (16/20) | 59.09 s Done.
+
[Task 24/25] Current/Best: 3.40/ 8.82 GFLOPS | Progress: (20/20) | 65.13 s Done.
+
[Task 25/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
[Task 25/25] Current/Best: 1.55/ 2.75 GFLOPS | Progress: (4/20) | 32.19 s
[Task 25/25] Current/Best: 5.96/ 7.61 GFLOPS | Progress: (8/20) | 329.27 s
[Task 25/25] Current/Best: 5.98/ 7.61 GFLOPS | Progress: (12/20) | 357.33 s
[Task 25/25] Current/Best: 5.76/ 8.75 GFLOPS | Progress: (16/20) | 359.22 s
[Task 25/25] Current/Best: 2.83/ 8.75 GFLOPS | Progress: (20/20) | 379.51 s
The output from this tuning process will look something like this:
@@ -651,8 +651,8 @@ improvement in comparing the optimized model to the unoptimized model.
.. code-block:: none
- optimized: {'mean': 413.66940738000267, 'median': 413.8154794499883, 'std': 0.809620000670305}
- unoptimized: {'mean': 496.7864342000008, 'median': 496.7793454000031, 'std': 0.5719564614245756}
+ optimized: {'mean': 412.6867303899985, 'median': 412.5973116999944, 'std': 1.1433429448509558}
+ unoptimized: {'mean': 496.2918664199986, 'median': 496.27590769999586, 'std': 0.634099203437212}
@@ -672,7 +672,7 @@ profiling/benchmarking.
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 16 minutes 50.926 seconds)
+ **Total running time of the script:** ( 16 minutes 7.791 seconds)
.. _sphx_glr_download_tutorial_autotvm_relay_x86.py:
diff --git a/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt b/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
index 6b62c46c7..c4b934ac1 100644
--- a/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
+++ b/docs/_sources/tutorial/cross_compilation_and_rpc.rst.txt
@@ -235,7 +235,7 @@ device and returns the measured cost. Network overhead is excluded.
.. code-block:: none
- 1.267e-07 secs/op
+ 1.241e-07 secs/op
diff --git a/docs/_sources/tutorial/intro_topi.rst.txt b/docs/_sources/tutorial/intro_topi.rst.txt
index b366fd070..4990f603f 100644
--- a/docs/_sources/tutorial/intro_topi.rst.txt
+++ b/docs/_sources/tutorial/intro_topi.rst.txt
@@ -233,7 +233,7 @@ As you can see, scheduled stages of computation have been accumulated and we can
.. code-block:: none
- [stage(a, placeholder(a, 0x4a5cb60)), stage(b, placeholder(b, 0x4468040)), stage(T_add, compute(T_add, body=[(a[ax0, ax1, ax2] + b[ax1, ax2])], axis=[iter_var(ax0, range(min=0, ext=100)), iter_var(ax1, range(min=0, ext=10)), iter_var(ax2, range(min=0, ext=10))], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[(a[ax0, ax1, ax2]*b[ax1, ax2])], axis=[iter_var(ax0, range(min=0, ext=100)), iter_var(ax1, range(min=0, ext=10)), iter_var(ax2, range(min= [...]
+ [stage(a, placeholder(a, 0x12e0f470)), stage(b, placeholder(b, 0xde86e50)), stage(T_add, compute(T_add, body=[(a[ax0, ax1, ax2] + b[ax1, ax2])], axis=[iter_var(ax0, range(min=0, ext=100)), iter_var(ax1, range(min=0, ext=10)), iter_var(ax2, range(min=0, ext=10))], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[(a[ax0, ax1, ax2]*b[ax1, ax2])], axis=[iter_var(ax0, range(min=0, ext=100)), iter_var(ax1, range(min=0, ext=10)), iter_var(ax2, range(min [...]
diff --git a/docs/_sources/tutorial/sg_execution_times.rst.txt b/docs/_sources/tutorial/sg_execution_times.rst.txt
index 3f394fa7a..b15acbed4 100644
--- a/docs/_sources/tutorial/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorial/sg_execution_times.rst.txt
@@ -5,17 +5,17 @@
Computation times
=================
-**19:34.763** total execution time for **tutorial** files:
+**19:08.716** total execution time for **tutorial** files:
-- **16:50.926**: :ref:`sphx_glr_tutorial_autotvm_relay_x86.py` (``autotvm_relay_x86.py``)
-- **01:01.319**: :ref:`sphx_glr_tutorial_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)
-- **00:49.549**: :ref:`sphx_glr_tutorial_auto_scheduler_matmul_x86.py` (``auto_scheduler_matmul_x86.py``)
-- **00:26.900**: :ref:`sphx_glr_tutorial_relay_quick_start.py` (``relay_quick_start.py``)
-- **00:23.842**: :ref:`sphx_glr_tutorial_autotvm_matmul_x86.py` (``autotvm_matmul_x86.py``)
-- **00:01.069**: :ref:`sphx_glr_tutorial_tensor_ir_blitz_course.py` (``tensor_ir_blitz_course.py``)
-- **00:00.738**: :ref:`sphx_glr_tutorial_intro_topi.py` (``intro_topi.py``)
-- **00:00.218**: :ref:`sphx_glr_tutorial_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``)
-- **00:00.055**: :ref:`sphx_glr_tutorial_introduction.py` (``introduction.py``)
-- **00:00.050**: :ref:`sphx_glr_tutorial_install.py` (``install.py``)
-- **00:00.049**: :ref:`sphx_glr_tutorial_tvmc_python.py` (``tvmc_python.py``)
-- **00:00.049**: :ref:`sphx_glr_tutorial_tvmc_command_line_driver.py` (``tvmc_command_line_driver.py``)
+- **16:07.791**: :ref:`sphx_glr_tutorial_autotvm_relay_x86.py` (``autotvm_relay_x86.py``)
+- **01:06.694**: :ref:`sphx_glr_tutorial_auto_scheduler_matmul_x86.py` (``auto_scheduler_matmul_x86.py``)
+- **01:01.757**: :ref:`sphx_glr_tutorial_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)
+- **00:26.128**: :ref:`sphx_glr_tutorial_relay_quick_start.py` (``relay_quick_start.py``)
+- **00:23.991**: :ref:`sphx_glr_tutorial_autotvm_matmul_x86.py` (``autotvm_matmul_x86.py``)
+- **00:01.278**: :ref:`sphx_glr_tutorial_tensor_ir_blitz_course.py` (``tensor_ir_blitz_course.py``)
+- **00:00.703**: :ref:`sphx_glr_tutorial_intro_topi.py` (``intro_topi.py``)
+- **00:00.195**: :ref:`sphx_glr_tutorial_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``)
+- **00:00.050**: :ref:`sphx_glr_tutorial_introduction.py` (``introduction.py``)
+- **00:00.049**: :ref:`sphx_glr_tutorial_install.py` (``install.py``)
+- **00:00.042**: :ref:`sphx_glr_tutorial_tvmc_command_line_driver.py` (``tvmc_command_line_driver.py``)
+- **00:00.038**: :ref:`sphx_glr_tutorial_tvmc_python.py` (``tvmc_python.py``)
diff --git a/docs/_sources/tutorial/tensor_expr_get_started.rst.txt b/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
index a9624b738..ab1383bb6 100644
--- a/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
+++ b/docs/_sources/tutorial/tensor_expr_get_started.rst.txt
@@ -243,8 +243,8 @@ helper function to run a profile of the TVM generated code.
.. code-block:: none
- Numpy running time: 0.000008
- naive: 0.000006
+ Numpy running time: 0.000007
+ naive: 0.000008
@@ -335,7 +335,7 @@ compile and run this new schedule with the parallel operation applied:
.. code-block:: none
- parallel: 0.000006
+ parallel: 0.000007
@@ -438,10 +438,10 @@ We can now compare the different schedules
.. code-block:: none
Operator Timing Performance
- numpy 8.49692999963736e-06 1.0
- naive 5.8957e-06 0.6938623714978965
- parallel 6.388199999999999e-06 0.751824482521645
- vector 2.46776e-05 2.904296022334327
+ numpy 6.76600000133476e-06 1.0
+ naive 7.9438e-06 1.1740762634396822
+ parallel 6.897e-06 1.0193615132485068
+ vector 2.4566200000000002e-05 3.6308306229904987
@@ -830,7 +830,7 @@ matrix multiplication.
.. code-block:: none
- Numpy running time: 0.019851
+ Numpy running time: 0.018184
@@ -886,7 +886,7 @@ optimizations.
.. code-block:: none
- none: 3.404720
+ none: 3.479903
@@ -985,7 +985,7 @@ schedule.
.. code-block:: none
- blocking: 0.311770
+ blocking: 0.307421
@@ -1077,7 +1077,7 @@ already cache friendly from our previous optimizations.
.. code-block:: none
- vectorization: 0.343860
+ vectorization: 0.335066
@main = primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
buffers = {A: Buffer(A_2: Pointer(float32), float32, [1048576], []),
@@ -1149,7 +1149,7 @@ more cache friendly.
.. code-block:: none
- loop permutation: 0.122788
+ loop permutation: 0.114581
@main = primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
buffers = {A: Buffer(A_2: Pointer(float32), float32, [1048576], []),
@@ -1246,7 +1246,7 @@ optimized schedule.
.. code-block:: none
- array packing: 0.111482
+ array packing: 0.108276
@main = primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
buffers = {A: Buffer(A_2: Pointer(float32), float32, [1048576], []),
@@ -1337,7 +1337,7 @@ to `C` when all the block results are ready.
.. code-block:: none
- block caching: 0.111112
+ block caching: 0.110911
@main = primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
buffers = {A: Buffer(A_2: Pointer(float32), float32, [1048576], []),
@@ -1421,7 +1421,7 @@ of thread-level parallelization.
.. code-block:: none
- parallelization: 0.145702
+ parallelization: 0.144936
@main = primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
buffers = {A: Buffer(A_2: Pointer(float32), float32, [1048576], []),
@@ -1500,13 +1500,13 @@ working, we can compare the results.
.. code-block:: none
Operator Timing Performance
- none 3.4047198742 1.0
- blocking 0.3117700605 0.09156995935627625
- vectorization 0.3438601121 0.10099512582684826
- loop permutation 0.1227878452 0.03606400812309154
- array packing 0.11148248000000001 0.03274351022085035
- block caching 0.1111120836 0.03263472112404189
- parallelization 0.14570197940000001 0.042794116633232654
+ none 3.4799027516 1.0
+ blocking 0.3074208406 0.08834179071775874
+ vectorization 0.33506588530000003 0.09628599108005029
+ loop permutation 0.11458059760000001 0.03292637920623437
+ array packing 0.10827570610000001 0.031114578144523345
+ block caching 0.1109108221 0.031871816546886284
+ parallelization 0.1449361969 0.04164949633531018
@@ -1543,7 +1543,7 @@ the computation for specific platforms.
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 1.319 seconds)
+ **Total running time of the script:** ( 1 minutes 1.757 seconds)
.. _sphx_glr_download_tutorial_tensor_expr_get_started.py:
diff --git a/docs/commit_hash b/docs/commit_hash
index c8654996e..2965c2777 100644
--- a/docs/commit_hash
+++ b/docs/commit_hash
@@ -1 +1 @@
-50997035befc0383dcba21808ab739d9ed8df08c
+d0999bbd3b40b9466cc3b5c01f2b4b7fb09b478d
diff --git a/docs/how_to/compile_models/from_mxnet.html b/docs/how_to/compile_models/from_mxnet.html
index e2f5b6b2c..a62f86950 100644
--- a/docs/how_to/compile_models/from_mxnet.html
+++ b/docs/how_to/compile_models/from_mxnet.html
@@ -401,7 +401,7 @@
</div>
<img alt="../../_images/sphx_glr_from_mxnet_001.png" class="sphx-glr-single-img" src="../../_images/sphx_glr_from_mxnet_001.png" />
<p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip51e829dd-1d99-4b44-9836-8d4391dba25d from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/resnet18_v1-a0666292.zip25d94653-141a-4692-983a-bb4507787197 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet18_v1-a0666292.zip...
x (1, 3, 224, 224)
</pre></div>
</div>
diff --git a/docs/how_to/compile_models/from_oneflow.html b/docs/how_to/compile_models/from_oneflow.html
index 443a5ad9e..bd1274569 100644
--- a/docs/how_to/compile_models/from_oneflow.html
+++ b/docs/how_to/compile_models/from_oneflow.html
@@ -406,45 +406,47 @@ python3 -m pip install -f https://release.oneflow.info <span class="nv">oneflow<
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading: "https://oneflow-public.oss-cn-beijing.aliyuncs.com/model_zoo/flowvision/classification/ResNet/resnet18.zip" to /workspace/.oneflow/flowvision_cache/resnet18.zip
0%| | 0.00/41.5M [00:00<?, ?B/s]
- 0%| | 16.0k/41.5M [00:00<08:46, 82.6kB/s]
- 0%| | 48.0k/41.5M [00:00<05:32, 131kB/s]
- 0%| | 96.0k/41.5M [00:00<03:56, 184kB/s]
- 0%| | 160k/41.5M [00:00<02:59, 241kB/s]
- 1%| | 288k/41.5M [00:00<01:50, 391kB/s]
- 1%|1 | 552k/41.5M [00:01<00:59, 719kB/s]
- 3%|2 | 1.05M/41.5M [00:01<00:31, 1.36MB/s]
- 5%|4 | 2.06M/41.5M [00:01<00:15, 2.63MB/s]
- 9%|8 | 3.54M/41.5M [00:01<00:09, 4.23MB/s]
- 12%|#2 | 5.00M/41.5M [00:01<00:07, 5.30MB/s]
- 16%|#5 | 6.48M/41.5M [00:02<00:06, 6.05MB/s]
- 19%|#9 | 7.95M/41.5M [00:02<00:05, 6.57MB/s]
- 23%|##2 | 9.43M/41.5M [00:02<00:04, 6.93MB/s]
- 26%|##6 | 10.9M/41.5M [00:02<00:04, 7.17MB/s]
- 30%|##9 | 12.4M/41.5M [00:02<00:04, 7.35MB/s]
- 33%|###3 | 13.9M/41.5M [00:03<00:03, 7.48MB/s]
- 37%|###6 | 15.3M/41.5M [00:03<00:03, 7.55MB/s]
- 41%|#### | 16.8M/41.5M [00:03<00:03, 7.61MB/s]
- 44%|####4 | 18.3M/41.5M [00:03<00:03, 7.65MB/s]
- 48%|####7 | 19.7M/41.5M [00:03<00:02, 8.98MB/s]
- 50%|####9 | 20.7M/41.5M [00:03<00:02, 9.07MB/s]
- 52%|#####2 | 21.6M/41.5M [00:04<00:02, 7.63MB/s]
- 55%|#####4 | 22.7M/41.5M [00:04<00:02, 7.01MB/s]
- 58%|#####8 | 24.1M/41.5M [00:04<00:02, 8.59MB/s]
- 60%|###### | 25.1M/41.5M [00:04<00:01, 8.76MB/s]
- 63%|######2 | 26.0M/41.5M [00:04<00:02, 7.33MB/s]
- 65%|######5 | 27.1M/41.5M [00:04<00:02, 6.81MB/s]
- 69%|######8 | 28.6M/41.5M [00:05<00:01, 8.51MB/s]
- 71%|#######1 | 29.5M/41.5M [00:05<00:01, 8.69MB/s]
- 73%|#######3 | 30.4M/41.5M [00:05<00:01, 7.27MB/s]
- 76%|#######5 | 31.5M/41.5M [00:05<00:01, 6.77MB/s]
- 79%|#######9 | 33.0M/41.5M [00:05<00:01, 8.49MB/s]
- 82%|########1 | 33.9M/41.5M [00:05<00:00, 8.67MB/s]
- 84%|########3 | 34.8M/41.5M [00:05<00:00, 7.25MB/s]
- 87%|########6 | 35.9M/41.5M [00:06<00:00, 6.77MB/s]
- 90%|######### | 37.4M/41.5M [00:06<00:00, 7.10MB/s]
- 94%|#########3| 38.9M/41.5M [00:06<00:00, 7.32MB/s]
- 97%|#########7| 40.3M/41.5M [00:06<00:00, 7.43MB/s]
-100%|##########| 41.5M/41.5M [00:06<00:00, 6.37MB/s]
+ 0%| | 16.0k/41.5M [00:00<08:03, 90.0kB/s]
+ 0%| | 40.0k/41.5M [00:00<06:14, 116kB/s]
+ 0%| | 88.0k/41.5M [00:00<03:53, 186kB/s]
+ 0%| | 144k/41.5M [00:00<03:03, 236kB/s]
+ 1%| | 280k/41.5M [00:00<01:41, 425kB/s]
+ 1%|1 | 488k/41.5M [00:01<01:03, 677kB/s]
+ 2%|2 | 920k/41.5M [00:01<00:34, 1.25MB/s]
+ 4%|4 | 1.75M/41.5M [00:01<00:17, 2.39MB/s]
+ 8%|7 | 3.22M/41.5M [00:01<00:09, 4.29MB/s]
+ 11%|#1 | 4.69M/41.5M [00:01<00:06, 5.57MB/s]
+ 15%|#4 | 6.16M/41.5M [00:02<00:05, 6.44MB/s]
+ 18%|#8 | 7.62M/41.5M [00:02<00:05, 7.04MB/s]
+ 22%|##1 | 9.09M/41.5M [00:02<00:04, 7.46MB/s]
+ 25%|##5 | 10.6M/41.5M [00:02<00:04, 7.74MB/s]
+ 29%|##8 | 12.0M/41.5M [00:02<00:03, 7.95MB/s]
+ 33%|###2 | 13.5M/41.5M [00:02<00:03, 8.09MB/s]
+ 36%|###6 | 15.0M/41.5M [00:03<00:03, 8.18MB/s]
+ 40%|###9 | 16.4M/41.5M [00:03<00:03, 8.24MB/s]
+ 43%|####3 | 17.9M/41.5M [00:03<00:02, 8.29MB/s]
+ 47%|####6 | 19.4M/41.5M [00:03<00:02, 8.33MB/s]
+ 50%|##### | 20.8M/41.5M [00:03<00:02, 8.49MB/s]
+ 54%|#####3 | 22.3M/41.5M [00:03<00:02, 9.62MB/s]
+ 56%|#####6 | 23.2M/41.5M [00:04<00:02, 9.47MB/s]
+ 58%|#####8 | 24.2M/41.5M [00:04<00:02, 8.40MB/s]
+ 61%|###### | 25.2M/41.5M [00:04<00:02, 8.26MB/s]
+ 64%|######4 | 26.6M/41.5M [00:04<00:01, 9.78MB/s]
+ 67%|######6 | 27.6M/41.5M [00:04<00:01, 9.10MB/s]
+ 69%|######8 | 28.5M/41.5M [00:04<00:01, 7.83MB/s]
+ 71%|#######1 | 29.6M/41.5M [00:04<00:01, 8.58MB/s]
+ 74%|#######3 | 30.6M/41.5M [00:04<00:01, 8.79MB/s]
+ 76%|#######5 | 31.5M/41.5M [00:05<00:01, 7.77MB/s]
+ 78%|#######8 | 32.5M/41.5M [00:05<00:01, 8.61MB/s]
+ 81%|######## | 33.5M/41.5M [00:05<00:00, 8.81MB/s]
+ 83%|########2 | 34.4M/41.5M [00:05<00:00, 7.75MB/s]
+ 86%|########5 | 35.5M/41.5M [00:05<00:00, 8.57MB/s]
+ 88%|########7 | 36.4M/41.5M [00:05<00:00, 8.81MB/s]
+ 90%|########9 | 37.3M/41.5M [00:05<00:00, 7.72MB/s]
+ 93%|#########2| 38.4M/41.5M [00:06<00:00, 7.57MB/s]
+ 96%|#########6| 39.9M/41.5M [00:06<00:00, 9.32MB/s]
+ 98%|#########8| 40.8M/41.5M [00:06<00:00, 8.87MB/s]
+100%|##########| 41.5M/41.5M [00:06<00:00, 6.79MB/s]
</pre></div>
</div>
</div>
diff --git a/docs/how_to/compile_models/from_paddle.html b/docs/how_to/compile_models/from_paddle.html
index d7175ef00..e70da95a9 100644
--- a/docs/how_to/compile_models/from_paddle.html
+++ b/docs/how_to/compile_models/from_paddle.html
@@ -464,7 +464,7 @@ A quick solution is</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>TVM prediction top-1 id: 282, class name: 282: 'tiger cat',
</pre></div>
</div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 7.692 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 4.138 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-paddle-py">
<div class="sphx-glr-download docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/16269b77359771348d507395692524cf/from_paddle.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_paddle.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/from_pytorch.html b/docs/how_to/compile_models/from_pytorch.html
index 368d50ede..208300dec 100644
--- a/docs/how_to/compile_models/from_pytorch.html
+++ b/docs/how_to/compile_models/from_pytorch.html
@@ -387,12 +387,37 @@ be unstable.</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /workspace/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
0%| | 0.00/44.7M [00:00<?, ?B/s]
- 12%|#1 | 5.20M/44.7M [00:00<00:00, 53.9MB/s]
- 31%|###1 | 13.9M/44.7M [00:00<00:00, 75.8MB/s]
- 47%|####7 | 21.1M/44.7M [00:00<00:00, 73.2MB/s]
- 71%|####### | 31.5M/44.7M [00:00<00:00, 86.2MB/s]
- 93%|#########2| 41.5M/44.7M [00:00<00:00, 92.8MB/s]
-100%|##########| 44.7M/44.7M [00:00<00:00, 87.6MB/s]
+ 3%|3 | 1.50M/44.7M [00:00<00:02, 15.4MB/s]
+ 7%|7 | 3.19M/44.7M [00:00<00:02, 16.6MB/s]
+ 11%|# | 4.78M/44.7M [00:00<00:02, 16.1MB/s]
+ 14%|#4 | 6.38M/44.7M [00:00<00:02, 16.3MB/s]
+ 18%|#8 | 8.12M/44.7M [00:00<00:02, 17.0MB/s]
+ 22%|##1 | 9.75M/44.7M [00:00<00:02, 16.7MB/s]
+ 26%|##5 | 11.6M/44.7M [00:00<00:02, 17.3MB/s]
+ 30%|##9 | 13.2M/44.7M [00:00<00:02, 16.3MB/s]
+ 33%|###3 | 14.8M/44.7M [00:00<00:01, 16.0MB/s]
+ 37%|###6 | 16.5M/44.7M [00:01<00:01, 16.1MB/s]
+ 40%|#### | 18.0M/44.7M [00:01<00:01, 15.4MB/s]
+ 44%|####4 | 19.7M/44.7M [00:01<00:01, 15.9MB/s]
+ 48%|####7 | 21.2M/44.7M [00:01<00:01, 15.1MB/s]
+ 51%|##### | 22.7M/44.7M [00:01<00:01, 14.0MB/s]
+ 54%|#####3 | 24.0M/44.7M [00:01<00:01, 14.0MB/s]
+ 57%|#####6 | 25.4M/44.7M [00:01<00:01, 14.0MB/s]
+ 60%|#####9 | 26.7M/44.7M [00:01<00:01, 13.9MB/s]
+ 63%|######2 | 28.1M/44.7M [00:01<00:01, 13.4MB/s]
+ 67%|######7 | 30.0M/44.7M [00:02<00:01, 15.1MB/s]
+ 70%|####### | 31.4M/44.7M [00:02<00:00, 14.8MB/s]
+ 74%|#######3 | 32.9M/44.7M [00:02<00:00, 13.5MB/s]
+ 77%|#######6 | 34.2M/44.7M [00:02<00:00, 12.7MB/s]
+ 79%|#######9 | 35.4M/44.7M [00:02<00:00, 10.4MB/s]
+ 82%|########1 | 36.5M/44.7M [00:02<00:00, 10.0MB/s]
+ 84%|########3 | 37.5M/44.7M [00:02<00:00, 9.47MB/s]
+ 87%|########6 | 38.7M/44.7M [00:02<00:00, 10.3MB/s]
+ 90%|########9 | 40.2M/44.7M [00:03<00:00, 11.4MB/s]
+ 92%|#########2| 41.3M/44.7M [00:03<00:00, 11.0MB/s]
+ 95%|#########4| 42.4M/44.7M [00:03<00:00, 11.0MB/s]
+ 98%|#########8| 43.8M/44.7M [00:03<00:00, 11.9MB/s]
+100%|##########| 44.7M/44.7M [00:03<00:00, 13.6MB/s]
</pre></div>
</div>
</div>
diff --git a/docs/how_to/compile_models/from_tensorflow.html b/docs/how_to/compile_models/from_tensorflow.html
index de17c5c3a..17c98094f 100644
--- a/docs/how_to/compile_models/from_tensorflow.html
+++ b/docs/how_to/compile_models/from_tensorflow.html
@@ -607,7 +607,7 @@ banana (score = 0.00022)
desk (score = 0.00019)
</pre></div>
</div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 6.408 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 4.194 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-compile-models-from-tensorflow-py">
<div class="sphx-glr-download docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/7f1d3d1b878694c201c614c807cdebc8/from_tensorflow.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">from_tensorflow.py</span></code></a></p>
diff --git a/docs/how_to/compile_models/sg_execution_times.html b/docs/how_to/compile_models/sg_execution_times.html
index def091026..e214b6ffd 100644
--- a/docs/how_to/compile_models/sg_execution_times.html
+++ b/docs/how_to/compile_models/sg_execution_times.html
@@ -300,18 +300,18 @@
<div class="section" id="computation-times">
<span id="sphx-glr-how-to-compile-models-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>05:30.548</strong> total execution time for <strong>how_to_compile_models</strong> files:</p>
+<p><strong>05:20.193</strong> total execution time for <strong>how_to_compile_models</strong> files:</p>
<ul class="simple">
-<li><p><strong>01:07.692</strong>: <a class="reference internal" href="from_paddle.html#sphx-glr-how-to-compile-models-from-paddle-py"><span class="std std-ref">Compile PaddlePaddle Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_paddle.py</span></code>)</p></li>
-<li><p><strong>01:06.408</strong>: <a class="reference internal" href="from_tensorflow.html#sphx-glr-how-to-compile-models-from-tensorflow-py"><span class="std std-ref">Compile Tensorflow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tensorflow.py</span></code>)</p></li>
-<li><p><strong>00:57.800</strong>: <a class="reference internal" href="from_darknet.html#sphx-glr-how-to-compile-models-from-darknet-py"><span class="std std-ref">Compile YOLO-V2 and YOLO-V3 in DarkNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_darknet.py</span></code>)</p></li>
-<li><p><strong>00:32.301</strong>: <a class="reference internal" href="from_oneflow.html#sphx-glr-how-to-compile-models-from-oneflow-py"><span class="std std-ref">Compile OneFlow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_oneflow.py</span></code>)</p></li>
-<li><p><strong>00:25.078</strong>: <a class="reference internal" href="from_tflite.html#sphx-glr-how-to-compile-models-from-tflite-py"><span class="std std-ref">Compile TFLite Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tflite.py</span></code>)</p></li>
-<li><p><strong>00:22.836</strong>: <a class="reference internal" href="from_coreml.html#sphx-glr-how-to-compile-models-from-coreml-py"><span class="std std-ref">Compile CoreML Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_coreml.py</span></code>)</p></li>
-<li><p><strong>00:21.658</strong>: <a class="reference internal" href="from_mxnet.html#sphx-glr-how-to-compile-models-from-mxnet-py"><span class="std std-ref">Compile MXNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_mxnet.py</span></code>)</p></li>
-<li><p><strong>00:20.176</strong>: <a class="reference internal" href="from_pytorch.html#sphx-glr-how-to-compile-models-from-pytorch-py"><span class="std std-ref">Compile PyTorch Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_pytorch.py</span></code>)</p></li>
-<li><p><strong>00:13.909</strong>: <a class="reference internal" href="from_keras.html#sphx-glr-how-to-compile-models-from-keras-py"><span class="std std-ref">Compile Keras Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_keras.py</span></code>)</p></li>
-<li><p><strong>00:02.691</strong>: <a class="reference internal" href="from_onnx.html#sphx-glr-how-to-compile-models-from-onnx-py"><span class="std std-ref">Compile ONNX Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_onnx.py</span></code>)</p></li>
+<li><p><strong>01:04.194</strong>: <a class="reference internal" href="from_tensorflow.html#sphx-glr-how-to-compile-models-from-tensorflow-py"><span class="std std-ref">Compile Tensorflow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tensorflow.py</span></code>)</p></li>
+<li><p><strong>01:04.138</strong>: <a class="reference internal" href="from_paddle.html#sphx-glr-how-to-compile-models-from-paddle-py"><span class="std std-ref">Compile PaddlePaddle Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_paddle.py</span></code>)</p></li>
+<li><p><strong>00:56.314</strong>: <a class="reference internal" href="from_darknet.html#sphx-glr-how-to-compile-models-from-darknet-py"><span class="std std-ref">Compile YOLO-V2 and YOLO-V3 in DarkNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_darknet.py</span></code>)</p></li>
+<li><p><strong>00:30.639</strong>: <a class="reference internal" href="from_oneflow.html#sphx-glr-how-to-compile-models-from-oneflow-py"><span class="std std-ref">Compile OneFlow Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_oneflow.py</span></code>)</p></li>
+<li><p><strong>00:24.464</strong>: <a class="reference internal" href="from_tflite.html#sphx-glr-how-to-compile-models-from-tflite-py"><span class="std std-ref">Compile TFLite Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_tflite.py</span></code>)</p></li>
+<li><p><strong>00:22.029</strong>: <a class="reference internal" href="from_pytorch.html#sphx-glr-how-to-compile-models-from-pytorch-py"><span class="std std-ref">Compile PyTorch Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_pytorch.py</span></code>)</p></li>
+<li><p><strong>00:21.340</strong>: <a class="reference internal" href="from_coreml.html#sphx-glr-how-to-compile-models-from-coreml-py"><span class="std std-ref">Compile CoreML Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_coreml.py</span></code>)</p></li>
+<li><p><strong>00:21.037</strong>: <a class="reference internal" href="from_mxnet.html#sphx-glr-how-to-compile-models-from-mxnet-py"><span class="std std-ref">Compile MXNet Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_mxnet.py</span></code>)</p></li>
+<li><p><strong>00:13.535</strong>: <a class="reference internal" href="from_keras.html#sphx-glr-how-to-compile-models-from-keras-py"><span class="std std-ref">Compile Keras Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_keras.py</span></code>)</p></li>
+<li><p><strong>00:02.503</strong>: <a class="reference internal" href="from_onnx.html#sphx-glr-how-to-compile-models-from-onnx-py"><span class="std std-ref">Compile ONNX Models</span></a> (<code class="docutils literal notranslate"><span class="pre">from_onnx.py</span></code>)</p></li>
</ul>
</div>
diff --git a/docs/how_to/deploy_models/deploy_model_on_android.html b/docs/how_to/deploy_models/deploy_model_on_android.html
index 3dc7a4fb3..85692d9e8 100644
--- a/docs/how_to/deploy_models/deploy_model_on_android.html
+++ b/docs/how_to/deploy_models/deploy_model_on_android.html
@@ -622,7 +622,7 @@ to the remote android device.</p>
Evaluate inference time cost...
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 16.6092 16.5794 17.4391 16.3050 0.3147
+ 15.8462 15.8339 15.9624 15.7438 0.0632
</pre></div>
</div>
</div>
diff --git a/docs/how_to/deploy_models/deploy_object_detection_pytorch.html b/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
index 347ce3063..4eb4ad4ba 100644
--- a/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
+++ b/docs/how_to/deploy_models/deploy_object_detection_pytorch.html
@@ -409,18 +409,115 @@ be unstable.</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading: "https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth" to /workspace/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
0%| | 0.00/170M [00:00<?, ?B/s]
- 4%|3 | 6.31M/170M [00:00<00:02, 65.9MB/s]
- 12%|#1 | 20.3M/170M [00:00<00:01, 113MB/s]
- 23%|##3 | 39.5M/170M [00:00<00:00, 153MB/s]
- 32%|###2 | 54.5M/170M [00:00<00:00, 155MB/s]
- 41%|#### | 69.3M/170M [00:00<00:00, 149MB/s]
- 49%|####9 | 83.6M/170M [00:00<00:00, 148MB/s]
- 58%|#####7 | 97.7M/170M [00:00<00:00, 147MB/s]
- 66%|######5 | 112M/170M [00:00<00:00, 145MB/s]
- 74%|#######4 | 126M/170M [00:00<00:00, 147MB/s]
- 83%|########2 | 141M/170M [00:01<00:00, 149MB/s]
- 91%|#########1| 155M/170M [00:01<00:00, 150MB/s]
-100%|##########| 170M/170M [00:01<00:00, 146MB/s]
+ 1%| | 1.12M/170M [00:00<00:15, 11.7MB/s]
+ 1%|1 | 2.42M/170M [00:00<00:13, 12.7MB/s]
+ 2%|2 | 4.01M/170M [00:00<00:12, 13.9MB/s]
+ 3%|3 | 5.34M/170M [00:00<00:13, 12.9MB/s]
+ 4%|3 | 6.58M/170M [00:00<00:13, 12.8MB/s]
+ 5%|4 | 7.94M/170M [00:00<00:12, 13.1MB/s]
+ 5%|5 | 9.20M/170M [00:00<00:13, 12.3MB/s]
+ 6%|6 | 10.5M/170M [00:00<00:13, 12.5MB/s]
+ 8%|7 | 12.9M/170M [00:00<00:10, 16.3MB/s]
+ 9%|8 | 14.5M/170M [00:01<00:10, 15.5MB/s]
+ 10%|# | 17.1M/170M [00:01<00:09, 17.6MB/s]
+ 11%|#1 | 18.7M/170M [00:01<00:09, 17.4MB/s]
+ 12%|#2 | 20.4M/170M [00:01<00:09, 17.3MB/s]
+ 13%|#3 | 22.1M/170M [00:01<00:09, 16.3MB/s]
+ 14%|#3 | 23.7M/170M [00:01<00:09, 16.3MB/s]
+ 15%|#4 | 25.2M/170M [00:01<00:10, 14.8MB/s]
+ 16%|#5 | 26.7M/170M [00:01<00:10, 14.7MB/s]
+ 17%|#6 | 28.4M/170M [00:01<00:09, 15.0MB/s]
+ 18%|#7 | 29.8M/170M [00:02<00:09, 15.0MB/s]
+ 18%|#8 | 31.4M/170M [00:02<00:10, 14.1MB/s]
+ 19%|#9 | 32.7M/170M [00:02<00:10, 13.8MB/s]
+ 20%|## | 34.1M/170M [00:02<00:11, 12.2MB/s]
+ 21%|## | 35.3M/170M [00:02<00:12, 11.6MB/s]
+ 21%|##1 | 36.4M/170M [00:02<00:12, 11.2MB/s]
+ 22%|##2 | 37.9M/170M [00:02<00:11, 12.2MB/s]
+ 23%|##3 | 39.6M/170M [00:02<00:10, 13.7MB/s]
+ 24%|##4 | 41.4M/170M [00:03<00:08, 15.2MB/s]
+ 25%|##5 | 43.1M/170M [00:03<00:08, 16.0MB/s]
+ 26%|##6 | 44.7M/170M [00:03<00:08, 16.3MB/s]
+ 27%|##7 | 46.4M/170M [00:03<00:07, 16.5MB/s]
+ 28%|##8 | 48.0M/170M [00:03<00:07, 16.5MB/s]
+ 30%|##9 | 50.4M/170M [00:03<00:06, 18.8MB/s]
+ 31%|### | 52.2M/170M [00:03<00:06, 18.6MB/s]
+ 32%|###1 | 54.0M/170M [00:03<00:07, 16.6MB/s]
+ 33%|###2 | 55.6M/170M [00:03<00:07, 16.2MB/s]
+ 34%|###3 | 57.7M/170M [00:03<00:06, 17.6MB/s]
+ 35%|###4 | 59.4M/170M [00:04<00:06, 17.4MB/s]
+ 36%|###5 | 61.1M/170M [00:04<00:07, 14.5MB/s]
+ 37%|###6 | 62.6M/170M [00:04<00:08, 13.2MB/s]
+ 38%|###7 | 64.2M/170M [00:04<00:07, 14.2MB/s]
+ 39%|###8 | 65.6M/170M [00:04<00:08, 12.5MB/s]
+ 39%|###9 | 66.9M/170M [00:04<00:09, 11.4MB/s]
+ 40%|#### | 68.4M/170M [00:04<00:08, 12.4MB/s]
+ 41%|####1 | 69.7M/170M [00:05<00:08, 11.9MB/s]
+ 42%|####1 | 71.0M/170M [00:05<00:08, 12.0MB/s]
+ 43%|####2 | 72.2M/170M [00:05<00:08, 12.0MB/s]
+ 43%|####3 | 73.9M/170M [00:05<00:08, 12.3MB/s]
+ 44%|####4 | 75.4M/170M [00:05<00:07, 13.1MB/s]
+ 45%|####5 | 77.1M/170M [00:05<00:06, 14.1MB/s]
+ 46%|####6 | 78.4M/170M [00:05<00:06, 13.7MB/s]
+ 47%|####7 | 79.9M/170M [00:05<00:07, 11.8MB/s]
+ 48%|####7 | 81.1M/170M [00:06<00:09, 10.0MB/s]
+ 49%|####8 | 82.7M/170M [00:06<00:07, 11.6MB/s]
+ 49%|####9 | 84.1M/170M [00:06<00:07, 12.1MB/s]
+ 50%|##### | 85.3M/170M [00:06<00:08, 11.0MB/s]
+ 51%|##### | 86.4M/170M [00:06<00:08, 10.8MB/s]
+ 52%|#####1 | 87.6M/170M [00:06<00:07, 11.1MB/s]
+ 52%|#####2 | 88.7M/170M [00:06<00:08, 10.5MB/s]
+ 53%|#####3 | 90.3M/170M [00:06<00:06, 12.2MB/s]
+ 54%|#####4 | 92.1M/170M [00:06<00:05, 13.7MB/s]
+ 55%|#####5 | 93.5M/170M [00:07<00:05, 13.7MB/s]
+ 56%|#####6 | 95.5M/170M [00:07<00:04, 15.7MB/s]
+ 57%|#####7 | 97.0M/170M [00:07<00:05, 14.4MB/s]
+ 58%|#####7 | 98.5M/170M [00:07<00:05, 14.5MB/s]
+ 59%|#####8 | 99.9M/170M [00:07<00:05, 13.5MB/s]
+ 60%|#####9 | 101M/170M [00:07<00:05, 12.9MB/s]
+ 60%|###### | 102M/170M [00:07<00:05, 12.0MB/s]
+ 61%|######1 | 104M/170M [00:07<00:05, 12.2MB/s]
+ 62%|######1 | 105M/170M [00:08<00:05, 12.5MB/s]
+ 63%|######2 | 106M/170M [00:08<00:05, 12.7MB/s]
+ 63%|######3 | 108M/170M [00:08<00:05, 11.8MB/s]
+ 64%|######4 | 109M/170M [00:08<00:05, 11.9MB/s]
+ 65%|######4 | 110M/170M [00:08<00:05, 11.8MB/s]
+ 65%|######5 | 111M/170M [00:08<00:06, 9.54MB/s]
+ 66%|######6 | 112M/170M [00:08<00:05, 10.4MB/s]
+ 67%|######6 | 113M/170M [00:08<00:05, 10.6MB/s]
+ 68%|######7 | 115M/170M [00:08<00:04, 12.5MB/s]
+ 69%|######8 | 117M/170M [00:09<00:04, 13.0MB/s]
+ 69%|######9 | 118M/170M [00:09<00:04, 12.9MB/s]
+ 70%|####### | 119M/170M [00:09<00:03, 13.8MB/s]
+ 71%|#######1 | 121M/170M [00:09<00:03, 14.3MB/s]
+ 72%|#######2 | 122M/170M [00:09<00:04, 11.9MB/s]
+ 73%|#######2 | 124M/170M [00:09<00:03, 12.8MB/s]
+ 74%|#######3 | 125M/170M [00:09<00:03, 13.9MB/s]
+ 75%|#######4 | 127M/170M [00:09<00:03, 14.4MB/s]
+ 76%|#######5 | 128M/170M [00:09<00:03, 14.4MB/s]
+ 76%|#######6 | 130M/170M [00:10<00:02, 14.5MB/s]
+ 77%|#######7 | 131M/170M [00:10<00:02, 14.1MB/s]
+ 78%|#######8 | 133M/170M [00:10<00:02, 13.6MB/s]
+ 79%|#######8 | 134M/170M [00:10<00:02, 14.2MB/s]
+ 80%|#######9 | 135M/170M [00:10<00:02, 12.9MB/s]
+ 81%|########1 | 138M/170M [00:10<00:02, 15.5MB/s]
+ 82%|########2 | 140M/170M [00:10<00:01, 17.2MB/s]
+ 84%|########3 | 142M/170M [00:10<00:01, 18.4MB/s]
+ 85%|########4 | 144M/170M [00:10<00:01, 19.3MB/s]
+ 86%|########5 | 146M/170M [00:11<00:01, 19.4MB/s]
+ 87%|########7 | 148M/170M [00:11<00:01, 19.0MB/s]
+ 89%|########8 | 151M/170M [00:11<00:00, 21.8MB/s]
+ 90%|######### | 153M/170M [00:11<00:00, 20.6MB/s]
+ 91%|#########1| 155M/170M [00:11<00:00, 16.5MB/s]
+ 92%|#########2| 157M/170M [00:11<00:00, 17.4MB/s]
+ 93%|#########3| 159M/170M [00:11<00:00, 17.3MB/s]
+ 94%|#########4| 160M/170M [00:11<00:00, 17.2MB/s]
+ 96%|#########5| 163M/170M [00:11<00:00, 17.8MB/s]
+ 97%|#########6| 164M/170M [00:12<00:00, 17.0MB/s]
+ 98%|#########7| 166M/170M [00:12<00:00, 15.0MB/s]
+ 99%|#########8| 168M/170M [00:12<00:00, 14.8MB/s]
+100%|#########9| 169M/170M [00:12<00:00, 16.0MB/s]
+100%|##########| 170M/170M [00:12<00:00, 14.3MB/s]
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:3878: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
for i in range(dim)
/usr/local/lib/python3.7/dist-packages/torchvision/models/detection/anchor_utils.py:127: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
@@ -513,7 +610,7 @@ torchvision rcnn models.</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Get 9 valid boxes
</pre></div>
</div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 14.969 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 14.476 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-object-detection-pytorch-py">
<div class="sphx-glr-download docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/7795da4b258c8feff986668b95ef57ad/deploy_object_detection_pytorch.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_object_detection_pytorch.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_prequantized.html b/docs/how_to/deploy_models/deploy_prequantized.html
index be451026d..f7dec57a8 100644
--- a/docs/how_to/deploy_models/deploy_prequantized.html
+++ b/docs/how_to/deploy_models/deploy_prequantized.html
@@ -450,12 +450,10 @@ training. Other models require a full post training calibration.</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading: "https://download.pytorch.org/models/mobilenet_v2-b0353104.pth" to /workspace/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth
0%| | 0.00/13.6M [00:00<?, ?B/s]
- 19%|#9 | 2.62M/13.6M [00:00<00:00, 27.1MB/s]
- 40%|###9 | 5.38M/13.6M [00:00<00:00, 28.0MB/s]
- 59%|#####9 | 8.05M/13.6M [00:00<00:00, 23.4MB/s]
- 76%|#######6 | 10.4M/13.6M [00:00<00:00, 22.3MB/s]
- 94%|#########3| 12.7M/13.6M [00:00<00:00, 22.9MB/s]
-100%|##########| 13.6M/13.6M [00:00<00:00, 23.2MB/s]
+ 11%|# | 1.44M/13.6M [00:00<00:00, 15.1MB/s]
+ 31%|### | 4.17M/13.6M [00:00<00:00, 23.1MB/s]
+ 68%|######7 | 9.20M/13.6M [00:00<00:00, 36.4MB/s]
+100%|##########| 13.6M/13.6M [00:00<00:00, 39.9MB/s]
</pre></div>
</div>
</div>
@@ -544,7 +542,7 @@ output values are identical out of 1000 outputs from mobilenet v2.</p>
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 90.7040 90.8425 93.0880 90.1547 0.4057
+ 90.4965 90.4885 91.1011 90.1071 0.2119
</pre></div>
</div>
<div class="admonition note">
@@ -583,7 +581,7 @@ This includes support for the VNNI 8 bit dot product instruction (CascadeLake or
<div class="section" id="deploy-a-quantized-tflite-model">
<h2>Deploy a quantized TFLite Model<a class="headerlink" href="#deploy-a-quantized-tflite-model" title="Permalink to this headline">¶</a></h2>
<p>TODO</p>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 7.981 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 4.896 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-prequantized-py">
<div class="sphx-glr-download docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/fb8217c13f4351224c6cf3aacf1a87fc/deploy_prequantized.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_prequantized.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_prequantized_tflite.html b/docs/how_to/deploy_models/deploy_prequantized_tflite.html
index 87f2b71c6..5e11c54d0 100644
--- a/docs/how_to/deploy_models/deploy_prequantized_tflite.html
+++ b/docs/how_to/deploy_models/deploy_prequantized_tflite.html
@@ -540,7 +540,7 @@ TFLite Top-5 labels: [387 102 386 341 349]
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 119.6547 119.6150 120.5882 118.8113 0.3526
+ 118.7466 118.6988 124.2167 117.9157 0.6235
</pre></div>
</div>
<div class="admonition note">
@@ -568,7 +568,7 @@ network for ARM CPU</span></a>.</p></li>
</ul>
</div></blockquote>
</div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 0.512 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 1.474 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-prequantized-tflite-py">
<div class="sphx-glr-download docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/56691c7a27d45da61d112276334640d3/deploy_prequantized_tflite.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_prequantized_tflite.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_quantized.html b/docs/how_to/deploy_models/deploy_quantized.html
index b48ea3e6d..05b5c2c28 100644
--- a/docs/how_to/deploy_models/deploy_quantized.html
+++ b/docs/how_to/deploy_models/deploy_quantized.html
@@ -480,7 +480,7 @@ for calibration. But the accuracy might be impacted.</p>
DeprecationWarning,
</pre></div>
</div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 18.771 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 16.612 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-quantized-py">
<div class="sphx-glr-download docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/7810ecf51bfc05f7d5e8a400ac3e815d/deploy_quantized.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_quantized.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/deploy_ssd_gluoncv.html b/docs/how_to/deploy_models/deploy_ssd_gluoncv.html
index 1a273b848..a767ea33e 100644
--- a/docs/how_to/deploy_models/deploy_ssd_gluoncv.html
+++ b/docs/how_to/deploy_models/deploy_ssd_gluoncv.html
@@ -415,23 +415,28 @@ to your device.</p>
Downloading /workspace/.mxnet/models/ssd_512_resnet50_v1_voc-9c8b225a.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/ssd_512_resnet50_v1_voc-9c8b225a.zip...
0%| | 0/132723 [00:00<?, ?KB/s]
- 4%|4 | 5753/132723 [00:00<00:02, 57520.04KB/s]
- 10%|9 | 12722/132723 [00:00<00:01, 64675.74KB/s]
- 15%|#5 | 20193/132723 [00:00<00:01, 69247.47KB/s]
- 22%|##1 | 28684/132723 [00:00<00:01, 75427.22KB/s]
- 27%|##7 | 36227/132723 [00:00<00:01, 60414.75KB/s]
- 34%|###3 | 44625/132723 [00:00<00:01, 67295.27KB/s]
- 40%|###9 | 53074/132723 [00:00<00:01, 72355.01KB/s]
- 46%|####6 | 61536/132723 [00:00<00:00, 75984.80KB/s]
- 53%|#####2 | 69958/132723 [00:00<00:00, 78430.89KB/s]
- 59%|#####9 | 78345/132723 [00:01<00:00, 80050.08KB/s]
- 65%|######5 | 86648/132723 [00:01<00:00, 80938.42KB/s]
- 72%|#######1 | 95046/132723 [00:01<00:00, 81845.87KB/s]
- 78%|#######7 | 103464/132723 [00:01<00:00, 82543.00KB/s]
- 84%|########4 | 111766/132723 [00:01<00:00, 82684.72KB/s]
- 91%|######### | 120173/132723 [00:01<00:00, 83096.59KB/s]
- 97%|#########6| 128708/132723 [00:01<00:00, 83769.14KB/s]
-100%|##########| 132723/132723 [00:01<00:00, 77266.43KB/s]
+ 1%|1 | 1629/132723 [00:00<00:08, 15108.34KB/s]
+ 3%|3 | 4430/132723 [00:00<00:05, 22460.26KB/s]
+ 7%|6 | 9070/132723 [00:00<00:03, 33093.30KB/s]
+ 10%|# | 13726/132723 [00:00<00:03, 38319.29KB/s]
+ 15%|#4 | 19870/132723 [00:00<00:02, 46503.08KB/s]
+ 21%|## | 27445/132723 [00:00<00:01, 56376.95KB/s]
+ 27%|##6 | 35593/132723 [00:00<00:01, 64543.51KB/s]
+ 32%|###1 | 42398/132723 [00:00<00:01, 64878.02KB/s]
+ 37%|###6 | 48897/132723 [00:00<00:01, 61712.49KB/s]
+ 42%|####2 | 55859/132723 [00:01<00:01, 64045.03KB/s]
+ 47%|####6 | 62299/132723 [00:01<00:01, 63333.29KB/s]
+ 52%|#####2 | 69418/132723 [00:01<00:00, 65650.66KB/s]
+ 57%|#####7 | 76198/132723 [00:01<00:00, 66286.39KB/s]
+ 63%|######2 | 83422/132723 [00:01<00:00, 68052.47KB/s]
+ 68%|######7 | 90243/132723 [00:01<00:00, 63112.72KB/s]
+ 74%|#######3 | 97621/132723 [00:01<00:00, 66131.07KB/s]
+ 80%|#######9 | 105781/132723 [00:01<00:00, 70583.46KB/s]
+ 85%|########5 | 112911/132723 [00:01<00:00, 66947.25KB/s]
+ 90%|######### | 119689/132723 [00:02<00:00, 61302.31KB/s]
+ 95%|#########4| 125951/132723 [00:02<00:00, 54798.91KB/s]
+100%|#########9| 132411/132723 [00:02<00:00, 57294.21KB/s]
+100%|##########| 132723/132723 [00:02<00:00, 58546.79KB/s]
</pre></div>
</div>
<p>Create TVM runtime and do inference
@@ -471,7 +476,7 @@ Downloading /workspace/.mxnet/models/ssd_512_resnet50_v1_voc-9c8b225a.zip from h
</pre></div>
</div>
<img alt="../../_images/sphx_glr_deploy_ssd_gluoncv_001.png" class="sphx-glr-single-img" src="../../_images/sphx_glr_deploy_ssd_gluoncv_001.png" />
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 30.048 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 22.199 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-deploy-models-deploy-ssd-gluoncv-py">
<div class="sphx-glr-download docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/cccb17d28e5e8b2e94ea8cd5ec59f6ed/deploy_ssd_gluoncv.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">deploy_ssd_gluoncv.py</span></code></a></p>
diff --git a/docs/how_to/deploy_models/sg_execution_times.html b/docs/how_to/deploy_models/sg_execution_times.html
index 1d721c8f2..9a14758ad 100644
--- a/docs/how_to/deploy_models/sg_execution_times.html
+++ b/docs/how_to/deploy_models/sg_execution_times.html
@@ -300,16 +300,16 @@
<div class="section" id="computation-times">
<span id="sphx-glr-how-to-deploy-models-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>11:05.635</strong> total execution time for <strong>how_to_deploy_models</strong> files:</p>
+<p><strong>10:49.724</strong> total execution time for <strong>how_to_deploy_models</strong> files:</p>
<ul class="simple">
-<li><p><strong>03:14.969</strong>: <a class="reference internal" href="deploy_object_detection_pytorch.html#sphx-glr-how-to-deploy-models-deploy-object-detection-pytorch-py"><span class="std std-ref">Compile PyTorch Object Detection Models</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_object_detection_pytorch.py</span></code>)</p></li>
-<li><p><strong>02:30.048</strong>: <a class="reference internal" href="deploy_ssd_gluoncv.html#sphx-glr-how-to-deploy-models-deploy-ssd-gluoncv-py"><span class="std std-ref">Deploy Single Shot Multibox Detector(SSD) model</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_ssd_gluoncv.py</span></code>)</p></li>
-<li><p><strong>02:00.512</strong>: <a class="reference internal" href="deploy_prequantized_tflite.html#sphx-glr-how-to-deploy-models-deploy-prequantized-tflite-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM - Part 3 (TFLite)</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized_tflite.py</span></code>)</p></li>
-<li><p><strong>01:18.771</strong>: <a class="reference internal" href="deploy_quantized.html#sphx-glr-how-to-deploy-models-deploy-quantized-py"><span class="std std-ref">Deploy a Quantized Model on Cuda</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_quantized.py</span></code>)</p></li>
-<li><p><strong>01:07.981</strong>: <a class="reference internal" href="deploy_prequantized.html#sphx-glr-how-to-deploy-models-deploy-prequantized-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized.py</span></code>)</p></li>
-<li><p><strong>00:30.029</strong>: <a class="reference internal" href="deploy_model_on_android.html#sphx-glr-how-to-deploy-models-deploy-model-on-android-py"><span class="std std-ref">Deploy the Pretrained Model on Android</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_android.py</span></code>)</p></li>
-<li><p><strong>00:23.119</strong>: <a class="reference internal" href="deploy_model_on_rasp.html#sphx-glr-how-to-deploy-models-deploy-model-on-rasp-py"><span class="std std-ref">Deploy the Pretrained Model on Raspberry Pi</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_rasp.py</span></code>)</p></li>
-<li><p><strong>00:00.207</strong>: <a class="reference internal" href="deploy_sparse.html#sphx-glr-how-to-deploy-models-deploy-sparse-py"><span class="std std-ref">Deploy a Hugging Face Pruned Model on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_sparse.py</span></code>)</p></li>
+<li><p><strong>03:14.476</strong>: <a class="reference internal" href="deploy_object_detection_pytorch.html#sphx-glr-how-to-deploy-models-deploy-object-detection-pytorch-py"><span class="std std-ref">Compile PyTorch Object Detection Models</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_object_detection_pytorch.py</span></code>)</p></li>
+<li><p><strong>02:22.199</strong>: <a class="reference internal" href="deploy_ssd_gluoncv.html#sphx-glr-how-to-deploy-models-deploy-ssd-gluoncv-py"><span class="std std-ref">Deploy Single Shot Multibox Detector(SSD) model</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_ssd_gluoncv.py</span></code>)</p></li>
+<li><p><strong>02:01.474</strong>: <a class="reference internal" href="deploy_prequantized_tflite.html#sphx-glr-how-to-deploy-models-deploy-prequantized-tflite-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM - Part 3 (TFLite)</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized_tflite.py</span></code>)</p></li>
+<li><p><strong>01:16.612</strong>: <a class="reference internal" href="deploy_quantized.html#sphx-glr-how-to-deploy-models-deploy-quantized-py"><span class="std std-ref">Deploy a Quantized Model on Cuda</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_quantized.py</span></code>)</p></li>
+<li><p><strong>01:04.896</strong>: <a class="reference internal" href="deploy_prequantized.html#sphx-glr-how-to-deploy-models-deploy-prequantized-py"><span class="std std-ref">Deploy a Framework-prequantized Model with TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_prequantized.py</span></code>)</p></li>
+<li><p><strong>00:28.366</strong>: <a class="reference internal" href="deploy_model_on_android.html#sphx-glr-how-to-deploy-models-deploy-model-on-android-py"><span class="std std-ref">Deploy the Pretrained Model on Android</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_android.py</span></code>)</p></li>
+<li><p><strong>00:21.500</strong>: <a class="reference internal" href="deploy_model_on_rasp.html#sphx-glr-how-to-deploy-models-deploy-model-on-rasp-py"><span class="std std-ref">Deploy the Pretrained Model on Raspberry Pi</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_model_on_rasp.py</span></code>)</p></li>
+<li><p><strong>00:00.200</strong>: <a class="reference internal" href="deploy_sparse.html#sphx-glr-how-to-deploy-models-deploy-sparse-py"><span class="std std-ref">Deploy a Hugging Face Pruned Model on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_sparse.py</span></code>)</p></li>
</ul>
</div>
diff --git a/docs/how_to/extend_tvm/bring_your_own_datatypes.html b/docs/how_to/extend_tvm/bring_your_own_datatypes.html
index d8ee27977..9696984aa 100644
--- a/docs/how_to/extend_tvm/bring_your_own_datatypes.html
+++ b/docs/how_to/extend_tvm/bring_your_own_datatypes.html
@@ -588,7 +588,7 @@ In this alpha state of the Bring Your Own Datatypes framework, we have not imple
</pre></div>
</div>
<p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zip1f6ea228-ff1b-4af1-9183-cf78189f33d5 from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Downloading /workspace/.mxnet/models/mobilenet0.25-9f83e440.zipd3f3452b-7645-428a-b305-d7484200241d from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
</pre></div>
</div>
<p>It’s easy to execute MobileNet with native TVM:</p>
diff --git a/docs/how_to/extend_tvm/sg_execution_times.html b/docs/how_to/extend_tvm/sg_execution_times.html
index cef58ed8b..6313c6772 100644
--- a/docs/how_to/extend_tvm/sg_execution_times.html
+++ b/docs/how_to/extend_tvm/sg_execution_times.html
@@ -300,12 +300,12 @@
<div class="section" id="computation-times">
<span id="sphx-glr-how-to-extend-tvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:39.153</strong> total execution time for <strong>how_to_extend_tvm</strong> files:</p>
+<p><strong>00:37.975</strong> total execution time for <strong>how_to_extend_tvm</strong> files:</p>
<ul class="simple">
-<li><p><strong>00:35.506</strong>: <a class="reference internal" href="bring_your_own_datatypes.html#sphx-glr-how-to-extend-tvm-bring-your-own-datatypes-py"><span class="std std-ref">Bring Your Own Datatypes to TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">bring_your_own_datatypes.py</span></code>)</p></li>
-<li><p><strong>00:02.343</strong>: <a class="reference internal" href="use_pass_instrument.html#sphx-glr-how-to-extend-tvm-use-pass-instrument-py"><span class="std std-ref">How to Use TVM Pass Instrument</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_instrument.py</span></code>)</p></li>
-<li><p><strong>00:01.089</strong>: <a class="reference internal" href="use_pass_infra.html#sphx-glr-how-to-extend-tvm-use-pass-infra-py"><span class="std std-ref">How to Use TVM Pass Infra</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_infra.py</span></code>)</p></li>
-<li><p><strong>00:00.214</strong>: <a class="reference internal" href="low_level_custom_pass.html#sphx-glr-how-to-extend-tvm-low-level-custom-pass-py"><span class="std std-ref">Writing a Customized Pass</span></a> (<code class="docutils literal notranslate"><span class="pre">low_level_custom_pass.py</span></code>)</p></li>
+<li><p><strong>00:34.444</strong>: <a class="reference internal" href="bring_your_own_datatypes.html#sphx-glr-how-to-extend-tvm-bring-your-own-datatypes-py"><span class="std std-ref">Bring Your Own Datatypes to TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">bring_your_own_datatypes.py</span></code>)</p></li>
+<li><p><strong>00:02.280</strong>: <a class="reference internal" href="use_pass_instrument.html#sphx-glr-how-to-extend-tvm-use-pass-instrument-py"><span class="std std-ref">How to Use TVM Pass Instrument</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_instrument.py</span></code>)</p></li>
+<li><p><strong>00:01.053</strong>: <a class="reference internal" href="use_pass_infra.html#sphx-glr-how-to-extend-tvm-use-pass-infra-py"><span class="std std-ref">How to Use TVM Pass Infra</span></a> (<code class="docutils literal notranslate"><span class="pre">use_pass_infra.py</span></code>)</p></li>
+<li><p><strong>00:00.199</strong>: <a class="reference internal" href="low_level_custom_pass.html#sphx-glr-how-to-extend-tvm-low-level-custom-pass-py"><span class="std std-ref">Writing a Customized Pass</span></a> (<code class="docutils literal notranslate"><span class="pre">low_level_custom_pass.py</span></code>)</p></li>
</ul>
</div>
diff --git a/docs/how_to/extend_tvm/use_pass_instrument.html b/docs/how_to/extend_tvm/use_pass_instrument.html
index aa2687b37..02a8c4a72 100644
--- a/docs/how_to/extend_tvm/use_pass_instrument.html
+++ b/docs/how_to/extend_tvm/use_pass_instrument.html
@@ -486,10 +486,10 @@ profile the execution time of each passes.</p>
</div>
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Printing results of timing profile...
-InferType: 6158us [6158us] (45.74%; 45.74%)
-FoldScaleAxis: 7306us [2us] (54.26%; 54.26%)
- FoldConstant: 7303us [1510us] (54.25%; 99.97%)
- InferType: 5794us [5794us] (43.03%; 79.33%)
+InferType: 6241us [6241us] (45.74%; 45.74%)
+FoldScaleAxis: 7403us [2us] (54.26%; 54.26%)
+ FoldConstant: 7401us [1519us] (54.25%; 99.97%)
+ InferType: 5882us [5882us] (43.11%; 79.48%)
</pre></div>
</div>
</div>
@@ -512,10 +512,10 @@ Refer to following sections and <a class="reference internal" href="../../refere
</div>
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Printing results of timing profile...
-InferType: 5874us [5874us] (44.88%; 44.88%)
-FoldScaleAxis: 7213us [2us] (55.12%; 55.12%)
- FoldConstant: 7211us [1508us] (55.10%; 99.97%)
- InferType: 5703us [5703us] (43.58%; 79.09%)
+InferType: 6028us [6028us] (44.55%; 44.55%)
+FoldScaleAxis: 7503us [2us] (55.45%; 55.45%)
+ FoldConstant: 7501us [1571us] (55.43%; 99.97%)
+ InferType: 5930us [5930us] (43.83%; 79.06%)
</pre></div>
</div>
<p>Register empty list to clear existing instruments.</p>
diff --git a/docs/how_to/optimize_operators/opt_conv_cuda.html b/docs/how_to/optimize_operators/opt_conv_cuda.html
index b7ae9140f..a7181535e 100644
--- a/docs/how_to/optimize_operators/opt_conv_cuda.html
+++ b/docs/how_to/optimize_operators/opt_conv_cuda.html
@@ -534,7 +534,7 @@ latency of convolution.</p>
</pre></div>
</div>
<p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Convolution: 52.730581 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Convolution: 35.315314 ms
</pre></div>
</div>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-optimize-operators-opt-conv-cuda-py">
diff --git a/docs/how_to/optimize_operators/opt_conv_tensorcore.html b/docs/how_to/optimize_operators/opt_conv_tensorcore.html
index f08ff42a3..3606e5350 100644
--- a/docs/how_to/optimize_operators/opt_conv_tensorcore.html
+++ b/docs/how_to/optimize_operators/opt_conv_tensorcore.html
@@ -878,7 +878,7 @@ be able to run on our build server</p>
</pre></div>
</div>
<p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>conv2d with tensor core: 6.616086 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>conv2d with tensor core: 8.942790 ms
</pre></div>
</div>
</div>
diff --git a/docs/how_to/optimize_operators/opt_gemm.html b/docs/how_to/optimize_operators/opt_gemm.html
index 31179cd04..91bd98471 100644
--- a/docs/how_to/optimize_operators/opt_gemm.html
+++ b/docs/how_to/optimize_operators/opt_gemm.html
@@ -431,8 +431,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
</pre></div>
</div>
<p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.019359
-Baseline: 3.477377
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Numpy running time: 0.018525
+Baseline: 3.480135
</pre></div>
</div>
<p>In TVM, we can always inspect lower level IR to debug or optimize our schedule.
@@ -494,7 +494,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
</pre></div>
</div>
<p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt1: 0.309022
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt1: 0.294867
</pre></div>
</div>
<p>Here is the generated IR after blocking.</p>
@@ -563,7 +563,7 @@ vastly.</p>
</pre></div>
</div>
<p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt2: 0.340348
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt2: 0.330186
</pre></div>
</div>
<p>Here is the generated IR after vectorization.</p>
@@ -626,7 +626,7 @@ the access pattern for A matrix is more cache friendly.</p>
</pre></div>
</div>
<p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt3: 0.122857
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt3: 0.117279
</pre></div>
</div>
<p>Here is the generated IR after loop permutation.</p>
@@ -711,7 +711,7 @@ flattening.</p>
</pre></div>
</div>
<p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt4: 0.112846
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt4: 0.112383
</pre></div>
</div>
<p>Here is the generated IR after array packing.</p>
@@ -799,7 +799,7 @@ write to C when all the block results are ready.</p>
</pre></div>
</div>
<p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt5: 0.113382
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt5: 0.111201
</pre></div>
</div>
<p>Here is the generated IR after blocking.</p>
@@ -891,7 +891,7 @@ write to C when all the block results are ready.</p>
</pre></div>
</div>
<p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt6: 0.146819
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Opt6: 0.145077
</pre></div>
</div>
<p>Here is the generated IR after parallelization.</p>
diff --git a/docs/how_to/optimize_operators/sg_execution_times.html b/docs/how_to/optimize_operators/sg_execution_times.html
index 1ff018810..c0ac97da0 100644
--- a/docs/how_to/optimize_operators/sg_execution_times.html
+++ b/docs/how_to/optimize_operators/sg_execution_times.html
@@ -300,11 +300,11 @@
<div class="section" id="computation-times">
<span id="sphx-glr-how-to-optimize-operators-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:35.754</strong> total execution time for <strong>how_to_optimize_operators</strong> files:</p>
+<p><strong>00:35.131</strong> total execution time for <strong>how_to_optimize_operators</strong> files:</p>
<ul class="simple">
-<li><p><strong>00:33.065</strong>: <a class="reference internal" href="opt_gemm.html#sphx-glr-how-to-optimize-operators-opt-gemm-py"><span class="std std-ref">How to optimize GEMM on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_gemm.py</span></code>)</p></li>
-<li><p><strong>00:01.422</strong>: <a class="reference internal" href="opt_conv_tensorcore.html#sphx-glr-how-to-optimize-operators-opt-conv-tensorcore-py"><span class="std std-ref">How to optimize convolution using TensorCores</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_tensorcore.py</span></code>)</p></li>
-<li><p><strong>00:01.267</strong>: <a class="reference internal" href="opt_conv_cuda.html#sphx-glr-how-to-optimize-operators-opt-conv-cuda-py"><span class="std std-ref">How to optimize convolution on GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_cuda.py</span></code>)</p></li>
+<li><p><strong>00:32.474</strong>: <a class="reference internal" href="opt_gemm.html#sphx-glr-how-to-optimize-operators-opt-gemm-py"><span class="std std-ref">How to optimize GEMM on CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_gemm.py</span></code>)</p></li>
+<li><p><strong>00:01.458</strong>: <a class="reference internal" href="opt_conv_tensorcore.html#sphx-glr-how-to-optimize-operators-opt-conv-tensorcore-py"><span class="std std-ref">How to optimize convolution using TensorCores</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_tensorcore.py</span></code>)</p></li>
+<li><p><strong>00:01.199</strong>: <a class="reference internal" href="opt_conv_cuda.html#sphx-glr-how-to-optimize-operators-opt-conv-cuda-py"><span class="std std-ref">How to optimize convolution on GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">opt_conv_cuda.py</span></code>)</p></li>
</ul>
</div>
diff --git a/docs/how_to/tune_with_autoscheduler/sg_execution_times.html b/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
index 4b1e73f0f..99c9185e5 100644
--- a/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
+++ b/docs/how_to/tune_with_autoscheduler/sg_execution_times.html
@@ -300,14 +300,14 @@
<div class="section" id="computation-times">
<span id="sphx-glr-how-to-tune-with-autoscheduler-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>05:09.442</strong> total execution time for <strong>how_to_tune_with_autoscheduler</strong> files:</p>
+<p><strong>04:54.695</strong> total execution time for <strong>how_to_tune_with_autoscheduler</strong> files:</p>
<ul class="simple">
-<li><p><strong>02:27.217</strong>: <a class="reference internal" href="tune_conv2d_layer_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-conv2d-layer-cuda-py"><span class="std std-ref">Auto-scheduling a Convolution Layer for GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_layer_cuda.py</span></code>)</p></li>
-<li><p><strong>01:21.204</strong>: <a class="reference internal" href="tune_network_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-x86-py"><span class="std std-ref">Auto-scheduling a Neural Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_x86.py</span></code>)</p></li>
-<li><p><strong>00:41.339</strong>: <a class="reference internal" href="tune_network_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-cuda-py"><span class="std std-ref">Auto-scheduling a Neural Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_cuda.py</span></code>)</p></li>
-<li><p><strong>00:21.812</strong>: <a class="reference internal" href="tune_sparse_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-sparse-x86-py"><span class="std std-ref">Auto-scheduling Sparse Matrix Multiplication on CPU with Custom Sketch Rule</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_sparse_x86.py</span></code>)</p></li>
-<li><p><strong>00:08.989</strong>: <a class="reference internal" href="tune_network_mali.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-mali-py"><span class="std std-ref">Auto-scheduling a Neural Network for mali GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_mali.py</span></code>)</p></li>
-<li><p><strong>00:08.881</strong>: <a class="reference internal" href="tune_network_arm.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-arm-py"><span class="std std-ref">Auto-scheduling a Neural Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_arm.py</span></code>)</p></li>
+<li><p><strong>02:20.941</strong>: <a class="reference internal" href="tune_conv2d_layer_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-conv2d-layer-cuda-py"><span class="std std-ref">Auto-scheduling a Convolution Layer for GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_layer_cuda.py</span></code>)</p></li>
+<li><p><strong>01:18.949</strong>: <a class="reference internal" href="tune_network_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-x86-py"><span class="std std-ref">Auto-scheduling a Neural Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_x86.py</span></code>)</p></li>
+<li><p><strong>00:40.281</strong>: <a class="reference internal" href="tune_network_cuda.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-cuda-py"><span class="std std-ref">Auto-scheduling a Neural Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_cuda.py</span></code>)</p></li>
+<li><p><strong>00:17.236</strong>: <a class="reference internal" href="tune_sparse_x86.html#sphx-glr-how-to-tune-with-autoscheduler-tune-sparse-x86-py"><span class="std std-ref">Auto-scheduling Sparse Matrix Multiplication on CPU with Custom Sketch Rule</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_sparse_x86.py</span></code>)</p></li>
+<li><p><strong>00:08.934</strong>: <a class="reference internal" href="tune_network_mali.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-mali-py"><span class="std std-ref">Auto-scheduling a Neural Network for mali GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_mali.py</span></code>)</p></li>
+<li><p><strong>00:08.355</strong>: <a class="reference internal" href="tune_network_arm.html#sphx-glr-how-to-tune-with-autoscheduler-tune-network-arm-py"><span class="std std-ref">Auto-scheduling a Neural Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_network_arm.py</span></code>)</p></li>
</ul>
</div>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html b/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
index d5419a889..516f7fb8e 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_conv2d_layer_cuda.html
@@ -470,472 +470,232 @@ cooperative fetching, unrolling and operator fusion.</p>
compute: Buffer(compute_2: Pointer(float32), float32, [25088], [])}
buffer_map = {data_1: data, kernel_1: kernel, bias_1: bias, compute_1: compute}
preflattened_buffer_map = {data_1: data_3: Buffer(data_2, float32, [1, 512, 7, 7], []), kernel_1: kernel_3: Buffer(kernel_2, float32, [512, 512, 3, 3], []), bias_1: bias_3: Buffer(bias_2, float32, [1, 512, 1, 1], []), compute_1: compute_3: Buffer(compute_2, float32, [1, 512, 7, 7], [])} {
- attr [IterVar(blockIdx.x: int32, (nullptr), "ThreadIndex", "blockIdx.x")] "thread_extent" = 64;
- allocate(conv2d_nchw: Pointer(local float32), float32, [4]), storage_scope = local;
- allocate(pad_temp.shared: Pointer(shared float32), float32, [2016]), storage_scope = shared;
- allocate(kernel.shared: Pointer(shared float32), float32, [768]), storage_scope = shared;
- attr [IterVar(threadIdx.x: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98 {
- conv2d_nchw_1: Buffer(conv2d_nchw, float32, [4], [], scope="local", align=8)[0] = 0f32
- conv2d_nchw_1[2] = 0f32
+ attr [IterVar(blockIdx.x: int32, (nullptr), "ThreadIndex", "blockIdx.x")] "thread_extent" = 32;
+ allocate(conv2d_nchw: Pointer(local float32), float32, [14]), storage_scope = local;
+ allocate(pad_temp.shared: Pointer(shared float32), float32, [1296]), storage_scope = shared;
+ allocate(kernel.shared: Pointer(shared float32), float32, [2304]), storage_scope = shared;
+ attr [IterVar(threadIdx.x: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56 {
+ conv2d_nchw_1: Buffer(conv2d_nchw, float32, [14], [], scope="local", align=32)[0] = 0f32
conv2d_nchw_1[1] = 0f32
+ conv2d_nchw_1[2] = 0f32
conv2d_nchw_1[3] = 0f32
- for (rc.outer.outer: int32, 0, 16) {
- for (rx.outer.outer: int32, 0, 3) {
- let cse_var_2: int32 = (rc.outer.outer*1568)
- let cse_var_1: int32 = (rc.outer.outer*288)
- {
- attr [IterVar(threadIdx.x_1: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1: Buffer(pad_temp.shared, float32, [2016], [], scope="shared")[threadIdx.x_1] = @tir.if_then_else(((((7 <= floormod(threadIdx.x_1, 63)) && (floormod(threadIdx.x_1, 63) < 56)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[((((cse_var_2 + (floordiv(threadIdx.x_1, 63)*49)) + rx.outer.outer) + floormod(threadIdx.x_1, 63)) - 8)], 0f32, dtype=float32)
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 98)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 5), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 5), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 14), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 5), 9)*7)) + rx.outer.outer) + floormod(thr [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 196)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 1), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 1), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 28), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 1), 9)*7)) + rx.outer.outer) + floormod(th [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 294)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 6), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 6), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 42), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 6), 9)*7)) + rx.outer.outer) + floormod(th [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 392)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 2), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 2), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 56), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 2), 9)*7)) + rx.outer.outer) + floormod(th [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 490)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 7), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 7), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 70), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 7), 9)*7)) + rx.outer.outer) + floormod(th [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 588)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 3), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 3), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 84), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 3), 9)*7)) + rx.outer.outer) + floormod(th [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 686)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 8), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 8), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 98), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 8), 9)*7)) + rx.outer.outer) + floormod(th [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 784)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 4), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 4), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 112), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 4), 9)*7)) + rx.outer.outer) + floormod(t [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 882)] = @tir.if_then_else(((((7 <= floormod(threadIdx.x_1, 63)) && (floormod(threadIdx.x_1, 63) < 56)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[((((cse_var_2 + (floordiv(floordiv(threadIdx.x_1, 7), 9)*49)) + rx.outer.outer) + floormod(threadIdx.x_1, 63)) + 678)], 0f32, dtype=float32)
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 980)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 5), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 5), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 140), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 5), 9)*7)) + rx.outer.outer) + floormod(t [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 1078)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 1), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 1), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 154), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 1), 9)*7)) + rx.outer.outer) + floormod( [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 1176)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 6), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 6), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 168), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 6), 9)*7)) + rx.outer.outer) + floormod( [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 1274)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 2), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 2), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 182), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 2), 9)*7)) + rx.outer.outer) + floormod( [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 1372)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 7), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 7), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 196), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 7), 9)*7)) + rx.outer.outer) + floormod( [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 1470)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 3), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 3), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 210), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 3), 9)*7)) + rx.outer.outer) + floormod( [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 1568)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 8), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 8), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 224), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 8), 9)*7)) + rx.outer.outer) + floormod( [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 1666)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 4), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 4), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 238), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 4), 9)*7)) + rx.outer.outer) + floormod( [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 1764)] = @tir.if_then_else(((((7 <= floormod(threadIdx.x_1, 63)) && (floormod(threadIdx.x_1, 63) < 56)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[((((cse_var_2 + (floordiv(floordiv(threadIdx.x_1, 7), 9)*49)) + rx.outer.outer) + floormod(threadIdx.x_1, 63)) + 1364)], 0f32, dtype=float32)
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- pad_temp.shared_1[(threadIdx.x_1 + 1862)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 7) + 5), 9)) && (floormod((floordiv(threadIdx.x_1, 7) + 5), 9) < 8)) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 266), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 5), 9)*7)) + rx.outer.outer) + floormod( [...]
- attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- if @tir.likely((threadIdx.x_1 < 56), dtype=bool) {
- pad_temp.shared_1[(threadIdx.x_1 + 1960)] = @tir.if_then_else((((floormod((floordiv(threadIdx.x_1, 7) + 1), 9) < 8) && (1 <= (rx.outer.outer + floormod(threadIdx.x_1, 7)))) && ((rx.outer.outer + floormod(threadIdx.x_1, 7)) < 8)), data[(((((cse_var_2 + (floordiv((floordiv(threadIdx.x_1, 7) + 280), 9)*49)) + (floormod((floordiv(threadIdx.x_1, 7) + 1), 9)*7)) + rx.outer.outer) + floormod(threadIdx.x_1, 7)) - 8)], 0f32, dtype=float32)
+ conv2d_nchw_1[4] = 0f32
+ conv2d_nchw_1[5] = 0f32
+ conv2d_nchw_1[6] = 0f32
+ conv2d_nchw_1[7] = 0f32
+ conv2d_nchw_1[8] = 0f32
+ conv2d_nchw_1[9] = 0f32
+ conv2d_nchw_1[10] = 0f32
+ conv2d_nchw_1[11] = 0f32
+ conv2d_nchw_1[12] = 0f32
+ conv2d_nchw_1[13] = 0f32
+ for (rc.outer.outer: int32, 0, 32) {
+ let cse_var_1: int32 = (rc.outer.outer*784)
+ {
+ attr [IterVar(threadIdx.x_1: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1: Buffer(pad_temp.shared, float32, [1296], [], scope="shared")[threadIdx.x_1] = @tir.if_then_else((((9 <= threadIdx.x_1) && (1 <= floormod(threadIdx.x_1, 9))) && (floormod(threadIdx.x_1, 9) < 8)), data[(((cse_var_1 + (floordiv(threadIdx.x_1, 9)*7)) + floormod(threadIdx.x_1, 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 56)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 56), 81)) && (floormod((threadIdx.x_1 + 56), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 2), 9))) && (floormod((threadIdx.x_1 + 2), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 56), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 56), 81), 9)*7)) + floormod((threadIdx.x_1 + 2), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 112)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 112), 81)) && (floormod((threadIdx.x_1 + 31), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 4), 9))) && (floormod((threadIdx.x_1 + 4), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 112), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 112), 81), 9)*7)) + floormod((threadIdx.x_1 + 4), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 168)] = @tir.if_then_else((((9 <= floormod((threadIdx.x_1 + 168), 81)) && (1 <= floormod((threadIdx.x_1 + 6), 9))) && (floormod((threadIdx.x_1 + 6), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 168), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 168), 81), 9)*7)) + floormod((threadIdx.x_1 + 6), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 224)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 224), 81)) && (floormod((threadIdx.x_1 + 62), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 8), 9))) && (floormod((threadIdx.x_1 + 8), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 224), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 224), 81), 9)*7)) + floormod((threadIdx.x_1 + 8), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 280)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 280), 81)) && (floormod((threadIdx.x_1 + 37), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 1), 9))) && (floormod((threadIdx.x_1 + 1), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 280), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 280), 81), 9)*7)) + floormod((threadIdx.x_1 + 1), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 336)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 3), 9)) && (floormod((threadIdx.x_1 + 3), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 336), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 336), 81), 9)*7)) + floormod((threadIdx.x_1 + 3), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 392)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 392), 81)) && (floormod((threadIdx.x_1 + 68), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 5), 9))) && (floormod((threadIdx.x_1 + 5), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 392), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 392), 81), 9)*7)) + floormod((threadIdx.x_1 + 5), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 448)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 448), 81)) && (floormod((threadIdx.x_1 + 43), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 7), 9))) && (floormod((threadIdx.x_1 + 7), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 448), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 448), 81), 9)*7)) + floormod((threadIdx.x_1 + 7), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 504)] = @tir.if_then_else((((floormod((threadIdx.x_1 + 18), 81) < 72) && (1 <= floormod(threadIdx.x_1, 9))) && (floormod(threadIdx.x_1, 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 504), 81)*49)) + (floormod((floordiv(threadIdx.x_1, 9) + 2), 9)*7)) + floormod(threadIdx.x_1, 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 560)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 560), 81)) && (floormod((threadIdx.x_1 + 74), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 2), 9))) && (floormod((threadIdx.x_1 + 2), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 560), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 560), 81), 9)*7)) + floormod((threadIdx.x_1 + 2), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 616)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 616), 81)) && (floormod((threadIdx.x_1 + 49), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 4), 9))) && (floormod((threadIdx.x_1 + 4), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 616), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 616), 81), 9)*7)) + floormod((threadIdx.x_1 + 4), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 672)] = @tir.if_then_else((((floormod((threadIdx.x_1 + 24), 81) < 72) && (1 <= floormod((threadIdx.x_1 + 6), 9))) && (floormod((threadIdx.x_1 + 6), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 672), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 672), 81), 9)*7)) + floormod((threadIdx.x_1 + 6), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 728)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 728), 81)) && (floormod((threadIdx.x_1 + 80), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 8), 9))) && (floormod((threadIdx.x_1 + 8), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 728), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 728), 81), 9)*7)) + floormod((threadIdx.x_1 + 8), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 784)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 784), 81)) && (floormod((threadIdx.x_1 + 55), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 1), 9))) && (floormod((threadIdx.x_1 + 1), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 784), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 784), 81), 9)*7)) + floormod((threadIdx.x_1 + 1), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 840)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 840), 81)) && (floormod((threadIdx.x_1 + 30), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 3), 9))) && (floormod((threadIdx.x_1 + 3), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 840), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 840), 81), 9)*7)) + floormod((threadIdx.x_1 + 3), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 896)] = @tir.if_then_else((((9 <= floormod((threadIdx.x_1 + 896), 81)) && (1 <= floormod((threadIdx.x_1 + 5), 9))) && (floormod((threadIdx.x_1 + 5), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 896), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 896), 81), 9)*7)) + floormod((threadIdx.x_1 + 5), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 952)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 952), 81)) && (floormod((threadIdx.x_1 + 61), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 7), 9))) && (floormod((threadIdx.x_1 + 7), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 952), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 952), 81), 9)*7)) + floormod((threadIdx.x_1 + 7), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 1008)] = @tir.if_then_else(((((1 <= floormod((floordiv(threadIdx.x_1, 9) + 4), 9)) && (floormod((threadIdx.x_1 + 36), 81) < 72)) && (1 <= floormod(threadIdx.x_1, 9))) && (floormod(threadIdx.x_1, 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 1008), 81)*49)) + (floormod((floordiv(threadIdx.x_1, 9) + 4), 9)*7)) + floormod(threadIdx.x_1, 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 1064)] = @tir.if_then_else(((1 <= floormod((threadIdx.x_1 + 2), 9)) && (floormod((threadIdx.x_1 + 2), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 1064), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 1064), 81), 9)*7)) + floormod((threadIdx.x_1 + 2), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 1120)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 1120), 81)) && (floormod((threadIdx.x_1 + 67), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 4), 9))) && (floormod((threadIdx.x_1 + 4), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 1120), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 1120), 81), 9)*7)) + floormod((threadIdx.x_1 + 4), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 1176)] = @tir.if_then_else(((((9 <= floormod((threadIdx.x_1 + 1176), 81)) && (floormod((threadIdx.x_1 + 42), 81) < 72)) && (1 <= floormod((threadIdx.x_1 + 6), 9))) && (floormod((threadIdx.x_1 + 6), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 1176), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 1176), 81), 9)*7)) + floormod((threadIdx.x_1 + 6), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ pad_temp.shared_1[(threadIdx.x_1 + 1232)] = @tir.if_then_else((((floormod((threadIdx.x_1 + 17), 81) < 72) && (1 <= floormod((threadIdx.x_1 + 8), 9))) && (floormod((threadIdx.x_1 + 8), 9) < 8)), data[((((cse_var_1 + (floordiv((threadIdx.x_1 + 1232), 81)*49)) + (floordiv(floormod((threadIdx.x_1 + 1232), 81), 9)*7)) + floormod((threadIdx.x_1 + 8), 9)) - 8)], 0f32, dtype=float32)
+ attr [IterVar(threadIdx.x_1, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+ if @tir.likely((threadIdx.x_1 < 8), dtype=bool) {
+ pad_temp.shared_1[(threadIdx.x_1 + 1288)] = 0f32
+ }
+ attr [IterVar(threadIdx.x_2: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56 {
+ kernel.shared_1: Buffer(kernel.shared, float32, [2304], [], scope="shared")[(threadIdx.x_2*24)] = kernel[((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24))]
+ kernel.shared_1[((threadIdx.x_2*24) + 1)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 1)]
+ kernel.shared_1[((threadIdx.x_2*24) + 2)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 2)]
+ kernel.shared_1[((threadIdx.x_2*24) + 3)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 3)]
+ kernel.shared_1[((threadIdx.x_2*24) + 4)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 4)]
+ kernel.shared_1[((threadIdx.x_2*24) + 5)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 5)]
+ kernel.shared_1[((threadIdx.x_2*24) + 6)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 6)]
+ kernel.shared_1[((threadIdx.x_2*24) + 7)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 7)]
+ kernel.shared_1[((threadIdx.x_2*24) + 8)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 8)]
+ kernel.shared_1[((threadIdx.x_2*24) + 9)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 9)]
+ kernel.shared_1[((threadIdx.x_2*24) + 10)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 10)]
+ kernel.shared_1[((threadIdx.x_2*24) + 11)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 11)]
+ kernel.shared_1[((threadIdx.x_2*24) + 12)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 12)]
+ kernel.shared_1[((threadIdx.x_2*24) + 13)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 13)]
+ kernel.shared_1[((threadIdx.x_2*24) + 14)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 14)]
+ kernel.shared_1[((threadIdx.x_2*24) + 15)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 15)]
+ kernel.shared_1[((threadIdx.x_2*24) + 16)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 16)]
+ kernel.shared_1[((threadIdx.x_2*24) + 17)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 17)]
+ kernel.shared_1[((threadIdx.x_2*24) + 18)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 18)]
+ kernel.shared_1[((threadIdx.x_2*24) + 19)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 19)]
+ kernel.shared_1[((threadIdx.x_2*24) + 20)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 20)]
+ kernel.shared_1[((threadIdx.x_2*24) + 21)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 21)]
+ kernel.shared_1[((threadIdx.x_2*24) + 22)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 22)]
+ kernel.shared_1[((threadIdx.x_2*24) + 23)] = kernel[(((((blockIdx.x*73728) + (floordiv(threadIdx.x_2, 6)*4608)) + (rc.outer.outer*144)) + (floormod(threadIdx.x_2, 6)*24)) + 23)]
+ }
+ attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56 {
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1344)] = kernel[((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 16), 48)*3))]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1345)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 16), 48)*3)) + 1)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1346)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 16), 48)*3)) + 2)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1347)] = kernel[((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 17), 48)*3))]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1348)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 17), 48)*3)) + 1)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1349)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 17), 48)*3)) + 2)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1350)] = kernel[((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 18), 48)*3))]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1351)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 18), 48)*3)) + 1)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1352)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 18), 48)*3)) + 2)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1353)] = kernel[((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 19), 48)*3))]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1354)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 19), 48)*3)) + 1)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1355)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 19), 48)*3)) + 2)]
}
- attr [IterVar(threadIdx.x_2: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- kernel.shared_1: Buffer(kernel.shared, float32, [768], [], scope="shared")[threadIdx.x_2] = kernel[(((((blockIdx.x*36864) + (floordiv(threadIdx.x_2, 96)*4608)) + cse_var_1) + (floormod(threadIdx.x_2, 96)*3)) + rx.outer.outer)]
- attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- kernel.shared_1[(threadIdx.x_2 + 98)] = kernel[(((((blockIdx.x*36864) + (floordiv((floordiv(threadIdx.x_2, 2) + 49), 48)*4608)) + cse_var_1) + (floormod((threadIdx.x_2 + 2), 96)*3)) + rx.outer.outer)]
- attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- kernel.shared_1[(threadIdx.x_2 + 196)] = kernel[(((((blockIdx.x*36864) + (floordiv((floordiv(threadIdx.x_2, 2) + 98), 48)*4608)) + cse_var_1) + (floormod((threadIdx.x_2 + 4), 96)*3)) + rx.outer.outer)]
- attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- kernel.shared_1[(threadIdx.x_2 + 294)] = kernel[(((((blockIdx.x*36864) + (floordiv((floordiv(threadIdx.x_2, 2) + 147), 48)*4608)) + cse_var_1) + (floormod((threadIdx.x_2 + 6), 96)*3)) + rx.outer.outer)]
- attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- kernel.shared_1[(threadIdx.x_2 + 392)] = kernel[(((((blockIdx.x*36864) + (floordiv((floordiv(threadIdx.x_2, 2) + 196), 48)*4608)) + cse_var_1) + (floormod((threadIdx.x_2 + 8), 96)*3)) + rx.outer.outer)]
- attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- kernel.shared_1[(threadIdx.x_2 + 490)] = kernel[(((((blockIdx.x*36864) + (floordiv((floordiv(threadIdx.x_2, 2) + 245), 48)*4608)) + cse_var_1) + (floormod((threadIdx.x_2 + 10), 96)*3)) + rx.outer.outer)]
- attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- kernel.shared_1[(threadIdx.x_2 + 588)] = kernel[(((((blockIdx.x*36864) + (floordiv((floordiv(threadIdx.x_2, 2) + 294), 48)*4608)) + cse_var_1) + (floormod((threadIdx.x_2 + 12), 96)*3)) + rx.outer.outer)]
- attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 98;
- if @tir.likely((threadIdx.x_2 < 82), dtype=bool) {
- kernel.shared_1[(threadIdx.x_2 + 686)] = kernel[(((((blockIdx.x*36864) + (floordiv((floordiv(threadIdx.x_2, 2) + 343), 48)*4608)) + cse_var_1) + (floormod((threadIdx.x_2 + 14), 96)*3)) + rx.outer.outer)]
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1356)] = kernel[((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 20), 48)*3))]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1357)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 20), 48)*3)) + 1)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1358)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 20), 48)*3)) + 2)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1359)] = kernel[((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 21), 48)*3))]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1360)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 21), 48)*3)) + 1)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1361)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 21), 48)*3)) + 2)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1362)] = kernel[((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 22), 48)*3))]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1363)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 22), 48)*3)) + 1)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1364)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 22), 48)*3)) + 2)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1365)] = kernel[((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 23), 48)*3))]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1366)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 23), 48)*3)) + 1)]
+ }
+ if @tir.likely((threadIdx.x_2 < 40), dtype=bool) {
+ kernel.shared_1[((threadIdx.x_2*24) + 1367)] = kernel[(((((blockIdx.x*73728) + (floordiv((floordiv(threadIdx.x_2, 2) + 28), 3)*4608)) + (rc.outer.outer*144)) + (floormod(((threadIdx.x_2*8) + 23), 48)*3)) + 2)]
+ }
+ }
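The kernel weights are staged with the same cooperative pattern: in the first round all 56 threads copy 24 contiguous values each (1344 elements), and in the second round a `@tir.likely((threadIdx.x_2 < 40))` guard masks off the threads past the tail so only 40 x 24 = 960 more elements are written, covering the full 2304-element shared buffer. A hedged sketch of this load-with-tail-guard idiom (illustrative names, sequential emulation of the thread block):

```python
def cooperative_load(src, total, n_threads=56, per_thread=24):
    """Emulate the two-round strided copy into shared memory: each thread
    copies `per_thread` contiguous values per round; in the final round a
    guard (like tir.likely(tid < 40)) skips threads past the tail, since
    2304 = 56*24 + 40*24."""
    shared = [0.0] * total
    rounds = -(-total // (n_threads * per_thread))   # ceiling division
    for r in range(rounds):
        for tid in range(n_threads):
            base = (r * n_threads + tid) * per_thread
            if base >= total:        # tail guard: masks tid >= 40 in round 2
                continue
            for k in range(per_thread):
                shared[base + k] = src[base + k]
    return shared

weights = [float(i) for i in range(2304)]
staged = cooperative_load(weights, total=2304)
```

The `likely` hint tells the code generator the predicate is almost always true, so the guard compiles to a cheap branch rather than restructured control flow.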
+ for (rc.outer.inner: int32, 0, 2) {
+ for (rx.outer.inner: int32, 0, 3) {
+ for (rc.inner: int32, 0, 8) {
+ conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner)]))
+ conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 1)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner)]))
+ conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 2)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner)]))
+ conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 3)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner)]))
+ conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 4)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner)]))
+ conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 5)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner)]))
+ conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 6)]*kernel.shared_1[((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner)]))
+ conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 144)]))
+ conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 1)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 144)]))
+ conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 2)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 144)]))
+ conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 3)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 144)]))
+ conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 4)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 144)]))
+ conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 5)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 144)]))
+ conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 6)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 144)]))
+ conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 9)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 3)]))
+ conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 10)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 3)]))
+ conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 11)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 3)]))
+ conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 12)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 3)]))
+ conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 13)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 3)]))
+ conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 14)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 3)]))
+ conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 15)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 3)]))
+ conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 9)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 147)]))
+ conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 10)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 147)]))
+ conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 11)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 147)]))
+ conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 12)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 147)]))
+ conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 13)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 147)]))
+ conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 14)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 147)]))
+ conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 15)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 147)]))
+ conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 18)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 6)]))
+ conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 19)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 6)]))
+ conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 20)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 6)]))
+ conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 21)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 6)]))
+ conv2d_nchw_1[4] = (conv2d_nchw_1[4] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 22)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 6)]))
+ conv2d_nchw_1[5] = (conv2d_nchw_1[5] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 23)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 6)]))
+ conv2d_nchw_1[6] = (conv2d_nchw_1[6] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 24)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 6)]))
+ conv2d_nchw_1[7] = (conv2d_nchw_1[7] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 18)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 150)]))
+ conv2d_nchw_1[8] = (conv2d_nchw_1[8] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 19)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 150)]))
+ conv2d_nchw_1[9] = (conv2d_nchw_1[9] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 20)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 150)]))
+ conv2d_nchw_1[10] = (conv2d_nchw_1[10] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 21)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 150)]))
+ conv2d_nchw_1[11] = (conv2d_nchw_1[11] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 22)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 150)]))
+ conv2d_nchw_1[12] = (conv2d_nchw_1[12] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 23)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 150)]))
+ conv2d_nchw_1[13] = (conv2d_nchw_1[13] + (pad_temp.shared_1[(((((rc.outer.inner*648) + (rc.inner*81)) + (floormod(threadIdx.x, 7)*9)) + rx.outer.inner) + 24)]*kernel.shared_1[(((((floordiv(threadIdx.x, 7)*288) + (rc.outer.inner*72)) + (rc.inner*9)) + rx.outer.inner) + 150)]))
+ }
}
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[floormod(threadIdx.x, 49)]*kernel.shared_1[(floordiv(threadIdx.x, 49)*192)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[floormod(threadIdx.x, 49)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 384)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[floormod(threadIdx.x, 49)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 96)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[floormod(threadIdx.x, 49)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 480)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 7)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 1)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 7)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 385)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 7)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 97)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 7)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 481)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 14)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 2)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 14)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 386)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 14)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 98)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 14)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 482)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 63)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 3)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 63)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 387)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 63)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 99)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 63)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 483)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 70)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 4)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 70)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 388)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 70)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 100)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 70)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 484)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 77)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 5)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 77)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 389)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 77)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 101)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 77)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 485)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 6)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 390)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 102)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 126)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 486)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 133)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 7)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 133)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 391)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 133)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 103)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 133)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 487)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 140)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 8)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 140)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 392)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 140)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 104)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 140)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 488)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 9)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 393)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 105)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 189)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 489)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 196)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 10)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 196)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 394)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 196)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 106)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 196)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 490)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 203)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 11)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 203)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 395)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 203)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 107)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 203)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 491)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 252)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 12)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 252)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 396)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 252)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 108)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 252)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 492)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 259)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 13)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 259)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 397)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 259)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 109)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 259)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 493)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 266)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 14)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 266)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 398)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 266)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 110)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 266)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 494)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 315)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 15)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 315)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 399)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 315)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 111)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 315)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 495)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 322)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 16)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 322)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 400)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 322)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 112)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 322)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 496)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 329)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 17)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 329)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 401)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 329)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 113)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 329)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 497)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 378)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 18)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 378)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 402)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 378)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 114)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 378)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 498)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 385)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 19)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 385)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 403)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 385)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 115)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 385)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 499)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 392)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 20)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 392)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 404)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 392)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 116)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 392)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 500)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 441)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 21)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 441)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 405)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 441)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 117)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 441)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 501)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 448)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 22)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 448)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 406)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 448)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 118)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 448)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 502)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 455)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 23)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 455)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 407)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 455)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 119)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 455)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 503)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 504)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 24)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 504)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 408)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 504)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 120)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 504)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 504)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 511)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 25)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 511)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 409)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 511)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 121)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 511)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 505)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 518)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 26)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 518)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 410)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 518)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 122)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 518)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 506)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 567)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 27)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 567)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 411)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 567)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 123)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 567)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 507)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 574)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 28)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 574)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 412)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 574)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 124)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 574)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 508)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 581)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 29)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 581)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 413)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 581)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 125)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 581)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 509)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 630)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 30)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 630)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 414)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 630)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 126)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 630)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 510)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 637)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 31)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 637)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 415)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 637)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 127)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 637)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 511)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 644)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 32)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 644)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 416)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 644)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 128)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 644)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 512)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 693)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 33)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 693)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 417)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 693)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 129)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 693)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 513)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 700)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 34)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 700)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 418)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 700)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 130)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 700)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 514)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 707)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 35)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 707)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 419)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 707)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 131)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 707)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 515)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 756)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 36)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 756)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 420)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 756)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 132)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 756)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 516)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 763)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 37)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 763)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 421)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 763)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 133)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 763)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 517)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 770)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 38)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 770)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 422)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 770)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 134)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 770)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 518)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 819)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 39)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 819)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 423)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 819)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 135)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 819)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 519)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 826)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 40)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 826)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 424)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 826)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 136)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 826)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 520)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 833)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 41)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 833)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 425)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 833)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 137)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 833)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 521)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 882)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 42)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 882)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 426)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 882)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 138)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 882)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 522)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 889)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 43)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 889)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 427)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 889)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 139)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 889)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 523)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 896)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 44)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 896)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 428)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 896)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 140)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 896)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 524)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 945)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 45)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 945)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 429)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 945)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 141)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 945)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 525)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 952)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 46)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 952)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 430)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 952)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 142)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 952)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 526)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 959)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 47)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 959)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 431)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 959)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 143)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 959)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 527)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1008)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 48)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1008)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 432)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1008)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 144)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1008)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 528)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1015)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 49)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1015)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 433)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1015)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 145)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1015)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 529)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1022)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 50)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1022)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 434)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1022)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 146)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1022)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 530)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1071)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 51)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1071)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 435)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1071)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 147)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1071)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 531)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1078)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 52)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1078)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 436)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1078)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 148)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1078)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 532)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1085)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 53)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1085)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 437)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1085)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 149)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1085)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 533)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1134)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 54)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1134)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 438)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1134)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 150)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1134)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 534)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1141)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 55)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1141)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 439)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1141)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 151)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1141)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 535)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1148)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 56)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1148)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 440)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1148)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 152)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1148)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 536)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1197)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 57)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1197)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 441)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1197)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 153)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1197)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 537)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1204)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 58)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1204)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 442)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1204)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 154)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1204)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 538)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1211)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 59)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1211)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 443)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1211)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 155)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1211)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 539)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1260)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 60)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1260)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 444)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1260)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 156)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1260)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 540)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1267)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 61)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1267)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 445)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1267)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 157)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1267)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 541)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1274)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 62)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1274)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 446)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1274)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 158)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1274)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 542)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1323)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 63)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1323)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 447)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1323)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 159)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1323)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 543)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1330)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 64)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1330)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 448)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1330)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 160)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1330)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 544)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1337)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 65)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1337)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 449)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1337)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 161)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1337)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 545)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1386)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 66)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1386)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 450)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1386)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 162)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1386)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 546)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1393)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 67)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1393)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 451)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1393)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 163)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1393)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 547)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1400)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 68)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1400)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 452)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1400)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 164)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1400)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 548)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1449)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 69)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1449)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 453)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1449)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 165)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1449)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 549)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1456)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 70)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1456)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 454)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1456)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 166)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1456)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 550)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1463)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 71)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1463)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 455)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1463)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 167)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1463)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 551)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1512)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 72)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1512)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 456)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1512)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 168)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1512)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 552)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1519)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 73)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1519)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 457)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1519)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 169)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1519)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 553)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1526)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 74)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1526)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 458)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1526)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 170)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1526)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 554)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1575)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 75)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1575)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 459)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1575)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 171)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1575)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 555)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1582)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 76)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1582)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 460)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1582)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 172)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1582)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 556)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1589)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 77)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1589)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 461)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1589)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 173)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1589)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 557)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1638)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 78)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1638)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 462)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1638)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 174)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1638)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 558)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1645)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 79)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1645)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 463)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1645)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 175)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1645)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 559)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1652)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 80)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1652)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 464)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1652)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 176)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1652)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 560)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1701)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 81)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1701)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 465)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1701)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 177)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1701)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 561)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1708)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 82)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1708)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 466)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1708)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 178)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1708)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 562)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1715)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 83)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1715)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 467)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1715)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 179)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1715)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 563)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1764)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 84)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1764)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 468)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1764)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 180)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1764)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 564)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1771)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 85)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1771)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 469)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1771)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 181)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1771)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 565)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1778)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 86)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1778)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 470)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1778)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 182)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1778)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 566)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1827)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 87)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1827)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 471)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1827)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 183)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1827)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 567)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1834)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 88)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1834)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 472)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1834)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 184)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1834)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 568)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1841)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 89)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1841)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 473)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1841)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 185)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1841)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 569)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1890)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 90)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1890)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 474)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1890)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 186)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1890)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 570)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1897)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 91)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1897)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 475)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1897)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 187)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1897)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 571)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1904)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 92)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1904)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 476)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1904)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 188)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1904)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 572)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1953)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 93)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1953)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 477)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1953)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 189)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1953)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 573)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1960)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 94)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1960)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 478)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1960)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 190)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1960)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 574)]))
- conv2d_nchw_1[0] = (conv2d_nchw_1[0] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1967)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 95)]))
- conv2d_nchw_1[2] = (conv2d_nchw_1[2] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1967)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 479)]))
- conv2d_nchw_1[1] = (conv2d_nchw_1[1] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1967)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 191)]))
- conv2d_nchw_1[3] = (conv2d_nchw_1[3] + (pad_temp.shared_1[(floormod(threadIdx.x, 49) + 1967)]*kernel.shared_1[((floordiv(threadIdx.x, 49)*192) + 575)]))
}
}
}
for (i1.inner: int32, 0, 2) {
- compute[((((blockIdx.x*392) + (floordiv(threadIdx.x, 49)*98)) + (i1.inner*49)) + floormod(threadIdx.x, 49))] = max((conv2d_nchw_1[i1.inner] + bias[(((blockIdx.x*8) + (floordiv(threadIdx.x, 49)*2)) + i1.inner)]), 0f32)
- compute[(((((blockIdx.x*392) + (floordiv(threadIdx.x, 49)*98)) + (i1.inner*49)) + floormod(threadIdx.x, 49)) + 196)] = max((conv2d_nchw_1[(i1.inner + 2)] + bias[((((blockIdx.x*8) + (floordiv(threadIdx.x, 49)*2)) + i1.inner) + 4)]), 0f32)
+ for (i3.inner: int32, 0, 7) {
+ compute[(((((blockIdx.x*784) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(threadIdx.x, 7)*7)) + i3.inner)] = max((conv2d_nchw_1[((i1.inner*7) + i3.inner)] + bias[(((blockIdx.x*16) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
+ }
}
}
}
@@ -973,7 +733,7 @@ cooperative fetching, unrolling and operator fusion.</p>
</pre></div>
</div>
<p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 0.298 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 0.236 ms
</pre></div>
</div>
</div>
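The schedule diff that follows is a long chain of `s[conv2d_nchw].split(axis, factor=...)` calls. As a plain-Python sketch (no TVM required, names illustrative), each call peels off an inner loop of length `factor` and shrinks the outer extent by ceil-division, so a chain of factors decomposes one loop extent into a nest of sub-loops:

```python
def split_chain(extent, factors):
    """Mimic repeated te split(axis, factor=f) calls, innermost-first:
    each split peels an inner loop of length f and the remaining outer
    extent becomes ceil(extent / f)."""
    inners = []
    outer = extent
    for f in factors:
        inners.append(f)          # extent of the new inner loop
        outer = -(-outer // f)    # ceil-div: extent of the remaining outer loop
    return [outer] + inners[::-1]  # outermost ... innermost

# The 512 output channels (the 'ff' axis) split with factors 2, 1, 8, 1
# (read innermost-first, as in the updated schedule below) decompose as:
print(split_chain(512, [2, 1, 8, 1]))  # -> [32, 1, 8, 1, 2]
```

The product of the resulting extents always equals the original extent, which is why the outermost remainder (here 32) ends up fused into the grid dimension.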
@@ -1003,36 +763,36 @@ conv2d_nchw_nn_o_i, conv2d_nchw_nn_i = s[conv2d_nchw].split(conv2d_nchw_nn, fact
conv2d_nchw_nn_o_o_i, conv2d_nchw_nn_o_i = s[conv2d_nchw].split(conv2d_nchw_nn_o_i, factor=1)
conv2d_nchw_nn_o_o_o_i, conv2d_nchw_nn_o_o_i = s[conv2d_nchw].split(conv2d_nchw_nn_o_o_i, factor=1)
conv2d_nchw_nn_o_o_o_o, conv2d_nchw_nn_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_nn_o_o_o_i, factor=1)
-conv2d_nchw_ff_o_i, conv2d_nchw_ff_i = s[conv2d_nchw].split(conv2d_nchw_ff, factor=1)
-conv2d_nchw_ff_o_o_i, conv2d_nchw_ff_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_i, factor=2)
-conv2d_nchw_ff_o_o_o_i, conv2d_nchw_ff_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_i, factor=2)
-conv2d_nchw_ff_o_o_o_o, conv2d_nchw_ff_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_o_i, factor=2)
+conv2d_nchw_ff_o_i, conv2d_nchw_ff_i = s[conv2d_nchw].split(conv2d_nchw_ff, factor=2)
+conv2d_nchw_ff_o_o_i, conv2d_nchw_ff_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_i, factor=1)
+conv2d_nchw_ff_o_o_o_i, conv2d_nchw_ff_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_i, factor=8)
+conv2d_nchw_ff_o_o_o_o, conv2d_nchw_ff_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_ff_o_o_o_i, factor=1)
conv2d_nchw_yy_o_i, conv2d_nchw_yy_i = s[conv2d_nchw].split(conv2d_nchw_yy, factor=1)
conv2d_nchw_yy_o_o_i, conv2d_nchw_yy_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_i, factor=1)
conv2d_nchw_yy_o_o_o_i, conv2d_nchw_yy_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_i, factor=7)
conv2d_nchw_yy_o_o_o_o, conv2d_nchw_yy_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_yy_o_o_o_i, factor=1)
-conv2d_nchw_xx_o_i, conv2d_nchw_xx_i = s[conv2d_nchw].split(conv2d_nchw_xx, factor=1)
+conv2d_nchw_xx_o_i, conv2d_nchw_xx_i = s[conv2d_nchw].split(conv2d_nchw_xx, factor=7)
conv2d_nchw_xx_o_o_i, conv2d_nchw_xx_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_i, factor=1)
-conv2d_nchw_xx_o_o_o_i, conv2d_nchw_xx_o_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_o_i, factor=7)
+conv2d_nchw_xx_o_o_o_i, conv2d_nchw_xx_o_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_o_i, factor=1)
conv2d_nchw_xx_o_o_o_o, conv2d_nchw_xx_o_o_o_i = s[conv2d_nchw].split(conv2d_nchw_xx_o_o_o_i, factor=1)
-conv2d_nchw_rc_o_i, conv2d_nchw_rc_i = s[conv2d_nchw].split(conv2d_nchw_rc, factor=1)
-conv2d_nchw_rc_o_o, conv2d_nchw_rc_o_i = s[conv2d_nchw].split(conv2d_nchw_rc_o_i, factor=32)
-conv2d_nchw_ry_o_i, conv2d_nchw_ry_i = s[conv2d_nchw].split(conv2d_nchw_ry, factor=1)
-conv2d_nchw_ry_o_o, conv2d_nchw_ry_o_i = s[conv2d_nchw].split(conv2d_nchw_ry_o_i, factor=3)
+conv2d_nchw_rc_o_i, conv2d_nchw_rc_i = s[conv2d_nchw].split(conv2d_nchw_rc, factor=8)
+conv2d_nchw_rc_o_o, conv2d_nchw_rc_o_i = s[conv2d_nchw].split(conv2d_nchw_rc_o_i, factor=2)
+conv2d_nchw_ry_o_i, conv2d_nchw_ry_i = s[conv2d_nchw].split(conv2d_nchw_ry, factor=3)
+conv2d_nchw_ry_o_o, conv2d_nchw_ry_o_i = s[conv2d_nchw].split(conv2d_nchw_ry_o_i, factor=1)
conv2d_nchw_rx_o_i, conv2d_nchw_rx_i = s[conv2d_nchw].split(conv2d_nchw_rx, factor=1)
-conv2d_nchw_rx_o_o, conv2d_nchw_rx_o_i = s[conv2d_nchw].split(conv2d_nchw_rx_o_i, factor=1)
+conv2d_nchw_rx_o_o, conv2d_nchw_rx_o_i = s[conv2d_nchw].split(conv2d_nchw_rx_o_i, factor=3)
s[conv2d_nchw].reorder(conv2d_nchw_nn_o_o_o_o, conv2d_nchw_ff_o_o_o_o, conv2d_nchw_yy_o_o_o_o, conv2d_nchw_xx_o_o_o_o, conv2d_nchw_nn_o_o_o_i, conv2d_nchw_ff_o_o_o_i, conv2d_nchw_yy_o_o_o_i, conv2d_nchw_xx_o_o_o_i, conv2d_nchw_nn_o_o_i, conv2d_nchw_ff_o_o_i, conv2d_nchw_yy_o_o_i, conv2d_nchw_xx_o_o_i, conv2d_nchw_rc_o_o, conv2d_nchw_ry_o_o, conv2d_nchw_rx_o_o, conv2d_nchw_rc_o_i, conv2d_nchw_ry_o_i, conv2d_nchw_rx_o_i, conv2d_nchw_nn_o_i, conv2d_nchw_ff_o_i, conv2d_nchw_yy_o_i, conv2d_nc [...]
compute_i0_o_i, compute_i0_i = s[compute].split(compute_i0, factor=1)
compute_i0_o_o_i, compute_i0_o_i = s[compute].split(compute_i0_o_i, factor=1)
compute_i0_o_o_o, compute_i0_o_o_i = s[compute].split(compute_i0_o_o_i, factor=1)
compute_i1_o_i, compute_i1_i = s[compute].split(compute_i1, factor=2)
-compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=2)
-compute_i1_o_o_o, compute_i1_o_o_i = s[compute].split(compute_i1_o_o_i, factor=2)
+compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=8)
+compute_i1_o_o_o, compute_i1_o_o_i = s[compute].split(compute_i1_o_o_i, factor=1)
compute_i2_o_i, compute_i2_i = s[compute].split(compute_i2, factor=1)
compute_i2_o_o_i, compute_i2_o_i = s[compute].split(compute_i2_o_i, factor=7)
compute_i2_o_o_o, compute_i2_o_o_i = s[compute].split(compute_i2_o_o_i, factor=1)
-compute_i3_o_i, compute_i3_i = s[compute].split(compute_i3, factor=1)
-compute_i3_o_o_i, compute_i3_o_i = s[compute].split(compute_i3_o_i, factor=7)
+compute_i3_o_i, compute_i3_i = s[compute].split(compute_i3, factor=7)
+compute_i3_o_o_i, compute_i3_o_i = s[compute].split(compute_i3_o_i, factor=1)
compute_i3_o_o_o, compute_i3_o_o_i = s[compute].split(compute_i3_o_o_i, factor=1)
s[compute].reorder(compute_i0_o_o_o, compute_i1_o_o_o, compute_i2_o_o_o, compute_i3_o_o_o, compute_i0_o_o_i, compute_i1_o_o_i, compute_i2_o_o_i, compute_i3_o_o_i, compute_i0_o_i, compute_i1_o_i, compute_i2_o_i, compute_i3_o_i, compute_i0_i, compute_i1_i, compute_i2_i, compute_i3_i)
s[conv2d_nchw].compute_at(s[compute], compute_i3_o_i)
@@ -1050,16 +810,16 @@ s[compute].bind(compute_i0_o_o_i_i1_o_o_i_fused_i2_o_o_i_fused_i3_o_o_i_fused, t
compute_i0_o_i_i1_o_i_fused_i2_o_i_fused_i3_o_i_fused = s[compute].fuse(compute_i0_o_i, compute_i1_o_i, compute_i2_o_i, compute_i3_o_i)
s[compute].bind(compute_i0_o_i_i1_o_i_fused_i2_o_i_fused_i3_o_i_fused, te.thread_axis("threadIdx.x"))
kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[kernel_shared].fuse(kernel_shared_ax0, kernel_shared_ax1, kernel_shared_ax2, kernel_shared_ax3)
-kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
+kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=24)
s[kernel_shared].vectorize(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
-kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=98)
+kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=56)
s[kernel_shared].bind(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis("threadIdx.x"))
pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[pad_temp_shared].fuse(pad_temp_shared_ax0, pad_temp_shared_ax1, pad_temp_shared_ax2, pad_temp_shared_ax3)
pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
s[pad_temp_shared].vectorize(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
-pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=98)
+pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=56)
s[pad_temp_shared].bind(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis("threadIdx.x"))
-s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, "auto_unroll_max_step", 512)
+s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, "auto_unroll_max_step", 64)
s[conv2d_nchw].pragma(conv2d_nchw_nn_o_o_o_o, "unroll_explicit", True)
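One way to sanity-check the schedule against the generated CUDA below (a plain-Python cross-check, no TVM needed): the extents of the `*_o_i` axes fused and bound to `threadIdx.x` should multiply to the kernel's launch bounds. In the updated schedule those extents come from the `factor=` arguments of the enclosing splits:

```python
# Extents of the compute_*_o_i axes fused into threadIdx.x, taken from the
# factor= arguments of the compute_*_o_o_i splits in the schedule above.
thread_axis_extents = {
    "i0_o_i": 1,  # batch
    "i1_o_i": 8,  # output channel (compute_i1_o_o_i split, factor=8)
    "i2_o_i": 7,  # output height  (compute_i2_o_o_i split, factor=7)
    "i3_o_i": 1,  # output width   (compute_i3_o_o_i split, factor=1)
}
num_threads = 1
for extent in thread_axis_extents.values():
    num_threads *= extent
print(num_threads)  # -> 56, matching __launch_bounds__(56) in the CUDA source
```

The same product for the previous schedule (1 * 2 * 7 * 7) gives the old `__launch_bounds__(98)`, so the tuner effectively traded a wider thread block for more per-thread work (14 accumulators instead of 4).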
CUDA source code:
@@ -1077,440 +837,202 @@ CUDA source code:
#define int64_t long long
#define uint64_t unsigned long long
#endif
-extern "C" __global__ void __launch_bounds__(98) default_function_kernel0(float* __restrict__ data, float* __restrict__ kernel, float* __restrict__ compute, float* __restrict__ bias) {
- float conv2d_nchw[4];
- __shared__ float pad_temp_shared[2016];
- __shared__ float kernel_shared[768];
+extern "C" __global__ void __launch_bounds__(56) default_function_kernel0(float* __restrict__ data, float* __restrict__ kernel, float* __restrict__ compute, float* __restrict__ bias) {
+ float conv2d_nchw[14];
+ __shared__ float pad_temp_shared[1296];
+ __shared__ float kernel_shared[2304];
conv2d_nchw[0] = 0.000000e+00f;
- conv2d_nchw[2] = 0.000000e+00f;
conv2d_nchw[1] = 0.000000e+00f;
+ conv2d_nchw[2] = 0.000000e+00f;
conv2d_nchw[3] = 0.000000e+00f;
- for (int rc_outer_outer = 0; rc_outer_outer < 16; ++rc_outer_outer) {
- for (int rx_outer_outer = 0; rx_outer_outer < 3; ++rx_outer_outer) {
- __syncthreads();
- pad_temp_shared[((int)threadIdx.x)] = (((((7 <= (((int)threadIdx.x) % 63)) && ((((int)threadIdx.x) % 63) < 56)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[(((((rc_outer_outer * 1568) + ((((int)threadIdx.x) / 63) * 49)) + rx_outer_outer) + (((int)threadIdx.x) % 63)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 98)] = (((((1 <= (((((int)threadIdx.x) / 7) + 5) % 9)) && ((((((int)threadIdx.x) / 7) + 5) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 98) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 5) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 196)] = (((((1 <= (((((int)threadIdx.x) / 7) + 1) % 9)) && ((((((int)threadIdx.x) / 7) + 1) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 196) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 1) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 294)] = (((((1 <= (((((int)threadIdx.x) / 7) + 6) % 9)) && ((((((int)threadIdx.x) / 7) + 6) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 294) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 6) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 392)] = (((((1 <= (((((int)threadIdx.x) / 7) + 2) % 9)) && ((((((int)threadIdx.x) / 7) + 2) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 392) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 2) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 490)] = (((((1 <= (((((int)threadIdx.x) / 7) + 7) % 9)) && ((((((int)threadIdx.x) / 7) + 7) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 490) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 7) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 588)] = (((((1 <= (((((int)threadIdx.x) / 7) + 3) % 9)) && ((((((int)threadIdx.x) / 7) + 3) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 588) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 3) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 686)] = (((((1 <= (((((int)threadIdx.x) / 7) + 8) % 9)) && ((((((int)threadIdx.x) / 7) + 8) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 686) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 8) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 784)] = (((((1 <= (((((int)threadIdx.x) / 7) + 4) % 9)) && ((((((int)threadIdx.x) / 7) + 4) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 784) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 4) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 882)] = (((((7 <= (((int)threadIdx.x) % 63)) && ((((int)threadIdx.x) % 63) < 56)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[(((((rc_outer_outer * 1568) + ((((int)threadIdx.x) / 63) * 49)) + rx_outer_outer) + (((int)threadIdx.x) % 63)) + 678)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 980)] = (((((1 <= (((((int)threadIdx.x) / 7) + 5) % 9)) && ((((((int)threadIdx.x) / 7) + 5) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 980) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 5) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 1078)] = (((((1 <= (((((int)threadIdx.x) / 7) + 1) % 9)) && ((((((int)threadIdx.x) / 7) + 1) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 1078) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 1) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 1176)] = (((((1 <= (((((int)threadIdx.x) / 7) + 6) % 9)) && ((((((int)threadIdx.x) / 7) + 6) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 1176) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 6) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 1274)] = (((((1 <= (((((int)threadIdx.x) / 7) + 2) % 9)) && ((((((int)threadIdx.x) / 7) + 2) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 1274) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 2) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 1372)] = (((((1 <= (((((int)threadIdx.x) / 7) + 7) % 9)) && ((((((int)threadIdx.x) / 7) + 7) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 1372) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 7) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 1470)] = (((((1 <= (((((int)threadIdx.x) / 7) + 3) % 9)) && ((((((int)threadIdx.x) / 7) + 3) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 1470) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 3) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 1568)] = (((((1 <= (((((int)threadIdx.x) / 7) + 8) % 9)) && ((((((int)threadIdx.x) / 7) + 8) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 1568) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 8) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 1666)] = (((((1 <= (((((int)threadIdx.x) / 7) + 4) % 9)) && ((((((int)threadIdx.x) / 7) + 4) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 1666) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 4) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 1764)] = (((((7 <= (((int)threadIdx.x) % 63)) && ((((int)threadIdx.x) % 63) < 56)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[(((((rc_outer_outer * 1568) + ((((int)threadIdx.x) / 63) * 49)) + rx_outer_outer) + (((int)threadIdx.x) % 63)) + 1364)] : 0.000000e+00f);
- pad_temp_shared[(((int)threadIdx.x) + 1862)] = (((((1 <= (((((int)threadIdx.x) / 7) + 5) % 9)) && ((((((int)threadIdx.x) / 7) + 5) % 9) < 8)) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 1862) / 63) * 49)) + ((((((int)threadIdx.x) / 7) + 5) % 9) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- if (((int)threadIdx.x) < 56) {
- pad_temp_shared[(((int)threadIdx.x) + 1960)] = ((((((int)threadIdx.x) < 49) && (1 <= (rx_outer_outer + (((int)threadIdx.x) % 7)))) && ((rx_outer_outer + (((int)threadIdx.x) % 7)) < 8)) ? data[((((((rc_outer_outer * 1568) + (((((int)threadIdx.x) + 1960) / 63) * 49)) + (((((int)threadIdx.x) / 7) + 1) * 7)) + rx_outer_outer) + (((int)threadIdx.x) % 7)) - 8)] : 0.000000e+00f);
- }
- kernel_shared[((int)threadIdx.x)] = kernel[(((((((int)blockIdx.x) * 36864) + ((((int)threadIdx.x) / 96) * 4608)) + (rc_outer_outer * 288)) + ((((int)threadIdx.x) % 96) * 3)) + rx_outer_outer)];
- kernel_shared[(((int)threadIdx.x) + 98)] = kernel[(((((((int)blockIdx.x) * 36864) + (((((int)threadIdx.x) + 98) / 96) * 4608)) + (rc_outer_outer * 288)) + (((((int)threadIdx.x) + 2) % 96) * 3)) + rx_outer_outer)];
- kernel_shared[(((int)threadIdx.x) + 196)] = kernel[(((((((int)blockIdx.x) * 36864) + (((((int)threadIdx.x) + 196) / 96) * 4608)) + (rc_outer_outer * 288)) + (((((int)threadIdx.x) + 4) % 96) * 3)) + rx_outer_outer)];
- kernel_shared[(((int)threadIdx.x) + 294)] = kernel[(((((((int)blockIdx.x) * 36864) + (((((int)threadIdx.x) + 294) / 96) * 4608)) + (rc_outer_outer * 288)) + (((((int)threadIdx.x) + 6) % 96) * 3)) + rx_outer_outer)];
- kernel_shared[(((int)threadIdx.x) + 392)] = kernel[(((((((int)blockIdx.x) * 36864) + (((((int)threadIdx.x) + 392) / 96) * 4608)) + (rc_outer_outer * 288)) + (((((int)threadIdx.x) + 8) % 96) * 3)) + rx_outer_outer)];
- kernel_shared[(((int)threadIdx.x) + 490)] = kernel[(((((((int)blockIdx.x) * 36864) + (((((int)threadIdx.x) + 490) / 96) * 4608)) + (rc_outer_outer * 288)) + (((((int)threadIdx.x) + 10) % 96) * 3)) + rx_outer_outer)];
- kernel_shared[(((int)threadIdx.x) + 588)] = kernel[(((((((int)blockIdx.x) * 36864) + (((((int)threadIdx.x) + 588) / 96) * 4608)) + (rc_outer_outer * 288)) + (((((int)threadIdx.x) + 12) % 96) * 3)) + rx_outer_outer)];
- if (((int)threadIdx.x) < 82) {
- kernel_shared[(((int)threadIdx.x) + 686)] = kernel[(((((((int)blockIdx.x) * 36864) + (((((int)threadIdx.x) + 686) / 96) * 4608)) + (rc_outer_outer * 288)) + ((((int)threadIdx.x) + 14) * 3)) + rx_outer_outer)];
+ conv2d_nchw[4] = 0.000000e+00f;
+ conv2d_nchw[5] = 0.000000e+00f;
+ conv2d_nchw[6] = 0.000000e+00f;
+ conv2d_nchw[7] = 0.000000e+00f;
+ conv2d_nchw[8] = 0.000000e+00f;
+ conv2d_nchw[9] = 0.000000e+00f;
+ conv2d_nchw[10] = 0.000000e+00f;
+ conv2d_nchw[11] = 0.000000e+00f;
+ conv2d_nchw[12] = 0.000000e+00f;
+ conv2d_nchw[13] = 0.000000e+00f;
+ for (int rc_outer_outer = 0; rc_outer_outer < 32; ++rc_outer_outer) {
+ __syncthreads();
+ pad_temp_shared[((int)threadIdx.x)] = ((((9 <= ((int)threadIdx.x)) && (1 <= (((int)threadIdx.x) % 9))) && ((((int)threadIdx.x) % 9) < 8)) ? data[((((rc_outer_outer * 784) + ((((int)threadIdx.x) / 9) * 7)) + (((int)threadIdx.x) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 56)] = (((((9 <= ((((int)threadIdx.x) + 56) % 81)) && (((((int)threadIdx.x) + 56) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 2) % 9))) && (((((int)threadIdx.x) + 2) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 56) / 81) * 49)) + ((((((int)threadIdx.x) + 56) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 2) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 112)] = (((((9 <= ((((int)threadIdx.x) + 31) % 81)) && (((((int)threadIdx.x) + 31) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 4) % 9))) && (((((int)threadIdx.x) + 4) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 112) / 81) * 49)) + ((((((int)threadIdx.x) + 31) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 4) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 168)] = ((((9 <= ((((int)threadIdx.x) + 6) % 81)) && (1 <= ((((int)threadIdx.x) + 6) % 9))) && (((((int)threadIdx.x) + 6) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 168) / 81) * 49)) + ((((((int)threadIdx.x) + 6) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 6) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 224)] = (((((9 <= ((((int)threadIdx.x) + 62) % 81)) && (((((int)threadIdx.x) + 62) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 8) % 9))) && (((((int)threadIdx.x) + 8) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 224) / 81) * 49)) + ((((((int)threadIdx.x) + 62) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 8) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 280)] = (((((9 <= ((((int)threadIdx.x) + 37) % 81)) && (((((int)threadIdx.x) + 37) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 1) % 9))) && (((((int)threadIdx.x) + 1) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 280) / 81) * 49)) + ((((((int)threadIdx.x) + 37) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 1) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 336)] = (((1 <= ((((int)threadIdx.x) + 3) % 9)) && (((((int)threadIdx.x) + 3) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 336) / 81) * 49)) + ((((((int)threadIdx.x) + 12) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 3) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 392)] = (((((9 <= ((((int)threadIdx.x) + 68) % 81)) && (((((int)threadIdx.x) + 68) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 5) % 9))) && (((((int)threadIdx.x) + 5) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 392) / 81) * 49)) + ((((((int)threadIdx.x) + 68) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 5) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 448)] = (((((9 <= ((((int)threadIdx.x) + 43) % 81)) && (((((int)threadIdx.x) + 43) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 7) % 9))) && (((((int)threadIdx.x) + 7) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 448) / 81) * 49)) + ((((((int)threadIdx.x) + 43) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 7) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 504)] = ((((((int)threadIdx.x) < 54) && (1 <= (((int)threadIdx.x) % 9))) && ((((int)threadIdx.x) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 504) / 81) * 49)) + (((((int)threadIdx.x) / 9) + 2) * 7)) + (((int)threadIdx.x) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 560)] = (((((9 <= ((((int)threadIdx.x) + 74) % 81)) && (((((int)threadIdx.x) + 74) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 2) % 9))) && (((((int)threadIdx.x) + 2) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 560) / 81) * 49)) + ((((((int)threadIdx.x) + 74) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 2) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 616)] = (((((9 <= ((((int)threadIdx.x) + 49) % 81)) && (((((int)threadIdx.x) + 49) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 4) % 9))) && (((((int)threadIdx.x) + 4) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 616) / 81) * 49)) + ((((((int)threadIdx.x) + 49) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 4) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 672)] = ((((((int)threadIdx.x) < 48) && (1 <= ((((int)threadIdx.x) + 6) % 9))) && (((((int)threadIdx.x) + 6) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 672) / 81) * 49)) + ((((((int)threadIdx.x) + 24) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 6) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 728)] = (((((9 <= ((((int)threadIdx.x) + 80) % 81)) && (((((int)threadIdx.x) + 80) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 8) % 9))) && (((((int)threadIdx.x) + 8) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 728) / 81) * 49)) + ((((((int)threadIdx.x) + 80) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 8) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 784)] = (((((9 <= ((((int)threadIdx.x) + 55) % 81)) && (((((int)threadIdx.x) + 55) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 1) % 9))) && (((((int)threadIdx.x) + 1) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 784) / 81) * 49)) + ((((((int)threadIdx.x) + 55) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 1) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 840)] = (((((9 <= ((((int)threadIdx.x) + 30) % 81)) && (((((int)threadIdx.x) + 30) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 3) % 9))) && (((((int)threadIdx.x) + 3) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 840) / 81) * 49)) + ((((((int)threadIdx.x) + 30) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 3) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 896)] = ((((9 <= ((((int)threadIdx.x) + 5) % 81)) && (1 <= ((((int)threadIdx.x) + 5) % 9))) && (((((int)threadIdx.x) + 5) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 896) / 81) * 49)) + ((((((int)threadIdx.x) + 5) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 5) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 952)] = (((((9 <= ((((int)threadIdx.x) + 61) % 81)) && (((((int)threadIdx.x) + 61) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 7) % 9))) && (((((int)threadIdx.x) + 7) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 952) / 81) * 49)) + ((((((int)threadIdx.x) + 61) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 7) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 1008)] = (((((1 <= (((((int)threadIdx.x) / 9) + 4) % 9)) && (((((int)threadIdx.x) + 36) % 81) < 72)) && (1 <= (((int)threadIdx.x) % 9))) && ((((int)threadIdx.x) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 1008) / 81) * 49)) + ((((((int)threadIdx.x) / 9) + 4) % 9) * 7)) + (((int)threadIdx.x) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 1064)] = (((1 <= ((((int)threadIdx.x) + 2) % 9)) && (((((int)threadIdx.x) + 2) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 1064) / 81) * 49)) + ((((((int)threadIdx.x) + 11) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 2) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 1120)] = (((((9 <= ((((int)threadIdx.x) + 67) % 81)) && (((((int)threadIdx.x) + 67) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 4) % 9))) && (((((int)threadIdx.x) + 4) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 1120) / 81) * 49)) + ((((((int)threadIdx.x) + 67) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 4) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 1176)] = (((((9 <= ((((int)threadIdx.x) + 42) % 81)) && (((((int)threadIdx.x) + 42) % 81) < 72)) && (1 <= ((((int)threadIdx.x) + 6) % 9))) && (((((int)threadIdx.x) + 6) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 1176) / 81) * 49)) + ((((((int)threadIdx.x) + 42) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 6) % 9)) - 8)] : 0.000000e+00f);
+ pad_temp_shared[(((int)threadIdx.x) + 1232)] = ((((((int)threadIdx.x) < 55) && (1 <= ((((int)threadIdx.x) + 8) % 9))) && (((((int)threadIdx.x) + 8) % 9) < 8)) ? data[(((((rc_outer_outer * 784) + (((((int)threadIdx.x) + 1232) / 81) * 49)) + ((((((int)threadIdx.x) + 17) % 81) / 9) * 7)) + ((((int)threadIdx.x) + 8) % 9)) - 8)] : 0.000000e+00f);
+ if (((int)threadIdx.x) < 8) {
+ pad_temp_shared[(((int)threadIdx.x) + 1288)] = 0.000000e+00f;
+ }
+ kernel_shared[(((int)threadIdx.x) * 24)] = kernel[((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24))];
+ kernel_shared[((((int)threadIdx.x) * 24) + 1)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 1)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 2)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 2)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 3)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 3)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 4)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 4)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 5)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 5)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 6)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 6)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 7)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 7)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 8)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 8)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 9)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 9)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 10)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 10)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 11)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 11)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 12)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 12)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 13)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 13)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 14)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 14)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 15)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 15)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 16)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 16)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 17)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 17)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 18)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 18)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 19)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 19)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 20)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 20)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 21)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 21)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 22)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 22)];
+ kernel_shared[((((int)threadIdx.x) * 24) + 23)] = kernel[(((((((int)blockIdx.x) * 73728) + ((((int)threadIdx.x) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((int)threadIdx.x) % 6) * 24)) + 23)];
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1344)] = kernel[((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 16) % 48) * 3))];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1345)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 16) % 48) * 3)) + 1)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1346)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 16) % 48) * 3)) + 2)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1347)] = kernel[((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 17) % 48) * 3))];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1348)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 17) % 48) * 3)) + 1)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1349)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 17) % 48) * 3)) + 2)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1350)] = kernel[((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 18) % 48) * 3))];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1351)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 18) % 48) * 3)) + 1)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1352)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 18) % 48) * 3)) + 2)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1353)] = kernel[((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 19) % 48) * 3))];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1354)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 19) % 48) * 3)) + 1)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1355)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 19) % 48) * 3)) + 2)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1356)] = kernel[((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 20) % 48) * 3))];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1357)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 20) % 48) * 3)) + 1)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1358)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 20) % 48) * 3)) + 2)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1359)] = kernel[((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 21) % 48) * 3))];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1360)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 21) % 48) * 3)) + 1)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1361)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 21) % 48) * 3)) + 2)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1362)] = kernel[((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 22) % 48) * 3))];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1363)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 22) % 48) * 3)) + 1)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1364)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 22) % 48) * 3)) + 2)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1365)] = kernel[((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 23) % 48) * 3))];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1366)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 23) % 48) * 3)) + 1)];
+ }
+ if (((int)threadIdx.x) < 40) {
+ kernel_shared[((((int)threadIdx.x) * 24) + 1367)] = kernel[(((((((int)blockIdx.x) * 73728) + (((((int)threadIdx.x) + 56) / 6) * 4608)) + (rc_outer_outer * 144)) + ((((((int)threadIdx.x) * 8) + 23) % 48) * 3)) + 2)];
+ }
+ __syncthreads();
+ for (int rc_outer_inner = 0; rc_outer_inner < 2; ++rc_outer_inner) {
+ for (int rx_outer_inner = 0; rx_outer_inner < 3; ++rx_outer_inner) {
+ for (int rc_inner = 0; rc_inner < 8; ++rc_inner) {
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 1)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 2)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 3)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 4)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 5)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 6)] * kernel_shared[(((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 144)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 1)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 144)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 2)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 144)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 3)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 144)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 4)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 144)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 5)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 144)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 6)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 144)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 9)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 3)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 10)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 3)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 11)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 3)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 12)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 3)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 13)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 3)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 14)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 3)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 15)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 3)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 9)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 147)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 10)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 147)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 11)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 147)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 12)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 147)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 13)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 147)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 14)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 147)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 15)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 147)]));
+ conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 18)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 6)]));
+ conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 19)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 6)]));
+ conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 20)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 6)]));
+ conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 21)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 6)]));
+ conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 22)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 6)]));
+ conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 23)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 6)]));
+ conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 24)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 6)]));
+ conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 18)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 150)]));
+ conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 19)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 150)]));
+ conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 20)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 150)]));
+ conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 21)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 150)]));
+ conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 22)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 150)]));
+ conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 23)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 150)]));
+ conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp_shared[(((((rc_outer_inner * 648) + (rc_inner * 81)) + ((((int)threadIdx.x) % 7) * 9)) + rx_outer_inner) + 24)] * kernel_shared[((((((((int)threadIdx.x) / 7) * 288) + (rc_outer_inner * 72)) + (rc_inner * 9)) + rx_outer_inner) + 150)]));
+ }
}
- __syncthreads();
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[(((int)threadIdx.x) % 49)] * kernel_shared[((((int)threadIdx.x) / 49) * 192)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[(((int)threadIdx.x) % 49)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 384)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[(((int)threadIdx.x) % 49)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 96)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[(((int)threadIdx.x) % 49)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 480)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 7)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 1)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 7)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 385)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 7)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 97)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 7)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 481)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 14)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 2)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 14)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 386)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 14)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 98)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 14)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 482)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 63)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 3)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 63)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 387)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 63)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 99)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 63)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 483)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 70)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 4)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 70)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 388)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 70)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 100)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 70)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 484)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 77)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 5)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 77)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 389)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 77)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 101)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 77)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 485)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 126)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 6)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 126)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 390)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 126)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 102)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 126)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 486)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 133)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 7)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 133)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 391)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 133)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 103)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 133)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 487)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 140)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 8)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 140)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 392)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 140)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 104)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 140)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 488)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 189)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 9)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 189)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 393)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 189)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 105)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 189)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 489)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 196)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 10)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 196)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 394)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 196)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 106)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 196)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 490)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 203)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 11)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 203)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 395)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 203)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 107)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 203)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 491)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 252)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 12)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 252)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 396)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 252)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 108)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 252)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 492)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 259)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 13)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 259)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 397)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 259)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 109)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 259)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 493)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 266)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 14)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 266)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 398)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 266)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 110)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 266)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 494)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 315)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 15)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 315)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 399)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 315)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 111)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 315)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 495)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 322)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 16)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 322)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 400)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 322)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 112)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 322)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 496)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 329)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 17)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 329)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 401)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 329)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 113)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 329)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 497)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 378)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 18)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 378)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 402)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 378)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 114)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 378)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 498)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 385)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 19)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 385)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 403)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 385)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 115)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 385)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 499)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 392)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 20)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 392)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 404)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 392)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 116)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 392)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 500)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 441)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 21)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 441)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 405)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 441)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 117)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 441)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 501)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 448)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 22)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 448)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 406)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 448)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 118)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 448)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 502)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 455)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 23)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 455)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 407)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 455)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 119)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 455)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 503)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 504)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 24)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 504)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 408)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 504)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 120)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 504)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 504)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 511)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 25)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 511)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 409)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 511)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 121)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 511)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 505)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 518)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 26)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 518)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 410)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 518)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 122)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 518)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 506)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 567)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 27)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 567)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 411)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 567)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 123)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 567)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 507)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 574)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 28)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 574)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 412)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 574)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 124)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 574)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 508)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 581)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 29)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 581)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 413)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 581)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 125)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 581)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 509)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 630)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 30)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 630)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 414)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 630)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 126)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 630)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 510)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 637)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 31)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 637)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 415)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 637)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 127)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 637)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 511)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 644)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 32)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 644)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 416)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 644)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 128)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 644)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 512)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 693)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 33)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 693)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 417)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 693)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 129)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 693)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 513)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 700)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 34)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 700)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 418)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 700)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 130)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 700)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 514)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 707)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 35)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 707)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 419)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 707)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 131)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 707)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 515)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 756)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 36)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 756)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 420)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 756)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 132)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 756)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 516)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 763)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 37)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 763)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 421)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 763)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 133)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 763)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 517)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 770)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 38)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 770)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 422)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 770)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 134)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 770)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 518)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 819)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 39)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 819)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 423)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 819)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 135)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 819)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 519)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 826)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 40)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 826)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 424)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 826)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 136)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 826)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 520)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 833)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 41)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 833)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 425)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 833)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 137)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 833)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 521)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 882)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 42)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 882)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 426)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 882)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 138)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 882)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 522)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 889)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 43)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 889)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 427)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 889)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 139)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 889)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 523)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 896)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 44)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 896)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 428)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 896)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 140)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 896)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 524)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 945)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 45)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 945)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 429)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 945)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 141)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 945)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 525)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 952)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 46)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 952)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 430)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 952)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 142)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 952)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 526)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 959)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 47)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 959)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 431)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 959)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 143)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 959)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 527)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1008)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 48)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1008)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 432)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1008)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 144)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1008)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 528)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1015)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 49)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1015)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 433)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1015)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 145)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1015)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 529)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1022)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 50)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1022)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 434)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1022)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 146)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1022)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 530)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1071)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 51)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1071)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 435)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1071)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 147)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1071)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 531)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1078)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 52)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1078)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 436)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1078)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 148)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1078)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 532)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1085)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 53)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1085)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 437)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1085)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 149)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1085)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 533)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1134)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 54)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1134)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 438)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1134)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 150)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1134)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 534)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1141)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 55)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1141)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 439)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1141)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 151)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1141)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 535)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1148)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 56)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1148)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 440)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1148)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 152)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1148)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 536)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1197)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 57)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1197)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 441)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1197)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 153)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1197)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 537)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1204)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 58)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1204)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 442)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1204)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 154)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1204)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 538)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1211)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 59)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1211)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 443)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1211)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 155)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1211)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 539)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1260)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 60)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1260)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 444)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1260)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 156)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1260)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 540)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1267)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 61)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1267)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 445)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1267)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 157)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1267)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 541)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1274)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 62)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1274)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 446)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1274)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 158)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1274)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 542)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1323)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 63)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1323)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 447)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1323)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 159)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1323)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 543)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1330)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 64)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1330)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 448)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1330)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 160)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1330)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 544)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1337)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 65)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1337)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 449)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1337)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 161)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1337)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 545)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1386)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 66)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1386)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 450)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1386)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 162)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1386)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 546)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1393)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 67)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1393)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 451)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1393)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 163)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1393)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 547)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1400)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 68)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1400)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 452)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1400)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 164)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1400)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 548)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1449)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 69)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1449)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 453)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1449)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 165)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1449)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 549)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1456)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 70)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1456)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 454)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1456)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 166)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1456)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 550)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1463)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 71)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1463)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 455)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1463)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 167)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1463)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 551)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1512)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 72)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1512)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 456)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1512)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 168)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1512)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 552)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1519)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 73)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1519)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 457)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1519)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 169)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1519)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 553)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1526)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 74)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1526)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 458)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1526)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 170)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1526)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 554)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1575)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 75)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1575)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 459)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1575)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 171)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1575)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 555)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1582)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 76)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1582)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 460)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1582)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 172)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1582)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 556)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1589)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 77)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1589)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 461)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1589)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 173)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1589)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 557)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1638)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 78)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1638)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 462)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1638)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 174)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1638)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 558)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1645)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 79)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1645)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 463)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1645)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 175)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1645)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 559)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1652)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 80)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1652)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 464)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1652)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 176)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1652)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 560)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1701)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 81)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1701)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 465)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1701)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 177)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1701)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 561)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1708)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 82)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1708)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 466)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1708)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 178)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1708)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 562)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1715)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 83)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1715)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 467)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1715)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 179)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1715)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 563)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1764)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 84)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1764)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 468)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1764)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 180)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1764)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 564)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1771)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 85)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1771)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 469)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1771)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 181)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1771)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 565)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1778)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 86)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1778)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 470)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1778)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 182)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1778)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 566)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1827)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 87)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1827)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 471)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1827)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 183)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1827)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 567)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1834)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 88)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1834)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 472)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1834)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 184)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1834)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 568)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1841)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 89)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1841)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 473)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1841)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 185)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1841)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 569)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1890)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 90)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1890)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 474)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1890)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 186)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1890)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 570)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1897)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 91)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1897)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 475)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1897)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 187)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1897)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 571)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1904)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 92)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1904)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 476)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1904)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 188)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1904)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 572)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1953)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 93)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1953)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 477)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1953)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 189)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1953)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 573)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1960)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 94)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1960)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 478)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1960)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 190)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1960)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 574)]));
- conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1967)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 95)]));
- conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1967)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 479)]));
- conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1967)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 191)]));
- conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp_shared[((((int)threadIdx.x) % 49) + 1967)] * kernel_shared[(((((int)threadIdx.x) / 49) * 192) + 575)]));
}
}
for (int i1_inner = 0; i1_inner < 2; ++i1_inner) {
- compute[((((((int)blockIdx.x) * 392) + ((((int)threadIdx.x) / 49) * 98)) + (i1_inner * 49)) + (((int)threadIdx.x) % 49))] = max((conv2d_nchw[i1_inner] + bias[(((((int)blockIdx.x) * 8) + ((((int)threadIdx.x) / 49) * 2)) + i1_inner)]), 0.000000e+00f);
- compute[(((((((int)blockIdx.x) * 392) + ((((int)threadIdx.x) / 49) * 98)) + (i1_inner * 49)) + (((int)threadIdx.x) % 49)) + 196)] = max((conv2d_nchw[(i1_inner + 2)] + bias[((((((int)blockIdx.x) * 8) + ((((int)threadIdx.x) / 49) * 2)) + i1_inner) + 4)]), 0.000000e+00f);
+ for (int i3_inner = 0; i3_inner < 7; ++i3_inner) {
+ compute[(((((((int)blockIdx.x) * 784) + ((((int)threadIdx.x) / 7) * 98)) + (i1_inner * 49)) + ((((int)threadIdx.x) % 7) * 7)) + i3_inner)] = max((conv2d_nchw[((i1_inner * 7) + i3_inner)] + bias[(((((int)blockIdx.x) * 16) + ((((int)threadIdx.x) / 7) * 2)) + i1_inner)]), 0.000000e+00f);
+ }
}
}
</pre></div>
@@ -1548,7 +1070,7 @@ In the example below we resume the status and do more 5 trials.</p>
Get devices for measurement successfully!
</pre></div>
</div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 27.217 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 20.941 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-conv2d-layer-cuda-py">
<div class="sphx-glr-download docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/e3e540f3b477c0c52d8eb73e674e8ffd/tune_conv2d_layer_cuda.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_conv2d_layer_cuda.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html b/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
index b6791e56c..3ac433f63 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_network_cuda.html
@@ -876,7 +876,7 @@ so we can read the log file and load the best schedules.</p>
Evaluate inference time cost...
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 9.7397 9.7346 9.7896 9.6948 0.0389
+ 9.6835 9.6888 9.7216 9.6402 0.0334
</pre></div>
</div>
</div>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_network_x86.html b/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
index 228aff7e4..350f7a4b3 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_network_x86.html
@@ -895,7 +895,7 @@ so we can read the log file and load the best schedules.</p>
Evaluate inference time cost...
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
- 766.0562 767.0835 768.9867 762.0984 2.9045
+ 790.2151 788.5246 795.5004 786.6204 3.8172
</pre></div>
</div>
</div>
@@ -917,7 +917,7 @@ to learn how to use the RPC Tracker and RPC Server.
To use the RPC Tracker in auto-scheduler, replace the runner in <code class="code docutils literal notranslate"><span class="pre">TuningOptions</span></code>
with <a class="reference internal" href="../../reference/api/python/auto_scheduler.html#tvm.auto_scheduler.RPCRunner" title="tvm.auto_scheduler.RPCRunner"><code class="xref any py py-class docutils literal notranslate"><span class="pre">auto_scheduler.RPCRunner</span></code></a>.</p></li>
</ol>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 21.204 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 18.949 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autoscheduler-tune-network-x86-py">
<div class="sphx-glr-download docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/e416b94ca1090b0897c0f6e0df95b911/tune_network_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">tune_network_x86.py</span></code></a></p>
diff --git a/docs/how_to/tune_with_autoscheduler/tune_sparse_x86.html b/docs/how_to/tune_with_autoscheduler/tune_sparse_x86.html
index e2ce7f4d3..a58c862d1 100644
--- a/docs/how_to/tune_with_autoscheduler/tune_sparse_x86.html
+++ b/docs/how_to/tune_with_autoscheduler/tune_sparse_x86.html
@@ -600,80 +600,30 @@ layout transformation, parallelization, vectorization, unrolling, and operator f
placeholder_4: Buffer(placeholder_14: Pointer(float32), float32, [65536], []),
compute: Buffer(compute_2: Pointer(float32), float32, [65536], [])}
buffer_map = {placeholder_5: placeholder, placeholder_6: placeholder_1, placeholder_7: placeholder_2, placeholder_8: placeholder_3, placeholder_9: placeholder_4, compute_1: compute}
- preflattened_buffer_map = {placeholder_9: placeholder_15: Buffer(placeholder_14, float32, [128, 512], []), compute_1: compute_3: Buffer(compute_2, float32, [128, 512], []), placeholder_7: placeholder_16: Buffer(placeholder_12, int32, [4916], []), placeholder_8: placeholder_17: Buffer(placeholder_13, int32, [33], []), placeholder_6: placeholder_18: Buffer(placeholder_11, float32, [4916, 16, 1], []), placeholder_5: placeholder_19: Buffer(placeholder_10, float32, [128, 256], [])} {
- for (i0.outer.i1.outer.fused: int32, 0, 32) "parallel" {
- allocate(compute_4: Pointer(global float32), float32, [2048]), storage_scope = global {
- for (i.outer.inner: int32, 0, 2) {
+ preflattened_buffer_map = {placeholder_7: placeholder_15: Buffer(placeholder_12, int32, [4916], []), placeholder_5: placeholder_16: Buffer(placeholder_10, float32, [128, 256], []), placeholder_8: placeholder_17: Buffer(placeholder_13, int32, [33], []), placeholder_6: placeholder_18: Buffer(placeholder_11, float32, [4916, 16, 1], []), compute_1: compute_3: Buffer(compute_2, float32, [128, 512], []), placeholder_9: placeholder_19: Buffer(placeholder_14, float32, [128, 512], [])} {
+ for (i0.outer.i1.outer.fused: int32, 0, 16) "parallel" {
+ allocate(compute_4: Pointer(global float32), float32, [4096]), storage_scope = global {
+ for (i.outer.inner: int32, 0, 64) {
for (nb_j.inner: int32, 0, 2) {
- for (i.inner.init: int32, 0, 32) {
- let cse_var_1: int32 = (((i.outer.inner*1024) + (i.inner.init*32)) + (nb_j.inner*16))
- {
- compute_5: Buffer(compute_4, float32, [2048], [])[cse_var_1] = 0f32
- compute_5[(cse_var_1 + 1)] = 0f32
- compute_5[(cse_var_1 + 2)] = 0f32
- compute_5[(cse_var_1 + 3)] = 0f32
- compute_5[(cse_var_1 + 4)] = 0f32
- compute_5[(cse_var_1 + 5)] = 0f32
- compute_5[(cse_var_1 + 6)] = 0f32
- compute_5[(cse_var_1 + 7)] = 0f32
- compute_5[(cse_var_1 + 8)] = 0f32
- compute_5[(cse_var_1 + 9)] = 0f32
- compute_5[(cse_var_1 + 10)] = 0f32
- compute_5[(cse_var_1 + 11)] = 0f32
- compute_5[(cse_var_1 + 12)] = 0f32
- compute_5[(cse_var_1 + 13)] = 0f32
- compute_5[(cse_var_1 + 14)] = 0f32
- compute_5[(cse_var_1 + 15)] = 0f32
+ for (i.inner.init: int32, 0, 2) {
+ for (j.init: int32, 0, 16) {
+ compute_5: Buffer(compute_4, float32, [4096], [])[((((i.outer.inner*64) + (i.inner.init*32)) + (nb_j.inner*16)) + j.init)] = 0f32
}
}
- for (elem_idx: int32, 0, let cse_var_2: int32 = ((floormod(i0.outer.i1.outer.fused, 16)*2) + nb_j.inner) in (placeholder_3[(cse_var_2 + 1)] - placeholder_3[cse_var_2])) {
- for (i.inner: int32, 0, 32) {
- let cse_var_21: int32 = (elem_idx*16)
- let cse_var_20: int32 = ((floormod(i0.outer.i1.outer.fused, 16)*2) + nb_j.inner)
- let cse_var_19: int32 = (((i.outer.inner*1024) + (i.inner*32)) + (nb_j.inner*16))
- let cse_var_18: int32 = (cse_var_19 + 1)
- let cse_var_17: int32 = (cse_var_19 + 11)
- let cse_var_16: int32 = (cse_var_19 + 12)
- let cse_var_15: int32 = (cse_var_19 + 13)
- let cse_var_14: int32 = (cse_var_19 + 14)
- let cse_var_13: int32 = (cse_var_19 + 15)
- let cse_var_12: int32 = (cse_var_19 + 2)
- let cse_var_11: int32 = (cse_var_19 + 3)
- let cse_var_10: int32 = (cse_var_19 + 4)
- let cse_var_9: int32 = (cse_var_19 + 5)
- let cse_var_8: int32 = (cse_var_19 + 6)
- let cse_var_7: int32 = (cse_var_19 + 7)
- let cse_var_6: int32 = (cse_var_19 + 8)
- let cse_var_5: int32 = (cse_var_19 + 9)
- let cse_var_4: int32 = (((floordiv(i0.outer.i1.outer.fused, 16)*16384) + (i.outer.inner*8192)) + (i.inner*256))
- let cse_var_3: int32 = (cse_var_19 + 10)
- {
- compute_5[cse_var_19] = (compute_5[cse_var_19] + (placeholder_1[((placeholder_3[cse_var_20]*16) + cse_var_21)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_18] = (compute_5[cse_var_18] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 1)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_12] = (compute_5[cse_var_12] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 2)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_11] = (compute_5[cse_var_11] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 3)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_10] = (compute_5[cse_var_10] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 4)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_9] = (compute_5[cse_var_9] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 5)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_8] = (compute_5[cse_var_8] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 6)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_7] = (compute_5[cse_var_7] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 7)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_6] = (compute_5[cse_var_6] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 8)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_5] = (compute_5[cse_var_5] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 9)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_3] = (compute_5[cse_var_3] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 10)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_17] = (compute_5[cse_var_17] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 11)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_16] = (compute_5[cse_var_16] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 12)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_15] = (compute_5[cse_var_15] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 13)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_14] = (compute_5[cse_var_14] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 14)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
- compute_5[cse_var_13] = (compute_5[cse_var_13] + (placeholder_1[(((placeholder_3[cse_var_20]*16) + cse_var_21) + 15)]*max(placeholder[(cse_var_4 + placeholder_2[(placeholder_3[cse_var_20] + elem_idx)])], 0f32)))
+ for (elem_idx: int32, 0, let cse_var_1: int32 = ((i0.outer.i1.outer.fused*2) + nb_j.inner) in (placeholder_3[(cse_var_1 + 1)] - placeholder_3[cse_var_1])) {
+ for (i.inner: int32, 0, 2) {
+ for (j: int32, 0, 16) {
+ let cse_var_3: int32 = ((i0.outer.i1.outer.fused*2) + nb_j.inner)
+ let cse_var_2: int32 = ((((i.outer.inner*64) + (i.inner*32)) + (nb_j.inner*16)) + j)
+ compute_5[cse_var_2] = (compute_5[cse_var_2] + (placeholder_1[(((placeholder_3[cse_var_3]*16) + (elem_idx*16)) + j)]*max(placeholder[(((i.outer.inner*512) + (i.inner*256)) + placeholder_2[(placeholder_3[cse_var_3] + elem_idx)])], 0f32)))
}
}
}
}
}
- for (i0.inner: int32, 0, 64) {
- for (i1.inner: int32, 0, 32) {
- let cse_var_22: int32 = ((((floordiv(i0.outer.i1.outer.fused, 16)*32768) + (i0.inner*512)) + (floormod(i0.outer.i1.outer.fused, 16)*32)) + i1.inner)
- compute[cse_var_22] = max((compute_5[((i0.inner*32) + i1.inner)] + placeholder_4[cse_var_22]), 0f32)
- }
+ for (i0.inner: int32, 0, 128) {
+ let cse_var_4: int32 = ((i0.inner*512) + (i0.outer.i1.outer.fused*32))
+ compute[ramp(cse_var_4, 1, 32)] = max((compute_5[ramp((i0.inner*32), 1, 32)] + placeholder_4[ramp(cse_var_4, 1, 32)]), broadcast(0f32, 32))
}
}
}
@@ -712,7 +662,7 @@ layout transformation, parallelization, vectorization, unrolling, and operator f
</pre></div>
</div>
<p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 1.723 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 2.110 ms
</pre></div>
</div>
<div class="admonition note">
diff --git a/docs/how_to/tune_with_autotvm/sg_execution_times.html b/docs/how_to/tune_with_autotvm/sg_execution_times.html
index 78e85574c..a9e41bc72 100644
--- a/docs/how_to/tune_with_autotvm/sg_execution_times.html
+++ b/docs/how_to/tune_with_autotvm/sg_execution_times.html
@@ -300,13 +300,13 @@
<div class="section" id="computation-times">
<span id="sphx-glr-how-to-tune-with-autotvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:46.224</strong> total execution time for <strong>how_to_tune_with_autotvm</strong> files:</p>
+<p><strong>00:45.441</strong> total execution time for <strong>how_to_tune_with_autotvm</strong> files:</p>
<ul class="simple">
-<li><p><strong>00:45.298</strong>: <a class="reference internal" href="tune_conv2d_cuda.html#sphx-glr-how-to-tune-with-autotvm-tune-conv2d-cuda-py"><span class="std std-ref">Tuning High Performance Convolution on NVIDIA GPUs</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_cuda.py</span></code>)</p></li>
-<li><p><strong>00:00.245</strong>: <a class="reference internal" href="tune_relay_x86.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-x86-py"><span class="std std-ref">Auto-tuning a Convolutional Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_x86.py</span></code>)</p></li>
-<li><p><strong>00:00.227</strong>: <a class="reference internal" href="tune_relay_mobile_gpu.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-mobile-gpu-py"><span class="std std-ref">Auto-tuning a Convolutional Network for Mobile GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_mobile_gpu.py</span></code>)</p></li>
-<li><p><strong>00:00.227</strong>: <a class="reference internal" href="tune_relay_arm.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-arm-py"><span class="std std-ref">Auto-tuning a Convolutional Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_arm.py</span></code>)</p></li>
-<li><p><strong>00:00.227</strong>: <a class="reference internal" href="tune_relay_cuda.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-cuda-py"><span class="std std-ref">Auto-tuning a Convolutional Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_cuda.py</span></code>)</p></li>
+<li><p><strong>00:44.572</strong>: <a class="reference internal" href="tune_conv2d_cuda.html#sphx-glr-how-to-tune-with-autotvm-tune-conv2d-cuda-py"><span class="std std-ref">Tuning High Performance Convolution on NVIDIA GPUs</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_conv2d_cuda.py</span></code>)</p></li>
+<li><p><strong>00:00.225</strong>: <a class="reference internal" href="tune_relay_x86.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-x86-py"><span class="std std-ref">Auto-tuning a Convolutional Network for x86 CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_x86.py</span></code>)</p></li>
+<li><p><strong>00:00.217</strong>: <a class="reference internal" href="tune_relay_arm.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-arm-py"><span class="std std-ref">Auto-tuning a Convolutional Network for ARM CPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_arm.py</span></code>)</p></li>
+<li><p><strong>00:00.214</strong>: <a class="reference internal" href="tune_relay_mobile_gpu.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-mobile-gpu-py"><span class="std std-ref">Auto-tuning a Convolutional Network for Mobile GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_mobile_gpu.py</span></code>)</p></li>
+<li><p><strong>00:00.212</strong>: <a class="reference internal" href="tune_relay_cuda.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-cuda-py"><span class="std std-ref">Auto-tuning a Convolutional Network for NVIDIA GPU</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_cuda.py</span></code>)</p></li>
</ul>
</div>
diff --git a/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html b/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
index 73411b64f..db64f8993 100644
--- a/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
+++ b/docs/how_to/tune_with_autotvm/tune_conv2d_cuda.html
@@ -1142,8 +1142,8 @@ Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 4, 4, 32]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 1, 128]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2885496
-No: 6 GFLOPS: 95.31/95.31 result: MeasureResult(costs=(0.0024288898541666667,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.794248342514038, timestamp=1653092258.3639417) [('tile_f', [-1, 1, 1, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 4, 4]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3754080
-No: 7 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+No: 6 GFLOPS: 93.66/93.66 result: MeasureResult(costs=(0.002471689604166667,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.7587053775787354, timestamp=1653105959.8730922) [('tile_f', [-1, 1, 1, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 4, 4]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3754080
+No: 7 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -1266,7 +1266,7 @@ Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 16, 32]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 256, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6225319
-No: 8 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+No: 8 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -1389,7 +1389,7 @@ Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 1, 32]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 8, 64]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,943546
-No: 9 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+No: 9 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -1512,7 +1512,7 @@ Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 4, 16, 4]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 16, 32]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2868708
-No: 10 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+No: 10 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 142, in build
res = future.result()
File "/usr/lib/python3.7/concurrent/futures/_base.py", line 435, in result
@@ -1530,7 +1530,7 @@ No: 10 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
TimeoutError
[('tile_f', [-1, 32, 2, 4]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 4, 2]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4691833
-No: 11 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+No: 11 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -1653,7 +1653,7 @@ Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 2, 64]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 4, 4]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1042124
-No: 12 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+No: 12 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -1776,7 +1776,7 @@ Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 32, 1, 4]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 32, 16]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,10013405
-No: 13 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+No: 13 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -1899,7 +1899,7 @@ Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 8, 8, 2]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 4, 32]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6732082
-No: 14 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+No: 14 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -2022,7 +2022,7 @@ Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 4, 32]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 4, 128]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 1)],None,7536735
-No: 15 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+No: 15 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -2145,7 +2145,7 @@ Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 1, 4]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 128, 4]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,482121
-No: 16 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+No: 16 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -2268,7 +2268,7 @@ Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 2, 1, 16]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 32, 8]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2824525
-No: 17 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+No: 17 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -2391,7 +2391,7 @@ Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 64, 1, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 8, 8]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4559286
-No: 18 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+No: 18 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 571, in __call__
func, arg_info = _build_func_common(measure_input, self.runtime, **kwargs)
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 523, in _build_func_common
@@ -2514,7 +2514,7 @@ Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 854, in verify_pass
raise InstantiationError("Skipped because of invalid gpu kernel")
tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel [('tile_f', [-1, 1, 32, 16]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 1, 512]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9677544
-No: 19 GFLOPS: 0.00/95.31 result: Traceback (most recent call last):
+No: 19 GFLOPS: 0.00/93.66 result: Traceback (most recent call last):
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 721, in __call__
yield remote, remote.load_module(os.path.split(build_result.filename)[1])
File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 685, in run_through_rpc
@@ -2602,7 +2602,7 @@ tvm._ffi.base.TVMError: Traceback (most recent call last):
15: _PyEval_EvalFrameDefault
14: 0x0000000000537c30
13: _PyObject_FastCallKeywords
- 12: 0x00007f4ec66d1fa2
+ 12: 0x00007fe866cb3fa2
11: _ctypes_callproc
10: ffi_call
9: ffi_call_unix64
@@ -2667,7 +2667,7 @@ Traceback (most recent call last):
21: _PyFunction_FastCallKeywords
20: _PyEval_EvalFrameDefault
19: _PyFunction_FastCall [('tile_f', [-1, 8, 2, 16]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 1, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6390073
-No: 20 GFLOPS: 143.85/143.85 result: MeasureResult(costs=(0.0016093728999999998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.4286088943481445, timestamp=1653092284.9035137) [('tile_f', [-1, 1, 4, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 4, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9881539
+No: 20 GFLOPS: 144.60/144.60 result: MeasureResult(costs=(0.0016010082399999998,), error_no=MeasureErrorNo.NO_ERROR, all_cost=1.4342105388641357, timestamp=1653105986.3184521) [('tile_f', [-1, 1, 4, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 4, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9881539
</pre></div>
</div>
<p>Finally we can inspect the best config from log file, check correctness,
@@ -2706,7 +2706,7 @@ and measure running time.</p>
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Best config:
[('tile_f', [-1, 1, 4, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 4, 1]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,9881539
-Time cost of this operator: 0.001960
+Time cost of this operator: 0.002024
</pre></div>
</div>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-tune-with-autotvm-tune-conv2d-cuda-py">
diff --git a/docs/how_to/work_with_microtvm/micro_autotune.html b/docs/how_to/work_with_microtvm/micro_autotune.html
index ce1421516..7017f8047 100644
--- a/docs/how_to/work_with_microtvm/micro_autotune.html
+++ b/docs/how_to/work_with_microtvm/micro_autotune.html
@@ -553,10 +553,10 @@ the tuned operator.</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>########## Build without Autotuning ##########
Node Name Ops Time(us) Time(%) Shape Inputs Outputs
--------- --- -------- ------- ----- ------ -------
-tvmgen_default_fused_nn_contrib_conv2d_NCHWc tvmgen_default_fused_nn_contrib_conv2d_NCHWc 313.4 98.71 (1, 2, 10, 10, 3) 2 1
-tvmgen_default_fused_layout_transform_1 tvmgen_default_fused_layout_transform_1 3.178 1.001 (1, 6, 10, 10) 1 1
-tvmgen_default_fused_layout_transform tvmgen_default_fused_layout_transform 0.917 0.289 (1, 1, 10, 10, 3) 1 1
-Total_time - 317.495 - - - -
+tvmgen_default_fused_nn_contrib_conv2d_NCHWc tvmgen_default_fused_nn_contrib_conv2d_NCHWc 312.6 98.732 (1, 2, 10, 10, 3) 2 1
+tvmgen_default_fused_layout_transform_1 tvmgen_default_fused_layout_transform_1 3.091 0.976 (1, 6, 10, 10) 1 1
+tvmgen_default_fused_layout_transform tvmgen_default_fused_layout_transform 0.925 0.292 (1, 1, 10, 10, 3) 1 1
+Total_time - 316.616 - - - -
</pre></div>
</div>
</div>
@@ -608,10 +608,10 @@ Total_time -
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>########## Build with Autotuning ##########
Node Name Ops Time(us) Time(%) Shape Inputs Outputs
--------- --- -------- ------- ----- ------ -------
-tvmgen_default_fused_nn_contrib_conv2d_NCHWc tvmgen_default_fused_nn_contrib_conv2d_NCHWc 225.7 98.764 (1, 1, 10, 10, 6) 2 1
-tvmgen_default_fused_layout_transform_1 tvmgen_default_fused_layout_transform_1 1.99 0.871 (1, 6, 10, 10) 1 1
-tvmgen_default_fused_layout_transform tvmgen_default_fused_layout_transform 0.834 0.365 (1, 3, 10, 10, 1) 1 1
-Total_time - 228.524 - - - -
+tvmgen_default_fused_nn_contrib_conv2d_NCHWc tvmgen_default_fused_nn_contrib_conv2d_NCHWc 131.2 98.007 (1, 6, 10, 10, 1) 2 1
+tvmgen_default_fused_layout_transform_1 tvmgen_default_fused_layout_transform_1 1.767 1.32 (1, 6, 10, 10) 1 1
+tvmgen_default_fused_layout_transform tvmgen_default_fused_layout_transform 0.901 0.673 (1, 1, 10, 10, 3) 1 1
+Total_time - 133.868 - - - -
</pre></div>
</div>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-how-to-work-with-microtvm-micro-autotune-py">
diff --git a/docs/how_to/work_with_microtvm/sg_execution_times.html b/docs/how_to/work_with_microtvm/sg_execution_times.html
index a4f901329..fe59a1c1d 100644
--- a/docs/how_to/work_with_microtvm/sg_execution_times.html
+++ b/docs/how_to/work_with_microtvm/sg_execution_times.html
@@ -300,13 +300,13 @@
<div class="section" id="computation-times">
<span id="sphx-glr-how-to-work-with-microtvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:47.748</strong> total execution time for <strong>how_to_work_with_microtvm</strong> files:</p>
+<p><strong>00:46.345</strong> total execution time for <strong>how_to_work_with_microtvm</strong> files:</p>
<ul class="simple">
-<li><p><strong>00:43.362</strong>: <a class="reference internal" href="micro_autotune.html#sphx-glr-how-to-work-with-microtvm-micro-autotune-py"><span class="std std-ref">Autotuning with microTVM</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_autotune.py</span></code>)</p></li>
-<li><p><strong>00:03.758</strong>: <a class="reference internal" href="micro_tflite.html#sphx-glr-how-to-work-with-microtvm-micro-tflite-py"><span class="std std-ref">microTVM with TFLite Models</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_tflite.py</span></code>)</p></li>
-<li><p><strong>00:00.212</strong>: <a class="reference internal" href="micro_ethosu.html#sphx-glr-how-to-work-with-microtvm-micro-ethosu-py"><span class="std std-ref">Running TVM on bare metal Arm(R) Cortex(R)-M55 CPU and Ethos(TM)-U55 NPU with CMSIS-NN</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_ethosu.py</span></code>)</p></li>
-<li><p><strong>00:00.208</strong>: <a class="reference internal" href="micro_reference_vm.html#sphx-glr-how-to-work-with-microtvm-micro-reference-vm-py"><span class="std std-ref">microTVM Reference Virtual Machines</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_reference_vm.py</span></code>)</p></li>
-<li><p><strong>00:00.207</strong>: <a class="reference internal" href="micro_tvmc.html#sphx-glr-how-to-work-with-microtvm-micro-tvmc-py"><span class="std std-ref">Executing a Tiny Model with TVMC Micro</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_tvmc.py</span></code>)</p></li>
+<li><p><strong>00:42.090</strong>: <a class="reference internal" href="micro_autotune.html#sphx-glr-how-to-work-with-microtvm-micro-autotune-py"><span class="std std-ref">Autotuning with microTVM</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_autotune.py</span></code>)</p></li>
+<li><p><strong>00:03.627</strong>: <a class="reference internal" href="micro_tflite.html#sphx-glr-how-to-work-with-microtvm-micro-tflite-py"><span class="std std-ref">microTVM with TFLite Models</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_tflite.py</span></code>)</p></li>
+<li><p><strong>00:00.222</strong>: <a class="reference internal" href="micro_reference_vm.html#sphx-glr-how-to-work-with-microtvm-micro-reference-vm-py"><span class="std std-ref">microTVM Reference Virtual Machines</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_reference_vm.py</span></code>)</p></li>
+<li><p><strong>00:00.204</strong>: <a class="reference internal" href="micro_tvmc.html#sphx-glr-how-to-work-with-microtvm-micro-tvmc-py"><span class="std std-ref">Executing a Tiny Model with TVMC Micro</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_tvmc.py</span></code>)</p></li>
+<li><p><strong>00:00.202</strong>: <a class="reference internal" href="micro_ethosu.html#sphx-glr-how-to-work-with-microtvm-micro-ethosu-py"><span class="std std-ref">Running TVM on bare metal Arm(R) Cortex(R)-M55 CPU and Ethos(TM)-U55 NPU with CMSIS-NN</span></a> (<code class="docutils literal notranslate"><span class="pre">micro_ethosu.py</span></code>)</p></li>
</ul>
</div>
diff --git a/docs/how_to/work_with_relay/sg_execution_times.html b/docs/how_to/work_with_relay/sg_execution_times.html
index 072140602..b337ceafa 100644
--- a/docs/how_to/work_with_relay/sg_execution_times.html
+++ b/docs/how_to/work_with_relay/sg_execution_times.html
@@ -300,11 +300,11 @@
<div class="section" id="computation-times">
<span id="sphx-glr-how-to-work-with-relay-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:09.549</strong> total execution time for <strong>how_to_work_with_relay</strong> files:</p>
+<p><strong>00:09.710</strong> total execution time for <strong>how_to_work_with_relay</strong> files:</p>
<ul class="simple">
-<li><p><strong>00:07.543</strong>: <a class="reference internal" href="using_external_lib.html#sphx-glr-how-to-work-with-relay-using-external-lib-py"><span class="std std-ref">Using External Libraries in Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_external_lib.py</span></code>)</p></li>
-<li><p><strong>00:01.781</strong>: <a class="reference internal" href="build_gcn.html#sphx-glr-how-to-work-with-relay-build-gcn-py"><span class="std std-ref">Building a Graph Convolutional Network</span></a> (<code class="docutils literal notranslate"><span class="pre">build_gcn.py</span></code>)</p></li>
-<li><p><strong>00:00.225</strong>: <a class="reference internal" href="using_relay_viz.html#sphx-glr-how-to-work-with-relay-using-relay-viz-py"><span class="std std-ref">Use Relay Visualizer to Visualize Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_relay_viz.py</span></code>)</p></li>
+<li><p><strong>00:07.104</strong>: <a class="reference internal" href="using_external_lib.html#sphx-glr-how-to-work-with-relay-using-external-lib-py"><span class="std std-ref">Using External Libraries in Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_external_lib.py</span></code>)</p></li>
+<li><p><strong>00:02.394</strong>: <a class="reference internal" href="build_gcn.html#sphx-glr-how-to-work-with-relay-build-gcn-py"><span class="std std-ref">Building a Graph Convolutional Network</span></a> (<code class="docutils literal notranslate"><span class="pre">build_gcn.py</span></code>)</p></li>
+<li><p><strong>00:00.212</strong>: <a class="reference internal" href="using_relay_viz.html#sphx-glr-how-to-work-with-relay-using-relay-viz-py"><span class="std std-ref">Use Relay Visualizer to Visualize Relay</span></a> (<code class="docutils literal notranslate"><span class="pre">using_relay_viz.py</span></code>)</p></li>
</ul>
</div>
diff --git a/docs/how_to/work_with_schedules/sg_execution_times.html b/docs/how_to/work_with_schedules/sg_execution_times.html
index 0661c6cc6..a93840730 100644
--- a/docs/how_to/work_with_schedules/sg_execution_times.html
+++ b/docs/how_to/work_with_schedules/sg_execution_times.html
@@ -300,16 +300,16 @@
<div class="section" id="computation-times">
<span id="sphx-glr-how-to-work-with-schedules-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:05.957</strong> total execution time for <strong>how_to_work_with_schedules</strong> files:</p>
+<p><strong>00:05.807</strong> total execution time for <strong>how_to_work_with_schedules</strong> files:</p>
<ul class="simple">
-<li><p><strong>00:02.147</strong>: <a class="reference internal" href="intrin_math.html#sphx-glr-how-to-work-with-schedules-intrin-math-py"><span class="std std-ref">Intrinsics and Math Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">intrin_math.py</span></code>)</p></li>
-<li><p><strong>00:01.244</strong>: <a class="reference internal" href="tensorize.html#sphx-glr-how-to-work-with-schedules-tensorize-py"><span class="std std-ref">Use Tensorize to Leverage Hardware Intrinsics</span></a> (<code class="docutils literal notranslate"><span class="pre">tensorize.py</span></code>)</p></li>
-<li><p><strong>00:00.759</strong>: <a class="reference internal" href="reduction.html#sphx-glr-how-to-work-with-schedules-reduction-py"><span class="std std-ref">Reduction</span></a> (<code class="docutils literal notranslate"><span class="pre">reduction.py</span></code>)</p></li>
-<li><p><strong>00:00.752</strong>: <a class="reference internal" href="scan.html#sphx-glr-how-to-work-with-schedules-scan-py"><span class="std std-ref">Scan and Recurrent Kernel</span></a> (<code class="docutils literal notranslate"><span class="pre">scan.py</span></code>)</p></li>
-<li><p><strong>00:00.321</strong>: <a class="reference internal" href="extern_op.html#sphx-glr-how-to-work-with-schedules-extern-op-py"><span class="std std-ref">External Tensor Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">extern_op.py</span></code>)</p></li>
-<li><p><strong>00:00.250</strong>: <a class="reference internal" href="tedd.html#sphx-glr-how-to-work-with-schedules-tedd-py"><span class="std std-ref">Use Tensor Expression Debug Display (TEDD) for Visualization</span></a> (<code class="docutils literal notranslate"><span class="pre">tedd.py</span></code>)</p></li>
-<li><p><strong>00:00.250</strong>: <a class="reference internal" href="schedule_primitives.html#sphx-glr-how-to-work-with-schedules-schedule-primitives-py"><span class="std std-ref">Schedule Primitives in TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">schedule_primitives.py</span></code>)</p></li>
-<li><p><strong>00:00.234</strong>: <a class="reference internal" href="tuple_inputs.html#sphx-glr-how-to-work-with-schedules-tuple-inputs-py"><span class="std std-ref">Compute and Reduce with Tuple Inputs</span></a> (<code class="docutils literal notranslate"><span class="pre">tuple_inputs.py</span></code>)</p></li>
+<li><p><strong>00:02.132</strong>: <a class="reference internal" href="intrin_math.html#sphx-glr-how-to-work-with-schedules-intrin-math-py"><span class="std std-ref">Intrinsics and Math Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">intrin_math.py</span></code>)</p></li>
+<li><p><strong>00:01.196</strong>: <a class="reference internal" href="tensorize.html#sphx-glr-how-to-work-with-schedules-tensorize-py"><span class="std std-ref">Use Tensorize to Leverage Hardware Intrinsics</span></a> (<code class="docutils literal notranslate"><span class="pre">tensorize.py</span></code>)</p></li>
+<li><p><strong>00:00.740</strong>: <a class="reference internal" href="reduction.html#sphx-glr-how-to-work-with-schedules-reduction-py"><span class="std std-ref">Reduction</span></a> (<code class="docutils literal notranslate"><span class="pre">reduction.py</span></code>)</p></li>
+<li><p><strong>00:00.726</strong>: <a class="reference internal" href="scan.html#sphx-glr-how-to-work-with-schedules-scan-py"><span class="std std-ref">Scan and Recurrent Kernel</span></a> (<code class="docutils literal notranslate"><span class="pre">scan.py</span></code>)</p></li>
+<li><p><strong>00:00.315</strong>: <a class="reference internal" href="extern_op.html#sphx-glr-how-to-work-with-schedules-extern-op-py"><span class="std std-ref">External Tensor Functions</span></a> (<code class="docutils literal notranslate"><span class="pre">extern_op.py</span></code>)</p></li>
+<li><p><strong>00:00.245</strong>: <a class="reference internal" href="schedule_primitives.html#sphx-glr-how-to-work-with-schedules-schedule-primitives-py"><span class="std std-ref">Schedule Primitives in TVM</span></a> (<code class="docutils literal notranslate"><span class="pre">schedule_primitives.py</span></code>)</p></li>
+<li><p><strong>00:00.230</strong>: <a class="reference internal" href="tedd.html#sphx-glr-how-to-work-with-schedules-tedd-py"><span class="std std-ref">Use Tensor Expression Debug Display (TEDD) for Visualization</span></a> (<code class="docutils literal notranslate"><span class="pre">tedd.py</span></code>)</p></li>
+<li><p><strong>00:00.223</strong>: <a class="reference internal" href="tuple_inputs.html#sphx-glr-how-to-work-with-schedules-tuple-inputs-py"><span class="std std-ref">Compute and Reduce with Tuple Inputs</span></a> (<code class="docutils literal notranslate"><span class="pre">tuple_inputs.py</span></code>)</p></li>
</ul>
</div>
diff --git a/docs/how_to/work_with_schedules/tensorize.html b/docs/how_to/work_with_schedules/tensorize.html
index 83997ceac..fc4b6cfee 100644
--- a/docs/how_to/work_with_schedules/tensorize.html
+++ b/docs/how_to/work_with_schedules/tensorize.html
@@ -552,7 +552,7 @@ The importing needs to happen before the tensorized GEMV being executed.</p>
C: Buffer(C_2: Pointer(float32), float32, [524288], [])}
buffer_map = {A_1: A, B_1: B, C_1: C}
preflattened_buffer_map = {A_1: A_3: Buffer(A_2, float32, [1024, 64], []), B_1: B_3: Buffer(B_2, float32, [512, 64], []), C_1: C_3: Buffer(C_2, float32, [1024, 512], [])} {
- attr [IterVar(i: int32, (nullptr), "DataPar", "")] "pragma_import_llvm" = "; ModuleID = '/tmp/tmpbxn_r8qb/input0.cc'\nsource_filename = \"/tmp/tmpbxn_r8qb/input0.cc\"\ntarget datalayout = \"e-m:e-i64:64-f80:128-n8:16:32:64-S128\"\ntarget triple = \"x86_64-pc-linux-gnu\"\n\n; Function Attrs: noinline nounwind optnone uwtable\ndefine dso_local i32 @gemv_update(float*, float*, float*, i32, i32, i32) #0 {\n %7 = allo [...]
+ attr [IterVar(i: int32, (nullptr), "DataPar", "")] "pragma_import_llvm" = "; ModuleID = '/tmp/tmp4i8vuuix/input0.cc'\nsource_filename = \"/tmp/tmp4i8vuuix/input0.cc\"\ntarget datalayout = \"e-m:e-i64:64-f80:128-n8:16:32:64-S128\"\ntarget triple = \"x86_64-pc-linux-gnu\"\n\n; Function Attrs: noinline nounwind optnone uwtable\ndefine dso_local i32 @gemv_update(float*, float*, float*, i32, i32, i32) #0 {\n %7 = allo [...]
for (i, 0, 1024) {
for (j.outer: int32, 0, 32) {
@tir.call_extern("gemv_update", @tir.tvm_access_ptr(@tir.type_annotation(, dtype=float32), C_2, ((i*512) + (j.outer*16)), 16, 2, dtype=handle), @tir.tvm_access_ptr(@tir.type_annotation(, dtype=float32), A_2, (i*64), 64, 1, dtype=handle), @tir.tvm_access_ptr(@tir.type_annotation(, dtype=float32), B_2, (j.outer*1024), 1024, 1, dtype=handle), 16, 64, 64, dtype=int32)
diff --git a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator-members.html b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator-members.html
index 0288c6810..b07d50f17 100644
--- a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator-members.html
+++ b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator-members.html
@@ -81,19 +81,20 @@ $(function() {
<tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aed593996e4076632450de8fde776707c">GetDataPtr</a>(const ObjectRef &ref)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span><span class="mlabel">static</span></td></tr>
<tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a2f706028c59f1c2d5a87ae58785b79c9">MutateComputeLocation</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
<tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#acb242cfc6875055d75f7ea7adcfa9c14">MutateParallel</a>(int64_t max_jobs_per_core)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a84ed21cbc627ff6dd49f983a05113696">MutateTileSize</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a5bedfb467944180740728c76ba39312f">MutateUnroll</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aa07c1f6d66a438ea950637d13ed09471">ObjectRef</a>()=default</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a6a7dd7404edf1c26f8dbd9bd92d03a02">ObjectRef</a>(ObjectPtr< Object > data)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">explicit</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aa1bd13a7185cb4b2b6bdde49416e8aa4">operator!=</a>(const ObjectRef &other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a3deeeac5827a88f375b8c6ae1039c219">operator-></a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a4744bf4a1b48f202d41b51dc5e08e6ee">operator<</a>(const ObjectRef &other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#affdf1b8cdb36e140de7b3ad7064e4617">operator==</a>(const ObjectRef &other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#abe71c5097f4221c45091b86e5ecec259">PyMutator</a>(PyMutatorNode::FInitializeWithTuneContext f_initialize_with_tune_context, PyMutatorNode::FApply f_apply, PyMutatorNode::FAsString f_as_string)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#ae31a5b9f40781d60a2901994ead700e8">same_as</a>(const ObjectRef &other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a7c1529cf73f979a4c4fa12f8fcc3588c">TVM_DEFINE_MUTABLE_OBJECT_REF_METHODS</a>(Mutator, ObjectRef, MutatorNode)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a4e7cdb1574b93a59e784d70aa47b8da7">unique</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a0ae0da21d247cd87ea94fe3777c4405e">use_count</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a008b237e2c944cc25c123ef412dcd397">MutateThreadBinding</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a84ed21cbc627ff6dd49f983a05113696">MutateTileSize</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a5bedfb467944180740728c76ba39312f">MutateUnroll</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aa07c1f6d66a438ea950637d13ed09471">ObjectRef</a>()=default</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a6a7dd7404edf1c26f8dbd9bd92d03a02">ObjectRef</a>(ObjectPtr< Object > data)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">explicit</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aa1bd13a7185cb4b2b6bdde49416e8aa4">operator!=</a>(const ObjectRef &other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a3deeeac5827a88f375b8c6ae1039c219">operator-></a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a4744bf4a1b48f202d41b51dc5e08e6ee">operator<</a>(const ObjectRef &other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#affdf1b8cdb36e140de7b3ad7064e4617">operator==</a>(const ObjectRef &other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#abe71c5097f4221c45091b86e5ecec259">PyMutator</a>(PyMutatorNode::FInitializeWithTuneContext f_initialize_with_tune_context, PyMutatorNode::FApply f_apply, PyMutatorNode::FAsString f_as_string)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#ae31a5b9f40781d60a2901994ead700e8">same_as</a>(const ObjectRef &other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a7c1529cf73f979a4c4fa12f8fcc3588c">TVM_DEFINE_MUTABLE_OBJECT_REF_METHODS</a>(Mutator, ObjectRef, MutatorNode)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">tvm::meta_schedule::Mutator</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a4e7cdb1574b93a59e784d70aa47b8da7">unique</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a0ae0da21d247cd87ea94fe3777c4405e">use_count</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
</table></div><!-- contents -->
<!-- start footer part -->
<hr class="footer"/><address class="footer"><small>
diff --git a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator.html b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator.html
index bb1b16d72..14ed96d8a 100644
--- a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator.html
+++ b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator.html
@@ -78,13 +78,13 @@ $(function() {
<div class="dynheader">
Inheritance diagram for tvm::meta_schedule::Mutator:</div>
<div class="dyncontent">
-<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1meta__schedule_1_1Mutator__inherit__graph.svg" width="218" height="551"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
+<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1meta__schedule_1_1Mutator__inherit__graph.svg" width="218" height="566"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
</div>
</div>
<div class="dynheader">
Collaboration diagram for tvm::meta_schedule::Mutator:</div>
<div class="dyncontent">
-<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1meta__schedule_1_1Mutator__coll__graph.svg" width="218" height="839"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
+<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1meta__schedule_1_1Mutator__coll__graph.svg" width="218" height="854"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
</div>
</div>
<table class="memberdecls">
@@ -140,6 +140,9 @@ Static Public Member Functions</h2></td></tr>
<tr class="memitem:a2f706028c59f1c2d5a87ae58785b79c9"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">Mutator</a> </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a2f706028c59f1c2d5a87ae58785b79c9">MutateComputeLocation</a> ()</td></tr>
<tr class="memdesc:a2f706028c59f1c2d5a87ae58785b79c9"><td class="mdescLeft"> </td><td class="mdescRight">Create a <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html" title="Managed reference to MutatorNode. ">Mutator</a> that mutates the outcome of SampleComputeLocation. <a href="#a2f706028c59f1c2d5a87ae58785b79c9">More...</a><br /></td></tr>
<tr class="separator:a2f706028c59f1c2d5a87ae58785b79c9"><td class="memSeparator" colspan="2"> </td></tr>
+<tr class="memitem:a008b237e2c944cc25c123ef412dcd397"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">Mutator</a> </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a008b237e2c944cc25c123ef412dcd397">MutateThreadBinding</a> ()</td></tr>
+<tr class="memdesc:a008b237e2c944cc25c123ef412dcd397"><td class="mdescLeft"> </td><td class="mdescRight">Create a <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html" title="Managed reference to MutatorNode. ">Mutator</a> that mutates auto thread binding. <a href="#a008b237e2c944cc25c123ef412dcd397">More...</a><br /></td></tr>
+<tr class="separator:a008b237e2c944cc25c123ef412dcd397"><td class="memSeparator" colspan="2"> </td></tr>
<tr class="memitem:abe71c5097f4221c45091b86e5ecec259"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">Mutator</a> </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#abe71c5097f4221c45091b86e5ecec259">PyMutator</a> (<a class="el" href="classtvm_1_1meta__schedule_1_1PyMutatorNode.html#ad41ea15415e2d753941bd1441cff4384">PyMutatorNode::FInitializeWithTuneCo [...]
<tr class="memdesc:abe71c5097f4221c45091b86e5ecec259"><td class="mdescLeft"> </td><td class="mdescRight">Create a mutator with customized methods on the python-side. <a href="#abe71c5097f4221c45091b86e5ecec259">More...</a><br /></td></tr>
<tr class="separator:abe71c5097f4221c45091b86e5ecec259"><td class="memSeparator" colspan="2"> </td></tr>
@@ -238,6 +241,34 @@ Additional Inherited Members</h2></td></tr>
</dl>
<dl class="section return"><dt>Returns</dt><dd>The created mutator. </dd></dl>
+</div>
+</div>
+<a id="a008b237e2c944cc25c123ef412dcd397"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#a008b237e2c944cc25c123ef412dcd397">◆ </a></span>MutateThreadBinding()</h2>
+
+<div class="memitem">
+<div class="memproto">
+<table class="mlabels">
+ <tr>
+ <td class="mlabels-left">
+ <table class="memname">
+ <tr>
+ <td class="memname">static <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html">Mutator</a> tvm::meta_schedule::Mutator::MutateThreadBinding </td>
+ <td>(</td>
+ <td class="paramname"></td><td>)</td>
+ <td></td>
+ </tr>
+ </table>
+ </td>
+ <td class="mlabels-right">
+<span class="mlabels"><span class="mlabel">static</span></span> </td>
+ </tr>
+</table>
+</div><div class="memdoc">
+
+<p>Create a <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html" title="Managed reference to MutatorNode. ">Mutator</a> that mutates auto thread binding. </p>
+<dl class="section return"><dt>Returns</dt><dd>The mutator created </dd></dl>
+
</div>
</div>
<a id="a84ed21cbc627ff6dd49f983a05113696"></a>
@@ -291,6 +322,7 @@ Additional Inherited Members</h2></td></tr>
</div><div class="memdoc">
<p>Create a <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html" title="Managed reference to MutatorNode. ">Mutator</a> that mutates auto unroll step. </p>
+<dl class="section return"><dt>Returns</dt><dd>The mutator created </dd></dl>
</div>
</div>
diff --git a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator__coll__graph.svg b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator__coll__graph.svg
index 795c6f4e0..2592d5ee6 100644
--- a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator__coll__graph.svg
+++ b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator__coll__graph.svg
@@ -4,92 +4,93 @@
<!-- Generated by graphviz version 2.40.1 (20161225.0304)
-->
<!-- Title: tvm::meta_schedule::Mutator Pages: 1 -->
-<svg width="163pt" height="629pt"
- viewBox="0.00 0.00 163.00 629.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
-<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 625)">
+<svg width="163pt" height="640pt"
+ viewBox="0.00 0.00 163.00 640.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 636)">
<title>tvm::meta_schedule::Mutator</title>
-<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-625 159,-625 159,4 -4,4"/>
+<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-636 159,-636 159,4 -4,4"/>
<!-- Node2 -->
<g id="node1" class="node">
<title>Node2</title>
-<polygon fill="#bfbfbf" stroke="#000000" points="0,-.5 0,-134.5 155,-134.5 155,-.5 0,-.5"/>
-<text text-anchor="start" x="8" y="-122.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::meta_schedule</text>
-<text text-anchor="middle" x="77.5" y="-111.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">::Mutator</text>
-<polyline fill="none" stroke="#000000" points="0,-104.5 155,-104.5 "/>
-<text text-anchor="middle" x="77.5" y="-92.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="0,-85.5 155,-85.5 "/>
-<text text-anchor="start" x="8" y="-73.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_MUTABLE</text>
-<text text-anchor="start" x="8" y="-62.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_OBJECT_REF_METHODS()</text>
-<text text-anchor="start" x="8" y="-51.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateTileSize()</text>
-<text text-anchor="start" x="8" y="-40.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateParallel()</text>
-<text text-anchor="start" x="8" y="-29.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateUnroll()</text>
-<text text-anchor="start" x="8" y="-18.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateComputeLocation()</text>
+<polygon fill="#bfbfbf" stroke="#000000" points="0,-.5 0,-145.5 155,-145.5 155,-.5 0,-.5"/>
+<text text-anchor="start" x="8" y="-133.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::meta_schedule</text>
+<text text-anchor="middle" x="77.5" y="-122.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">::Mutator</text>
+<polyline fill="none" stroke="#000000" points="0,-115.5 155,-115.5 "/>
+<text text-anchor="middle" x="77.5" y="-103.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="0,-96.5 155,-96.5 "/>
+<text text-anchor="start" x="8" y="-84.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_MUTABLE</text>
+<text text-anchor="start" x="8" y="-73.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_OBJECT_REF_METHODS()</text>
+<text text-anchor="start" x="8" y="-62.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateTileSize()</text>
+<text text-anchor="start" x="8" y="-51.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateParallel()</text>
+<text text-anchor="start" x="8" y="-40.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateUnroll()</text>
+<text text-anchor="start" x="8" y="-29.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateComputeLocation()</text>
+<text text-anchor="start" x="8" y="-18.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateThreadBinding()</text>
<text text-anchor="start" x="8" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ PyMutator()</text>
</g>
<!-- Node3 -->
<g id="node2" class="node">
<title>Node3</title>
<g id="a_node2"><a xlink:href="classtvm_1_1runtime_1_1ObjectRef.html" target="_top" xlink:title="Base class of all object reference. ">
-<polygon fill="#ffffff" stroke="#000000" points="10.5,-172.5 10.5,-394.5 144.5,-394.5 144.5,-172.5 10.5,-172.5"/>
-<text text-anchor="middle" x="77.5" y="-382.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectRef</text>
-<polyline fill="none" stroke="#000000" points="10.5,-375.5 144.5,-375.5 "/>
-<text text-anchor="start" x="18.5" y="-363.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
-<polyline fill="none" stroke="#000000" points="10.5,-356.5 144.5,-356.5 "/>
+<polygon fill="#ffffff" stroke="#000000" points="10.5,-183.5 10.5,-405.5 144.5,-405.5 144.5,-183.5 10.5,-183.5"/>
+<text text-anchor="middle" x="77.5" y="-393.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectRef</text>
+<polyline fill="none" stroke="#000000" points="10.5,-386.5 144.5,-386.5 "/>
+<text text-anchor="start" x="18.5" y="-374.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
+<polyline fill="none" stroke="#000000" points="10.5,-367.5 144.5,-367.5 "/>
+<text text-anchor="start" x="18.5" y="-355.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
<text text-anchor="start" x="18.5" y="-344.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
-<text text-anchor="start" x="18.5" y="-333.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
-<text text-anchor="start" x="18.5" y="-322.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ same_as()</text>
-<text text-anchor="start" x="18.5" y="-311.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator==()</text>
-<text text-anchor="start" x="18.5" y="-300.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator!=()</text>
-<text text-anchor="start" x="18.5" y="-289.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator<()</text>
-<text text-anchor="start" x="18.5" y="-278.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ defined()</text>
-<text text-anchor="start" x="18.5" y="-267.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
-<text text-anchor="start" x="18.5" y="-256.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
-<text text-anchor="start" x="18.5" y="-245.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
-<text text-anchor="start" x="18.5" y="-234.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ use_count()</text>
-<text text-anchor="start" x="18.5" y="-223.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ as()</text>
-<text text-anchor="start" x="18.5" y="-212.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># get_mutable()</text>
-<text text-anchor="start" x="18.5" y="-201.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DowncastNoCheck()</text>
-<text text-anchor="start" x="18.5" y="-190.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># FFIClearAfterMove()</text>
-<text text-anchor="start" x="18.5" y="-179.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetDataPtr()</text>
+<text text-anchor="start" x="18.5" y="-333.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ same_as()</text>
+<text text-anchor="start" x="18.5" y="-322.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator==()</text>
+<text text-anchor="start" x="18.5" y="-311.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator!=()</text>
+<text text-anchor="start" x="18.5" y="-300.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator<()</text>
+<text text-anchor="start" x="18.5" y="-289.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ defined()</text>
+<text text-anchor="start" x="18.5" y="-278.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
+<text text-anchor="start" x="18.5" y="-267.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
+<text text-anchor="start" x="18.5" y="-256.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
+<text text-anchor="start" x="18.5" y="-245.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ use_count()</text>
+<text text-anchor="start" x="18.5" y="-234.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ as()</text>
+<text text-anchor="start" x="18.5" y="-223.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># get_mutable()</text>
+<text text-anchor="start" x="18.5" y="-212.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DowncastNoCheck()</text>
+<text text-anchor="start" x="18.5" y="-201.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># FFIClearAfterMove()</text>
+<text text-anchor="start" x="18.5" y="-190.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetDataPtr()</text>
</a>
</g>
</g>
<!-- Node3->Node2 -->
<g id="edge1" class="edge">
<title>Node3->Node2</title>
-<path fill="none" stroke="#191970" d="M77.5,-162.1483C77.5,-152.7959 77.5,-143.5791 77.5,-134.7844"/>
-<polygon fill="none" stroke="#191970" points="74.0001,-162.3363 77.5,-172.3363 81.0001,-162.3364 74.0001,-162.3363"/>
+<path fill="none" stroke="#191970" d="M77.5,-173.2037C77.5,-163.8086 77.5,-154.5157 77.5,-145.5987"/>
+<polygon fill="none" stroke="#191970" points="74.0001,-173.4255 77.5,-183.4255 81.0001,-173.4256 74.0001,-173.4255"/>
</g>
<!-- Node4 -->
<g id="node3" class="node">
<title>Node4</title>
<g id="a_node3"><a xlink:href="classtvm_1_1runtime_1_1ObjectPtr.html" target="_top" xlink:title="{tvm::runtime::ObjectPtr\l\< tvm::runtime::Object \>\n||+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ~ObjectPtr()\l+ swap()\l+ get()\l+ operator-\>()\land 11 more...\l}">
-<polygon fill="#ffffff" stroke="#000000" points="7.5,-442.5 7.5,-620.5 147.5,-620.5 147.5,-442.5 7.5,-442.5"/>
-<text text-anchor="start" x="15.5" y="-608.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectPtr</text>
-<text text-anchor="middle" x="77.5" y="-597.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::runtime::Object ></text>
-<polyline fill="none" stroke="#000000" points="7.5,-590.5 147.5,-590.5 "/>
-<text text-anchor="middle" x="77.5" y="-578.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="7.5,-571.5 147.5,-571.5 "/>
+<polygon fill="#ffffff" stroke="#000000" points="7.5,-453.5 7.5,-631.5 147.5,-631.5 147.5,-453.5 7.5,-453.5"/>
+<text text-anchor="start" x="15.5" y="-619.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectPtr</text>
+<text text-anchor="middle" x="77.5" y="-608.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::runtime::Object ></text>
+<polyline fill="none" stroke="#000000" points="7.5,-601.5 147.5,-601.5 "/>
+<text text-anchor="middle" x="77.5" y="-589.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="7.5,-582.5 147.5,-582.5 "/>
+<text text-anchor="start" x="15.5" y="-570.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
<text text-anchor="start" x="15.5" y="-559.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
<text text-anchor="start" x="15.5" y="-548.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
<text text-anchor="start" x="15.5" y="-537.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
<text text-anchor="start" x="15.5" y="-526.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
<text text-anchor="start" x="15.5" y="-515.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
-<text text-anchor="start" x="15.5" y="-504.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
-<text text-anchor="start" x="15.5" y="-493.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ~ObjectPtr()</text>
-<text text-anchor="start" x="15.5" y="-482.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ swap()</text>
-<text text-anchor="start" x="15.5" y="-471.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
-<text text-anchor="start" x="15.5" y="-460.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
-<text text-anchor="start" x="15.5" y="-449.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 11 more...</text>
+<text text-anchor="start" x="15.5" y="-504.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ~ObjectPtr()</text>
+<text text-anchor="start" x="15.5" y="-493.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ swap()</text>
+<text text-anchor="start" x="15.5" y="-482.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
+<text text-anchor="start" x="15.5" y="-471.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
+<text text-anchor="start" x="15.5" y="-460.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 11 more...</text>
</a>
</g>
</g>
<!-- Node4->Node3 -->
<g id="edge2" class="edge">
<title>Node4->Node3</title>
-<path fill="none" stroke="#404040" d="M77.5,-442.3167C77.5,-430.8765 77.5,-419.0062 77.5,-407.1402"/>
-<polygon fill="none" stroke="#404040" points="77.5001,-406.7944 73.5,-400.7944 77.5,-394.7944 81.5,-400.7943 77.5001,-406.7944"/>
-<text text-anchor="middle" x="97" y="-416" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> #data_</text>
+<path fill="none" stroke="#404040" d="M77.5,-453.3167C77.5,-441.8765 77.5,-430.0062 77.5,-418.1402"/>
+<polygon fill="none" stroke="#404040" points="77.5001,-417.7944 73.5,-411.7944 77.5,-405.7944 81.5,-411.7943 77.5001,-417.7944"/>
+<text text-anchor="middle" x="97" y="-427" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> #data_</text>
</g>
</g>
</svg>
diff --git a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator__inherit__graph.svg b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator__inherit__graph.svg
index 966408508..1ebb06ae6 100644
--- a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator__inherit__graph.svg
+++ b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Mutator__inherit__graph.svg
@@ -4,62 +4,63 @@
<!-- Generated by graphviz version 2.40.1 (20161225.0304)
-->
<!-- Title: tvm::meta_schedule::Mutator Pages: 1 -->
-<svg width="163pt" height="413pt"
- viewBox="0.00 0.00 163.00 413.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
-<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 409)">
+<svg width="163pt" height="424pt"
+ viewBox="0.00 0.00 163.00 424.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 420)">
<title>tvm::meta_schedule::Mutator</title>
-<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-409 159,-409 159,4 -4,4"/>
+<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-420 159,-420 159,4 -4,4"/>
<!-- Node0 -->
<g id="node1" class="node">
<title>Node0</title>
-<polygon fill="#bfbfbf" stroke="#000000" points="0,-.5 0,-134.5 155,-134.5 155,-.5 0,-.5"/>
-<text text-anchor="start" x="8" y="-122.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::meta_schedule</text>
-<text text-anchor="middle" x="77.5" y="-111.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">::Mutator</text>
-<polyline fill="none" stroke="#000000" points="0,-104.5 155,-104.5 "/>
-<text text-anchor="middle" x="77.5" y="-92.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="0,-85.5 155,-85.5 "/>
-<text text-anchor="start" x="8" y="-73.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_MUTABLE</text>
-<text text-anchor="start" x="8" y="-62.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_OBJECT_REF_METHODS()</text>
-<text text-anchor="start" x="8" y="-51.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateTileSize()</text>
-<text text-anchor="start" x="8" y="-40.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateParallel()</text>
-<text text-anchor="start" x="8" y="-29.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateUnroll()</text>
-<text text-anchor="start" x="8" y="-18.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateComputeLocation()</text>
+<polygon fill="#bfbfbf" stroke="#000000" points="0,-.5 0,-145.5 155,-145.5 155,-.5 0,-.5"/>
+<text text-anchor="start" x="8" y="-133.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::meta_schedule</text>
+<text text-anchor="middle" x="77.5" y="-122.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">::Mutator</text>
+<polyline fill="none" stroke="#000000" points="0,-115.5 155,-115.5 "/>
+<text text-anchor="middle" x="77.5" y="-103.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="0,-96.5 155,-96.5 "/>
+<text text-anchor="start" x="8" y="-84.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_MUTABLE</text>
+<text text-anchor="start" x="8" y="-73.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_OBJECT_REF_METHODS()</text>
+<text text-anchor="start" x="8" y="-62.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateTileSize()</text>
+<text text-anchor="start" x="8" y="-51.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateParallel()</text>
+<text text-anchor="start" x="8" y="-40.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateUnroll()</text>
+<text text-anchor="start" x="8" y="-29.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateComputeLocation()</text>
+<text text-anchor="start" x="8" y="-18.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MutateThreadBinding()</text>
<text text-anchor="start" x="8" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ PyMutator()</text>
</g>
<!-- Node1 -->
<g id="node2" class="node">
<title>Node1</title>
<g id="a_node2"><a xlink:href="classtvm_1_1runtime_1_1ObjectRef.html" target="_top" xlink:title="Base class of all object reference. ">
-<polygon fill="#ffffff" stroke="#000000" points="10.5,-171.5 10.5,-404.5 144.5,-404.5 144.5,-171.5 10.5,-171.5"/>
-<text text-anchor="middle" x="77.5" y="-392.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectRef</text>
-<polyline fill="none" stroke="#000000" points="10.5,-385.5 144.5,-385.5 "/>
-<text text-anchor="start" x="18.5" y="-373.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
-<text text-anchor="start" x="18.5" y="-362.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># data_</text>
-<polyline fill="none" stroke="#000000" points="10.5,-355.5 144.5,-355.5 "/>
+<polygon fill="#ffffff" stroke="#000000" points="10.5,-182.5 10.5,-415.5 144.5,-415.5 144.5,-182.5 10.5,-182.5"/>
+<text text-anchor="middle" x="77.5" y="-403.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectRef</text>
+<polyline fill="none" stroke="#000000" points="10.5,-396.5 144.5,-396.5 "/>
+<text text-anchor="start" x="18.5" y="-384.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
+<text text-anchor="start" x="18.5" y="-373.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># data_</text>
+<polyline fill="none" stroke="#000000" points="10.5,-366.5 144.5,-366.5 "/>
+<text text-anchor="start" x="18.5" y="-354.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
<text text-anchor="start" x="18.5" y="-343.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
-<text text-anchor="start" x="18.5" y="-332.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
-<text text-anchor="start" x="18.5" y="-321.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ same_as()</text>
-<text text-anchor="start" x="18.5" y="-310.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator==()</text>
-<text text-anchor="start" x="18.5" y="-299.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator!=()</text>
-<text text-anchor="start" x="18.5" y="-288.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator<()</text>
-<text text-anchor="start" x="18.5" y="-277.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ defined()</text>
-<text text-anchor="start" x="18.5" y="-266.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
-<text text-anchor="start" x="18.5" y="-255.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
-<text text-anchor="start" x="18.5" y="-244.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
-<text text-anchor="start" x="18.5" y="-233.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ use_count()</text>
-<text text-anchor="start" x="18.5" y="-222.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ as()</text>
-<text text-anchor="start" x="18.5" y="-211.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># get_mutable()</text>
-<text text-anchor="start" x="18.5" y="-200.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DowncastNoCheck()</text>
-<text text-anchor="start" x="18.5" y="-189.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># FFIClearAfterMove()</text>
-<text text-anchor="start" x="18.5" y="-178.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetDataPtr()</text>
+<text text-anchor="start" x="18.5" y="-332.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ same_as()</text>
+<text text-anchor="start" x="18.5" y="-321.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator==()</text>
+<text text-anchor="start" x="18.5" y="-310.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator!=()</text>
+<text text-anchor="start" x="18.5" y="-299.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator<()</text>
+<text text-anchor="start" x="18.5" y="-288.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ defined()</text>
+<text text-anchor="start" x="18.5" y="-277.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
+<text text-anchor="start" x="18.5" y="-266.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
+<text text-anchor="start" x="18.5" y="-255.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
+<text text-anchor="start" x="18.5" y="-244.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ use_count()</text>
+<text text-anchor="start" x="18.5" y="-233.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ as()</text>
+<text text-anchor="start" x="18.5" y="-222.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># get_mutable()</text>
+<text text-anchor="start" x="18.5" y="-211.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DowncastNoCheck()</text>
+<text text-anchor="start" x="18.5" y="-200.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># FFIClearAfterMove()</text>
+<text text-anchor="start" x="18.5" y="-189.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetDataPtr()</text>
</a>
</g>
</g>
<!-- Node1->Node0 -->
<g id="edge1" class="edge">
<title>Node1->Node0</title>
-<path fill="none" stroke="#191970" d="M77.5,-161.0779C77.5,-152.0753 77.5,-143.2246 77.5,-134.7733"/>
-<polygon fill="none" stroke="#191970" points="74.0001,-161.2933 77.5,-171.2933 81.0001,-161.2934 74.0001,-161.2933"/>
+<path fill="none" stroke="#191970" d="M77.5,-172.1146C77.5,-163.0752 77.5,-154.1562 77.5,-145.5936"/>
+<polygon fill="none" stroke="#191970" points="74.0001,-172.3589 77.5,-182.359 81.0001,-172.359 74.0001,-172.3589"/>
</g>
</g>
</svg>
diff --git a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Postproc-members.html b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Postproc-members.html
index 85f4a9bbb..1b3654339 100644
--- a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Postproc-members.html
+++ b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Postproc-members.html
@@ -91,7 +91,7 @@ $(function() {
<tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#ad9ba0ccb7c8c2340ce64d8b0cb4d141c">RewriteParallelVectorizeUnroll</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
<tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a08348595d8c50afe0167a986e034d616">RewriteReductionBlock</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
<tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a95db036cfced4c2575367a26a41498ff">RewriteTensorize</a>(bool vectorize_init_loop=false)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a190932261c8574b7e85e804938f8ad0d">RewriteUnboundBlock</a>(int max_threadblock)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a1836b2278bc24fdc227c490896d92980">RewriteUnboundBlock</a>(int max_threadblocks)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
<tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#ae31a5b9f40781d60a2901994ead700e8">same_as</a>(const ObjectRef &other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
<tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a3f1d6e8bd5753810d8baa0cfb899581a">TVM_DEFINE_MUTABLE_OBJECT_REF_METHODS</a>(Postproc, ObjectRef, PostprocNode)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></td><td class="entry"></td></tr>
<tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a4e7cdb1574b93a59e784d70aa47b8da7">unique</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
diff --git a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Postproc.html b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Postproc.html
index 6edadc68e..b0e5c663f 100644
--- a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Postproc.html
+++ b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1Postproc.html
@@ -143,9 +143,9 @@ Static Public Member Functions</h2></td></tr>
<tr class="memitem:a08348595d8c50afe0167a986e034d616"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a> </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a08348595d8c50afe0167a986e034d616">RewriteReductionBlock</a> ()</td></tr>
<tr class="memdesc:a08348595d8c50afe0167a986e034d616"><td class="mdescLeft"> </td><td class="mdescRight">Create a postprocessor that rewrites reduction block by moving the init block out. <a href="#a08348595d8c50afe0167a986e034d616">More...</a><br /></td></tr>
<tr class="separator:a08348595d8c50afe0167a986e034d616"><td class="memSeparator" colspan="2"> </td></tr>
-<tr class="memitem:a190932261c8574b7e85e804938f8ad0d"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a> </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a190932261c8574b7e85e804938f8ad0d">RewriteUnboundBlock</a> (int max_threadblock)</td></tr>
-<tr class="memdesc:a190932261c8574b7e85e804938f8ad0d"><td class="mdescLeft"> </td><td class="mdescRight">Create a postprocessor that adds thread binding to unbound blocks. <a href="#a190932261c8574b7e85e804938f8ad0d">More...</a><br /></td></tr>
-<tr class="separator:a190932261c8574b7e85e804938f8ad0d"><td class="memSeparator" colspan="2"> </td></tr>
+<tr class="memitem:a1836b2278bc24fdc227c490896d92980"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a> </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a1836b2278bc24fdc227c490896d92980">RewriteUnboundBlock</a> (int max_threadblocks)</td></tr>
+<tr class="memdesc:a1836b2278bc24fdc227c490896d92980"><td class="mdescLeft"> </td><td class="mdescRight">Create a postprocessor that adds thread binding to unbound blocks. <a href="#a1836b2278bc24fdc227c490896d92980">More...</a><br /></td></tr>
+<tr class="separator:a1836b2278bc24fdc227c490896d92980"><td class="memSeparator" colspan="2"> </td></tr>
<tr class="memitem:a95db036cfced4c2575367a26a41498ff"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a> </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a95db036cfced4c2575367a26a41498ff">RewriteTensorize</a> (bool vectorize_init_loop=false)</td></tr>
<tr class="memdesc:a95db036cfced4c2575367a26a41498ff"><td class="mdescLeft"> </td><td class="mdescRight">Create a postprocessor that applies tensorization to annotated blocks. <a href="#a95db036cfced4c2575367a26a41498ff">More...</a><br /></td></tr>
<tr class="separator:a95db036cfced4c2575367a26a41498ff"><td class="memSeparator" colspan="2"> </td></tr>
@@ -386,8 +386,8 @@ Additional Inherited Members</h2></td></tr>
</div>
</div>
-<a id="a190932261c8574b7e85e804938f8ad0d"></a>
-<h2 class="memtitle"><span class="permalink"><a href="#a190932261c8574b7e85e804938f8ad0d">◆ </a></span>RewriteUnboundBlock()</h2>
+<a id="a1836b2278bc24fdc227c490896d92980"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#a1836b2278bc24fdc227c490896d92980">◆ </a></span>RewriteUnboundBlock()</h2>
<div class="memitem">
<div class="memproto">
@@ -399,7 +399,7 @@ Additional Inherited Members</h2></td></tr>
<td class="memname">static <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html">Postproc</a> tvm::meta_schedule::Postproc::RewriteUnboundBlock </td>
<td>(</td>
<td class="paramtype">int </td>
- <td class="paramname"><em>max_threadblock</em></td><td>)</td>
+ <td class="paramname"><em>max_threadblocks</em></td><td>)</td>
<td></td>
</tr>
</table>
@@ -413,7 +413,7 @@ Additional Inherited Members</h2></td></tr>
<p>Create a postprocessor that adds thread binding to unbound blocks. </p>
<dl class="params"><dt>Parameters</dt><dd>
<table class="params">
- <tr><td class="paramname">max_threadblock</td><td>The max number of threadblocks in the cuda device. </td></tr>
+ <tr><td class="paramname">max_threadblocks</td><td>The max number of threadblocks in the cuda device. </td></tr>
</table>
</dd>
</dl>
diff --git a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule-members.html b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule-members.html
index f1ae10a1e..df8bad566 100644
--- a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule-members.html
+++ b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule-members.html
@@ -72,31 +72,32 @@ $(function() {
<tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a3e9b0901b6e01257b060a45e159cc37e">_type_is_nullable</a></td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
<tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#ac88a36846b8653f9ad41218a44bec110">AddRFactor</a>(int max_jobs_per_core, Optional< Integer > max_innermost_factor)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
<tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a2d76fa1fb628ff276a284e61123589c5">as</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a73a8c07ad4fa26d5c3e28f33c2215f1d">AutoInline</a>(bool into_producer, bool into_consumer, bool inline_const_tensor, bool disallow_if_then_else, bool require_injective, bool require_ordered, Optional< Array< String >> disallow_op)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="entry"><span cl [...]
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aa5c355fbb7d2f7402ee360dba8a52cdd">ContainerType</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a14acfc5ea272e2e53f9ac3e1110e53ea">CrossThreadReduction</a>(Array< Integer > thread_extents)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#ac261cdb80487fb29ac42b28678f8cbef">data_</a></td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">protected</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a17d8d5ad92691f9e18e3e0ae8ef69e4f">defined</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#acd04bb22a6861e9952c344ee8547411f">DowncastNoCheck</a>(ObjectRef ref)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span><span class="mlabel">static</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a22e5bb9d64dbc773bb9263b70882239e">FFIClearAfterMove</a>(ObjectRef *ref)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span><span class="mlabel">static</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aadbc0886ffa80162ff31eefd0431ba09">get</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#ae423057ecf93c18714d17f53cd1d318f">get_mutable</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aed593996e4076632450de8fde776707c">GetDataPtr</a>(const ObjectRef &ref)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span><span class="mlabel">static</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#aaa910aa414fd65947b08badf1ec7e3fa">MultiLevelTiling</a>(String structure, Optional< Array< String >> tile_binds, Optional< Integer > max_innermost_factor, Optional< Array< Integer >> vector_load_lens, Optional< Map< String, ObjectRef >> reuse_read, Optional< Map< String, ObjectRef >> reuse_write)</td><td class="entry"><a class="el" href="classt [...]
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a9e2f027aecba3832b89f0769acd145ef">MultiLevelTilingWithIntrin</a>(String intrin_name, String structure, Optional< Array< String >> tile_binds, Optional< Integer > max_innermost_factor, Optional< Array< Integer >> vector_load_lens, Optional< Map< String, ObjectRef >> reuse_read, Optional< Map< String, ObjectRef >> reuse_write)</td>< [...]
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aa07c1f6d66a438ea950637d13ed09471">ObjectRef</a>()=default</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a6a7dd7404edf1c26f8dbd9bd92d03a02">ObjectRef</a>(ObjectPtr< Object > data)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">explicit</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aa1bd13a7185cb4b2b6bdde49416e8aa4">operator!=</a>(const ObjectRef &other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a3deeeac5827a88f375b8c6ae1039c219">operator-></a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a4744bf4a1b48f202d41b51dc5e08e6ee">operator<</a>(const ObjectRef &other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#affdf1b8cdb36e140de7b3ad7064e4617">operator==</a>(const ObjectRef &other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a0ef9b604081db7a8bf960f3fbfd3a804">ParallelizeVectorizeUnroll</a>(int max_jobs_per_core, int max_vectorize_extent, Array< Integer > unroll_max_steps, bool unroll_explicit)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a85c97c5518ed29d168499ed79f47b8c0">PyScheduleRule</a>(PyScheduleRuleNode::FInitializeWithTuneContext f_initialize_with_tune_context, PyScheduleRuleNode::FApply f_apply, PyScheduleRuleNode::FAsString f_as_string)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="entry"><span class="mlabel">stat [...]
- <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a1bf485537817533eaf711226f687778c">RandomComputeLocation</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#ae31a5b9f40781d60a2901994ead700e8">same_as</a>(const ObjectRef &other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a157a6c0605c6ee1851128dbece136d51">TVM_DEFINE_MUTABLE_OBJECT_REF_METHODS</a>(ScheduleRule, ObjectRef, ScheduleRuleNode)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="entry"></td></tr>
- <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a4e7cdb1574b93a59e784d70aa47b8da7">unique</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
- <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a0ae0da21d247cd87ea94fe3777c4405e">use_count</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a4180c6e940445e79ada08325b2dba7a8">AutoBind</a>(int max_threadblocks, Array< Integer > thread_extents)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a73a8c07ad4fa26d5c3e28f33c2215f1d">AutoInline</a>(bool into_producer, bool into_consumer, bool inline_const_tensor, bool disallow_if_then_else, bool require_injective, bool require_ordered, Optional< Array< String >> disallow_op)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="en [...]
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aa5c355fbb7d2f7402ee360dba8a52cdd">ContainerType</a> typedef</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a14acfc5ea272e2e53f9ac3e1110e53ea">CrossThreadReduction</a>(Array< Integer > thread_extents)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#ac261cdb80487fb29ac42b28678f8cbef">data_</a></td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">protected</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a17d8d5ad92691f9e18e3e0ae8ef69e4f">defined</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#acd04bb22a6861e9952c344ee8547411f">DowncastNoCheck</a>(ObjectRef ref)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span><span class="mlabel">static</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a22e5bb9d64dbc773bb9263b70882239e">FFIClearAfterMove</a>(ObjectRef *ref)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span><span class="mlabel">static</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aadbc0886ffa80162ff31eefd0431ba09">get</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#ae423057ecf93c18714d17f53cd1d318f">get_mutable</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aed593996e4076632450de8fde776707c">GetDataPtr</a>(const ObjectRef &ref)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">protected</span><span class="mlabel">static</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#aaa910aa414fd65947b08badf1ec7e3fa">MultiLevelTiling</a>(String structure, Optional< Array< String >> tile_binds, Optional< Integer > max_innermost_factor, Optional< Array< Integer >> vector_load_lens, Optional< Map< String, ObjectRef >> reuse_read, Optional< Map< String, ObjectRef >> reuse_write)</td><td class="entry"><a class="el" [...]
+ <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a9e2f027aecba3832b89f0769acd145ef">MultiLevelTilingWithIntrin</a>(String intrin_name, String structure, Optional< Array< String >> tile_binds, Optional< Integer > max_innermost_factor, Optional< Array< Integer >> vector_load_lens, Optional< Map< String, ObjectRef >> reuse_read, Optional< Map< String, ObjectRef >> reuse_write)</td><td class="ent [...]
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aa07c1f6d66a438ea950637d13ed09471">ObjectRef</a>()=default</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a6a7dd7404edf1c26f8dbd9bd92d03a02">ObjectRef</a>(ObjectPtr< Object > data)</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span><span class="mlabel">explicit</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#aa1bd13a7185cb4b2b6bdde49416e8aa4">operator!=</a>(const ObjectRef &other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a3deeeac5827a88f375b8c6ae1039c219">operator-></a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a4744bf4a1b48f202d41b51dc5e08e6ee">operator<</a>(const ObjectRef &other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#affdf1b8cdb36e140de7b3ad7064e4617">operator==</a>(const ObjectRef &other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a0ef9b604081db7a8bf960f3fbfd3a804">ParallelizeVectorizeUnroll</a>(int max_jobs_per_core, int max_vectorize_extent, Array< Integer > unroll_max_steps, bool unroll_explicit)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a85c97c5518ed29d168499ed79f47b8c0">PyScheduleRule</a>(PyScheduleRuleNode::FInitializeWithTuneContext f_initialize_with_tune_context, PyScheduleRuleNode::FApply f_apply, PyScheduleRuleNode::FAsString f_as_string)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a1bf485537817533eaf711226f687778c">RandomComputeLocation</a>()</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="entry"><span class="mlabel">static</span></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#ae31a5b9f40781d60a2901994ead700e8">same_as</a>(const ObjectRef &other) const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a157a6c0605c6ee1851128dbece136d51">TVM_DEFINE_MUTABLE_OBJECT_REF_METHODS</a>(ScheduleRule, ObjectRef, ScheduleRuleNode)</td><td class="entry"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">tvm::meta_schedule::ScheduleRule</a></td><td class="entry"></td></tr>
+ <tr><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a4e7cdb1574b93a59e784d70aa47b8da7">unique</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
+ <tr class="even"><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#a0ae0da21d247cd87ea94fe3777c4405e">use_count</a>() const</td><td class="entry"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">tvm::runtime::ObjectRef</a></td><td class="entry"><span class="mlabel">inline</span></td></tr>
</table></div><!-- contents -->
<!-- start footer part -->
<hr class="footer"/><address class="footer"><small>
diff --git a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule.html b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule.html
index 2a684f6c0..563751ab0 100644
--- a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule.html
+++ b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule.html
@@ -78,13 +78,13 @@ $(function() {
<div class="dynheader">
Inheritance diagram for tvm::meta_schedule::ScheduleRule:</div>
<div class="dyncontent">
-<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1meta__schedule_1_1ScheduleRule__inherit__graph.svg" width="226" height="595"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
+<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1meta__schedule_1_1ScheduleRule__inherit__graph.svg" width="226" height="610"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
</div>
</div>
<div class="dynheader">
Collaboration diagram for tvm::meta_schedule::ScheduleRule:</div>
<div class="dyncontent">
-<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1meta__schedule_1_1ScheduleRule__coll__graph.svg" width="226" height="883"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
+<div class="center"><iframe scrolling="no" frameborder="0" src="classtvm_1_1meta__schedule_1_1ScheduleRule__coll__graph.svg" width="226" height="898"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
</div>
</div>
<table class="memberdecls">
@@ -149,6 +149,9 @@ Static Public Member Functions</h2></td></tr>
<tr class="memitem:a0ef9b604081db7a8bf960f3fbfd3a804"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">ScheduleRule</a> </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a0ef9b604081db7a8bf960f3fbfd3a804">ParallelizeVectorizeUnroll</a> (int max_jobs_per_core, int max_vectorize_extent, <a class="el" href="classtvm_1_1runtime_1_1Array.html">Arra [...]
<tr class="memdesc:a0ef9b604081db7a8bf960f3fbfd3a804"><td class="mdescLeft"> </td><td class="mdescRight">Mark parallelize, vectorize and unroll to the root block. The mark will be applied to each block in a follow-up post processor. <a href="#a0ef9b604081db7a8bf960f3fbfd3a804">More...</a><br /></td></tr>
<tr class="separator:a0ef9b604081db7a8bf960f3fbfd3a804"><td class="memSeparator" colspan="2"> </td></tr>
+<tr class="memitem:a4180c6e940445e79ada08325b2dba7a8"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">ScheduleRule</a> </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a4180c6e940445e79ada08325b2dba7a8">AutoBind</a> (int max_threadblocks, <a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>< <a class="el" href="classtvm_1_1Int [...]
+<tr class="memdesc:a4180c6e940445e79ada08325b2dba7a8"><td class="mdescLeft"> </td><td class="mdescRight">Auto bind loops around the block to BlockIdx and ThreadIdx. <a href="#a4180c6e940445e79ada08325b2dba7a8">More...</a><br /></td></tr>
+<tr class="separator:a4180c6e940445e79ada08325b2dba7a8"><td class="memSeparator" colspan="2"> </td></tr>
<tr class="memitem:a85c97c5518ed29d168499ed79f47b8c0"><td class="memItemLeft" align="right" valign="top">static <a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">ScheduleRule</a> </td><td class="memItemRight" valign="bottom"><a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a85c97c5518ed29d168499ed79f47b8c0">PyScheduleRule</a> (<a class="el" href="classtvm_1_1meta__schedule_1_1PyScheduleRuleNode.html#a31b4c49eddba3427c203698ef7be6842">PyScheduleR [...]
<tr class="memdesc:a85c97c5518ed29d168499ed79f47b8c0"><td class="mdescLeft"> </td><td class="mdescRight">Create a schedule rule with customized methods on the python-side. <a href="#a85c97c5518ed29d168499ed79f47b8c0">More...</a><br /></td></tr>
<tr class="separator:a85c97c5518ed29d168499ed79f47b8c0"><td class="memSeparator" colspan="2"> </td></tr>
@@ -230,6 +233,52 @@ Additional Inherited Members</h2></td></tr>
</dl>
<dl class="section return"><dt>Returns</dt><dd>The schedule rule created </dd></dl>
+</div>
+</div>
+<a id="a4180c6e940445e79ada08325b2dba7a8"></a>
+<h2 class="memtitle"><span class="permalink"><a href="#a4180c6e940445e79ada08325b2dba7a8">◆ </a></span>AutoBind()</h2>
+
+<div class="memitem">
+<div class="memproto">
+<table class="mlabels">
+ <tr>
+ <td class="mlabels-left">
+ <table class="memname">
+ <tr>
+ <td class="memname">static <a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html">ScheduleRule</a> tvm::meta_schedule::ScheduleRule::AutoBind </td>
+ <td>(</td>
+ <td class="paramtype">int </td>
+ <td class="paramname"><em>max_threadblocks</em>, </td>
+ </tr>
+ <tr>
+ <td class="paramkey"></td>
+ <td></td>
+ <td class="paramtype"><a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a>< <a class="el" href="classtvm_1_1Integer.html">Integer</a> > </td>
+ <td class="paramname"><em>thread_extents</em> </td>
+ </tr>
+ <tr>
+ <td></td>
+ <td>)</td>
+ <td></td><td></td>
+ </tr>
+ </table>
+ </td>
+ <td class="mlabels-right">
+<span class="mlabels"><span class="mlabel">static</span></span> </td>
+ </tr>
+</table>
+</div><div class="memdoc">
+
+<p>Auto bind loops around the block to BlockIdx and ThreadIdx. </p>
+<dl class="params"><dt>Parameters</dt><dd>
+ <table class="params">
+ <tr><td class="paramname">max_threadblocks</td><td>The maximum number of threadblock on GPU </td></tr>
+ <tr><td class="paramname">thread_extents</td><td>Candidates of thread axis extent. </td></tr>
+ </table>
+ </dd>
+</dl>
+<dl class="section return"><dt>Returns</dt><dd>The schedule rule created </dd></dl>
+
</div>
</div>
<a id="a73a8c07ad4fa26d5c3e28f33c2215f1d"></a>
diff --git a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule__coll__graph.svg b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule__coll__graph.svg
index c27111ff5..8670295ee 100644
--- a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule__coll__graph.svg
+++ b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule__coll__graph.svg
@@ -4,95 +4,96 @@
<!-- Generated by graphviz version 2.40.1 (20161225.0304)
-->
<!-- Title: tvm::meta_schedule::ScheduleRule Pages: 1 -->
-<svg width="169pt" height="662pt"
- viewBox="0.00 0.00 169.00 662.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
-<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 658)">
+<svg width="169pt" height="673pt"
+ viewBox="0.00 0.00 169.00 673.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 669)">
<title>tvm::meta_schedule::ScheduleRule</title>
-<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-658 165,-658 165,4 -4,4"/>
+<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-669 165,-669 165,4 -4,4"/>
<!-- Node2 -->
<g id="node1" class="node">
<title>Node2</title>
-<polygon fill="#bfbfbf" stroke="#000000" points="0,-.5 0,-167.5 161,-167.5 161,-.5 0,-.5"/>
-<text text-anchor="start" x="8" y="-155.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::meta_schedule</text>
-<text text-anchor="middle" x="80.5" y="-144.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">::ScheduleRule</text>
-<polyline fill="none" stroke="#000000" points="0,-137.5 161,-137.5 "/>
-<text text-anchor="middle" x="80.5" y="-125.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="0,-118.5 161,-118.5 "/>
-<text text-anchor="start" x="8" y="-106.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_MUTABLE</text>
-<text text-anchor="start" x="8" y="-95.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_OBJECT_REF_METHODS()</text>
-<text text-anchor="start" x="8" y="-84.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ AutoInline()</text>
-<text text-anchor="start" x="8" y="-73.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MultiLevelTiling()</text>
-<text text-anchor="start" x="8" y="-62.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MultiLevelTilingWithIntrin()</text>
-<text text-anchor="start" x="8" y="-51.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ AddRFactor()</text>
-<text text-anchor="start" x="8" y="-40.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ CrossThreadReduction()</text>
-<text text-anchor="start" x="8" y="-29.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ RandomComputeLocation()</text>
-<text text-anchor="start" x="8" y="-18.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ParallelizeVectorizeUnroll()</text>
+<polygon fill="#bfbfbf" stroke="#000000" points="0,-.5 0,-178.5 161,-178.5 161,-.5 0,-.5"/>
+<text text-anchor="start" x="8" y="-166.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::meta_schedule</text>
+<text text-anchor="middle" x="80.5" y="-155.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">::ScheduleRule</text>
+<polyline fill="none" stroke="#000000" points="0,-148.5 161,-148.5 "/>
+<text text-anchor="middle" x="80.5" y="-136.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="0,-129.5 161,-129.5 "/>
+<text text-anchor="start" x="8" y="-117.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_MUTABLE</text>
+<text text-anchor="start" x="8" y="-106.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_OBJECT_REF_METHODS()</text>
+<text text-anchor="start" x="8" y="-95.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ AutoInline()</text>
+<text text-anchor="start" x="8" y="-84.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MultiLevelTiling()</text>
+<text text-anchor="start" x="8" y="-73.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MultiLevelTilingWithIntrin()</text>
+<text text-anchor="start" x="8" y="-62.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ AddRFactor()</text>
+<text text-anchor="start" x="8" y="-51.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ CrossThreadReduction()</text>
+<text text-anchor="start" x="8" y="-40.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ RandomComputeLocation()</text>
+<text text-anchor="start" x="8" y="-29.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ParallelizeVectorizeUnroll()</text>
+<text text-anchor="start" x="8" y="-18.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ AutoBind()</text>
<text text-anchor="start" x="8" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ PyScheduleRule()</text>
</g>
<!-- Node3 -->
<g id="node2" class="node">
<title>Node3</title>
<g id="a_node2"><a xlink:href="classtvm_1_1runtime_1_1ObjectRef.html" target="_top" xlink:title="Base class of all object reference. ">
-<polygon fill="#ffffff" stroke="#000000" points="13.5,-205.5 13.5,-427.5 147.5,-427.5 147.5,-205.5 13.5,-205.5"/>
-<text text-anchor="middle" x="80.5" y="-415.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectRef</text>
-<polyline fill="none" stroke="#000000" points="13.5,-408.5 147.5,-408.5 "/>
-<text text-anchor="start" x="21.5" y="-396.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
-<polyline fill="none" stroke="#000000" points="13.5,-389.5 147.5,-389.5 "/>
+<polygon fill="#ffffff" stroke="#000000" points="13.5,-216.5 13.5,-438.5 147.5,-438.5 147.5,-216.5 13.5,-216.5"/>
+<text text-anchor="middle" x="80.5" y="-426.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectRef</text>
+<polyline fill="none" stroke="#000000" points="13.5,-419.5 147.5,-419.5 "/>
+<text text-anchor="start" x="21.5" y="-407.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
+<polyline fill="none" stroke="#000000" points="13.5,-400.5 147.5,-400.5 "/>
+<text text-anchor="start" x="21.5" y="-388.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
<text text-anchor="start" x="21.5" y="-377.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
-<text text-anchor="start" x="21.5" y="-366.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
-<text text-anchor="start" x="21.5" y="-355.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ same_as()</text>
-<text text-anchor="start" x="21.5" y="-344.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator==()</text>
-<text text-anchor="start" x="21.5" y="-333.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator!=()</text>
-<text text-anchor="start" x="21.5" y="-322.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator<()</text>
-<text text-anchor="start" x="21.5" y="-311.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ defined()</text>
-<text text-anchor="start" x="21.5" y="-300.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
-<text text-anchor="start" x="21.5" y="-289.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
-<text text-anchor="start" x="21.5" y="-278.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
-<text text-anchor="start" x="21.5" y="-267.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ use_count()</text>
-<text text-anchor="start" x="21.5" y="-256.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ as()</text>
-<text text-anchor="start" x="21.5" y="-245.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># get_mutable()</text>
-<text text-anchor="start" x="21.5" y="-234.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DowncastNoCheck()</text>
-<text text-anchor="start" x="21.5" y="-223.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># FFIClearAfterMove()</text>
-<text text-anchor="start" x="21.5" y="-212.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetDataPtr()</text>
+<text text-anchor="start" x="21.5" y="-366.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ same_as()</text>
+<text text-anchor="start" x="21.5" y="-355.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator==()</text>
+<text text-anchor="start" x="21.5" y="-344.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator!=()</text>
+<text text-anchor="start" x="21.5" y="-333.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator<()</text>
+<text text-anchor="start" x="21.5" y="-322.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ defined()</text>
+<text text-anchor="start" x="21.5" y="-311.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
+<text text-anchor="start" x="21.5" y="-300.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
+<text text-anchor="start" x="21.5" y="-289.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
+<text text-anchor="start" x="21.5" y="-278.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ use_count()</text>
+<text text-anchor="start" x="21.5" y="-267.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ as()</text>
+<text text-anchor="start" x="21.5" y="-256.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># get_mutable()</text>
+<text text-anchor="start" x="21.5" y="-245.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DowncastNoCheck()</text>
+<text text-anchor="start" x="21.5" y="-234.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># FFIClearAfterMove()</text>
+<text text-anchor="start" x="21.5" y="-223.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetDataPtr()</text>
</a>
</g>
</g>
<!-- Node3->Node2 -->
<g id="edge1" class="edge">
<title>Node3->Node2</title>
-<path fill="none" stroke="#191970" d="M80.5,-195.2104C80.5,-185.8646 80.5,-176.5624 80.5,-167.5459"/>
-<polygon fill="none" stroke="#191970" points="77.0001,-195.3571 80.5,-205.3572 84.0001,-195.3572 77.0001,-195.3571"/>
+<path fill="none" stroke="#191970" d="M80.5,-206.1252C80.5,-196.8699 80.5,-187.6337 80.5,-178.6408"/>
+<polygon fill="none" stroke="#191970" points="77.0001,-206.1652 80.5,-216.1652 84.0001,-206.1652 77.0001,-206.1652"/>
</g>
<!-- Node4 -->
<g id="node3" class="node">
<title>Node4</title>
<g id="a_node3"><a xlink:href="classtvm_1_1runtime_1_1ObjectPtr.html" target="_top" xlink:title="{tvm::runtime::ObjectPtr\l\< tvm::runtime::Object \>\n||+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ObjectPtr()\l+ ~ObjectPtr()\l+ swap()\l+ get()\l+ operator-\>()\land 11 more...\l}">
-<polygon fill="#ffffff" stroke="#000000" points="10.5,-475.5 10.5,-653.5 150.5,-653.5 150.5,-475.5 10.5,-475.5"/>
-<text text-anchor="start" x="18.5" y="-641.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectPtr</text>
-<text text-anchor="middle" x="80.5" y="-630.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::runtime::Object ></text>
-<polyline fill="none" stroke="#000000" points="10.5,-623.5 150.5,-623.5 "/>
-<text text-anchor="middle" x="80.5" y="-611.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="10.5,-604.5 150.5,-604.5 "/>
+<polygon fill="#ffffff" stroke="#000000" points="10.5,-486.5 10.5,-664.5 150.5,-664.5 150.5,-486.5 10.5,-486.5"/>
+<text text-anchor="start" x="18.5" y="-652.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectPtr</text>
+<text text-anchor="middle" x="80.5" y="-641.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">< tvm::runtime::Object ></text>
+<polyline fill="none" stroke="#000000" points="10.5,-634.5 150.5,-634.5 "/>
+<text text-anchor="middle" x="80.5" y="-622.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="10.5,-615.5 150.5,-615.5 "/>
+<text text-anchor="start" x="18.5" y="-603.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
<text text-anchor="start" x="18.5" y="-592.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
<text text-anchor="start" x="18.5" y="-581.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
<text text-anchor="start" x="18.5" y="-570.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
<text text-anchor="start" x="18.5" y="-559.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
<text text-anchor="start" x="18.5" y="-548.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
-<text text-anchor="start" x="18.5" y="-537.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectPtr()</text>
-<text text-anchor="start" x="18.5" y="-526.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ~ObjectPtr()</text>
-<text text-anchor="start" x="18.5" y="-515.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ swap()</text>
-<text text-anchor="start" x="18.5" y="-504.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
-<text text-anchor="start" x="18.5" y="-493.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
-<text text-anchor="start" x="18.5" y="-482.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 11 more...</text>
+<text text-anchor="start" x="18.5" y="-537.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ~ObjectPtr()</text>
+<text text-anchor="start" x="18.5" y="-526.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ swap()</text>
+<text text-anchor="start" x="18.5" y="-515.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
+<text text-anchor="start" x="18.5" y="-504.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
+<text text-anchor="start" x="18.5" y="-493.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">and 11 more...</text>
</a>
</g>
</g>
<!-- Node4->Node3 -->
<g id="edge2" class="edge">
<title>Node4->Node3</title>
-<path fill="none" stroke="#404040" d="M80.5,-475.3167C80.5,-463.8765 80.5,-452.0062 80.5,-440.1402"/>
-<polygon fill="none" stroke="#404040" points="80.5001,-439.7944 76.5,-433.7944 80.5,-427.7944 84.5,-433.7943 80.5001,-439.7944"/>
-<text text-anchor="middle" x="100" y="-449" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> #data_</text>
+<path fill="none" stroke="#404040" d="M80.5,-486.3167C80.5,-474.8765 80.5,-463.0062 80.5,-451.1402"/>
+<polygon fill="none" stroke="#404040" points="80.5001,-450.7944 76.5,-444.7944 80.5,-438.7944 84.5,-444.7943 80.5001,-450.7944"/>
+<text text-anchor="middle" x="100" y="-460" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> #data_</text>
</g>
</g>
</svg>
diff --git a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule__inherit__graph.svg b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule__inherit__graph.svg
index 1d32978ee..2027a882f 100644
--- a/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule__inherit__graph.svg
+++ b/docs/reference/api/doxygen/classtvm_1_1meta__schedule_1_1ScheduleRule__inherit__graph.svg
@@ -4,65 +4,66 @@
<!-- Generated by graphviz version 2.40.1 (20161225.0304)
-->
<!-- Title: tvm::meta_schedule::ScheduleRule Pages: 1 -->
-<svg width="169pt" height="446pt"
- viewBox="0.00 0.00 169.00 446.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
-<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 442)">
+<svg width="169pt" height="457pt"
+ viewBox="0.00 0.00 169.00 457.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 453)">
<title>tvm::meta_schedule::ScheduleRule</title>
-<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-442 165,-442 165,4 -4,4"/>
+<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-453 165,-453 165,4 -4,4"/>
<!-- Node0 -->
<g id="node1" class="node">
<title>Node0</title>
-<polygon fill="#bfbfbf" stroke="#000000" points="0,-.5 0,-167.5 161,-167.5 161,-.5 0,-.5"/>
-<text text-anchor="start" x="8" y="-155.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::meta_schedule</text>
-<text text-anchor="middle" x="80.5" y="-144.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">::ScheduleRule</text>
-<polyline fill="none" stroke="#000000" points="0,-137.5 161,-137.5 "/>
-<text text-anchor="middle" x="80.5" y="-125.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
-<polyline fill="none" stroke="#000000" points="0,-118.5 161,-118.5 "/>
-<text text-anchor="start" x="8" y="-106.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_MUTABLE</text>
-<text text-anchor="start" x="8" y="-95.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_OBJECT_REF_METHODS()</text>
-<text text-anchor="start" x="8" y="-84.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ AutoInline()</text>
-<text text-anchor="start" x="8" y="-73.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MultiLevelTiling()</text>
-<text text-anchor="start" x="8" y="-62.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MultiLevelTilingWithIntrin()</text>
-<text text-anchor="start" x="8" y="-51.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ AddRFactor()</text>
-<text text-anchor="start" x="8" y="-40.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ CrossThreadReduction()</text>
-<text text-anchor="start" x="8" y="-29.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ RandomComputeLocation()</text>
-<text text-anchor="start" x="8" y="-18.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ParallelizeVectorizeUnroll()</text>
+<polygon fill="#bfbfbf" stroke="#000000" points="0,-.5 0,-178.5 161,-178.5 161,-.5 0,-.5"/>
+<text text-anchor="start" x="8" y="-166.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::meta_schedule</text>
+<text text-anchor="middle" x="80.5" y="-155.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">::ScheduleRule</text>
+<polyline fill="none" stroke="#000000" points="0,-148.5 161,-148.5 "/>
+<text text-anchor="middle" x="80.5" y="-136.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"> </text>
+<polyline fill="none" stroke="#000000" points="0,-129.5 161,-129.5 "/>
+<text text-anchor="start" x="8" y="-117.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ TVM_DEFINE_MUTABLE</text>
+<text text-anchor="start" x="8" y="-106.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">_OBJECT_REF_METHODS()</text>
+<text text-anchor="start" x="8" y="-95.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ AutoInline()</text>
+<text text-anchor="start" x="8" y="-84.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MultiLevelTiling()</text>
+<text text-anchor="start" x="8" y="-73.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ MultiLevelTilingWithIntrin()</text>
+<text text-anchor="start" x="8" y="-62.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ AddRFactor()</text>
+<text text-anchor="start" x="8" y="-51.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ CrossThreadReduction()</text>
+<text text-anchor="start" x="8" y="-40.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ RandomComputeLocation()</text>
+<text text-anchor="start" x="8" y="-29.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ParallelizeVectorizeUnroll()</text>
+<text text-anchor="start" x="8" y="-18.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ AutoBind()</text>
<text text-anchor="start" x="8" y="-7.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ PyScheduleRule()</text>
</g>
<!-- Node1 -->
<g id="node2" class="node">
<title>Node1</title>
<g id="a_node2"><a xlink:href="classtvm_1_1runtime_1_1ObjectRef.html" target="_top" xlink:title="Base class of all object reference. ">
-<polygon fill="#ffffff" stroke="#000000" points="13.5,-204.5 13.5,-437.5 147.5,-437.5 147.5,-204.5 13.5,-204.5"/>
-<text text-anchor="middle" x="80.5" y="-425.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectRef</text>
-<polyline fill="none" stroke="#000000" points="13.5,-418.5 147.5,-418.5 "/>
-<text text-anchor="start" x="21.5" y="-406.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
-<text text-anchor="start" x="21.5" y="-395.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># data_</text>
-<polyline fill="none" stroke="#000000" points="13.5,-388.5 147.5,-388.5 "/>
+<polygon fill="#ffffff" stroke="#000000" points="13.5,-215.5 13.5,-448.5 147.5,-448.5 147.5,-215.5 13.5,-215.5"/>
+<text text-anchor="middle" x="80.5" y="-436.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">tvm::runtime::ObjectRef</text>
+<polyline fill="none" stroke="#000000" points="13.5,-429.5 147.5,-429.5 "/>
+<text text-anchor="start" x="21.5" y="-417.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ _type_is_nullable</text>
+<text text-anchor="start" x="21.5" y="-406.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># data_</text>
+<polyline fill="none" stroke="#000000" points="13.5,-399.5 147.5,-399.5 "/>
+<text text-anchor="start" x="21.5" y="-387.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
<text text-anchor="start" x="21.5" y="-376.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
-<text text-anchor="start" x="21.5" y="-365.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ ObjectRef()</text>
-<text text-anchor="start" x="21.5" y="-354.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ same_as()</text>
-<text text-anchor="start" x="21.5" y="-343.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator==()</text>
-<text text-anchor="start" x="21.5" y="-332.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator!=()</text>
-<text text-anchor="start" x="21.5" y="-321.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator<()</text>
-<text text-anchor="start" x="21.5" y="-310.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ defined()</text>
-<text text-anchor="start" x="21.5" y="-299.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
-<text text-anchor="start" x="21.5" y="-288.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
-<text text-anchor="start" x="21.5" y="-277.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
-<text text-anchor="start" x="21.5" y="-266.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ use_count()</text>
-<text text-anchor="start" x="21.5" y="-255.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ as()</text>
-<text text-anchor="start" x="21.5" y="-244.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># get_mutable()</text>
-<text text-anchor="start" x="21.5" y="-233.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DowncastNoCheck()</text>
-<text text-anchor="start" x="21.5" y="-222.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># FFIClearAfterMove()</text>
-<text text-anchor="start" x="21.5" y="-211.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetDataPtr()</text>
+<text text-anchor="start" x="21.5" y="-365.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ same_as()</text>
+<text text-anchor="start" x="21.5" y="-354.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator==()</text>
+<text text-anchor="start" x="21.5" y="-343.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator!=()</text>
+<text text-anchor="start" x="21.5" y="-332.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator<()</text>
+<text text-anchor="start" x="21.5" y="-321.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ defined()</text>
+<text text-anchor="start" x="21.5" y="-310.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ get()</text>
+<text text-anchor="start" x="21.5" y="-299.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ operator->()</text>
+<text text-anchor="start" x="21.5" y="-288.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ unique()</text>
+<text text-anchor="start" x="21.5" y="-277.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ use_count()</text>
+<text text-anchor="start" x="21.5" y="-266.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000">+ as()</text>
+<text text-anchor="start" x="21.5" y="-255.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># get_mutable()</text>
+<text text-anchor="start" x="21.5" y="-244.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># DowncastNoCheck()</text>
+<text text-anchor="start" x="21.5" y="-233.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># FFIClearAfterMove()</text>
+<text text-anchor="start" x="21.5" y="-222.5" font-family="Helvetica,sans-Serif" font-size="10.00" fill="#000000"># GetDataPtr()</text>
</a>
</g>
</g>
<!-- Node1->Node0 -->
<g id="edge1" class="edge">
<title>Node1->Node0</title>
-<path fill="none" stroke="#191970" d="M80.5,-194.0814C80.5,-185.1022 80.5,-176.1877 80.5,-167.5456"/>
-<polygon fill="none" stroke="#191970" points="77.0001,-194.2358 80.5,-204.2358 84.0001,-194.2358 77.0001,-194.2358"/>
+<path fill="none" stroke="#191970" d="M80.5,-205.2106C80.5,-196.221 80.5,-187.2721 80.5,-178.5621"/>
+<polygon fill="none" stroke="#191970" points="77.0001,-205.3669 80.5,-215.3669 84.0001,-205.367 77.0001,-205.3669"/>
</g>
</g>
</svg>
diff --git a/docs/reference/api/doxygen/functions_a.html b/docs/reference/api/doxygen/functions_a.html
index 95927e9b5..25fd46041 100644
--- a/docs/reference/api/doxygen/functions_a.html
+++ b/docs/reference/api/doxygen/functions_a.html
@@ -448,7 +448,7 @@ $(function() {
: <a class="el" href="structtvm_1_1AttrError.html#a3285db0171872bc2fdde8243f6e801d9">tvm::AttrError</a>
</li>
<li>AttrInitEntry()
-: <a class="el" href="structtvm_1_1detail_1_1AttrInitEntry.html#af07c4a3a8f4663ac03ae238ab7b9d791">tvm::detail::AttrInitEntry< T ></a>
+: <a class="el" href="structtvm_1_1detail_1_1AttrInitEntry.html#ad68ac350b0d49e97caab8443cc8fb08b">tvm::detail::AttrInitEntry< T ></a>
</li>
<li>AttrInitVisitor()
: <a class="el" href="classtvm_1_1detail_1_1AttrInitVisitor.html#ac3c800c9249fee195db2a5fa473fe960">tvm::detail::AttrInitVisitor< FFind ></a>
@@ -523,6 +523,9 @@ $(function() {
<li>auto_unroll_max_step
: <a class="el" href="structtvm_1_1auto__scheduler_1_1StageAttributes.html#a7bd83956ace4ae7f5112b85a2416adf7">tvm::auto_scheduler::StageAttributes</a>
</li>
+<li>AutoBind()
+: <a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a4180c6e940445e79ada08325b2dba7a8">tvm::meta_schedule::ScheduleRule</a>
+</li>
<li>AutoInline()
: <a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a73a8c07ad4fa26d5c3e28f33c2215f1d">tvm::meta_schedule::ScheduleRule</a>
</li>
diff --git a/docs/reference/api/doxygen/functions_func_a.html b/docs/reference/api/doxygen/functions_func_a.html
index 11dff64c1..85542703b 100644
--- a/docs/reference/api/doxygen/functions_func_a.html
+++ b/docs/reference/api/doxygen/functions_func_a.html
@@ -236,7 +236,7 @@ $(function() {
: <a class="el" href="classtvm_1_1meta__schedule_1_1ArgInfo.html#af2be498a4c11d470e2cd1f1474cc4839">tvm::meta_schedule::ArgInfo</a>
</li>
<li>Array()
-: <a class="el" href="classtvm_1_1runtime_1_1Array.html#a87edbe5dfbdd7deda742bd187ebee234">tvm::runtime::Array< T, typename ></a>
+: <a class="el" href="classtvm_1_1runtime_1_1Array.html#af8b7450aea8633a51d3e8c75ed9fe2be">tvm::runtime::Array< T, typename ></a>
</li>
<li>ArrayAccessor()
: <a class="el" href="classtvm_1_1runtime_1_1metadata_1_1ArrayAccessor.html#aa66d3f34e83f90c133bc1df9b0c3acd2">tvm::runtime::metadata::ArrayAccessor< C, Ref ></a>
@@ -279,7 +279,7 @@ $(function() {
, <a class="el" href="classtvm_1_1TypeReporterNode.html#aa974c8cddd300c1345f91f91da837087">tvm::TypeReporterNode</a>
</li>
<li>AssignTypedLambda()
-: <a class="el" href="classtvm_1_1runtime_1_1TypedPackedFunc_3_01R_07Args_8_8_8_08_4.html#a617bb54ee3fbe9704131229efd0d903c">tvm::runtime::TypedPackedFunc< R(Args...)></a>
+: <a class="el" href="classtvm_1_1runtime_1_1TypedPackedFunc_3_01R_07Args_8_8_8_08_4.html#a11985d7fdcf9ac60ff34faef89382284">tvm::runtime::TypedPackedFunc< R(Args...)></a>
</li>
<li>AsTable()
: <a class="el" href="classtvm_1_1runtime_1_1profiling_1_1ReportNode.html#a0e4562c552d973853fb01d7bc501d591">tvm::runtime::profiling::ReportNode</a>
@@ -288,7 +288,7 @@ $(function() {
: <a class="el" href="classtvm_1_1runtime_1_1ArrayNode.html#a410953f5ac23f6ca5caa257def8b08bd">tvm::runtime::ArrayNode</a>
, <a class="el" href="classtvm_1_1runtime_1_1DenseMapNode.html#a6071908cdeb00617d3b28a70d05ac649">tvm::runtime::DenseMapNode</a>
, <a class="el" href="classtvm_1_1runtime_1_1Map.html#a7fbfe0e01b0fa54e151bd481956dcfec">tvm::runtime::Map< K, V, typename, typename ></a>
-, <a class="el" href="classtvm_1_1runtime_1_1MapNode.html#a49edd4ddc34a4e0b097c34560b9b3b4e">tvm::runtime::MapNode</a>
+, <a class="el" href="classtvm_1_1runtime_1_1MapNode.html#a29503ec61af7a5bb9b030a00cfdff01a">tvm::runtime::MapNode</a>
, <a class="el" href="classtvm_1_1runtime_1_1ShapeTuple.html#a07d50937020663f46ce9b1f31f066a7a">tvm::runtime::ShapeTuple</a>
, <a class="el" href="classtvm_1_1runtime_1_1SmallMapNode.html#a0593c84ceb05afb1a3f87045a3dc3a59">tvm::runtime::SmallMapNode</a>
, <a class="el" href="classtvm_1_1runtime_1_1String.html#aaeda6a88310d41a22ce884fb1570b0d2">tvm::runtime::String</a>
@@ -300,7 +300,7 @@ $(function() {
: <a class="el" href="structtvm_1_1AttrError.html#a3285db0171872bc2fdde8243f6e801d9">tvm::AttrError</a>
</li>
<li>AttrInitEntry()
-: <a class="el" href="structtvm_1_1detail_1_1AttrInitEntry.html#ad68ac350b0d49e97caab8443cc8fb08b">tvm::detail::AttrInitEntry< T ></a>
+: <a class="el" href="structtvm_1_1detail_1_1AttrInitEntry.html#af07c4a3a8f4663ac03ae238ab7b9d791">tvm::detail::AttrInitEntry< T ></a>
</li>
<li>AttrInitVisitor()
: <a class="el" href="classtvm_1_1detail_1_1AttrInitVisitor.html#ac3c800c9249fee195db2a5fa473fe960">tvm::detail::AttrInitVisitor< FFind ></a>
@@ -329,6 +329,9 @@ $(function() {
<li>AttrTriggerNonDefaultEntry()
: <a class="el" href="structtvm_1_1detail_1_1AttrTriggerNonDefaultEntry.html#a572356cfd8d20c258b03f7a5c62d3909">tvm::detail::AttrTriggerNonDefaultEntry< T ></a>
</li>
+<li>AutoBind()
+: <a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a4180c6e940445e79ada08325b2dba7a8">tvm::meta_schedule::ScheduleRule</a>
+</li>
<li>AutoInline()
: <a class="el" href="classtvm_1_1meta__schedule_1_1ScheduleRule.html#a73a8c07ad4fa26d5c3e28f33c2215f1d">tvm::meta_schedule::ScheduleRule</a>
</li>
diff --git a/docs/reference/api/doxygen/functions_func_m.html b/docs/reference/api/doxygen/functions_func_m.html
index 0269e574b..d35a838c6 100644
--- a/docs/reference/api/doxygen/functions_func_m.html
+++ b/docs/reference/api/doxygen/functions_func_m.html
@@ -164,7 +164,7 @@ $(function() {
: <a class="el" href="classtvm_1_1arith_1_1ModularSet.html#a9f54896d98169246c6a24cc338fde500">tvm::arith::ModularSet</a>
</li>
<li>Module()
-: <a class="el" href="classtvm_1_1runtime_1_1Module.html#abd1380b3f813c2b6acefca3aaef425f4">tvm::runtime::Module</a>
+: <a class="el" href="classtvm_1_1runtime_1_1Module.html#abfbc619b3b3166d63ec52e399c24bed9">tvm::runtime::Module</a>
</li>
<li>Move()
: <a class="el" href="structtvm_1_1runtime_1_1vm_1_1Instruction.html#a162dc8d73dc2306f066c3ee013ff096f">tvm::runtime::vm::Instruction</a>
@@ -206,6 +206,9 @@ $(function() {
<li>MutateParallel()
: <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#acb242cfc6875055d75f7ea7adcfa9c14">tvm::meta_schedule::Mutator</a>
</li>
+<li>MutateThreadBinding()
+: <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a008b237e2c944cc25c123ef412dcd397">tvm::meta_schedule::Mutator</a>
+</li>
<li>MutateTileSize()
: <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a84ed21cbc627ff6dd49f983a05113696">tvm::meta_schedule::Mutator</a>
</li>
diff --git a/docs/reference/api/doxygen/functions_func_r.html b/docs/reference/api/doxygen/functions_func_r.html
index 54cff1388..28055e3d6 100644
--- a/docs/reference/api/doxygen/functions_func_r.html
+++ b/docs/reference/api/doxygen/functions_func_r.html
@@ -297,7 +297,7 @@ $(function() {
: <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a95db036cfced4c2575367a26a41498ff">tvm::meta_schedule::Postproc</a>
</li>
<li>RewriteUnboundBlock()
-: <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a190932261c8574b7e85e804938f8ad0d">tvm::meta_schedule::Postproc</a>
+: <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a1836b2278bc24fdc227c490896d92980">tvm::meta_schedule::Postproc</a>
</li>
<li>rfactor()
: <a class="el" href="classtvm_1_1auto__scheduler_1_1State.html#a21c27b06d439267f8b981fa05c5f48a0">tvm::auto_scheduler::State</a>
diff --git a/docs/reference/api/doxygen/functions_func_s.html b/docs/reference/api/doxygen/functions_func_s.html
index d982adcd1..43e480d38 100644
--- a/docs/reference/api/doxygen/functions_func_s.html
+++ b/docs/reference/api/doxygen/functions_func_s.html
@@ -706,7 +706,7 @@ $(function() {
: <a class="el" href="classtvm_1_1runtime_1_1DeviceAPI.html#ac29b9295c432a87658392872c644864f">tvm::runtime::DeviceAPI</a>
</li>
<li>String()
-: <a class="el" href="classtvm_1_1runtime_1_1String.html#a68df7bab89fca339e3918438dd80300d">tvm::runtime::String</a>
+: <a class="el" href="classtvm_1_1runtime_1_1String.html#acf549b3c43142639879e0fc31ea5cd77">tvm::runtime::String</a>
</li>
<li>StringImm()
: <a class="el" href="classtvm_1_1tir_1_1StringImm.html#a0f2830290e055f677c5d5dea98aab726">tvm::tir::StringImm</a>
diff --git a/docs/reference/api/doxygen/functions_func_v.html b/docs/reference/api/doxygen/functions_func_v.html
index 884282a23..f8d3f54cc 100644
--- a/docs/reference/api/doxygen/functions_func_v.html
+++ b/docs/reference/api/doxygen/functions_func_v.html
@@ -425,7 +425,7 @@ $(function() {
<li>VisitType_()
: <a class="el" href="classtvm_1_1TypeFunctor_3_01R_07const_01Type_01_6n_00_01Args_8_8_8_08_4.html#ac94cab8aea5c2a9afb439d7417f30a20">tvm::TypeFunctor< R(const Type &n, Args...)></a>
, <a class="el" href="classtvm_1_1TypeMutator.html#a18a04668d3fb464d957f3a26a4274104">tvm::TypeMutator</a>
-, <a class="el" href="classtvm_1_1TypeVisitor.html#ae699be9a6ed94a635c315506e0c2a6d2">tvm::TypeVisitor</a>
+, <a class="el" href="classtvm_1_1TypeVisitor.html#a8f548b8def48ea4f11a3eafa04d74d96">tvm::TypeVisitor</a>
</li>
<li>VisitTypeDefault_()
: <a class="el" href="classtvm_1_1TypeFunctor_3_01R_07const_01Type_01_6n_00_01Args_8_8_8_08_4.html#a91553f9e04c39b3821a70ae4f7b0c597">tvm::TypeFunctor< R(const Type &n, Args...)></a>
diff --git a/docs/reference/api/doxygen/functions_m.html b/docs/reference/api/doxygen/functions_m.html
index 8add9f9f0..cccff13aa 100644
--- a/docs/reference/api/doxygen/functions_m.html
+++ b/docs/reference/api/doxygen/functions_m.html
@@ -306,7 +306,7 @@ $(function() {
: <a class="el" href="classtvm_1_1DiagnosticContextNode.html#adea7e38a6e47cbab7fb5639f208aa536">tvm::DiagnosticContextNode</a>
</li>
<li>Module()
-: <a class="el" href="classtvm_1_1runtime_1_1Module.html#abfbc619b3b3166d63ec52e399c24bed9">tvm::runtime::Module</a>
+: <a class="el" href="classtvm_1_1runtime_1_1Module.html#abd1380b3f813c2b6acefca3aaef425f4">tvm::runtime::Module</a>
, <a class="el" href="classtvm_1_1runtime_1_1ModuleNode.html#a21f639900c480510650969df9c74d17d">tvm::runtime::ModuleNode</a>
</li>
<li>module_handle
@@ -365,6 +365,9 @@ $(function() {
<li>MutateParallel()
: <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#acb242cfc6875055d75f7ea7adcfa9c14">tvm::meta_schedule::Mutator</a>
</li>
+<li>MutateThreadBinding()
+: <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a008b237e2c944cc25c123ef412dcd397">tvm::meta_schedule::Mutator</a>
+</li>
<li>MutateTileSize()
: <a class="el" href="classtvm_1_1meta__schedule_1_1Mutator.html#a84ed21cbc627ff6dd49f983a05113696">tvm::meta_schedule::Mutator</a>
</li>
diff --git a/docs/reference/api/doxygen/functions_r.html b/docs/reference/api/doxygen/functions_r.html
index 975bda8b4..e75f5904e 100644
--- a/docs/reference/api/doxygen/functions_r.html
+++ b/docs/reference/api/doxygen/functions_r.html
@@ -482,7 +482,7 @@ $(function() {
: <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a95db036cfced4c2575367a26a41498ff">tvm::meta_schedule::Postproc</a>
</li>
<li>RewriteUnboundBlock()
-: <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a190932261c8574b7e85e804938f8ad0d">tvm::meta_schedule::Postproc</a>
+: <a class="el" href="classtvm_1_1meta__schedule_1_1Postproc.html#a1836b2278bc24fdc227c490896d92980">tvm::meta_schedule::Postproc</a>
</li>
<li>rfactor()
: <a class="el" href="classtvm_1_1auto__scheduler_1_1State.html#a21c27b06d439267f8b981fa05c5f48a0">tvm::auto_scheduler::State</a>
diff --git a/docs/reference/api/doxygen/functions_s.html b/docs/reference/api/doxygen/functions_s.html
index 9747549c3..1a15bb157 100644
--- a/docs/reference/api/doxygen/functions_s.html
+++ b/docs/reference/api/doxygen/functions_s.html
@@ -808,7 +808,7 @@ $(function() {
</li>
<li>Span()
: <a class="el" href="classtvm_1_1Span.html#a5216631b639e8c802263d87d3fe9e5f6">tvm::Span</a>
-, <a class="el" href="classtvm_1_1support_1_1Span.html#a77653730a2542edf93b7c4413a72f3ec">tvm::support::Span< T, W ></a>
+, <a class="el" href="classtvm_1_1support_1_1Span.html#a3c22dd06856e7029e7107adf38eb72f5">tvm::support::Span< T, W ></a>
</li>
<li>span
: <a class="el" href="classtvm_1_1tir_1_1BufferNode.html#a13fc164e1b65cee741b4895df6316a4a">tvm::tir::BufferNode</a>
@@ -952,7 +952,7 @@ $(function() {
: <a class="el" href="classtvm_1_1tir_1_1StmtSRefNode.html#afc61714fbac246f72d02d0729fb9ba2d">tvm::tir::StmtSRefNode</a>
</li>
<li>StmtNode()
-: <a class="el" href="classtvm_1_1tir_1_1StmtNode.html#a67693c4e97ae49890ea74605fe1b1f74">tvm::tir::StmtNode</a>
+: <a class="el" href="classtvm_1_1tir_1_1StmtNode.html#a79e21b14d3ab57209577bf4a8f694a87">tvm::tir::StmtNode</a>
</li>
<li>StmtSRef()
: <a class="el" href="classtvm_1_1tir_1_1StmtSRef.html#a31687ace5dc4fe487ffb87d658d86412">tvm::tir::StmtSRef</a>
@@ -1050,7 +1050,7 @@ $(function() {
, <a class="el" href="classtvm_1_1tir_1_1BufferNode.html#ac18ddd10b79a30ae57d3a8283686259d">tvm::tir::BufferNode</a>
</li>
<li>String()
-: <a class="el" href="classtvm_1_1runtime_1_1String.html#a68df7bab89fca339e3918438dd80300d">tvm::runtime::String</a>
+: <a class="el" href="classtvm_1_1runtime_1_1String.html#a02fca36e3ff55cc1e83635b02a11fca3">tvm::runtime::String</a>
, <a class="el" href="classtvm_1_1runtime_1_1StringObj_1_1FromStd.html#a7fb804f7dc96dd9f705c84095f37f1ca">tvm::runtime::StringObj::FromStd</a>
, <a class="el" href="classtvm_1_1runtime_1_1StringObj.html#a7fb804f7dc96dd9f705c84095f37f1ca">tvm::runtime::StringObj</a>
</li>
diff --git a/docs/reference/api/doxygen/functions_t.html b/docs/reference/api/doxygen/functions_t.html
index c0617e3e2..a89cfd570 100644
--- a/docs/reference/api/doxygen/functions_t.html
+++ b/docs/reference/api/doxygen/functions_t.html
@@ -1198,7 +1198,7 @@ $(function() {
, <a class="el" href="classtvm_1_1runtime_1_1ObjectPtr.html#ae0ea8b4adc6dab8c74086bceaef6b3e1">tvm::runtime::ObjectPtr< T ></a>
, <a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html#ae0ea8b4adc6dab8c74086bceaef6b3e1">tvm::runtime::ObjectRef</a>
, <a class="el" href="classtvm_1_1runtime_1_1TVMPODValue__.html#ae0ea8b4adc6dab8c74086bceaef6b3e1">tvm::runtime::TVMPODValue_</a>
-, <a class="el" href="classtvm_1_1runtime_1_1TVMRetValue.html#ac4a3850c0989e7c2d5cd8e0f096d0997">tvm::runtime::TVMRetValue</a>
+, <a class="el" href="classtvm_1_1runtime_1_1TVMRetValue.html#a77455a8fe7d27b90a01a64f1cd28e9ec">tvm::runtime::TVMRetValue</a>
, <a class="el" href="classtvm_1_1runtime_1_1TypedPackedFunc_3_01R_07Args_8_8_8_08_4.html#ae0ea8b4adc6dab8c74086bceaef6b3e1">tvm::runtime::TypedPackedFunc< R(Args...)></a>
</li>
<li>type
@@ -1270,7 +1270,7 @@ $(function() {
: <a class="el" href="classtvm_1_1TypedEnvFunc_3_01R_07Args_8_8_8_08_4.html#a0d72a6fa7263821c14bcd37837998ed9">tvm::TypedEnvFunc< R(Args...)></a>
</li>
<li>TypedPackedFunc()
-: <a class="el" href="classtvm_1_1runtime_1_1TypedPackedFunc_3_01R_07Args_8_8_8_08_4.html#aa3663a440db7a6951abd767109b9bf90">tvm::runtime::TypedPackedFunc< R(Args...)></a>
+: <a class="el" href="classtvm_1_1runtime_1_1TypedPackedFunc_3_01R_07Args_8_8_8_08_4.html#afd8ee9dd9648c19b468bb4b0b00e8e4e">tvm::runtime::TypedPackedFunc< R(Args...)></a>
</li>
<li>TypeIndex2Key()
: <a class="el" href="classtvm_1_1runtime_1_1Object.html#a817ba6c23b7ee1821c48a75edf255a30">tvm::runtime::Object</a>
diff --git a/docs/reference/api/doxygen/functions_v.html b/docs/reference/api/doxygen/functions_v.html
index 3ec3dfd52..999e53adc 100644
--- a/docs/reference/api/doxygen/functions_v.html
+++ b/docs/reference/api/doxygen/functions_v.html
@@ -551,7 +551,7 @@ $(function() {
<li>VisitStmt_()
: <a class="el" href="classtvm_1_1tir_1_1StmtFunctor_3_01R_07const_01Stmt_01_6n_00_01Args_8_8_8_01args_08_4.html#a3d1c16d316eac87f2dcaee67152bea84">tvm::tir::StmtFunctor< R(const Stmt &n, Args... args)></a>
, <a class="el" href="classtvm_1_1tir_1_1StmtMutator.html#a60b18d6d6bfcb692ab4a369465a175a3">tvm::tir::StmtMutator</a>
-, <a class="el" href="classtvm_1_1tir_1_1StmtVisitor.html#afcb1a0ec03b7a7da4304c5b790b27210">tvm::tir::StmtVisitor</a>
+, <a class="el" href="classtvm_1_1tir_1_1StmtVisitor.html#a0e6ca99ff599eea59b322df49b1c3449">tvm::tir::StmtVisitor</a>
</li>
<li>VisitStmtDefault_()
: <a class="el" href="classtvm_1_1tir_1_1StmtFunctor_3_01R_07const_01Stmt_01_6n_00_01Args_8_8_8_01args_08_4.html#ae51b328e2b59a50bed7112a93dba1aae">tvm::tir::StmtFunctor< R(const Stmt &n, Args... args)></a>
@@ -565,9 +565,9 @@ $(function() {
, <a class="el" href="classtvm_1_1TypeMutator.html#a84e824911927d98e20a338eab8b75a45">tvm::TypeMutator</a>
</li>
<li>VisitType_()
-: <a class="el" href="classtvm_1_1TypeFunctor_3_01R_07const_01Type_01_6n_00_01Args_8_8_8_08_4.html#a05485baebc1e25710714f65b68124f73">tvm::TypeFunctor< R(const Type &n, Args...)></a>
-, <a class="el" href="classtvm_1_1TypeMutator.html#ad4ad7209f8789568e5e57870f0b758f0">tvm::TypeMutator</a>
-, <a class="el" href="classtvm_1_1TypeVisitor.html#adb2f5c5f8e3fbe5b62ce8527cd59a30b">tvm::TypeVisitor</a>
+: <a class="el" href="classtvm_1_1TypeFunctor_3_01R_07const_01Type_01_6n_00_01Args_8_8_8_08_4.html#a0e715c54558934e4504c366ff803d8e1">tvm::TypeFunctor< R(const Type &n, Args...)></a>
+, <a class="el" href="classtvm_1_1TypeMutator.html#ac694fbe28eb7026d30c5ca5fa2fb4a1a">tvm::TypeMutator</a>
+, <a class="el" href="classtvm_1_1TypeVisitor.html#a11378b4db6f704c04a97bec1c8ea8261">tvm::TypeVisitor</a>
</li>
<li>VisitTypeDefault_()
: <a class="el" href="classtvm_1_1TypeFunctor_3_01R_07const_01Type_01_6n_00_01Args_8_8_8_08_4.html#a91553f9e04c39b3821a70ae4f7b0c597">tvm::TypeFunctor< R(const Type &n, Args...)></a>
@@ -589,7 +589,7 @@ $(function() {
: <a class="el" href="structtvm_1_1runtime_1_1vm_1_1VMFrame.html#a8f8c990ee4fa7cb7472f5440f2ca3bde">tvm::runtime::vm::VMFrame</a>
</li>
<li>VMFunction()
-: <a class="el" href="structtvm_1_1runtime_1_1vm_1_1VMFunction.html#aea763069fe1dd6849ce0d1ec336931e0">tvm::runtime::vm::VMFunction</a>
+: <a class="el" href="structtvm_1_1runtime_1_1vm_1_1VMFunction.html#af9d2bdcf19642c21bc4909b9e9b6196d">tvm::runtime::vm::VMFunction</a>
</li>
<li>Void()
: <a class="el" href="classtvm_1_1runtime_1_1DataType.html#ab8dc0832aff8fd7421884c0fe20a3bfd">tvm::runtime::DataType</a>
diff --git a/docs/reference/api/doxygen/ir_2attrs_8h.html b/docs/reference/api/doxygen/ir_2attrs_8h.html
index ffe628fca..1257dd1ba 100644
--- a/docs/reference/api/doxygen/ir_2attrs_8h.html
+++ b/docs/reference/api/doxygen/ir_2attrs_8h.html
@@ -179,7 +179,7 @@ Macros</h2></td></tr>
<tr class="memitem:ac869a7c3d7169282810ce7819918314a"><td class="memItemLeft" align="right" valign="top">#define </td><td class="memItemRight" valign="bottom"><a class="el" href="ir_2attrs_8h.html#ac869a7c3d7169282810ce7819918314a">TVM_DECLARE_ATTRS</a>(ClassName, TypeKey)</td></tr>
<tr class="memdesc:ac869a7c3d7169282810ce7819918314a"><td class="mdescLeft"> </td><td class="mdescRight">Declare an attribute function. <a href="#ac869a7c3d7169282810ce7819918314a">More...</a><br /></td></tr>
<tr class="separator:ac869a7c3d7169282810ce7819918314a"><td class="memSeparator" colspan="2"> </td></tr>
-<tr class="memitem:a578da113eb199bad72e26c03ad24832f"><td class="memItemLeft" align="right" valign="top">#define </td><td class="memItemRight" valign="bottom"><a class="el" href="ir_2attrs_8h.html#a578da113eb199bad72e26c03ad24832f">TVM_ATTR_FIELD</a>(FieldName)   __fvisit__(#FieldName, &FieldName)</td></tr>
+<tr class="memitem:a578da113eb199bad72e26c03ad24832f"><td class="memItemLeft" align="right" valign="top">#define </td><td class="memItemRight" valign="bottom"><a class="el" href="ir_2attrs_8h.html#a578da113eb199bad72e26c03ad24832f">TVM_ATTR_FIELD</a>(FieldName)   _tvm_fvisit(#FieldName, &FieldName)</td></tr>
<tr class="memdesc:a578da113eb199bad72e26c03ad24832f"><td class="mdescLeft"> </td><td class="mdescRight">Declare an attribute field. <a href="#a578da113eb199bad72e26c03ad24832f">More...</a><br /></td></tr>
<tr class="separator:a578da113eb199bad72e26c03ad24832f"><td class="memSeparator" colspan="2"> </td></tr>
</table><table class="memberdecls">
@@ -252,7 +252,7 @@ Functions</h2></td></tr>
<td>(</td>
<td class="paramtype"> </td>
<td class="paramname">FieldName</td><td>)</td>
- <td>   __fvisit__(#FieldName, &FieldName)</td>
+ <td>   _tvm_fvisit(#FieldName, &FieldName)</td>
</tr>
</table>
</div><div class="memdoc">
@@ -292,7 +292,7 @@ Functions</h2></td></tr>
</tr>
</table>
</div><div class="memdoc">
-<b>Value:</b><div class="fragment"><div class="line"><span class="keyword">static</span> constexpr <span class="keyword">const</span> <span class="keywordtype">char</span>* _type_key = TypeKey; \</div><div class="line"> TVM_DECLARE_FINAL_OBJECT_INFO(ClassName, ::<a class="code" href="classtvm_1_1BaseAttrsNode.html">tvm::BaseAttrsNode</a>) \</div><div class="line"> template <typename FVisit> \</div><div class="line"> void __VisitAt [...]
+<b>Value:</b><div class="fragment"><div class="line"><span class="keyword">static</span> constexpr <span class="keyword">const</span> <span class="keywordtype">char</span>* _type_key = TypeKey; \</div><div class="line"> TVM_DECLARE_FINAL_OBJECT_INFO(ClassName, ::<a class="code" href="classtvm_1_1BaseAttrsNode.html">tvm::BaseAttrsNode</a>) \</div><div class="line"> template <typename FVisit> \</div><div class="line"> void _tvm_Visi [...]
</div><!-- fragment -->
<p>Declare an attribute function. </p>
<dl class="params"><dt>Parameters</dt><dd>
diff --git a/docs/reference/api/doxygen/ir_2attrs_8h_source.html b/docs/reference/api/doxygen/ir_2attrs_8h_source.html
index 5f36e80af..1765a41b8 100644
--- a/docs/reference/api/doxygen/ir_2attrs_8h_source.html
+++ b/docs/reference/api/doxygen/ir_2attrs_8h_source.html
@@ -66,7 +66,7 @@ $(function() {
<div class="title">attrs.h</div> </div>
</div><!--header-->
<div class="contents">
-<a href="ir_2attrs_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno"> 1</span> <span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno"> 2</span> <span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno"> 3</span> <span class="comment"> * or more c [...]
+<a href="ir_2attrs_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno"> 1</span> <span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno"> 2</span> <span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno"> 3</span> <span class="comment"> * or more c [...]
<div class="ttc" id="classtvm_1_1DictAttrsNode_html_ad80fb7d4b9f4e08bd0f15e409af2ac80"><div class="ttname"><a href="classtvm_1_1DictAttrsNode.html#ad80fb7d4b9f4e08bd0f15e409af2ac80">tvm::DictAttrsNode::dict</a></div><div class="ttdeci">Map< String, ObjectRef > dict</div><div class="ttdoc">internal attrs map </div><div class="ttdef"><b>Definition:</b> attrs.h:204</div></div>
<div class="ttc" id="structtvm_1_1AttrError_html_a3285db0171872bc2fdde8243f6e801d9"><div class="ttname"><a href="structtvm_1_1AttrError.html#a3285db0171872bc2fdde8243f6e801d9">tvm::AttrError::AttrError</a></div><div class="ttdeci">AttrError(std::string msg)</div><div class="ttdoc">constructor </div><div class="ttdef"><b>Definition:</b> attrs.h:100</div></div>
<div class="ttc" id="structtvm_1_1detail_1_1AttrInitEntry_html_a5608a2a457a397bf11f2be2776ec0653"><div class="ttname"><a href="structtvm_1_1detail_1_1AttrInitEntry.html#a5608a2a457a397bf11f2be2776ec0653">tvm::detail::AttrInitEntry::set_lower_bound</a></div><div class="ttdeci">TSelf & set_lower_bound(const T &begin)</div><div class="ttdef"><b>Definition:</b> attrs.h:540</div></div>
diff --git a/docs/reference/api/doxygen/mutator_8h_source.html b/docs/reference/api/doxygen/mutator_8h_source.html
index d09f3f81d..7b49a93fd 100644
--- a/docs/reference/api/doxygen/mutator_8h_source.html
+++ b/docs/reference/api/doxygen/mutator_8h_source.html
@@ -66,7 +66,7 @@ $(function() {
<div class="title">mutator.h</div> </div>
</div><!--header-->
<div class="contents">
-<a href="mutator_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno"> 1</span> <span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno"> 2</span> <span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno"> 3</span> <span class="comment"> * or more con [...]
+<a href="mutator_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno"> 1</span> <span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno"> 2</span> <span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno"> 3</span> <span class="comment"> * or more con [...]
<div class="ttc" id="namespacetvm_html"><div class="ttname"><a href="namespacetvm.html">tvm</a></div><div class="ttdoc">runtime implementation for LibTorch/TorchScript. </div><div class="ttdef"><b>Definition:</b> analyzer.h:36</div></div>
<div class="ttc" id="classtvm_1_1meta__schedule_1_1MutatorNode_html_aa81faa50840d255a832cf6fdf078f8dd"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1MutatorNode.html#aa81faa50840d255a832cf6fdf078f8dd">tvm::meta_schedule::MutatorNode::Apply</a></div><div class="ttdeci">virtual Optional< tir::Trace > Apply(const tir::Trace &trace, support::LinearCongruentialEngine::TRandState *rand_state)=0</div><div class="ttdoc">Apply the mutator function to the given trace. </div></div>
<div class="ttc" id="classtvm_1_1meta__schedule_1_1MutatorNode_html"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1MutatorNode.html">tvm::meta_schedule::MutatorNode</a></div><div class="ttdoc">Mutator is designed to mutate the trace to explore the design space. </div><div class="ttdef"><b>Definition:</b> mutator.h:31</div></div>
diff --git a/docs/reference/api/doxygen/postproc_8h_source.html b/docs/reference/api/doxygen/postproc_8h_source.html
index ea9065939..f43c1f4f0 100644
--- a/docs/reference/api/doxygen/postproc_8h_source.html
+++ b/docs/reference/api/doxygen/postproc_8h_source.html
@@ -66,7 +66,7 @@ $(function() {
<div class="title">postproc.h</div> </div>
</div><!--header-->
<div class="contents">
-<a href="postproc_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno"> 1</span> <span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno"> 2</span> <span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno"> 3</span> <span class="comment"> * or more co [...]
+<a href="postproc_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno"> 1</span> <span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno"> 2</span> <span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno"> 3</span> <span class="comment"> * or more co [...]
<div class="ttc" id="namespacetvm_html"><div class="ttname"><a href="namespacetvm.html">tvm</a></div><div class="ttdoc">runtime implementation for LibTorch/TorchScript. </div><div class="ttdef"><b>Definition:</b> analyzer.h:36</div></div>
<div class="ttc" id="classtvm_1_1meta__schedule_1_1Postproc_html"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1Postproc.html">tvm::meta_schedule::Postproc</a></div><div class="ttdoc">Managed reference to PostprocNode. </div><div class="ttdef"><b>Definition:</b> postproc.h:110</div></div>
<div class="ttc" id="classtvm_1_1meta__schedule_1_1PyPostprocNode_html_a745d8654ab1a9cde5d24d4a9c40a68f2"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1PyPostprocNode.html#a745d8654ab1a9cde5d24d4a9c40a68f2">tvm::meta_schedule::PyPostprocNode::f_initialize_with_tune_context</a></div><div class="ttdeci">FInitializeWithTuneContext f_initialize_with_tune_context</div><div class="ttdoc">The packed function to the InitializeWithTuneContext function. </div><div class="ttdef"><b>Def [...]
diff --git a/docs/reference/api/doxygen/schedule__rule_8h_source.html b/docs/reference/api/doxygen/schedule__rule_8h_source.html
index 76a2dedb6..c2addaebc 100644
--- a/docs/reference/api/doxygen/schedule__rule_8h_source.html
+++ b/docs/reference/api/doxygen/schedule__rule_8h_source.html
@@ -66,7 +66,7 @@ $(function() {
<div class="title">schedule_rule.h</div> </div>
</div><!--header-->
<div class="contents">
-<a href="schedule__rule_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno"> 1</span> <span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno"> 2</span> <span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno"> 3</span> <span class="comment"> * or m [...]
+<a href="schedule__rule_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno"> 1</span> <span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno"> 2</span> <span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno"> 3</span> <span class="comment"> * or m [...]
<div class="ttc" id="classtvm_1_1meta__schedule_1_1ScheduleRuleNode_html_a5de55e66ecb7a81ce105d37a41ce45e7"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1ScheduleRuleNode.html#a5de55e66ecb7a81ce105d37a41ce45e7">tvm::meta_schedule::ScheduleRuleNode::InitializeWithTuneContext</a></div><div class="ttdeci">virtual void InitializeWithTuneContext(const TuneContext &context)=0</div><div class="ttdoc">Initialize the design space generator with tuning context. </div></div>
<div class="ttc" id="classtvm_1_1meta__schedule_1_1PyScheduleRuleNode_html_a18486ea5d8d3e9c35adc22f1a265fe5a"><div class="ttname"><a href="classtvm_1_1meta__schedule_1_1PyScheduleRuleNode.html#a18486ea5d8d3e9c35adc22f1a265fe5a">tvm::meta_schedule::PyScheduleRuleNode::f_initialize_with_tune_context</a></div><div class="ttdeci">FInitializeWithTuneContext f_initialize_with_tune_context</div><div class="ttdoc">The packed function to the InitializeWithTuneContext function. </div><div class="t [...]
<div class="ttc" id="namespacetvm_html"><div class="ttname"><a href="namespacetvm.html">tvm</a></div><div class="ttdoc">runtime implementation for LibTorch/TorchScript. </div><div class="ttdef"><b>Definition:</b> analyzer.h:36</div></div>
diff --git a/docs/reference/api/doxygen/search/all_11.js b/docs/reference/api/doxygen/search/all_11.js
index b0a2c74a9..1936549c2 100644
--- a/docs/reference/api/doxygen/search/all_11.js
+++ b/docs/reference/api/doxygen/search/all_11.js
@@ -35,7 +35,7 @@ var searchData=
['page_5fallocator_2eh',['page_allocator.h',['../page__allocator_8h.html',1,'']]],
['pagememorymanagercreate',['PageMemoryManagerCreate',['../page__allocator_8h.html#a720dbc7474ac13b93fafb974cfc20bc7',1,'page_allocator.h']]],
['papi_2eh',['papi.h',['../papi_8h.html',1,'']]],
- ['parallel',['Parallel',['../classtvm_1_1tir_1_1ScheduleNode.html#a553dc17c0b49b175cd16881c81b6c789',1,'tvm::tir::ScheduleNode::Parallel()'],['../classtvm_1_1auto__scheduler_1_1State.html#a2376f0180bc5b5dd4b456f2a75d4a366',1,'tvm::auto_scheduler::State::parallel()'],['../classtvm_1_1te_1_1Stage.html#a60a6be10a1a96cb594c1399efabafef3',1,'tvm::te::Stage::parallel()']]],
+ ['parallel',['parallel',['../classtvm_1_1auto__scheduler_1_1State.html#a2376f0180bc5b5dd4b456f2a75d4a366',1,'tvm::auto_scheduler::State::parallel()'],['../classtvm_1_1te_1_1Stage.html#a60a6be10a1a96cb594c1399efabafef3',1,'tvm::te::Stage::parallel()'],['../classtvm_1_1tir_1_1ScheduleNode.html#a553dc17c0b49b175cd16881c81b6c789',1,'tvm::tir::ScheduleNode::Parallel()']]],
['parallel_5ffor',['parallel_for',['../namespacetvm_1_1support.html#a8bf1225e8bb1db575578ca2d645fb23c',1,'tvm::support']]],
['parallel_5ffor_2eh',['parallel_for.h',['../parallel__for_8h.html',1,'']]],
['parallel_5ffor_5fdynamic',['parallel_for_dynamic',['../namespacetvm_1_1support.html#afe4271363c794f1644ce7af5c2266530',1,'tvm::support']]],
@@ -163,7 +163,7 @@ var searchData=
['predict_5ffunc',['predict_func',['../classtvm_1_1auto__scheduler_1_1PythonBasedModelNode.html#aa051c804bc592d7f4f1a5b5710f73595',1,'tvm::auto_scheduler::PythonBasedModelNode']]],
['predict_5fstage_5ffunc',['predict_stage_func',['../classtvm_1_1auto__scheduler_1_1PythonBasedModelNode.html#a380809fbb5d4d68b9ec744e3a5015fe6',1,'tvm::auto_scheduler::PythonBasedModelNode']]],
['predictstages',['PredictStages',['../classtvm_1_1auto__scheduler_1_1CostModelNode.html#a213222251099444874698d2e9ff18adc',1,'tvm::auto_scheduler::CostModelNode::PredictStages()'],['../classtvm_1_1auto__scheduler_1_1PythonBasedModelNode.html#a1f9975c4bdd61793b806663a61a9a703',1,'tvm::auto_scheduler::PythonBasedModelNode::PredictStages()']]],
- ['prefetch',['Prefetch',['../classtvm_1_1tir_1_1Prefetch.html',1,'tvm::tir::Prefetch'],['../classtvm_1_1te_1_1Stage.html#a611327890918fb641a8e65396ab9c5f6',1,'tvm::te::Stage::prefetch()'],['../classtvm_1_1tir_1_1Prefetch.html#af462f85dad4268685e3113b6b009d1b2',1,'tvm::tir::Prefetch::Prefetch()'],['../namespacetvm_1_1tir_1_1builtin.html#a543f1fc334d2bc830add972895a03f17',1,'tvm::tir::builtin::prefetch()']]],
+ ['prefetch',['Prefetch',['../classtvm_1_1tir_1_1Prefetch.html',1,'tvm::tir::Prefetch'],['../classtvm_1_1tir_1_1Prefetch.html#af462f85dad4268685e3113b6b009d1b2',1,'tvm::tir::Prefetch::Prefetch()'],['../classtvm_1_1te_1_1Stage.html#a611327890918fb641a8e65396ab9c5f6',1,'tvm::te::Stage::prefetch()'],['../namespacetvm_1_1tir_1_1builtin.html#a543f1fc334d2bc830add972895a03f17',1,'tvm::tir::builtin::prefetch()']]],
['prefetch_5fdata',['prefetch_data',['../classtvm_1_1te_1_1IterVarAttrNode.html#a0cd129334ac1bc8d6461fb06be67e731',1,'tvm::te::IterVarAttrNode']]],
['prefetch_5foffset',['prefetch_offset',['../classtvm_1_1te_1_1IterVarAttrNode.html#a2a4a8e201e6caefeecffd4a7647866fd',1,'tvm::te::IterVarAttrNode']]],
['prefetch_5fscope',['prefetch_scope',['../namespacetvm_1_1tir_1_1attr.html#ac95fbd1c09a60b10c7a5d07f6c4b68a6',1,'tvm::tir::attr']]],
diff --git a/docs/reference/api/doxygen/search/all_13.js b/docs/reference/api/doxygen/search/all_13.js
index 68bdbbc8d..f3d4caede 100644
--- a/docs/reference/api/doxygen/search/all_13.js
+++ b/docs/reference/api/doxygen/search/all_13.js
@@ -81,7 +81,7 @@ var searchData=
['registerconfigoption',['RegisterConfigOption',['../classtvm_1_1transform_1_1PassContext.html#a6f1d1040cc97320414b4690203f87919',1,'tvm::transform::PassContext']]],
['registergenericfunc',['RegisterGenericFunc',['../classtvm_1_1GenericFunc.html#a909acecbf2f34f847a34e587a4570dce',1,'tvm::GenericFunc']]],
['registerorget',['RegisterOrGet',['../classtvm_1_1OpRegEntry.html#a39a4d3e7f905eb4e29ca464bcedb05bd',1,'tvm::OpRegEntry::RegisterOrGet()'],['../classtvm_1_1relay_1_1ExecutorRegEntry.html#a03347a2b68269b853a7c0399994951ef',1,'tvm::relay::ExecutorRegEntry::RegisterOrGet()'],['../classtvm_1_1relay_1_1RuntimeRegEntry.html#ae8b479159ccd8b35b75950fcda58dd9d',1,'tvm::relay::RuntimeRegEntry::RegisterOrGet()'],['../classtvm_1_1TargetTagRegEntry.html#a07e0631600484dc0985ca62b1620461c',1,'tvm::T [...]
- ['registry',['Registry',['../classtvm_1_1ReflectionVTable_1_1Registry.html',1,'tvm::ReflectionVTable::Registry'],['../classtvm_1_1runtime_1_1Registry.html',1,'tvm::runtime::Registry'],['../structTVMMutableFuncRegistry.html#acc1fcd6554c627c1bf3b3c00e1120e9b',1,'TVMMutableFuncRegistry::registry()'],['../structTVMModule.html#a6db21005b9e983207b341e65af4c4ab7',1,'TVMModule::registry()'],['../classtvm_1_1ReflectionVTable_1_1Registry.html#ac8f4637640aa9dffed745303a4cfa827',1,'tvm::Reflection [...]
+ ['registry',['Registry',['../classtvm_1_1ReflectionVTable_1_1Registry.html',1,'tvm::ReflectionVTable::Registry'],['../classtvm_1_1runtime_1_1Registry.html',1,'tvm::runtime::Registry'],['../classtvm_1_1ReflectionVTable_1_1Registry.html#ac8f4637640aa9dffed745303a4cfa827',1,'tvm::ReflectionVTable::Registry::Registry()'],['../structTVMMutableFuncRegistry.html#acc1fcd6554c627c1bf3b3c00e1120e9b',1,'TVMMutableFuncRegistry::registry()'],['../structTVMModule.html#a6db21005b9e983207b341e65af4c4a [...]
['registry_2eh',['registry.h',['../registry_8h.html',1,'']]],
['regname',['RegName',['../namespacetvm_1_1runtime_1_1vm.html#a3bbbf700719e9dc3dda2bc25210c18ae',1,'tvm::runtime::vm']]],
['reinterpret',['reinterpret',['../namespacetvm_1_1tir_1_1builtin.html#a7b555bc5cca2f5e7b26c1037bc0001ce',1,'tvm::tir::builtin::reinterpret()'],['../namespacetvm.html#a34084606675cd2c73c6b0f10e1618280',1,'tvm::reinterpret()'],['../namespacetvm_1_1topi.html#a25239505894bdae140e53f4abc146f92',1,'tvm::topi::reinterpret()']]],
@@ -113,7 +113,7 @@ var searchData=
['rendererrors',['RenderErrors',['../classtvm_1_1ErrorReporter.html#a54699ec5f538bd207b5aa4e3f55181c6',1,'tvm::ErrorReporter']]],
['renewdefs',['RenewDefs',['../namespacetvm_1_1tir.html#a2e639c81d1c6875ead7764ab8a7cd553',1,'tvm::tir']]],
['renormalizesplitpattern',['RenormalizeSplitPattern',['../namespacetvm_1_1tir_1_1transform.html#a5c670c9efcd740f2f168b62e624c8c57',1,'tvm::tir::transform']]],
- ['reorder',['reorder',['../classtvm_1_1auto__scheduler_1_1State.html#a16e95966b46977eff629a5f4f1564533',1,'tvm::auto_scheduler::State::reorder()'],['../classtvm_1_1te_1_1Stage.html#ad96cd240a92df9cafae89cdf2a7e302e',1,'tvm::te::Stage::reorder()'],['../classtvm_1_1tir_1_1ScheduleNode.html#a059229fe0e254961da406807a97f7a3d',1,'tvm::tir::ScheduleNode::Reorder()']]],
+ ['reorder',['Reorder',['../classtvm_1_1tir_1_1ScheduleNode.html#a059229fe0e254961da406807a97f7a3d',1,'tvm::tir::ScheduleNode::Reorder()'],['../classtvm_1_1auto__scheduler_1_1State.html#a16e95966b46977eff629a5f4f1564533',1,'tvm::auto_scheduler::State::reorder()'],['../classtvm_1_1te_1_1Stage.html#ad96cd240a92df9cafae89cdf2a7e302e',1,'tvm::te::Stage::reorder()']]],
['reorderstep',['ReorderStep',['../classtvm_1_1auto__scheduler_1_1ReorderStep.html',1,'tvm::auto_scheduler::ReorderStep'],['../classtvm_1_1auto__scheduler_1_1ReorderStep.html#a83b9dab5f38d5a4d42c6424ba437bc10',1,'tvm::auto_scheduler::ReorderStep::ReorderStep(int stage_id, const Array< Integer > &after_ids)'],['../classtvm_1_1auto__scheduler_1_1ReorderStep.html#a9586534afef3e0f57ab31e8374e70792',1,'tvm::auto_scheduler::ReorderStep::ReorderStep(dmlc::JSONReader *reader)']]],
['reorderstepnode',['ReorderStepNode',['../classtvm_1_1auto__scheduler_1_1ReorderStepNode.html',1,'tvm::auto_scheduler']]],
['reorg',['reorg',['../namespacetvm_1_1topi_1_1vision.html#a1014df582489005202c4218e51792314',1,'tvm::topi::vision']]],
@@ -138,7 +138,7 @@ var searchData=
['required_5fpass',['required_pass',['../classtvm_1_1transform_1_1PassContextNode.html#a029074685b6cfcc0431098697f2bc927',1,'tvm::transform::PassContextNode']]],
['requires_5fpadding',['requires_padding',['../structtvm_1_1arith_1_1PaddedIterMapResult.html#abe6ae9224d44ecade7d219901234ebd0',1,'tvm::arith::PaddedIterMapResult']]],
['reserve',['reserve',['../classtvm_1_1runtime_1_1Array.html#a1a7727b86efaf35c58a5198ab1c139c8',1,'tvm::runtime::Array']]],
- ['reset',['reset',['../classtvm_1_1runtime_1_1NDArray.html#af2a8ccab95d432d1ecad7a389e11bcd3',1,'tvm::runtime::NDArray::reset()'],['../classtvm_1_1runtime_1_1ObjectPtr.html#ac4461465ba0e785794794e0405c96590',1,'tvm::runtime::ObjectPtr::reset()'],['../classtvm_1_1auto__scheduler_1_1ProgramMeasurerNode.html#a73b14ea360a9902c291d5bf6e97636cd',1,'tvm::auto_scheduler::ProgramMeasurerNode::Reset()'],['../classtvm_1_1runtime_1_1micro__rpc_1_1Unframer.html#ae6279154fe70e9eb85937b51e70a4bf8',1, [...]
+ ['reset',['Reset',['../classtvm_1_1auto__scheduler_1_1ProgramMeasurerNode.html#a73b14ea360a9902c291d5bf6e97636cd',1,'tvm::auto_scheduler::ProgramMeasurerNode::Reset()'],['../classtvm_1_1runtime_1_1micro__rpc_1_1Unframer.html#ae6279154fe70e9eb85937b51e70a4bf8',1,'tvm::runtime::micro_rpc::Unframer::Reset()'],['../classtvm_1_1runtime_1_1micro__rpc_1_1Framer.html#a44ff9650ecca8785e33c25c369d2570a',1,'tvm::runtime::micro_rpc::Framer::Reset()'],['../classtvm_1_1tir_1_1StmtSRefNode.html#a0a81 [...]
['reset_5fattr',['reset_attr',['../classtvm_1_1OpRegEntry.html#a67628f8d3d6dea5b0a47e462c06b7790',1,'tvm::OpRegEntry']]],
['resetthreadpool',['ResetThreadPool',['../namespacetvm_1_1runtime_1_1threading.html#aafdb21c00248ff146b614a7e888b4fd7',1,'tvm::runtime::threading']]],
['reshape',['reshape',['../namespacetvm_1_1topi.html#a3aad65f2505802109ba7d05359ce9005',1,'tvm::topi']]],
@@ -152,7 +152,7 @@ var searchData=
['resize2dattrs',['Resize2DAttrs',['../structtvm_1_1relay_1_1Resize2DAttrs.html',1,'tvm::relay']]],
['resize3dattrs',['Resize3DAttrs',['../structtvm_1_1relay_1_1Resize3DAttrs.html',1,'tvm::relay']]],
['resolvedependency',['ResolveDependency',['../classtvm_1_1transform_1_1SequentialNode.html#a5549edf77e0a64bd6fcb692603967b8e',1,'tvm::transform::SequentialNode']]],
- ['result',['result',['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#ae0d33229af059c727db2abd3616660e0',1,'tvm::runtime::vm::Instruction::result()'],['../classtvm_1_1tir_1_1CommReducerNode.html#a7030917568a088215da423fc56882814',1,'tvm::tir::CommReducerNode::result()'],['../classtvm_1_1meta__schedule_1_1RunnerFutureNode.html#a1b5438c21c436ce7a864487583fd32b2',1,'tvm::meta_schedule::RunnerFutureNode::Result()']]],
+ ['result',['Result',['../classtvm_1_1meta__schedule_1_1RunnerFutureNode.html#a1b5438c21c436ce7a864487583fd32b2',1,'tvm::meta_schedule::RunnerFutureNode::Result()'],['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#ae0d33229af059c727db2abd3616660e0',1,'tvm::runtime::vm::Instruction::result()'],['../classtvm_1_1tir_1_1CommReducerNode.html#a7030917568a088215da423fc56882814',1,'tvm::tir::CommReducerNode::result()']]],
['result_5f',['result_',['../classtvm_1_1detail_1_1AttrsSEqualVisitor.html#aeda3a91f0b2d1a7a9a075828954ff77f',1,'tvm::detail::AttrsSEqualVisitor']]],
['result_5ftype',['result_type',['../classtvm_1_1TypeFunctor_3_01R_07const_01Type_01_6n_00_01Args_8_8_8_08_4.html#a24d4a3522ee6c4cdeed80dcdcc1424ad',1,'tvm::TypeFunctor< R(const Type &n, Args...)>::result_type()'],['../classtvm_1_1NodeFunctor_3_01R_07const_01ObjectRef_01_6n_00_01Args_8_8_8_08_4.html#ac7f687cb7dda02407b578a6683fa708a',1,'tvm::NodeFunctor< R(const ObjectRef &n, Args...)>::result_type()'],['../classtvm_1_1relay_1_1ExprFunctor_3_01R_07const_01Expr_01_6n [...]
['resulttype',['ResultType',['../structtvm_1_1runtime_1_1Array_1_1ValueConverter.html#a0db77cfd8032391d76dffc88eae8e09b',1,'tvm::runtime::Array::ValueConverter']]],
@@ -182,9 +182,9 @@ var searchData=
['rewritereductionblock',['RewriteReductionBlock',['../classtvm_1_1meta__schedule_1_1Postproc.html#a08348595d8c50afe0167a986e034d616',1,'tvm::meta_schedule::Postproc']]],
['rewritesimplifier',['RewriteSimplifier',['../classtvm_1_1arith_1_1RewriteSimplifier.html',1,'tvm::arith']]],
['rewritetensorize',['RewriteTensorize',['../classtvm_1_1meta__schedule_1_1Postproc.html#a95db036cfced4c2575367a26a41498ff',1,'tvm::meta_schedule::Postproc']]],
- ['rewriteunboundblock',['RewriteUnboundBlock',['../classtvm_1_1meta__schedule_1_1Postproc.html#a190932261c8574b7e85e804938f8ad0d',1,'tvm::meta_schedule::Postproc']]],
+ ['rewriteunboundblock',['RewriteUnboundBlock',['../classtvm_1_1meta__schedule_1_1Postproc.html#a1836b2278bc24fdc227c490896d92980',1,'tvm::meta_schedule::Postproc']]],
['rewriteunsafeselect',['RewriteUnsafeSelect',['../namespacetvm_1_1tir_1_1transform.html#a4fe43327c4454dd05b6e925577443f49',1,'tvm::tir::transform']]],
- ['rfactor',['rfactor',['../classtvm_1_1auto__scheduler_1_1State.html#a21c27b06d439267f8b981fa05c5f48a0',1,'tvm::auto_scheduler::State::rfactor()'],['../classtvm_1_1te_1_1Schedule.html#a34ae85add41bbed0140726d024d08862',1,'tvm::te::Schedule::rfactor()'],['../classtvm_1_1tir_1_1ScheduleNode.html#ab185c8eac1065290d84d58e7f4617232',1,'tvm::tir::ScheduleNode::RFactor()']]],
+ ['rfactor',['RFactor',['../classtvm_1_1tir_1_1ScheduleNode.html#ab185c8eac1065290d84d58e7f4617232',1,'tvm::tir::ScheduleNode::RFactor()'],['../classtvm_1_1auto__scheduler_1_1State.html#a21c27b06d439267f8b981fa05c5f48a0',1,'tvm::auto_scheduler::State::rfactor()'],['../classtvm_1_1te_1_1Schedule.html#a34ae85add41bbed0140726d024d08862',1,'tvm::te::Schedule::rfactor()']]],
['rfactorstep',['RfactorStep',['../classtvm_1_1auto__scheduler_1_1RfactorStep.html',1,'tvm::auto_scheduler::RfactorStep'],['../classtvm_1_1auto__scheduler_1_1RfactorStep.html#a26e6f85b55307f18fab4469e3bd4be0c',1,'tvm::auto_scheduler::RfactorStep::RfactorStep(int stage_id, int iter_id, int factor_iter_id)'],['../classtvm_1_1auto__scheduler_1_1RfactorStep.html#a95575c21441177634178245ab562cb4f',1,'tvm::auto_scheduler::RfactorStep::RfactorStep(dmlc::JSONReader *reader)']]],
['rfactorstepnode',['RfactorStepNode',['../classtvm_1_1auto__scheduler_1_1RfactorStepNode.html',1,'tvm::auto_scheduler']]],
['rhs',['rhs',['../classtvm_1_1relay_1_1ClauseNode.html#a93217eeea15c1f7c1a659da3da86d3bd',1,'tvm::relay::ClauseNode::rhs()'],['../classtvm_1_1tir_1_1CommReducerNode.html#a2902b0d55dd823febc6941fae9f32337',1,'tvm::tir::CommReducerNode::rhs()']]],
diff --git a/docs/reference/api/doxygen/search/all_14.js b/docs/reference/api/doxygen/search/all_14.js
index 76d5246c0..3d341dab7 100644
--- a/docs/reference/api/doxygen/search/all_14.js
+++ b/docs/reference/api/doxygen/search/all_14.js
@@ -254,7 +254,7 @@ var searchData=
['spacegeneratornode',['SpaceGeneratorNode',['../classtvm_1_1meta__schedule_1_1SpaceGeneratorNode.html',1,'tvm::meta_schedule']]],
['spacegeneratorunion',['SpaceGeneratorUnion',['../classtvm_1_1meta__schedule_1_1SpaceGenerator.html#aa13f2244870b18f3e9788d41a400636e',1,'tvm::meta_schedule::SpaceGenerator']]],
['spacetobatchndattrs',['SpaceToBatchNDAttrs',['../structtvm_1_1relay_1_1SpaceToBatchNDAttrs.html',1,'tvm::relay']]],
- ['span',['Span',['../classtvm_1_1Span.html',1,'tvm::Span'],['../classtvm_1_1support_1_1Span.html',1,'tvm::support::Span< T, W >'],['../classtvm_1_1Span.html#a5216631b639e8c802263d87d3fe9e5f6',1,'tvm::Span::Span()'],['../classtvm_1_1support_1_1Span.html#a77653730a2542edf93b7c4413a72f3ec',1,'tvm::support::Span::Span(T *begin, int num_elements)'],['../classtvm_1_1support_1_1Span.html#a3c22dd06856e7029e7107adf38eb72f5',1,'tvm::support::Span::Span(T *begin, T *end)'],['../classtvm_1_1 [...]
+ ['span',['Span',['../classtvm_1_1Span.html',1,'tvm::Span'],['../classtvm_1_1support_1_1Span.html',1,'tvm::support::Span< T, W >'],['../classtvm_1_1AffineTypeNode.html#aa45c91e3c8ebcff609d10f6a921f3fa2',1,'tvm::AffineTypeNode::span()'],['../classtvm_1_1DiagnosticNode.html#af5469f228f87711ad8bd3f4f78f3bb54',1,'tvm::DiagnosticNode::span()'],['../classtvm_1_1DiagnosticBuilder.html#a52d9cc3cb33e655c5d82af47daa74c66',1,'tvm::DiagnosticBuilder::span()'],['../classtvm_1_1CompileError.htm [...]
['span_2eh',['span.h',['../ir_2span_8h.html',1,'(Global Namespace)'],['../support_2span_8h.html',1,'(Global Namespace)']]],
['spannode',['SpanNode',['../classtvm_1_1SpanNode.html',1,'tvm::SpanNode'],['../namespacetvm_1_1relay.html#a7d0fa6578e97d0d64b08865f94f04827',1,'tvm::relay::SpanNode()']]],
['sparse_5flhs',['sparse_lhs',['../structtvm_1_1relay_1_1SparseDenseAttrs.html#ae52d5465cb3421f342607abcc1cb1d5c',1,'tvm::relay::SparseDenseAttrs']]],
@@ -269,7 +269,7 @@ var searchData=
['specialize',['Specialize',['../namespacetvm_1_1tir.html#a69b6f1b0014dc6e7dd390cff746e9782',1,'tvm::tir']]],
['specializedcondition',['SpecializedCondition',['../classtvm_1_1te_1_1SpecializedCondition.html',1,'tvm::te::SpecializedCondition'],['../classtvm_1_1te_1_1SpecializedCondition.html#a48d119ee1c6033929a5592cfc2592e60',1,'tvm::te::SpecializedCondition::SpecializedCondition()']]],
['specializedconditionnode',['SpecializedConditionNode',['../classtvm_1_1te_1_1SpecializedConditionNode.html',1,'tvm::te']]],
- ['split',['Split',['../classtvm_1_1te_1_1Split.html',1,'tvm::te::Split'],['../classtvm_1_1auto__scheduler_1_1State.html#a5815f21fc90ba7cc379c2410c05ab54c',1,'tvm::auto_scheduler::State::split()'],['../classtvm_1_1te_1_1Stage.html#a5a7cd562be59b68a187ad97085a3425d',1,'tvm::te::Stage::split()'],['../classtvm_1_1te_1_1Split.html#a328e0c093ce5b41ebaf33e0e80592764',1,'tvm::te::Split::Split()'],['../classtvm_1_1tir_1_1Layout.html#ad7657af7789fe040d3224c0149976bb4',1,'tvm::tir::Layout::Split( [...]
+ ['split',['Split',['../classtvm_1_1te_1_1Split.html',1,'tvm::te::Split'],['../classtvm_1_1te_1_1Split.html#a328e0c093ce5b41ebaf33e0e80592764',1,'tvm::te::Split::Split()'],['../classtvm_1_1tir_1_1Layout.html#ad7657af7789fe040d3224c0149976bb4',1,'tvm::tir::Layout::Split()'],['../classtvm_1_1tir_1_1ScheduleNode.html#af8a330c32b06dc16c8835c76177ffa11',1,'tvm::tir::ScheduleNode::Split()'],['../classtvm_1_1auto__scheduler_1_1State.html#a5815f21fc90ba7cc379c2410c05ab54c',1,'tvm::auto_schedule [...]
['split_5fby_5fnparts',['split_by_nparts',['../classtvm_1_1te_1_1Stage.html#a51432f38d9ec4792a2525023179ae604',1,'tvm::te::Stage']]],
['split_5fsections',['split_sections',['../namespacetvm_1_1topi.html#acc643e2ed166fa2ed82a95853e145619',1,'tvm::topi']]],
['splitargs',['SplitArgs',['../namespacetvm_1_1relay_1_1transform.html#a2425d757b896168a109498e8d34ba960',1,'tvm::relay::transform']]],
@@ -316,7 +316,7 @@ var searchData=
['startmessage',['StartMessage',['../classtvm_1_1runtime_1_1micro__rpc_1_1Session.html#acd512b977c6dd888f90c4fd6d2b9500f',1,'tvm::runtime::micro_rpc::Session']]],
['startpacket',['StartPacket',['../classtvm_1_1runtime_1_1micro__rpc_1_1Framer.html#ade10d3bd3a26e3b7af881ae134e9a998',1,'tvm::runtime::micro_rpc::Framer']]],
['startsession',['StartSession',['../classtvm_1_1runtime_1_1micro__rpc_1_1Session.html#a15d3f9ecb8b22bf2d330f6f0a16c5239',1,'tvm::runtime::micro_rpc::Session']]],
- ['state',['State',['../classtvm_1_1auto__scheduler_1_1State.html',1,'tvm::auto_scheduler::State'],['../classtvm_1_1auto__scheduler_1_1State.html#a9e8198b1f51b42cfbbee4b9f42160749',1,'tvm::auto_scheduler::State::State()'],['../classtvm_1_1auto__scheduler_1_1MeasureInputNode.html#afb23aaf6133189687d2541ec6e1352f4',1,'tvm::auto_scheduler::MeasureInputNode::state()'],['../classtvm_1_1tir_1_1ScheduleNode.html#abb3612c2598fa2d3ee0e6e3fc3de8a26',1,'tvm::tir::ScheduleNode::state()']]],
+ ['state',['State',['../classtvm_1_1auto__scheduler_1_1State.html',1,'tvm::auto_scheduler::State'],['../classtvm_1_1auto__scheduler_1_1MeasureInputNode.html#afb23aaf6133189687d2541ec6e1352f4',1,'tvm::auto_scheduler::MeasureInputNode::state()'],['../classtvm_1_1tir_1_1ScheduleNode.html#abb3612c2598fa2d3ee0e6e3fc3de8a26',1,'tvm::tir::ScheduleNode::state()'],['../classtvm_1_1auto__scheduler_1_1State.html#a9e8198b1f51b42cfbbee4b9f42160749',1,'tvm::auto_scheduler::State::State()']]],
['state_2eh',['state.h',['../state_8h.html',1,'']]],
['state_5fplaceholder',['state_placeholder',['../classtvm_1_1te_1_1ScanOpNode.html#a69105f6a84dd4fb912a16bfaa68aebf6',1,'tvm::te::ScanOpNode']]],
['statenode',['StateNode',['../classtvm_1_1auto__scheduler_1_1StateNode.html',1,'tvm::auto_scheduler']]],
@@ -346,9 +346,9 @@ var searchData=
['stmtsref',['StmtSRef',['../classtvm_1_1tir_1_1StmtSRef.html',1,'tvm::tir::StmtSRef'],['../classtvm_1_1tir_1_1StmtSRef.html#a31687ace5dc4fe487ffb87d658d86412',1,'tvm::tir::StmtSRef::StmtSRef()']]],
['stmtsrefnode',['StmtSRefNode',['../classtvm_1_1tir_1_1StmtSRefNode.html',1,'tvm::tir']]],
['stmtvisitor',['StmtVisitor',['../classtvm_1_1tir_1_1StmtVisitor.html',1,'tvm::tir']]],
- ['stop',['Stop',['../classtvm_1_1runtime_1_1TimerNode.html#a67eb764f2c9e3fb7c2708f01c0c35683',1,'tvm::runtime::TimerNode::Stop()'],['../classtvm_1_1runtime_1_1profiling_1_1MetricCollectorNode.html#aca9679dd49dfbc886b9dc99539cbf0e6',1,'tvm::runtime::profiling::MetricCollectorNode::Stop()'],['../classtvm_1_1runtime_1_1profiling_1_1Profiler.html#aa2000d8cd1970b5d29139ab1831394f0',1,'tvm::runtime::profiling::Profiler::Stop()'],['../structtvm_1_1relay_1_1ArangeAttrs.html#a1eadf1f3964ca83dad [...]
+ ['stop',['stop',['../structtvm_1_1relay_1_1ArangeAttrs.html#a1eadf1f3964ca83dade8edeae7d6d7cf',1,'tvm::relay::ArangeAttrs::stop()'],['../classtvm_1_1runtime_1_1TimerNode.html#a67eb764f2c9e3fb7c2708f01c0c35683',1,'tvm::runtime::TimerNode::Stop()'],['../classtvm_1_1runtime_1_1profiling_1_1MetricCollectorNode.html#aca9679dd49dfbc886b9dc99539cbf0e6',1,'tvm::runtime::profiling::MetricCollectorNode::Stop()'],['../classtvm_1_1runtime_1_1profiling_1_1Profiler.html#aa2000d8cd1970b5d29139ab18313 [...]
['stopcall',['StopCall',['../classtvm_1_1runtime_1_1profiling_1_1Profiler.html#ad5e6a8e8c9d915c80f494138eedfec3f',1,'tvm::runtime::profiling::Profiler']]],
- ['storage',['Storage',['../classtvm_1_1runtime_1_1vm_1_1Storage.html',1,'tvm::runtime::vm::Storage'],['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#a3412cabd3b4f42f106f56fc22257f6ca',1,'tvm::runtime::vm::Instruction::storage()'],['../classtvm_1_1runtime_1_1vm_1_1Storage.html#aff0c1264864e6205cfa468f069f62f55',1,'tvm::runtime::vm::Storage::Storage()']]],
+ ['storage',['Storage',['../classtvm_1_1runtime_1_1vm_1_1Storage.html',1,'tvm::runtime::vm::Storage'],['../classtvm_1_1runtime_1_1vm_1_1Storage.html#aff0c1264864e6205cfa468f069f62f55',1,'tvm::runtime::vm::Storage::Storage()'],['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#a3412cabd3b4f42f106f56fc22257f6ca',1,'tvm::runtime::vm::Instruction::storage()']]],
['storage_5falign',['storage_align',['../classtvm_1_1auto__scheduler_1_1State.html#ab006690418e43cc9b7ad021c02657ed6',1,'tvm::auto_scheduler::State::storage_align()'],['../classtvm_1_1te_1_1Stage.html#aa73e3a269d84c3b4f0a1994371d67bab',1,'tvm::te::Stage::storage_align()']]],
['storage_5falignment',['storage_alignment',['../namespacetvm_1_1tir_1_1attr.html#af27d464f2065dc5f77408df7b94d4bb6',1,'tvm::tir::attr']]],
['storage_5fid',['storage_id',['../structTVMGraphExecutorGraphAttr.html#a8a0d6d05adcffbf499aafb6a6700c400',1,'TVMGraphExecutorGraphAttr']]],
diff --git a/docs/reference/api/doxygen/search/all_15.js b/docs/reference/api/doxygen/search/all_15.js
index 2326fc4e3..1824ebc80 100644
--- a/docs/reference/api/doxygen/search/all_15.js
+++ b/docs/reference/api/doxygen/search/all_15.js
@@ -64,7 +64,7 @@ var searchData=
['te',['te',['../namespacetvm_1_1te.html',1,'tvm']]],
['tempexpr',['TempExpr',['../classtvm_1_1relay_1_1TempExpr.html',1,'tvm::relay']]],
['tempexprnode',['TempExprNode',['../classtvm_1_1relay_1_1TempExprNode.html',1,'tvm::relay']]],
- ['tensor',['Tensor',['../classtvm_1_1te_1_1Tensor.html',1,'tvm::te::Tensor'],['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#a22de469ea5521ba12e14f1e8181bae56',1,'tvm::runtime::vm::Instruction::tensor()'],['../classtvm_1_1te_1_1Tensor.html#afc8d8e74d1c840359661b39514d6fecf',1,'tvm::te::Tensor::Tensor()']]],
+ ['tensor',['Tensor',['../classtvm_1_1te_1_1Tensor.html',1,'tvm::te::Tensor'],['../classtvm_1_1te_1_1Tensor.html#afc8d8e74d1c840359661b39514d6fecf',1,'tvm::te::Tensor::Tensor()'],['../structtvm_1_1runtime_1_1vm_1_1Instruction.html#a22de469ea5521ba12e14f1e8181bae56',1,'tvm::runtime::vm::Instruction::tensor()']]],
['tensor_2eh',['tensor.h',['../tensor_8h.html',1,'']]],
['tensor_5fintrin',['tensor_intrin',['../classtvm_1_1te_1_1IterVarAttrNode.html#a6a0d96bbebfd716f851b2ad01738cb3f',1,'tvm::te::IterVarAttrNode']]],
['tensor_5fintrin_2eh',['tensor_intrin.h',['../tensor__intrin_8h.html',1,'']]],
@@ -146,7 +146,7 @@ var searchData=
['touchtask',['TouchTask',['../classtvm_1_1meta__schedule_1_1TaskSchedulerNode.html#af6fa276674945d3432c129bdf9cea599',1,'tvm::meta_schedule::TaskSchedulerNode::TouchTask()'],['../classtvm_1_1meta__schedule_1_1PyTaskSchedulerNode.html#a7de09f81c8aceb580b43107f266e6b40',1,'tvm::meta_schedule::PyTaskSchedulerNode::TouchTask()']]],
['tovar',['ToVar',['../classtvm_1_1tir_1_1AnyNode.html#ae01ebbba2378afb6509a22de97f8fb30',1,'tvm::tir::AnyNode']]],
['tparent',['TParent',['../classtvm_1_1OpAttrMap.html#a316480ca7450209650fc1a62f7ce4a14',1,'tvm::OpAttrMap::TParent()'],['../classtvm_1_1TargetKindAttrMap.html#a37eb6bfb0d881cf897147b17ff7d3265',1,'tvm::TargetKindAttrMap::TParent()']]],
- ['trace',['Trace',['../classtvm_1_1tir_1_1Trace.html',1,'tvm::tir::Trace'],['../classtvm_1_1meta__schedule_1_1TuningRecordNode.html#a8cc2d64f796593a1a774eef259f17b29',1,'tvm::meta_schedule::TuningRecordNode::trace()'],['../classtvm_1_1tir_1_1ScheduleNode.html#a953bca4123b5a758adfdcd65634a5f3b',1,'tvm::tir::ScheduleNode::trace()'],['../classtvm_1_1tir_1_1Trace.html#a8e09abffd0b9b1afac7b832cf16c142d',1,'tvm::tir::Trace::Trace()'],['../classtvm_1_1tir_1_1Trace.html#af79bccf1bde25efea387bb [...]
+ ['trace',['Trace',['../classtvm_1_1tir_1_1Trace.html',1,'tvm::tir::Trace'],['../classtvm_1_1tir_1_1Trace.html#a8e09abffd0b9b1afac7b832cf16c142d',1,'tvm::tir::Trace::Trace()'],['../classtvm_1_1tir_1_1Trace.html#af79bccf1bde25efea387bb1b82dacaa6',1,'tvm::tir::Trace::Trace(Array< Instruction > insts, Map< Instruction, ObjectRef > decisions)'],['../classtvm_1_1meta__schedule_1_1TuningRecordNode.html#a8cc2d64f796593a1a774eef259f17b29',1,'tvm::meta_schedule::TuningRecordNode::tra [...]
['trace_2eh',['trace.h',['../trace_8h.html',1,'']]],
['traced',['Traced',['../classtvm_1_1tir_1_1Schedule.html#a295d432b86621101f67b20fadb367b91',1,'tvm::tir::Schedule']]],
['tracenode',['TraceNode',['../classtvm_1_1tir_1_1TraceNode.html',1,'tvm::tir']]],
@@ -184,7 +184,7 @@ var searchData=
['tuningoptionsnode',['TuningOptionsNode',['../classtvm_1_1auto__scheduler_1_1TuningOptionsNode.html',1,'tvm::auto_scheduler']]],
['tuningrecord',['TuningRecord',['../classtvm_1_1meta__schedule_1_1TuningRecord.html',1,'tvm::meta_schedule::TuningRecord'],['../classtvm_1_1meta__schedule_1_1TuningRecord.html#a8495d5cbf2d11eaca3a5c1f6a25e5ea7',1,'tvm::meta_schedule::TuningRecord::TuningRecord()']]],
['tuningrecordnode',['TuningRecordNode',['../classtvm_1_1meta__schedule_1_1TuningRecordNode.html',1,'tvm::meta_schedule']]],
- ['tuple',['Tuple',['../classtvm_1_1relay_1_1Tuple.html',1,'tvm::relay::Tuple'],['../classtvm_1_1relay_1_1TupleGetItemPatternNode.html#a1fdd79b2fbbf3d7a14cea7e7efc38574',1,'tvm::relay::TupleGetItemPatternNode::tuple()'],['../classtvm_1_1relay_1_1TupleGetItemNode.html#aade4882f84d828975c689b5c6b1b68e6',1,'tvm::relay::TupleGetItemNode::tuple()'],['../classtvm_1_1relay_1_1Tuple.html#a284e236318986fd385a02aa68bd3e938',1,'tvm::relay::Tuple::Tuple()'],['../classtvm_1_1runtime_1_1ADT.html#a871 [...]
+ ['tuple',['Tuple',['../classtvm_1_1relay_1_1Tuple.html',1,'tvm::relay::Tuple'],['../classtvm_1_1relay_1_1Tuple.html#a284e236318986fd385a02aa68bd3e938',1,'tvm::relay::Tuple::Tuple()'],['../classtvm_1_1runtime_1_1ADT.html#a871e902541f0a7e550e74ae0c621994c',1,'tvm::runtime::ADT::Tuple()'],['../classtvm_1_1relay_1_1TupleGetItemPatternNode.html#a1fdd79b2fbbf3d7a14cea7e7efc38574',1,'tvm::relay::TupleGetItemPatternNode::tuple()'],['../classtvm_1_1relay_1_1TupleGetItemNode.html#aade4882f84d828 [...]
['tupleaffinetype',['TupleAffineType',['../classtvm_1_1TupleAffineType.html',1,'tvm::TupleAffineType'],['../classtvm_1_1TupleAffineType.html#afced247570984fed7386c147d02efb79',1,'tvm::TupleAffineType::TupleAffineType()']]],
['tupleaffinetypenode',['TupleAffineTypeNode',['../classtvm_1_1TupleAffineTypeNode.html',1,'tvm']]],
['tuplegetitem',['TupleGetItem',['../classtvm_1_1relay_1_1TupleGetItem.html',1,'tvm::relay::TupleGetItem'],['../classtvm_1_1relay_1_1TupleGetItem.html#a744f50341d00e504ae4d677723433b7c',1,'tvm::relay::TupleGetItem::TupleGetItem()']]],
diff --git a/docs/reference/api/doxygen/search/all_16.js b/docs/reference/api/doxygen/search/all_16.js
index fe85be015..e5333f884 100644
--- a/docs/reference/api/doxygen/search/all_16.js
+++ b/docs/reference/api/doxygen/search/all_16.js
@@ -15,15 +15,15 @@ var searchData=
['unionregion',['UnionRegion',['../namespacetvm_1_1arith.html#ad27c4f216e41eb8e81296fb7ec4b9453',1,'tvm::arith']]],
['unionregionlowerbound',['UnionRegionLowerBound',['../namespacetvm_1_1arith.html#a4c3dedfa4cba4ad39c953eb51eb83e4d',1,'tvm::arith']]],
['unipolar',['unipolar',['../structtvm_1_1relay_1_1BinaryConv2DAttrs.html#a7e0ad68dce226079b769a678aa01dc49',1,'tvm::relay::BinaryConv2DAttrs::unipolar()'],['../structtvm_1_1relay_1_1BinaryDenseAttrs.html#af21cdb9dac67ab9ecea5a19642658d8a',1,'tvm::relay::BinaryDenseAttrs::unipolar()']]],
- ['unique',['unique',['../classtvm_1_1runtime_1_1Object.html#afd548730a6139d19fe24473ad66026d7',1,'tvm::runtime::Object::unique()'],['../classtvm_1_1runtime_1_1ObjectPtr.html#af95c6c6fcd89da0f62b93f1167b72314',1,'tvm::runtime::ObjectPtr::unique()'],['../classtvm_1_1runtime_1_1ObjectRef.html#a4e7cdb1574b93a59e784d70aa47b8da7',1,'tvm::runtime::ObjectRef::unique()'],['../classtvm_1_1VirtualDeviceCache.html#a25ba1351484aa58a2cc7cef8f8e4423c',1,'tvm::VirtualDeviceCache::Unique()']]],
+ ['unique',['Unique',['../classtvm_1_1VirtualDeviceCache.html#a25ba1351484aa58a2cc7cef8f8e4423c',1,'tvm::VirtualDeviceCache::Unique()'],['../classtvm_1_1runtime_1_1Object.html#afd548730a6139d19fe24473ad66026d7',1,'tvm::runtime::Object::unique()'],['../classtvm_1_1runtime_1_1ObjectPtr.html#af95c6c6fcd89da0f62b93f1167b72314',1,'tvm::runtime::ObjectPtr::unique()'],['../classtvm_1_1runtime_1_1ObjectRef.html#a4e7cdb1574b93a59e784d70aa47b8da7',1,'tvm::runtime::ObjectRef::unique()']]],
['uniqueattrs',['UniqueAttrs',['../structtvm_1_1relay_1_1UniqueAttrs.html',1,'tvm::relay']]],
['unit_5fbits',['unit_bits',['../classtvm_1_1MemoryInfoNode.html#aa935f1ee9d8d2f06633ca4b3c44f7725',1,'tvm::MemoryInfoNode']]],
['units',['units',['../structtvm_1_1relay_1_1BinaryDenseAttrs.html#a5373b2f2aac19653ae21aec74c69cdb0',1,'tvm::relay::BinaryDenseAttrs::units()'],['../structtvm_1_1relay_1_1MatmulAttrs.html#a5893df9ad99c6717c4e6cb440d60c6a1',1,'tvm::relay::MatmulAttrs::units()'],['../structtvm_1_1relay_1_1DenseAttrs.html#a497487f7ccced8c7492a5ed03f78fa8f',1,'tvm::relay::DenseAttrs::units()'],['../structtvm_1_1relay_1_1DensePackAttrs.html#aa0096c26c832166de13881a032ba3fbf',1,'tvm::relay::DensePackAttrs:: [...]
['unmatchedcases',['UnmatchedCases',['../namespacetvm_1_1relay.html#aa3a8cace40f8056fd6412f39c3eaa605',1,'tvm::relay']]],
['unravel_5findex',['unravel_index',['../namespacetvm_1_1topi.html#a8811a02532bbe3047986bf1a8449ac0e',1,'tvm::topi']]],
- ['unroll',['unroll',['../classtvm_1_1auto__scheduler_1_1State.html#aa68a9d2e226bae38a36e4be4af1d1ae4',1,'tvm::auto_scheduler::State::unroll()'],['../classtvm_1_1te_1_1Stage.html#af83ad8672660403504f472228b044b33',1,'tvm::te::Stage::unroll()'],['../classtvm_1_1tir_1_1ScheduleNode.html#a84ec742f6295f59390592a6d0d90a552',1,'tvm::tir::ScheduleNode::Unroll()']]],
+ ['unroll',['Unroll',['../classtvm_1_1tir_1_1ScheduleNode.html#a84ec742f6295f59390592a6d0d90a552',1,'tvm::tir::ScheduleNode::Unroll()'],['../classtvm_1_1auto__scheduler_1_1State.html#aa68a9d2e226bae38a36e4be4af1d1ae4',1,'tvm::auto_scheduler::State::unroll()'],['../classtvm_1_1te_1_1Stage.html#af83ad8672660403504f472228b044b33',1,'tvm::te::Stage::unroll()']]],
['unrollloop',['UnrollLoop',['../namespacetvm_1_1tir_1_1transform.html#ab2f279e91071fa96a1edb24fa004ea6a',1,'tvm::tir::transform']]],
- ['update',['update',['../classtvm_1_1te_1_1ScanOpNode.html#ace2bf7e43cd4197324ec6363626fc60a',1,'tvm::te::ScanOpNode::update()'],['../classtvm_1_1arith_1_1ConstIntBoundAnalyzer.html#a5ae0699196c4bbc754bbdd4c3a6c7ca7',1,'tvm::arith::ConstIntBoundAnalyzer::Update()'],['../classtvm_1_1arith_1_1ModularSetAnalyzer.html#a04156fac580981f3005af3b8e676720d',1,'tvm::arith::ModularSetAnalyzer::Update()'],['../classtvm_1_1arith_1_1RewriteSimplifier.html#a5e6752c0702dc2d3e4235797d9d3ac7b',1,'tvm::a [...]
+ ['update',['Update',['../classtvm_1_1arith_1_1ConstIntBoundAnalyzer.html#a5ae0699196c4bbc754bbdd4c3a6c7ca7',1,'tvm::arith::ConstIntBoundAnalyzer::Update()'],['../classtvm_1_1arith_1_1ModularSetAnalyzer.html#a04156fac580981f3005af3b8e676720d',1,'tvm::arith::ModularSetAnalyzer::Update()'],['../classtvm_1_1arith_1_1RewriteSimplifier.html#a5e6752c0702dc2d3e4235797d9d3ac7b',1,'tvm::arith::RewriteSimplifier::Update()'],['../classtvm_1_1arith_1_1CanonicalSimplifier.html#a790c032e12c7d93e9e940 [...]
['update_5ffunc',['update_func',['../classtvm_1_1auto__scheduler_1_1PythonBasedModelNode.html#ade9364c152a36501d4f24fa4f0111519',1,'tvm::auto_scheduler::PythonBasedModelNode']]],
['updatecostmodel',['UpdateCostModel',['../classtvm_1_1meta__schedule_1_1MeasureCallback.html#afdf5503c6e6f53767de132d91a7b53f9',1,'tvm::meta_schedule::MeasureCallback']]],
['updateiters',['UpdateIters',['../classtvm_1_1auto__scheduler_1_1AttachMap.html#ab45b991ef2bcfb1bc191601aac42e778',1,'tvm::auto_scheduler::AttachMap']]],
diff --git a/docs/reference/api/doxygen/search/all_17.js b/docs/reference/api/doxygen/search/all_17.js
index 078743e83..abbdc70ac 100644
--- a/docs/reference/api/doxygen/search/all_17.js
+++ b/docs/reference/api/doxygen/search/all_17.js
@@ -30,7 +30,7 @@ var searchData=
['vector_5funit_5fbytes',['vector_unit_bytes',['../classtvm_1_1auto__scheduler_1_1HardwareParamsNode.html#a6f2dd9161fdb3233417a9912c8854434',1,'tvm::auto_scheduler::HardwareParamsNode']]],
['vectorcombine',['vectorcombine',['../namespacetvm_1_1tir_1_1builtin.html#a30dff65bc2c142b57fae7f60e378ff43',1,'tvm::tir::builtin']]],
['vectorhigh',['vectorhigh',['../namespacetvm_1_1tir_1_1builtin.html#a45bf65ca7ca01d2016e0b609117d7e25',1,'tvm::tir::builtin']]],
- ['vectorize',['Vectorize',['../classtvm_1_1tir_1_1ScheduleNode.html#ab4a8cd91959ceab22855ec338978bcee',1,'tvm::tir::ScheduleNode::Vectorize()'],['../classtvm_1_1auto__scheduler_1_1State.html#a97b8a21210d63bea241dbab085d89b53',1,'tvm::auto_scheduler::State::vectorize()'],['../classtvm_1_1te_1_1Stage.html#a44d33e3920106e75dc7c68272f880812',1,'tvm::te::Stage::vectorize()']]],
+ ['vectorize',['vectorize',['../classtvm_1_1auto__scheduler_1_1State.html#a97b8a21210d63bea241dbab085d89b53',1,'tvm::auto_scheduler::State::vectorize()'],['../classtvm_1_1te_1_1Stage.html#a44d33e3920106e75dc7c68272f880812',1,'tvm::te::Stage::vectorize()'],['../classtvm_1_1tir_1_1ScheduleNode.html#ab4a8cd91959ceab22855ec338978bcee',1,'tvm::tir::ScheduleNode::Vectorize()']]],
['vectorizeloop',['VectorizeLoop',['../namespacetvm_1_1tir_1_1transform.html#af3cecb50a8b8fc8021f6a87bc27587da',1,'tvm::tir::transform']]],
['vectorizer',['Vectorizer',['../classtvm_1_1tir_1_1BufferLoadNode.html#a842a72b9d02a9f8541b512478932fece',1,'tvm::tir::BufferLoadNode']]],
['vectorjacobianproduct',['VectorJacobianProduct',['../namespacetvm_1_1te.html#a547183f5a311af53ab598faba423fd64',1,'tvm::te']]],
diff --git a/docs/reference/api/doxygen/search/all_18.js b/docs/reference/api/doxygen/search/all_18.js
index 069c85b6c..e9393a8f0 100644
--- a/docs/reference/api/doxygen/search/all_18.js
+++ b/docs/reference/api/doxygen/search/all_18.js
@@ -27,7 +27,7 @@ var searchData=
['withfields',['WithFields',['../namespacetvm_1_1relay.html#acd80501d29e4d951be6746c79934a70c',1,'tvm::relay::WithFields(Clause clause, Optional< Pattern > opt_lhs=Optional< Pattern >(), Optional< Expr > opt_rhs=Optional< Expr >())'],['../namespacetvm_1_1relay.html#adb39b46f86b66a5e7252f6d9102deb7b',1,'tvm::relay::WithFields(Match match, Optional< Expr > opt_data=Optional< Expr >(), Optional< Array< Clause >> opt_clauses=Optional< Arra [...]
['withhost',['WithHost',['../classtvm_1_1Target.html#a509ce63995f082c80742ea5ca6ac112f',1,'tvm::Target']]],
['withoutattr',['WithoutAttr',['../namespacetvm.html#a7e2bc626db8be997b1562c79df3d9e11',1,'tvm']]],
- ['workload',['Workload',['../classtvm_1_1meta__schedule_1_1Workload.html',1,'tvm::meta_schedule::Workload'],['../classtvm_1_1meta__schedule_1_1TuningRecordNode.html#a42c87f1ec62dae6806c3fe9629c5e7f0',1,'tvm::meta_schedule::TuningRecordNode::workload()'],['../classtvm_1_1meta__schedule_1_1Workload.html#a21ccf9c956b82d50a2579f1c0f592fd0',1,'tvm::meta_schedule::Workload::Workload(IRModule mod)'],['../classtvm_1_1meta__schedule_1_1Workload.html#a8880877517679c82ae63520e28d5e1d8',1,'tvm::me [...]
+ ['workload',['Workload',['../classtvm_1_1meta__schedule_1_1Workload.html',1,'tvm::meta_schedule::Workload'],['../classtvm_1_1meta__schedule_1_1Workload.html#a21ccf9c956b82d50a2579f1c0f592fd0',1,'tvm::meta_schedule::Workload::Workload(IRModule mod)'],['../classtvm_1_1meta__schedule_1_1Workload.html#a8880877517679c82ae63520e28d5e1d8',1,'tvm::meta_schedule::Workload::Workload(IRModule mod, THashCode shash)'],['../classtvm_1_1meta__schedule_1_1TuningRecordNode.html#a42c87f1ec62dae6806c3fe9 [...]
['workload_5fkey',['workload_key',['../classtvm_1_1auto__scheduler_1_1SearchTaskNode.html#a20045d677ba2bc5c5ce461e78543b3e2',1,'tvm::auto_scheduler::SearchTaskNode']]],
['workloadequal',['WorkloadEqual',['../structtvm_1_1meta__schedule_1_1WorkloadEqual.html',1,'tvm::meta_schedule']]],
['workloadhash',['WorkloadHash',['../structtvm_1_1meta__schedule_1_1WorkloadHash.html',1,'tvm::meta_schedule']]],
diff --git a/docs/reference/api/doxygen/search/all_2.js b/docs/reference/api/doxygen/search/all_2.js
index fad110616..3580c6101 100644
--- a/docs/reference/api/doxygen/search/all_2.js
+++ b/docs/reference/api/doxygen/search/all_2.js
@@ -406,6 +406,7 @@ var searchData=
['auto_5fscheduler_5flog_5fversion',['AUTO_SCHEDULER_LOG_VERSION',['../namespacetvm_1_1auto__scheduler.html#a04029f3b293fba232218203167b2ef63',1,'tvm::auto_scheduler']]],
['auto_5fscheduler_5frewritten_5flayout',['auto_scheduler_rewritten_layout',['../structtvm_1_1relay_1_1Conv2DAttrs.html#a746bf2d7d6cba18f148976c157d37ee6',1,'tvm::relay::Conv2DAttrs::auto_scheduler_rewritten_layout()'],['../structtvm_1_1relay_1_1Conv2DWinogradAttrs.html#a6022bae6fb6094e482da7c1bef8a8786',1,'tvm::relay::Conv2DWinogradAttrs::auto_scheduler_rewritten_layout()'],['../structtvm_1_1relay_1_1Conv3DAttrs.html#a1e8cd06bfb663505e3c861899604e9a6',1,'tvm::relay::Conv3DAttrs::auto_ [...]
['auto_5funroll_5fmax_5fstep',['auto_unroll_max_step',['../structtvm_1_1auto__scheduler_1_1StageAttributes.html#a7bd83956ace4ae7f5112b85a2416adf7',1,'tvm::auto_scheduler::StageAttributes']]],
+ ['autobind',['AutoBind',['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#a4180c6e940445e79ada08325b2dba7a8',1,'tvm::meta_schedule::ScheduleRule']]],
['autodiff_2eh',['autodiff.h',['../autodiff_8h.html',1,'']]],
['autoinline',['AutoInline',['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#a73a8c07ad4fa26d5c3e28f33c2215f1d',1,'tvm::meta_schedule::ScheduleRule']]],
['autoinlinebroarcast',['AutoInlineBroarcast',['../namespacetvm_1_1te.html#a5b53b71371b86f6309d58ddf1f90a2f2',1,'tvm::te']]],
diff --git a/docs/reference/api/doxygen/search/all_e.js b/docs/reference/api/doxygen/search/all_e.js
index e92cda9c0..0c885ce9b 100644
--- a/docs/reference/api/doxygen/search/all_e.js
+++ b/docs/reference/api/doxygen/search/all_e.js
@@ -62,7 +62,7 @@ var searchData=
['matmulattrs',['MatmulAttrs',['../structtvm_1_1relay_1_1MatmulAttrs.html',1,'tvm::relay']]],
['matrix_5fset_5fdiag',['matrix_set_diag',['../namespacetvm_1_1topi.html#aead477c6c9d4f4589d22b8acff82040c',1,'tvm::topi']]],
['matrixsetdiagattrs',['MatrixSetDiagAttrs',['../structtvm_1_1relay_1_1MatrixSetDiagAttrs.html',1,'tvm::relay']]],
- ['max',['Max',['../classtvm_1_1tir_1_1Max.html',1,'tvm::tir::Max'],['../classtvm_1_1arith_1_1IntSet.html#ac215840d3e9fb2817f1e5648e31317c5',1,'tvm::arith::IntSet::max()'],['../classtvm_1_1support_1_1LinearCongruentialEngine.html#a2c5ea87b1155aa7810e0beb3b69b955b',1,'tvm::support::LinearCongruentialEngine::max()'],['../classtvm_1_1tir_1_1Max.html#a7dff11b4dea01bfc7a03eacd077f0729',1,'tvm::tir::Max::Max()'],['../namespacetvm.html#a0df5ca82d2c566f628ebb2f1e84a3fcb',1,'tvm::max(PrimExpr a, [...]
+ ['max',['Max',['../classtvm_1_1tir_1_1Max.html',1,'tvm::tir::Max'],['../classtvm_1_1tir_1_1Max.html#a7dff11b4dea01bfc7a03eacd077f0729',1,'tvm::tir::Max::Max()'],['../classtvm_1_1arith_1_1IntSet.html#ac215840d3e9fb2817f1e5648e31317c5',1,'tvm::arith::IntSet::max()'],['../classtvm_1_1support_1_1LinearCongruentialEngine.html#a2c5ea87b1155aa7810e0beb3b69b955b',1,'tvm::support::LinearCongruentialEngine::max()'],['../namespacetvm.html#a0df5ca82d2c566f628ebb2f1e84a3fcb',1,'tvm::max(PrimExpr a, [...]
['max_5fcontinuous_5ferror',['max_continuous_error',['../classtvm_1_1auto__scheduler_1_1ProgramMeasurerNode.html#abdc38da91bcdf77be765c1e3d5af3648',1,'tvm::auto_scheduler::ProgramMeasurerNode']]],
['max_5fdisplacement',['max_displacement',['../structtvm_1_1relay_1_1CorrelationAttrs.html#ad1d16e2ba537736c8baee2553e1e32bf',1,'tvm::relay::CorrelationAttrs']]],
['max_5ffunctions',['max_functions',['../structTVMMutableFuncRegistry.html#a41745f8e0f73f8e4fb2074f5b154b49c',1,'TVMMutableFuncRegistry']]],
@@ -157,7 +157,7 @@ var searchData=
['microtvmruntimegetoutput',['MicroTVMRuntimeGetOutput',['../microtvm__runtime_8h.html#a76129be7b6de972791a3f9a1b312acfa',1,'microtvm_runtime.h']]],
['microtvmruntimerun',['MicroTVMRuntimeRun',['../microtvm__runtime_8h.html#ac43a544f675dd716e8c279c3e41f6e45',1,'microtvm_runtime.h']]],
['microtvmruntimesetinput',['MicroTVMRuntimeSetInput',['../microtvm__runtime_8h.html#aa593edc600f4356f2b560702aa01b113',1,'microtvm_runtime.h']]],
- ['min',['Min',['../classtvm_1_1tir_1_1Min.html',1,'tvm::tir::Min'],['../classtvm_1_1tir_1_1Min.html#a3a4403aec40029a5206e22cd334e356b',1,'tvm::tir::Min::Min()'],['../classtvm_1_1RangeNode.html#a43d2fb12bb61cf05936a1972d0158b49',1,'tvm::RangeNode::min()'],['../classtvm_1_1tir_1_1ForNode.html#a1d1aa2006328bd84e4911f6d43ceca5c',1,'tvm::tir::ForNode::min()'],['../classtvm_1_1arith_1_1IntSet.html#ae5517de2862e93a801224eed98a57001',1,'tvm::arith::IntSet::min()'],['../classtvm_1_1support_1_1L [...]
+ ['min',['Min',['../classtvm_1_1tir_1_1Min.html',1,'tvm::tir::Min'],['../classtvm_1_1RangeNode.html#a43d2fb12bb61cf05936a1972d0158b49',1,'tvm::RangeNode::min()'],['../classtvm_1_1tir_1_1ForNode.html#a1d1aa2006328bd84e4911f6d43ceca5c',1,'tvm::tir::ForNode::min()'],['../classtvm_1_1arith_1_1IntSet.html#ae5517de2862e93a801224eed98a57001',1,'tvm::arith::IntSet::min()'],['../classtvm_1_1support_1_1LinearCongruentialEngine.html#aec5f11b588fa3a12294a46c945c34411',1,'tvm::support::LinearCongrue [...]
['min_5frepeat_5fms',['min_repeat_ms',['../classtvm_1_1auto__scheduler_1_1ProgramRunnerNode.html#a39a865216db9ed6f57dfb22160cae1ff',1,'tvm::auto_scheduler::ProgramRunnerNode']]],
['min_5fvalue',['min_value',['../classtvm_1_1arith_1_1ConstIntBoundNode.html#a0761897bf16ab73b848bf360e9b195a3',1,'tvm::arith::ConstIntBoundNode::min_value()'],['../namespacetvm.html#a3b37fa55ea93d6868751a2441996b072',1,'tvm::min_value()']]],
['minimum',['minimum',['../namespacetvm_1_1topi.html#a7ac1dc0d99ce93090a4cdf90ab19d4b8',1,'tvm::topi::minimum(const tvm::PrimExpr &a, const tvm::PrimExpr &b)'],['../namespacetvm_1_1topi.html#a0e19dc06a2b1ecbb83b0942fdf836169',1,'tvm::topi::minimum(const tvm::te::Tensor &A, const tvm::te::Tensor &B, std::string name="T_" "minimum", std::string tag=kBroadcast)'],['../namespacetvm_1_1topi.html#a28d4ef4b3426bff237215ce356dd5681',1,'tvm::topi::minimum(con [...]
@@ -201,6 +201,7 @@ var searchData=
['mutatebyapply',['MutateByApply',['../classtvm_1_1runtime_1_1Array.html#a127d022f391a566b51abf16ce4bd74af',1,'tvm::runtime::Array']]],
['mutatecomputelocation',['MutateComputeLocation',['../classtvm_1_1meta__schedule_1_1Mutator.html#a2f706028c59f1c2d5a87ae58785b79c9',1,'tvm::meta_schedule::Mutator']]],
['mutateparallel',['MutateParallel',['../classtvm_1_1meta__schedule_1_1Mutator.html#acb242cfc6875055d75f7ea7adcfa9c14',1,'tvm::meta_schedule::Mutator']]],
+ ['mutatethreadbinding',['MutateThreadBinding',['../classtvm_1_1meta__schedule_1_1Mutator.html#a008b237e2c944cc25c123ef412dcd397',1,'tvm::meta_schedule::Mutator']]],
['mutatetilesize',['MutateTileSize',['../classtvm_1_1meta__schedule_1_1Mutator.html#a84ed21cbc627ff6dd49f983a05113696',1,'tvm::meta_schedule::Mutator']]],
['mutateunroll',['MutateUnroll',['../classtvm_1_1meta__schedule_1_1Mutator.html#a5bedfb467944180740728c76ba39312f',1,'tvm::meta_schedule::Mutator']]],
['mutator',['Mutator',['../classtvm_1_1meta__schedule_1_1Mutator.html',1,'tvm::meta_schedule']]],
diff --git a/docs/reference/api/doxygen/search/functions_1.js b/docs/reference/api/doxygen/search/functions_1.js
index 6d5642cf8..5da4190f3 100644
--- a/docs/reference/api/doxygen/search/functions_1.js
+++ b/docs/reference/api/doxygen/search/functions_1.js
@@ -109,6 +109,7 @@ var searchData=
['attrswithdefaultvalues',['AttrsWithDefaultValues',['../namespacetvm.html#a2e3193a20ee748b08d5a528275859dbe',1,'tvm']]],
['attrtriggernondefaultentry',['AttrTriggerNonDefaultEntry',['../structtvm_1_1detail_1_1AttrTriggerNonDefaultEntry.html#a572356cfd8d20c258b03f7a5c62d3909',1,'tvm::detail::AttrTriggerNonDefaultEntry']]],
['auto_5fscheduler_5flayout_5ftransform',['auto_scheduler_layout_transform',['../namespacetvm_1_1topi.html#a8e10f74deef4f22a9dc4b0a0b4370b08',1,'tvm::topi']]],
+ ['autobind',['AutoBind',['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#a4180c6e940445e79ada08325b2dba7a8',1,'tvm::meta_schedule::ScheduleRule']]],
['autoinline',['AutoInline',['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#a73a8c07ad4fa26d5c3e28f33c2215f1d',1,'tvm::meta_schedule::ScheduleRule']]],
['autoinlinebroarcast',['AutoInlineBroarcast',['../namespacetvm_1_1te.html#a5b53b71371b86f6309d58ddf1f90a2f2',1,'tvm::te']]],
['autoinlineelemwise',['AutoInlineElemWise',['../namespacetvm_1_1te.html#a26ae0c9351036d0f7ca362e3c857d24a',1,'tvm::te']]],
diff --git a/docs/reference/api/doxygen/search/functions_10.js b/docs/reference/api/doxygen/search/functions_10.js
index df5e189ef..a240f30ba 100644
--- a/docs/reference/api/doxygen/search/functions_10.js
+++ b/docs/reference/api/doxygen/search/functions_10.js
@@ -9,7 +9,7 @@ var searchData=
['packimportstollvm',['PackImportsToLLVM',['../namespacetvm_1_1codegen.html#ab2cd2a65bac4b26427a8ca0abe4e0bd6',1,'tvm::codegen']]],
['pad',['Pad',['../namespacetvm_1_1topi.html#a97c798d0a0ec20a95d351618b83d5121',1,'tvm::topi::Pad(const Array< PrimExpr > shape, int odim)'],['../namespacetvm_1_1topi.html#a3305d377f96cd20c23032eeada2756d5',1,'tvm::topi::pad(const tvm::te::Tensor &t, const tvm::Array< tvm::PrimExpr > &pad_before, tvm::Array< tvm::PrimExpr > pad_after=tvm::Array< tvm::PrimExpr >(), PrimExpr pad_value=PrimExpr(), std::string name="T_pad", std::string tag=kElement [...]
['pagememorymanagercreate',['PageMemoryManagerCreate',['../page__allocator_8h.html#a720dbc7474ac13b93fafb974cfc20bc7',1,'page_allocator.h']]],
- ['parallel',['Parallel',['../classtvm_1_1tir_1_1ScheduleNode.html#a553dc17c0b49b175cd16881c81b6c789',1,'tvm::tir::ScheduleNode::Parallel()'],['../classtvm_1_1auto__scheduler_1_1State.html#a2376f0180bc5b5dd4b456f2a75d4a366',1,'tvm::auto_scheduler::State::parallel()'],['../classtvm_1_1te_1_1Stage.html#a60a6be10a1a96cb594c1399efabafef3',1,'tvm::te::Stage::parallel()']]],
+ ['parallel',['parallel',['../classtvm_1_1auto__scheduler_1_1State.html#a2376f0180bc5b5dd4b456f2a75d4a366',1,'tvm::auto_scheduler::State::parallel()'],['../classtvm_1_1te_1_1Stage.html#a60a6be10a1a96cb594c1399efabafef3',1,'tvm::te::Stage::parallel()'],['../classtvm_1_1tir_1_1ScheduleNode.html#a553dc17c0b49b175cd16881c81b6c789',1,'tvm::tir::ScheduleNode::Parallel()']]],
['parallel_5ffor',['parallel_for',['../namespacetvm_1_1support.html#a8bf1225e8bb1db575578ca2d645fb23c',1,'tvm::support']]],
['parallel_5ffor_5fdynamic',['parallel_for_dynamic',['../namespacetvm_1_1support.html#afe4271363c794f1644ce7af5c2266530',1,'tvm::support']]],
['parallelizevectorizeunroll',['ParallelizeVectorizeUnroll',['../classtvm_1_1meta__schedule_1_1ScheduleRule.html#a0ef9b604081db7a8bf960f3fbfd3a804',1,'tvm::meta_schedule::ScheduleRule']]],
@@ -67,7 +67,7 @@ var searchData=
['pragmastep',['PragmaStep',['../classtvm_1_1auto__scheduler_1_1PragmaStep.html#a9f3ec96f3e561a14d8d9235c4d46e2eb',1,'tvm::auto_scheduler::PragmaStep::PragmaStep(int stage_id, int iter_id, String pragma_type)'],['../classtvm_1_1auto__scheduler_1_1PragmaStep.html#a7692c2a9934af1f36b218840034a88d5',1,'tvm::auto_scheduler::PragmaStep::PragmaStep(dmlc::JSONReader *reader)']]],
['predict',['Predict',['../classtvm_1_1auto__scheduler_1_1CostModelNode.html#aa337ec72401a957a68b6eb4a96472a2c',1,'tvm::auto_scheduler::CostModelNode::Predict()'],['../classtvm_1_1auto__scheduler_1_1RandomModelNode.html#a09f1d81fd9d9f93fca5f2008ab6054ba',1,'tvm::auto_scheduler::RandomModelNode::Predict()'],['../classtvm_1_1auto__scheduler_1_1PythonBasedModelNode.html#af16befe722e718fea23727469fecea1c',1,'tvm::auto_scheduler::PythonBasedModelNode::Predict()'],['../classtvm_1_1meta__sche [...]
['predictstages',['PredictStages',['../classtvm_1_1auto__scheduler_1_1CostModelNode.html#a213222251099444874698d2e9ff18adc',1,'tvm::auto_scheduler::CostModelNode::PredictStages()'],['../classtvm_1_1auto__scheduler_1_1PythonBasedModelNode.html#a1f9975c4bdd61793b806663a61a9a703',1,'tvm::auto_scheduler::PythonBasedModelNode::PredictStages()']]],
- ['prefetch',['prefetch',['../classtvm_1_1te_1_1Stage.html#a611327890918fb641a8e65396ab9c5f6',1,'tvm::te::Stage::prefetch()'],['../classtvm_1_1tir_1_1Prefetch.html#af462f85dad4268685e3113b6b009d1b2',1,'tvm::tir::Prefetch::Prefetch()'],['../namespacetvm_1_1tir_1_1builtin.html#a543f1fc334d2bc830add972895a03f17',1,'tvm::tir::builtin::prefetch()']]],
+ ['prefetch',['Prefetch',['../classtvm_1_1tir_1_1Prefetch.html#af462f85dad4268685e3113b6b009d1b2',1,'tvm::tir::Prefetch::Prefetch()'],['../classtvm_1_1te_1_1Stage.html#a611327890918fb641a8e65396ab9c5f6',1,'tvm::te::Stage::prefetch()'],['../namespacetvm_1_1tir_1_1builtin.html#a543f1fc334d2bc830add972895a03f17',1,'tvm::tir::builtin::prefetch()']]],
['prefetchnode',['PrefetchNode',['../classtvm_1_1tir_1_1PrefetchNode.html#acaaa5e89462c7edf3019df4283ec74db',1,'tvm::tir::PrefetchNode::PrefetchNode()=default'],['../classtvm_1_1tir_1_1PrefetchNode.html#a73ef244c364b9c7efaee36e6bec746e7',1,'tvm::tir::PrefetchNode::PrefetchNode(Buffer buffer, Array< Range > bounds, Span span=Span())']]],
['preloadmeasuredstates',['PreloadMeasuredStates',['../classtvm_1_1auto__scheduler_1_1PreloadMeasuredStates.html#a67daf1ccd25a208fdf8d001f9a31d86b',1,'tvm::auto_scheduler::PreloadMeasuredStates::PreloadMeasuredStates()'],['../classtvm_1_1auto__scheduler_1_1SearchPolicyNode.html#abc2529d0b1cd485876e48037dd19dde1',1,'tvm::auto_scheduler::SearchPolicyNode::PreloadMeasuredStates()']]],
['prelu',['prelu',['../namespacetvm_1_1topi.html#a315c34bbe2bf1be4c778acae08c906fc',1,'tvm::topi']]],
diff --git a/docs/reference/api/doxygen/search/functions_12.js b/docs/reference/api/doxygen/search/functions_12.js
index e6df78d46..20de8cb06 100644
--- a/docs/reference/api/doxygen/search/functions_12.js
+++ b/docs/reference/api/doxygen/search/functions_12.js
@@ -50,7 +50,7 @@ var searchData=
['rendererrors',['RenderErrors',['../classtvm_1_1ErrorReporter.html#a54699ec5f538bd207b5aa4e3f55181c6',1,'tvm::ErrorReporter']]],
['renewdefs',['RenewDefs',['../namespacetvm_1_1tir.html#a2e639c81d1c6875ead7764ab8a7cd553',1,'tvm::tir']]],
['renormalizesplitpattern',['RenormalizeSplitPattern',['../namespacetvm_1_1tir_1_1transform.html#a5c670c9efcd740f2f168b62e624c8c57',1,'tvm::tir::transform']]],
- ['reorder',['reorder',['../classtvm_1_1auto__scheduler_1_1State.html#a16e95966b46977eff629a5f4f1564533',1,'tvm::auto_scheduler::State::reorder()'],['../classtvm_1_1te_1_1Stage.html#ad96cd240a92df9cafae89cdf2a7e302e',1,'tvm::te::Stage::reorder()'],['../classtvm_1_1tir_1_1ScheduleNode.html#a059229fe0e254961da406807a97f7a3d',1,'tvm::tir::ScheduleNode::Reorder()']]],
+ ['reorder',['Reorder',['../classtvm_1_1tir_1_1ScheduleNode.html#a059229fe0e254961da406807a97f7a3d',1,'tvm::tir::ScheduleNode::Reorder()'],['../classtvm_1_1auto__scheduler_1_1State.html#a16e95966b46977eff629a5f4f1564533',1,'tvm::auto_scheduler::State::reorder()'],['../classtvm_1_1te_1_1Stage.html#ad96cd240a92df9cafae89cdf2a7e302e',1,'tvm::te::Stage::reorder()']]],
['reorderstep',['ReorderStep',['../classtvm_1_1auto__scheduler_1_1ReorderStep.html#a83b9dab5f38d5a4d42c6424ba437bc10',1,'tvm::auto_scheduler::ReorderStep::ReorderStep(int stage_id, const Array< Integer > &after_ids)'],['../classtvm_1_1auto__scheduler_1_1ReorderStep.html#a9586534afef3e0f57ab31e8374e70792',1,'tvm::auto_scheduler::ReorderStep::ReorderStep(dmlc::JSONReader *reader)']]],
['reorg',['reorg',['../namespacetvm_1_1topi_1_1vision.html#a1014df582489005202c4218e51792314',1,'tvm::topi::vision']]],
['repeat',['repeat',['../namespacetvm_1_1topi.html#afe9f6d9103b2dfbc601bfd2304a4e687',1,'tvm::topi']]],
@@ -63,7 +63,7 @@ var searchData=
['reportat',['ReportAt',['../classtvm_1_1ErrorReporter.html#a3e1c300e60077c38bc9540dddcd1a019',1,'tvm::ErrorReporter::ReportAt(const GlobalVar &global, const ObjectRef &node, std::stringstream &err)'],['../classtvm_1_1ErrorReporter.html#a04384ff3175673b4ff08fe46abca281c',1,'tvm::ErrorReporter::ReportAt(const GlobalVar &global, const ObjectRef &node, const CompileError &err)']]],
['reprprinter',['ReprPrinter',['../classtvm_1_1ReprPrinter.html#a05b878a528f2dec33e28278b17ddeb6b',1,'tvm::ReprPrinter']]],
['reserve',['reserve',['../classtvm_1_1runtime_1_1Array.html#a1a7727b86efaf35c58a5198ab1c139c8',1,'tvm::runtime::Array']]],
- ['reset',['reset',['../classtvm_1_1runtime_1_1NDArray.html#af2a8ccab95d432d1ecad7a389e11bcd3',1,'tvm::runtime::NDArray::reset()'],['../classtvm_1_1runtime_1_1ObjectPtr.html#ac4461465ba0e785794794e0405c96590',1,'tvm::runtime::ObjectPtr::reset()'],['../classtvm_1_1auto__scheduler_1_1ProgramMeasurerNode.html#a73b14ea360a9902c291d5bf6e97636cd',1,'tvm::auto_scheduler::ProgramMeasurerNode::Reset()'],['../classtvm_1_1runtime_1_1micro__rpc_1_1Unframer.html#ae6279154fe70e9eb85937b51e70a4bf8',1, [...]
+ ['reset',['Reset',['../classtvm_1_1auto__scheduler_1_1ProgramMeasurerNode.html#a73b14ea360a9902c291d5bf6e97636cd',1,'tvm::auto_scheduler::ProgramMeasurerNode::Reset()'],['../classtvm_1_1runtime_1_1micro__rpc_1_1Unframer.html#ae6279154fe70e9eb85937b51e70a4bf8',1,'tvm::runtime::micro_rpc::Unframer::Reset()'],['../classtvm_1_1runtime_1_1micro__rpc_1_1Framer.html#a44ff9650ecca8785e33c25c369d2570a',1,'tvm::runtime::micro_rpc::Framer::Reset()'],['../classtvm_1_1tir_1_1StmtSRefNode.html#a0a81 [...]
['reset_5fattr',['reset_attr',['../classtvm_1_1OpRegEntry.html#a67628f8d3d6dea5b0a47e462c06b7790',1,'tvm::OpRegEntry']]],
['resetthreadpool',['ResetThreadPool',['../namespacetvm_1_1runtime_1_1threading.html#aafdb21c00248ff146b614a7e888b4fd7',1,'tvm::runtime::threading']]],
['reshape',['reshape',['../namespacetvm_1_1topi.html#a3aad65f2505802109ba7d05359ce9005',1,'tvm::topi']]],
@@ -85,9 +85,9 @@ var searchData=
['rewritepatterns',['RewritePatterns',['../namespacetvm_1_1relay.html#ad9fd478e0f590938f8eb15e1bc45dbec',1,'tvm::relay']]],
['rewritereductionblock',['RewriteReductionBlock',['../classtvm_1_1meta__schedule_1_1Postproc.html#a08348595d8c50afe0167a986e034d616',1,'tvm::meta_schedule::Postproc']]],
['rewritetensorize',['RewriteTensorize',['../classtvm_1_1meta__schedule_1_1Postproc.html#a95db036cfced4c2575367a26a41498ff',1,'tvm::meta_schedule::Postproc']]],
- ['rewriteunboundblock',['RewriteUnboundBlock',['../classtvm_1_1meta__schedule_1_1Postproc.html#a190932261c8574b7e85e804938f8ad0d',1,'tvm::meta_schedule::Postproc']]],
+ ['rewriteunboundblock',['RewriteUnboundBlock',['../classtvm_1_1meta__schedule_1_1Postproc.html#a1836b2278bc24fdc227c490896d92980',1,'tvm::meta_schedule::Postproc']]],
['rewriteunsafeselect',['RewriteUnsafeSelect',['../namespacetvm_1_1tir_1_1transform.html#a4fe43327c4454dd05b6e925577443f49',1,'tvm::tir::transform']]],
- ['rfactor',['rfactor',['../classtvm_1_1auto__scheduler_1_1State.html#a21c27b06d439267f8b981fa05c5f48a0',1,'tvm::auto_scheduler::State::rfactor()'],['../classtvm_1_1te_1_1Schedule.html#a34ae85add41bbed0140726d024d08862',1,'tvm::te::Schedule::rfactor()'],['../classtvm_1_1tir_1_1ScheduleNode.html#ab185c8eac1065290d84d58e7f4617232',1,'tvm::tir::ScheduleNode::RFactor()']]],
+ ['rfactor',['RFactor',['../classtvm_1_1tir_1_1ScheduleNode.html#ab185c8eac1065290d84d58e7f4617232',1,'tvm::tir::ScheduleNode::RFactor()'],['../classtvm_1_1auto__scheduler_1_1State.html#a21c27b06d439267f8b981fa05c5f48a0',1,'tvm::auto_scheduler::State::rfactor()'],['../classtvm_1_1te_1_1Schedule.html#a34ae85add41bbed0140726d024d08862',1,'tvm::te::Schedule::rfactor()']]],
['rfactorstep',['RfactorStep',['../classtvm_1_1auto__scheduler_1_1RfactorStep.html#a26e6f85b55307f18fab4469e3bd4be0c',1,'tvm::auto_scheduler::RfactorStep::RfactorStep(int stage_id, int iter_id, int factor_iter_id)'],['../classtvm_1_1auto__scheduler_1_1RfactorStep.html#a95575c21441177634178245ab562cb4f',1,'tvm::auto_scheduler::RfactorStep::RfactorStep(dmlc::JSONReader *reader)']]],
['right_5fshift',['right_shift',['../namespacetvm.html#ae8ecc0382685a855187bede0c97d93e6',1,'tvm::right_shift(PrimExpr a, PrimExpr b, Span span=Span())'],['../namespacetvm.html#af49dde9dfdeea62e8ad3a6d8db53de0b',1,'tvm::right_shift(const PrimExpr &a, int b, Span span=Span())'],['../namespacetvm.html#a98ff4361d0a24570f8dc32d03cde972a',1,'tvm::right_shift(int a, const PrimExpr &b, Span span=Span())'],['../namespacetvm_1_1topi.html#a9673b9caffb46404b566c3f04a492dfe',1,'tvm::topi:: [...]
['rocblas_5fbatch_5fmatmul',['rocblas_batch_matmul',['../namespacetvm_1_1topi_1_1contrib.html#abf1113dd429e1285752b48f62fe12848',1,'tvm::topi::contrib']]],
diff --git a/docs/reference/api/doxygen/search/functions_13.js b/docs/reference/api/doxygen/search/functions_13.js
index dbfd71e46..0a5b095a8 100644
--- a/docs/reference/api/doxygen/search/functions_13.js
+++ b/docs/reference/api/doxygen/search/functions_13.js
@@ -146,7 +146,7 @@ var searchData=
['sparse_5fto_5fdense',['sparse_to_dense',['../namespacetvm_1_1topi.html#a877e6fdffb6b6c051c29602ec6fe995c',1,'tvm::topi']]],
['specialize',['Specialize',['../namespacetvm_1_1tir.html#a69b6f1b0014dc6e7dd390cff746e9782',1,'tvm::tir']]],
['specializedcondition',['SpecializedCondition',['../classtvm_1_1te_1_1SpecializedCondition.html#a48d119ee1c6033929a5592cfc2592e60',1,'tvm::te::SpecializedCondition']]],
- ['split',['split',['../classtvm_1_1auto__scheduler_1_1State.html#a5815f21fc90ba7cc379c2410c05ab54c',1,'tvm::auto_scheduler::State::split()'],['../classtvm_1_1te_1_1Stage.html#a5a7cd562be59b68a187ad97085a3425d',1,'tvm::te::Stage::split()'],['../classtvm_1_1te_1_1Split.html#a328e0c093ce5b41ebaf33e0e80592764',1,'tvm::te::Split::Split()'],['../classtvm_1_1tir_1_1Layout.html#ad7657af7789fe040d3224c0149976bb4',1,'tvm::tir::Layout::Split()'],['../classtvm_1_1tir_1_1ScheduleNode.html#af8a330c3 [...]
+ ['split',['Split',['../classtvm_1_1te_1_1Split.html#a328e0c093ce5b41ebaf33e0e80592764',1,'tvm::te::Split::Split()'],['../classtvm_1_1tir_1_1Layout.html#ad7657af7789fe040d3224c0149976bb4',1,'tvm::tir::Layout::Split()'],['../classtvm_1_1tir_1_1ScheduleNode.html#af8a330c32b06dc16c8835c76177ffa11',1,'tvm::tir::ScheduleNode::Split()'],['../classtvm_1_1auto__scheduler_1_1State.html#a5815f21fc90ba7cc379c2410c05ab54c',1,'tvm::auto_scheduler::State::split()'],['../classtvm_1_1te_1_1Stage.html#a [...]
['split_5fby_5fnparts',['split_by_nparts',['../classtvm_1_1te_1_1Stage.html#a51432f38d9ec4792a2525023179ae604',1,'tvm::te::Stage']]],
['split_5fsections',['split_sections',['../namespacetvm_1_1topi.html#acc643e2ed166fa2ed82a95853e145619',1,'tvm::topi']]],
['splitargs',['SplitArgs',['../namespacetvm_1_1relay_1_1transform.html#a2425d757b896168a109498e8d34ba960',1,'tvm::relay::transform']]],
@@ -167,7 +167,7 @@ var searchData=
['startmessage',['StartMessage',['../classtvm_1_1runtime_1_1micro__rpc_1_1Session.html#acd512b977c6dd888f90c4fd6d2b9500f',1,'tvm::runtime::micro_rpc::Session']]],
['startpacket',['StartPacket',['../classtvm_1_1runtime_1_1micro__rpc_1_1Framer.html#ade10d3bd3a26e3b7af881ae134e9a998',1,'tvm::runtime::micro_rpc::Framer']]],
['startsession',['StartSession',['../classtvm_1_1runtime_1_1micro__rpc_1_1Session.html#a15d3f9ecb8b22bf2d330f6f0a16c5239',1,'tvm::runtime::micro_rpc::Session']]],
- ['state',['State',['../classtvm_1_1auto__scheduler_1_1State.html#a9e8198b1f51b42cfbbee4b9f42160749',1,'tvm::auto_scheduler::State::State()'],['../classtvm_1_1tir_1_1ScheduleNode.html#abb3612c2598fa2d3ee0e6e3fc3de8a26',1,'tvm::tir::ScheduleNode::state()']]],
+ ['state',['state',['../classtvm_1_1tir_1_1ScheduleNode.html#abb3612c2598fa2d3ee0e6e3fc3de8a26',1,'tvm::tir::ScheduleNode::state()'],['../classtvm_1_1auto__scheduler_1_1State.html#a9e8198b1f51b42cfbbee4b9f42160749',1,'tvm::auto_scheduler::State::State()']]],
['stats',['Stats',['../classtvm_1_1runtime_1_1vm_1_1Executable.html#a5445bd71aa14ec97552fa099dc3bd787',1,'tvm::runtime::vm::Executable']]],
['stepapplytoschedule',['StepApplyToSchedule',['../namespacetvm_1_1auto__scheduler.html#ac58f7548a94b92f801b2b9a6f65bd785',1,'tvm::auto_scheduler']]],
['stepapplytostate',['StepApplyToState',['../namespacetvm_1_1auto__scheduler.html#a6909bc5a99d1cc8372201e9392717832',1,'tvm::auto_scheduler']]],
diff --git a/docs/reference/api/doxygen/search/functions_14.js b/docs/reference/api/doxygen/search/functions_14.js
index 8096f7b2a..01a3377a5 100644
--- a/docs/reference/api/doxygen/search/functions_14.js
+++ b/docs/reference/api/doxygen/search/functions_14.js
@@ -49,7 +49,7 @@ var searchData=
['totupletype',['ToTupleType',['../namespacetvm_1_1relay.html#ae6757a008816e31cce4109e8dfc2bc16',1,'tvm::relay']]],
['touchtask',['TouchTask',['../classtvm_1_1meta__schedule_1_1TaskSchedulerNode.html#af6fa276674945d3432c129bdf9cea599',1,'tvm::meta_schedule::TaskSchedulerNode::TouchTask()'],['../classtvm_1_1meta__schedule_1_1PyTaskSchedulerNode.html#a7de09f81c8aceb580b43107f266e6b40',1,'tvm::meta_schedule::PyTaskSchedulerNode::TouchTask()']]],
['tovar',['ToVar',['../classtvm_1_1tir_1_1AnyNode.html#ae01ebbba2378afb6509a22de97f8fb30',1,'tvm::tir::AnyNode']]],
- ['trace',['trace',['../classtvm_1_1tir_1_1ScheduleNode.html#a953bca4123b5a758adfdcd65634a5f3b',1,'tvm::tir::ScheduleNode::trace()'],['../classtvm_1_1tir_1_1Trace.html#a8e09abffd0b9b1afac7b832cf16c142d',1,'tvm::tir::Trace::Trace()'],['../classtvm_1_1tir_1_1Trace.html#af79bccf1bde25efea387bb1b82dacaa6',1,'tvm::tir::Trace::Trace(Array< Instruction > insts, Map< Instruction, ObjectRef > decisions)']]],
+ ['trace',['Trace',['../classtvm_1_1tir_1_1Trace.html#a8e09abffd0b9b1afac7b832cf16c142d',1,'tvm::tir::Trace::Trace()'],['../classtvm_1_1tir_1_1Trace.html#af79bccf1bde25efea387bb1b82dacaa6',1,'tvm::tir::Trace::Trace(Array< Instruction > insts, Map< Instruction, ObjectRef > decisions)'],['../classtvm_1_1tir_1_1ScheduleNode.html#a953bca4123b5a758adfdcd65634a5f3b',1,'tvm::tir::ScheduleNode::trace()']]],
['traced',['Traced',['../classtvm_1_1tir_1_1Schedule.html#a295d432b86621101f67b20fadb367b91',1,'tvm::tir::Schedule']]],
['transform',['Transform',['../classtvm_1_1te_1_1Transform.html#a51422cc2290f6b87fe61edb0db691125',1,'tvm::te::Transform']]],
['transform_5flayout',['transform_layout',['../classtvm_1_1te_1_1Stage.html#acec77eca6c9a4f1738a7c119d7ac2c2c',1,'tvm::te::Stage']]],
diff --git a/docs/reference/api/doxygen/search/functions_15.js b/docs/reference/api/doxygen/search/functions_15.js
index 86d8ad114..1cf9b4d81 100644
--- a/docs/reference/api/doxygen/search/functions_15.js
+++ b/docs/reference/api/doxygen/search/functions_15.js
@@ -12,10 +12,10 @@ var searchData=
['unionlowerbound',['UnionLowerBound',['../namespacetvm_1_1arith.html#ab22d7fd95abb5fa372843a40e19d80c5',1,'tvm::arith']]],
['unionregion',['UnionRegion',['../namespacetvm_1_1arith.html#ad27c4f216e41eb8e81296fb7ec4b9453',1,'tvm::arith']]],
['unionregionlowerbound',['UnionRegionLowerBound',['../namespacetvm_1_1arith.html#a4c3dedfa4cba4ad39c953eb51eb83e4d',1,'tvm::arith']]],
- ['unique',['unique',['../classtvm_1_1runtime_1_1Object.html#afd548730a6139d19fe24473ad66026d7',1,'tvm::runtime::Object::unique()'],['../classtvm_1_1runtime_1_1ObjectPtr.html#af95c6c6fcd89da0f62b93f1167b72314',1,'tvm::runtime::ObjectPtr::unique()'],['../classtvm_1_1runtime_1_1ObjectRef.html#a4e7cdb1574b93a59e784d70aa47b8da7',1,'tvm::runtime::ObjectRef::unique()'],['../classtvm_1_1VirtualDeviceCache.html#a25ba1351484aa58a2cc7cef8f8e4423c',1,'tvm::VirtualDeviceCache::Unique()']]],
+ ['unique',['Unique',['../classtvm_1_1VirtualDeviceCache.html#a25ba1351484aa58a2cc7cef8f8e4423c',1,'tvm::VirtualDeviceCache::Unique()'],['../classtvm_1_1runtime_1_1Object.html#afd548730a6139d19fe24473ad66026d7',1,'tvm::runtime::Object::unique()'],['../classtvm_1_1runtime_1_1ObjectPtr.html#af95c6c6fcd89da0f62b93f1167b72314',1,'tvm::runtime::ObjectPtr::unique()'],['../classtvm_1_1runtime_1_1ObjectRef.html#a4e7cdb1574b93a59e784d70aa47b8da7',1,'tvm::runtime::ObjectRef::unique()']]],
['unmatchedcases',['UnmatchedCases',['../namespacetvm_1_1relay.html#aa3a8cace40f8056fd6412f39c3eaa605',1,'tvm::relay']]],
['unravel_5findex',['unravel_index',['../namespacetvm_1_1topi.html#a8811a02532bbe3047986bf1a8449ac0e',1,'tvm::topi']]],
- ['unroll',['unroll',['../classtvm_1_1auto__scheduler_1_1State.html#aa68a9d2e226bae38a36e4be4af1d1ae4',1,'tvm::auto_scheduler::State::unroll()'],['../classtvm_1_1te_1_1Stage.html#af83ad8672660403504f472228b044b33',1,'tvm::te::Stage::unroll()'],['../classtvm_1_1tir_1_1ScheduleNode.html#a84ec742f6295f59390592a6d0d90a552',1,'tvm::tir::ScheduleNode::Unroll()']]],
+ ['unroll',['Unroll',['../classtvm_1_1tir_1_1ScheduleNode.html#a84ec742f6295f59390592a6d0d90a552',1,'tvm::tir::ScheduleNode::Unroll()'],['../classtvm_1_1auto__scheduler_1_1State.html#aa68a9d2e226bae38a36e4be4af1d1ae4',1,'tvm::auto_scheduler::State::unroll()'],['../classtvm_1_1te_1_1Stage.html#af83ad8672660403504f472228b044b33',1,'tvm::te::Stage::unroll()']]],
['unrollloop',['UnrollLoop',['../namespacetvm_1_1tir_1_1transform.html#ab2f279e91071fa96a1edb24fa004ea6a',1,'tvm::tir::transform']]],
['update',['Update',['../classtvm_1_1arith_1_1ConstIntBoundAnalyzer.html#a5ae0699196c4bbc754bbdd4c3a6c7ca7',1,'tvm::arith::ConstIntBoundAnalyzer::Update()'],['../classtvm_1_1arith_1_1ModularSetAnalyzer.html#a04156fac580981f3005af3b8e676720d',1,'tvm::arith::ModularSetAnalyzer::Update()'],['../classtvm_1_1arith_1_1RewriteSimplifier.html#a5e6752c0702dc2d3e4235797d9d3ac7b',1,'tvm::arith::RewriteSimplifier::Update()'],['../classtvm_1_1arith_1_1CanonicalSimplifier.html#a790c032e12c7d93e9e940 [...]
['updatecostmodel',['UpdateCostModel',['../classtvm_1_1meta__schedule_1_1MeasureCallback.html#afdf5503c6e6f53767de132d91a7b53f9',1,'tvm::meta_schedule::MeasureCallback']]],
diff --git a/docs/reference/api/doxygen/search/functions_16.js b/docs/reference/api/doxygen/search/functions_16.js
index 84f8f0525..68cfe3bd0 100644
--- a/docs/reference/api/doxygen/search/functions_16.js
+++ b/docs/reference/api/doxygen/search/functions_16.js
@@ -8,7 +8,7 @@ var searchData=
['vector',['Vector',['../classtvm_1_1arith_1_1IntSet.html#a29b6f1e60f4b328fcfabc514e0c10f17',1,'tvm::arith::IntSet']]],
['vectorcombine',['vectorcombine',['../namespacetvm_1_1tir_1_1builtin.html#a30dff65bc2c142b57fae7f60e378ff43',1,'tvm::tir::builtin']]],
['vectorhigh',['vectorhigh',['../namespacetvm_1_1tir_1_1builtin.html#a45bf65ca7ca01d2016e0b609117d7e25',1,'tvm::tir::builtin']]],
- ['vectorize',['Vectorize',['../classtvm_1_1tir_1_1ScheduleNode.html#ab4a8cd91959ceab22855ec338978bcee',1,'tvm::tir::ScheduleNode::Vectorize()'],['../classtvm_1_1auto__scheduler_1_1State.html#a97b8a21210d63bea241dbab085d89b53',1,'tvm::auto_scheduler::State::vectorize()'],['../classtvm_1_1te_1_1Stage.html#a44d33e3920106e75dc7c68272f880812',1,'tvm::te::Stage::vectorize()']]],
+ ['vectorize',['vectorize',['../classtvm_1_1auto__scheduler_1_1State.html#a97b8a21210d63bea241dbab085d89b53',1,'tvm::auto_scheduler::State::vectorize()'],['../classtvm_1_1te_1_1Stage.html#a44d33e3920106e75dc7c68272f880812',1,'tvm::te::Stage::vectorize()'],['../classtvm_1_1tir_1_1ScheduleNode.html#ab4a8cd91959ceab22855ec338978bcee',1,'tvm::tir::ScheduleNode::Vectorize()']]],
['vectorizeloop',['VectorizeLoop',['../namespacetvm_1_1tir_1_1transform.html#af3cecb50a8b8fc8021f6a87bc27587da',1,'tvm::tir::transform']]],
['vectorjacobianproduct',['VectorJacobianProduct',['../namespacetvm_1_1te.html#a547183f5a311af53ab598faba423fd64',1,'tvm::te']]],
['vectorlow',['vectorlow',['../namespacetvm_1_1tir_1_1builtin.html#a7ed64a9fb0a7f575fc63e1e0395e96a6',1,'tvm::tir::builtin']]],
diff --git a/docs/reference/api/doxygen/search/functions_d.js b/docs/reference/api/doxygen/search/functions_d.js
index da9ade001..af1497d2a 100644
--- a/docs/reference/api/doxygen/search/functions_d.js
+++ b/docs/reference/api/doxygen/search/functions_d.js
@@ -31,7 +31,7 @@ var searchData=
['matchrange',['MatchRange',['../classtvm_1_1arith_1_1IntSet.html#a2f2999336fbba4f436b66bdddce5c57a',1,'tvm::arith::IntSet']]],
['matmul',['matmul',['../namespacetvm_1_1topi.html#adae7dcb7e951109ba72192202d182994',1,'tvm::topi']]],
['matrix_5fset_5fdiag',['matrix_set_diag',['../namespacetvm_1_1topi.html#aead477c6c9d4f4589d22b8acff82040c',1,'tvm::topi']]],
- ['max',['max',['../classtvm_1_1arith_1_1IntSet.html#ac215840d3e9fb2817f1e5648e31317c5',1,'tvm::arith::IntSet::max()'],['../classtvm_1_1support_1_1LinearCongruentialEngine.html#a2c5ea87b1155aa7810e0beb3b69b955b',1,'tvm::support::LinearCongruentialEngine::max()'],['../classtvm_1_1tir_1_1Max.html#a7dff11b4dea01bfc7a03eacd077f0729',1,'tvm::tir::Max::Max()'],['../namespacetvm.html#a0df5ca82d2c566f628ebb2f1e84a3fcb',1,'tvm::max(PrimExpr a, PrimExpr b, Span span=Span())'],['../namespacetvm.ht [...]
+ ['max',['Max',['../classtvm_1_1tir_1_1Max.html#a7dff11b4dea01bfc7a03eacd077f0729',1,'tvm::tir::Max::Max()'],['../classtvm_1_1arith_1_1IntSet.html#ac215840d3e9fb2817f1e5648e31317c5',1,'tvm::arith::IntSet::max()'],['../classtvm_1_1support_1_1LinearCongruentialEngine.html#a2c5ea87b1155aa7810e0beb3b69b955b',1,'tvm::support::LinearCongruentialEngine::max()'],['../namespacetvm.html#a0df5ca82d2c566f628ebb2f1e84a3fcb',1,'tvm::max(PrimExpr a, PrimExpr b, Span span=Span())'],['../namespacetvm.ht [...]
['max_5fvalue',['max_value',['../namespacetvm.html#a4f1398024c0af23699447ef910b654b8',1,'tvm']]],
['maxconcurrency',['MaxConcurrency',['../namespacetvm_1_1runtime_1_1threading.html#af8c1c389a74e67bcc3680555288219f8',1,'tvm::runtime::threading']]],
['maximum',['maximum',['../namespacetvm_1_1topi.html#afd64bc3e27dfc97002d3add5d7ce4174',1,'tvm::topi::maximum(const tvm::PrimExpr &a, const tvm::PrimExpr &b)'],['../namespacetvm_1_1topi.html#a5338e9297463bc745027fca67daa2ebb',1,'tvm::topi::maximum(const tvm::te::Tensor &A, const tvm::te::Tensor &B, std::string name="T_" "maximum", std::string tag=kBroadcast)'],['../namespacetvm_1_1topi.html#a4076a8d6a2b243c548d741e9f6bcfe69',1,'tvm::topi::maximum(con [...]
@@ -57,7 +57,7 @@ var searchData=
['microtvmruntimegetoutput',['MicroTVMRuntimeGetOutput',['../microtvm__runtime_8h.html#a76129be7b6de972791a3f9a1b312acfa',1,'microtvm_runtime.h']]],
['microtvmruntimerun',['MicroTVMRuntimeRun',['../microtvm__runtime_8h.html#ac43a544f675dd716e8c279c3e41f6e45',1,'microtvm_runtime.h']]],
['microtvmruntimesetinput',['MicroTVMRuntimeSetInput',['../microtvm__runtime_8h.html#aa593edc600f4356f2b560702aa01b113',1,'microtvm_runtime.h']]],
- ['min',['Min',['../classtvm_1_1tir_1_1Min.html#a3a4403aec40029a5206e22cd334e356b',1,'tvm::tir::Min::Min()'],['../classtvm_1_1arith_1_1IntSet.html#ae5517de2862e93a801224eed98a57001',1,'tvm::arith::IntSet::min()'],['../classtvm_1_1support_1_1LinearCongruentialEngine.html#aec5f11b588fa3a12294a46c945c34411',1,'tvm::support::LinearCongruentialEngine::min()'],['../namespacetvm.html#aac2abc149c1a47944c37b560181b15c0',1,'tvm::min(PrimExpr a, PrimExpr b, Span span=Span())'],['../namespacetvm.ht [...]
+ ['min',['min',['../classtvm_1_1arith_1_1IntSet.html#ae5517de2862e93a801224eed98a57001',1,'tvm::arith::IntSet::min()'],['../classtvm_1_1support_1_1LinearCongruentialEngine.html#aec5f11b588fa3a12294a46c945c34411',1,'tvm::support::LinearCongruentialEngine::min()'],['../classtvm_1_1tir_1_1Min.html#a3a4403aec40029a5206e22cd334e356b',1,'tvm::tir::Min::Min()'],['../namespacetvm.html#aac2abc149c1a47944c37b560181b15c0',1,'tvm::min(PrimExpr a, PrimExpr b, Span span=Span())'],['../namespacetvm.ht [...]
['min_5fvalue',['min_value',['../namespacetvm.html#a3b37fa55ea93d6868751a2441996b072',1,'tvm']]],
['minimum',['minimum',['../namespacetvm_1_1topi.html#a7ac1dc0d99ce93090a4cdf90ab19d4b8',1,'tvm::topi::minimum(const tvm::PrimExpr &a, const tvm::PrimExpr &b)'],['../namespacetvm_1_1topi.html#a0e19dc06a2b1ecbb83b0942fdf836169',1,'tvm::topi::minimum(const tvm::te::Tensor &A, const tvm::te::Tensor &B, std::string name="T_" "minimum", std::string tag=kBroadcast)'],['../namespacetvm_1_1topi.html#a28d4ef4b3426bff237215ce356dd5681',1,'tvm::topi::minimum(con [...]
['minop',['MinOp',['../namespacetvm_1_1topi.html#aea9a989b0aaa2aef03fe8ee237d8257e',1,'tvm::topi']]],
@@ -84,6 +84,7 @@ var searchData=
['mutatebyapply',['MutateByApply',['../classtvm_1_1runtime_1_1Array.html#a127d022f391a566b51abf16ce4bd74af',1,'tvm::runtime::Array']]],
['mutatecomputelocation',['MutateComputeLocation',['../classtvm_1_1meta__schedule_1_1Mutator.html#a2f706028c59f1c2d5a87ae58785b79c9',1,'tvm::meta_schedule::Mutator']]],
['mutateparallel',['MutateParallel',['../classtvm_1_1meta__schedule_1_1Mutator.html#acb242cfc6875055d75f7ea7adcfa9c14',1,'tvm::meta_schedule::Mutator']]],
+ ['mutatethreadbinding',['MutateThreadBinding',['../classtvm_1_1meta__schedule_1_1Mutator.html#a008b237e2c944cc25c123ef412dcd397',1,'tvm::meta_schedule::Mutator']]],
['mutatetilesize',['MutateTileSize',['../classtvm_1_1meta__schedule_1_1Mutator.html#a84ed21cbc627ff6dd49f983a05113696',1,'tvm::meta_schedule::Mutator']]],
['mutateunroll',['MutateUnroll',['../classtvm_1_1meta__schedule_1_1Mutator.html#a5bedfb467944180740728c76ba39312f',1,'tvm::meta_schedule::Mutator']]]
];
diff --git a/docs/reference/api/python/auto_scheduler.html b/docs/reference/api/python/auto_scheduler.html
index 456e9b25a..6b32a25fe 100644
--- a/docs/reference/api/python/auto_scheduler.html
+++ b/docs/reference/api/python/auto_scheduler.html
@@ -1715,7 +1715,7 @@ Can be a function or the function name.</p></li>
<dl class="py function">
<dt class="sig sig-object py" id="tvm.auto_scheduler.auto_schedule">
-<span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">auto_schedule</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">search_policy</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em clas [...]
+<span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">auto_schedule</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">search_policy</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em clas [...]
<dd><p>THIS API IS DEPRECATED.</p>
<p>Run auto scheduling search for a task.</p>
<dl class="field-list simple">
@@ -1752,7 +1752,7 @@ the initial naive schedule (state).</p>
<dl class="py class">
<dt class="sig sig-object py" id="tvm.auto_scheduler.SketchPolicy">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">SketchPolicy</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">program_cost_model</span></span><span class="o"><span class="pre">=</span></span><span class="defau [...]
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">tvm.auto_scheduler.</span></span><span class="sig-name descname"><span class="pre">SketchPolicy</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">program_cost_model</span></span><span class="o"><span class="pre">=</span></span><span class="defau [...]
<dd><p>The search policy that searches in a hierarchical search space defined by sketches.
The policy randomly samples programs from the space defined by sketches and uses evolutionary
search to fine-tune them.</p>
diff --git a/docs/reference/api/typedoc/classes/bytestreamreader.html b/docs/reference/api/typedoc/classes/bytestreamreader.html
index d75bcb141..33e88ccfa 100644
--- a/docs/reference/api/typedoc/classes/bytestreamreader.html
+++ b/docs/reference/api/typedoc/classes/bytestreamreader.html
@@ -119,7 +119,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L43">rpc_server.ts:43</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L43">rpc_server.ts:43</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -141,7 +141,7 @@
<div class="tsd-signature tsd-kind-icon">bytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Uint8Array</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L43">rpc_server.ts:43</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L43">rpc_server.ts:43</a></li>
</ul>
</aside>
</section>
@@ -151,7 +151,7 @@
<div class="tsd-signature tsd-kind-icon">offset<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 0</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L42">rpc_server.ts:42</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L42">rpc_server.ts:42</a></li>
</ul>
</aside>
</section>
@@ -168,7 +168,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L63">rpc_server.ts:63</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L63">rpc_server.ts:63</a></li>
</ul>
</aside>
<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">Uint8Array</span></h4>
@@ -185,7 +185,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L49">rpc_server.ts:49</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L49">rpc_server.ts:49</a></li>
</ul>
</aside>
<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -202,7 +202,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L57">rpc_server.ts:57</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L57">rpc_server.ts:57</a></li>
</ul>
</aside>
<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
diff --git a/docs/reference/api/typedoc/classes/cachedcallstack.html b/docs/reference/api/typedoc/classes/cachedcallstack.html
index 61a3f63cb..b9cf900e3 100644
--- a/docs/reference/api/typedoc/classes/cachedcallstack.html
+++ b/docs/reference/api/typedoc/classes/cachedcallstack.html
@@ -144,7 +144,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L223">memory.ts:223</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L223">memory.ts:223</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -172,7 +172,7 @@
<div class="tsd-signature tsd-kind-icon">temp<wbr>Args<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol"><</span><a href="../interfaces/disposable.html" class="tsd-signature-type">Disposable</a><span class="tsd-signature-symbol">></span><span class="tsd-signature-symbol"> = []</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L208">memory.ts:208</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L208">memory.ts:208</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -194,7 +194,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L312">memory.ts:312</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L312">memory.ts:312</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -226,7 +226,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L284">memory.ts:284</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L284">memory.ts:284</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -262,7 +262,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L388">memory.ts:388</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L388">memory.ts:388</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -300,7 +300,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L376">memory.ts:376</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L376">memory.ts:376</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -340,7 +340,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L267">memory.ts:267</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L267">memory.ts:267</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -373,7 +373,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L243">memory.ts:243</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L243">memory.ts:243</a></li>
</ul>
</aside>
<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -390,7 +390,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L321">memory.ts:321</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L321">memory.ts:321</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -422,7 +422,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L252">memory.ts:252</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L252">memory.ts:252</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -444,7 +444,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L359">memory.ts:359</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L359">memory.ts:359</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -470,7 +470,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L342">memory.ts:342</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L342">memory.ts:342</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -496,7 +496,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L350">memory.ts:350</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L350">memory.ts:350</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -522,7 +522,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L326">memory.ts:326</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L326">memory.ts:326</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -548,7 +548,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L363">memory.ts:363</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L363">memory.ts:363</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -574,7 +574,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L346">memory.ts:346</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L346">memory.ts:346</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -600,7 +600,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L334">memory.ts:334</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L334">memory.ts:334</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
diff --git a/docs/reference/api/typedoc/classes/dldatatype.html b/docs/reference/api/typedoc/classes/dldatatype.html
index c26d239aa..2a19624a6 100644
--- a/docs/reference/api/typedoc/classes/dldatatype.html
+++ b/docs/reference/api/typedoc/classes/dldatatype.html
@@ -119,7 +119,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L262">runtime.ts:262</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L262">runtime.ts:262</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -147,7 +147,7 @@
<div class="tsd-signature tsd-kind-icon">bits<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L260">runtime.ts:260</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L260">runtime.ts:260</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -162,7 +162,7 @@
<div class="tsd-signature tsd-kind-icon">code<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L258">runtime.ts:258</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L258">runtime.ts:258</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -177,7 +177,7 @@
<div class="tsd-signature tsd-kind-icon">lanes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L262">runtime.ts:262</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L262">runtime.ts:262</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -199,7 +199,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L279">runtime.ts:279</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L279">runtime.ts:279</a></li>
</ul>
</aside>
<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -216,7 +216,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L270">runtime.ts:270</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L270">runtime.ts:270</a></li>
</ul>
</aside>
<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">string</span></h4>
diff --git a/docs/reference/api/typedoc/classes/dldevice.html b/docs/reference/api/typedoc/classes/dldevice.html
index 9168dcca6..4d5d8dd18 100644
--- a/docs/reference/api/typedoc/classes/dldevice.html
+++ b/docs/reference/api/typedoc/classes/dldevice.html
@@ -118,7 +118,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L202">runtime.ts:202</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L202">runtime.ts:202</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -146,7 +146,7 @@
<div class="tsd-signature tsd-kind-icon">device<wbr>Id<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L200">runtime.ts:200</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L200">runtime.ts:200</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -161,7 +161,7 @@
<div class="tsd-signature tsd-kind-icon">device<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L198">runtime.ts:198</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L198">runtime.ts:198</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -183,7 +183,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L223">runtime.ts:223</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L223">runtime.ts:223</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -205,7 +205,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L230">runtime.ts:230</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L230">runtime.ts:230</a></li>
</ul>
</aside>
<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">string</span></h4>
diff --git a/docs/reference/api/typedoc/classes/environment.html b/docs/reference/api/typedoc/classes/environment.html
index c948646b9..0000e72ed 100644
--- a/docs/reference/api/typedoc/classes/environment.html
+++ b/docs/reference/api/typedoc/classes/environment.html
@@ -125,7 +125,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/environment.ts#L86">environment.ts:86</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/environment.ts#L86">environment.ts:86</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -169,7 +169,7 @@
<aside class="tsd-sources">
<p>Implementation of <a href="../interfaces/libraryprovider.html">LibraryProvider</a>.<a href="../interfaces/libraryprovider.html#imports">imports</a></p>
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/environment.ts#L70">environment.ts:70</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/environment.ts#L70">environment.ts:70</a></li>
</ul>
</aside>
</section>
@@ -179,7 +179,7 @@
<div class="tsd-signature tsd-kind-icon">logger<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>msg<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> => </span><span class="tsd-signature-type">void</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/environment.ts#L69">environment.ts:69</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/environment.ts#L69">environment.ts:69</a></li>
</ul>
</aside>
<div class="tsd-type-declaration">
@@ -210,7 +210,7 @@
<div class="tsd-signature tsd-kind-icon">packedCFunc<wbr>Table<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol"><</span><span class="tsd-signature-type">ctypes.FTVMWasmPackedCFunc</span><span class="tsd-signature-symbol"> | </span><span class="tsd-signature-type">undefined</span><span class="tsd-signature-symbol">></span><span class="tsd-signature-symbol"> = [undefined,]</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/environment.ts#L78">environment.ts:78</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/environment.ts#L78">environment.ts:78</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -228,7 +228,7 @@
<div class="tsd-signature tsd-kind-icon">packedCFunc<wbr>Table<wbr>Free<wbr>Id<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol"><</span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">></span><span class="tsd-signature-symbol"> = []</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/environment.ts#L84">environment.ts:84</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/environment.ts#L84">environment.ts:84</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -250,7 +250,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/environment.ts#L105">environment.ts:105</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/environment.ts#L105">environment.ts:105</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/ffilibrary.html b/docs/reference/api/typedoc/classes/ffilibrary.html
index f6d2d0a3e..6353cc722 100644
--- a/docs/reference/api/typedoc/classes/ffilibrary.html
+++ b/docs/reference/api/typedoc/classes/ffilibrary.html
@@ -131,7 +131,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L49">runtime.ts:49</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L49">runtime.ts:49</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -156,7 +156,7 @@
<div class="tsd-signature tsd-kind-icon">exports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol"><</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">Function</span><span class="tsd-signature-symbol">></span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L46">runtime.ts:46</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L46">runtime.ts:46</a></li>
</ul>
</aside>
</section>
@@ -166,7 +166,7 @@
<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L45">runtime.ts:45</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L45">runtime.ts:45</a></li>
</ul>
</aside>
</section>
@@ -176,7 +176,7 @@
<div class="tsd-signature tsd-kind-icon">wasm32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">boolean</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L44">runtime.ts:44</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L44">runtime.ts:44</a></li>
</ul>
</aside>
</section>
@@ -186,7 +186,7 @@
<div class="tsd-signature tsd-kind-icon">webGPUContext<span class="tsd-signature-symbol">:</span> <a href="webgpucontext.html" class="tsd-signature-type">WebGPUContext</a></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L47">runtime.ts:47</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L47">runtime.ts:47</a></li>
</ul>
</aside>
</section>
@@ -203,7 +203,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L76">runtime.ts:76</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L76">runtime.ts:76</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -226,7 +226,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L66">runtime.ts:66</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L66">runtime.ts:66</a></li>
</ul>
</aside>
<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -243,7 +243,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L84">runtime.ts:84</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L84">runtime.ts:84</a></li>
</ul>
</aside>
<h4 class="tsd-returns-title">Returns <a href="cachedcallstack.html" class="tsd-signature-type">CachedCallStack</a></h4>
@@ -260,7 +260,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L95">runtime.ts:95</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L95">runtime.ts:95</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -283,7 +283,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L72">runtime.ts:72</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L72">runtime.ts:72</a></li>
</ul>
</aside>
<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
diff --git a/docs/reference/api/typedoc/classes/graphexecutor.html b/docs/reference/api/typedoc/classes/graphexecutor.html
index d58b9ec72..020cf5013 100644
--- a/docs/reference/api/typedoc/classes/graphexecutor.html
+++ b/docs/reference/api/typedoc/classes/graphexecutor.html
@@ -130,7 +130,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L583">runtime.ts:583</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L583">runtime.ts:583</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -162,7 +162,7 @@
<div class="tsd-signature tsd-kind-icon">module<span class="tsd-signature-symbol">:</span> <a href="module.html" class="tsd-signature-type">Module</a></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L579">runtime.ts:579</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L579">runtime.ts:579</a></li>
</ul>
</aside>
</section>
@@ -179,7 +179,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L654">runtime.ts:654</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L654">runtime.ts:654</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -224,7 +224,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L597">runtime.ts:597</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L597">runtime.ts:597</a></li>
</ul>
</aside>
<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -241,7 +241,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L631">runtime.ts:631</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L631">runtime.ts:631</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -279,7 +279,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L644">runtime.ts:644</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L644">runtime.ts:644</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -310,7 +310,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L621">runtime.ts:621</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L621">runtime.ts:621</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -332,7 +332,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L609">runtime.ts:609</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L609">runtime.ts:609</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/instance.html b/docs/reference/api/typedoc/classes/instance.html
index f1ebcc4a3..1319484f7 100644
--- a/docs/reference/api/typedoc/classes/instance.html
+++ b/docs/reference/api/typedoc/classes/instance.html
@@ -139,7 +139,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L692">runtime.ts:692</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L692">runtime.ts:692</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -202,7 +202,7 @@
<div class="tsd-signature tsd-kind-icon">exports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol"><</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">Function</span><span class="tsd-signature-symbol">></span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L684">runtime.ts:684</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L684">runtime.ts:684</a></li>
</ul>
</aside>
</section>
@@ -212,7 +212,7 @@
<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L683">runtime.ts:683</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L683">runtime.ts:683</a></li>
</ul>
</aside>
</section>
@@ -229,7 +229,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L932">runtime.ts:932</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L932">runtime.ts:932</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -260,7 +260,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L994">runtime.ts:994</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L994">runtime.ts:994</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -303,7 +303,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L924">runtime.ts:924</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L924">runtime.ts:924</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -341,7 +341,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L732">runtime.ts:732</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L732">runtime.ts:732</a></li>
</ul>
</aside>
<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -358,7 +358,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L952">runtime.ts:952</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L952">runtime.ts:952</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -402,7 +402,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L816">runtime.ts:816</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L816">runtime.ts:816</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -434,7 +434,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L1033">runtime.ts:1033</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L1033">runtime.ts:1033</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -465,7 +465,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L846">runtime.ts:846</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L846">runtime.ts:846</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -497,7 +497,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L750">runtime.ts:750</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L750">runtime.ts:750</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -520,7 +520,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L1013">runtime.ts:1013</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L1013">runtime.ts:1013</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -568,7 +568,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L789">runtime.ts:789</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L789">runtime.ts:789</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -608,7 +608,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L914">runtime.ts:914</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L914">runtime.ts:914</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -646,7 +646,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L1134">runtime.ts:1134</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L1134">runtime.ts:1134</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -698,7 +698,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L740">runtime.ts:740</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L740">runtime.ts:740</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -722,7 +722,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L868">runtime.ts:868</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L868">runtime.ts:868</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -754,7 +754,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L857">runtime.ts:857</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L857">runtime.ts:857</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -786,7 +786,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L940">runtime.ts:940</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L940">runtime.ts:940</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/memory.html b/docs/reference/api/typedoc/classes/memory.html
index 28b256c08..55c6c5758 100644
--- a/docs/reference/api/typedoc/classes/memory.html
+++ b/docs/reference/api/typedoc/classes/memory.html
@@ -130,7 +130,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L40">memory.ts:40</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L40">memory.ts:40</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -152,7 +152,7 @@
<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Memory</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L32">memory.ts:32</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L32">memory.ts:32</a></li>
</ul>
</aside>
</section>
@@ -162,7 +162,7 @@
<div class="tsd-signature tsd-kind-icon">wasm32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">boolean</span><span class="tsd-signature-symbol"> = true</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L33">memory.ts:33</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L33">memory.ts:33</a></li>
</ul>
</aside>
</section>
@@ -179,7 +179,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L154">memory.ts:154</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L154">memory.ts:154</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -210,7 +210,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L90">memory.ts:90</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L90">memory.ts:90</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -233,7 +233,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L97">memory.ts:97</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L97">memory.ts:97</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -256,7 +256,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L74">memory.ts:74</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L74">memory.ts:74</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -279,7 +279,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L81">memory.ts:81</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L81">memory.ts:81</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -302,7 +302,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L104">memory.ts:104</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L104">memory.ts:104</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -325,7 +325,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L132">memory.ts:132</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L132">memory.ts:132</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -362,7 +362,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L145">memory.ts:145</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L145">memory.ts:145</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -393,7 +393,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L60">memory.ts:60</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L60">memory.ts:60</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -416,7 +416,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L67">memory.ts:67</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L67">memory.ts:67</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -439,7 +439,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L53">memory.ts:53</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L53">memory.ts:53</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -462,7 +462,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L114">memory.ts:114</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L114">memory.ts:114</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -485,7 +485,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L124">memory.ts:124</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L124">memory.ts:124</a></li>
</ul>
</aside>
<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">number</span></h4>
@@ -502,7 +502,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/memory.ts#L175">memory.ts:175</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/memory.ts#L175">memory.ts:175</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/module.html b/docs/reference/api/typedoc/classes/module.html
index b8470a320..215daae2a 100644
--- a/docs/reference/api/typedoc/classes/module.html
+++ b/docs/reference/api/typedoc/classes/module.html
@@ -124,7 +124,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L504">runtime.ts:504</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L504">runtime.ts:504</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -170,7 +170,7 @@
<div class="tsd-signature tsd-kind-icon">handle<span class="tsd-signature-symbol">:</span> <a href="../index.html#pointer" class="tsd-signature-type">Pointer</a></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L502">runtime.ts:502</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L502">runtime.ts:502</a></li>
</ul>
</aside>
</section>
@@ -187,7 +187,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L516">runtime.ts:516</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L516">runtime.ts:516</a></li>
</ul>
</aside>
<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -204,7 +204,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L530">runtime.ts:530</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L530">runtime.ts:530</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -236,7 +236,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L561">runtime.ts:561</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L561">runtime.ts:561</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/ndarray.html b/docs/reference/api/typedoc/classes/ndarray.html
index 545e72356..dc0b4adb4 100644
--- a/docs/reference/api/typedoc/classes/ndarray.html
+++ b/docs/reference/api/typedoc/classes/ndarray.html
@@ -130,7 +130,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L304">runtime.ts:304</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L304">runtime.ts:304</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -158,7 +158,7 @@
<div class="tsd-signature tsd-kind-icon">device<span class="tsd-signature-symbol">:</span> <a href="dldevice.html" class="tsd-signature-type">DLDevice</a></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L297">runtime.ts:297</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L297">runtime.ts:297</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -173,7 +173,7 @@
<div class="tsd-signature tsd-kind-icon">dtype<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L293">runtime.ts:293</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L293">runtime.ts:293</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -188,7 +188,7 @@
<div class="tsd-signature tsd-kind-icon">handle<span class="tsd-signature-symbol">:</span> <a href="../index.html#pointer" class="tsd-signature-type">Pointer</a></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L289">runtime.ts:289</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L289">runtime.ts:289</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -203,7 +203,7 @@
<div class="tsd-signature tsd-kind-icon">ndim<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L291">runtime.ts:291</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L291">runtime.ts:291</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -218,7 +218,7 @@
<div class="tsd-signature tsd-kind-icon">shape<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol"><</span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">></span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L295">runtime.ts:295</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L295">runtime.ts:295</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -240,7 +240,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L370">runtime.ts:370</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L370">runtime.ts:370</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -273,7 +273,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L414">runtime.ts:414</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L414">runtime.ts:414</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -305,7 +305,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L355">runtime.ts:355</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L355">runtime.ts:355</a></li>
</ul>
</aside>
<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
@@ -322,7 +322,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L474">runtime.ts:474</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L474">runtime.ts:474</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -346,7 +346,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L443">runtime.ts:443</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L443">runtime.ts:443</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/packedfunccell.html b/docs/reference/api/typedoc/classes/packedfunccell.html
index 1861c61bc..86eb57cce 100644
--- a/docs/reference/api/typedoc/classes/packedfunccell.html
+++ b/docs/reference/api/typedoc/classes/packedfunccell.html
@@ -122,7 +122,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L158">runtime.ts:158</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L158">runtime.ts:158</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -147,7 +147,7 @@
<div class="tsd-signature tsd-kind-icon">handle<span class="tsd-signature-symbol">:</span> <a href="../index.html#pointer" class="tsd-signature-type">Pointer</a></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L157">runtime.ts:157</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L157">runtime.ts:157</a></li>
</ul>
</aside>
</section>
@@ -164,7 +164,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L165">runtime.ts:165</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L165">runtime.ts:165</a></li>
</ul>
</aside>
<h4 class="tsd-returns-title">Returns <span class="tsd-signature-type">void</span></h4>
diff --git a/docs/reference/api/typedoc/classes/rpcserver.html b/docs/reference/api/typedoc/classes/rpcserver.html
index caa055ba4..beff42b5a 100644
--- a/docs/reference/api/typedoc/classes/rpcserver.html
+++ b/docs/reference/api/typedoc/classes/rpcserver.html
@@ -115,7 +115,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L92">rpc_server.ts:92</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L92">rpc_server.ts:92</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -176,7 +176,7 @@
<div class="tsd-signature tsd-kind-icon">get<wbr>Imports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> => </span><span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol"><</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">unknown</span><span class="tsd-signat [...]
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L82">rpc_server.ts:82</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L82">rpc_server.ts:82</a></li>
</ul>
</aside>
<div class="tsd-type-declaration">
@@ -201,7 +201,7 @@
<div class="tsd-signature tsd-kind-icon">key<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L78">rpc_server.ts:78</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L78">rpc_server.ts:78</a></li>
</ul>
</aside>
</section>
@@ -211,7 +211,7 @@
<div class="tsd-signature tsd-kind-icon">logger<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>msg<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> => </span><span class="tsd-signature-type">void</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L81">rpc_server.ts:81</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L81">rpc_server.ts:81</a></li>
</ul>
</aside>
<div class="tsd-type-declaration">
@@ -242,7 +242,7 @@
<div class="tsd-signature tsd-kind-icon">socket<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">WebSocket</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L79">rpc_server.ts:79</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L79">rpc_server.ts:79</a></li>
</ul>
</aside>
</section>
@@ -252,7 +252,7 @@
<div class="tsd-signature tsd-kind-icon">state<span class="tsd-signature-symbol">:</span> <a href="../enums/rpcserverstate.html" class="tsd-signature-type">RPCServerState</a><span class="tsd-signature-symbol"> = RPCServerState.InitHeader</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L80">rpc_server.ts:80</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L80">rpc_server.ts:80</a></li>
</ul>
</aside>
</section>
@@ -262,7 +262,7 @@
<div class="tsd-signature tsd-kind-icon">url<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L77">rpc_server.ts:77</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L77">rpc_server.ts:77</a></li>
</ul>
</aside>
</section>
diff --git a/docs/reference/api/typedoc/classes/scalar.html b/docs/reference/api/typedoc/classes/scalar.html
index e25a48c9e..b5f34d85e 100644
--- a/docs/reference/api/typedoc/classes/scalar.html
+++ b/docs/reference/api/typedoc/classes/scalar.html
@@ -112,7 +112,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L145">runtime.ts:145</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L145">runtime.ts:145</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -137,7 +137,7 @@
<div class="tsd-signature tsd-kind-icon">dtype<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L145">runtime.ts:145</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L145">runtime.ts:145</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -152,7 +152,7 @@
<div class="tsd-signature tsd-kind-icon">value<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L143">runtime.ts:143</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L143">runtime.ts:143</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/classes/webgpucontext.html b/docs/reference/api/typedoc/classes/webgpucontext.html
index bd435526b..5b5d09cee 100644
--- a/docs/reference/api/typedoc/classes/webgpucontext.html
+++ b/docs/reference/api/typedoc/classes/webgpucontext.html
@@ -120,7 +120,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/webgpu.ts#L57">webgpu.ts:57</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/webgpu.ts#L57">webgpu.ts:57</a></li>
</ul>
</aside>
<h4 class="tsd-parameters-title">Parameters</h4>
@@ -145,7 +145,7 @@
<div class="tsd-signature tsd-kind-icon">device<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">GPUDevice</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/webgpu.ts#L50">webgpu.ts:50</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/webgpu.ts#L50">webgpu.ts:50</a></li>
</ul>
</aside>
</section>
@@ -155,7 +155,7 @@
<div class="tsd-signature tsd-kind-icon">memory<span class="tsd-signature-symbol">:</span> <a href="memory.html" class="tsd-signature-type">Memory</a></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/webgpu.ts#L51">webgpu.ts:51</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/webgpu.ts#L51">webgpu.ts:51</a></li>
</ul>
</aside>
</section>
@@ -172,7 +172,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/webgpu.ts#L84">webgpu.ts:84</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/webgpu.ts#L84">webgpu.ts:84</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -209,7 +209,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/webgpu.ts#L170">webgpu.ts:170</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/webgpu.ts#L170">webgpu.ts:170</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -238,7 +238,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/webgpu.ts#L67">webgpu.ts:67</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/webgpu.ts#L67">webgpu.ts:67</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/enums/argtypecode.html b/docs/reference/api/typedoc/enums/argtypecode.html
index fcf7a4f4b..23fe77f79 100644
--- a/docs/reference/api/typedoc/enums/argtypecode.html
+++ b/docs/reference/api/typedoc/enums/argtypecode.html
@@ -106,7 +106,7 @@
<div class="tsd-signature tsd-kind-icon">DLDevice<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 6</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L220">ctypes.ts:220</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L220">ctypes.ts:220</a></li>
</ul>
</aside>
</section>
@@ -116,7 +116,7 @@
<div class="tsd-signature tsd-kind-icon">Float<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L216">ctypes.ts:216</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L216">ctypes.ts:216</a></li>
</ul>
</aside>
</section>
@@ -126,7 +126,7 @@
<div class="tsd-signature tsd-kind-icon">Int<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 0</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L214">ctypes.ts:214</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L214">ctypes.ts:214</a></li>
</ul>
</aside>
</section>
@@ -136,7 +136,7 @@
<div class="tsd-signature tsd-kind-icon">Null<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L218">ctypes.ts:218</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L218">ctypes.ts:218</a></li>
</ul>
</aside>
</section>
@@ -146,7 +146,7 @@
<div class="tsd-signature tsd-kind-icon">TVMBytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 12</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L226">ctypes.ts:226</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L226">ctypes.ts:226</a></li>
</ul>
</aside>
</section>
@@ -156,7 +156,7 @@
<div class="tsd-signature tsd-kind-icon">TVMDLTensor<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 7</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L221">ctypes.ts:221</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L221">ctypes.ts:221</a></li>
</ul>
</aside>
</section>
@@ -166,7 +166,7 @@
<div class="tsd-signature tsd-kind-icon">TVMData<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 5</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L219">ctypes.ts:219</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L219">ctypes.ts:219</a></li>
</ul>
</aside>
</section>
@@ -176,7 +176,7 @@
<div class="tsd-signature tsd-kind-icon">TVMModule<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 9</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L223">ctypes.ts:223</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L223">ctypes.ts:223</a></li>
</ul>
</aside>
</section>
@@ -186,7 +186,7 @@
<div class="tsd-signature tsd-kind-icon">TVMNDArray<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 13</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L227">ctypes.ts:227</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L227">ctypes.ts:227</a></li>
</ul>
</aside>
</section>
@@ -196,7 +196,7 @@
<div class="tsd-signature tsd-kind-icon">TVMObject<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L222">ctypes.ts:222</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L222">ctypes.ts:222</a></li>
</ul>
</aside>
</section>
@@ -206,7 +206,7 @@
<div class="tsd-signature tsd-kind-icon">TVMObjectRValue<wbr>Ref<wbr>Arg<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 14</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L228">ctypes.ts:228</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L228">ctypes.ts:228</a></li>
</ul>
</aside>
</section>
@@ -216,7 +216,7 @@
<div class="tsd-signature tsd-kind-icon">TVMOpaque<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 3</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L217">ctypes.ts:217</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L217">ctypes.ts:217</a></li>
</ul>
</aside>
</section>
@@ -226,7 +226,7 @@
<div class="tsd-signature tsd-kind-icon">TVMPacked<wbr>Func<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 10</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L224">ctypes.ts:224</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L224">ctypes.ts:224</a></li>
</ul>
</aside>
</section>
@@ -236,7 +236,7 @@
<div class="tsd-signature tsd-kind-icon">TVMStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 11</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L225">ctypes.ts:225</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L225">ctypes.ts:225</a></li>
</ul>
</aside>
</section>
@@ -246,7 +246,7 @@
<div class="tsd-signature tsd-kind-icon">UInt<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L215">ctypes.ts:215</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L215">ctypes.ts:215</a></li>
</ul>
</aside>
</section>
diff --git a/docs/reference/api/typedoc/enums/aynccallbackcode.html b/docs/reference/api/typedoc/enums/aynccallbackcode.html
index 845eda6a3..bdac7bdca 100644
--- a/docs/reference/api/typedoc/enums/aynccallbackcode.html
+++ b/docs/reference/api/typedoc/enums/aynccallbackcode.html
@@ -93,7 +93,7 @@
<div class="tsd-signature tsd-kind-icon">k<wbr>Exception<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 5</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L676">runtime.ts:676</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L676">runtime.ts:676</a></li>
</ul>
</aside>
</section>
@@ -103,7 +103,7 @@
<div class="tsd-signature tsd-kind-icon">k<wbr>Return<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L675">runtime.ts:675</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L675">runtime.ts:675</a></li>
</ul>
</aside>
</section>
diff --git a/docs/reference/api/typedoc/enums/dldatatypecode.html b/docs/reference/api/typedoc/enums/dldatatypecode.html
index 8d94d14ab..ac9504c99 100644
--- a/docs/reference/api/typedoc/enums/dldatatypecode.html
+++ b/docs/reference/api/typedoc/enums/dldatatypecode.html
@@ -95,7 +95,7 @@
<div class="tsd-signature tsd-kind-icon">Float<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L242">runtime.ts:242</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L242">runtime.ts:242</a></li>
</ul>
</aside>
</section>
@@ -105,7 +105,7 @@
<div class="tsd-signature tsd-kind-icon">Int<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 0</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L240">runtime.ts:240</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L240">runtime.ts:240</a></li>
</ul>
</aside>
</section>
@@ -115,7 +115,7 @@
<div class="tsd-signature tsd-kind-icon">Opaque<wbr>Handle<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 3</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L243">runtime.ts:243</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L243">runtime.ts:243</a></li>
</ul>
</aside>
</section>
@@ -125,7 +125,7 @@
<div class="tsd-signature tsd-kind-icon">UInt<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L241">runtime.ts:241</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L241">runtime.ts:241</a></li>
</ul>
</aside>
</section>
diff --git a/docs/reference/api/typedoc/enums/rpcserverstate.html b/docs/reference/api/typedoc/enums/rpcserverstate.html
index 45661eed7..cfb593c4b 100644
--- a/docs/reference/api/typedoc/enums/rpcserverstate.html
+++ b/docs/reference/api/typedoc/enums/rpcserverstate.html
@@ -90,7 +90,7 @@
<div class="tsd-signature tsd-kind-icon">Init<wbr>Header<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L27">rpc_server.ts:27</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L27">rpc_server.ts:27</a></li>
</ul>
</aside>
</section>
@@ -100,7 +100,7 @@
<div class="tsd-signature tsd-kind-icon">Init<wbr>Header<wbr>Key<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L28">rpc_server.ts:28</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L28">rpc_server.ts:28</a></li>
</ul>
</aside>
</section>
@@ -110,7 +110,7 @@
<div class="tsd-signature tsd-kind-icon">Init<wbr>Server<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L29">rpc_server.ts:29</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L29">rpc_server.ts:29</a></li>
</ul>
</aside>
</section>
@@ -120,7 +120,7 @@
<div class="tsd-signature tsd-kind-icon">Receive<wbr>Packet<wbr>Body<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L32">rpc_server.ts:32</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L32">rpc_server.ts:32</a></li>
</ul>
</aside>
</section>
@@ -130,7 +130,7 @@
<div class="tsd-signature tsd-kind-icon">Receive<wbr>Packet<wbr>Header<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L31">rpc_server.ts:31</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L31">rpc_server.ts:31</a></li>
</ul>
</aside>
</section>
@@ -140,7 +140,7 @@
<div class="tsd-signature tsd-kind-icon">Wait<wbr>For<wbr>Callback<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L30">rpc_server.ts:30</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L30">rpc_server.ts:30</a></li>
</ul>
</aside>
</section>
diff --git a/docs/reference/api/typedoc/enums/sizeof.html b/docs/reference/api/typedoc/enums/sizeof.html
index b5e8a0107..cc836be41 100644
--- a/docs/reference/api/typedoc/enums/sizeof.html
+++ b/docs/reference/api/typedoc/enums/sizeof.html
@@ -100,7 +100,7 @@
<div class="tsd-signature tsd-kind-icon">DLData<wbr>Type<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = I32</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L206">ctypes.ts:206</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L206">ctypes.ts:206</a></li>
</ul>
</aside>
</section>
@@ -110,7 +110,7 @@
<div class="tsd-signature tsd-kind-icon">DLDevice<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = I32 + I32</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L207">ctypes.ts:207</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L207">ctypes.ts:207</a></li>
</ul>
</aside>
</section>
@@ -120,7 +120,7 @@
<div class="tsd-signature tsd-kind-icon">F32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L203">ctypes.ts:203</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L203">ctypes.ts:203</a></li>
</ul>
</aside>
</section>
@@ -130,7 +130,7 @@
<div class="tsd-signature tsd-kind-icon">F64<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L204">ctypes.ts:204</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L204">ctypes.ts:204</a></li>
</ul>
</aside>
</section>
@@ -140,7 +140,7 @@
<div class="tsd-signature tsd-kind-icon">I32<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 4</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L201">ctypes.ts:201</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L201">ctypes.ts:201</a></li>
</ul>
</aside>
</section>
@@ -150,7 +150,7 @@
<div class="tsd-signature tsd-kind-icon">I64<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L202">ctypes.ts:202</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L202">ctypes.ts:202</a></li>
</ul>
</aside>
</section>
@@ -160,7 +160,7 @@
<div class="tsd-signature tsd-kind-icon">TVMValue<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 8</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L205">ctypes.ts:205</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L205">ctypes.ts:205</a></li>
</ul>
</aside>
</section>
@@ -170,7 +170,7 @@
<div class="tsd-signature tsd-kind-icon">U16<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 2</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L200">ctypes.ts:200</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L200">ctypes.ts:200</a></li>
</ul>
</aside>
</section>
@@ -180,7 +180,7 @@
<div class="tsd-signature tsd-kind-icon">U8<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol"> = 1</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L199">ctypes.ts:199</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L199">ctypes.ts:199</a></li>
</ul>
</aside>
</section>
diff --git a/docs/reference/api/typedoc/index.html b/docs/reference/api/typedoc/index.html
index 834d5bfad..f96ba9014 100644
--- a/docs/reference/api/typedoc/index.html
+++ b/docs/reference/api/typedoc/index.html
@@ -174,7 +174,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Alloc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>shape<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, ndim<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, dtypeCode<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, dtypeBits<span class="tsd [...]
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L112">ctypes.ts:112</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L112">ctypes.ts:112</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -238,7 +238,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>From<wbr>Bytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, data<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nbytes<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">num [...]
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L128">ctypes.ts:128</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L128">ctypes.ts:128</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -282,7 +282,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>From<wbr>To<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>from<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, to<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, stream<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-sig [...]
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L144">ctypes.ts:144</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L144">ctypes.ts:144</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -326,7 +326,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Copy<wbr>ToBytes<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, data<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nbytes<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</sp [...]
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L136">ctypes.ts:136</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L136">ctypes.ts:136</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -370,7 +370,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMArray<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>handle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> => </span><span class="tsd-signature-type">number</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L121">ctypes.ts:121</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L121">ctypes.ts:121</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -406,7 +406,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMBackend<wbr>PackedCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>argValues<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, argCodes<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nargs<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number< [...]
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L160">ctypes.ts:160</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L160">ctypes.ts:160</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -458,7 +458,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMCFunc<wbr>Set<wbr>Return<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>ret<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, value<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCode<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signa [...]
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L77">ctypes.ts:77</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L77">ctypes.ts:77</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -506,7 +506,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMCb<wbr>Arg<wbr>ToReturn<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>value<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, code<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> => </span><span c [...]
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L83">ctypes.ts:83</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L83">ctypes.ts:83</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -545,7 +545,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Call<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>func<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, argValues<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCode<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-t [...]
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L67">ctypes.ts:67</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L67">ctypes.ts:67</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -601,7 +601,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>func<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> => </span><span class="tsd-signature-type">number</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L57">ctypes.ts:57</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L57">ctypes.ts:57</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -637,7 +637,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Get<wbr>Global<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>name<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> => </span><span cla [...]
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L100">ctypes.ts:100</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L100">ctypes.ts:100</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -676,7 +676,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>List<wbr>Global<wbr>Names<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>outSize<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, outArray<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&g [...]
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L88">ctypes.ts:88</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L88">ctypes.ts:88</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -715,7 +715,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMFunc<wbr>Register<wbr>Global<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>name<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, f<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, override<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</spa [...]
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L94">ctypes.ts:94</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L94">ctypes.ts:94</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -758,7 +758,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMGet<wbr>Last<wbr>Error<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> => </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L34">ctypes.ts:34</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L34">ctypes.ts:34</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -788,7 +788,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Free<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> => </span><span class="tsd-signature-type">number</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L52">ctypes.ts:52</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L52">ctypes.ts:52</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -824,7 +824,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Get<wbr>Function<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, funcName<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, queryImports<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">numbe [...]
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L42">ctypes.ts:42</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L42">ctypes.ts:42</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -872,7 +872,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMMod<wbr>Import<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>mod<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, dep<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> => </span><span class="tsd-si [...]
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L48">ctypes.ts:48</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L48">ctypes.ts:48</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -912,7 +912,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMSynchronize<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>deviceType<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, deviceId<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, stream<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signatur [...]
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L150">ctypes.ts:150</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L150">ctypes.ts:150</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -954,7 +954,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Alloc<wbr>Space<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>size<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> => </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L167">ctypes.ts:167</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L167">ctypes.ts:167</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -990,7 +990,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Free<wbr>Space<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>ptr<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> => </span><span class="tsd-signature-type">void</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L170">ctypes.ts:170</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L170">ctypes.ts:170</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -1026,7 +1026,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>Func<wbr>Create<wbr>FromCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>resource<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, out<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> =&g [...]
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L187">ctypes.ts:187</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L187">ctypes.ts:187</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -1066,7 +1066,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>PackedCFunc<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>args<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, typeCodes<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a>, nargs<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">number</span>, [...]
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L179">ctypes.ts:179</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L179">ctypes.ts:179</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -1118,7 +1118,7 @@
<div class="tsd-signature tsd-kind-icon">FTVMWasm<wbr>PackedCFunc<wbr>Finalizer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>resourceHandle<span class="tsd-signature-symbol">: </span><a href="index.html#pointer" class="tsd-signature-type">Pointer</a><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> => </span><span class="tsd-signature-type">void</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L193">ctypes.ts:193</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L193">ctypes.ts:193</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -1154,7 +1154,7 @@
<div class="tsd-signature tsd-kind-icon">GPUPointer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/webgpu.ts#L25">webgpu.ts:25</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/webgpu.ts#L25">webgpu.ts:25</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -1169,7 +1169,7 @@
<div class="tsd-signature tsd-kind-icon">Packed<wbr>Func<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">...</span>args<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> => </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol"> & </span><a href="interfaces/disp [...]
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L36">runtime.ts:36</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L36">runtime.ts:36</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -1184,7 +1184,7 @@
<div class="tsd-signature tsd-kind-icon">Pointer<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L25">ctypes.ts:25</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L25">ctypes.ts:25</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -1199,7 +1199,7 @@
<div class="tsd-signature tsd-kind-icon">Ptr<wbr>Offset<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/ctypes.ts#L28">ctypes.ts:28</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/ctypes.ts#L28">ctypes.ts:28</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -1217,7 +1217,7 @@
<div class="tsd-signature tsd-kind-icon">RPC_<wbr>MAGIC<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">1045105</span><span class="tsd-signature-symbol"> = 1045105</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/rpc_server.ts#L36">rpc_server.ts:36</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/rpc_server.ts#L36">rpc_server.ts:36</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -1239,7 +1239,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/support.ts#L25">support.ts:25</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/support.ts#L25">support.ts:25</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -1271,7 +1271,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/support.ts#L39">support.ts:39</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/support.ts#L39">support.ts:39</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -1300,7 +1300,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/support.ts#L52">support.ts:52</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/support.ts#L52">support.ts:52</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -1337,7 +1337,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/compact.ts#L38">compact.ts:38</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/compact.ts#L38">compact.ts:38</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -1368,7 +1368,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/webgpu.ts#L30">webgpu.ts:30</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/webgpu.ts#L30">webgpu.ts:30</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -1390,7 +1390,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/environment.ts#L32">environment.ts:32</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/environment.ts#L32">environment.ts:32</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -1421,7 +1421,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/compact.ts#L24">compact.ts:24</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/compact.ts#L24">compact.ts:24</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -1443,7 +1443,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L1356">runtime.ts:1356</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L1356">runtime.ts:1356</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -1508,7 +1508,7 @@
<li class="tsd-description">
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/support.ts#L62">support.ts:62</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/support.ts#L62">support.ts:62</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -1530,7 +1530,7 @@
<div class="tsd-signature tsd-kind-icon">DLData<wbr>Type<wbr>Code<wbr>ToStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L246">runtime.ts:246</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L246">runtime.ts:246</a></li>
</ul>
</aside>
<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1539,7 +1539,7 @@
<div class="tsd-signature tsd-kind-icon">0<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = "int"</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L247">runtime.ts:247</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L247">runtime.ts:247</a></li>
</ul>
</aside>
</section>
@@ -1549,7 +1549,7 @@
<div class="tsd-signature tsd-kind-icon">1<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = "uint"</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L248">runtime.ts:248</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L248">runtime.ts:248</a></li>
</ul>
</aside>
</section>
@@ -1559,7 +1559,7 @@
<div class="tsd-signature tsd-kind-icon">2<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = "float"</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L249">runtime.ts:249</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L249">runtime.ts:249</a></li>
</ul>
</aside>
</section>
@@ -1569,7 +1569,7 @@
<div class="tsd-signature tsd-kind-icon">3<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = "handle"</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L250">runtime.ts:250</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L250">runtime.ts:250</a></li>
</ul>
</aside>
</section>
@@ -1580,7 +1580,7 @@
<div class="tsd-signature tsd-kind-icon">Device<wbr>Enum<wbr>ToStr<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L175">runtime.ts:175</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L175">runtime.ts:175</a></li>
</ul>
</aside>
<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1589,7 +1589,7 @@
<div class="tsd-signature tsd-kind-icon">1<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = "cpu"</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L176">runtime.ts:176</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L176">runtime.ts:176</a></li>
</ul>
</aside>
</section>
@@ -1599,7 +1599,7 @@
<div class="tsd-signature tsd-kind-icon">15<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = "webgpu"</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L180">runtime.ts:180</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L180">runtime.ts:180</a></li>
</ul>
</aside>
</section>
@@ -1609,7 +1609,7 @@
<div class="tsd-signature tsd-kind-icon">2<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = "cuda"</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L177">runtime.ts:177</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L177">runtime.ts:177</a></li>
</ul>
</aside>
</section>
@@ -1619,7 +1619,7 @@
<div class="tsd-signature tsd-kind-icon">4<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = "opencl"</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L178">runtime.ts:178</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L178">runtime.ts:178</a></li>
</ul>
</aside>
</section>
@@ -1629,7 +1629,7 @@
<div class="tsd-signature tsd-kind-icon">8<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span><span class="tsd-signature-symbol"> = "metal"</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L179">runtime.ts:179</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L179">runtime.ts:179</a></li>
</ul>
</aside>
</section>
@@ -1640,7 +1640,7 @@
<div class="tsd-signature tsd-kind-icon">Device<wbr>Str<wbr>ToEnum<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">object</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L183">runtime.ts:183</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L183">runtime.ts:183</a></li>
</ul>
</aside>
<section class="tsd-panel tsd-member tsd-kind-variable tsd-parent-kind-object-literal">
@@ -1649,7 +1649,7 @@
<div class="tsd-signature tsd-kind-icon">cl<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 4</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L186">runtime.ts:186</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L186">runtime.ts:186</a></li>
</ul>
</aside>
</section>
@@ -1659,7 +1659,7 @@
<div class="tsd-signature tsd-kind-icon">cpu<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 1</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L184">runtime.ts:184</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L184">runtime.ts:184</a></li>
</ul>
</aside>
</section>
@@ -1669,7 +1669,7 @@
<div class="tsd-signature tsd-kind-icon">cuda<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 2</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L185">runtime.ts:185</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L185">runtime.ts:185</a></li>
</ul>
</aside>
</section>
@@ -1679,7 +1679,7 @@
<div class="tsd-signature tsd-kind-icon">metal<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 8</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L189">runtime.ts:189</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L189">runtime.ts:189</a></li>
</ul>
</aside>
</section>
@@ -1689,7 +1689,7 @@
<div class="tsd-signature tsd-kind-icon">opencl<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 4</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L187">runtime.ts:187</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L187">runtime.ts:187</a></li>
</ul>
</aside>
</section>
@@ -1699,7 +1699,7 @@
<div class="tsd-signature tsd-kind-icon">vulkan<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 7</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L188">runtime.ts:188</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L188">runtime.ts:188</a></li>
</ul>
</aside>
</section>
@@ -1709,7 +1709,7 @@
<div class="tsd-signature tsd-kind-icon">webgpu<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">number</span><span class="tsd-signature-symbol"> = 15</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/runtime.ts#L190">runtime.ts:190</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/runtime.ts#L190">runtime.ts:190</a></li>
</ul>
</aside>
</section>
diff --git a/docs/reference/api/typedoc/interfaces/disposable.html b/docs/reference/api/typedoc/interfaces/disposable.html
index 5585cf953..3000fa0fc 100644
--- a/docs/reference/api/typedoc/interfaces/disposable.html
+++ b/docs/reference/api/typedoc/interfaces/disposable.html
@@ -113,7 +113,7 @@
<div class="tsd-signature tsd-kind-icon">dispose<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> => </span><span class="tsd-signature-type">void</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/types.ts#L52">types.ts:52</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/types.ts#L52">types.ts:52</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
diff --git a/docs/reference/api/typedoc/interfaces/functioninfo.html b/docs/reference/api/typedoc/interfaces/functioninfo.html
index 384a67996..2d5094132 100644
--- a/docs/reference/api/typedoc/interfaces/functioninfo.html
+++ b/docs/reference/api/typedoc/interfaces/functioninfo.html
@@ -95,7 +95,7 @@
<div class="tsd-signature tsd-kind-icon">arg_<wbr>types<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol"><</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">></span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/webgpu.ts#L41">webgpu.ts:41</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/webgpu.ts#L41">webgpu.ts:41</a></li>
</ul>
</aside>
</section>
@@ -105,7 +105,7 @@
<div class="tsd-signature tsd-kind-icon">launch_<wbr>param_<wbr>tags<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Array</span><span class="tsd-signature-symbol"><</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">></span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/webgpu.ts#L42">webgpu.ts:42</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/webgpu.ts#L42">webgpu.ts:42</a></li>
</ul>
</aside>
</section>
@@ -115,7 +115,7 @@
<div class="tsd-signature tsd-kind-icon">name<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">string</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/webgpu.ts#L40">webgpu.ts:40</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/webgpu.ts#L40">webgpu.ts:40</a></li>
</ul>
</aside>
</section>
diff --git a/docs/reference/api/typedoc/interfaces/libraryprovider.html b/docs/reference/api/typedoc/interfaces/libraryprovider.html
index 21ba9b64f..3cfde3160 100644
--- a/docs/reference/api/typedoc/interfaces/libraryprovider.html
+++ b/docs/reference/api/typedoc/interfaces/libraryprovider.html
@@ -112,7 +112,7 @@
<div class="tsd-signature tsd-kind-icon">imports<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-type">Record</span><span class="tsd-signature-symbol"><</span><span class="tsd-signature-type">string</span><span class="tsd-signature-symbol">, </span><span class="tsd-signature-type">any</span><span class="tsd-signature-symbol">></span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/types.ts#L34">types.ts:34</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/types.ts#L34">types.ts:34</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
@@ -127,7 +127,7 @@
<div class="tsd-signature tsd-kind-icon">start<span class="tsd-signature-symbol">:</span> <span class="tsd-signature-symbol">(</span>inst<span class="tsd-signature-symbol">: </span><span class="tsd-signature-type">Instance</span><span class="tsd-signature-symbol">)</span><span class="tsd-signature-symbol"> => </span><span class="tsd-signature-type">void</span></div>
<aside class="tsd-sources">
<ul>
- <li>Defined in <a href="https://github.com/apache/tvm/blob/50997035b/web/src/types.ts#L39">types.ts:39</a></li>
+ <li>Defined in <a href="https://github.com/apache/tvm/blob/d0999bbd3/web/src/types.ts#L39">types.ts:39</a></li>
</ul>
</aside>
<div class="tsd-comment tsd-typography">
diff --git a/docs/searchindex.js b/docs/searchindex.js
index e29ed156c..cd3bf3427 100644
--- a/docs/searchindex.js
+++ b/docs/searchindex.js
@@ -1 +1 @@
-Search.setIndex({docnames:["arch/benchmark","arch/convert_layout","arch/debugger","arch/device_target_interactions","arch/frontend/tensorflow","arch/hybrid_script","arch/index","arch/inferbound","arch/introduction_to_module_serialization","arch/microtvm_design","arch/microtvm_project_api","arch/model_library_format","arch/pass_infra","arch/relay_intro","arch/relay_op_strategy","arch/runtime","arch/runtimes/vulkan","arch/security","arch/virtual_machine","contribute/ci","contribute/code_gu [...]
\ No newline at end of file
+Search.setIndex({docnames:["arch/benchmark","arch/convert_layout","arch/debugger","arch/device_target_interactions","arch/frontend/tensorflow","arch/hybrid_script","arch/index","arch/inferbound","arch/introduction_to_module_serialization","arch/microtvm_design","arch/microtvm_project_api","arch/model_library_format","arch/pass_infra","arch/relay_intro","arch/relay_op_strategy","arch/runtime","arch/runtimes/vulkan","arch/security","arch/virtual_machine","contribute/ci","contribute/code_gu [...]
\ No newline at end of file
diff --git a/docs/topic/vta/tutorials/autotvm/sg_execution_times.html b/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
index ac85a59bd..9fdee32c6 100644
--- a/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/autotvm/sg_execution_times.html
@@ -300,10 +300,10 @@
<div class="section" id="computation-times">
<span id="sphx-glr-topic-vta-tutorials-autotvm-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:21.185</strong> total execution time for <strong>topic_vta_tutorials_autotvm</strong> files:</p>
+<p><strong>00:20.393</strong> total execution time for <strong>topic_vta_tutorials_autotvm</strong> files:</p>
<ul class="simple">
-<li><p><strong>00:20.970</strong>: <a class="reference internal" href="tune_relay_vta.html#sphx-glr-topic-vta-tutorials-autotvm-tune-relay-vta-py"><span class="std std-ref">Auto-tuning a convolutional network on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_vta.py</span></code>)</p></li>
-<li><p><strong>00:00.215</strong>: <a class="reference internal" href="tune_alu_vta.html#sphx-glr-topic-vta-tutorials-autotvm-tune-alu-vta-py"><span class="std std-ref">Auto-tuning a ALU fused op on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_alu_vta.py</span></code>)</p></li>
+<li><p><strong>00:20.191</strong>: <a class="reference internal" href="tune_relay_vta.html#sphx-glr-topic-vta-tutorials-autotvm-tune-relay-vta-py"><span class="std std-ref">Auto-tuning a convolutional network on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_relay_vta.py</span></code>)</p></li>
+<li><p><strong>00:00.202</strong>: <a class="reference internal" href="tune_alu_vta.html#sphx-glr-topic-vta-tutorials-autotvm-tune-alu-vta-py"><span class="std std-ref">Auto-tuning a ALU fused op on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">tune_alu_vta.py</span></code>)</p></li>
</ul>
</div>
diff --git a/docs/topic/vta/tutorials/frontend/deploy_classification.html b/docs/topic/vta/tutorials/frontend/deploy_classification.html
index 4aa4a26f1..546fef3ca 100644
--- a/docs/topic/vta/tutorials/frontend/deploy_classification.html
+++ b/docs/topic/vta/tutorials/frontend/deploy_classification.html
@@ -539,7 +539,7 @@ and dense layer which will both be executed in fp32 on the CPU.</p></li>
DeprecationWarning,
/workspace/vta/tutorials/frontend/deploy_classification.py:213: DeprecationWarning: legacy graph executor behavior of producing json / lib / params will be removed in the next release. Please see documents of tvm.contrib.graph_executor.GraphModule for the new recommended usage.
relay_prog, target=tvm.target.Target(target, host=env.target_host), params=params
-resnet18_v1 inference graph built in 22.43s!
+resnet18_v1 inference graph built in 21.33s!
</pre></div>
</div>
</div>
diff --git a/docs/topic/vta/tutorials/frontend/deploy_detection.html b/docs/topic/vta/tutorials/frontend/deploy_detection.html
index 8ce0ca289..a26bc5896 100644
--- a/docs/topic/vta/tutorials/frontend/deploy_detection.html
+++ b/docs/topic/vta/tutorials/frontend/deploy_detection.html
@@ -557,7 +557,7 @@ and dense layer which will both be executed in fp32 on the CPU.</p></li>
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>/workspace/python/tvm/relay/build_module.py:431: DeprecationWarning: Please use input parameter mod (tvm.IRModule) instead of deprecated parameter mod (tvm.relay.function.Function)
DeprecationWarning,
-yolov3-tiny inference graph built in 15.50s!
+yolov3-tiny inference graph built in 14.80s!
</pre></div>
</div>
</div>
diff --git a/docs/topic/vta/tutorials/frontend/sg_execution_times.html b/docs/topic/vta/tutorials/frontend/sg_execution_times.html
index 2deeed867..d4f6f834e 100644
--- a/docs/topic/vta/tutorials/frontend/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/frontend/sg_execution_times.html
@@ -300,10 +300,10 @@
<div class="section" id="computation-times">
<span id="sphx-glr-topic-vta-tutorials-frontend-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>01:30.179</strong> total execution time for <strong>topic_vta_tutorials_frontend</strong> files:</p>
+<p><strong>01:28.651</strong> total execution time for <strong>topic_vta_tutorials_frontend</strong> files:</p>
<ul class="simple">
-<li><p><strong>00:47.506</strong>: <a class="reference internal" href="deploy_detection.html#sphx-glr-topic-vta-tutorials-frontend-deploy-detection-py"><span class="std std-ref">Deploy Pretrained Vision Detection Model from Darknet on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_detection.py</span></code>)</p></li>
-<li><p><strong>00:42.673</strong>: <a class="reference internal" href="deploy_classification.html#sphx-glr-topic-vta-tutorials-frontend-deploy-classification-py"><span class="std std-ref">Deploy Pretrained Vision Model from MxNet on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_classification.py</span></code>)</p></li>
+<li><p><strong>00:47.025</strong>: <a class="reference internal" href="deploy_detection.html#sphx-glr-topic-vta-tutorials-frontend-deploy-detection-py"><span class="std std-ref">Deploy Pretrained Vision Detection Model from Darknet on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_detection.py</span></code>)</p></li>
+<li><p><strong>00:41.626</strong>: <a class="reference internal" href="deploy_classification.html#sphx-glr-topic-vta-tutorials-frontend-deploy-classification-py"><span class="std std-ref">Deploy Pretrained Vision Model from MxNet on VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">deploy_classification.py</span></code>)</p></li>
</ul>
</div>
diff --git a/docs/topic/vta/tutorials/optimize/sg_execution_times.html b/docs/topic/vta/tutorials/optimize/sg_execution_times.html
index fddc42e75..22b65649d 100644
--- a/docs/topic/vta/tutorials/optimize/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/optimize/sg_execution_times.html
@@ -300,10 +300,10 @@
<div class="section" id="computation-times">
<span id="sphx-glr-topic-vta-tutorials-optimize-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:03.561</strong> total execution time for <strong>topic_vta_tutorials_optimize</strong> files:</p>
+<p><strong>00:03.542</strong> total execution time for <strong>topic_vta_tutorials_optimize</strong> files:</p>
<ul class="simple">
-<li><p><strong>00:02.992</strong>: <a class="reference internal" href="convolution_opt.html#sphx-glr-topic-vta-tutorials-optimize-convolution-opt-py"><span class="std std-ref">2D Convolution Optimization</span></a> (<code class="docutils literal notranslate"><span class="pre">convolution_opt.py</span></code>)</p></li>
-<li><p><strong>00:00.569</strong>: <a class="reference internal" href="matrix_multiply_opt.html#sphx-glr-topic-vta-tutorials-optimize-matrix-multiply-opt-py"><span class="std std-ref">Matrix Multiply Blocking</span></a> (<code class="docutils literal notranslate"><span class="pre">matrix_multiply_opt.py</span></code>)</p></li>
+<li><p><strong>00:02.990</strong>: <a class="reference internal" href="convolution_opt.html#sphx-glr-topic-vta-tutorials-optimize-convolution-opt-py"><span class="std std-ref">2D Convolution Optimization</span></a> (<code class="docutils literal notranslate"><span class="pre">convolution_opt.py</span></code>)</p></li>
+<li><p><strong>00:00.553</strong>: <a class="reference internal" href="matrix_multiply_opt.html#sphx-glr-topic-vta-tutorials-optimize-matrix-multiply-opt-py"><span class="std std-ref">Matrix Multiply Blocking</span></a> (<code class="docutils literal notranslate"><span class="pre">matrix_multiply_opt.py</span></code>)</p></li>
</ul>
</div>
diff --git a/docs/topic/vta/tutorials/sg_execution_times.html b/docs/topic/vta/tutorials/sg_execution_times.html
index a2b2bbfde..c0110f659 100644
--- a/docs/topic/vta/tutorials/sg_execution_times.html
+++ b/docs/topic/vta/tutorials/sg_execution_times.html
@@ -300,10 +300,10 @@
<div class="section" id="computation-times">
<span id="sphx-glr-topic-vta-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>00:01.032</strong> total execution time for <strong>topic_vta_tutorials</strong> files:</p>
+<p><strong>00:01.022</strong> total execution time for <strong>topic_vta_tutorials</strong> files:</p>
<ul class="simple">
-<li><p><strong>00:00.526</strong>: <a class="reference internal" href="matrix_multiply.html#sphx-glr-topic-vta-tutorials-matrix-multiply-py"><span class="std std-ref">Simple Matrix Multiply</span></a> (<code class="docutils literal notranslate"><span class="pre">matrix_multiply.py</span></code>)</p></li>
-<li><p><strong>00:00.506</strong>: <a class="reference internal" href="vta_get_started.html#sphx-glr-topic-vta-tutorials-vta-get-started-py"><span class="std std-ref">Get Started with VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">vta_get_started.py</span></code>)</p></li>
+<li><p><strong>00:00.519</strong>: <a class="reference internal" href="matrix_multiply.html#sphx-glr-topic-vta-tutorials-matrix-multiply-py"><span class="std std-ref">Simple Matrix Multiply</span></a> (<code class="docutils literal notranslate"><span class="pre">matrix_multiply.py</span></code>)</p></li>
+<li><p><strong>00:00.503</strong>: <a class="reference internal" href="vta_get_started.html#sphx-glr-topic-vta-tutorials-vta-get-started-py"><span class="std std-ref">Get Started with VTA</span></a> (<code class="docutils literal notranslate"><span class="pre">vta_get_started.py</span></code>)</p></li>
</ul>
</div>
diff --git a/docs/tutorial/auto_scheduler_matmul_x86.html b/docs/tutorial/auto_scheduler_matmul_x86.html
index 9be376a90..a85707cbe 100644
--- a/docs/tutorial/auto_scheduler_matmul_x86.html
+++ b/docs/tutorial/auto_scheduler_matmul_x86.html
@@ -453,7 +453,7 @@ trials, we can load the best schedule from the log file and apply it.</p>
</pre></div>
</div>
<p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>*E
</pre></div>
</div>
</div>
@@ -545,7 +545,7 @@ operator fusion.</p>
</pre></div>
</div>
<p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 93.811 ms
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>Execution time of this operator: 93.507 ms
</pre></div>
</div>
</div>
@@ -621,6 +621,7 @@ automatically optimize a matrix multiplication, without the need to specify a
search template. It ends a series of examples that starts from the Tensor
Expression (TE) language that demonstrates how TVM can optimize computational
operations.</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 6.694 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-tutorial-auto-scheduler-matmul-x86-py">
<div class="sphx-glr-download docutils container">
<p><a class="reference download internal" download="" href="../_downloads/eac4389b114db015e95cb3cdf8b86b83/auto_scheduler_matmul_x86.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">auto_scheduler_matmul_x86.py</span></code></a></p>
diff --git a/docs/tutorial/autotvm_relay_x86.html b/docs/tutorial/autotvm_relay_x86.html
index 5106076e2..11a6905eb 100644
--- a/docs/tutorial/autotvm_relay_x86.html
+++ b/docs/tutorial/autotvm_relay_x86.html
@@ -516,7 +516,7 @@ standard deviation.</p>
</pre></div>
</div>
<p class="sphx-glr-script-out">Out:</p>
-<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>{'mean': 496.7864342000008, 'median': 496.7793454000031, 'std': 0.5719564614245756}
+<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>{'mean': 496.2918664199986, 'median': 496.27590769999586, 'std': 0.634099203437212}
</pre></div>
</div>
</div>
@@ -670,179 +670,179 @@ depending on the specifics of the model and the target platform.</p>
</div>
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>[Task 1/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 1/25] Current/Best: 17.37/ 17.37 GFLOPS | Progress: (4/20) | 6.10 s
-[Task 1/25] Current/Best: 6.11/ 17.37 GFLOPS | Progress: (8/20) | 8.99 s
-[Task 1/25] Current/Best: 11.52/ 22.73 GFLOPS | Progress: (12/20) | 11.45 s
-[Task 1/25] Current/Best: 16.67/ 22.73 GFLOPS | Progress: (16/20) | 13.13 s
-[Task 1/25] Current/Best: 11.60/ 23.83 GFLOPS | Progress: (20/20) | 14.88 s Done.
+[Task 1/25] Current/Best: 17.26/ 17.26 GFLOPS | Progress: (4/20) | 5.99 s
+[Task 1/25] Current/Best: 6.16/ 17.26 GFLOPS | Progress: (8/20) | 8.92 s
+[Task 1/25] Current/Best: 11.57/ 22.77 GFLOPS | Progress: (12/20) | 11.34 s
+[Task 1/25] Current/Best: 16.81/ 22.87 GFLOPS | Progress: (16/20) | 13.01 s
+[Task 1/25] Current/Best: 11.64/ 23.88 GFLOPS | Progress: (20/20) | 14.74 s Done.
[Task 2/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/20) | 0.00 s
-[Task 2/25] Current/Best: 12.27/ 13.24 GFLOPS | Progress: (4/20) | 3.87 s
-[Task 2/25] Current/Best: 13.75/ 18.16 GFLOPS | Progress: (8/20) | 5.17 s
-[Task 2/25] Current/Best: 21.02/ 21.02 GFLOPS | Progress: (12/20) | 6.48 s
-[Task 2/25] Current/Best: 12.49/ 21.02 GFLOPS | Progress: (16/20) | 7.74 s
-[Task 2/25] Current/Best: 18.82/ 21.02 GFLOPS | Progress: (20/20) | 9.32 s Done.
... 509 lines suppressed ...