Posted to commits@tvm.apache.org by lm...@apache.org on 2020/12/26 08:11:56 UTC

[tvm-site] branch asf-site updated: Docs build at Sat Dec 26 00:11:28 PST 2020

This is an automated email from the ASF dual-hosted git repository.

lmzheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/tvm-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 36ac1ca  Docs build at Sat Dec 26 00:11:28 PST 2020
36ac1ca is described below

commit 36ac1cab6e80c4b4520df7f54d162fe30cd92629
Author: Lianmin Zheng <li...@gmail.com>
AuthorDate: Sat Dec 26 00:11:29 2020 -0800

    Docs build at Sat Dec 26 00:11:28 PST 2020
---
 .../tune_simple_template.py                        |    2 +-
 .../tune_network_mali.ipynb}                       |   43 +-
 .../tune_simple_template.ipynb                     |    2 +-
 .../tune_conv2d_cuda.ipynb                         |    2 +-
 .../tune_relay_cuda.py                             |   10 +-
 .../tune_network_cuda.py                           |    2 +-
 .../tune_relay_mobile_gpu.ipynb                    |   11 +-
 .../tune_conv2d_cuda.py                            |    2 +-
 .../tune_relay_vta.ipynb                           |    4 +-
 .../tune_relay_vta.py                              |   33 +-
 .../tune_conv2d_layer_cuda.py                      |   34 +-
 .../696dd37904ef92773435ca321ff41bfb/from_onnx.py  |   23 +-
 .../tune_relay_cuda.ipynb                          |   11 +-
 .../tune_network_mali.py}                          |  143 +-
 .../tune_matmul_x86.py                             |   31 +-
 .../tune_network_x86.ipynb                         |    2 +-
 .../tune_network_x86.py                            |    2 +-
 .../tune_relay_arm.py                              |    4 +-
 .../tune_conv2d_layer_cuda.ipynb                   |    2 +-
 .../micro_tflite.ipynb                             |    2 +-
 .../tune_network_cuda.ipynb                        |    2 +-
 .../tune_relay_mobile_gpu.py                       |    7 +-
 .../from_onnx.ipynb                                |    8 +-
 .../tune_matmul_x86.ipynb                          |    9 +-
 .../tune_relay_arm.ipynb                           |    4 +-
 .../micro_tflite.py                                |    3 +
 docs/_images/sphx_glr_tune_network_mali_thumb.png  |  Bin 0 -> 26786 bytes
 docs/_sources/install/from_source.rst.txt          |    2 +-
 docs/_sources/langref/relay_pattern.rst.txt        |   19 +
 .../auto_scheduler/sg_execution_times.rst.txt      |   11 +-
 .../auto_scheduler/tune_conv2d_layer_cuda.rst.txt  | 1044 +---
 .../auto_scheduler/tune_matmul_x86.rst.txt         |   47 +-
 .../auto_scheduler/tune_network_cuda.rst.txt       |    4 +-
 ...twork_x86.rst.txt => tune_network_mali.rst.txt} |  443 +-
 .../auto_scheduler/tune_network_x86.rst.txt        |   11 +-
 .../tutorials/autotvm/sg_execution_times.rst.txt   |   16 +-
 .../tutorials/autotvm/tune_conv2d_cuda.rst.txt     |   46 +-
 .../tutorials/autotvm/tune_relay_arm.rst.txt       |    4 +-
 .../tutorials/autotvm/tune_relay_cuda.rst.txt      |   10 +-
 .../autotvm/tune_relay_mobile_gpu.rst.txt          |    6 +-
 .../tutorials/autotvm/tune_simple_template.rst.txt |   22 +-
 .../tutorials/dev/sg_execution_times.rst.txt       |    8 +-
 .../frontend/deploy_model_on_android.rst.txt       |    2 +-
 .../deploy_object_detection_pytorch.rst.txt        |    5 +-
 .../tutorials/frontend/deploy_prequantized.rst.txt |    6 +-
 .../frontend/deploy_prequantized_tflite.rst.txt    |    4 +-
 .../tutorials/frontend/deploy_ssd_gluoncv.rst.txt  |    2 +-
 .../_sources/tutorials/frontend/from_mxnet.rst.txt |    8 +
 docs/_sources/tutorials/frontend/from_onnx.rst.txt |   25 +-
 .../tutorials/frontend/from_pytorch.rst.txt        |    2 +-
 .../tutorials/frontend/from_tensorflow.rst.txt     | 2108 +------
 .../tutorials/frontend/sg_execution_times.rst.txt  |   40 +-
 .../get_started/cross_compilation_and_rpc.rst.txt  |    2 +-
 .../get_started/relay_quick_start.rst.txt          |    2 +-
 .../get_started/sg_execution_times.rst.txt         |   10 +-
 .../get_started/tensor_expr_get_started.rst.txt    |    2 +-
 docs/_sources/tutorials/index.rst.txt              |   20 +
 .../tutorials/language/schedule_primitives.rst.txt |   14 +-
 .../tutorials/language/sg_execution_times.rst.txt  |   18 +-
 docs/_sources/tutorials/language/tensorize.rst.txt |    8 +-
 .../tutorials/language/tuple_inputs.rst.txt        |   22 +-
 docs/_sources/tutorials/micro/micro_tflite.rst.txt |    3 +
 .../tutorials/micro/sg_execution_times.rst.txt     |    6 +-
 .../tutorials/optimize/opt_conv_cuda.rst.txt       |    2 +-
 .../tutorials/optimize/opt_conv_tensorcore.rst.txt |    2 +-
 docs/_sources/tutorials/optimize/opt_gemm.rst.txt  |   20 +-
 .../tutorials/optimize/sg_execution_times.rst.txt  |   10 +-
 docs/_sources/tutorials/topi/intro_topi.rst.txt    |    2 +-
 .../tutorials/topi/sg_execution_times.rst.txt      |    4 +-
 .../tutorials/autotvm/sg_execution_times.rst.txt   |    4 +-
 .../vta/tutorials/autotvm/tune_relay_vta.rst.txt   |   35 +-
 .../frontend/deploy_classification.rst.txt         |    4 +-
 .../tutorials/frontend/sg_execution_times.rst.txt  |    4 +-
 .../_sources/vta/tutorials/matrix_multiply.rst.txt |    4 +-
 .../vta/tutorials/optimize/convolution_opt.rst.txt |   12 +-
 .../tutorials/optimize/matrix_multiply_opt.rst.txt |   16 +-
 .../tutorials/optimize/sg_execution_times.rst.txt  |    6 +-
 .../vta/tutorials/sg_execution_times.rst.txt       |    6 +-
 docs/api/doxygen/annotated.html                    |  480 +-
 docs/api/doxygen/auto__schedule_8h_source.html     |    4 +-
 docs/api/doxygen/builtin_8h.html                   |    3 +
 docs/api/doxygen/builtin_8h_source.html            |   31 +-
 docs/api/doxygen/c__runtime__api_8h.html           |    2 +-
 docs/api/doxygen/c__runtime__api_8h__dep__incl.svg | 1292 ++--
 docs/api/doxygen/classes.html                      |  341 +-
 docs/api/doxygen/classtvm_1_1BaseAttrsNode.html    |   10 +-
 ...stvm_1_1auto__scheduler_1_1MeasureCallback.html |    2 +-
 ..._1_1auto__scheduler_1_1MeasureCallbackNode.html |    4 +-
 ...uler_1_1MeasureCallbackNode__inherit__graph.svg |   74 +-
 ...cheduler_1_1MeasureCallback__inherit__graph.svg |   76 +-
 ...ler_1_1PythonBasedMeasureCallback-members.html} |   12 +-
 ...__scheduler_1_1PythonBasedMeasureCallback.html} |   79 +-
 ...1_1PythonBasedMeasureCallbackNode-members.html} |   14 +-
 ...heduler_1_1PythonBasedMeasureCallbackNode.html} |   83 +-
 ...PythonBasedMeasureCallbackNode__coll__graph.svg |   72 +
 ...onBasedMeasureCallbackNode__inherit__graph.svg} |   44 +-
 ..._1_1PythonBasedMeasureCallback__coll__graph.svg |   58 +
 ...1PythonBasedMeasureCallback__inherit__graph.svg |   58 +
 .../classtvm_1_1relay_1_1CallPattern-members.html  |    2 +-
 .../doxygen/classtvm_1_1relay_1_1CallPattern.html  |   20 +-
 ...asstvm_1_1relay_1_1CallPatternNode-members.html |    8 +-
 .../classtvm_1_1relay_1_1CallPatternNode.html      |   39 +-
 ...vm_1_1relay_1_1CallPatternNode__coll__graph.svg |  117 +-
 ...1_1relay_1_1CallPatternNode__inherit__graph.svg |   52 +-
 .../doxygen/classtvm_1_1relay_1_1DFPattern.html    |    2 +-
 ...Pattern_01_6n_00_01Args_8_8_8_08_4-members.html |   19 +-
 ...nst_01DFPattern_01_6n_00_01Args_8_8_8_08_4.html |   38 +
 ...ern_01_6n_00_01Args_8_8_8_08_4__coll__graph.svg |    2 +-
 .../classtvm_1_1relay_1_1DFPatternNode.html        |    2 +-
 ...m_1_1relay_1_1DFPatternNode__inherit__graph.svg |  328 +-
 ...sstvm_1_1relay_1_1DFPatternVisitor-members.html |   15 +-
 .../classtvm_1_1relay_1_1DFPatternVisitor.html     |   30 +-
 ...m_1_1relay_1_1DFPatternVisitor__coll__graph.svg |   39 +-
 ..._1relay_1_1DFPatternVisitor__inherit__graph.svg |   39 +-
 ...sstvm_1_1relay_1_1DFPattern__inherit__graph.svg |  315 +-
 ...sstvm_1_1relay_1_1FunctionPattern-members.html} |   13 +-
 ...l => classtvm_1_1relay_1_1FunctionPattern.html} |   92 +-
 ...m_1_1relay_1_1FunctionPatternNode-members.html} |   15 +-
 ... classtvm_1_1relay_1_1FunctionPatternNode.html} |  112 +-
 ..._1relay_1_1FunctionPatternNode__coll__graph.svg |   88 +
 ...elay_1_1FunctionPatternNode__inherit__graph.svg |   56 +
 ...vm_1_1relay_1_1FunctionPattern__coll__graph.svg |   58 +
 ...1_1relay_1_1FunctionPattern__inherit__graph.svg |   58 +
 .../doxygen/classtvm_1_1runtime_1_1DataType.html   |    8 +-
 docs/api/doxygen/compute__dag_8h.html              |    9 +-
 docs/api/doxygen/compute__dag_8h_source.html       |    3 +-
 docs/api/doxygen/crt_2packed__func_8h.html         |    2 +-
 docs/api/doxygen/crt_2packed__func_8h__incl.svg    |   95 +-
 docs/api/doxygen/crt_8h.html                       |   37 +-
 docs/api/doxygen/crt_8h_source.html                |    4 +-
 docs/api/doxygen/dataflow__pattern_8h.html         |    6 +
 docs/api/doxygen/dataflow__pattern_8h_source.html  |  100 +-
 .../dataflow__pattern__functor_8h_source.html      |   25 +-
 .../dir_a2900df4deca8dd2bcded616f0fe650a.html      |    2 +-
 docs/api/doxygen/error__codes_8h.html              |   13 +-
 docs/api/doxygen/error__codes_8h__dep__incl.svg    |  176 +-
 docs/api/doxygen/error__codes_8h_source.html       |   19 +-
 docs/api/doxygen/files.html                        |    2 +-
 docs/api/doxygen/functions.html                    |    2 +
 docs/api/doxygen/functions_a.html                  |   16 +-
 docs/api/doxygen/functions_b.html                  |    3 +-
 docs/api/doxygen/functions_c.html                  |    8 +-
 docs/api/doxygen/functions_f.html                  |   12 +-
 docs/api/doxygen/functions_func_c.html             |   11 +-
 docs/api/doxygen/functions_func_f.html             |    9 +-
 docs/api/doxygen/functions_func_p.html             |    3 +
 docs/api/doxygen/functions_func_r.html             |    2 +-
 docs/api/doxygen/functions_func_t.html             |   14 +-
 docs/api/doxygen/functions_func_v.html             |   21 +-
 docs/api/doxygen/functions_i.html                  |    2 +-
 docs/api/doxygen/functions_p.html                  |    8 +-
 docs/api/doxygen/functions_r.html                  |    2 +-
 docs/api/doxygen/functions_s.html                  |    9 +-
 docs/api/doxygen/functions_t.html                  |   19 +-
 docs/api/doxygen/functions_v.html                  |   40 +-
 docs/api/doxygen/functions_vars.html               |    2 +
 docs/api/doxygen/functions_vars_a.html             |    8 +-
 docs/api/doxygen/functions_vars_b.html             |    1 +
 docs/api/doxygen/functions_vars_c.html             |    3 +
 docs/api/doxygen/functions_vars_f.html             |    3 +
 docs/api/doxygen/functions_vars_i.html             |    2 +-
 docs/api/doxygen/functions_vars_p.html             |    1 +
 docs/api/doxygen/functions_vars_s.html             |    5 +-
 docs/api/doxygen/functions_vars_t.html             |    1 -
 docs/api/doxygen/functions_vars_v.html             |    3 +
 docs/api/doxygen/globals.html                      |    1 +
 docs/api/doxygen/globals_e.html                    |    1 +
 docs/api/doxygen/globals_eval.html                 |    3 +
 docs/api/doxygen/globals_f.html                    |    1 +
 docs/api/doxygen/globals_func.html                 |   38 +-
 docs/api/doxygen/globals_g.html                    |    1 +
 docs/api/doxygen/globals_i.html                    |    1 +
 docs/api/doxygen/globals_k.html                    |    4 +
 .../api/doxygen/{globals_s.html => globals_m.html} |   13 +-
 docs/api/doxygen/globals_p.html                    |    1 +
 docs/api/doxygen/globals_r.html                    |    1 +
 docs/api/doxygen/globals_s.html                    |    1 +
 docs/api/doxygen/globals_t.html                    |   13 +-
 docs/api/doxygen/globals_type.html                 |    3 +
 docs/api/doxygen/globals_u.html                    |    3 +-
 docs/api/doxygen/globals_v.html                    |   10 +-
 docs/api/doxygen/graph__runtime_8h.html            |   35 +-
 docs/api/doxygen/graph__runtime_8h__incl.svg       |  167 +-
 docs/api/doxygen/graph__runtime_8h_source.html     |    6 +-
 docs/api/doxygen/hierarchy.html                    | 1663 +++---
 docs/api/doxygen/inherit_graph_10.svg              |   38 +-
 docs/api/doxygen/inherit_graph_100.svg             |   12 +-
 docs/api/doxygen/inherit_graph_101.svg             |   12 +-
 docs/api/doxygen/inherit_graph_102.svg             |   15 +-
 docs/api/doxygen/inherit_graph_103.svg             |   15 +-
 docs/api/doxygen/inherit_graph_104.svg             |   12 +-
 docs/api/doxygen/inherit_graph_105.svg             |  145 +-
 docs/api/doxygen/inherit_graph_106.svg             |  146 +-
 docs/api/doxygen/inherit_graph_107.svg             |   12 +-
 docs/api/doxygen/inherit_graph_108.svg             |   12 +-
 docs/api/doxygen/inherit_graph_109.svg             |   15 +-
 docs/api/doxygen/inherit_graph_11.svg              |   25 +-
 docs/api/doxygen/inherit_graph_110.svg             |   16 +-
 docs/api/doxygen/inherit_graph_111.svg             |   17 +-
 docs/api/doxygen/inherit_graph_112.svg             |   12 +-
 docs/api/doxygen/inherit_graph_113.svg             |   12 +-
 docs/api/doxygen/inherit_graph_114.svg             |   12 +-
 docs/api/doxygen/inherit_graph_115.svg             |   17 +-
 docs/api/doxygen/inherit_graph_116.svg             |   16 +-
 docs/api/doxygen/inherit_graph_117.svg             |   15 +-
 docs/api/doxygen/inherit_graph_118.svg             |   17 +-
 docs/api/doxygen/inherit_graph_119.svg             |   17 +-
 docs/api/doxygen/inherit_graph_12.svg              |   23 +-
 docs/api/doxygen/inherit_graph_120.svg             |   14 +-
 docs/api/doxygen/inherit_graph_121.svg             |   15 +-
 docs/api/doxygen/inherit_graph_122.svg             |   12 +-
 docs/api/doxygen/inherit_graph_123.svg             |   54 +-
 docs/api/doxygen/inherit_graph_124.svg             |   58 +-
 docs/api/doxygen/inherit_graph_125.svg             |   19 +-
 docs/api/doxygen/inherit_graph_126.svg             |    4 +-
 docs/api/doxygen/inherit_graph_127.svg             |   19 +-
 docs/api/doxygen/inherit_graph_128.svg             |   21 +-
 docs/api/doxygen/inherit_graph_129.svg             |   18 +-
 docs/api/doxygen/inherit_graph_13.svg              |   28 +-
 docs/api/doxygen/inherit_graph_130.svg             |   15 +-
 docs/api/doxygen/inherit_graph_131.svg             |   12 +-
 docs/api/doxygen/inherit_graph_132.svg             |   12 +-
 docs/api/doxygen/inherit_graph_133.svg             |   12 +-
 docs/api/doxygen/inherit_graph_134.svg             |   15 +-
 docs/api/doxygen/inherit_graph_135.svg             |   15 +-
 docs/api/doxygen/inherit_graph_136.svg             |   12 +-
 docs/api/doxygen/inherit_graph_137.svg             |   15 +-
 docs/api/doxygen/inherit_graph_138.svg             |   15 +-
 docs/api/doxygen/inherit_graph_139.svg             |   15 +-
 docs/api/doxygen/inherit_graph_14.svg              |    4 +-
 docs/api/doxygen/inherit_graph_140.svg             |   15 +-
 docs/api/doxygen/inherit_graph_141.svg             |   12 +-
 docs/api/doxygen/inherit_graph_142.svg             |   12 +-
 docs/api/doxygen/inherit_graph_143.svg             |   12 +-
 docs/api/doxygen/inherit_graph_144.svg             |   12 +-
 docs/api/doxygen/inherit_graph_145.svg             |   15 +-
 docs/api/doxygen/inherit_graph_146.svg             |   17 +-
 docs/api/doxygen/inherit_graph_147.svg             |   16 +-
 docs/api/doxygen/inherit_graph_148.svg             |   15 +-
 docs/api/doxygen/inherit_graph_149.svg             |   14 +-
 docs/api/doxygen/inherit_graph_15.svg              |    4 +-
 docs/api/doxygen/inherit_graph_150.svg             |   12 +-
 docs/api/doxygen/inherit_graph_151.svg             |   69 +-
 docs/api/doxygen/inherit_graph_152.svg             |   54 +-
 docs/api/doxygen/inherit_graph_153.svg             |   72 +-
 docs/api/doxygen/inherit_graph_154.svg             |   19 +-
 docs/api/doxygen/inherit_graph_155.svg             |   15 +-
 docs/api/doxygen/inherit_graph_156.svg             |   15 +-
 docs/api/doxygen/inherit_graph_157.svg             |   27 +-
 docs/api/doxygen/inherit_graph_158.svg             |   24 +-
 docs/api/doxygen/inherit_graph_159.svg             |   28 +-
 docs/api/doxygen/inherit_graph_16.svg              |   15 +-
 docs/api/doxygen/inherit_graph_160.svg             |   12 +-
 docs/api/doxygen/inherit_graph_161.svg             |   12 +-
 docs/api/doxygen/inherit_graph_162.svg             |   12 +-
 docs/api/doxygen/inherit_graph_163.svg             |   12 +-
 docs/api/doxygen/inherit_graph_164.svg             |   12 +-
 docs/api/doxygen/inherit_graph_165.svg             |   12 +-
 docs/api/doxygen/inherit_graph_166.svg             |   12 +-
 docs/api/doxygen/inherit_graph_167.svg             |   12 +-
 docs/api/doxygen/inherit_graph_168.svg             |   12 +-
 docs/api/doxygen/inherit_graph_169.svg             |   12 +-
 docs/api/doxygen/inherit_graph_17.svg              |   15 +-
 ...inherit_graph_169.svg => inherit_graph_170.svg} |    0
 docs/api/doxygen/inherit_graph_18.svg              |   12 +-
 docs/api/doxygen/inherit_graph_19.svg              |    4 +-
 docs/api/doxygen/inherit_graph_2.svg               |   12 +-
 docs/api/doxygen/inherit_graph_20.svg              |   41 +-
 docs/api/doxygen/inherit_graph_21.svg              |   37 +-
 docs/api/doxygen/inherit_graph_22.svg              |   25 +-
 docs/api/doxygen/inherit_graph_23.svg              |   12 +-
 docs/api/doxygen/inherit_graph_24.svg              |   12 +-
 docs/api/doxygen/inherit_graph_25.svg              |   12 +-
 docs/api/doxygen/inherit_graph_26.svg              |   15 +-
 docs/api/doxygen/inherit_graph_27.svg              |   14 +-
 docs/api/doxygen/inherit_graph_28.svg              |   15 +-
 docs/api/doxygen/inherit_graph_29.svg              |   12 +-
 docs/api/doxygen/inherit_graph_3.svg               |   12 +-
 docs/api/doxygen/inherit_graph_30.svg              |   15 +-
 docs/api/doxygen/inherit_graph_31.svg              |   15 +-
 docs/api/doxygen/inherit_graph_32.svg              |   15 +-
 docs/api/doxygen/inherit_graph_33.svg              |   14 +-
 docs/api/doxygen/inherit_graph_34.svg              |    4 +-
 docs/api/doxygen/inherit_graph_35.svg              |   14 +-
 docs/api/doxygen/inherit_graph_36.svg              |    4 +-
 docs/api/doxygen/inherit_graph_37.svg              |   54 +-
 docs/api/doxygen/inherit_graph_38.svg              |   54 +-
 docs/api/doxygen/inherit_graph_39.svg              |    4 +-
 docs/api/doxygen/inherit_graph_4.svg               |   15 +-
 docs/api/doxygen/inherit_graph_40.svg              |   27 +-
 docs/api/doxygen/inherit_graph_41.svg              |   25 +-
 docs/api/doxygen/inherit_graph_42.svg              |   24 +-
 docs/api/doxygen/inherit_graph_43.svg              |   12 +-
 docs/api/doxygen/inherit_graph_44.svg              |   17 +-
 docs/api/doxygen/inherit_graph_45.svg              |    4 +-
 docs/api/doxygen/inherit_graph_46.svg              |   17 +-
 docs/api/doxygen/inherit_graph_47.svg              |   12 +-
 docs/api/doxygen/inherit_graph_48.svg              |   14 +-
 docs/api/doxygen/inherit_graph_49.svg              |    4 +-
 docs/api/doxygen/inherit_graph_5.svg               |   15 +-
 docs/api/doxygen/inherit_graph_50.svg              |    4 +-
 docs/api/doxygen/inherit_graph_51.svg              |    4 +-
 docs/api/doxygen/inherit_graph_52.svg              |    4 +-
 docs/api/doxygen/inherit_graph_53.svg              |   15 +-
 docs/api/doxygen/inherit_graph_54.svg              |   15 +-
 docs/api/doxygen/inherit_graph_55.svg              |    4 +-
 docs/api/doxygen/inherit_graph_56.svg              |   17 +-
 docs/api/doxygen/inherit_graph_57.svg              |   16 +-
 docs/api/doxygen/inherit_graph_58.svg              |   12 +-
 docs/api/doxygen/inherit_graph_59.svg              |   12 +-
 docs/api/doxygen/inherit_graph_6.svg               |   12 +-
 docs/api/doxygen/inherit_graph_60.svg              |   15 +-
 docs/api/doxygen/inherit_graph_61.svg              | 3076 +---------
 docs/api/doxygen/inherit_graph_62.svg              | 3102 +++++++++-
 docs/api/doxygen/inherit_graph_63.svg              |   12 +-
 docs/api/doxygen/inherit_graph_64.svg              |   16 +-
 docs/api/doxygen/inherit_graph_65.svg              |   16 +-
 docs/api/doxygen/inherit_graph_66.svg              |   12 +-
 docs/api/doxygen/inherit_graph_67.svg              |   15 +-
 docs/api/doxygen/inherit_graph_68.svg              |   14 +-
 docs/api/doxygen/inherit_graph_69.svg              |   12 +-
 docs/api/doxygen/inherit_graph_7.svg               |   12 +-
 docs/api/doxygen/inherit_graph_70.svg              |   25 +-
 docs/api/doxygen/inherit_graph_71.svg              |   37 +-
 docs/api/doxygen/inherit_graph_72.svg              |   36 +-
 docs/api/doxygen/inherit_graph_73.svg              |   12 +-
 docs/api/doxygen/inherit_graph_74.svg              |   38 +-
 docs/api/doxygen/inherit_graph_75.svg              |   41 +-
 docs/api/doxygen/inherit_graph_76.svg              |   12 +-
 docs/api/doxygen/inherit_graph_77.svg              |   15 +-
 docs/api/doxygen/inherit_graph_78.svg              |   25 +-
 docs/api/doxygen/inherit_graph_79.svg              |   25 +-
 docs/api/doxygen/inherit_graph_8.svg               |   12 +-
 docs/api/doxygen/inherit_graph_80.svg              |   25 +-
 docs/api/doxygen/inherit_graph_81.svg              |   28 +-
 docs/api/doxygen/inherit_graph_82.svg              |   15 +-
 docs/api/doxygen/inherit_graph_83.svg              |   15 +-
 docs/api/doxygen/inherit_graph_84.svg              |   12 +-
 docs/api/doxygen/inherit_graph_85.svg              |   15 +-
 docs/api/doxygen/inherit_graph_86.svg              | 6162 +------------------
 docs/api/doxygen/inherit_graph_87.svg              | 6216 +++++++++++++++++++-
 docs/api/doxygen/inherit_graph_88.svg              |   14 +-
 docs/api/doxygen/inherit_graph_89.svg              |    4 +-
 docs/api/doxygen/inherit_graph_9.svg               |   37 +-
 docs/api/doxygen/inherit_graph_90.svg              |   12 +-
 docs/api/doxygen/inherit_graph_91.svg              |   12 +-
 docs/api/doxygen/inherit_graph_92.svg              |   12 +-
 docs/api/doxygen/inherit_graph_93.svg              |   12 +-
 docs/api/doxygen/inherit_graph_94.svg              |    4 +-
 docs/api/doxygen/inherit_graph_95.svg              |   15 +-
 docs/api/doxygen/inherit_graph_96.svg              |   15 +-
 docs/api/doxygen/inherit_graph_97.svg              |   25 +-
 docs/api/doxygen/inherit_graph_98.svg              |  213 +-
 docs/api/doxygen/inherit_graph_99.svg              |  215 +-
 docs/api/doxygen/inherits.html                     |  308 +-
 docs/api/doxygen/measure_8h.html                   |    6 +
 docs/api/doxygen/measure_8h_source.html            |   77 +-
 docs/api/doxygen/measure__record_8h.html           |   14 +-
 docs/api/doxygen/measure__record_8h_source.html    |   21 +-
 docs/api/doxygen/namespacemembers.html             |    6 +
 docs/api/doxygen/namespacemembers_func.html        |    3 +
 docs/api/doxygen/namespacemembers_func_g.html      |    5 +-
 docs/api/doxygen/namespacemembers_func_w.html      |    2 +-
 docs/api/doxygen/namespacemembers_g.html           |    7 +-
 docs/api/doxygen/namespacemembers_s.html           |    2 +-
 docs/api/doxygen/namespacemembers_vars.html        |    8 +
 docs/api/doxygen/namespacemembers_w.html           |    2 +-
 .../doxygen/namespacetvm_1_1auto__scheduler.html   |   76 +-
 docs/api/doxygen/namespacetvm_1_1relay.html        |    9 +
 .../doxygen/namespacetvm_1_1tir_1_1builtin.html    |   20 +
 docs/api/doxygen/platform_8h.html                  |   89 +-
 docs/api/doxygen/platform_8h__incl.svg             |   81 +-
 docs/api/doxygen/platform_8h_source.html           |    5 +-
 docs/api/doxygen/relay_2attrs_2nn_8h.html          |    3 +
 docs/api/doxygen/relay_2attrs_2nn_8h_source.html   |  237 +-
 docs/api/doxygen/runtime_2crt_2memory_8h.html      |  103 +-
 docs/api/doxygen/runtime_2crt_2memory_8h__incl.svg |   80 +-
 .../doxygen/runtime_2crt_2memory_8h_source.html    |   11 +-
 docs/api/doxygen/search/all_0.js                   |    2 +-
 docs/api/doxygen/search/all_1.js                   |   25 +-
 docs/api/doxygen/search/all_10.js                  |   17 +-
 docs/api/doxygen/search/all_12.js                  |    6 +-
 docs/api/doxygen/search/all_13.js                  |   21 +-
 docs/api/doxygen/search/all_14.js                  |   48 +-
 docs/api/doxygen/search/all_15.js                  |    2 +-
 docs/api/doxygen/search/all_16.js                  |   23 +-
 docs/api/doxygen/search/all_17.js                  |    2 +-
 docs/api/doxygen/search/all_2.js                   |   11 +-
 docs/api/doxygen/search/all_3.js                   |   15 +-
 docs/api/doxygen/search/all_5.js                   |    4 +-
 docs/api/doxygen/search/all_6.js                   |   21 +-
 docs/api/doxygen/search/all_7.js                   |    1 +
 docs/api/doxygen/search/all_9.js                   |    6 +-
 docs/api/doxygen/search/all_b.js                   |    1 +
 docs/api/doxygen/search/all_c.js                   |    2 +-
 docs/api/doxygen/search/all_d.js                   |    8 +-
 docs/api/doxygen/search/classes_0.js               |    1 +
 docs/api/doxygen/search/classes_1.js               |    3 +-
 docs/api/doxygen/search/classes_10.js              |    2 +-
 docs/api/doxygen/search/classes_13.js              |    2 +-
 docs/api/doxygen/search/classes_4.js               |    4 +-
 docs/api/doxygen/search/classes_5.js               |    2 +
 docs/api/doxygen/search/classes_8.js               |    2 +-
 docs/api/doxygen/search/classes_9.js               |    2 +-
 docs/api/doxygen/search/classes_a.js               |    1 +
 docs/api/doxygen/search/classes_d.js               |    2 +
 docs/api/doxygen/search/classes_f.js               |    2 +-
 docs/api/doxygen/search/enumvalues_5.js            |    1 +
 docs/api/doxygen/search/functions_1.js             |    1 +
 docs/api/doxygen/search/functions_10.js            |    3 +-
 docs/api/doxygen/search/functions_12.js            |    2 +-
 docs/api/doxygen/search/functions_14.js            |   18 +-
 docs/api/doxygen/search/functions_15.js            |    2 +-
 docs/api/doxygen/search/functions_16.js            |    7 +-
 docs/api/doxygen/search/functions_17.js            |    2 +-
 docs/api/doxygen/search/functions_3.js             |    4 +-
 docs/api/doxygen/search/functions_6.js             |    1 +
 docs/api/doxygen/search/functions_7.js             |    1 +
 docs/api/doxygen/search/functions_d.js             |    1 +
 docs/api/doxygen/search/typedefs_9.js              |    1 +
 docs/api/doxygen/search/variables_0.js             |    2 +-
 docs/api/doxygen/search/variables_1.js             |    6 +-
 docs/api/doxygen/search/variables_11.js            |    3 +-
 docs/api/doxygen/search/variables_12.js            |    2 +-
 docs/api/doxygen/search/variables_14.js            |    2 +-
 docs/api/doxygen/search/variables_2.js             |    2 +-
 docs/api/doxygen/search/variables_3.js             |    1 +
 docs/api/doxygen/search/variables_6.js             |    1 +
 docs/api/doxygen/search/variables_9.js             |    2 +-
 docs/api/doxygen/search/variables_f.js             |    2 +-
 docs/api/doxygen/search__policy_8h_source.html     |    2 +-
 ...l => structMemoryManagerInterface-members.html} |   14 +-
 docs/api/doxygen/structMemoryManagerInterface.html |  187 +
 .../structMemoryManagerInterface__coll__graph.svg  |   24 +
 ...ttvm_1_1relay_1_1BatchMatmulAttrs-members.html} |   24 +-
 ...=> structtvm_1_1relay_1_1BatchMatmulAttrs.html} |   57 +-
 ...m_1_1relay_1_1BatchMatmulAttrs__coll__graph.svg |  179 +
 ..._1relay_1_1BatchMatmulAttrs__inherit__graph.svg |   88 +
 .../structtvm_1_1relay_1_1Conv2DAttrs-members.html |    2 +-
 .../doxygen/structtvm_1_1relay_1_1Conv2DAttrs.html |   10 +-
 ...ucttvm_1_1relay_1_1Conv2DAttrs__coll__graph.svg |  344 +-
 .../structtvm_1_1relay_1_1DenseAttrs-members.html  |   31 +-
 .../doxygen/structtvm_1_1relay_1_1DenseAttrs.html  |   18 +-
 ...ructtvm_1_1relay_1_1DenseAttrs__coll__graph.svg |  306 +-
 ...ttvm_1_1relay_1_1DenseAttrs__inherit__graph.svg |  102 +-
 ...vm_1_1relay_1_1GetValidCountsAttrs-members.html |    2 +-
 .../structtvm_1_1relay_1_1GetValidCountsAttrs.html |   10 +-
 ..._1relay_1_1GetValidCountsAttrs__coll__graph.svg |   98 +-
 ...elay_1_1NonMaximumSuppressionAttrs-members.html |    2 +-
 ...tvm_1_1relay_1_1NonMaximumSuppressionAttrs.html |   10 +-
 ..._1_1NonMaximumSuppressionAttrs__coll__graph.svg |  150 +-
 ...cttvm_1_1relay_1_1SparseDenseAttrs-members.html |   15 +-
 .../structtvm_1_1relay_1_1SparseDenseAttrs.html    |   19 +
 ...m_1_1relay_1_1SparseDenseAttrs__coll__graph.svg |    2 +-
 ..._1relay_1_1SparseDenseAttrs__inherit__graph.svg |    2 +-
 docs/api/doxygen/utvm__rpc__server_8h.html         |   29 +-
 docs/api/doxygen/utvm__rpc__server_8h_source.html  |    4 +-
 docs/api/doxygen/vision_8h_source.html             |   72 +-
 .../javadoc/org/apache/tvm/class-use/Function.html |   12 +-
 .../javadoc/org/apache/tvm/class-use/Module.html   |    8 +-
 docs/api/python/auto_scheduler.html                |   45 +-
 docs/api/python/contrib.html                       |    8 +-
 docs/api/python/relay/dataflow_pattern.html        |   38 +-
 docs/api/python/relay/index.html                   |   92 +-
 docs/api/python/relay/nn.html                      |  118 +-
 docs/api/python/relay/transform.html               |   10 +-
 docs/api/python/relay/vision.html                  |    2 +-
 docs/api/python/runtime.html                       |   14 +-
 docs/api/python/topi.html                          |  252 +-
 docs/api/rust/compiler_ext/fn.tvm_export.html      |    2 +-
 .../rust/implementors/core/convert/trait.From.js   |    2 +-
 docs/api/rust/search-index.js                      |    6 +-
 docs/api/rust/settings.html                        |    4 +-
 docs/api/rust/src/tvm_rt/array.rs.html             |   64 +-
 docs/api/rust/tvm/enum.Error.html                  |   10 +-
 docs/api/rust/tvm/enum.NDArrayError.html           |    4 +-
 docs/api/rust/tvm/errors/enum.Error.html           |   10 +-
 docs/api/rust/tvm/errors/enum.NDArrayError.html    |    4 +-
 docs/api/rust/tvm/function/enum.ArgValue.html      |   62 +-
 docs/api/rust/tvm/function/enum.RetValue.html      |   44 +-
 docs/api/rust/tvm/function/struct.Function.html    |    8 +-
 docs/api/rust/tvm/module/struct.Module.html        |   14 +-
 docs/api/rust/tvm/ndarray/struct.NDArray.html      |    6 +-
 .../rust/tvm/ndarray/struct.NDArrayContainer.html  |    4 +-
 docs/api/rust/tvm/runtime/array/struct.Array.html  |   24 +-
 .../rust/tvm/runtime/array/struct.IntoIter.html    |    4 +-
 docs/api/rust/tvm/runtime/enum.ArgValue.html       |   62 +-
 docs/api/rust/tvm/runtime/enum.Error.html          |   10 +-
 docs/api/rust/tvm/runtime/enum.NDArrayError.html   |    4 +-
 docs/api/rust/tvm/runtime/enum.RetValue.html       |   44 +-
 docs/api/rust/tvm/runtime/errors/enum.Error.html   |   10 +-
 .../rust/tvm/runtime/errors/enum.NDArrayError.html |    4 +-
 .../rust/tvm/runtime/function/enum.ArgValue.html   |   62 +-
 .../rust/tvm/runtime/function/enum.RetValue.html   |   44 +-
 .../rust/tvm/runtime/function/struct.Function.html |    8 +-
 docs/api/rust/tvm/runtime/map/struct.Map.html      |   12 +-
 .../api/rust/tvm/runtime/module/struct.Module.html |   14 +-
 .../rust/tvm/runtime/ndarray/struct.NDArray.html   |    6 +-
 .../runtime/ndarray/struct.NDArrayContainer.html   |    4 +-
 .../rust/tvm/runtime/object/struct.ObjectPtr.html  |   14 +-
 .../rust/tvm/runtime/object/struct.ObjectRef.html  |   18 +-
 .../rust/tvm/runtime/object/trait.IsObjectRef.html |    2 +-
 .../api/rust/tvm/runtime/string/struct.String.html |    4 +-
 .../rust/tvm/runtime/string/struct.StringObj.html  |    4 +-
 docs/api/rust/tvm/runtime/struct.Function.html     |    8 +-
 docs/api/rust/tvm/runtime/struct.Module.html       |   14 +-
 docs/api/rust/tvm/runtime/struct.NDArray.html      |    6 +-
 docs/api/rust/tvm/runtime/struct.ObjectPtr.html    |   14 +-
 docs/api/rust/tvm/runtime/struct.ObjectRef.html    |   18 +-
 docs/api/rust/tvm/runtime/struct.String.html       |    4 +-
 docs/api/rust/tvm/runtime/struct.StringObj.html    |    4 +-
 docs/api/rust/tvm/runtime/trait.IsObjectRef.html   |    2 +-
 docs/api/rust/tvm/struct.Function.html             |    8 +-
 docs/api/rust/tvm/struct.Module.html               |   14 +-
 docs/api/rust/tvm/struct.NDArray.html              |    6 +-
 docs/api/rust/tvm_graph_rt/struct.Entry.html       |    6 +-
 docs/api/rust/tvm_graph_rt/struct.Graph.html       |    6 +-
 docs/api/rust/tvm_graph_rt/struct.Node.html        |    6 +-
 docs/api/rust/tvm_rt/array/index.html              |    2 +-
 docs/api/rust/tvm_rt/array/struct.Array.html       |   24 +-
 docs/api/rust/tvm_rt/array/struct.IntoIter.html    |    4 +-
 docs/api/rust/tvm_rt/enum.ArgValue.html            |    6 +-
 docs/api/rust/tvm_rt/enum.RetValue.html            |    6 +-
 docs/api/rust/tvm_rt/function/enum.ArgValue.html   |    6 +-
 docs/api/rust/tvm_rt/function/enum.RetValue.html   |    6 +-
 docs/api/rust/tvm_rt/object/trait.IsObjectRef.html |    2 +-
 docs/api/typedoc/assets/js/main.js                 |    2 +-
 docs/api/typedoc/classes/bytestreamreader.html     |   13 +-
 docs/api/typedoc/classes/cachedcallstack.html      |   35 +-
 docs/api/typedoc/classes/dlcontext.html            |   11 +-
 docs/api/typedoc/classes/dldatatype.html           |   13 +-
 docs/api/typedoc/classes/environment.html          |   13 +-
 docs/api/typedoc/classes/ffilibrary.html           |   21 +-
 docs/api/typedoc/classes/graphruntime.html         |   17 +-
 docs/api/typedoc/classes/instance.html             |   41 +-
 docs/api/typedoc/classes/memory.html               |   35 +-
 docs/api/typedoc/classes/module.html               |   11 +-
 docs/api/typedoc/classes/ndarray.html              |   23 +-
 docs/api/typedoc/classes/packedfunccell.html       |    7 +-
 docs/api/typedoc/classes/rpcserver.html            |   15 +-
 docs/api/typedoc/classes/scalar.html               |    7 +-
 docs/api/typedoc/classes/webgpucontext.html        |   13 +-
 docs/api/typedoc/enums/argtypecode.html            |   31 +-
 docs/api/typedoc/enums/aynccallbackcode.html       |    5 +-
 docs/api/typedoc/enums/dldatatypecode.html         |    9 +-
 docs/api/typedoc/enums/rpcserverstate.html         |   13 +-
 docs/api/typedoc/enums/sizeof.html                 |   19 +-
 docs/api/typedoc/index.html                        |  115 +-
 docs/api/typedoc/interfaces/disposable.html        |    3 +-
 docs/api/typedoc/interfaces/functioninfo.html      |    7 +-
 docs/api/typedoc/interfaces/libraryprovider.html   |    5 +-
 docs/genindex.html                                 |   26 +-
 docs/install/from_source.html                      |    2 +-
 docs/langref/relay_pattern.html                    |   15 +
 docs/objects.inv                                   |  Bin 17634 -> 17784 bytes
 docs/searchindex.js                                |    2 +-
 .../auto_scheduler/sg_execution_times.html         |   11 +-
 .../auto_scheduler/tune_conv2d_layer_cuda.html     | 1044 +---
 docs/tutorials/auto_scheduler/tune_matmul_x86.html |   41 +-
 .../auto_scheduler/tune_network_cuda.html          |    9 +-
 ...une_network_x86.html => tune_network_mali.html} |  480 +-
 .../tutorials/auto_scheduler/tune_network_x86.html |   16 +-
 docs/tutorials/autotvm/sg_execution_times.html     |   14 +-
 docs/tutorials/autotvm/tune_conv2d_cuda.html       |   46 +-
 docs/tutorials/autotvm/tune_relay_arm.html         |    6 +-
 docs/tutorials/autotvm/tune_relay_cuda.html        |   10 +-
 docs/tutorials/autotvm/tune_relay_mobile_gpu.html  |    8 +-
 docs/tutorials/autotvm/tune_simple_template.html   |   22 +-
 docs/tutorials/dev/sg_execution_times.html         |    8 +-
 .../frontend/deploy_model_on_android.html          |    2 +-
 .../frontend/deploy_object_detection_pytorch.html  |    5 +-
 docs/tutorials/frontend/deploy_prequantized.html   |    6 +-
 .../frontend/deploy_prequantized_tflite.html       |    4 +-
 docs/tutorials/frontend/deploy_ssd_gluoncv.html    |    2 +-
 docs/tutorials/frontend/from_mxnet.html            |    4 +
 docs/tutorials/frontend/from_onnx.html             |   23 +-
 docs/tutorials/frontend/from_pytorch.html          |    6 +-
 docs/tutorials/frontend/from_tensorflow.html       | 2104 +------
 docs/tutorials/frontend/sg_execution_times.html    |   40 +-
 .../get_started/cross_compilation_and_rpc.html     |    2 +-
 docs/tutorials/get_started/relay_quick_start.html  |  120 +-
 docs/tutorials/get_started/sg_execution_times.html |   10 +-
 .../get_started/tensor_expr_get_started.html       |    2 +-
 docs/tutorials/index.html                          |   33 +-
 docs/tutorials/language/schedule_primitives.html   |   14 +-
 docs/tutorials/language/sg_execution_times.html    |   18 +-
 docs/tutorials/language/tensorize.html             |    8 +-
 docs/tutorials/language/tuple_inputs.html          |   22 +-
 docs/tutorials/micro/micro_tflite.html             |    3 +
 docs/tutorials/micro/sg_execution_times.html       |    6 +-
 docs/tutorials/optimize/opt_conv_cuda.html         |    2 +-
 docs/tutorials/optimize/opt_conv_tensorcore.html   |    2 +-
 docs/tutorials/optimize/opt_gemm.html              |   20 +-
 docs/tutorials/optimize/sg_execution_times.html    |   10 +-
 docs/tutorials/topi/intro_topi.html                |    2 +-
 docs/tutorials/topi/sg_execution_times.html        |    4 +-
 docs/vta/tutorials/autotvm/sg_execution_times.html |    4 +-
 docs/vta/tutorials/autotvm/tune_relay_vta.html     |  219 +-
 .../tutorials/frontend/deploy_classification.html  |   14 +-
 .../vta/tutorials/frontend/sg_execution_times.html |    4 +-
 docs/vta/tutorials/matrix_multiply.html            |    4 +-
 docs/vta/tutorials/optimize/convolution_opt.html   |    8 +-
 .../tutorials/optimize/matrix_multiply_opt.html    |   12 +-
 .../vta/tutorials/optimize/sg_execution_times.html |    6 +-
 docs/vta/tutorials/sg_execution_times.html         |    6 +-
 605 files changed, 20145 insertions(+), 23061 deletions(-)

diff --git a/docs/_downloads/0bb862dbb3a4c434477f93fe2c147fbb/tune_simple_template.py b/docs/_downloads/0bb862dbb3a4c434477f93fe2c147fbb/tune_simple_template.py
index db199fc..d7d43c7 100644
--- a/docs/_downloads/0bb862dbb3a4c434477f93fe2c147fbb/tune_simple_template.py
+++ b/docs/_downloads/0bb862dbb3a4c434477f93fe2c147fbb/tune_simple_template.py
@@ -41,7 +41,7 @@ __name__ == "__main__":` block.
 #
 # .. code-block:: bash
 #
-#   pip3 install --user psutil xgboost
+#   pip3 install --user psutil xgboost cloudpickle
 #
 # To make TVM run faster in tuning, it is recommended to use cython
 # as FFI of TVM. In the root directory of TVM, execute
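(A quick way for readers following along to confirm that the dependencies named in the pip command above, including the newly added cloudpickle, are importable; the version printout is purely illustrative:)

    # Sanity-check the tuning dependencies named in the pip command above;
    # cloudpickle is the requirement this commit adds.
    import psutil
    import xgboost
    import cloudpickle

    print(psutil.__version__, xgboost.__version__, cloudpickle.__version__)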
diff --git a/docs/_downloads/afa7f0ecb19178546f310a1dfa66281f/tune_network_x86.ipynb b/docs/_downloads/0c8b1cb0bb1d1dff7899c341215a0f35/tune_network_mali.ipynb
similarity index 58%
copy from docs/_downloads/afa7f0ecb19178546f310a1dfa66281f/tune_network_x86.ipynb
copy to docs/_downloads/0c8b1cb0bb1d1dff7899c341215a0f35/tune_network_mali.ipynb
index e03fb03..4254721 100644
--- a/docs/_downloads/afa7f0ecb19178546f310a1dfa66281f/tune_network_x86.ipynb
+++ b/docs/_downloads/0c8b1cb0bb1d1dff7899c341215a0f35/tune_network_mali.ipynb
@@ -15,7 +15,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "\nAuto-scheduling a Neural Network for x86 CPU\n============================================\n**Author**: `Lianmin Zheng <https://github.com/merrymercy>`_\n\nAuto-tuning for specific devices and workloads is critical for getting the\nbest performance. This is a tutorial on how to tune a whole neural\nnetwork for x86 CPU with the auto-scheduler.\n\nTo auto-tune a neural network, we partition the network into small subgraphs and \ntune them independently. Each subgraph is treated  [...]
+        "\nAuto-scheduling a Neural Network for mali GPU\n=============================================\n**Author**: `Zhao Wu <https://github.com/FrozenGene>`_\n\nAuto-tuning for specific devices and workloads is critical for getting the\nbest performance. This is a tutorial on how to tune a whole neural\nnetwork for mali GPU with the auto-scheduler.\n\nTo auto-tune a neural network, we partition the network into small subgraphs and \ntune them independently. Each subgraph is treated as  [...]
       ]
     },
     {
@@ -26,7 +26,7 @@
       },
       "outputs": [],
       "source": [
-        "import numpy as np\n\nimport tvm\nfrom tvm import relay, auto_scheduler\nimport tvm.relay.testing\nfrom tvm.contrib import graph_runtime"
+        "import numpy as np\n\nimport tvm\nfrom tvm import relay, auto_scheduler\nimport tvm.relay.testing\nfrom tvm.contrib import graph_runtime\nimport os"
       ]
     },
     {
@@ -44,14 +44,14 @@
       },
       "outputs": [],
       "source": [
-        "def get_network(name, batch_size, layout=\"NHWC\", dtype=\"float32\"):\n    \"\"\"Get the symbol definition and random weight of a network\"\"\"\n\n    # auto-scheduler prefers NHWC layout\n    if layout == \"NHWC\":\n        image_shape = (224, 224, 3)\n    elif layout == \"NCHW\":\n        image_shape = (3, 224, 224)\n    else:\n        raise ValueError(\"Invalid layout: \" + layout)\n\n    input_shape = (batch_size,) + image_shape\n    output_shape = (batch_size, 1000)\n\n    [...]
+        "def get_network(name, batch_size, layout=\"NHWC\", dtype=\"float32\"):\n    \"\"\"Get the symbol definition and random weight of a network\"\"\"\n\n    # auto-scheduler prefers NHWC layout\n    if layout == \"NHWC\":\n        image_shape = (224, 224, 3)\n    elif layout == \"NCHW\":\n        image_shape = (3, 224, 224)\n    else:\n        raise ValueError(\"Invalid layout: \" + layout)\n\n    input_shape = (batch_size,) + image_shape\n    output_shape = (batch_size, 1000)\n\n    [...]
       ]
     },
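(The get_network cell above is truncated by the archive, but the layout/shape logic it quotes is simple enough to sketch in isolation; the 224x224 ImageNet shapes come straight from the quoted source:)

    # A self-contained sketch of the layout/shape logic in get_network().
    def make_shapes(batch_size, layout="NHWC"):
        if layout == "NHWC":              # auto-scheduler prefers NHWC
            image_shape = (224, 224, 3)
        elif layout == "NCHW":
            image_shape = (3, 224, 224)
        else:
            raise ValueError("Invalid layout: " + layout)
        input_shape = (batch_size,) + image_shape
        output_shape = (batch_size, 1000)
        return input_shape, output_shape

    print(make_shapes(1))  # ((1, 224, 224, 3), (1, 1000))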
     {
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "Extract Search Tasks\n--------------------\nNext, we extract the search tasks and their weights from a network.\nThe weight of a task is the number of appearances of the task's subgraph\nin the whole network.\nBy using the weight, we can approximate the end-to-end latency of the network\nas :code:`sum(latency[t] * weight[t])`, where :code:`latency[t]` is the\nlatency of a task and :code:`weight[t]` is the weight of the task.\nThe task scheduler will just optimize this objective.\n\n"
+        "Start an RPC Tracker and Register Devices to the Tracker\n--------------------------------------------------------\nPlease refer to the \"Start RPC Tracker\" and \"Register Devices to RPC Tracker\" setions\nin this `tutorial <tutorials-autotvm-start-rpc-tracker>` to start an RPC tracker\nand register devices to the tracker.\n\n"
       ]
     },
     {
@@ -62,14 +62,14 @@
       },
       "outputs": [],
       "source": [
-        "# Extract tasks from the network\nprint(\"Extract tasks...\")\nmod, params, input_shape, output_shape = get_network(network, batch_size, layout, dtype=dtype)\ntasks, task_weights = auto_scheduler.extract_tasks(mod[\"main\"], params, target)\n\nfor idx, task in enumerate(tasks):\n    print(\"========== Task %d  (workload key: %s) ==========\" % (idx, task.workload_key))\n    print(task.compute_dag)"
+        "# Replace this with the device key in your tracker\ndevice_key = \"rk3399\""
       ]
     },
     {
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "Begin Tuning\n------------\nNow, we set some options for tuning and launch the search tasks\n\n* :code:`num_measure_trials` is the number of measurement trials we can use during the tuning.\n  You can set it to a small number (e.g., 200) for a fast demonstrative run.\n  In practice, we recommend setting it around :code:`800 * len(tasks)`,\n  which is typically enough for the search to converge.\n  For example, there are 29 tasks in resnet-50, so we can set it as 20000.\n  You ca [...]
+        "Extract Search Tasks\n--------------------\nNext, we extract the search tasks and their weights from a network.\nThe weight of a task is the number of appearances of the task's subgraph\nin the whole network.\nBy using the weight, we can approximate the end-to-end latency of the network\nas :code:`sum(latency[t] * weight[t])`, where :code:`latency[t]` is the\nlatency of a task and :code:`weight[t]` is the weight of the task.\nThe task scheduler will just optimize this objective.\n\n"
       ]
     },
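(The weighted-latency objective quoted above is easy to make concrete; the numbers below are invented for illustration:)

    # Hypothetical per-task latencies (ms) and subgraph occurrence counts.
    latency = [0.12, 0.34, 0.05]
    weight = [4, 2, 1]

    # The end-to-end estimate the task scheduler optimizes:
    # sum(latency[t] * weight[t])
    estimated_ms = sum(l * w for l, w in zip(latency, weight))
    print("estimated end-to-end latency: %.2f ms" % estimated_ms)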
     {
@@ -80,28 +80,21 @@
       },
       "outputs": [],
       "source": [
-        "def run_tuning():\n    print(\"Begin tuning...\")\n    tuner = auto_scheduler.TaskScheduler(tasks, task_weights)\n    tune_option = auto_scheduler.TuningOptions(\n        num_measure_trials=200,  # change this to 20000 to achieve the best performance\n        runner=auto_scheduler.LocalRunner(repeat=10, enable_cpu_cache_flush=True),\n        measure_callbacks=[auto_scheduler.RecordToFile(log_file)],\n    )\n\n    tuner.tune(tune_option)\n\n\n# We do not run the tuning in our web [...]
-      ]
-    },
-    {
-      "cell_type": "markdown",
-      "metadata": {},
-      "source": [
-        "<div class=\"alert alert-info\"><h4>Note</h4><p>Explain the printed information during tuning\n\n  During the tuning, a lot of information will be printed on the console.\n  They are used for debugging purposes. The most important info is the output\n  of the task scheduler. The following table is a sample output.\n\n  .. code-block:: c\n\n    ----------------------------------------------------------------------\n    ------------------------------  [ Task Scheduler ]\n    ----- [...]
+        "# Extract tasks from the network\nprint(\"Extract tasks...\")\nmod, params, input_shape, output_shape = get_network(network, batch_size, layout, dtype=dtype)\ntasks, task_weights = auto_scheduler.extract_tasks(mod[\"main\"], params, target, target_host)\n\nfor idx, task in enumerate(tasks):\n    print(\"========== Task %d  (workload key: %s) ==========\" % (idx, task.workload_key))\n    print(task.compute_dag)"
       ]
     },
     {
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "<div class=\"alert alert-info\"><h4>Note</h4><p>Terminate the tuning earlier\n\n  You can terminate the tuning earlier by forcibly killing this process.\n  As long as you get at least one valid schedule for each task in the log file,\n  you should be able to do the compilation (the secion below).</p></div>\n\n\n"
+        "<div class=\"alert alert-info\"><h4>Note</h4><p>How to get the hardware parameters from remote device\n\n  .. code-block:: python\n\n    from tvm.auto_scheduler.utils import request_remote\n    remote = request_remote(device_key, \"0.0.0.0\", 9190)\n    ctx = remote.cl()\n    max_shared_memory_per_block = ctx.max_shared_memory_per_block\n    # There is no explicit local memory limition\n    # so we can use INT32_MAX to disalbe the check on local_memory.\n    max_local_memory_per [...]
       ]
     },
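(Condensed from the note above, a sketch of querying hardware parameters over RPC; the device key and tracker address mirror the tutorial's examples and may differ in your setup:)

    from tvm.auto_scheduler.utils import request_remote

    # Request a session with a device registered as "rk3399" on a local
    # tracker (0.0.0.0:9190), then read its OpenCL limits.
    remote = request_remote("rk3399", "0.0.0.0", 9190)
    ctx = remote.cl()
    print("max shared memory per block:", ctx.max_shared_memory_per_block)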
     {
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "Compile and Evaluate\n--------------------\nAfter auto-tuning, we can compile the network with the best schedules we found.\nAll measurement records are dumped into the log file during auto-tuning,\nso we can read the log file and load the best schedules.\n\n"
+        "Tuning and Evaluate\n-------------------\nNow, we set some options for tuning, launch the search tasks and evaluate the end-to-end performance\n\n* :code:`num_measure_trials` is the number of measurement trials we can use during the tuning.\n  You can set it to a small number (e.g., 200) for a fast demonstrative run.\n  In practice, we recommend setting it around :code:`800 * len(tasks)`,\n  which is typically enough for the search to converge.\n  For example, there are 29 tasks [...]
       ]
     },
     {
@@ -112,14 +105,28 @@
       },
       "outputs": [],
       "source": [
-        "# Compile with the history best\nprint(\"Compile...\")\nwith auto_scheduler.ApplyHistoryBest(log_file):\n    with tvm.transform.PassContext(opt_level=3, config={\"relay.backend.use_auto_scheduler\": True}):\n        lib = relay.build(mod, target=target, params=params)\n\n# Create graph runtime\nctx = tvm.context(str(target), 0)\nmodule = graph_runtime.GraphModule(lib[\"default\"](ctx))\ndata_tvm = tvm.nd.array((np.random.uniform(size=input_shape)).astype(dtype))\nmodule.set_inpu [...]
+        "def tune_and_evaluate():\n    print(\"Begin tuning...\")\n    tuner = auto_scheduler.TaskScheduler(tasks, task_weights)\n    tune_option = auto_scheduler.TuningOptions(\n        num_measure_trials=200,  # change this to 20000 to achieve the best performance\n        builder=auto_scheduler.LocalBuilder(build_func=\"ndk\" if use_ndk else \"default\"),\n        runner=auto_scheduler.RPCRunner(\n            device_key, host=\"0.0.0.0\", port=9190, repeat=3, timeout=50\n        ),\n  [...]
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "<div class=\"alert alert-info\"><h4>Note</h4><p>Explain the printed information during tuning\n\n  During the tuning, a lot of information will be printed on the console.\n  They are used for debugging purposes. The most important info is the output\n  of the task scheduler. The following table is a sample output.\n\n  .. code-block:: c\n\n    ----------------------------------------------------------------------\n    ------------------------------  [ Task Scheduler ]\n    ----- [...]
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "<div class=\"alert alert-info\"><h4>Note</h4><p>Terminate the tuning earlier\n\n  You can terminate the tuning earlier by forcibly killing this process.\n  As long as you get at least one valid schedule for each task in the log file,\n  you should be able to do the compilation (the secion below).</p></div>\n\n\n"
       ]
     },
     {
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "Other Tips\n----------\n1. During the tuning, the auto-scheduler needs to compile many programs and\n   extract feature from them. This part is CPU-intensive,\n   so a high-performance CPU with many cores is recommended for faster search.\n2. You can use :code:`python3 -m tvm.auto_scheduler.measure_record --mode distill --i log.json`\n   to distill the large log file and only save the best useful records.\n3. You can resume a search from the previous log file. You just need to\n [...]
+        "Other Tips\n----------\n1. During the tuning, the auto-scheduler needs to compile many programs and\n   extract feature from them. This part is CPU-intensive,\n   so a high-performance CPU with many cores is recommended for faster search.\n2. You can use :code:`python3 -m tvm.auto_scheduler.measure_record --mode distill --i log.json`\n   to distill the large log file and only save the best useful records.\n3. You can resume a search from the previous log file. You just need to\n [...]
       ]
     }
   ],
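(The tune_and_evaluate cell above, truncated by the archive, wires an RPCRunner into TuningOptions; a minimal sketch of that wiring, assuming `tasks`, `task_weights`, `log_file`, and `use_ndk` are defined by the earlier cells:)

    from tvm import auto_scheduler

    def tune(tasks, task_weights, log_file, use_ndk, device_key="rk3399"):
        tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
        tune_option = auto_scheduler.TuningOptions(
            num_measure_trials=200,  # raise toward 800 * len(tasks) for real runs
            builder=auto_scheduler.LocalBuilder(
                build_func="ndk" if use_ndk else "default"
            ),
            runner=auto_scheduler.RPCRunner(
                device_key, host="0.0.0.0", port=9190, repeat=3, timeout=50
            ),
            measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
        )
        tuner.tune(tune_option)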
diff --git a/docs/_downloads/0d95a85fc279fdff660608ef305b9107/tune_simple_template.ipynb b/docs/_downloads/0d95a85fc279fdff660608ef305b9107/tune_simple_template.ipynb
index 63b2965..f003ad0 100644
--- a/docs/_downloads/0d95a85fc279fdff660608ef305b9107/tune_simple_template.ipynb
+++ b/docs/_downloads/0d95a85fc279fdff660608ef305b9107/tune_simple_template.ipynb
@@ -22,7 +22,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "Install dependencies\n--------------------\nTo use autotvm package in TVM, we need to install some extra dependencies.\nThis step (installing xgboost) can be skipped as it doesn't need XGBoost\n(change \"3\" to \"2\" if you use python2):\n\n.. code-block:: bash\n\n  pip3 install --user psutil xgboost\n\nTo make TVM run faster in tuning, it is recommended to use cython\nas FFI of TVM. In the root directory of TVM, execute\n(change \"3\" to \"2\" if you use python2):\n\n.. code-bl [...]
+        "Install dependencies\n--------------------\nTo use autotvm package in TVM, we need to install some extra dependencies.\nThis step (installing xgboost) can be skipped as it doesn't need XGBoost\n(change \"3\" to \"2\" if you use python2):\n\n.. code-block:: bash\n\n  pip3 install --user psutil xgboost cloudpickle\n\nTo make TVM run faster in tuning, it is recommended to use cython\nas FFI of TVM. In the root directory of TVM, execute\n(change \"3\" to \"2\" if you use python2):\n [...]
       ]
     },
     {
diff --git a/docs/_downloads/2354a24ad8bc07194943c49f2fb48874/tune_conv2d_cuda.ipynb b/docs/_downloads/2354a24ad8bc07194943c49f2fb48874/tune_conv2d_cuda.ipynb
index 06ec1b5..7c7b428 100644
--- a/docs/_downloads/2354a24ad8bc07194943c49f2fb48874/tune_conv2d_cuda.ipynb
+++ b/docs/_downloads/2354a24ad8bc07194943c49f2fb48874/tune_conv2d_cuda.ipynb
@@ -22,7 +22,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "Install dependencies\n--------------------\nTo use autotvm package in tvm, we need to install some extra dependencies.\n(change \"3\" to \"2\" if you use python2):\n\n.. code-block:: bash\n\n  pip3 install --user psutil xgboost tornado\n\nTo make TVM run faster in tuning, it is recommended to use cython\nas FFI of tvm. In the root directory of tvm, execute\n\n.. code-block:: bash\n\n  pip3 install --user cython\n  sudo make cython3\n\nNow return to python code. Import packages.\n\n"
+        "Install dependencies\n--------------------\nTo use autotvm package in tvm, we need to install some extra dependencies.\n(change \"3\" to \"2\" if you use python2):\n\n.. code-block:: bash\n\n  pip3 install --user psutil xgboost tornado cloudpickle\n\nTo make TVM run faster in tuning, it is recommended to use cython\nas FFI of tvm. In the root directory of tvm, execute\n\n.. code-block:: bash\n\n  pip3 install --user cython\n  sudo make cython3\n\nNow return to python code. Impor [...]
       ]
     },
     {
diff --git a/docs/_downloads/272a5a893d007658546dc0eaf0a7aeed/tune_relay_cuda.py b/docs/_downloads/272a5a893d007658546dc0eaf0a7aeed/tune_relay_cuda.py
index 76a30ec..148ebbf 100644
--- a/docs/_downloads/272a5a893d007658546dc0eaf0a7aeed/tune_relay_cuda.py
+++ b/docs/_downloads/272a5a893d007658546dc0eaf0a7aeed/tune_relay_cuda.py
@@ -47,7 +47,7 @@ __name__ == "__main__":` block.
 #
 # .. code-block:: bash
 #
-#   pip3 install --user psutil xgboost tornado
+#   pip3 install --user psutil xgboost tornado cloudpickle
 #
 # To make TVM run faster during tuning, it is recommended to use cython
 # as FFI of tvm. In the root directory of tvm, execute:
@@ -311,12 +311,12 @@ def tune_and_evaluate(tuning_opt):
 #
 #   Finally, always feel free to ask our community for help on https://discuss.tvm.apache.org
 
+#################################################################
+# .. _tutorials-autotvm-scale-up-rpc-tracker:
 
 #################################################################
 # Scale up measurement by using multiple devices
 # ----------------------------------------------
-# .. _tutorials-autotvm-rpc-tracker:
-#
 # If you have multiple devices, you can use all of them for measurement.
 # TVM uses the RPC Tracker to manage distributed devices.
 # The RPC Tracker is a centralized controller node. We can register all devices to
@@ -337,8 +337,8 @@ def tune_and_evaluate(tuning_opt):
 #
 #   INFO:RPCTracker:bind to 0.0.0.0:9190
 #
-# Then open another new terminal for the RPC server. We need to start one server
-# for each dedicated device. We use a string key to distinguish the types of devices.
+# Then open another new terminal for the RPC server. We need to start one dedicated server
+# for each device. We use a string key to distinguish the types of devices.
 # You can pick a name you like.
 # (Note: For rocm backend, there are some internal errors with the compiler,
 # we need to add `--no-fork` to the argument list.)
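(Once the tracker is running and servers have registered, it can help to confirm what the tracker sees; a small host-side check, assuming the tracker address shown above:)

    from tvm import rpc

    # Print the tracker's view of registered servers and pending requests.
    tracker = rpc.connect_tracker("0.0.0.0", 9190)
    print(tracker.text_summary())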
diff --git a/docs/_downloads/2771a7fc8bf8eeb7788823ff349aacc0/tune_network_cuda.py b/docs/_downloads/2771a7fc8bf8eeb7788823ff349aacc0/tune_network_cuda.py
index 3da9f3f..b098869 100644
--- a/docs/_downloads/2771a7fc8bf8eeb7788823ff349aacc0/tune_network_cuda.py
+++ b/docs/_downloads/2771a7fc8bf8eeb7788823ff349aacc0/tune_network_cuda.py
@@ -306,7 +306,7 @@ print("Mean inference time (std dev): %.2f ms (%.2f ms)" % (np.mean(prof_res), n
 #    in function :code:`run_tuning`. Say,
 #    :code:`tuner = auto_scheduler.TaskScheduler(tasks, task_weights, load_log_file=log_file)`
 # 4. If you have multiple target GPUs, you can use all of them for measurements to
-#    parallelize the measurements. Check this :ref:`section <tutorials-autotvm-rpc-tracker>`
+#    parallelize the measurements. Check this :ref:`section <tutorials-autotvm-scale-up-rpc-tracker>`
 #    to learn how to use the RPC Tracker and RPC Server.
 #    To use the RPC Tracker in auto-scheduler, replace the runner in :code:`TuningOptions`
 #    with :any:`auto_scheduler.RPCRunner`.
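+#
+#    A minimal sketch of that replacement (the device key "1080ti" and the
+#    tracker address are placeholders for your own setup):
+#
+#    .. code-block:: python
+#
+#      tune_option = auto_scheduler.TuningOptions(
+#          num_measure_trials=200,
+#          runner=auto_scheduler.RPCRunner("1080ti", host="0.0.0.0", port=9190),
+#          measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
+#      )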
diff --git a/docs/_downloads/2c0ed53a9ebd68caf76cd8235fae2711/tune_relay_mobile_gpu.ipynb b/docs/_downloads/2c0ed53a9ebd68caf76cd8235fae2711/tune_relay_mobile_gpu.ipynb
index 1a78641..705ba34 100644
--- a/docs/_downloads/2c0ed53a9ebd68caf76cd8235fae2711/tune_relay_mobile_gpu.ipynb
+++ b/docs/_downloads/2c0ed53a9ebd68caf76cd8235fae2711/tune_relay_mobile_gpu.ipynb
@@ -22,7 +22,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "Install dependencies\n--------------------\nTo use the autotvm package in tvm, we need to install some extra dependencies.\n(change \"3\" to \"2\" if you use python2):\n\n.. code-block:: bash\n\n  pip3 install --user psutil xgboost tornado\n\nTo make TVM run faster during tuning, it is recommended to use cython\nas FFI of tvm. In the root directory of tvm, execute\n(change \"3\" to \"2\" if you use python2):\n\n.. code-block:: bash\n\n  pip3 install --user cython\n  sudo make cy [...]
+        "Install dependencies\n--------------------\nTo use the autotvm package in tvm, we need to install some extra dependencies.\n(change \"3\" to \"2\" if you use python2):\n\n.. code-block:: bash\n\n  pip3 install --user psutil xgboost tornado cloudpickle\n\nTo make TVM run faster during tuning, it is recommended to use cython\nas FFI of tvm. In the root directory of tvm, execute\n(change \"3\" to \"2\" if you use python2):\n\n.. code-block:: bash\n\n  pip3 install --user cython\n   [...]
       ]
     },
     {
@@ -58,6 +58,13 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
+        "\n"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
         "Start RPC Tracker\n-----------------\nTVM uses RPC session to communicate with ARM boards.\nDuring tuning, the tuner will send the generated code to the board and\nmeasure the speed of code on the board.\n\nTo scale up the tuning, TVM uses RPC Tracker to manage distributed devices.\nThe RPC Tracker is a centralized controller node. We can register all devices to\nthe tracker. For example, if we have 10 phones, we can register all of them\nto the tracker, and run 10 measurements  [...]
       ]
     },
@@ -65,7 +72,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "Register devices to RPC Tracker\n-----------------------------------\nNow we can register our devices to the tracker. The first step is to\nbuild the TVM runtime for the ARM devices.\n\n* For Linux:\n  Follow this section `build-tvm-runtime-on-device` to build\n  the TVM runtime on the device. Then register the device to tracker by\n\n  .. code-block:: bash\n\n    python -m tvm.exec.rpc_server --tracker=[HOST_IP]:9190 --key=rk3399\n\n  (replace :code:`[HOST_IP]` with the IP addr [...]
+        "Register Devices to RPC Tracker\n-----------------------------------\nNow we can register our devices to the tracker. The first step is to\nbuild the TVM runtime for the ARM devices.\n\n* For Linux:\n  Follow this section `build-tvm-runtime-on-device` to build\n  the TVM runtime on the device. Then register the device to tracker by\n\n  .. code-block:: bash\n\n    python -m tvm.exec.rpc_server --tracker=[HOST_IP]:9190 --key=rk3399\n\n  (replace :code:`[HOST_IP]` with the IP addr [...]
       ]
     },
     {
diff --git a/docs/_downloads/2c8ef0390ad4c53ca85671fa36c33b26/tune_conv2d_cuda.py b/docs/_downloads/2c8ef0390ad4c53ca85671fa36c33b26/tune_conv2d_cuda.py
index b662baf..c320495 100644
--- a/docs/_downloads/2c8ef0390ad4c53ca85671fa36c33b26/tune_conv2d_cuda.py
+++ b/docs/_downloads/2c8ef0390ad4c53ca85671fa36c33b26/tune_conv2d_cuda.py
@@ -36,7 +36,7 @@ __name__ == "__main__":` block.
 #
 # .. code-block:: bash
 #
-#   pip3 install --user psutil xgboost tornado
+#   pip3 install --user psutil xgboost tornado cloudpickle
 #
 # To make TVM run faster in tuning, it is recommended to use cython
 # as FFI of tvm. In the root directory of tvm, execute
diff --git a/docs/_downloads/48bd751ebaae08fce134e559f86a25cc/tune_relay_vta.ipynb b/docs/_downloads/48bd751ebaae08fce134e559f86a25cc/tune_relay_vta.ipynb
index 9846b89..410d180 100644
--- a/docs/_downloads/48bd751ebaae08fce134e559f86a25cc/tune_relay_vta.ipynb
+++ b/docs/_downloads/48bd751ebaae08fce134e559f86a25cc/tune_relay_vta.ipynb
@@ -22,7 +22,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "Install dependencies\n--------------------\nTo use the autotvm package in tvm, we need to install some extra dependencies.\n(change \"3\" to \"2\" if you use python2):\n\n.. code-block:: bash\n\n  pip3 install --user psutil xgboost tornado mxnet requests \"Pillow<7\"\n\nTo make TVM run faster during tuning, it is recommended to use cython\nas FFI of TVM. In the root directory of TVM, execute\n(change \"3\" to \"2\" if you use python2):\n\n.. code-block:: bash\n\n  pip3 install - [...]
+        "Install dependencies\n--------------------\nTo use the autotvm package in tvm, we need to install some extra dependencies.\n(change \"3\" to \"2\" if you use python2):\n\n.. code-block:: bash\n\n  pip3 install --user psutil xgboost tornado mxnet requests \"Pillow<7\" cloudpickle\n\nTo make TVM run faster during tuning, it is recommended to use cython\nas FFI of TVM. In the root directory of TVM, execute\n(change \"3\" to \"2\" if you use python2):\n\n.. code-block:: bash\n\n  pi [...]
       ]
     },
     {
@@ -144,7 +144,7 @@
       },
       "outputs": [],
       "source": [
-        "def tune_and_evaluate(tuning_opt):\n\n    if env.TARGET != \"sim\":\n        # Get remote from fleet node\n        remote = autotvm.measure.request_remote(\n            env.TARGET, tracker_host, tracker_port, timeout=10000\n        )\n        # Reconfigure the JIT runtime and FPGA.\n        vta.reconfig_runtime(remote)\n        vta.program_fpga(remote, bitstream=None)\n    else:\n        # In simulation mode, host the RPC server locally.\n        remote = rpc.LocalSession()\n\n  [...]
+        "def tune_and_evaluate(tuning_opt):\n\n    # Register VTA tuning tasks\n    register_vta_tuning_tasks()\n\n    # Perform task extraction on Relay program\n    print(\"Extract tasks...\")\n    relay_prog, params = compile_network(env, target, network, start_pack, stop_pack)\n    mod = tvm.IRModule.from_expr(relay_prog)\n    tasks = autotvm.task.extract_from_program(\n        mod,\n        params=params,\n        ops=(relay.op.get(\"nn.conv2d\"),),\n        target=target,\n         [...]
       ]
     },
     {
diff --git a/docs/_downloads/612f9e42b0247df5c8ab277534e2af65/tune_relay_vta.py b/docs/_downloads/612f9e42b0247df5c8ab277534e2af65/tune_relay_vta.py
index 7f04424..273f0af 100644
--- a/docs/_downloads/612f9e42b0247df5c8ab277534e2af65/tune_relay_vta.py
+++ b/docs/_downloads/612f9e42b0247df5c8ab277534e2af65/tune_relay_vta.py
@@ -40,7 +40,7 @@ log file to get the best knob parameters.
 #
 # .. code-block:: bash
 #
-#   pip3 install --user psutil xgboost tornado mxnet requests "Pillow<7"
+#   pip3 install --user psutil xgboost tornado mxnet requests "Pillow<7" cloudpickle
 #
 # To make TVM run faster during tuning, it is recommended to use cython
 # as FFI of TVM. In the root directory of TVM, execute
@@ -340,18 +340,6 @@ def register_vta_tuning_tasks():
 
 def tune_and_evaluate(tuning_opt):
 
-    if env.TARGET != "sim":
-        # Get remote from fleet node
-        remote = autotvm.measure.request_remote(
-            env.TARGET, tracker_host, tracker_port, timeout=10000
-        )
-        # Reconfigure the JIT runtime and FPGA.
-        vta.reconfig_runtime(remote)
-        vta.program_fpga(remote, bitstream=None)
-    else:
-        # In simulation mode, host the RPC server locally.
-        remote = rpc.LocalSession()
-
     # Register VTA tuning tasks
     register_vta_tuning_tasks()
 
@@ -407,6 +395,19 @@ def tune_and_evaluate(tuning_opt):
     print("Tuning...")
     tune_tasks(tasks, **tuning_opt)
 
+    # evaluate with tuning history
+    if env.TARGET != "sim":
+        # Get remote from fleet node
+        remote = autotvm.measure.request_remote(
+            env.TARGET, tracker_host, tracker_port, timeout=10000
+        )
+        # Reconfigure the JIT runtime and FPGA.
+        vta.reconfig_runtime(remote)
+        vta.program_fpga(remote, bitstream=None)
+    else:
+        # In simulation mode, host the RPC server locally.
+        remote = rpc.LocalSession()
+
     # compile kernels with history best records
     with autotvm.tophub.context(target, extra_files=[log_file]):
         # Compile network
@@ -425,9 +426,9 @@ def tune_and_evaluate(tuning_opt):
         # Export library
         print("Upload...")
         temp = utils.tempdir()
-        lib.save(temp.relpath("graphlib.o"))
-        remote.upload(temp.relpath("graphlib.o"))
-        lib = remote.load_module("graphlib.o")
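+        # export_library packs the compiled modules into a single .tar archive
+        # that the remote runtime can load and link on the device.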
+        lib.export_library(temp.relpath("graphlib.tar"))
+        remote.upload(temp.relpath("graphlib.tar"))
+        lib = remote.load_module("graphlib.tar")
 
         # Generate the graph runtime
         ctx = remote.ext_dev(0) if device == "vta" else remote.cpu(0)
diff --git a/docs/_downloads/678f3c372a599a18d909aed0fefb30be/tune_conv2d_layer_cuda.py b/docs/_downloads/678f3c372a599a18d909aed0fefb30be/tune_conv2d_layer_cuda.py
index 103ceb4..396bdb0 100644
--- a/docs/_downloads/678f3c372a599a18d909aed0fefb30be/tune_conv2d_layer_cuda.py
+++ b/docs/_downloads/678f3c372a599a18d909aed0fefb30be/tune_conv2d_layer_cuda.py
@@ -186,18 +186,24 @@ print(task.print_best(log_file, print_mode="cuda"))
 # and resume the status of search policy and cost model with the log file.
 # In the example below we resume the status and do 5 more trials.
 
-cost_model = auto_scheduler.XGBModel()
-cost_model.update_from_file(log_file)
-search_policy = auto_scheduler.SketchPolicy(
-    task, cost_model, init_search_callbacks=[auto_scheduler.PreloadMeasuredStates(log_file)]
-)
-measure_ctx = auto_scheduler.LocalRPCMeasureContext(min_repeat_ms=300)
-tune_option = auto_scheduler.TuningOptions(
-    num_measure_trials=5,
-    runner=measure_ctx.runner,
-    measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
-)
-task.tune(tune_option, search_policy=search_policy)
 
-# Kill the measurement process
-del measure_ctx
+def resume_search(task, log_file):
+    print("Resume search:")
+    cost_model = auto_scheduler.XGBModel()
+    cost_model.update_from_file(log_file)
+    search_policy = auto_scheduler.SketchPolicy(
+        task, cost_model, init_search_callbacks=[auto_scheduler.PreloadMeasuredStates(log_file)]
+    )
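+    # LocalRPCMeasureContext runs measurements through a local RPC server in a
+    # separate process; min_repeat_ms keeps each GPU measurement running long
+    # enough to give stable timings.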
+    measure_ctx = auto_scheduler.LocalRPCMeasureContext(min_repeat_ms=300)
+    tune_option = auto_scheduler.TuningOptions(
+        num_measure_trials=5,
+        runner=measure_ctx.runner,
+        measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
+    )
+    task.tune(tune_option, search_policy=search_policy)
+
+    # Kill the measurement process
+    del measure_ctx
+
+
+resume_search(task, log_file)
diff --git a/docs/_downloads/696dd37904ef92773435ca321ff41bfb/from_onnx.py b/docs/_downloads/696dd37904ef92773435ca321ff41bfb/from_onnx.py
index 1557ea5..1b969bc 100644
--- a/docs/_downloads/696dd37904ef92773435ca321ff41bfb/from_onnx.py
+++ b/docs/_downloads/696dd37904ef92773435ca321ff41bfb/from_onnx.py
@@ -60,7 +60,11 @@ onnx_model = onnx.load(model_path)
 ######################################################################
 # Load a test image
 # ---------------------------------------------
-# A single cat dominates the examples!
+# A single cat dominates the examples! This model takes a single input image of size
+# 224x224 and outputs a scaled image that is 3x greater than the input along each
+# axis, a 672x672 image. Re-scale the cat image to fit this input shape, then
+# convert to `YCbCr`. The super resolution model will then be applied to the
+# luminance (`Y`) channel.
 from PIL import Image
 
 img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
@@ -73,6 +77,14 @@ x = np.array(img_y)[np.newaxis, np.newaxis, :, :]
 ######################################################################
 # Compile the model with relay
 # ---------------------------------------------
+# Typically ONNX models mix model input values with parameter values, with
+# the input having the name `1`. This is model dependent, and you should check
+# with the documentation for your model to determine the full input and
+# parameter name space.
+#
+# Passing in the shape dictionary to the `relay.frontend.from_onnx` method
+# tells relay which ONNX parameters are inputs and which are parameters, and
+# provides a static definition of the input size.
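+#
+# A minimal sketch of that call (the shape assumes the 1x1x224x224 grayscale
+# input prepared above):
+#
+# .. code-block:: python
+#
+#   shape_dict = {"1": (1, 1, 224, 224)}
+#   mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)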
 target = "llvm"
 
 input_name = "1"
@@ -91,7 +103,9 @@ tvm_output = intrp.evaluate()(tvm.nd.array(x.astype(dtype)), **params).asnumpy()
 ######################################################################
 # Display results
 # ---------------------------------------------
-# We put input and output image neck to neck
+# We put the input and output images side by side. The luminance channel, `Y`, is the
+# output from the model. The chroma channels `Cb` and `Cr` are resized to match with a
+# simple bicubic algorithm. The image is then recombined and converted back to `RGB`.
 from matplotlib import pyplot as plt
 
 out_y = Image.fromarray(np.uint8((tvm_output[0, 0]).clip(0, 255)), mode="L")
@@ -112,3 +126,8 @@ plt.show()
 # into static shapes at compile time. If this fails, there may still be dynamic
 # operations in the model. Not all TVM kernels currently support dynamic shapes;
 # please file an issue on discuss.tvm.apache.org if you hit an error with dynamic kernels.
+#
+# This particular model was built using an older version of ONNX. During the import
+# phase the ONNX importer will run the ONNX verifier, which may throw a `Mismatched attribute type`
+# warning. Because TVM supports a number of different ONNX versions, the Relay model
+# will still be valid.
diff --git a/docs/_downloads/739deb9ab034a5315ce6ba6bf7e5ff44/tune_relay_cuda.ipynb b/docs/_downloads/739deb9ab034a5315ce6ba6bf7e5ff44/tune_relay_cuda.ipynb
index fe0f765..82a5712 100644
--- a/docs/_downloads/739deb9ab034a5315ce6ba6bf7e5ff44/tune_relay_cuda.ipynb
+++ b/docs/_downloads/739deb9ab034a5315ce6ba6bf7e5ff44/tune_relay_cuda.ipynb
@@ -22,7 +22,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "Install dependencies\n--------------------\nTo use the autotvm package in tvm, we need to install some extra dependencies.\n(change \"3\" to \"2\" if you use python2):\n\n.. code-block:: bash\n\n  pip3 install --user psutil xgboost tornado\n\nTo make TVM run faster during tuning, it is recommended to use cython\nas FFI of tvm. In the root directory of tvm, execute:\n\n.. code-block:: bash\n\n  pip3 install --user cython\n  sudo make cython3\n\nNow return to python code. Import p [...]
+        "Install dependencies\n--------------------\nTo use the autotvm package in tvm, we need to install some extra dependencies.\n(change \"3\" to \"2\" if you use python2):\n\n.. code-block:: bash\n\n  pip3 install --user psutil xgboost tornado cloudpickle\n\nTo make TVM run faster during tuning, it is recommended to use cython\nas FFI of tvm. In the root directory of tvm, execute:\n\n.. code-block:: bash\n\n  pip3 install --user cython\n  sudo make cython3\n\nNow return to python co [...]
       ]
     },
     {
@@ -133,7 +133,14 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "Scale up measurement by using multiple devices\n----------------------------------------------\n\nIf you have multiple devices, you can use all of them for measurement.\nTVM uses the RPC Tracker to manage distributed devices.\nThe RPC Tracker is a centralized controller node. We can register all devices to\nthe tracker. For example, if we have 10 GPU cards, we can register all of them\nto the tracker, and run 10 measurements in parallel, accelerating the tuning process.\n\nTo st [...]
+        "\n"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "Scale up measurement by using multiple devices\n----------------------------------------------\nIf you have multiple devices, you can use all of them for measurement.\nTVM uses the RPC Tracker to manage distributed devices.\nThe RPC Tracker is a centralized controller node. We can register all devices to\nthe tracker. For example, if we have 10 GPU cards, we can register all of them\nto the tracker, and run 10 measurements in parallel, accelerating the tuning process.\n\nTo star [...]
       ]
     },
     {
diff --git a/docs/_downloads/b3eb5454a38ef6a663c9e4a7a3e61896/tune_network_x86.py b/docs/_downloads/78bebde8ea0f8558ac1a6fe12999f99f/tune_network_mali.py
similarity index 75%
copy from docs/_downloads/b3eb5454a38ef6a663c9e4a7a3e61896/tune_network_x86.py
copy to docs/_downloads/78bebde8ea0f8558ac1a6fe12999f99f/tune_network_mali.py
index a491759..d3fefa7 100644
--- a/docs/_downloads/b3eb5454a38ef6a663c9e4a7a3e61896/tune_network_x86.py
+++ b/docs/_downloads/78bebde8ea0f8558ac1a6fe12999f99f/tune_network_mali.py
@@ -15,13 +15,13 @@
 # specific language governing permissions and limitations
 # under the License.
 """
-Auto-scheduling a Neural Network for x86 CPU
-============================================
-**Author**: `Lianmin Zheng <https://github.com/merrymercy>`_
+Auto-scheduling a Neural Network for Mali GPU
+=============================================
+**Author**: `Zhao Wu <https://github.com/FrozenGene>`_
 
 Auto-tuning for specific devices and workloads is critical for getting the
 best performance. This is a tutorial on how to tune a whole neural
-network for x86 CPU with the auto-scheduler.
+network for Mali GPU with the auto-scheduler.
 
 To auto-tune a neural network, we partition the network into small subgraphs and 
 tune them independently. Each subgraph is treated as one search task.
@@ -50,6 +50,7 @@ import tvm
 from tvm import relay, auto_scheduler
 import tvm.relay.testing
 from tvm.contrib import graph_runtime
+import os
 
 #################################################################
 # Define a Network
@@ -131,15 +132,30 @@ def get_network(name, batch_size, layout="NHWC", dtype="float32"):
 
 
 # Define the neural network and compilation target.
-# If the target machine supports avx512 instructions, replace the
-# "llvm -mcpu=core-avx2" with "llvm -mcpu=skylake-avx512"
-network = "resnet-50"
+network = "mobilenet"
 batch_size = 1
 layout = "NHWC"
-target = tvm.target.Target("llvm -mcpu=core-avx2")
+# Set this to True if you use NDK tools for cross compiling
+use_ndk = True
+# Path to cross compiler
+os.environ["TVM_NDK_CC"] = "/usr/bin/aarch64-linux-gnu-g++"
+target_host = tvm.target.Target("llvm -mtriple=aarch64-linux-gnu")
+target = tvm.target.Target("opencl -device=mali")
 dtype = "float32"
 log_file = "%s-%s-B%d-%s.json" % (network, layout, batch_size, target.kind.name)
 
+
+#################################################################
+# Start an RPC Tracker and Register Devices to the Tracker
+# --------------------------------------------------------
+# Please refer to the "Start RPC Tracker" and "Register Devices to RPC Tracker" sections
+# in this :ref:`tutorial <tutorials-autotvm-start-rpc-tracker>` to start an RPC tracker
+# and register devices to the tracker.
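+#
+# A minimal sketch of the two commands (assuming the TVM runtime has been built
+# on the board and [HOST_IP] is reachable from it):
+#
+# .. code-block:: bash
+#
+#   # on the host machine
+#   python -m tvm.exec.rpc_tracker --host=0.0.0.0 --port=9190
+#
+#   # on the rk3399 board
+#   python -m tvm.exec.rpc_server --tracker=[HOST_IP]:9190 --key=rk3399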
+
+# Replace this with the device key in your tracker
+device_key = "rk3399"
+
+
 #################################################################
 # Extract Search Tasks
 # --------------------
@@ -154,16 +170,41 @@ log_file = "%s-%s-B%d-%s.json" % (network, layout, batch_size, target.kind.name)
 # Extract tasks from the network
 print("Extract tasks...")
 mod, params, input_shape, output_shape = get_network(network, batch_size, layout, dtype=dtype)
-tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target)
+tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target, target_host)
 
 for idx, task in enumerate(tasks):
     print("========== Task %d  (workload key: %s) ==========" % (idx, task.workload_key))
     print(task.compute_dag)
+######################################################################
+# .. note:: How to get the hardware parameters from remote device
+#
+#   .. code-block:: python
+#
+#     from tvm.auto_scheduler.utils import request_remote
+#     remote = request_remote(device_key, "0.0.0.0", 9190)
+#     ctx = remote.cl()
+#     max_shared_memory_per_block = ctx.max_shared_memory_per_block
+#     # There is no explicit local memory limitation,
+#     # so we can use INT32_MAX to disable the check on local_memory.
+#     max_local_memory_per_block = 2147483647 # INT32_MAX
+#     max_threads_per_block = ctx.max_threads_per_block
+#     max_vthread_extent = int(ctx.warp_size / 4) if int(ctx.warp_size / 4) > 1 else ctx.warp_size
+#     warp_size = ctx.warp_size
+#     hardware_params = auto_scheduler.HardwareParams(-1, 16, 64,
+#                                                     max_shared_memory_per_block, max_local_memory_per_block,
+#                                                     max_threads_per_block, max_vthread_extent, warp_size)
+#
+#   Now you can pass it to the search task and tune:
+#
+#   .. code-block:: python
+#
+#     tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target, target_host, hardware_params)
+#
 
 #################################################################
-# Begin Tuning
-# ------------
-# Now, we set some options for tuning and launch the search tasks
+# Tune and Evaluate
+# -----------------
+# Now, we set some options for tuning, launch the search tasks, and evaluate the end-to-end performance
 #
 # * :code:`num_measure_trials` is the number of measurement trials we can use during the tuning.
 #   You can set it to a small number (e.g., 200) for a fast demonstrative run.
@@ -179,23 +220,60 @@ for idx, task in enumerate(tasks):
 #
 
 
-def run_tuning():
+def tune_and_evaluate():
     print("Begin tuning...")
     tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
     tune_option = auto_scheduler.TuningOptions(
         num_measure_trials=200,  # change this to 20000 to achieve the best performance
-        runner=auto_scheduler.LocalRunner(repeat=10, enable_cpu_cache_flush=True),
+        builder=auto_scheduler.LocalBuilder(build_func="ndk" if use_ndk else "default"),
+        runner=auto_scheduler.RPCRunner(
+            device_key, host="0.0.0.0", port=9190, repeat=3, timeout=50
+        ),
         measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
     )
 
     tuner.tune(tune_option)
 
+    # Compile the whole network
+    print("Compile...")
+    with auto_scheduler.ApplyHistoryBest(log_file):
+        with tvm.transform.PassContext(
+            opt_level=3, config={"relay.backend.use_auto_scheduler": True}
+        ):
+            lib = relay.build(mod, target=target, target_host=target_host, params=params)
+
+    # Create graph runtime
+    print("=============== Request Remote ===============")
+    from tvm.auto_scheduler.utils import request_remote
+
+    remote = request_remote(device_key, "0.0.0.0", 9190)
+    ctx = remote.cl()
+    from tvm.contrib import utils, ndk
+
+    temp = utils.tempdir()
+    filename = "deploy_lib.so"
+    path_lib = temp.relpath(filename)
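+    # ndk.create_shared cross compiles the shared library with the toolchain
+    # pointed to by TVM_NDK_CC above.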
+    lib.export_library(path_lib, ndk.create_shared)
+    remote.upload(path_lib)
+    loaded_lib = remote.load_module(filename)
+    module = graph_runtime.GraphModule(loaded_lib["default"](ctx))
+    data = (np.random.uniform(size=input_shape)).astype(dtype)
+    data_tvm = tvm.nd.array(data)
+    module.set_input("data", data_tvm)
+
+    # Evaluate
+    print("Evaluate inference time cost...")
+    ftimer = module.module.time_evaluator("run", ctx, repeat=3, min_repeat_ms=500)
+    prof_res = np.array(ftimer().results) * 1e3  # convert to millisecond
+    print(
+        "Mean inference time (std dev): %.2f ms (%.2f ms)" % (np.mean(prof_res), np.std(prof_res))
+    )
 
-# We do not run the tuning in our webpage server since it takes too long.
-# Uncomment the following line to run it by yourself.
 
-# run_tuning()
+# We do not run the tuning in our webpage server since the server doesn't have a Mali GPU.
+# Uncomment the following line to run it by yourself.
 
+# tune_and_evaluate()
 
 ######################################################################
 # .. note:: Explain the printed information during tuning
@@ -265,33 +343,6 @@ def run_tuning():
 #   you should be able to do the compilation (the section below).
 #
 
-
-#################################################################
-# Compile and Evaluate
-# --------------------
-# After auto-tuning, we can compile the network with the best schedules we found.
-# All measurement records are dumped into the log file during auto-tuning,
-# so we can read the log file and load the best schedules.
-
-# Compile with the history best
-print("Compile...")
-with auto_scheduler.ApplyHistoryBest(log_file):
-    with tvm.transform.PassContext(opt_level=3, config={"relay.backend.use_auto_scheduler": True}):
-        lib = relay.build(mod, target=target, params=params)
-
-# Create graph runtime
-ctx = tvm.context(str(target), 0)
-module = graph_runtime.GraphModule(lib["default"](ctx))
-data_tvm = tvm.nd.array((np.random.uniform(size=input_shape)).astype(dtype))
-module.set_input("data", data_tvm)
-
-# Evaluate
-print("Evaluate inference time cost...")
-ftimer = module.module.time_evaluator("run", ctx, repeat=3, min_repeat_ms=500)
-prof_res = np.array(ftimer().results) * 1e3  # convert to millisecond
-print("Mean inference time (std dev): %.2f ms (%.2f ms)" % (np.mean(prof_res), np.std(prof_res)))
-
-
 #################################################################
 # Other Tips
 # ----------
@@ -304,8 +355,8 @@ print("Mean inference time (std dev): %.2f ms (%.2f ms)" % (np.mean(prof_res), n
 #    add a new argument :code:`load_log_file` when creating the task scheduler
 #    in function :code:`run_tuning`. Say,
 #    :code:`tuner = auto_scheduler.TaskScheduler(tasks, task_weights, load_log_file=log_file)`
-# 4. If you have multiple target CPUs, you can use all of them for measurements to
-#    parallelize the measurements. Check this :ref:`section <tutorials-autotvm-rpc-tracker>`
+# 4. If you have multiple target GPUs, you can use all of them for measurements to
+#    parallelize the measurements. Check this :ref:`section <tutorials-autotvm-scale-up-rpc-tracker>`
 #    to learn how to use the RPC Tracker and RPC Server.
 #    To use the RPC Tracker in auto-scheduler, replace the runner in :code:`TuningOptions`
 #    with :any:`auto_scheduler.RPCRunner`.
diff --git a/docs/_downloads/91b0339c8f3cc2594cee580dc450149a/tune_matmul_x86.py b/docs/_downloads/91b0339c8f3cc2594cee580dc450149a/tune_matmul_x86.py
index 9bc15ae..084f5ae 100644
--- a/docs/_downloads/91b0339c8f3cc2594cee580dc450149a/tune_matmul_x86.py
+++ b/docs/_downloads/91b0339c8f3cc2594cee580dc450149a/tune_matmul_x86.py
@@ -174,36 +174,17 @@ print(task.print_best(log_file))
 # In the example below we resume the status and do 5 more trials.
 
 
-def resume_search(task, log_file_name):
+def resume_search(task, log_file):
+    print("Resume search:")
     cost_model = auto_scheduler.XGBModel()
-    cost_model.update_from_file(log_file_name)
+    cost_model.update_from_file(log_file)
     search_policy = auto_scheduler.SketchPolicy(
-        task,
-        cost_model,
-        init_search_callbacks=[auto_scheduler.PreloadMeasuredStates(log_file_name)],
+        task, cost_model, init_search_callbacks=[auto_scheduler.PreloadMeasuredStates(log_file)]
     )
     tune_option = auto_scheduler.TuningOptions(
-        num_measure_trials=5, measure_callbacks=[auto_scheduler.RecordToFile(log_file_name)]
+        num_measure_trials=5, measure_callbacks=[auto_scheduler.RecordToFile(log_file)]
     )
     task.tune(tune_option, search_policy=search_policy)
 
 
-# resume_search(task, log_file)
-
-######################################################################
-# .. note::
-#   We cannot run the line above because of the conflict between
-#   python's multiprocessing and tvm's thread pool.
-#   After running a tvm generated binary the python's multiprocessing library
-#   will hang forever. You have to make sure that you don't run any tvm
-#   generated binaries before calling auot-scheduler's search.
-#   To run the function above, you should comment out all code in
-#   "Check correctness and evaluate performance" section.
-#
-#   You should be careful about this problem in your applications.
-#   There are other workarounds for this problem.
-#   For example, you can start a new thread/process (with the builtin python library
-#   threading or multiprocessing) and run the tvm binaries in the new thread/process.
-#   This provides an isolation and avoids the conflict in the main thread/process.
-#   You can also use :any:`auto_scheduler.LocalRPCMeasureContext` for auto-scheduler,
-#   as shown in the GPU tutorial (:ref:`auto-scheduler-conv-gpu`).
+resume_search(task, log_file)
diff --git a/docs/_downloads/afa7f0ecb19178546f310a1dfa66281f/tune_network_x86.ipynb b/docs/_downloads/afa7f0ecb19178546f310a1dfa66281f/tune_network_x86.ipynb
index e03fb03..99b970b 100644
--- a/docs/_downloads/afa7f0ecb19178546f310a1dfa66281f/tune_network_x86.ipynb
+++ b/docs/_downloads/afa7f0ecb19178546f310a1dfa66281f/tune_network_x86.ipynb
@@ -119,7 +119,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "Other Tips\n----------\n1. During the tuning, the auto-scheduler needs to compile many programs and\n   extract feature from them. This part is CPU-intensive,\n   so a high-performance CPU with many cores is recommended for faster search.\n2. You can use :code:`python3 -m tvm.auto_scheduler.measure_record --mode distill --i log.json`\n   to distill the large log file and only save the best useful records.\n3. You can resume a search from the previous log file. You just need to\n [...]
+        "Other Tips\n----------\n1. During the tuning, the auto-scheduler needs to compile many programs and\n   extract feature from them. This part is CPU-intensive,\n   so a high-performance CPU with many cores is recommended for faster search.\n2. You can use :code:`python3 -m tvm.auto_scheduler.measure_record --mode distill --i log.json`\n   to distill the large log file and only save the best useful records.\n3. You can resume a search from the previous log file. You just need to\n [...]
       ]
     }
   ],
diff --git a/docs/_downloads/b3eb5454a38ef6a663c9e4a7a3e61896/tune_network_x86.py b/docs/_downloads/b3eb5454a38ef6a663c9e4a7a3e61896/tune_network_x86.py
index a491759..7f96254 100644
--- a/docs/_downloads/b3eb5454a38ef6a663c9e4a7a3e61896/tune_network_x86.py
+++ b/docs/_downloads/b3eb5454a38ef6a663c9e4a7a3e61896/tune_network_x86.py
@@ -305,7 +305,7 @@ print("Mean inference time (std dev): %.2f ms (%.2f ms)" % (np.mean(prof_res), n
 #    in function :code:`run_tuning`. Say,
 #    :code:`tuner = auto_scheduler.TaskScheduler(tasks, task_weights, load_log_file=log_file)`
 # 4. If you have multiple target CPUs, you can use all of them for measurements to
-#    parallelize the measurements. Check this :ref:`section <tutorials-autotvm-rpc-tracker>`
+#    parallelize the measurements. Check this :ref:`section <tutorials-autotvm-scale-up-rpc-tracker>`
 #    to learn how to use the RPC Tracker and RPC Server.
 #    To use the RPC Tracker in auto-scheduler, replace the runner in :code:`TuningOptions`
 #    with :any:`auto_scheduler.RPCRunner`.
diff --git a/docs/_downloads/baf1373314e0e040008107ff2571b4cd/tune_relay_arm.py b/docs/_downloads/baf1373314e0e040008107ff2571b4cd/tune_relay_arm.py
index 317af5f..2b38923 100644
--- a/docs/_downloads/baf1373314e0e040008107ff2571b4cd/tune_relay_arm.py
+++ b/docs/_downloads/baf1373314e0e040008107ff2571b4cd/tune_relay_arm.py
@@ -49,7 +49,7 @@ __name__ == "__main__":` block.
 #
 # .. code-block:: bash
 #
-#   pip3 install --user psutil xgboost tornado
+#   pip3 install --user psutil xgboost tornado cloudpickle
 #
 # To make TVM run faster during tuning, it is recommended to use cython
 # as FFI of TVM. In the root directory of TVM, execute
@@ -148,7 +148,7 @@ def get_network(name, batch_size):
 #   INFO:RPCTracker:bind to 0.0.0.0:9190
 
 #################################################################
-# Register devices to RPC Tracker
+# Register Devices to RPC Tracker
 # -----------------------------------
 # Now we can register our devices to the tracker. The first step is to
 # build the TVM runtime for the ARM devices.
diff --git a/docs/_downloads/bcb4a24e8acc1ca84214bc8d7fb7954b/tune_conv2d_layer_cuda.ipynb b/docs/_downloads/bcb4a24e8acc1ca84214bc8d7fb7954b/tune_conv2d_layer_cuda.ipynb
index 807f070..725f881 100644
--- a/docs/_downloads/bcb4a24e8acc1ca84214bc8d7fb7954b/tune_conv2d_layer_cuda.ipynb
+++ b/docs/_downloads/bcb4a24e8acc1ca84214bc8d7fb7954b/tune_conv2d_layer_cuda.ipynb
@@ -177,7 +177,7 @@
       },
       "outputs": [],
       "source": [
-        "cost_model = auto_scheduler.XGBModel()\ncost_model.update_from_file(log_file)\nsearch_policy = auto_scheduler.SketchPolicy(\n    task, cost_model, init_search_callbacks=[auto_scheduler.PreloadMeasuredStates(log_file)]\n)\nmeasure_ctx = auto_scheduler.LocalRPCMeasureContext(min_repeat_ms=300)\ntune_option = auto_scheduler.TuningOptions(\n    num_measure_trials=5,\n    runner=measure_ctx.runner,\n    measure_callbacks=[auto_scheduler.RecordToFile(log_file)],\n)\ntask.tune(tune_opt [...]
+        "def resume_search(task, log_file):\n    print(\"Resume search:\")\n    cost_model = auto_scheduler.XGBModel()\n    cost_model.update_from_file(log_file)\n    search_policy = auto_scheduler.SketchPolicy(\n        task, cost_model, init_search_callbacks=[auto_scheduler.PreloadMeasuredStates(log_file)]\n    )\n    measure_ctx = auto_scheduler.LocalRPCMeasureContext(min_repeat_ms=300)\n    tune_option = auto_scheduler.TuningOptions(\n        num_measure_trials=5,\n        runner=mea [...]
       ]
     }
   ],
diff --git a/docs/_downloads/cd8ac9c09164cc04dd9ecd131c536680/micro_tflite.ipynb b/docs/_downloads/cd8ac9c09164cc04dd9ecd131c536680/micro_tflite.ipynb
index 5b26c9e..ff7698e 100644
--- a/docs/_downloads/cd8ac9c09164cc04dd9ecd131c536680/micro_tflite.ipynb
+++ b/docs/_downloads/cd8ac9c09164cc04dd9ecd131c536680/micro_tflite.ipynb
@@ -98,7 +98,7 @@
       },
       "outputs": [],
       "source": [
-        "TARGET = tvm.target.target.micro(\"host\")\n\nwith tvm.transform.PassContext(\n    opt_level=3, config={\"tir.disable_vectorize\": True}, disabled_pass=[\"FuseOps\"]\n):\n    graph, c_mod, c_params = relay.build(mod, target=TARGET, params=params)\n\n\n# %%\n# Running on simulated device\n# ----------------------------------------------\n#\n# First, compile a static microTVM runtime for the targeted device. In this case, the host simulated\n# device is used.\nworkspace = tvm.micr [...]
+        "TARGET = tvm.target.target.micro(\"host\")\n\nwith tvm.transform.PassContext(\n    opt_level=3, config={\"tir.disable_vectorize\": True}, disabled_pass=[\"FuseOps\"]\n):\n    graph, c_mod, c_params = relay.build(mod, target=TARGET, params=params)\n\n\n# %%\n# Running on simulated device\n# ----------------------------------------------\n#\n# First, compile a static microTVM runtime for the targeted device. In this case, the host simulated\n# device is used.\nworkspace = tvm.micr [...]
       ]
     },
     {
diff --git a/docs/_downloads/dad91669fd0ea707f1374fe331b0dffe/tune_network_cuda.ipynb b/docs/_downloads/dad91669fd0ea707f1374fe331b0dffe/tune_network_cuda.ipynb
index ab57869..3161dc9 100644
--- a/docs/_downloads/dad91669fd0ea707f1374fe331b0dffe/tune_network_cuda.ipynb
+++ b/docs/_downloads/dad91669fd0ea707f1374fe331b0dffe/tune_network_cuda.ipynb
@@ -119,7 +119,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "Other Tips\n----------\n1. During the tuning, the auto-scheduler needs to compile many programs and\n   extract feature from them. This part is CPU-intensive,\n   so a high-performance CPU with many cores is recommended for faster search.\n2. You can use :code:`python3 -m tvm.auto_scheduler.measure_record --mode distill --i log.json`\n   to distill the large log file and only save the best useful records.\n3. You can resume a search from the previous log file. You just need to\n [...]
+        "Other Tips\n----------\n1. During the tuning, the auto-scheduler needs to compile many programs and\n   extract feature from them. This part is CPU-intensive,\n   so a high-performance CPU with many cores is recommended for faster search.\n2. You can use :code:`python3 -m tvm.auto_scheduler.measure_record --mode distill --i log.json`\n   to distill the large log file and only save the best useful records.\n3. You can resume a search from the previous log file. You just need to\n [...]
       ]
     }
   ],
diff --git a/docs/_downloads/e41367a7f459e4f4dca82180009c1539/tune_relay_mobile_gpu.py b/docs/_downloads/e41367a7f459e4f4dca82180009c1539/tune_relay_mobile_gpu.py
index 5e97273..859ac58 100644
--- a/docs/_downloads/e41367a7f459e4f4dca82180009c1539/tune_relay_mobile_gpu.py
+++ b/docs/_downloads/e41367a7f459e4f4dca82180009c1539/tune_relay_mobile_gpu.py
@@ -47,7 +47,7 @@ __name__ == "__main__":` block.
 #
 # .. code-block:: bash
 #
-#   pip3 install --user psutil xgboost tornado
+#   pip3 install --user psutil xgboost tornado cloudpickle
 #
 # To make TVM run faster during tuning, it is recommended to use cython
 # as FFI of tvm. In the root directory of tvm, execute
@@ -121,6 +121,9 @@ def get_network(name, batch_size):
 
 
 #################################################################
+# .. _tutorials-autotvm-start-rpc-tracker:
+
+#################################################################
 # Start RPC Tracker
 # -----------------
 # TVM uses RPC session to communicate with ARM boards.
@@ -147,7 +150,7 @@ def get_network(name, batch_size):
 #   INFO:RPCTracker:bind to 0.0.0.0:9190
 
 #################################################################
-# Register devices to RPC Tracker
+# Register Devices to RPC Tracker
 # -----------------------------------
 # Now we can register our devices to the tracker. The first step is to
 # build the TVM runtime for the ARM devices.
diff --git a/docs/_downloads/e92c7219a1cd7838e61f9683f4228a7f/from_onnx.ipynb b/docs/_downloads/e92c7219a1cd7838e61f9683f4228a7f/from_onnx.ipynb
index 8953d2d..24b37aa 100644
--- a/docs/_downloads/e92c7219a1cd7838e61f9683f4228a7f/from_onnx.ipynb
+++ b/docs/_downloads/e92c7219a1cd7838e61f9683f4228a7f/from_onnx.ipynb
@@ -51,7 +51,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "Load a test image\n---------------------------------------------\nA single cat dominates the examples!\n\n"
+        "Load a test image\n---------------------------------------------\nA single cat dominates the examples! This model takes a single input image of size\n224x224 and outputs a scaled image that is 3x greater than the input along each\naxis, a 672x672 image. Re-scale the cat image to fit this input shape then\nconvert to `YCbCr`. The super resolution model will then be applied to the\nluminance (`Y`) channel.\n\n"
       ]
     },
     {
@@ -69,7 +69,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "Compile the model with relay\n---------------------------------------------\n\n"
+        "Compile the model with relay\n---------------------------------------------\nTypically ONNX models mix model input values with parameter values, with\nthe input having the name `1`. This model dependent, and you should check\nwith the documentation for your model to determine the full input and\nparameter name space.\n\nPassing in the shape dictionary to the `relay.frontend.from_onnx` method\ntells relay which ONNX parameters are inputs, and which are parameters, and\nprovides a [...]
       ]
     },
     {
@@ -105,7 +105,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "Display results\n---------------------------------------------\nWe put input and output image neck to neck\n\n"
+        "Display results\n---------------------------------------------\nWe put input and output image neck to neck. The luminance channel, `Y` is the output\nfrom the model. The chroma channels `Cb` and `Cr` are resized to match with a simple\nbicubic algorithm. The image is then recombined and converted back to `RGB`.\n\n"
       ]
     },
     {
@@ -123,7 +123,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "Notes\n---------------------------------------------\nBy default, ONNX defines models in terms of dynamic shapes. The ONNX importer\nretains that dynamism upon import, and the compiler attemps to convert the model\ninto a static shapes at compile time. If this fails, there may still be dynamic\noperations in the model. Not all TVM kernels currently support dynamic shapes,\nplease file an issue on discuss.tvm.apache.org if you hit an error with dynamic kernels.\n\n"
+        "Notes\n---------------------------------------------\nBy default, ONNX defines models in terms of dynamic shapes. The ONNX importer\nretains that dynamism upon import, and the compiler attemps to convert the model\ninto a static shapes at compile time. If this fails, there may still be dynamic\noperations in the model. Not all TVM kernels currently support dynamic shapes,\nplease file an issue on discuss.tvm.apache.org if you hit an error with dynamic kernels.\n\nThis particular [...]
       ]
     }
   ],
diff --git a/docs/_downloads/f1a09967bab66114252357e4a9babb45/tune_matmul_x86.ipynb b/docs/_downloads/f1a09967bab66114252357e4a9babb45/tune_matmul_x86.ipynb
index 2640a03..63e191d 100644
--- a/docs/_downloads/f1a09967bab66114252357e4a9babb45/tune_matmul_x86.ipynb
+++ b/docs/_downloads/f1a09967bab66114252357e4a9babb45/tune_matmul_x86.ipynb
@@ -177,14 +177,7 @@
       },
       "outputs": [],
       "source": [
-        "def resume_search(task, log_file_name):\n    cost_model = auto_scheduler.XGBModel()\n    cost_model.update_from_file(log_file_name)\n    search_policy = auto_scheduler.SketchPolicy(\n        task,\n        cost_model,\n        init_search_callbacks=[auto_scheduler.PreloadMeasuredStates(log_file_name)],\n    )\n    tune_option = auto_scheduler.TuningOptions(\n        num_measure_trials=5, measure_callbacks=[auto_scheduler.RecordToFile(log_file_name)]\n    )\n    task.tune(tune_op [...]
-      ]
-    },
-    {
-      "cell_type": "markdown",
-      "metadata": {},
-      "source": [
-        "<div class=\"alert alert-info\"><h4>Note</h4><p>We cannot run the line above because of the conflict between\n  python's multiprocessing and tvm's thread pool.\n  After running a tvm generated binary the python's multiprocessing library\n  will hang forever. You have to make sure that you don't run any tvm\n  generated binaries before calling auot-scheduler's search.\n  To run the function above, you should comment out all code in\n  \"Check correctness and evaluate performance\ [...]
+        "def resume_search(task, log_file):\n    print(\"Resume search:\")\n    cost_model = auto_scheduler.XGBModel()\n    cost_model.update_from_file(log_file)\n    search_policy = auto_scheduler.SketchPolicy(\n        task, cost_model, init_search_callbacks=[auto_scheduler.PreloadMeasuredStates(log_file)]\n    )\n    tune_option = auto_scheduler.TuningOptions(\n        num_measure_trials=5, measure_callbacks=[auto_scheduler.RecordToFile(log_file)]\n    )\n    task.tune(tune_option, se [...]
       ]
     }
   ],
diff --git a/docs/_downloads/f8f7a2adf30f5033603d79cdbacd9235/tune_relay_arm.ipynb b/docs/_downloads/f8f7a2adf30f5033603d79cdbacd9235/tune_relay_arm.ipynb
index 47a08d4..0e61aa9 100644
--- a/docs/_downloads/f8f7a2adf30f5033603d79cdbacd9235/tune_relay_arm.ipynb
+++ b/docs/_downloads/f8f7a2adf30f5033603d79cdbacd9235/tune_relay_arm.ipynb
@@ -22,7 +22,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "Install dependencies\n--------------------\nTo use the autotvm package in tvm, we need to install some extra dependencies.\n(change \"3\" to \"2\" if you use python2):\n\n.. code-block:: bash\n\n  pip3 install --user psutil xgboost tornado\n\nTo make TVM run faster during tuning, it is recommended to use cython\nas FFI of TVM. In the root directory of TVM, execute\n(change \"3\" to \"2\" if you use python2):\n\n.. code-block:: bash\n\n  pip3 install --user cython\n  sudo make cy [...]
+        "Install dependencies\n--------------------\nTo use the autotvm package in tvm, we need to install some extra dependencies.\n(change \"3\" to \"2\" if you use python2):\n\n.. code-block:: bash\n\n  pip3 install --user psutil xgboost tornado cloudpickle\n\nTo make TVM run faster during tuning, it is recommended to use cython\nas FFI of TVM. In the root directory of TVM, execute\n(change \"3\" to \"2\" if you use python2):\n\n.. code-block:: bash\n\n  pip3 install --user cython\n   [...]
       ]
     },
     {
@@ -65,7 +65,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "Register devices to RPC Tracker\n-----------------------------------\nNow we can register our devices to the tracker. The first step is to\nbuild the TVM runtime for the ARM devices.\n\n* For Linux:\n  Follow this section `build-tvm-runtime-on-device` to build\n  the TVM runtime on the device. Then register the device to tracker by\n\n  .. code-block:: bash\n\n    python -m tvm.exec.rpc_server --tracker=[HOST_IP]:9190 --key=rk3399\n\n  (replace :code:`[HOST_IP]` with the IP addr [...]
+        "Register Devices to RPC Tracker\n-----------------------------------\nNow we can register our devices to the tracker. The first step is to\nbuild the TVM runtime for the ARM devices.\n\n* For Linux:\n  Follow this section `build-tvm-runtime-on-device` to build\n  the TVM runtime on the device. Then register the device to tracker by\n\n  .. code-block:: bash\n\n    python -m tvm.exec.rpc_server --tracker=[HOST_IP]:9190 --key=rk3399\n\n  (replace :code:`[HOST_IP]` with the IP addr [...]
       ]
     },
     {
diff --git a/docs/_downloads/fd012fa7b67f4e333acce1d25a8e62bc/micro_tflite.py b/docs/_downloads/fd012fa7b67f4e333acce1d25a8e62bc/micro_tflite.py
index 293f95c..7ec5506 100644
--- a/docs/_downloads/fd012fa7b67f4e333acce1d25a8e62bc/micro_tflite.py
+++ b/docs/_downloads/fd012fa7b67f4e333acce1d25a8e62bc/micro_tflite.py
@@ -184,6 +184,9 @@ micro_binary = tvm.micro.build_static_runtime(
     c_mod,
     lib_opts=opts["bin_opts"],
     bin_opts=opts["bin_opts"],
+    # Use the microTVM memory manager. If, in your main.cc, you change TVMPlatformMemoryAllocate and
+    # TVMPlatformMemoryFree to use e.g. malloc() and free(), you can omit this extra library.
+    extra_libs=[os.path.join(tvm.micro.build.CRT_ROOT_DIR, "memory")],
 )
 
 
diff --git a/docs/_images/sphx_glr_tune_network_mali_thumb.png b/docs/_images/sphx_glr_tune_network_mali_thumb.png
new file mode 100644
index 0000000..233f8e6
Binary files /dev/null and b/docs/_images/sphx_glr_tune_network_mali_thumb.png differ
diff --git a/docs/_sources/install/from_source.rst.txt b/docs/_sources/install/from_source.rst.txt
index 3cf0a78..f6be4e3 100644
--- a/docs/_sources/install/from_source.rst.txt
+++ b/docs/_sources/install/from_source.rst.txt
@@ -248,7 +248,7 @@ like ``virtualenv``.
 
    .. code:: bash
 
-       pip3 install --user tornado psutil xgboost
+       pip3 install --user tornado psutil xgboost cloudpickle
 
 
 Install Contrib Libraries
diff --git a/docs/_sources/langref/relay_pattern.rst.txt b/docs/_sources/langref/relay_pattern.rst.txt
index 8b34b76..ff02e50 100644
--- a/docs/_sources/langref/relay_pattern.rst.txt
+++ b/docs/_sources/langref/relay_pattern.rst.txt
@@ -167,6 +167,19 @@ The next example is matching a pattern of batch_norm -> get(0) -> relu. Note tha
         out = relay.nn.relu(tuple_get_item_node)
         pat.match(out)
 
+If we have a pattern that crosses a function boundary, we might want to match the Function itself:
+
+
+.. code-block:: python
+
+  def test_match_func():
+      x = relay.var("x")
+      y = relay.var("y")
+      wc1 = wildcard()
+      wc2 = wildcard()
+      func_pattern = FunctionPattern([wc1, wc2], wc1 + wc2)
+      assert func_pattern.match(relay.Function([x, y], x + y))
+
 The next example is matching a constant node regarding its values. This is useful to check
 if a specific parameter in a subgraph has been bound or not.
 
@@ -283,6 +296,7 @@ The high level design is to introduce a language of patterns for now we propose
             | is_tuple_get_item(pattern, index = None)
             | pattern1 `|` pattern2
             | dominates(parent_pattern, path_pattern, child_pattern)
+            | FunctionPattern(params, body)
 
 The above language then provides a matching interface with both can select sub-graphs as well as verify that the graph does match the pattern.
 
@@ -332,6 +346,11 @@ Domination
 
 Match child pattern, find a match for the parent pattern, ensuring that the child ultimately dominates the parent (i.e., no nodes outside the pattern use outputs of the parent), and that every node between the child and the pattern matches the path pattern.
 
+Function Pattern
+****************
+
+Match a Function with a body and parameters.
+
 Applications
 ============
 
diff --git a/docs/_sources/tutorials/auto_scheduler/sg_execution_times.rst.txt b/docs/_sources/tutorials/auto_scheduler/sg_execution_times.rst.txt
index cb40e53..85589b9 100644
--- a/docs/_sources/tutorials/auto_scheduler/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorials/auto_scheduler/sg_execution_times.rst.txt
@@ -5,9 +5,10 @@
 
 Computation times
 =================
-**04:05.271** total execution time for **tutorials_auto_scheduler** files:
+**02:15.304** total execution time for **tutorials_auto_scheduler** files:
 
-- **01:50.410**: :ref:`sphx_glr_tutorials_auto_scheduler_tune_matmul_x86.py` (``tune_matmul_x86.py``)
-- **01:34.433**: :ref:`sphx_glr_tutorials_auto_scheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``)
-- **00:23.367**: :ref:`sphx_glr_tutorials_auto_scheduler_tune_network_cuda.py` (``tune_network_cuda.py``)
-- **00:17.061**: :ref:`sphx_glr_tutorials_auto_scheduler_tune_network_x86.py` (``tune_network_x86.py``)
+- **00:56.863**: :ref:`sphx_glr_tutorials_auto_scheduler_tune_conv2d_layer_cuda.py` (``tune_conv2d_layer_cuda.py``)
+- **00:45.779**: :ref:`sphx_glr_tutorials_auto_scheduler_tune_matmul_x86.py` (``tune_matmul_x86.py``)
+- **00:17.821**: :ref:`sphx_glr_tutorials_auto_scheduler_tune_network_cuda.py` (``tune_network_cuda.py``)
+- **00:12.409**: :ref:`sphx_glr_tutorials_auto_scheduler_tune_network_x86.py` (``tune_network_x86.py``)
+- **00:02.431**: :ref:`sphx_glr_tutorials_auto_scheduler_tune_network_mali.py` (``tune_network_mali.py``)
diff --git a/docs/_sources/tutorials/auto_scheduler/tune_conv2d_layer_cuda.rst.txt b/docs/_sources/tutorials/auto_scheduler/tune_conv2d_layer_cuda.rst.txt
index ea80d89..2620c3f 100644
--- a/docs/_sources/tutorials/auto_scheduler/tune_conv2d_layer_cuda.rst.txt
+++ b/docs/_sources/tutorials/auto_scheduler/tune_conv2d_layer_cuda.rst.txt
@@ -220,482 +220,67 @@ cooperative fetching, unrolling and operator fusion.
                  kernel: Buffer(kernel_2: Pointer(float32), float32, [512, 512, 3, 3], []),
                  data: Buffer(data_2: Pointer(float32), float32, [1, 512, 7, 7], [])}
       buffer_map = {data_1: data, kernel_1: kernel, bias_1: bias, compute_1: compute} {
-      attr [IterVar(blockIdx.x: int32, (nullptr), "ThreadIndex", "blockIdx.x")] "thread_extent" = 28;
+      attr [IterVar(blockIdx.x: int32, (nullptr), "ThreadIndex", "blockIdx.x")] "thread_extent" = 112;
       attr [compute_3: Pointer(float32)] "storage_scope" = "local";
-      allocate(compute_3, float32, [14]);
+      allocate(compute_3, float32, [4]);
       attr [pad_temp.shared: Pointer(float32)] "storage_scope" = "shared";
-      allocate(pad_temp.shared, float32, [72]);
+      allocate(pad_temp.shared, float32, [54]);
       attr [kernel.shared: Pointer(float32)] "storage_scope" = "shared";
-      allocate(kernel.shared, float32, [3072]);
-      attr [IterVar(threadIdx.x: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64 {
+      allocate(kernel.shared, float32, [576]);
+      attr [IterVar(threadIdx.x: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56 {
         compute_3[0] = 0f32
-        compute_3[1] = 0f32
         compute_3[2] = 0f32
+        compute_3[1] = 0f32
         compute_3[3] = 0f32
-        compute_3[4] = 0f32
-        compute_3[5] = 0f32
-        compute_3[6] = 0f32
-        compute_3[7] = 0f32
-        compute_3[8] = 0f32
-        compute_3[9] = 0f32
-        compute_3[10] = 0f32
-        compute_3[11] = 0f32
-        compute_3[12] = 0f32
-        compute_3[13] = 0f32
-        for (rc.outer.outer: int32, 0, 64) {
-          for (ry.outer.outer: int32, 0, 3) {
-            attr [IterVar(threadIdx.x_1: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64 {
-              if @tir.likely((threadIdx.x_1 < 18), dtype=bool) {
-                pad_temp.shared[(threadIdx.x_1*4)] = @tir.if_then_else(((((1 <= (ry.outer.outer + floormod(blockIdx.x, 7))) && ((ry.outer.outer + floormod(blockIdx.x, 7)) < 8)) && (1 <= floormod((threadIdx.x_1*4), 9))) && (floormod((threadIdx.x_1*4), 9) < 8)), (float32*)data_2[((((((rc.outer.outer*392) + (floordiv((threadIdx.x_1*4), 9)*49)) + (ry.outer.outer*7)) + (floormod(blockIdx.x, 7)*7)) + floormod((threadIdx.x_1*4), 9)) - 8)], 0f32, dtype=float32)
-              }
-              if @tir.likely((threadIdx.x_1 < 18), dtype=bool) {
-                pad_temp.shared[((threadIdx.x_1*4) + 1)] = @tir.if_then_else(((((1 <= (ry.outer.outer + floormod(blockIdx.x, 7))) && ((ry.outer.outer + floormod(blockIdx.x, 7)) < 8)) && (1 <= floormod(((threadIdx.x_1*4) + 1), 9))) && (floormod(((threadIdx.x_1*4) + 1), 9) < 8)), (float32*)data_2[((((((rc.outer.outer*392) + (floordiv(((threadIdx.x_1*4) + 1), 9)*49)) + (ry.outer.outer*7)) + (floormod(blockIdx.x, 7)*7)) + floormod(((threadIdx.x_1*4) + 1), 9)) - 8)], 0f32, dtype=float32)
-              }
-              if @tir.likely((threadIdx.x_1 < 18), dtype=bool) {
-                pad_temp.shared[((threadIdx.x_1*4) + 2)] = @tir.if_then_else(((((1 <= (ry.outer.outer + floormod(blockIdx.x, 7))) && ((ry.outer.outer + floormod(blockIdx.x, 7)) < 8)) && (1 <= floormod(((threadIdx.x_1*4) + 2), 9))) && (floormod(((threadIdx.x_1*4) + 2), 9) < 8)), (float32*)data_2[((((((rc.outer.outer*392) + (floordiv(((threadIdx.x_1*4) + 2), 9)*49)) + (ry.outer.outer*7)) + (floormod(blockIdx.x, 7)*7)) + floormod(((threadIdx.x_1*4) + 2), 9)) - 8)], 0f32, dtype=float32)
-              }
-              if @tir.likely((threadIdx.x_1 < 18), dtype=bool) {
-                pad_temp.shared[((threadIdx.x_1*4) + 3)] = @tir.if_then_else(((((1 <= (ry.outer.outer + floormod(blockIdx.x, 7))) && ((ry.outer.outer + floormod(blockIdx.x, 7)) < 8)) && (1 <= floormod(((threadIdx.x_1*4) + 3), 9))) && (floormod(((threadIdx.x_1*4) + 3), 9) < 8)), (float32*)data_2[((((((rc.outer.outer*392) + (floordiv(((threadIdx.x_1*4) + 3), 9)*49)) + (ry.outer.outer*7)) + (floormod(blockIdx.x, 7)*7)) + floormod(((threadIdx.x_1*4) + 3), 9)) - 8)], 0f32, dtype=float32)
-              }
+        for (rc.outer.outer: int32, 0, 256) {
+          attr [IterVar(threadIdx.x_1: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+          if @tir.likely((threadIdx.x_1 < 54), dtype=bool) {
+            pad_temp.shared[threadIdx.x_1] = @tir.if_then_else(((((1 <= (floordiv(floormod(threadIdx.x_1, 27), 9) + floormod(blockIdx.x, 7))) && ((floordiv(floormod(threadIdx.x_1, 27), 9) + floormod(blockIdx.x, 7)) < 8)) && (1 <= floormod(threadIdx.x_1, 9))) && (floormod(threadIdx.x_1, 9) < 8)), (float32*)data_2[((((((rc.outer.outer*98) + (floordiv(threadIdx.x_1, 27)*49)) + (floordiv(floormod(threadIdx.x_1, 27), 9)*7)) + (floormod(blockIdx.x, 7)*7)) + floormod(threadIdx.x_1, 9)) - 8)], 0 [...]
+          }
+          attr [IterVar(threadIdx.x_2: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+          kernel.shared[threadIdx.x_2] = (float32*)kernel_2[((((floordiv(blockIdx.x, 7)*147456) + (floordiv(threadIdx.x_2, 18)*4608)) + (rc.outer.outer*18)) + floormod(threadIdx.x_2, 18))]
+          attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+          kernel.shared[(threadIdx.x_2 + 56)] = (float32*)kernel_2[((((floordiv(blockIdx.x, 7)*147456) + (floordiv((threadIdx.x_2 + 56), 18)*4608)) + (rc.outer.outer*18)) + floormod((threadIdx.x_2 + 2), 18))]
+          attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+          kernel.shared[(threadIdx.x_2 + 112)] = (float32*)kernel_2[((((floordiv(blockIdx.x, 7)*147456) + (floordiv((threadIdx.x_2 + 112), 18)*4608)) + (rc.outer.outer*18)) + floormod((threadIdx.x_2 + 4), 18))]
+          attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+          kernel.shared[(threadIdx.x_2 + 168)] = (float32*)kernel_2[((((floordiv(blockIdx.x, 7)*147456) + (floordiv((threadIdx.x_2 + 168), 18)*4608)) + (rc.outer.outer*18)) + floormod((threadIdx.x_2 + 6), 18))]
+          attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+          kernel.shared[(threadIdx.x_2 + 224)] = (float32*)kernel_2[((((floordiv(blockIdx.x, 7)*147456) + (floordiv((threadIdx.x_2 + 224), 18)*4608)) + (rc.outer.outer*18)) + floormod((threadIdx.x_2 + 8), 18))]
+          attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+          kernel.shared[(threadIdx.x_2 + 280)] = (float32*)kernel_2[((((floordiv(blockIdx.x, 7)*147456) + (floordiv((threadIdx.x_2 + 280), 18)*4608)) + (rc.outer.outer*18)) + floormod((threadIdx.x_2 + 10), 18))]
+          attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+          kernel.shared[(threadIdx.x_2 + 336)] = (float32*)kernel_2[((((floordiv(blockIdx.x, 7)*147456) + (floordiv((threadIdx.x_2 + 336), 18)*4608)) + (rc.outer.outer*18)) + floormod((threadIdx.x_2 + 12), 18))]
+          attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+          kernel.shared[(threadIdx.x_2 + 392)] = (float32*)kernel_2[((((floordiv(blockIdx.x, 7)*147456) + (floordiv((threadIdx.x_2 + 392), 18)*4608)) + (rc.outer.outer*18)) + floormod((threadIdx.x_2 + 14), 18))]
+          attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+          kernel.shared[(threadIdx.x_2 + 448)] = (float32*)kernel_2[((((floordiv(blockIdx.x, 7)*147456) + (floordiv((threadIdx.x_2 + 448), 18)*4608)) + (rc.outer.outer*18)) + floormod((threadIdx.x_2 + 16), 18))]
+          attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+          kernel.shared[(threadIdx.x_2 + 504)] = (float32*)kernel_2[(((((floordiv(blockIdx.x, 7)*147456) + (floordiv(threadIdx.x_2, 18)*4608)) + (rc.outer.outer*18)) + floormod(threadIdx.x_2, 18)) + 129024)]
+          attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 56;
+          if @tir.likely((threadIdx.x_2 < 16), dtype=bool) {
+            kernel.shared[(threadIdx.x_2 + 560)] = (float32*)kernel_2[((((floordiv(blockIdx.x, 7)*147456) + (floordiv((threadIdx.x_2 + 560), 18)*4608)) + (rc.outer.outer*18)) + floormod((threadIdx.x_2 + 2), 18))]
+          }
+          for (rc.outer.inner: int32, 0, 2) {
+            for (rx.outer.inner: int32, 0, 3) {
+              compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[(((rc.outer.inner*27) + rx.outer.inner) + floormod(threadIdx.x, 7))]*(float32*)kernel.shared[(((floordiv(threadIdx.x, 7)*36) + (rc.outer.inner*9)) + rx.outer.inner)]))
+              compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[(((rc.outer.inner*27) + rx.outer.inner) + floormod(threadIdx.x, 7))]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*36) + (rc.outer.inner*9)) + rx.outer.inner) + 288)]))
+              compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[((((rc.outer.inner*27) + rx.outer.inner) + floormod(threadIdx.x, 7)) + 9)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*36) + (rc.outer.inner*9)) + rx.outer.inner) + 3)]))
+              compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[((((rc.outer.inner*27) + rx.outer.inner) + floormod(threadIdx.x, 7)) + 9)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*36) + (rc.outer.inner*9)) + rx.outer.inner) + 291)]))
+              compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[((((rc.outer.inner*27) + rx.outer.inner) + floormod(threadIdx.x, 7)) + 18)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*36) + (rc.outer.inner*9)) + rx.outer.inner) + 6)]))
+              compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[((((rc.outer.inner*27) + rx.outer.inner) + floormod(threadIdx.x, 7)) + 18)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*36) + (rc.outer.inner*9)) + rx.outer.inner) + 294)]))
+              compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[(((rc.outer.inner*27) + rx.outer.inner) + floormod(threadIdx.x, 7))]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*36) + (rc.outer.inner*9)) + rx.outer.inner) + 18)]))
+              compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[(((rc.outer.inner*27) + rx.outer.inner) + floormod(threadIdx.x, 7))]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*36) + (rc.outer.inner*9)) + rx.outer.inner) + 306)]))
+              compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[((((rc.outer.inner*27) + rx.outer.inner) + floormod(threadIdx.x, 7)) + 9)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*36) + (rc.outer.inner*9)) + rx.outer.inner) + 21)]))
+              compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[((((rc.outer.inner*27) + rx.outer.inner) + floormod(threadIdx.x, 7)) + 9)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*36) + (rc.outer.inner*9)) + rx.outer.inner) + 309)]))
+              compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[((((rc.outer.inner*27) + rx.outer.inner) + floormod(threadIdx.x, 7)) + 18)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*36) + (rc.outer.inner*9)) + rx.outer.inner) + 24)]))
+              compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[((((rc.outer.inner*27) + rx.outer.inner) + floormod(threadIdx.x, 7)) + 18)]*(float32*)kernel.shared[((((floordiv(threadIdx.x, 7)*36) + (rc.outer.inner*9)) + rx.outer.inner) + 312)]))
             }
-            attr [IterVar(threadIdx.x_2: int32, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[threadIdx.x_2] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod(threadIdx.x_2, 24), 3)*9)) + (ry.outer.outer*3)) + floormod(threadIdx.x_2, 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 64)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 64), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 16), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 1), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 128)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 128), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 8), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 2), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 192)] = (float32*)kernel_2[(((((((floordiv(blockIdx.x, 7)*589824) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod(threadIdx.x_2, 24), 3)*9)) + (ry.outer.outer*3)) + floormod(threadIdx.x_2, 3)) + 36864)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 256)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 256), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 16), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 1), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 320)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 320), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 8), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 2), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 384)] = (float32*)kernel_2[(((((((floordiv(blockIdx.x, 7)*589824) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod(threadIdx.x_2, 24), 3)*9)) + (ry.outer.outer*3)) + floormod(threadIdx.x_2, 3)) + 73728)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 448)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 448), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 16), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 1), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 512)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 512), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 8), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 2), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 576)] = (float32*)kernel_2[(((((((floordiv(blockIdx.x, 7)*589824) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod(threadIdx.x_2, 24), 3)*9)) + (ry.outer.outer*3)) + floormod(threadIdx.x_2, 3)) + 110592)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 640)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 640), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 16), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 1), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 704)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 704), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 8), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 2), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 768)] = (float32*)kernel_2[(((((((floordiv(blockIdx.x, 7)*589824) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod(threadIdx.x_2, 24), 3)*9)) + (ry.outer.outer*3)) + floormod(threadIdx.x_2, 3)) + 147456)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 832)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 832), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 16), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 1), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 896)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 896), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 8), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 2), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 960)] = (float32*)kernel_2[(((((((floordiv(blockIdx.x, 7)*589824) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod(threadIdx.x_2, 24), 3)*9)) + (ry.outer.outer*3)) + floormod(threadIdx.x_2, 3)) + 184320)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 1024)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 1024), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 16), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 1), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 1088)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 1088), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 8), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 2), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 1152)] = (float32*)kernel_2[(((((((floordiv(blockIdx.x, 7)*589824) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod(threadIdx.x_2, 24), 3)*9)) + (ry.outer.outer*3)) + floormod(threadIdx.x_2, 3)) + 221184)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 1216)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 1216), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 16), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 1), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 1280)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 1280), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 8), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 2), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 1344)] = (float32*)kernel_2[(((((((floordiv(blockIdx.x, 7)*589824) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod(threadIdx.x_2, 24), 3)*9)) + (ry.outer.outer*3)) + floormod(threadIdx.x_2, 3)) + 258048)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 1408)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 1408), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 16), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 1), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 1472)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 1472), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 8), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 2), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 1536)] = (float32*)kernel_2[(((((((floordiv(blockIdx.x, 7)*589824) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod(threadIdx.x_2, 24), 3)*9)) + (ry.outer.outer*3)) + floormod(threadIdx.x_2, 3)) + 294912)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 1600)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 1600), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 16), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 1), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 1664)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 1664), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 8), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 2), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 1728)] = (float32*)kernel_2[(((((((floordiv(blockIdx.x, 7)*589824) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod(threadIdx.x_2, 24), 3)*9)) + (ry.outer.outer*3)) + floormod(threadIdx.x_2, 3)) + 331776)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 1792)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 1792), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 16), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 1), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 1856)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 1856), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 8), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 2), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 1920)] = (float32*)kernel_2[(((((((floordiv(blockIdx.x, 7)*589824) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod(threadIdx.x_2, 24), 3)*9)) + (ry.outer.outer*3)) + floormod(threadIdx.x_2, 3)) + 368640)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 1984)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 1984), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 16), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 1), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 2048)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 2048), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 8), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 2), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 2112)] = (float32*)kernel_2[(((((((floordiv(blockIdx.x, 7)*589824) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod(threadIdx.x_2, 24), 3)*9)) + (ry.outer.outer*3)) + floormod(threadIdx.x_2, 3)) + 405504)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 2176)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 2176), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 16), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 1), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 2240)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 2240), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 8), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 2), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 2304)] = (float32*)kernel_2[(((((((floordiv(blockIdx.x, 7)*589824) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod(threadIdx.x_2, 24), 3)*9)) + (ry.outer.outer*3)) + floormod(threadIdx.x_2, 3)) + 442368)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 2368)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 2368), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 16), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 1), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 2432)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 2432), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 8), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 2), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 2496)] = (float32*)kernel_2[(((((((floordiv(blockIdx.x, 7)*589824) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod(threadIdx.x_2, 24), 3)*9)) + (ry.outer.outer*3)) + floormod(threadIdx.x_2, 3)) + 479232)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 2560)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 2560), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 16), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 1), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 2624)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 2624), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 8), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 2), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 2688)] = (float32*)kernel_2[(((((((floordiv(blockIdx.x, 7)*589824) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod(threadIdx.x_2, 24), 3)*9)) + (ry.outer.outer*3)) + floormod(threadIdx.x_2, 3)) + 516096)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 2752)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 2752), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 16), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 1), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 2816)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 2816), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 8), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 2), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 2880)] = (float32*)kernel_2[(((((((floordiv(blockIdx.x, 7)*589824) + (floordiv(threadIdx.x_2, 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod(threadIdx.x_2, 24), 3)*9)) + (ry.outer.outer*3)) + floormod(threadIdx.x_2, 3)) + 552960)]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 2944)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 2944), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 16), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 1), 3))]
-            attr [IterVar(threadIdx.x_2, (nullptr), "ThreadIndex", "threadIdx.x")] "thread_extent" = 64;
-            kernel.shared[(threadIdx.x_2 + 3008)] = (float32*)kernel_2[((((((floordiv(blockIdx.x, 7)*589824) + (floordiv((threadIdx.x_2 + 3008), 24)*4608)) + (rc.outer.outer*72)) + (floordiv(floormod((threadIdx.x_2 + 8), 24), 3)*9)) + (ry.outer.outer*3)) + floormod((threadIdx.x_2 + 2), 3))]
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[0]*(float32*)kernel.shared[(threadIdx.x*48)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[9]*(float32*)kernel.shared[((threadIdx.x*48) + 3)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[1]*(float32*)kernel.shared[(threadIdx.x*48)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[10]*(float32*)kernel.shared[((threadIdx.x*48) + 3)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[2]*(float32*)kernel.shared[(threadIdx.x*48)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[11]*(float32*)kernel.shared[((threadIdx.x*48) + 3)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[3]*(float32*)kernel.shared[(threadIdx.x*48)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[12]*(float32*)kernel.shared[((threadIdx.x*48) + 3)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[4]*(float32*)kernel.shared[(threadIdx.x*48)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[13]*(float32*)kernel.shared[((threadIdx.x*48) + 3)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[5]*(float32*)kernel.shared[(threadIdx.x*48)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[14]*(float32*)kernel.shared[((threadIdx.x*48) + 3)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[6]*(float32*)kernel.shared[(threadIdx.x*48)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[15]*(float32*)kernel.shared[((threadIdx.x*48) + 3)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[0]*(float32*)kernel.shared[((threadIdx.x*48) + 24)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[9]*(float32*)kernel.shared[((threadIdx.x*48) + 27)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[1]*(float32*)kernel.shared[((threadIdx.x*48) + 24)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[10]*(float32*)kernel.shared[((threadIdx.x*48) + 27)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[2]*(float32*)kernel.shared[((threadIdx.x*48) + 24)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[11]*(float32*)kernel.shared[((threadIdx.x*48) + 27)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[3]*(float32*)kernel.shared[((threadIdx.x*48) + 24)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[12]*(float32*)kernel.shared[((threadIdx.x*48) + 27)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[4]*(float32*)kernel.shared[((threadIdx.x*48) + 24)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[13]*(float32*)kernel.shared[((threadIdx.x*48) + 27)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[5]*(float32*)kernel.shared[((threadIdx.x*48) + 24)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[14]*(float32*)kernel.shared[((threadIdx.x*48) + 27)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[6]*(float32*)kernel.shared[((threadIdx.x*48) + 24)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[15]*(float32*)kernel.shared[((threadIdx.x*48) + 27)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[1]*(float32*)kernel.shared[((threadIdx.x*48) + 1)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[10]*(float32*)kernel.shared[((threadIdx.x*48) + 4)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[2]*(float32*)kernel.shared[((threadIdx.x*48) + 1)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[11]*(float32*)kernel.shared[((threadIdx.x*48) + 4)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[3]*(float32*)kernel.shared[((threadIdx.x*48) + 1)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[12]*(float32*)kernel.shared[((threadIdx.x*48) + 4)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[4]*(float32*)kernel.shared[((threadIdx.x*48) + 1)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[13]*(float32*)kernel.shared[((threadIdx.x*48) + 4)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[5]*(float32*)kernel.shared[((threadIdx.x*48) + 1)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[14]*(float32*)kernel.shared[((threadIdx.x*48) + 4)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[6]*(float32*)kernel.shared[((threadIdx.x*48) + 1)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[15]*(float32*)kernel.shared[((threadIdx.x*48) + 4)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[7]*(float32*)kernel.shared[((threadIdx.x*48) + 1)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[16]*(float32*)kernel.shared[((threadIdx.x*48) + 4)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[1]*(float32*)kernel.shared[((threadIdx.x*48) + 25)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[10]*(float32*)kernel.shared[((threadIdx.x*48) + 28)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[2]*(float32*)kernel.shared[((threadIdx.x*48) + 25)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[11]*(float32*)kernel.shared[((threadIdx.x*48) + 28)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[3]*(float32*)kernel.shared[((threadIdx.x*48) + 25)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[12]*(float32*)kernel.shared[((threadIdx.x*48) + 28)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[4]*(float32*)kernel.shared[((threadIdx.x*48) + 25)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[13]*(float32*)kernel.shared[((threadIdx.x*48) + 28)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[5]*(float32*)kernel.shared[((threadIdx.x*48) + 25)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[14]*(float32*)kernel.shared[((threadIdx.x*48) + 28)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[6]*(float32*)kernel.shared[((threadIdx.x*48) + 25)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[15]*(float32*)kernel.shared[((threadIdx.x*48) + 28)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[7]*(float32*)kernel.shared[((threadIdx.x*48) + 25)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[16]*(float32*)kernel.shared[((threadIdx.x*48) + 28)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[2]*(float32*)kernel.shared[((threadIdx.x*48) + 2)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[11]*(float32*)kernel.shared[((threadIdx.x*48) + 5)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[3]*(float32*)kernel.shared[((threadIdx.x*48) + 2)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[12]*(float32*)kernel.shared[((threadIdx.x*48) + 5)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[4]*(float32*)kernel.shared[((threadIdx.x*48) + 2)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[13]*(float32*)kernel.shared[((threadIdx.x*48) + 5)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[5]*(float32*)kernel.shared[((threadIdx.x*48) + 2)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[14]*(float32*)kernel.shared[((threadIdx.x*48) + 5)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[6]*(float32*)kernel.shared[((threadIdx.x*48) + 2)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[15]*(float32*)kernel.shared[((threadIdx.x*48) + 5)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[7]*(float32*)kernel.shared[((threadIdx.x*48) + 2)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[16]*(float32*)kernel.shared[((threadIdx.x*48) + 5)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[8]*(float32*)kernel.shared[((threadIdx.x*48) + 2)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[17]*(float32*)kernel.shared[((threadIdx.x*48) + 5)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[2]*(float32*)kernel.shared[((threadIdx.x*48) + 26)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[11]*(float32*)kernel.shared[((threadIdx.x*48) + 29)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[3]*(float32*)kernel.shared[((threadIdx.x*48) + 26)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[12]*(float32*)kernel.shared[((threadIdx.x*48) + 29)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[4]*(float32*)kernel.shared[((threadIdx.x*48) + 26)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[13]*(float32*)kernel.shared[((threadIdx.x*48) + 29)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[5]*(float32*)kernel.shared[((threadIdx.x*48) + 26)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[14]*(float32*)kernel.shared[((threadIdx.x*48) + 29)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[6]*(float32*)kernel.shared[((threadIdx.x*48) + 26)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[15]*(float32*)kernel.shared[((threadIdx.x*48) + 29)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[7]*(float32*)kernel.shared[((threadIdx.x*48) + 26)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[16]*(float32*)kernel.shared[((threadIdx.x*48) + 29)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[8]*(float32*)kernel.shared[((threadIdx.x*48) + 26)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[17]*(float32*)kernel.shared[((threadIdx.x*48) + 29)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[18]*(float32*)kernel.shared[((threadIdx.x*48) + 6)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[27]*(float32*)kernel.shared[((threadIdx.x*48) + 9)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[19]*(float32*)kernel.shared[((threadIdx.x*48) + 6)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[28]*(float32*)kernel.shared[((threadIdx.x*48) + 9)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[20]*(float32*)kernel.shared[((threadIdx.x*48) + 6)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[29]*(float32*)kernel.shared[((threadIdx.x*48) + 9)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[21]*(float32*)kernel.shared[((threadIdx.x*48) + 6)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[30]*(float32*)kernel.shared[((threadIdx.x*48) + 9)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[22]*(float32*)kernel.shared[((threadIdx.x*48) + 6)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[31]*(float32*)kernel.shared[((threadIdx.x*48) + 9)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[23]*(float32*)kernel.shared[((threadIdx.x*48) + 6)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[32]*(float32*)kernel.shared[((threadIdx.x*48) + 9)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[24]*(float32*)kernel.shared[((threadIdx.x*48) + 6)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[33]*(float32*)kernel.shared[((threadIdx.x*48) + 9)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[18]*(float32*)kernel.shared[((threadIdx.x*48) + 30)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[27]*(float32*)kernel.shared[((threadIdx.x*48) + 33)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[19]*(float32*)kernel.shared[((threadIdx.x*48) + 30)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[28]*(float32*)kernel.shared[((threadIdx.x*48) + 33)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[20]*(float32*)kernel.shared[((threadIdx.x*48) + 30)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[29]*(float32*)kernel.shared[((threadIdx.x*48) + 33)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[21]*(float32*)kernel.shared[((threadIdx.x*48) + 30)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[30]*(float32*)kernel.shared[((threadIdx.x*48) + 33)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[22]*(float32*)kernel.shared[((threadIdx.x*48) + 30)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[31]*(float32*)kernel.shared[((threadIdx.x*48) + 33)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[23]*(float32*)kernel.shared[((threadIdx.x*48) + 30)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[32]*(float32*)kernel.shared[((threadIdx.x*48) + 33)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[24]*(float32*)kernel.shared[((threadIdx.x*48) + 30)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[33]*(float32*)kernel.shared[((threadIdx.x*48) + 33)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[19]*(float32*)kernel.shared[((threadIdx.x*48) + 7)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[28]*(float32*)kernel.shared[((threadIdx.x*48) + 10)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[20]*(float32*)kernel.shared[((threadIdx.x*48) + 7)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[29]*(float32*)kernel.shared[((threadIdx.x*48) + 10)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[21]*(float32*)kernel.shared[((threadIdx.x*48) + 7)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[30]*(float32*)kernel.shared[((threadIdx.x*48) + 10)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[22]*(float32*)kernel.shared[((threadIdx.x*48) + 7)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[31]*(float32*)kernel.shared[((threadIdx.x*48) + 10)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[23]*(float32*)kernel.shared[((threadIdx.x*48) + 7)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[32]*(float32*)kernel.shared[((threadIdx.x*48) + 10)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[24]*(float32*)kernel.shared[((threadIdx.x*48) + 7)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[33]*(float32*)kernel.shared[((threadIdx.x*48) + 10)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[25]*(float32*)kernel.shared[((threadIdx.x*48) + 7)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[34]*(float32*)kernel.shared[((threadIdx.x*48) + 10)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[19]*(float32*)kernel.shared[((threadIdx.x*48) + 31)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[28]*(float32*)kernel.shared[((threadIdx.x*48) + 34)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[20]*(float32*)kernel.shared[((threadIdx.x*48) + 31)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[29]*(float32*)kernel.shared[((threadIdx.x*48) + 34)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[21]*(float32*)kernel.shared[((threadIdx.x*48) + 31)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[30]*(float32*)kernel.shared[((threadIdx.x*48) + 34)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[22]*(float32*)kernel.shared[((threadIdx.x*48) + 31)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[31]*(float32*)kernel.shared[((threadIdx.x*48) + 34)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[23]*(float32*)kernel.shared[((threadIdx.x*48) + 31)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[32]*(float32*)kernel.shared[((threadIdx.x*48) + 34)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[24]*(float32*)kernel.shared[((threadIdx.x*48) + 31)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[33]*(float32*)kernel.shared[((threadIdx.x*48) + 34)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[25]*(float32*)kernel.shared[((threadIdx.x*48) + 31)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[34]*(float32*)kernel.shared[((threadIdx.x*48) + 34)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[20]*(float32*)kernel.shared[((threadIdx.x*48) + 8)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[29]*(float32*)kernel.shared[((threadIdx.x*48) + 11)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[21]*(float32*)kernel.shared[((threadIdx.x*48) + 8)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[30]*(float32*)kernel.shared[((threadIdx.x*48) + 11)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[22]*(float32*)kernel.shared[((threadIdx.x*48) + 8)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[31]*(float32*)kernel.shared[((threadIdx.x*48) + 11)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[23]*(float32*)kernel.shared[((threadIdx.x*48) + 8)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[32]*(float32*)kernel.shared[((threadIdx.x*48) + 11)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[24]*(float32*)kernel.shared[((threadIdx.x*48) + 8)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[33]*(float32*)kernel.shared[((threadIdx.x*48) + 11)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[25]*(float32*)kernel.shared[((threadIdx.x*48) + 8)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[34]*(float32*)kernel.shared[((threadIdx.x*48) + 11)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[26]*(float32*)kernel.shared[((threadIdx.x*48) + 8)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[35]*(float32*)kernel.shared[((threadIdx.x*48) + 11)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[20]*(float32*)kernel.shared[((threadIdx.x*48) + 32)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[29]*(float32*)kernel.shared[((threadIdx.x*48) + 35)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[21]*(float32*)kernel.shared[((threadIdx.x*48) + 32)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[30]*(float32*)kernel.shared[((threadIdx.x*48) + 35)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[22]*(float32*)kernel.shared[((threadIdx.x*48) + 32)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[31]*(float32*)kernel.shared[((threadIdx.x*48) + 35)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[23]*(float32*)kernel.shared[((threadIdx.x*48) + 32)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[32]*(float32*)kernel.shared[((threadIdx.x*48) + 35)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[24]*(float32*)kernel.shared[((threadIdx.x*48) + 32)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[33]*(float32*)kernel.shared[((threadIdx.x*48) + 35)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[25]*(float32*)kernel.shared[((threadIdx.x*48) + 32)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[34]*(float32*)kernel.shared[((threadIdx.x*48) + 35)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[26]*(float32*)kernel.shared[((threadIdx.x*48) + 32)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[35]*(float32*)kernel.shared[((threadIdx.x*48) + 35)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[36]*(float32*)kernel.shared[((threadIdx.x*48) + 12)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[45]*(float32*)kernel.shared[((threadIdx.x*48) + 15)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[37]*(float32*)kernel.shared[((threadIdx.x*48) + 12)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[46]*(float32*)kernel.shared[((threadIdx.x*48) + 15)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[38]*(float32*)kernel.shared[((threadIdx.x*48) + 12)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[47]*(float32*)kernel.shared[((threadIdx.x*48) + 15)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[39]*(float32*)kernel.shared[((threadIdx.x*48) + 12)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[48]*(float32*)kernel.shared[((threadIdx.x*48) + 15)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[40]*(float32*)kernel.shared[((threadIdx.x*48) + 12)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[49]*(float32*)kernel.shared[((threadIdx.x*48) + 15)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[41]*(float32*)kernel.shared[((threadIdx.x*48) + 12)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[50]*(float32*)kernel.shared[((threadIdx.x*48) + 15)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[42]*(float32*)kernel.shared[((threadIdx.x*48) + 12)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[51]*(float32*)kernel.shared[((threadIdx.x*48) + 15)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[36]*(float32*)kernel.shared[((threadIdx.x*48) + 36)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[45]*(float32*)kernel.shared[((threadIdx.x*48) + 39)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[37]*(float32*)kernel.shared[((threadIdx.x*48) + 36)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[46]*(float32*)kernel.shared[((threadIdx.x*48) + 39)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[38]*(float32*)kernel.shared[((threadIdx.x*48) + 36)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[47]*(float32*)kernel.shared[((threadIdx.x*48) + 39)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[39]*(float32*)kernel.shared[((threadIdx.x*48) + 36)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[48]*(float32*)kernel.shared[((threadIdx.x*48) + 39)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[40]*(float32*)kernel.shared[((threadIdx.x*48) + 36)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[49]*(float32*)kernel.shared[((threadIdx.x*48) + 39)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[41]*(float32*)kernel.shared[((threadIdx.x*48) + 36)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[50]*(float32*)kernel.shared[((threadIdx.x*48) + 39)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[42]*(float32*)kernel.shared[((threadIdx.x*48) + 36)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[51]*(float32*)kernel.shared[((threadIdx.x*48) + 39)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[37]*(float32*)kernel.shared[((threadIdx.x*48) + 13)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[46]*(float32*)kernel.shared[((threadIdx.x*48) + 16)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[38]*(float32*)kernel.shared[((threadIdx.x*48) + 13)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[47]*(float32*)kernel.shared[((threadIdx.x*48) + 16)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[39]*(float32*)kernel.shared[((threadIdx.x*48) + 13)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[48]*(float32*)kernel.shared[((threadIdx.x*48) + 16)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[40]*(float32*)kernel.shared[((threadIdx.x*48) + 13)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[49]*(float32*)kernel.shared[((threadIdx.x*48) + 16)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[41]*(float32*)kernel.shared[((threadIdx.x*48) + 13)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[50]*(float32*)kernel.shared[((threadIdx.x*48) + 16)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[42]*(float32*)kernel.shared[((threadIdx.x*48) + 13)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[51]*(float32*)kernel.shared[((threadIdx.x*48) + 16)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[43]*(float32*)kernel.shared[((threadIdx.x*48) + 13)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[52]*(float32*)kernel.shared[((threadIdx.x*48) + 16)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[37]*(float32*)kernel.shared[((threadIdx.x*48) + 37)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[46]*(float32*)kernel.shared[((threadIdx.x*48) + 40)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[38]*(float32*)kernel.shared[((threadIdx.x*48) + 37)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[47]*(float32*)kernel.shared[((threadIdx.x*48) + 40)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[39]*(float32*)kernel.shared[((threadIdx.x*48) + 37)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[48]*(float32*)kernel.shared[((threadIdx.x*48) + 40)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[40]*(float32*)kernel.shared[((threadIdx.x*48) + 37)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[49]*(float32*)kernel.shared[((threadIdx.x*48) + 40)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[41]*(float32*)kernel.shared[((threadIdx.x*48) + 37)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[50]*(float32*)kernel.shared[((threadIdx.x*48) + 40)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[42]*(float32*)kernel.shared[((threadIdx.x*48) + 37)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[51]*(float32*)kernel.shared[((threadIdx.x*48) + 40)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[43]*(float32*)kernel.shared[((threadIdx.x*48) + 37)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[52]*(float32*)kernel.shared[((threadIdx.x*48) + 40)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[38]*(float32*)kernel.shared[((threadIdx.x*48) + 14)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[47]*(float32*)kernel.shared[((threadIdx.x*48) + 17)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[39]*(float32*)kernel.shared[((threadIdx.x*48) + 14)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[48]*(float32*)kernel.shared[((threadIdx.x*48) + 17)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[40]*(float32*)kernel.shared[((threadIdx.x*48) + 14)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[49]*(float32*)kernel.shared[((threadIdx.x*48) + 17)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[41]*(float32*)kernel.shared[((threadIdx.x*48) + 14)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[50]*(float32*)kernel.shared[((threadIdx.x*48) + 17)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[42]*(float32*)kernel.shared[((threadIdx.x*48) + 14)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[51]*(float32*)kernel.shared[((threadIdx.x*48) + 17)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[43]*(float32*)kernel.shared[((threadIdx.x*48) + 14)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[52]*(float32*)kernel.shared[((threadIdx.x*48) + 17)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[44]*(float32*)kernel.shared[((threadIdx.x*48) + 14)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[53]*(float32*)kernel.shared[((threadIdx.x*48) + 17)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[38]*(float32*)kernel.shared[((threadIdx.x*48) + 38)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[47]*(float32*)kernel.shared[((threadIdx.x*48) + 41)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[39]*(float32*)kernel.shared[((threadIdx.x*48) + 38)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[48]*(float32*)kernel.shared[((threadIdx.x*48) + 41)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[40]*(float32*)kernel.shared[((threadIdx.x*48) + 38)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[49]*(float32*)kernel.shared[((threadIdx.x*48) + 41)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[41]*(float32*)kernel.shared[((threadIdx.x*48) + 38)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[50]*(float32*)kernel.shared[((threadIdx.x*48) + 41)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[42]*(float32*)kernel.shared[((threadIdx.x*48) + 38)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[51]*(float32*)kernel.shared[((threadIdx.x*48) + 41)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[43]*(float32*)kernel.shared[((threadIdx.x*48) + 38)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[52]*(float32*)kernel.shared[((threadIdx.x*48) + 41)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[44]*(float32*)kernel.shared[((threadIdx.x*48) + 38)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[53]*(float32*)kernel.shared[((threadIdx.x*48) + 41)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[54]*(float32*)kernel.shared[((threadIdx.x*48) + 18)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[63]*(float32*)kernel.shared[((threadIdx.x*48) + 21)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[55]*(float32*)kernel.shared[((threadIdx.x*48) + 18)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[64]*(float32*)kernel.shared[((threadIdx.x*48) + 21)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[56]*(float32*)kernel.shared[((threadIdx.x*48) + 18)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[65]*(float32*)kernel.shared[((threadIdx.x*48) + 21)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[57]*(float32*)kernel.shared[((threadIdx.x*48) + 18)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[66]*(float32*)kernel.shared[((threadIdx.x*48) + 21)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[58]*(float32*)kernel.shared[((threadIdx.x*48) + 18)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[67]*(float32*)kernel.shared[((threadIdx.x*48) + 21)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[59]*(float32*)kernel.shared[((threadIdx.x*48) + 18)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[68]*(float32*)kernel.shared[((threadIdx.x*48) + 21)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[60]*(float32*)kernel.shared[((threadIdx.x*48) + 18)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[69]*(float32*)kernel.shared[((threadIdx.x*48) + 21)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[54]*(float32*)kernel.shared[((threadIdx.x*48) + 42)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[63]*(float32*)kernel.shared[((threadIdx.x*48) + 45)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[55]*(float32*)kernel.shared[((threadIdx.x*48) + 42)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[64]*(float32*)kernel.shared[((threadIdx.x*48) + 45)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[56]*(float32*)kernel.shared[((threadIdx.x*48) + 42)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[65]*(float32*)kernel.shared[((threadIdx.x*48) + 45)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[57]*(float32*)kernel.shared[((threadIdx.x*48) + 42)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[66]*(float32*)kernel.shared[((threadIdx.x*48) + 45)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[58]*(float32*)kernel.shared[((threadIdx.x*48) + 42)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[67]*(float32*)kernel.shared[((threadIdx.x*48) + 45)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[59]*(float32*)kernel.shared[((threadIdx.x*48) + 42)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[68]*(float32*)kernel.shared[((threadIdx.x*48) + 45)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[60]*(float32*)kernel.shared[((threadIdx.x*48) + 42)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[69]*(float32*)kernel.shared[((threadIdx.x*48) + 45)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[55]*(float32*)kernel.shared[((threadIdx.x*48) + 19)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[64]*(float32*)kernel.shared[((threadIdx.x*48) + 22)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[56]*(float32*)kernel.shared[((threadIdx.x*48) + 19)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[65]*(float32*)kernel.shared[((threadIdx.x*48) + 22)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[57]*(float32*)kernel.shared[((threadIdx.x*48) + 19)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[66]*(float32*)kernel.shared[((threadIdx.x*48) + 22)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[58]*(float32*)kernel.shared[((threadIdx.x*48) + 19)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[67]*(float32*)kernel.shared[((threadIdx.x*48) + 22)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[59]*(float32*)kernel.shared[((threadIdx.x*48) + 19)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[68]*(float32*)kernel.shared[((threadIdx.x*48) + 22)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[60]*(float32*)kernel.shared[((threadIdx.x*48) + 19)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[69]*(float32*)kernel.shared[((threadIdx.x*48) + 22)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[61]*(float32*)kernel.shared[((threadIdx.x*48) + 19)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[70]*(float32*)kernel.shared[((threadIdx.x*48) + 22)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[55]*(float32*)kernel.shared[((threadIdx.x*48) + 43)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[64]*(float32*)kernel.shared[((threadIdx.x*48) + 46)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[56]*(float32*)kernel.shared[((threadIdx.x*48) + 43)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[65]*(float32*)kernel.shared[((threadIdx.x*48) + 46)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[57]*(float32*)kernel.shared[((threadIdx.x*48) + 43)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[66]*(float32*)kernel.shared[((threadIdx.x*48) + 46)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[58]*(float32*)kernel.shared[((threadIdx.x*48) + 43)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[67]*(float32*)kernel.shared[((threadIdx.x*48) + 46)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[59]*(float32*)kernel.shared[((threadIdx.x*48) + 43)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[68]*(float32*)kernel.shared[((threadIdx.x*48) + 46)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[60]*(float32*)kernel.shared[((threadIdx.x*48) + 43)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[69]*(float32*)kernel.shared[((threadIdx.x*48) + 46)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[61]*(float32*)kernel.shared[((threadIdx.x*48) + 43)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[70]*(float32*)kernel.shared[((threadIdx.x*48) + 46)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[56]*(float32*)kernel.shared[((threadIdx.x*48) + 20)]))
-            compute_3[0] = ((float32*)compute_3[0] + ((float32*)pad_temp.shared[65]*(float32*)kernel.shared[((threadIdx.x*48) + 23)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[57]*(float32*)kernel.shared[((threadIdx.x*48) + 20)]))
-            compute_3[1] = ((float32*)compute_3[1] + ((float32*)pad_temp.shared[66]*(float32*)kernel.shared[((threadIdx.x*48) + 23)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[58]*(float32*)kernel.shared[((threadIdx.x*48) + 20)]))
-            compute_3[2] = ((float32*)compute_3[2] + ((float32*)pad_temp.shared[67]*(float32*)kernel.shared[((threadIdx.x*48) + 23)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[59]*(float32*)kernel.shared[((threadIdx.x*48) + 20)]))
-            compute_3[3] = ((float32*)compute_3[3] + ((float32*)pad_temp.shared[68]*(float32*)kernel.shared[((threadIdx.x*48) + 23)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[60]*(float32*)kernel.shared[((threadIdx.x*48) + 20)]))
-            compute_3[4] = ((float32*)compute_3[4] + ((float32*)pad_temp.shared[69]*(float32*)kernel.shared[((threadIdx.x*48) + 23)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[61]*(float32*)kernel.shared[((threadIdx.x*48) + 20)]))
-            compute_3[5] = ((float32*)compute_3[5] + ((float32*)pad_temp.shared[70]*(float32*)kernel.shared[((threadIdx.x*48) + 23)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[62]*(float32*)kernel.shared[((threadIdx.x*48) + 20)]))
-            compute_3[6] = ((float32*)compute_3[6] + ((float32*)pad_temp.shared[71]*(float32*)kernel.shared[((threadIdx.x*48) + 23)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[56]*(float32*)kernel.shared[((threadIdx.x*48) + 44)]))
-            compute_3[7] = ((float32*)compute_3[7] + ((float32*)pad_temp.shared[65]*(float32*)kernel.shared[((threadIdx.x*48) + 47)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[57]*(float32*)kernel.shared[((threadIdx.x*48) + 44)]))
-            compute_3[8] = ((float32*)compute_3[8] + ((float32*)pad_temp.shared[66]*(float32*)kernel.shared[((threadIdx.x*48) + 47)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[58]*(float32*)kernel.shared[((threadIdx.x*48) + 44)]))
-            compute_3[9] = ((float32*)compute_3[9] + ((float32*)pad_temp.shared[67]*(float32*)kernel.shared[((threadIdx.x*48) + 47)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[59]*(float32*)kernel.shared[((threadIdx.x*48) + 44)]))
-            compute_3[10] = ((float32*)compute_3[10] + ((float32*)pad_temp.shared[68]*(float32*)kernel.shared[((threadIdx.x*48) + 47)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[60]*(float32*)kernel.shared[((threadIdx.x*48) + 44)]))
-            compute_3[11] = ((float32*)compute_3[11] + ((float32*)pad_temp.shared[69]*(float32*)kernel.shared[((threadIdx.x*48) + 47)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[61]*(float32*)kernel.shared[((threadIdx.x*48) + 44)]))
-            compute_3[12] = ((float32*)compute_3[12] + ((float32*)pad_temp.shared[70]*(float32*)kernel.shared[((threadIdx.x*48) + 47)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[62]*(float32*)kernel.shared[((threadIdx.x*48) + 44)]))
-            compute_3[13] = ((float32*)compute_3[13] + ((float32*)pad_temp.shared[71]*(float32*)kernel.shared[((threadIdx.x*48) + 47)]))
           }
         }
         for (i1.inner: int32, 0, 2) {
-          for (i3.inner: int32, 0, 7) {
-            compute_2[(((((floordiv(blockIdx.x, 7)*6272) + (threadIdx.x*98)) + (i1.inner*49)) + (floormod(blockIdx.x, 7)*7)) + i3.inner)] = max(((float32*)compute_3[((i1.inner*7) + i3.inner)] + (float32*)bias_2[(((floordiv(blockIdx.x, 7)*128) + (threadIdx.x*2)) + i1.inner)]), 0f32)
-          }
+          compute_2[(((((floordiv(blockIdx.x, 7)*1568) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(blockIdx.x, 7)*7)) + floormod(threadIdx.x, 7))] = max(((float32*)compute_3[i1.inner] + (float32*)bias_2[(((floordiv(blockIdx.x, 7)*32) + (floordiv(threadIdx.x, 7)*2)) + i1.inner)]), 0f32)
+          compute_2[((((((floordiv(blockIdx.x, 7)*1568) + (floordiv(threadIdx.x, 7)*98)) + (i1.inner*49)) + (floormod(blockIdx.x, 7)*7)) + floormod(threadIdx.x, 7)) + 784)] = max(((float32*)compute_3[(i1.inner + 2)] + (float32*)bias_2[((((floordiv(blockIdx.x, 7)*32) + (floordiv(threadIdx.x, 7)*2)) + i1.inner) + 16)]), 0f32)
         }
       }
     }
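
The epilogue in the TIR above writes ``max(compute_3 + bias, 0f32)``: the bias add and the ReLU are fused into the convolution's output stage rather than run as separate kernels. A minimal sketch of a TE workload that produces this kind of fused epilogue, assuming the tutorial's conv2d + bias + ReLU layer (the function name ``conv2d_layer`` is illustrative):

.. code-block:: python

    import tvm
    from tvm import te, topi, auto_scheduler

    # Sketch: conv2d + bias + ReLU. The final compute stage lowers to
    # max(conv + bias, 0), matching the epilogue in the TIR above.
    @auto_scheduler.register_workload
    def conv2d_layer(N, H, W, CO, CI, KH, KW, stride, padding):
        data = te.placeholder((N, CI, H, W), name="data")
        kernel = te.placeholder((CO, CI, KH, KW), name="kernel")
        bias = te.placeholder((1, CO, 1, 1), name="bias")
        conv = topi.nn.conv2d_nchw(data, kernel, stride, padding, dilation=1, out_dtype="float32")
        out = topi.nn.relu(conv + bias)  # fused bias add + ReLU
        return [data, kernel, bias, out]
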
@@ -748,7 +333,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 0.417 ms
+    Execution time of this operator: 0.289 ms
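
A minimal sketch of how such a timing is typically obtained, assuming the tuned schedule ``sch`` and argument tensors ``args`` have been rebuilt from the tuning log, and that ``data_np``, ``weight_np``, ``bias_np``, ``out_np`` are numpy arrays with the workload's shapes (all names here are illustrative):

.. code-block:: python

    import numpy as np
    import tvm

    # Build the tuned schedule for CUDA and time it on the GPU.
    func = tvm.build(sch, args, target="cuda")
    ctx = tvm.gpu()
    args_tvm = [tvm.nd.array(x, ctx=ctx) for x in (data_np, weight_np, bias_np, out_np)]
    # min_repeat_ms keeps each measurement long enough to amortize kernel
    # launch overhead; the median over runs smooths out timing jitter.
    evaluator = func.time_evaluator(func.entry_name, ctx, min_repeat_ms=500)
    print("Execution time: %.3f ms" % (np.median(evaluator(*args_tvm).results) * 1000))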
 
 
 
@@ -794,19 +379,19 @@ They can be used for debugging and learning the behavior of the auto-scheduler.
     compute_nn_o_o_o_o, compute_nn_o_o_o_i = s[compute].split(compute_nn_o_o_o_i, factor=1)
     compute_ff_o_i, compute_ff_i = s[compute].split(compute_ff, factor=1)
     compute_ff_o_o_i, compute_ff_o_i = s[compute].split(compute_ff_o_i, factor=2)
-    compute_ff_o_o_o_i, compute_ff_o_o_i = s[compute].split(compute_ff_o_o_i, factor=64)
-    compute_ff_o_o_o_o, compute_ff_o_o_o_i = s[compute].split(compute_ff_o_o_o_i, factor=1)
+    compute_ff_o_o_o_i, compute_ff_o_o_i = s[compute].split(compute_ff_o_o_i, factor=8)
+    compute_ff_o_o_o_o, compute_ff_o_o_o_i = s[compute].split(compute_ff_o_o_o_i, factor=2)
     compute_yy_o_i, compute_yy_i = s[compute].split(compute_yy, factor=1)
     compute_yy_o_o_i, compute_yy_o_i = s[compute].split(compute_yy_o_i, factor=1)
     compute_yy_o_o_o_i, compute_yy_o_o_i = s[compute].split(compute_yy_o_o_i, factor=1)
     compute_yy_o_o_o_o, compute_yy_o_o_o_i = s[compute].split(compute_yy_o_o_o_i, factor=1)
     compute_xx_o_i, compute_xx_i = s[compute].split(compute_xx, factor=1)
-    compute_xx_o_o_i, compute_xx_o_i = s[compute].split(compute_xx_o_i, factor=7)
-    compute_xx_o_o_o_i, compute_xx_o_o_i = s[compute].split(compute_xx_o_o_i, factor=1)
+    compute_xx_o_o_i, compute_xx_o_i = s[compute].split(compute_xx_o_i, factor=1)
+    compute_xx_o_o_o_i, compute_xx_o_o_i = s[compute].split(compute_xx_o_o_i, factor=7)
     compute_xx_o_o_o_o, compute_xx_o_o_o_i = s[compute].split(compute_xx_o_o_o_i, factor=1)
-    compute_rc_o_i, compute_rc_i = s[compute].split(compute_rc, factor=2)
-    compute_rc_o_o, compute_rc_o_i = s[compute].split(compute_rc_o_i, factor=4)
-    compute_ry_o_i, compute_ry_i = s[compute].split(compute_ry, factor=1)
+    compute_rc_o_i, compute_rc_i = s[compute].split(compute_rc, factor=1)
+    compute_rc_o_o, compute_rc_o_i = s[compute].split(compute_rc_o_i, factor=2)
+    compute_ry_o_i, compute_ry_i = s[compute].split(compute_ry, factor=3)
     compute_ry_o_o, compute_ry_o_i = s[compute].split(compute_ry_o_i, factor=1)
     compute_rx_o_i, compute_rx_i = s[compute].split(compute_rx, factor=1)
     compute_rx_o_o, compute_rx_o_i = s[compute].split(compute_rx_o_i, factor=3)
@@ -815,13 +400,13 @@ They can be used for debugging and learning the behavior of the auto-scheduler.
     compute_i0_o_o_i, compute_i0_o_i = s[compute].split(compute_i0_o_i, factor=1)
     compute_i0_o_o_o, compute_i0_o_o_i = s[compute].split(compute_i0_o_o_i, factor=1)
     compute_i1_o_i, compute_i1_i = s[compute].split(compute_i1, factor=2)
-    compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=64)
-    compute_i1_o_o_o, compute_i1_o_o_i = s[compute].split(compute_i1_o_o_i, factor=1)
+    compute_i1_o_o_i, compute_i1_o_i = s[compute].split(compute_i1_o_i, factor=8)
+    compute_i1_o_o_o, compute_i1_o_o_i = s[compute].split(compute_i1_o_o_i, factor=2)
     compute_i2_o_i, compute_i2_i = s[compute].split(compute_i2, factor=1)
     compute_i2_o_o_i, compute_i2_o_i = s[compute].split(compute_i2_o_i, factor=1)
     compute_i2_o_o_o, compute_i2_o_o_i = s[compute].split(compute_i2_o_o_i, factor=1)
-    compute_i3_o_i, compute_i3_i = s[compute].split(compute_i3, factor=7)
-    compute_i3_o_o_i, compute_i3_o_i = s[compute].split(compute_i3_o_i, factor=1)
+    compute_i3_o_i, compute_i3_i = s[compute].split(compute_i3, factor=1)
+    compute_i3_o_o_i, compute_i3_o_i = s[compute].split(compute_i3_o_i, factor=7)
     compute_i3_o_o_o, compute_i3_o_o_i = s[compute].split(compute_i3_o_o_i, factor=1)
     s[compute].reorder(compute_i0_o_o_o, compute_i1_o_o_o, compute_i2_o_o_o, compute_i3_o_o_o, compute_i0_o_o_i, compute_i1_o_o_i, compute_i2_o_o_i, compute_i3_o_o_i, compute_i0_o_i, compute_i1_o_i, compute_i2_o_i, compute_i3_o_i, compute_i0_i, compute_i1_i, compute_i2_i, compute_i3_i)
     s[compute].compute_at(s[compute], compute_i3_o_i)
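
The long run of ``split`` calls above is the auto-scheduler's multi-level tiling: each spatial axis is split into block-, virtual-thread-, thread-, and register-level pieces, each reduction axis into outer/inner pieces, and the pieces are then reordered and bound to GPU thread axes further down the listing. A toy sketch of the underlying split-and-bind pattern on a one-dimensional op (illustrative only, not part of the tutorial):

.. code-block:: python

    import tvm
    from tvm import te

    # Tile a 1-D axis and map the tiles onto CUDA blocks and threads,
    # the primitive pattern the generated schedule applies to each axis.
    n = 1024
    A = te.placeholder((n,), name="A")
    B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
    s = te.create_schedule(B.op)
    bx, tx = s[B].split(B.op.axis[0], factor=64)  # outer to blocks, inner to threads
    s[B].bind(bx, te.thread_axis("blockIdx.x"))
    s[B].bind(tx, te.thread_axis("threadIdx.x"))
    print(tvm.lower(s, [A, B], simple_mode=True))
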
@@ -841,441 +426,64 @@ They can be used for debugging and learning the behavior of the auto-scheduler.
     kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[kernel_shared].fuse(kernel_shared_ax0, kernel_shared_ax1, kernel_shared_ax2, kernel_shared_ax3)
     kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
     s[kernel_shared].vectorize(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
-    kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=64)
+    kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[kernel_shared].split(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=56)
     s[kernel_shared].bind(kernel_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis("threadIdx.x"))
     pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused = s[pad_temp_shared].fuse(pad_temp_shared_ax0, pad_temp_shared_ax1, pad_temp_shared_ax2, pad_temp_shared_ax3)
-    pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=4)
+    pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused, factor=1)
     s[pad_temp_shared].vectorize(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_i)
-    pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=64)
+    pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_o, pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i = s[pad_temp_shared].split(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o, factor=56)
     s[pad_temp_shared].bind(pad_temp_shared_ax0_ax1_fused_ax2_fused_ax3_fused_o_i, te.thread_axis("threadIdx.x"))
-    s[compute].pragma(compute_nn_o_o_o_o, "auto_unroll_max_step", 512)
+    s[compute].pragma(compute_nn_o_o_o_o, "auto_unroll_max_step", 16)
     s[compute].pragma(compute_nn_o_o_o_o, "unroll_explicit", True)
 
     CUDA source code:
     extern "C" __global__ void default_function_kernel0(float* __restrict__ data, float* __restrict__ kernel, float* __restrict__ compute, float* __restrict__ bias) {
-      float compute1[14];
-      __shared__ float pad_temp_shared[72];
-      __shared__ float kernel_shared[3072];
+      float compute1[4];
+      __shared__ float pad_temp_shared[54];
+      __shared__ float kernel_shared[576];
       compute1[(0)] = 0.000000e+00f;
-      compute1[(1)] = 0.000000e+00f;
       compute1[(2)] = 0.000000e+00f;
+      compute1[(1)] = 0.000000e+00f;
       compute1[(3)] = 0.000000e+00f;
-      compute1[(4)] = 0.000000e+00f;
-      compute1[(5)] = 0.000000e+00f;
-      compute1[(6)] = 0.000000e+00f;
-      compute1[(7)] = 0.000000e+00f;
-      compute1[(8)] = 0.000000e+00f;
-      compute1[(9)] = 0.000000e+00f;
-      compute1[(10)] = 0.000000e+00f;
-      compute1[(11)] = 0.000000e+00f;
-      compute1[(12)] = 0.000000e+00f;
-      compute1[(13)] = 0.000000e+00f;
-      for (int rc_outer_outer = 0; rc_outer_outer < 64; ++rc_outer_outer) {
-        for (int ry_outer_outer = 0; ry_outer_outer < 3; ++ry_outer_outer) {
-          __syncthreads();
-          if (((int)threadIdx.x) < 18) {
-            pad_temp_shared[((((int)threadIdx.x) * 4))] = (((((1 <= (ry_outer_outer + (((int)blockIdx.x) % 7))) && ((ry_outer_outer + (((int)blockIdx.x) % 7)) < 8)) && (1 <= ((((int)threadIdx.x) * 4) % 9))) && (((((int)threadIdx.x) * 4) % 9) < 8)) ? data[(((((((rc_outer_outer * 392) + (((((int)threadIdx.x) * 4) / 9) * 49)) + (ry_outer_outer * 7)) + ((((int)blockIdx.x) % 7) * 7)) + ((((int)threadIdx.x) * 4) % 9)) - 8))] : 0.000000e+00f);
-          }
-          if (((int)threadIdx.x) < 18) {
-            pad_temp_shared[(((((int)threadIdx.x) * 4) + 1))] = (((((1 <= (ry_outer_outer + (((int)blockIdx.x) % 7))) && ((ry_outer_outer + (((int)blockIdx.x) % 7)) < 8)) && (1 <= (((((int)threadIdx.x) * 4) + 1) % 9))) && ((((((int)threadIdx.x) * 4) + 1) % 9) < 8)) ? data[(((((((rc_outer_outer * 392) + ((((((int)threadIdx.x) * 4) + 1) / 9) * 49)) + (ry_outer_outer * 7)) + ((((int)blockIdx.x) % 7) * 7)) + (((((int)threadIdx.x) * 4) + 1) % 9)) - 8))] : 0.000000e+00f);
-          }
-          if (((int)threadIdx.x) < 18) {
-            pad_temp_shared[(((((int)threadIdx.x) * 4) + 2))] = (((((1 <= (ry_outer_outer + (((int)blockIdx.x) % 7))) && ((ry_outer_outer + (((int)blockIdx.x) % 7)) < 8)) && (1 <= (((((int)threadIdx.x) * 4) + 2) % 9))) && ((((((int)threadIdx.x) * 4) + 2) % 9) < 8)) ? data[(((((((rc_outer_outer * 392) + ((((((int)threadIdx.x) * 4) + 2) / 9) * 49)) + (ry_outer_outer * 7)) + ((((int)blockIdx.x) % 7) * 7)) + (((((int)threadIdx.x) * 4) + 2) % 9)) - 8))] : 0.000000e+00f);
-          }
-          if (((int)threadIdx.x) < 18) {
-            pad_temp_shared[(((((int)threadIdx.x) * 4) + 3))] = (((((1 <= (ry_outer_outer + (((int)blockIdx.x) % 7))) && ((ry_outer_outer + (((int)blockIdx.x) % 7)) < 8)) && (1 <= (((((int)threadIdx.x) * 4) + 3) % 9))) && ((((((int)threadIdx.x) * 4) + 3) % 9) < 8)) ? data[(((((((rc_outer_outer * 392) + ((((((int)threadIdx.x) * 4) + 3) / 9) * 49)) + (ry_outer_outer * 7)) + ((((int)blockIdx.x) % 7) * 7)) + (((((int)threadIdx.x) * 4) + 3) % 9)) - 8))] : 0.000000e+00f);
+      for (int rc_outer_outer = 0; rc_outer_outer < 256; ++rc_outer_outer) {
+        __syncthreads();
+        if (((int)threadIdx.x) < 54) {
+          pad_temp_shared[(((int)threadIdx.x))] = (((((1 <= (((((int)threadIdx.x) % 27) / 9) + (((int)blockIdx.x) % 7))) && ((((((int)threadIdx.x) % 27) / 9) + (((int)blockIdx.x) % 7)) < 8)) && (1 <= (((int)threadIdx.x) % 9))) && ((((int)threadIdx.x) % 9) < 8)) ? data[(((((((rc_outer_outer * 98) + ((((int)threadIdx.x) / 27) * 49)) + (((((int)threadIdx.x) % 27) / 9) * 7)) + ((((int)blockIdx.x) % 7) * 7)) + (((int)threadIdx.x) % 9)) - 8))] : 0.000000e+00f);
+        }
+        kernel_shared[(((int)threadIdx.x))] = kernel[((((((((int)blockIdx.x) / 7) * 147456) + ((((int)threadIdx.x) / 18) * 4608)) + (rc_outer_outer * 18)) + (((int)threadIdx.x) % 18)))];
+        kernel_shared[((((int)threadIdx.x) + 56))] = kernel[((((((((int)blockIdx.x) / 7) * 147456) + (((((int)threadIdx.x) + 56) / 18) * 4608)) + (rc_outer_outer * 18)) + ((((int)threadIdx.x) + 2) % 18)))];
+        kernel_shared[((((int)threadIdx.x) + 112))] = kernel[((((((((int)blockIdx.x) / 7) * 147456) + (((((int)threadIdx.x) + 112) / 18) * 4608)) + (rc_outer_outer * 18)) + ((((int)threadIdx.x) + 4) % 18)))];
+        kernel_shared[((((int)threadIdx.x) + 168))] = kernel[((((((((int)blockIdx.x) / 7) * 147456) + (((((int)threadIdx.x) + 168) / 18) * 4608)) + (rc_outer_outer * 18)) + ((((int)threadIdx.x) + 6) % 18)))];
+        kernel_shared[((((int)threadIdx.x) + 224))] = kernel[((((((((int)blockIdx.x) / 7) * 147456) + (((((int)threadIdx.x) + 224) / 18) * 4608)) + (rc_outer_outer * 18)) + ((((int)threadIdx.x) + 8) % 18)))];
+        kernel_shared[((((int)threadIdx.x) + 280))] = kernel[((((((((int)blockIdx.x) / 7) * 147456) + (((((int)threadIdx.x) + 280) / 18) * 4608)) + (rc_outer_outer * 18)) + ((((int)threadIdx.x) + 10) % 18)))];
+        kernel_shared[((((int)threadIdx.x) + 336))] = kernel[((((((((int)blockIdx.x) / 7) * 147456) + (((((int)threadIdx.x) + 336) / 18) * 4608)) + (rc_outer_outer * 18)) + ((((int)threadIdx.x) + 12) % 18)))];
+        kernel_shared[((((int)threadIdx.x) + 392))] = kernel[((((((((int)blockIdx.x) / 7) * 147456) + (((((int)threadIdx.x) + 392) / 18) * 4608)) + (rc_outer_outer * 18)) + ((((int)threadIdx.x) + 14) % 18)))];
+        kernel_shared[((((int)threadIdx.x) + 448))] = kernel[((((((((int)blockIdx.x) / 7) * 147456) + (((((int)threadIdx.x) + 448) / 18) * 4608)) + (rc_outer_outer * 18)) + ((((int)threadIdx.x) + 16) % 18)))];
+        kernel_shared[((((int)threadIdx.x) + 504))] = kernel[(((((((((int)blockIdx.x) / 7) * 147456) + ((((int)threadIdx.x) / 18) * 4608)) + (rc_outer_outer * 18)) + (((int)threadIdx.x) % 18)) + 129024))];
+        if (((int)threadIdx.x) < 16) {
+          kernel_shared[((((int)threadIdx.x) + 560))] = kernel[((((((((int)blockIdx.x) / 7) * 147456) + (((((int)threadIdx.x) + 560) / 18) * 4608)) + (rc_outer_outer * 18)) + (((int)threadIdx.x) + 2)))];
+        }
+        __syncthreads();
+        for (int rc_outer_inner = 0; rc_outer_inner < 2; ++rc_outer_inner) {
+          for (int rx_outer_inner = 0; rx_outer_inner < 3; ++rx_outer_inner) {
+            compute1[(0)] = (compute1[(0)] + (pad_temp_shared[((((rc_outer_inner * 27) + rx_outer_inner) + (((int)threadIdx.x) % 7)))] * kernel_shared[(((((((int)threadIdx.x) / 7) * 36) + (rc_outer_inner * 9)) + rx_outer_inner))]));
+            compute1[(2)] = (compute1[(2)] + (pad_temp_shared[((((rc_outer_inner * 27) + rx_outer_inner) + (((int)threadIdx.x) % 7)))] * kernel_shared[((((((((int)threadIdx.x) / 7) * 36) + (rc_outer_inner * 9)) + rx_outer_inner) + 288))]));
+            compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(((((rc_outer_inner * 27) + rx_outer_inner) + (((int)threadIdx.x) % 7)) + 9))] * kernel_shared[((((((((int)threadIdx.x) / 7) * 36) + (rc_outer_inner * 9)) + rx_outer_inner) + 3))]));
+            compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(((((rc_outer_inner * 27) + rx_outer_inner) + (((int)threadIdx.x) % 7)) + 9))] * kernel_shared[((((((((int)threadIdx.x) / 7) * 36) + (rc_outer_inner * 9)) + rx_outer_inner) + 291))]));
+            compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(((((rc_outer_inner * 27) + rx_outer_inner) + (((int)threadIdx.x) % 7)) + 18))] * kernel_shared[((((((((int)threadIdx.x) / 7) * 36) + (rc_outer_inner * 9)) + rx_outer_inner) + 6))]));
+            compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(((((rc_outer_inner * 27) + rx_outer_inner) + (((int)threadIdx.x) % 7)) + 18))] * kernel_shared[((((((((int)threadIdx.x) / 7) * 36) + (rc_outer_inner * 9)) + rx_outer_inner) + 294))]));
+            compute1[(1)] = (compute1[(1)] + (pad_temp_shared[((((rc_outer_inner * 27) + rx_outer_inner) + (((int)threadIdx.x) % 7)))] * kernel_shared[((((((((int)threadIdx.x) / 7) * 36) + (rc_outer_inner * 9)) + rx_outer_inner) + 18))]));
+            compute1[(3)] = (compute1[(3)] + (pad_temp_shared[((((rc_outer_inner * 27) + rx_outer_inner) + (((int)threadIdx.x) % 7)))] * kernel_shared[((((((((int)threadIdx.x) / 7) * 36) + (rc_outer_inner * 9)) + rx_outer_inner) + 306))]));
+            compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(((((rc_outer_inner * 27) + rx_outer_inner) + (((int)threadIdx.x) % 7)) + 9))] * kernel_shared[((((((((int)threadIdx.x) / 7) * 36) + (rc_outer_inner * 9)) + rx_outer_inner) + 21))]));
+            compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(((((rc_outer_inner * 27) + rx_outer_inner) + (((int)threadIdx.x) % 7)) + 9))] * kernel_shared[((((((((int)threadIdx.x) / 7) * 36) + (rc_outer_inner * 9)) + rx_outer_inner) + 309))]));
+            compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(((((rc_outer_inner * 27) + rx_outer_inner) + (((int)threadIdx.x) % 7)) + 18))] * kernel_shared[((((((((int)threadIdx.x) / 7) * 36) + (rc_outer_inner * 9)) + rx_outer_inner) + 24))]));
+            compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(((((rc_outer_inner * 27) + rx_outer_inner) + (((int)threadIdx.x) % 7)) + 18))] * kernel_shared[((((((((int)threadIdx.x) / 7) * 36) + (rc_outer_inner * 9)) + rx_outer_inner) + 312))]));
           }
-          kernel_shared[(((int)threadIdx.x))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 64))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 64) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 128))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 128) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 192))] = kernel[(((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 36864))];
-          kernel_shared[((((int)threadIdx.x) + 256))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 256) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 320))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 320) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 384))] = kernel[(((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 73728))];
-          kernel_shared[((((int)threadIdx.x) + 448))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 448) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 512))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 512) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 576))] = kernel[(((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 110592))];
-          kernel_shared[((((int)threadIdx.x) + 640))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 640) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 704))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 704) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 768))] = kernel[(((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 147456))];
-          kernel_shared[((((int)threadIdx.x) + 832))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 832) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 896))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 896) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 960))] = kernel[(((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 184320))];
-          kernel_shared[((((int)threadIdx.x) + 1024))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1024) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 1088))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1088) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 1152))] = kernel[(((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 221184))];
-          kernel_shared[((((int)threadIdx.x) + 1216))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1216) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 1280))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1280) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 1344))] = kernel[(((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 258048))];
-          kernel_shared[((((int)threadIdx.x) + 1408))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1408) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 1472))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1472) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 1536))] = kernel[(((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 294912))];
-          kernel_shared[((((int)threadIdx.x) + 1600))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1600) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 1664))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1664) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 1728))] = kernel[(((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 331776))];
-          kernel_shared[((((int)threadIdx.x) + 1792))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1792) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 1856))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1856) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 1920))] = kernel[(((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 368640))];
-          kernel_shared[((((int)threadIdx.x) + 1984))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 1984) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 2048))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2048) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 2112))] = kernel[(((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 405504))];
-          kernel_shared[((((int)threadIdx.x) + 2176))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2176) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 2240))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2240) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 2304))] = kernel[(((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 442368))];
-          kernel_shared[((((int)threadIdx.x) + 2368))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2368) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 2432))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2432) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 2496))] = kernel[(((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 479232))];
-          kernel_shared[((((int)threadIdx.x) + 2560))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2560) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 2624))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2624) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 2688))] = kernel[(((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 516096))];
-          kernel_shared[((((int)threadIdx.x) + 2752))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2752) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 2816))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2816) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 2880))] = kernel[(((((((((((int)blockIdx.x) / 7) * 589824) + ((((int)threadIdx.x) / 24) * 4608)) + (rc_outer_outer * 72)) + (((((int)threadIdx.x) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + (((int)threadIdx.x) % 3)) + 552960))];
-          kernel_shared[((((int)threadIdx.x) + 2944))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 2944) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 16) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 1) % 3)))];
-          kernel_shared[((((int)threadIdx.x) + 3008))] = kernel[((((((((((int)blockIdx.x) / 7) * 589824) + (((((int)threadIdx.x) + 3008) / 24) * 4608)) + (rc_outer_outer * 72)) + ((((((int)threadIdx.x) + 8) % 24) / 3) * 9)) + (ry_outer_outer * 3)) + ((((int)threadIdx.x) + 2) % 3)))];
-          __syncthreads();
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(0)] * kernel_shared[((((int)threadIdx.x) * 48))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(9)] * kernel_shared[(((((int)threadIdx.x) * 48) + 3))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(1)] * kernel_shared[((((int)threadIdx.x) * 48))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(10)] * kernel_shared[(((((int)threadIdx.x) * 48) + 3))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(2)] * kernel_shared[((((int)threadIdx.x) * 48))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(11)] * kernel_shared[(((((int)threadIdx.x) * 48) + 3))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(3)] * kernel_shared[((((int)threadIdx.x) * 48))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(12)] * kernel_shared[(((((int)threadIdx.x) * 48) + 3))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(4)] * kernel_shared[((((int)threadIdx.x) * 48))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(13)] * kernel_shared[(((((int)threadIdx.x) * 48) + 3))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(5)] * kernel_shared[((((int)threadIdx.x) * 48))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(14)] * kernel_shared[(((((int)threadIdx.x) * 48) + 3))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(6)] * kernel_shared[((((int)threadIdx.x) * 48))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(15)] * kernel_shared[(((((int)threadIdx.x) * 48) + 3))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(0)] * kernel_shared[(((((int)threadIdx.x) * 48) + 24))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(9)] * kernel_shared[(((((int)threadIdx.x) * 48) + 27))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(1)] * kernel_shared[(((((int)threadIdx.x) * 48) + 24))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(10)] * kernel_shared[(((((int)threadIdx.x) * 48) + 27))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(2)] * kernel_shared[(((((int)threadIdx.x) * 48) + 24))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(11)] * kernel_shared[(((((int)threadIdx.x) * 48) + 27))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(3)] * kernel_shared[(((((int)threadIdx.x) * 48) + 24))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(12)] * kernel_shared[(((((int)threadIdx.x) * 48) + 27))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(4)] * kernel_shared[(((((int)threadIdx.x) * 48) + 24))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(13)] * kernel_shared[(((((int)threadIdx.x) * 48) + 27))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(5)] * kernel_shared[(((((int)threadIdx.x) * 48) + 24))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(14)] * kernel_shared[(((((int)threadIdx.x) * 48) + 27))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(6)] * kernel_shared[(((((int)threadIdx.x) * 48) + 24))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(15)] * kernel_shared[(((((int)threadIdx.x) * 48) + 27))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(1)] * kernel_shared[(((((int)threadIdx.x) * 48) + 1))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(10)] * kernel_shared[(((((int)threadIdx.x) * 48) + 4))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(2)] * kernel_shared[(((((int)threadIdx.x) * 48) + 1))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(11)] * kernel_shared[(((((int)threadIdx.x) * 48) + 4))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(3)] * kernel_shared[(((((int)threadIdx.x) * 48) + 1))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(12)] * kernel_shared[(((((int)threadIdx.x) * 48) + 4))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(4)] * kernel_shared[(((((int)threadIdx.x) * 48) + 1))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(13)] * kernel_shared[(((((int)threadIdx.x) * 48) + 4))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(5)] * kernel_shared[(((((int)threadIdx.x) * 48) + 1))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(14)] * kernel_shared[(((((int)threadIdx.x) * 48) + 4))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(6)] * kernel_shared[(((((int)threadIdx.x) * 48) + 1))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(15)] * kernel_shared[(((((int)threadIdx.x) * 48) + 4))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(7)] * kernel_shared[(((((int)threadIdx.x) * 48) + 1))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(16)] * kernel_shared[(((((int)threadIdx.x) * 48) + 4))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(1)] * kernel_shared[(((((int)threadIdx.x) * 48) + 25))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(10)] * kernel_shared[(((((int)threadIdx.x) * 48) + 28))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(2)] * kernel_shared[(((((int)threadIdx.x) * 48) + 25))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(11)] * kernel_shared[(((((int)threadIdx.x) * 48) + 28))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(3)] * kernel_shared[(((((int)threadIdx.x) * 48) + 25))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(12)] * kernel_shared[(((((int)threadIdx.x) * 48) + 28))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(4)] * kernel_shared[(((((int)threadIdx.x) * 48) + 25))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(13)] * kernel_shared[(((((int)threadIdx.x) * 48) + 28))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(5)] * kernel_shared[(((((int)threadIdx.x) * 48) + 25))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(14)] * kernel_shared[(((((int)threadIdx.x) * 48) + 28))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(6)] * kernel_shared[(((((int)threadIdx.x) * 48) + 25))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(15)] * kernel_shared[(((((int)threadIdx.x) * 48) + 28))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(7)] * kernel_shared[(((((int)threadIdx.x) * 48) + 25))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(16)] * kernel_shared[(((((int)threadIdx.x) * 48) + 28))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(2)] * kernel_shared[(((((int)threadIdx.x) * 48) + 2))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(11)] * kernel_shared[(((((int)threadIdx.x) * 48) + 5))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(3)] * kernel_shared[(((((int)threadIdx.x) * 48) + 2))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(12)] * kernel_shared[(((((int)threadIdx.x) * 48) + 5))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(4)] * kernel_shared[(((((int)threadIdx.x) * 48) + 2))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(13)] * kernel_shared[(((((int)threadIdx.x) * 48) + 5))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(5)] * kernel_shared[(((((int)threadIdx.x) * 48) + 2))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(14)] * kernel_shared[(((((int)threadIdx.x) * 48) + 5))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(6)] * kernel_shared[(((((int)threadIdx.x) * 48) + 2))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(15)] * kernel_shared[(((((int)threadIdx.x) * 48) + 5))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(7)] * kernel_shared[(((((int)threadIdx.x) * 48) + 2))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(16)] * kernel_shared[(((((int)threadIdx.x) * 48) + 5))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(8)] * kernel_shared[(((((int)threadIdx.x) * 48) + 2))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(17)] * kernel_shared[(((((int)threadIdx.x) * 48) + 5))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(2)] * kernel_shared[(((((int)threadIdx.x) * 48) + 26))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(11)] * kernel_shared[(((((int)threadIdx.x) * 48) + 29))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(3)] * kernel_shared[(((((int)threadIdx.x) * 48) + 26))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(12)] * kernel_shared[(((((int)threadIdx.x) * 48) + 29))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(4)] * kernel_shared[(((((int)threadIdx.x) * 48) + 26))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(13)] * kernel_shared[(((((int)threadIdx.x) * 48) + 29))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(5)] * kernel_shared[(((((int)threadIdx.x) * 48) + 26))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(14)] * kernel_shared[(((((int)threadIdx.x) * 48) + 29))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(6)] * kernel_shared[(((((int)threadIdx.x) * 48) + 26))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(15)] * kernel_shared[(((((int)threadIdx.x) * 48) + 29))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(7)] * kernel_shared[(((((int)threadIdx.x) * 48) + 26))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(16)] * kernel_shared[(((((int)threadIdx.x) * 48) + 29))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(8)] * kernel_shared[(((((int)threadIdx.x) * 48) + 26))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(17)] * kernel_shared[(((((int)threadIdx.x) * 48) + 29))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(18)] * kernel_shared[(((((int)threadIdx.x) * 48) + 6))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(27)] * kernel_shared[(((((int)threadIdx.x) * 48) + 9))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(19)] * kernel_shared[(((((int)threadIdx.x) * 48) + 6))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(28)] * kernel_shared[(((((int)threadIdx.x) * 48) + 9))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(20)] * kernel_shared[(((((int)threadIdx.x) * 48) + 6))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(29)] * kernel_shared[(((((int)threadIdx.x) * 48) + 9))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(21)] * kernel_shared[(((((int)threadIdx.x) * 48) + 6))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(30)] * kernel_shared[(((((int)threadIdx.x) * 48) + 9))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(22)] * kernel_shared[(((((int)threadIdx.x) * 48) + 6))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(31)] * kernel_shared[(((((int)threadIdx.x) * 48) + 9))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(23)] * kernel_shared[(((((int)threadIdx.x) * 48) + 6))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(32)] * kernel_shared[(((((int)threadIdx.x) * 48) + 9))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(24)] * kernel_shared[(((((int)threadIdx.x) * 48) + 6))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(33)] * kernel_shared[(((((int)threadIdx.x) * 48) + 9))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(18)] * kernel_shared[(((((int)threadIdx.x) * 48) + 30))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(27)] * kernel_shared[(((((int)threadIdx.x) * 48) + 33))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(19)] * kernel_shared[(((((int)threadIdx.x) * 48) + 30))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(28)] * kernel_shared[(((((int)threadIdx.x) * 48) + 33))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(20)] * kernel_shared[(((((int)threadIdx.x) * 48) + 30))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(29)] * kernel_shared[(((((int)threadIdx.x) * 48) + 33))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(21)] * kernel_shared[(((((int)threadIdx.x) * 48) + 30))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(30)] * kernel_shared[(((((int)threadIdx.x) * 48) + 33))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(22)] * kernel_shared[(((((int)threadIdx.x) * 48) + 30))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(31)] * kernel_shared[(((((int)threadIdx.x) * 48) + 33))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(23)] * kernel_shared[(((((int)threadIdx.x) * 48) + 30))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(32)] * kernel_shared[(((((int)threadIdx.x) * 48) + 33))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(24)] * kernel_shared[(((((int)threadIdx.x) * 48) + 30))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(33)] * kernel_shared[(((((int)threadIdx.x) * 48) + 33))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(19)] * kernel_shared[(((((int)threadIdx.x) * 48) + 7))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(28)] * kernel_shared[(((((int)threadIdx.x) * 48) + 10))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(20)] * kernel_shared[(((((int)threadIdx.x) * 48) + 7))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(29)] * kernel_shared[(((((int)threadIdx.x) * 48) + 10))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(21)] * kernel_shared[(((((int)threadIdx.x) * 48) + 7))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(30)] * kernel_shared[(((((int)threadIdx.x) * 48) + 10))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(22)] * kernel_shared[(((((int)threadIdx.x) * 48) + 7))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(31)] * kernel_shared[(((((int)threadIdx.x) * 48) + 10))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(23)] * kernel_shared[(((((int)threadIdx.x) * 48) + 7))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(32)] * kernel_shared[(((((int)threadIdx.x) * 48) + 10))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(24)] * kernel_shared[(((((int)threadIdx.x) * 48) + 7))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(33)] * kernel_shared[(((((int)threadIdx.x) * 48) + 10))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(25)] * kernel_shared[(((((int)threadIdx.x) * 48) + 7))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(34)] * kernel_shared[(((((int)threadIdx.x) * 48) + 10))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(19)] * kernel_shared[(((((int)threadIdx.x) * 48) + 31))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(28)] * kernel_shared[(((((int)threadIdx.x) * 48) + 34))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(20)] * kernel_shared[(((((int)threadIdx.x) * 48) + 31))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(29)] * kernel_shared[(((((int)threadIdx.x) * 48) + 34))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(21)] * kernel_shared[(((((int)threadIdx.x) * 48) + 31))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(30)] * kernel_shared[(((((int)threadIdx.x) * 48) + 34))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(22)] * kernel_shared[(((((int)threadIdx.x) * 48) + 31))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(31)] * kernel_shared[(((((int)threadIdx.x) * 48) + 34))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(23)] * kernel_shared[(((((int)threadIdx.x) * 48) + 31))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(32)] * kernel_shared[(((((int)threadIdx.x) * 48) + 34))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(24)] * kernel_shared[(((((int)threadIdx.x) * 48) + 31))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(33)] * kernel_shared[(((((int)threadIdx.x) * 48) + 34))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(25)] * kernel_shared[(((((int)threadIdx.x) * 48) + 31))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(34)] * kernel_shared[(((((int)threadIdx.x) * 48) + 34))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(20)] * kernel_shared[(((((int)threadIdx.x) * 48) + 8))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(29)] * kernel_shared[(((((int)threadIdx.x) * 48) + 11))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(21)] * kernel_shared[(((((int)threadIdx.x) * 48) + 8))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(30)] * kernel_shared[(((((int)threadIdx.x) * 48) + 11))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(22)] * kernel_shared[(((((int)threadIdx.x) * 48) + 8))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(31)] * kernel_shared[(((((int)threadIdx.x) * 48) + 11))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(23)] * kernel_shared[(((((int)threadIdx.x) * 48) + 8))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(32)] * kernel_shared[(((((int)threadIdx.x) * 48) + 11))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(24)] * kernel_shared[(((((int)threadIdx.x) * 48) + 8))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(33)] * kernel_shared[(((((int)threadIdx.x) * 48) + 11))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(25)] * kernel_shared[(((((int)threadIdx.x) * 48) + 8))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(34)] * kernel_shared[(((((int)threadIdx.x) * 48) + 11))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(26)] * kernel_shared[(((((int)threadIdx.x) * 48) + 8))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(35)] * kernel_shared[(((((int)threadIdx.x) * 48) + 11))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(20)] * kernel_shared[(((((int)threadIdx.x) * 48) + 32))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(29)] * kernel_shared[(((((int)threadIdx.x) * 48) + 35))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(21)] * kernel_shared[(((((int)threadIdx.x) * 48) + 32))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(30)] * kernel_shared[(((((int)threadIdx.x) * 48) + 35))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(22)] * kernel_shared[(((((int)threadIdx.x) * 48) + 32))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(31)] * kernel_shared[(((((int)threadIdx.x) * 48) + 35))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(23)] * kernel_shared[(((((int)threadIdx.x) * 48) + 32))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(32)] * kernel_shared[(((((int)threadIdx.x) * 48) + 35))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(24)] * kernel_shared[(((((int)threadIdx.x) * 48) + 32))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(33)] * kernel_shared[(((((int)threadIdx.x) * 48) + 35))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(25)] * kernel_shared[(((((int)threadIdx.x) * 48) + 32))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(34)] * kernel_shared[(((((int)threadIdx.x) * 48) + 35))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(26)] * kernel_shared[(((((int)threadIdx.x) * 48) + 32))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(35)] * kernel_shared[(((((int)threadIdx.x) * 48) + 35))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(36)] * kernel_shared[(((((int)threadIdx.x) * 48) + 12))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(45)] * kernel_shared[(((((int)threadIdx.x) * 48) + 15))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(37)] * kernel_shared[(((((int)threadIdx.x) * 48) + 12))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(46)] * kernel_shared[(((((int)threadIdx.x) * 48) + 15))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(38)] * kernel_shared[(((((int)threadIdx.x) * 48) + 12))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(47)] * kernel_shared[(((((int)threadIdx.x) * 48) + 15))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(39)] * kernel_shared[(((((int)threadIdx.x) * 48) + 12))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(48)] * kernel_shared[(((((int)threadIdx.x) * 48) + 15))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(40)] * kernel_shared[(((((int)threadIdx.x) * 48) + 12))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(49)] * kernel_shared[(((((int)threadIdx.x) * 48) + 15))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(41)] * kernel_shared[(((((int)threadIdx.x) * 48) + 12))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(50)] * kernel_shared[(((((int)threadIdx.x) * 48) + 15))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(42)] * kernel_shared[(((((int)threadIdx.x) * 48) + 12))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(51)] * kernel_shared[(((((int)threadIdx.x) * 48) + 15))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(36)] * kernel_shared[(((((int)threadIdx.x) * 48) + 36))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(45)] * kernel_shared[(((((int)threadIdx.x) * 48) + 39))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(37)] * kernel_shared[(((((int)threadIdx.x) * 48) + 36))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(46)] * kernel_shared[(((((int)threadIdx.x) * 48) + 39))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(38)] * kernel_shared[(((((int)threadIdx.x) * 48) + 36))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(47)] * kernel_shared[(((((int)threadIdx.x) * 48) + 39))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(39)] * kernel_shared[(((((int)threadIdx.x) * 48) + 36))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(48)] * kernel_shared[(((((int)threadIdx.x) * 48) + 39))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(40)] * kernel_shared[(((((int)threadIdx.x) * 48) + 36))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(49)] * kernel_shared[(((((int)threadIdx.x) * 48) + 39))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(41)] * kernel_shared[(((((int)threadIdx.x) * 48) + 36))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(50)] * kernel_shared[(((((int)threadIdx.x) * 48) + 39))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(42)] * kernel_shared[(((((int)threadIdx.x) * 48) + 36))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(51)] * kernel_shared[(((((int)threadIdx.x) * 48) + 39))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(37)] * kernel_shared[(((((int)threadIdx.x) * 48) + 13))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(46)] * kernel_shared[(((((int)threadIdx.x) * 48) + 16))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(38)] * kernel_shared[(((((int)threadIdx.x) * 48) + 13))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(47)] * kernel_shared[(((((int)threadIdx.x) * 48) + 16))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(39)] * kernel_shared[(((((int)threadIdx.x) * 48) + 13))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(48)] * kernel_shared[(((((int)threadIdx.x) * 48) + 16))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(40)] * kernel_shared[(((((int)threadIdx.x) * 48) + 13))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(49)] * kernel_shared[(((((int)threadIdx.x) * 48) + 16))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(41)] * kernel_shared[(((((int)threadIdx.x) * 48) + 13))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(50)] * kernel_shared[(((((int)threadIdx.x) * 48) + 16))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(42)] * kernel_shared[(((((int)threadIdx.x) * 48) + 13))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(51)] * kernel_shared[(((((int)threadIdx.x) * 48) + 16))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(43)] * kernel_shared[(((((int)threadIdx.x) * 48) + 13))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(52)] * kernel_shared[(((((int)threadIdx.x) * 48) + 16))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(37)] * kernel_shared[(((((int)threadIdx.x) * 48) + 37))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(46)] * kernel_shared[(((((int)threadIdx.x) * 48) + 40))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(38)] * kernel_shared[(((((int)threadIdx.x) * 48) + 37))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(47)] * kernel_shared[(((((int)threadIdx.x) * 48) + 40))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(39)] * kernel_shared[(((((int)threadIdx.x) * 48) + 37))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(48)] * kernel_shared[(((((int)threadIdx.x) * 48) + 40))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(40)] * kernel_shared[(((((int)threadIdx.x) * 48) + 37))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(49)] * kernel_shared[(((((int)threadIdx.x) * 48) + 40))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(41)] * kernel_shared[(((((int)threadIdx.x) * 48) + 37))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(50)] * kernel_shared[(((((int)threadIdx.x) * 48) + 40))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(42)] * kernel_shared[(((((int)threadIdx.x) * 48) + 37))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(51)] * kernel_shared[(((((int)threadIdx.x) * 48) + 40))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(43)] * kernel_shared[(((((int)threadIdx.x) * 48) + 37))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(52)] * kernel_shared[(((((int)threadIdx.x) * 48) + 40))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(38)] * kernel_shared[(((((int)threadIdx.x) * 48) + 14))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(47)] * kernel_shared[(((((int)threadIdx.x) * 48) + 17))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(39)] * kernel_shared[(((((int)threadIdx.x) * 48) + 14))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(48)] * kernel_shared[(((((int)threadIdx.x) * 48) + 17))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(40)] * kernel_shared[(((((int)threadIdx.x) * 48) + 14))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(49)] * kernel_shared[(((((int)threadIdx.x) * 48) + 17))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(41)] * kernel_shared[(((((int)threadIdx.x) * 48) + 14))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(50)] * kernel_shared[(((((int)threadIdx.x) * 48) + 17))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(42)] * kernel_shared[(((((int)threadIdx.x) * 48) + 14))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(51)] * kernel_shared[(((((int)threadIdx.x) * 48) + 17))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(43)] * kernel_shared[(((((int)threadIdx.x) * 48) + 14))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(52)] * kernel_shared[(((((int)threadIdx.x) * 48) + 17))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(44)] * kernel_shared[(((((int)threadIdx.x) * 48) + 14))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(53)] * kernel_shared[(((((int)threadIdx.x) * 48) + 17))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(38)] * kernel_shared[(((((int)threadIdx.x) * 48) + 38))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(47)] * kernel_shared[(((((int)threadIdx.x) * 48) + 41))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(39)] * kernel_shared[(((((int)threadIdx.x) * 48) + 38))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(48)] * kernel_shared[(((((int)threadIdx.x) * 48) + 41))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(40)] * kernel_shared[(((((int)threadIdx.x) * 48) + 38))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(49)] * kernel_shared[(((((int)threadIdx.x) * 48) + 41))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(41)] * kernel_shared[(((((int)threadIdx.x) * 48) + 38))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(50)] * kernel_shared[(((((int)threadIdx.x) * 48) + 41))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(42)] * kernel_shared[(((((int)threadIdx.x) * 48) + 38))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(51)] * kernel_shared[(((((int)threadIdx.x) * 48) + 41))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(43)] * kernel_shared[(((((int)threadIdx.x) * 48) + 38))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(52)] * kernel_shared[(((((int)threadIdx.x) * 48) + 41))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(44)] * kernel_shared[(((((int)threadIdx.x) * 48) + 38))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(53)] * kernel_shared[(((((int)threadIdx.x) * 48) + 41))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(54)] * kernel_shared[(((((int)threadIdx.x) * 48) + 18))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(63)] * kernel_shared[(((((int)threadIdx.x) * 48) + 21))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(55)] * kernel_shared[(((((int)threadIdx.x) * 48) + 18))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(64)] * kernel_shared[(((((int)threadIdx.x) * 48) + 21))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(56)] * kernel_shared[(((((int)threadIdx.x) * 48) + 18))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(65)] * kernel_shared[(((((int)threadIdx.x) * 48) + 21))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(57)] * kernel_shared[(((((int)threadIdx.x) * 48) + 18))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(66)] * kernel_shared[(((((int)threadIdx.x) * 48) + 21))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(58)] * kernel_shared[(((((int)threadIdx.x) * 48) + 18))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(67)] * kernel_shared[(((((int)threadIdx.x) * 48) + 21))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(59)] * kernel_shared[(((((int)threadIdx.x) * 48) + 18))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(68)] * kernel_shared[(((((int)threadIdx.x) * 48) + 21))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(60)] * kernel_shared[(((((int)threadIdx.x) * 48) + 18))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(69)] * kernel_shared[(((((int)threadIdx.x) * 48) + 21))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(54)] * kernel_shared[(((((int)threadIdx.x) * 48) + 42))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(63)] * kernel_shared[(((((int)threadIdx.x) * 48) + 45))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(55)] * kernel_shared[(((((int)threadIdx.x) * 48) + 42))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(64)] * kernel_shared[(((((int)threadIdx.x) * 48) + 45))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(56)] * kernel_shared[(((((int)threadIdx.x) * 48) + 42))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(65)] * kernel_shared[(((((int)threadIdx.x) * 48) + 45))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(57)] * kernel_shared[(((((int)threadIdx.x) * 48) + 42))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(66)] * kernel_shared[(((((int)threadIdx.x) * 48) + 45))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(58)] * kernel_shared[(((((int)threadIdx.x) * 48) + 42))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(67)] * kernel_shared[(((((int)threadIdx.x) * 48) + 45))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(59)] * kernel_shared[(((((int)threadIdx.x) * 48) + 42))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(68)] * kernel_shared[(((((int)threadIdx.x) * 48) + 45))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(60)] * kernel_shared[(((((int)threadIdx.x) * 48) + 42))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(69)] * kernel_shared[(((((int)threadIdx.x) * 48) + 45))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(55)] * kernel_shared[(((((int)threadIdx.x) * 48) + 19))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(64)] * kernel_shared[(((((int)threadIdx.x) * 48) + 22))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(56)] * kernel_shared[(((((int)threadIdx.x) * 48) + 19))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(65)] * kernel_shared[(((((int)threadIdx.x) * 48) + 22))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(57)] * kernel_shared[(((((int)threadIdx.x) * 48) + 19))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(66)] * kernel_shared[(((((int)threadIdx.x) * 48) + 22))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(58)] * kernel_shared[(((((int)threadIdx.x) * 48) + 19))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(67)] * kernel_shared[(((((int)threadIdx.x) * 48) + 22))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(59)] * kernel_shared[(((((int)threadIdx.x) * 48) + 19))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(68)] * kernel_shared[(((((int)threadIdx.x) * 48) + 22))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(60)] * kernel_shared[(((((int)threadIdx.x) * 48) + 19))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(69)] * kernel_shared[(((((int)threadIdx.x) * 48) + 22))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(61)] * kernel_shared[(((((int)threadIdx.x) * 48) + 19))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(70)] * kernel_shared[(((((int)threadIdx.x) * 48) + 22))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(55)] * kernel_shared[(((((int)threadIdx.x) * 48) + 43))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(64)] * kernel_shared[(((((int)threadIdx.x) * 48) + 46))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(56)] * kernel_shared[(((((int)threadIdx.x) * 48) + 43))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(65)] * kernel_shared[(((((int)threadIdx.x) * 48) + 46))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(57)] * kernel_shared[(((((int)threadIdx.x) * 48) + 43))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(66)] * kernel_shared[(((((int)threadIdx.x) * 48) + 46))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(58)] * kernel_shared[(((((int)threadIdx.x) * 48) + 43))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(67)] * kernel_shared[(((((int)threadIdx.x) * 48) + 46))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(59)] * kernel_shared[(((((int)threadIdx.x) * 48) + 43))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(68)] * kernel_shared[(((((int)threadIdx.x) * 48) + 46))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(60)] * kernel_shared[(((((int)threadIdx.x) * 48) + 43))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(69)] * kernel_shared[(((((int)threadIdx.x) * 48) + 46))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(61)] * kernel_shared[(((((int)threadIdx.x) * 48) + 43))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(70)] * kernel_shared[(((((int)threadIdx.x) * 48) + 46))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(56)] * kernel_shared[(((((int)threadIdx.x) * 48) + 20))]));
-          compute1[(0)] = (compute1[(0)] + (pad_temp_shared[(65)] * kernel_shared[(((((int)threadIdx.x) * 48) + 23))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(57)] * kernel_shared[(((((int)threadIdx.x) * 48) + 20))]));
-          compute1[(1)] = (compute1[(1)] + (pad_temp_shared[(66)] * kernel_shared[(((((int)threadIdx.x) * 48) + 23))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(58)] * kernel_shared[(((((int)threadIdx.x) * 48) + 20))]));
-          compute1[(2)] = (compute1[(2)] + (pad_temp_shared[(67)] * kernel_shared[(((((int)threadIdx.x) * 48) + 23))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(59)] * kernel_shared[(((((int)threadIdx.x) * 48) + 20))]));
-          compute1[(3)] = (compute1[(3)] + (pad_temp_shared[(68)] * kernel_shared[(((((int)threadIdx.x) * 48) + 23))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(60)] * kernel_shared[(((((int)threadIdx.x) * 48) + 20))]));
-          compute1[(4)] = (compute1[(4)] + (pad_temp_shared[(69)] * kernel_shared[(((((int)threadIdx.x) * 48) + 23))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(61)] * kernel_shared[(((((int)threadIdx.x) * 48) + 20))]));
-          compute1[(5)] = (compute1[(5)] + (pad_temp_shared[(70)] * kernel_shared[(((((int)threadIdx.x) * 48) + 23))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(62)] * kernel_shared[(((((int)threadIdx.x) * 48) + 20))]));
-          compute1[(6)] = (compute1[(6)] + (pad_temp_shared[(71)] * kernel_shared[(((((int)threadIdx.x) * 48) + 23))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(56)] * kernel_shared[(((((int)threadIdx.x) * 48) + 44))]));
-          compute1[(7)] = (compute1[(7)] + (pad_temp_shared[(65)] * kernel_shared[(((((int)threadIdx.x) * 48) + 47))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(57)] * kernel_shared[(((((int)threadIdx.x) * 48) + 44))]));
-          compute1[(8)] = (compute1[(8)] + (pad_temp_shared[(66)] * kernel_shared[(((((int)threadIdx.x) * 48) + 47))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(58)] * kernel_shared[(((((int)threadIdx.x) * 48) + 44))]));
-          compute1[(9)] = (compute1[(9)] + (pad_temp_shared[(67)] * kernel_shared[(((((int)threadIdx.x) * 48) + 47))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(59)] * kernel_shared[(((((int)threadIdx.x) * 48) + 44))]));
-          compute1[(10)] = (compute1[(10)] + (pad_temp_shared[(68)] * kernel_shared[(((((int)threadIdx.x) * 48) + 47))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(60)] * kernel_shared[(((((int)threadIdx.x) * 48) + 44))]));
-          compute1[(11)] = (compute1[(11)] + (pad_temp_shared[(69)] * kernel_shared[(((((int)threadIdx.x) * 48) + 47))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(61)] * kernel_shared[(((((int)threadIdx.x) * 48) + 44))]));
-          compute1[(12)] = (compute1[(12)] + (pad_temp_shared[(70)] * kernel_shared[(((((int)threadIdx.x) * 48) + 47))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(62)] * kernel_shared[(((((int)threadIdx.x) * 48) + 44))]));
-          compute1[(13)] = (compute1[(13)] + (pad_temp_shared[(71)] * kernel_shared[(((((int)threadIdx.x) * 48) + 47))]));
         }
       }
       for (int i1_inner = 0; i1_inner < 2; ++i1_inner) {
-        for (int i3_inner = 0; i3_inner < 7; ++i3_inner) {
-          compute[(((((((((int)blockIdx.x) / 7) * 6272) + (((int)threadIdx.x) * 98)) + (i1_inner * 49)) + ((((int)blockIdx.x) % 7) * 7)) + i3_inner))] = max((compute1[(((i1_inner * 7) + i3_inner))] + bias[(((((((int)blockIdx.x) / 7) * 128) + (((int)threadIdx.x) * 2)) + i1_inner))]), 0.000000e+00f);
-        }
+        compute[(((((((((int)blockIdx.x) / 7) * 1568) + ((((int)threadIdx.x) / 7) * 98)) + (i1_inner * 49)) + ((((int)blockIdx.x) % 7) * 7)) + (((int)threadIdx.x) % 7)))] = max((compute1[(i1_inner)] + bias[(((((((int)blockIdx.x) / 7) * 32) + ((((int)threadIdx.x) / 7) * 2)) + i1_inner))]), 0.000000e+00f);
+        compute[((((((((((int)blockIdx.x) / 7) * 1568) + ((((int)threadIdx.x) / 7) * 98)) + (i1_inner * 49)) + ((((int)blockIdx.x) % 7) * 7)) + (((int)threadIdx.x) % 7)) + 784))] = max((compute1[((i1_inner + 2))] + bias[((((((((int)blockIdx.x) / 7) * 32) + ((((int)threadIdx.x) / 7) * 2)) + i1_inner) + 16))]), 0.000000e+00f);
       }
     }
 
@@ -1292,21 +500,27 @@ In the example below we resume the search status and do 5 more trials.
 .. code-block:: default
 
 
-    cost_model = auto_scheduler.XGBModel()
-    cost_model.update_from_file(log_file)
-    search_policy = auto_scheduler.SketchPolicy(
-        task, cost_model, init_search_callbacks=[auto_scheduler.PreloadMeasuredStates(log_file)]
-    )
-    measure_ctx = auto_scheduler.LocalRPCMeasureContext(min_repeat_ms=300)
-    tune_option = auto_scheduler.TuningOptions(
-        num_measure_trials=5,
-        runner=measure_ctx.runner,
-        measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
-    )
-    task.tune(tune_option, search_policy=search_policy)
 
-    # Kill the measurement process
-    del measure_ctx
+    def resume_search(task, log_file):
+        print("Resume search:")
+        cost_model = auto_scheduler.XGBModel()
+        cost_model.update_from_file(log_file)
+        search_policy = auto_scheduler.SketchPolicy(
+            task, cost_model, init_search_callbacks=[auto_scheduler.PreloadMeasuredStates(log_file)]
+        )
+        measure_ctx = auto_scheduler.LocalRPCMeasureContext(min_repeat_ms=300)
+        tune_option = auto_scheduler.TuningOptions(
+            num_measure_trials=5,
+            runner=measure_ctx.runner,
+            measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
+        )
+        task.tune(tune_option, search_policy=search_policy)
+
+        # Kill the measurement process
+        del measure_ctx
+
+
+    resume_search(task, log_file)
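+
+    # A minimal sketch of inspecting the best record found so far; it assumes
+    # the `auto_scheduler.load_best` API of this TVM version and is left
+    # commented out because it needs the log file produced above.
+    # inp, res = auto_scheduler.load_best(log_file, task.workload_key)
+    # print("Lowest measured cost (s):", min(v.value for v in res.costs))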
 
 
 
@@ -1317,17 +531,13 @@ In the example below we resume the search status and do 5 more trials.
 
  .. code-block:: none
 
+    Resume search:
     Get devices for measurement successfully!
 
 
 
 
 
-.. rst-class:: sphx-glr-timing
-
-   **Total running time of the script:** ( 1 minutes  34.433 seconds)
-
-
 .. _sphx_glr_download_tutorials_auto_scheduler_tune_conv2d_layer_cuda.py:
 
 
diff --git a/docs/_sources/tutorials/auto_scheduler/tune_matmul_x86.rst.txt b/docs/_sources/tutorials/auto_scheduler/tune_matmul_x86.rst.txt
index 814fe56..865775b 100644
--- a/docs/_sources/tutorials/auto_scheduler/tune_matmul_x86.rst.txt
+++ b/docs/_sources/tutorials/auto_scheduler/tune_matmul_x86.rst.txt
@@ -169,7 +169,7 @@ file and apply it.
 
  .. code-block:: none
 
-    *T*T*T*T*T*T*T*T*T*T
+
 
 
 
@@ -198,8 +198,8 @@ layout transformation, parallelization, vectorization, unrolling, and operator f
     primfn(A_1: handle, B_1: handle, C_1: handle, out_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
       buffers = {out: Buffer(out_2: Pointer(float32), float32, [1024, 1024], []),
-                 C: Buffer(C_2: Pointer(float32), float32, [1024, 1024], []),
                  B: Buffer(B_2: Pointer(float32), float32, [1024, 1024], []),
+                 C: Buffer(C_2: Pointer(float32), float32, [1024, 1024], []),
                  A: Buffer(A_2: Pointer(float32), float32, [1024, 1024], [])}
       buffer_map = {A_1: A, B_1: B, C_1: C, out_1: out} {
       attr [auto_scheduler_layout_transform: Pointer(float32)] "storage_scope" = "global";
@@ -279,7 +279,7 @@ We build the binary and check its correctness and performance.
 
  .. code-block:: none
 
-    Execution time of this operator: 22.426 ms
+    Execution time of this operator: 16.683 ms
 
 
 
@@ -347,49 +347,34 @@ In the example below we resume the search status and do 5 more trials.
 
 
 
-    def resume_search(task, log_file_name):
+    def resume_search(task, log_file):
+        print("Resume search:")
         cost_model = auto_scheduler.XGBModel()
-        cost_model.update_from_file(log_file_name)
+        cost_model.update_from_file(log_file)
         search_policy = auto_scheduler.SketchPolicy(
-            task,
-            cost_model,
-            init_search_callbacks=[auto_scheduler.PreloadMeasuredStates(log_file_name)],
+            task, cost_model, init_search_callbacks=[auto_scheduler.PreloadMeasuredStates(log_file)]
         )
         tune_option = auto_scheduler.TuningOptions(
-            num_measure_trials=5, measure_callbacks=[auto_scheduler.RecordToFile(log_file_name)]
+            num_measure_trials=5, measure_callbacks=[auto_scheduler.RecordToFile(log_file)]
         )
         task.tune(tune_option, search_policy=search_policy)
 
 
-    # resume_search(task, log_file)
+    resume_search(task, log_file)
 
 
 
 
+.. rst-class:: sphx-glr-script-out
 
+ Out:
+
+ .. code-block:: none
+
+    Resume search:
+    *T
 
 
-.. note::
-  We cannot run the line above because of the conflict between
-  python's multiprocessing and tvm's thread pool.
-  After running a tvm generated binary the python's multiprocessing library
-  will hang forever. You have to make sure that you don't run any tvm
-  generated binaries before calling auot-scheduler's search.
-  To run the function above, you should comment out all code in
-  "Check correctness and evaluate performance" section.
-
-  You should be careful about this problem in your applications.
-  There are other workarounds for this problem.
-  For example, you can start a new thread/process (with the builtin python library
-  threading or multiprocessing) and run the tvm binaries in the new thread/process.
-  This provides an isolation and avoids the conflict in the main thread/process.
-  You can also use :any:`auto_scheduler.LocalRPCMeasureContext` for auto-scheduler,
-  as shown in the GPU tutorial (:ref:`auto-scheduler-conv-gpu`).
-
-
-.. rst-class:: sphx-glr-timing
-
-   **Total running time of the script:** ( 1 minutes  50.410 seconds)
 
 
 .. _sphx_glr_download_tutorials_auto_scheduler_tune_matmul_x86.py:
diff --git a/docs/_sources/tutorials/auto_scheduler/tune_network_cuda.rst.txt b/docs/_sources/tutorials/auto_scheduler/tune_network_cuda.rst.txt
index 2110410..afab490 100644
--- a/docs/_sources/tutorials/auto_scheduler/tune_network_cuda.rst.txt
+++ b/docs/_sources/tutorials/auto_scheduler/tune_network_cuda.rst.txt
@@ -614,7 +614,7 @@ so we can read the log file and load the best schedules.
 
     Compile...
     Evaluate inference time cost...
-    Mean inference time (std dev): 3.28 ms (0.01 ms)
+    Mean inference time (std dev): 2.42 ms (0.00 ms)
 
 
 
@@ -630,7 +630,7 @@ Other Tips
    in function :code:`run_tuning`. Say,
    :code:`tuner = auto_scheduler.TaskScheduler(tasks, task_weights, load_log_file=log_file)`
 4. If you have multiple target GPUs, you can use all of them for measurements to
-   parallelize the measurements. Check this :ref:`section <tutorials-autotvm-rpc-tracker>`
+   parallelize the measurements. Check this :ref:`section <tutorials-autotvm-scale-up-rpc-tracker>`
    to learn how to use the RPC Tracker and RPC Server.
    To use the RPC Tracker in auto-scheduler, replace the runner in :code:`TuningOptions`
   with :any:`auto_scheduler.RPCRunner`, as sketched below.
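 
 A minimal sketch of that replacement, assuming an RPC tracker at 0.0.0.0:9190
 and a hypothetical registered device key "1080ti":
 
 .. code-block:: default
 
     runner = auto_scheduler.RPCRunner(
         "1080ti",           # device key registered in the tracker (hypothetical)
         host="0.0.0.0",     # RPC tracker host
         port=9190,          # RPC tracker port
         repeat=3,
         min_repeat_ms=300,  # enforce a minimum running time for stable GPU timing
     )
     tune_option = auto_scheduler.TuningOptions(
         num_measure_trials=200,
         runner=runner,
         measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
     )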
diff --git a/docs/_sources/tutorials/auto_scheduler/tune_network_x86.rst.txt b/docs/_sources/tutorials/auto_scheduler/tune_network_mali.rst.txt
similarity index 57%
copy from docs/_sources/tutorials/auto_scheduler/tune_network_x86.rst.txt
copy to docs/_sources/tutorials/auto_scheduler/tune_network_mali.rst.txt
index 9965acf..8e73cb9 100644
--- a/docs/_sources/tutorials/auto_scheduler/tune_network_x86.rst.txt
+++ b/docs/_sources/tutorials/auto_scheduler/tune_network_mali.rst.txt
@@ -1,19 +1,19 @@
 .. note::
     :class: sphx-glr-download-link-note
 
-    Click :ref:`here <sphx_glr_download_tutorials_auto_scheduler_tune_network_x86.py>` to download the full example code
+    Click :ref:`here <sphx_glr_download_tutorials_auto_scheduler_tune_network_mali.py>` to download the full example code
 .. rst-class:: sphx-glr-example-title
 
-.. _sphx_glr_tutorials_auto_scheduler_tune_network_x86.py:
+.. _sphx_glr_tutorials_auto_scheduler_tune_network_mali.py:
 
 
-Auto-scheduling a Neural Network for x86 CPU
-============================================
-**Author**: `Lianmin Zheng <https://github.com/merrymercy>`_
+Auto-scheduling a Neural Network for Mali GPU
+=============================================
+**Author**: `Zhao Wu <https://github.com/FrozenGene>`_
 
 Auto-tuning for specific devices and workloads is critical for getting the
 best performance. This is a tutorial on how to tune a whole neural
-network for x86 CPU with the auto-scheduler.
+network for Mali GPU with the auto-scheduler.
 
 To auto-tune a neural network, we partition the network into small subgraphs and 
 tune them independently. Each subgraph is treated as one search task.
@@ -45,6 +45,7 @@ __name__ == "__main__":` block.
     from tvm import relay, auto_scheduler
     import tvm.relay.testing
     from tvm.contrib import graph_runtime
+    import os
 
 
 
@@ -135,12 +136,15 @@ You can use :ref:`ConvertLayout <convert-layout-usage>` pass to do the layout co
 
 
     # Define the neural network and compilation target.
-    # If the target machine supports avx512 instructions, replace the
-    # "llvm -mcpu=core-avx2" with "llvm -mcpu=skylake-avx512"
-    network = "resnet-50"
+    network = "mobilenet"
     batch_size = 1
     layout = "NHWC"
-    target = tvm.target.Target("llvm -mcpu=core-avx2")
+    # Set this to True if you use the NDK tools for cross compiling
+    use_ndk = True
+    # Path to cross compiler
+    os.environ["TVM_NDK_CC"] = "/usr/bin/aarch64-linux-gnu-g++"
+    target_host = tvm.target.Target("llvm -mtriple=aarch64-linux-gnu")
+    target = tvm.target.Target("opencl -device=mali")
     dtype = "float32"
     log_file = "%s-%s-B%d-%s.json" % (network, layout, batch_size, target.kind.name)
 
@@ -150,6 +154,27 @@ You can use :ref:`ConvertLayout <convert-layout-usage>` pass to do the layout co
 
 
 
+
+Start an RPC Tracker and Register Devices to the Tracker
+--------------------------------------------------------
+Please refer to the "Start RPC Tracker" and "Register Devices to RPC Tracker" sections
+in this :ref:`tutorial <tutorials-autotvm-start-rpc-tracker>` to start an RPC tracker
+and register devices to the tracker.
+
+
+.. code-block:: default
+
+
+    # Replace this with the device key in your tracker
+    device_key = "rk3399"
+
+
+
+
+
+
+
+
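+After the device is registered, a quick way to verify the registration is to
+query the tracker (a minimal sketch, assuming the tracker host is exported in
+the hypothetical ``TVM_TRACKER_HOST`` environment variable):
+
+.. code-block:: default
+
+    import tvm.rpc
+
+    # Query the tracker and print the registered devices; the device key
+    # "rk3399" should appear in the summary. Guarded so this is skipped
+    # when no tracker is running.
+    if os.environ.get("TVM_TRACKER_HOST"):
+        tracker = tvm.rpc.connect_tracker(os.environ["TVM_TRACKER_HOST"], 9190)
+        print(tracker.text_summary())
+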
 Extract Search Tasks
 --------------------
 Next, we extract the search tasks and their weights from a network.
@@ -167,7 +192,7 @@ The task scheduler will just optimize this objective.
     # Extract tasks from the network
     print("Extract tasks...")
     mod, params, input_shape, output_shape = get_network(network, batch_size, layout, dtype=dtype)
-    tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target)
+    tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target, target_host)
 
     for idx, task in enumerate(tasks):
         print("========== Task %d  (workload key: %s) ==========" % (idx, task.workload_key))
@@ -176,7 +201,6 @@ The task scheduler will just optimize this objective.
 
 
 
-
 .. rst-class:: sphx-glr-script-out
 
  Out:
@@ -184,6 +208,7 @@ The task scheduler will just optimize this objective.
  .. code-block:: none
 
     Extract tasks...
+
    ...2%, 0.01 MB, 43 KB/s, 0 seconds passed
    ...5%, 0.02 MB, 86 KB/s, 0 seconds passed
    ...7%, 0.02 MB, 128 KB/s, 0 seconds passed
    ...10%, 0.03 MB, 171 KB/s, 0 seconds passed
    ...12%, 0.04 MB, 214 KB/s, 0 seconds passed
    ...15%, 0.05 MB, 256 KB/s, 0 seconds passed
    ...18%, 0.05 MB, 299 KB/s, 0 seconds passed
    ...20%, 0.06 MB, 341 KB/s, 0 seconds passed
    ...23%, 0.07 MB, 383 KB/s, 0 seconds passed
    ...25%, 0.08 MB, 425 KB/s, 0 seconds passed
    ...28%, 0.09 MB, 467 KB/s, 0 seconds passed
    ...30%, 0.09 MB, 509 KB/s, 0 seconds passed
    ...33%, 0.10 MB, 550 KB/s, 0 seconds passed
    ...36%, 0.11 MB, 592 KB/s, 0 seconds passed
    ...38%, 0.12 MB, 634 KB/s, 0 seconds passed
    ...41%, 0.12 MB, 676 KB/s, 0 seconds passed
    ...43%, 0.13 MB, 717 KB/s, 0 seconds passed
    ...46%, 0.14 MB, 759 KB/s, 0 seconds passed
    ...48%, 0.15 MB, 800 KB/s, 0 seconds passed
    ...51%, 0.16 MB, 842 KB/s, 0 seconds passed
   ...54%, 0.16 MB, 883 KB/s, 0 seconds passed
    ...56%, 0.17 MB, 924 KB/s, 0 seconds passed
    ...59%, 0.18 MB, 966 KB/s, 0 seconds passed
    ...61%, 0.19 MB, 1007 KB/s, 0 seconds passed
    ...64%, 0.20 MB, 1048 KB/s, 0 seconds passed
    ...66%, 0.20 MB, 1089 KB/s, 0 seconds passed
    ...69%, 0.21 MB, 1128 KB/s, 0 seconds passed
    ...72%, 0.22 MB, 1169 KB/s, 0 seconds passed
    ...74%, 0.23 MB, 1210 KB/s, 0 seconds passed
    ...77%, 0.23 MB, 1251 KB/s, 0 seconds passed
    ...79%, 0.24 MB, 1292 KB/s, 0 seconds passed
    ...82%, 0.25 MB, 1333 KB/s, 0 seconds passed
    ...84%, 0.26 MB, 1372 KB/s, 0 seconds passed
    ...87%, 0.27 MB, 1413 KB/s, 0 seconds passed
    ...90%, 0.27 MB, 1453 KB/s, 0 seconds passed
    ...92%, 0.28 MB, 1494 KB/s, 0 seconds passed
    ...95%, 0.29 MB, 1534 KB/s, 0 seconds passed
    ...97%, 0.30 MB, 1575 KB/s, 0 seconds passed
    ...100%, 0.30 MB, 1614 KB/s, 0 seconds passed
     ========== Task 0  (workload key: ["b32ed43fb351136894c322ee49097a1a"]) ==========
     placeholder = PLACEHOLDER [1, 1000]
     T_softmax_maxelem(i0) max= placeholder[i0, k]
@@ -191,252 +216,222 @@ The task scheduler will just optimize this objective.
     T_softmax_expsum(i0) += T_softmax_exp[i0, k]
     T_softmax_norm(i0, i1) = (T_softmax_exp[i0, i1]/T_softmax_expsum[i0])
 
-    ========== Task 1  (workload key: ["6129df1a3d5f6326c8393a8d17160199"]) ==========
-    placeholder = PLACEHOLDER [1, 2048]
-    placeholder = PLACEHOLDER [1000, 2048]
-    compute(z, y, x) += (placeholder[z, ((k*16) + x)]*placeholder[y, ((k*16) + x)])
-    compute(y, x) += compute[y, x, kk]
+    ========== Task 1  (workload key: ["35552028f3076f68df3063174e40b59f"]) ==========
+    placeholder = PLACEHOLDER [1, 1024]
+    placeholder = PLACEHOLDER [1000, 1024]
+    T_dense(i, j) += (placeholder[i, k]*placeholder[j, k])
     placeholder = PLACEHOLDER [1000]
-    T_add(ax0, ax1) = (compute[ax0, ax1] + placeholder[ax1])
+    T_add(ax0, ax1) = (T_dense[ax0, ax1] + placeholder[ax1])
 
-    ========== Task 2  (workload key: ["36ee2798ed60bae3bcd1bb89a0285fe8"]) ==========
-    placeholder = PLACEHOLDER [1, 7, 7, 2048]
+    ========== Task 2  (workload key: ["cf95f3a14294b5393f63b280d0ec0ab6"]) ==========
+    placeholder = PLACEHOLDER [1, 7, 7, 1024]
     tensor(ax0, ax1, ax2, ax3) += placeholder[ax0, ((ax1*7) + rv0), ((ax2*7) + rv1), ax3]
     tensor(ax0, ax1, ax2, ax3) = (tensor[ax0, ax1, ax2, ax3]/(float32((select((bool)1, ((ax1 + 1)*7), (((ax1 + 1)*7) + 1)) - (ax1*7)))*float32((select((bool)1, ((ax2 + 1)*7), (((ax2 + 1)*7) + 1)) - (ax2*7)))))
 
-    ========== Task 3  (workload key: ["dcf6fcf5f56fa614bf9aef0c82382caf"]) ==========
-    placeholder = PLACEHOLDER [1, 7, 7, 512]
+    ========== Task 3  (workload key: ["baa3a42d3cb6ab30685b0a7894b95da9"]) ==========
+    placeholder = PLACEHOLDER [1, 7, 7, 1024]
     PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 512, 2048]
+    placeholder = PLACEHOLDER [1, 1, 1024, 1024]
     Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
-    placeholder = PLACEHOLDER [1, 7, 7, 2048]
-    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
-    placeholder = PLACEHOLDER [1, 1, 1, 2048]
-    T_multiply(ax0, ax1, ax2, ax3) = (T_add[ax0, ax1, ax2, ax3]*placeholder[ax0, 0, 0, ax3])
-    placeholder = PLACEHOLDER [1, 1, 1, 2048]
-    T_add(ax0, ax1, ax2, ax3) = (T_multiply[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    placeholder = PLACEHOLDER [1, 1, 1, 1024]
+    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
     T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
 
-    ========== Task 4  (workload key: ["7e3f0cf5a6dd80d36dab1a3dad92674a"]) ==========
-    placeholder = PLACEHOLDER [1, 7, 7, 512]
+    ========== Task 4  (workload key: ["089861a00a7dfcc7196c2b6b5c807855"]) ==========
+    placeholder = PLACEHOLDER [1, 7, 7, 1024]
     PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 8)) && (i2 >= 1)) && (i2 < 8)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
-    placeholder = PLACEHOLDER [3, 3, 512, 512]
-    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
-    placeholder = PLACEHOLDER [1, 1, 1, 512]
-    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    placeholder = PLACEHOLDER [3, 3, 1024, 1]
+    DepthwiseConv2d(b, i, j, c) += (PaddedInput[b, (i + di), (j + dj), c]*placeholder[di, dj, c, 0])
+    placeholder = PLACEHOLDER [1, 1, 1, 1024]
+    T_add(ax0, ax1, ax2, ax3) = (DepthwiseConv2d[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
     T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
 
-    ========== Task 5  (workload key: ["e0a9eb3795b531085e0ebb772e7e800c"]) ==========
-    placeholder = PLACEHOLDER [1, 7, 7, 2048]
+    ========== Task 5  (workload key: ["e7ff95f121397b87a0ca12ef428aef59"]) ==========
+    placeholder = PLACEHOLDER [1, 7, 7, 512]
     PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 2048, 512]
+    placeholder = PLACEHOLDER [1, 1, 512, 1024]
     Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
-    placeholder = PLACEHOLDER [1, 1, 1, 512]
+    placeholder = PLACEHOLDER [1, 1, 1, 1024]
     T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
     T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
 
-    ========== Task 6  (workload key: ["03614e726dc588d11887eb0953a77e53"]) ==========
-    placeholder = PLACEHOLDER [1, 7, 7, 512]
-    PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 512, 2048]
-    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
-    placeholder = PLACEHOLDER [1, 7, 7, 2048]
-    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
+    ========== Task 6  (workload key: ["c3831fcb49bfdfc679be0bbfb987da82"]) ==========
+    placeholder = PLACEHOLDER [1, 14, 14, 512]
+    PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 15)) && (i2 >= 1)) && (i2 < 15)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    placeholder = PLACEHOLDER [3, 3, 512, 1]
+    DepthwiseConv2d(b, i, j, c) += (PaddedInput[b, ((i*2) + di), ((j*2) + dj), c]*placeholder[di, dj, c, 0])
+    placeholder = PLACEHOLDER [1, 1, 1, 512]
+    T_add(ax0, ax1, ax2, ax3) = (DepthwiseConv2d[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
 
-    ========== Task 7  (workload key: ["7657f886f5e9d8b5f19a5fd2c5b90d8d"]) ==========
-    placeholder = PLACEHOLDER [1, 14, 14, 1024]
+    ========== Task 7  (workload key: ["33bb900cb60276282852b4b9c1346fe9"]) ==========
+    placeholder = PLACEHOLDER [1, 14, 14, 512]
     PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 1024, 512]
-    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
+    placeholder = PLACEHOLDER [1, 1, 512, 512]
+    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
     placeholder = PLACEHOLDER [1, 1, 1, 512]
     T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
     T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
 
-    ========== Task 8  (workload key: ["7e09b626cf077cd419190fee02091dd6"]) ==========
+    ========== Task 8  (workload key: ["f2a48dd923600da67abb78b4895f8f7b"]) ==========
+    placeholder = PLACEHOLDER [1, 14, 14, 512]
+    PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 15)) && (i2 >= 1)) && (i2 < 15)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    placeholder = PLACEHOLDER [3, 3, 512, 1]
+    DepthwiseConv2d(b, i, j, c) += (PaddedInput[b, (i + di), (j + dj), c]*placeholder[di, dj, c, 0])
+    placeholder = PLACEHOLDER [1, 1, 1, 512]
+    T_add(ax0, ax1, ax2, ax3) = (DepthwiseConv2d[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
+
+    ========== Task 9  (workload key: ["f6906ccbe2258e70648ea15f3c037ca0"]) ==========
     placeholder = PLACEHOLDER [1, 14, 14, 256]
     PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 256, 1024]
+    placeholder = PLACEHOLDER [1, 1, 256, 512]
     Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
-    placeholder = PLACEHOLDER [1, 14, 14, 1024]
-    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
-    placeholder = PLACEHOLDER [1, 1, 1, 1024]
-    T_add(ax0, ax1, ax2, ax3) = (T_add[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    placeholder = PLACEHOLDER [1, 1, 1, 512]
+    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
     T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
 
-    ========== Task 9  (workload key: ["95bf49cc8cf7a351e974b2359702aac0"]) ==========
-    placeholder = PLACEHOLDER [1, 14, 14, 256]
-    PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 15)) && (i2 >= 1)) && (i2 < 15)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
-    placeholder = PLACEHOLDER [3, 3, 256, 256]
-    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
+    ========== Task 10  (workload key: ["381331b022e1b4ddc705aa66c2cb90c8"]) ==========
+    placeholder = PLACEHOLDER [1, 28, 28, 256]
+    PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 29)) && (i2 >= 1)) && (i2 < 29)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    placeholder = PLACEHOLDER [3, 3, 256, 1]
+    DepthwiseConv2d(b, i, j, c) += (PaddedInput[b, ((i*2) + di), ((j*2) + dj), c]*placeholder[di, dj, c, 0])
     placeholder = PLACEHOLDER [1, 1, 1, 256]
-    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    T_add(ax0, ax1, ax2, ax3) = (DepthwiseConv2d[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
     T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
 
-    ========== Task 10  (workload key: ["e043f834cc7f19597227e09dc7f59503"]) ==========
-    placeholder = PLACEHOLDER [1, 14, 14, 1024]
+    ========== Task 11  (workload key: ["413e7c2a210f0fbf2fadeb2686aba8ee"]) ==========
+    placeholder = PLACEHOLDER [1, 28, 28, 256]
     PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 1024, 256]
+    placeholder = PLACEHOLDER [1, 1, 256, 256]
     Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
     placeholder = PLACEHOLDER [1, 1, 1, 256]
     T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
     T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
 
-    ========== Task 11  (workload key: ["cd7c4a374fb2bbc0d075c8cae638ad14"]) ==========
-    placeholder = PLACEHOLDER [1, 14, 14, 256]
-    PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 256, 1024]
-    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
-    placeholder = PLACEHOLDER [1, 14, 14, 1024]
-    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
-
-    ========== Task 12  (workload key: ["1dce2c5e4269b8a12dfc50cd4dd23ff1"]) ==========
-    placeholder = PLACEHOLDER [1, 28, 28, 512]
-    PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 512, 256]
-    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
+    ========== Task 12  (workload key: ["2b4a9c43c1bcbb5c68742378a4e72f74"]) ==========
+    placeholder = PLACEHOLDER [1, 28, 28, 256]
+    PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 29)) && (i2 >= 1)) && (i2 < 29)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    placeholder = PLACEHOLDER [3, 3, 256, 1]
+    DepthwiseConv2d(b, i, j, c) += (PaddedInput[b, (i + di), (j + dj), c]*placeholder[di, dj, c, 0])
     placeholder = PLACEHOLDER [1, 1, 1, 256]
-    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    T_add(ax0, ax1, ax2, ax3) = (DepthwiseConv2d[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
     T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
 
-    ========== Task 13  (workload key: ["d3b36ce001dc24d693facfbdae1979b4"]) ==========
+    ========== Task 13  (workload key: ["017340b550a0bda8bd8ec1933bc32756"]) ==========
     placeholder = PLACEHOLDER [1, 28, 28, 128]
     PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 128, 512]
+    placeholder = PLACEHOLDER [1, 1, 128, 256]
     Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
-    placeholder = PLACEHOLDER [1, 28, 28, 512]
-    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
-    placeholder = PLACEHOLDER [1, 1, 1, 512]
-    T_add(ax0, ax1, ax2, ax3) = (T_add[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    placeholder = PLACEHOLDER [1, 1, 1, 256]
+    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
     T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
 
-    ========== Task 14  (workload key: ["0fb1dfcdb5b755e2dab290ed0129dcf2"]) ==========
-    placeholder = PLACEHOLDER [1, 28, 28, 128]
-    PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 29)) && (i2 >= 1)) && (i2 < 29)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
-    placeholder = PLACEHOLDER [3, 3, 128, 128]
-    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
+    ========== Task 14  (workload key: ["539b0d6ae7b6e1610e29ae571b8b8c25"]) ==========
+    placeholder = PLACEHOLDER [1, 56, 56, 128]
+    PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 57)) && (i2 >= 1)) && (i2 < 57)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    placeholder = PLACEHOLDER [3, 3, 128, 1]
+    DepthwiseConv2d(b, i, j, c) += (PaddedInput[b, ((i*2) + di), ((j*2) + dj), c]*placeholder[di, dj, c, 0])
     placeholder = PLACEHOLDER [1, 1, 1, 128]
-    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    T_add(ax0, ax1, ax2, ax3) = (DepthwiseConv2d[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
     T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
 
-    ========== Task 15  (workload key: ["45acfc473c772458684f36a34549d8aa"]) ==========
-    placeholder = PLACEHOLDER [1, 28, 28, 512]
+    ========== Task 15  (workload key: ["80b2e789f7bce126bde2176640ca76a4"]) ==========
+    placeholder = PLACEHOLDER [1, 56, 56, 128]
     PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 512, 128]
+    placeholder = PLACEHOLDER [1, 1, 128, 128]
     Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
     placeholder = PLACEHOLDER [1, 1, 1, 128]
     T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
     T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
 
-    ========== Task 16  (workload key: ["5e3ceb6e23ae8c351d5a1770d5fc6c7c"]) ==========
-    placeholder = PLACEHOLDER [1, 28, 28, 128]
-    PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 128, 512]
-    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
-    placeholder = PLACEHOLDER [1, 28, 28, 512]
-    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
-
-    ========== Task 17  (workload key: ["a085717fb3dcb046e5c4c2c04d3dc541"]) ==========
-    placeholder = PLACEHOLDER [1, 56, 56, 256]
-    PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 256, 128]
-    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
+    ========== Task 16  (workload key: ["3dba2989a90e19861af284d74e40f5cd"]) ==========
+    placeholder = PLACEHOLDER [1, 56, 56, 128]
+    PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 57)) && (i2 >= 1)) && (i2 < 57)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    placeholder = PLACEHOLDER [3, 3, 128, 1]
+    DepthwiseConv2d(b, i, j, c) += (PaddedInput[b, (i + di), (j + dj), c]*placeholder[di, dj, c, 0])
     placeholder = PLACEHOLDER [1, 1, 1, 128]
-    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    T_add(ax0, ax1, ax2, ax3) = (DepthwiseConv2d[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
     T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
 
-    ========== Task 18  (workload key: ["691feef049c8693bbe91bd5e7c9cdf34"]) ==========
+    ========== Task 17  (workload key: ["3b9a17584b6afa25229ef34c6f417660"]) ==========
     placeholder = PLACEHOLDER [1, 56, 56, 64]
     PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 64, 256]
+    placeholder = PLACEHOLDER [1, 1, 64, 128]
     Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
-    placeholder = PLACEHOLDER [1, 56, 56, 256]
-    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
-    placeholder = PLACEHOLDER [1, 1, 1, 256]
-    T_add(ax0, ax1, ax2, ax3) = (T_add[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
-    T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
-
-    ========== Task 19  (workload key: ["a9e632e5167afb60fbe29e7aeef1d152"]) ==========
-    placeholder = PLACEHOLDER [1, 56, 56, 64]
-    PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 57)) && (i2 >= 1)) && (i2 < 57)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
-    placeholder = PLACEHOLDER [3, 3, 64, 64]
-    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
-    placeholder = PLACEHOLDER [1, 1, 1, 64]
+    placeholder = PLACEHOLDER [1, 1, 1, 128]
     T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
     T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
 
-    ========== Task 20  (workload key: ["b51e06c1131d4cded40d1b215f722a4e"]) ==========
-    placeholder = PLACEHOLDER [1, 56, 56, 256]
-    PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 256, 64]
-    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
+    ========== Task 18  (workload key: ["a4553ca6a00b6c8adb555bcde25d95c4"]) ==========
+    placeholder = PLACEHOLDER [1, 112, 112, 64]
+    PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 113)) && (i2 >= 1)) && (i2 < 113)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    placeholder = PLACEHOLDER [3, 3, 64, 1]
+    DepthwiseConv2d(b, i, j, c) += (PaddedInput[b, ((i*2) + di), ((j*2) + dj), c]*placeholder[di, dj, c, 0])
     placeholder = PLACEHOLDER [1, 1, 1, 64]
-    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    T_add(ax0, ax1, ax2, ax3) = (DepthwiseConv2d[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
     T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
 
-    ========== Task 21  (workload key: ["8fcee68a4342c38248a827f1c6c69177"]) ==========
-    placeholder = PLACEHOLDER [1, 56, 56, 64]
-    PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 64, 256]
-    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
-    placeholder = PLACEHOLDER [1, 56, 56, 256]
-    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
-
-    ========== Task 22  (workload key: ["8dd7d81db440763f622f03fdc99e6d46"]) ==========
-    placeholder = PLACEHOLDER [1, 56, 56, 64]
+    ========== Task 19  (workload key: ["63672689bf8f678a0abe0854828cbd3b"]) ==========
+    placeholder = PLACEHOLDER [1, 112, 112, 32]
     PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 64, 64]
+    placeholder = PLACEHOLDER [1, 1, 32, 64]
     Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
     placeholder = PLACEHOLDER [1, 1, 1, 64]
     T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
     T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
 
-    ========== Task 23  (workload key: ["ba2026d923536b75e9b4faed89287d5f"]) ==========
-    placeholder = PLACEHOLDER [1, 112, 112, 64]
-    pad_temp(ax0, ax1, ax2, ax3) = tir.if_then_else(((((ax1 >= 1) && (ax1 < 113)) && (ax2 >= 1)) && (ax2 < 113)), placeholder[ax0, (ax1 - 1), (ax2 - 1), ax3], -3.40282e+38f)
-    tensor(ax0, ax1, ax2, ax3) max= pad_temp[ax0, ((ax1*2) + dh), ((ax2*2) + dw), ax3]
-    placeholder = PLACEHOLDER [1, 1, 1, 64]
-    T_add(ax0, ax1, ax2, ax3) = (tensor[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    ========== Task 20  (workload key: ["1ceacb63c63eaa3da881bff2858acdbf"]) ==========
+    placeholder = PLACEHOLDER [1, 112, 112, 32]
+    PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 113)) && (i2 >= 1)) && (i2 < 113)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    placeholder = PLACEHOLDER [3, 3, 32, 1]
+    DepthwiseConv2d(b, i, j, c) += (PaddedInput[b, (i + di), (j + dj), c]*placeholder[di, dj, c, 0])
+    placeholder = PLACEHOLDER [1, 1, 1, 32]
+    T_add(ax0, ax1, ax2, ax3) = (DepthwiseConv2d[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
     T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
 
-    ========== Task 24  (workload key: ["a0eb8d6048282a4a0986cc2ccf14eaa2"]) ==========
+    ========== Task 21  (workload key: ["2c2147047fd6dafd3d66d75165843f67"]) ==========
     placeholder = PLACEHOLDER [1, 224, 224, 3]
-    PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 3) && (i1 < 227)) && (i2 >= 3)) && (i2 < 227)), placeholder[i0, (i1 - 3), (i2 - 3), i3], 0f)
-    placeholder = PLACEHOLDER [7, 7, 3, 64]
+    PaddedInput(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 225)) && (i2 >= 1)) && (i2 < 225)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
+    placeholder = PLACEHOLDER [3, 3, 3, 32]
     Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
-    placeholder = PLACEHOLDER [1, 1, 1, 64]
-    T_add(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, 0, 0, ax3])
+    placeholder = PLACEHOLDER [1, 112, 1, 1]
+    T_multiply(ax0, ax1, ax2, ax3) = (Conv2dOutput[ax0, ax1, ax2, ax3]*placeholder[ax0, ax1, 0, 0])
+    placeholder = PLACEHOLDER [1, 112, 1, 1]
+    T_add(ax0, ax1, ax2, ax3) = (T_multiply[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, 0, 0])
     T_relu(ax0, ax1, ax2, ax3) = max(T_add[ax0, ax1, ax2, ax3], 0f)
 
-    ========== Task 25  (workload key: ["45b4de07687dee43ee1cbde9f516b2bf"]) ==========
-    placeholder = PLACEHOLDER [1, 56, 56, 64]
-    PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 64, 256]
-    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, ff])
 
-    ========== Task 26  (workload key: ["b2010aa63c95dedf1f58f3fe8bc78634"]) ==========
-    placeholder = PLACEHOLDER [1, 56, 56, 256]
-    PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 256, 512]
-    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
 
-    ========== Task 27  (workload key: ["4d7e646d99bfa3cea8245bd7100369cb"]) ==========
-    placeholder = PLACEHOLDER [1, 28, 28, 512]
-    PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 512, 1024]
-    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
 
-    ========== Task 28  (workload key: ["537c8642716948c33a6eaaabc86b159d"]) ==========
-    placeholder = PLACEHOLDER [1, 14, 14, 1024]
-    PaddedInput(i0, i1, i2, i3) = placeholder[i0, i1, i2, i3]
-    placeholder = PLACEHOLDER [1, 1, 1024, 2048]
-    Conv2dOutput(nn, yy, xx, ff) += (PaddedInput[nn, ((yy*2) + ry), ((xx*2) + rx), rc]*placeholder[ry, rx, rc, ff])
+.. note:: How to get the hardware parameters from a remote device
+
+  .. code-block:: python
 
+    from tvm.auto_scheduler.utils import request_remote
+    remote = request_remote(device_key, "0.0.0.0", 9190)
+    ctx = remote.cl()
+    max_shared_memory_per_block = ctx.max_shared_memory_per_block
+    # There is no explicit local memory limitation,
+    # so we can use INT32_MAX to disable the check on local_memory.
+    max_local_memory_per_block = 2147483647 # INT32_MAX
+    max_threads_per_block = ctx.max_threads_per_block
+    max_vthread_extent = int(ctx.warp_size / 4) if int(ctx.warp_size / 4) > 1 else ctx.warp_size
+    warp_size = ctx.warp_size
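+    # HardwareParams' leading positional arguments are num_cores,
+    # vector_unit_bytes and cache_line_bytes; -1 leaves num_cores unspecified.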
+    hardware_params = auto_scheduler.HardwareParams(-1, 16, 64,
+                                                    max_shared_memory_per_block, max_local_memory_per_block,
+                                                    max_threads_per_block, max_vthread_extent, warp_size)
 
+  Now you can pass it to the search task and tune:
 
+  .. code-block:: python
 
-Begin Tuning
-------------
-Now, we set some options for tuning and launch the search tasks
+    tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target, target_host, hardware_params)
+
+
+Tune and Evaluate
+-----------------
+Now, we set some options for tuning, launch the search tasks, and evaluate the end-to-end performance
 
 * :code:`num_measure_trials` is the number of measurement trials we can use during the tuning.
   You can set it to a small number (e.g., 200) for a fast demonstrative run.
@@ -456,23 +451,60 @@ Now, we set some options for tuning and launch the search tasks
 
 
 
-    def run_tuning():
+    def tune_and_evaluate():
         print("Begin tuning...")
         tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
         tune_option = auto_scheduler.TuningOptions(
             num_measure_trials=200,  # change this to 20000 to achieve the best performance
-            runner=auto_scheduler.LocalRunner(repeat=10, enable_cpu_cache_flush=True),
+            builder=auto_scheduler.LocalBuilder(build_func="ndk" if use_ndk else "default"),
+            runner=auto_scheduler.RPCRunner(
+                device_key, host="0.0.0.0", port=9190, repeat=3, timeout=50
+            ),
             measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
         )
 
         tuner.tune(tune_option)
 
+        # Compile the whole network
+        print("Compile...")
+        with auto_scheduler.ApplyHistoryBest(log_file):
+            with tvm.transform.PassContext(
+                opt_level=3, config={"relay.backend.use_auto_scheduler": True}
+            ):
+                lib = relay.build(mod, target=target, target_host=target_host, params=params)
+
+        # Create graph runtime
+        print("=============== Request Remote ===============")
+        from tvm.auto_scheduler.utils import request_remote
+
+        remote = request_remote(device_key, "0.0.0.0", 9190)
+        ctx = remote.cl()
+        from tvm.contrib import utils, ndk
+
+        temp = utils.tempdir()
+        filename = "deploy_lib.so"
+        path_lib = temp.relpath(filename)
+        lib.export_library(path_lib, ndk.create_shared)
+        remote.upload(path_lib)
+        loaded_lib = remote.load_module(filename)
+        module = graph_runtime.GraphModule(loaded_lib["default"](ctx))
+        data = (np.random.uniform(size=input_shape)).astype(dtype)
+        data_tvm = tvm.nd.array(data)
+        module.set_input("data", data_tvm)
+
+        # Evaluate
+        print("Evaluate inference time cost...")
+        ftimer = module.module.time_evaluator("run", ctx, repeat=3, min_repeat_ms=500)
+        prof_res = np.array(ftimer().results) * 1e3  # convert to millisecond
+        print(
+            "Mean inference time (std dev): %.2f ms (%.2f ms)" % (np.mean(prof_res), np.std(prof_res))
+        )
 
-    # We do not run the tuning in our webpage server since it takes too long.
-    # Uncomment the following line to run it by yourself.
 
-    # run_tuning()
+    # We do not run the tuning on our webpage server since the server doesn't have a Mali GPU.
+    # Uncomment the following line to run it by yourself.
 
+    # tune_and_evaluate()
 
 
 
@@ -546,51 +578,6 @@ Now, we set some options for tuning and launch the search tasks
   you should be able to do the compilation (the section below).
 
 
-Compile and Evaluate
---------------------
-After auto-tuning, we can compile the network with the best schedules we found.
-All measurement records are dumped into the log file during auto-tuning,
-so we can read the log file and load the best schedules.
-
-
-.. code-block:: default
-
-
-    # Compile with the history best
-    print("Compile...")
-    with auto_scheduler.ApplyHistoryBest(log_file):
-        with tvm.transform.PassContext(opt_level=3, config={"relay.backend.use_auto_scheduler": True}):
-            lib = relay.build(mod, target=target, params=params)
-
-    # Create graph runtime
-    ctx = tvm.context(str(target), 0)
-    module = graph_runtime.GraphModule(lib["default"](ctx))
-    data_tvm = tvm.nd.array((np.random.uniform(size=input_shape)).astype(dtype))
-    module.set_input("data", data_tvm)
-
-    # Evaluate
-    print("Evaluate inference time cost...")
-    ftimer = module.module.time_evaluator("run", ctx, repeat=3, min_repeat_ms=500)
-    prof_res = np.array(ftimer().results) * 1e3  # convert to millisecond
-    print("Mean inference time (std dev): %.2f ms (%.2f ms)" % (np.mean(prof_res), np.std(prof_res)))
-
-
-
-
-
-
-.. rst-class:: sphx-glr-script-out
-
- Out:
-
- .. code-block:: none
-
-    Compile...
-    Evaluate inference time cost...
-    Mean inference time (std dev): 30.72 ms (0.09 ms)
-
-
-
 Other Tips
 ----------
 1. During the tuning, the auto-scheduler needs to compile many programs and
@@ -602,14 +589,14 @@ Other Tips
    add a new argument :code:`load_log_file` when creating the task scheduler
   in function :code:`tune_and_evaluate`. Say,
    :code:`tuner = auto_scheduler.TaskScheduler(tasks, task_weights, load_log_file=log_file)`
-4. If you have multiple target CPUs, you can use all of them for measurements to
-   parallelize the measurements. Check this :ref:`section <tutorials-autotvm-rpc-tracker>`
+4. If you have multiple target GPUs, you can use all of them for measurements to
+   parallelize the measurements. Check this :ref:`section <tutorials-autotvm-scale-up-rpc-tracker>`
    to learn how to use the RPC Tracker and RPC Server.
    To use the RPC Tracker in auto-scheduler, replace the runner in :code:`TuningOptions`
    with :any:`auto_scheduler.RPCRunner`.
 
 
-.. _sphx_glr_download_tutorials_auto_scheduler_tune_network_x86.py:
+.. _sphx_glr_download_tutorials_auto_scheduler_tune_network_mali.py:
 
 
 .. only :: html
@@ -621,13 +608,13 @@ Other Tips
 
   .. container:: sphx-glr-download
 
-     :download:`Download Python source code: tune_network_x86.py <tune_network_x86.py>`
+     :download:`Download Python source code: tune_network_mali.py <tune_network_mali.py>`
 
 
 
   .. container:: sphx-glr-download
 
-     :download:`Download Jupyter notebook: tune_network_x86.ipynb <tune_network_x86.ipynb>`
+     :download:`Download Jupyter notebook: tune_network_mali.ipynb <tune_network_mali.ipynb>`
 
 
 .. only:: html
diff --git a/docs/_sources/tutorials/auto_scheduler/tune_network_x86.rst.txt b/docs/_sources/tutorials/auto_scheduler/tune_network_x86.rst.txt
index 9965acf..38ef937 100644
--- a/docs/_sources/tutorials/auto_scheduler/tune_network_x86.rst.txt
+++ b/docs/_sources/tutorials/auto_scheduler/tune_network_x86.rst.txt
@@ -191,13 +191,12 @@ The task scheduler will just optimize this objective.
     T_softmax_expsum(i0) += T_softmax_exp[i0, k]
     T_softmax_norm(i0, i1) = (T_softmax_exp[i0, i1]/T_softmax_expsum[i0])
 
-    ========== Task 1  (workload key: ["6129df1a3d5f6326c8393a8d17160199"]) ==========
+    ========== Task 1  (workload key: ["eca51cb8a8335304c6e670bdb115a9b7"]) ==========
     placeholder = PLACEHOLDER [1, 2048]
     placeholder = PLACEHOLDER [1000, 2048]
-    compute(z, y, x) += (placeholder[z, ((k*16) + x)]*placeholder[y, ((k*16) + x)])
-    compute(y, x) += compute[y, x, kk]
+    T_dense(i, j) += (placeholder[i, k]*placeholder[j, k])
     placeholder = PLACEHOLDER [1000]
-    T_add(ax0, ax1) = (compute[ax0, ax1] + placeholder[ax1])
+    T_add(ax0, ax1) = (T_dense[ax0, ax1] + placeholder[ax1])
 
     ========== Task 2  (workload key: ["36ee2798ed60bae3bcd1bb89a0285fe8"]) ==========
     placeholder = PLACEHOLDER [1, 7, 7, 2048]
@@ -587,7 +586,7 @@ so we can read the log file and load the best schedules.
 
     Compile...
     Evaluate inference time cost...
-    Mean inference time (std dev): 30.72 ms (0.09 ms)
+    Mean inference time (std dev): 32.65 ms (0.16 ms)
 
 
 
@@ -603,7 +602,7 @@ Other Tips
    in function :code:`run_tuning`. Say,
    :code:`tuner = auto_scheduler.TaskScheduler(tasks, task_weights, load_log_file=log_file)`
 4. If you have multiple target CPUs, you can use all of them for measurements to
-   parallelize the measurements. Check this :ref:`section <tutorials-autotvm-rpc-tracker>`
+   parallelize the measurements. Check this :ref:`section <tutorials-autotvm-scale-up-rpc-tracker>`
    to learn how to use the RPC Tracker and RPC Server.
    To use the RPC Tracker in auto-scheduler, replace the runner in :code:`TuningOptions`
   with :any:`auto_scheduler.RPCRunner`, as sketched below.
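 
 A minimal sketch of that replacement, assuming a tracker listening on
 0.0.0.0:9190 and CPUs registered under a hypothetical key `my-cpu`:
 
 .. code-block:: python
 
     tune_option = auto_scheduler.TuningOptions(
         num_measure_trials=20000,
         # swap the LocalRunner for an RPCRunner that measures on remote CPUs
         runner=auto_scheduler.RPCRunner(
             "my-cpu", host="0.0.0.0", port=9190,
             repeat=10, enable_cpu_cache_flush=True,
         ),
         measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
     )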
diff --git a/docs/_sources/tutorials/autotvm/sg_execution_times.rst.txt b/docs/_sources/tutorials/autotvm/sg_execution_times.rst.txt
index e02f798..cf0a87b 100644
--- a/docs/_sources/tutorials/autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorials/autotvm/sg_execution_times.rst.txt
@@ -5,11 +5,11 @@
 
 Computation times
 =================
-**00:59.256** total execution time for **tutorials_autotvm** files:
-
-- **00:30.101**: :ref:`sphx_glr_tutorials_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)
-- **00:28.464**: :ref:`sphx_glr_tutorials_autotvm_tune_simple_template.py` (``tune_simple_template.py``)
-- **00:00.203**: :ref:`sphx_glr_tutorials_autotvm_tune_relay_cuda.py` (``tune_relay_cuda.py``)
-- **00:00.173**: :ref:`sphx_glr_tutorials_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``)
-- **00:00.158**: :ref:`sphx_glr_tutorials_autotvm_tune_relay_arm.py` (``tune_relay_arm.py``)
-- **00:00.157**: :ref:`sphx_glr_tutorials_autotvm_tune_relay_mobile_gpu.py` (``tune_relay_mobile_gpu.py``)
+**00:53.623** total execution time for **tutorials_autotvm** files:
+
+- **00:27.863**: :ref:`sphx_glr_tutorials_autotvm_tune_simple_template.py` (``tune_simple_template.py``)
+- **00:25.136**: :ref:`sphx_glr_tutorials_autotvm_tune_conv2d_cuda.py` (``tune_conv2d_cuda.py``)
+- **00:00.165**: :ref:`sphx_glr_tutorials_autotvm_tune_relay_x86.py` (``tune_relay_x86.py``)
+- **00:00.157**: :ref:`sphx_glr_tutorials_autotvm_tune_relay_cuda.py` (``tune_relay_cuda.py``)
+- **00:00.154**: :ref:`sphx_glr_tutorials_autotvm_tune_relay_mobile_gpu.py` (``tune_relay_mobile_gpu.py``)
+- **00:00.148**: :ref:`sphx_glr_tutorials_autotvm_tune_relay_arm.py` (``tune_relay_arm.py``)
diff --git a/docs/_sources/tutorials/autotvm/tune_conv2d_cuda.rst.txt b/docs/_sources/tutorials/autotvm/tune_conv2d_cuda.rst.txt
index a2ea4e0..84f5c5c 100644
--- a/docs/_sources/tutorials/autotvm/tune_conv2d_cuda.rst.txt
+++ b/docs/_sources/tutorials/autotvm/tune_conv2d_cuda.rst.txt
@@ -26,7 +26,7 @@ To use autotvm package in tvm, we need to install some extra dependencies.
 
 .. code-block:: bash
 
-  pip3 install --user psutil xgboost tornado
+  pip3 install --user psutil xgboost tornado cloudpickle
 
 To make TVM run faster in tuning, it is recommended to use cython
 as FFI of tvm. In the root directory of tvm, execute
@@ -241,26 +241,26 @@ for this template
        7 unroll_explicit: OtherOption([0, 1]) len=2
     )
     Get devices for measurement successfully!
-    No: 1   GFLOPS: 202.22/202.22   result: MeasureResult(costs=(0.0011447773775510204,), error_no=0, all_cost=1.6461100578308105, timestamp=1607225801.8082893)    [('tile_f', [-1, 2, 64, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 2, 2]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4881186
-    No: 2   GFLOPS: 0.00/202.22     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fc35dba3f31]\n  [bt] (3) /workspace/build/libtvm.so(+0x6b1c67) [0x7fc35cf6ac67]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7fc35cf6768d]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 3   GFLOPS: 180.05/202.22   result: MeasureResult(costs=(0.0012857949919354839,), error_no=0, all_cost=1.6061229705810547, timestamp=1607225803.2143183)    [('tile_f', [-1, 4, 32, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 1, 16]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3605182
-    No: 4   GFLOPS: 0.00/202.22     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fc35dba3f31]\n  [bt] (3) /workspace/build/libtvm.so(+0x6b1c67) [0x7fc35cf6ac67]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7fc35cf6768d]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 5   GFLOPS: 0.00/202.22     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fc35dba3f31]\n  [bt] (3) /workspace/build/libtvm.so(+0x6b1c67) [0x7fc35cf6ac67]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7fc35cf6768d]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 6   GFLOPS: 0.00/202.22     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fc35dba3f31]\n  [bt] (3) /workspace/build/libtvm.so(+0x6b1c67) [0x7fc35cf6ac67]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7fc35cf6768d]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 7   GFLOPS: 0.00/202.22     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fc35dba3f31]\n  [bt] (3) /workspace/build/libtvm.so(+0x6b1c67) [0x7fc35cf6ac67]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7fc35cf6768d]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 8   GFLOPS: 1.76/202.22     result: MeasureResult(costs=(0.13166369325,), error_no=0, all_cost=3.334230899810791, timestamp=1607225806.5215275)     [('tile_f', [-1, 2, 4, 64]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 2, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2716108
-    No: 9   GFLOPS: 15.17/202.22    result: MeasureResult(costs=(0.015256713555555555,), error_no=0, all_cost=1.7692039012908936, timestamp=1607225809.5385563)     [('tile_f', [-1, 1, 4, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 2, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1263092
-    No: 10  GFLOPS: 228.21/228.21   result: MeasureResult(costs=(0.001014435909090909,), error_no=0, all_cost=1.5810611248016357, timestamp=1607225810.5847104)     [('tile_f', [-1, 1, 32, 4]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 16, 1]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,8921130
-    No: 11  GFLOPS: 0.00/228.21     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fc35dba3f31]\n  [bt] (3) /workspace/build/libtvm.so(+0x6b1c67) [0x7fc35cf6ac67]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7fc35cf6768d]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 12  GFLOPS: 120.55/228.21   result: MeasureResult(costs=(0.0019204238301886794,), error_no=0, all_cost=1.3215758800506592, timestamp=1607225811.7186823)    [('tile_f', [-1, 2, 32, 4]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,5036371
-    No: 13  GFLOPS: 0.00/228.21     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fc35dba3f31]\n  [bt] (3) /workspace/build/libtvm.so(+0x6b1c67) [0x7fc35cf6ac67]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7fc35cf6768d]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 14  GFLOPS: 0.00/228.21     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fc35dba3f31]\n  [bt] (3) /workspace/build/libtvm.so(+0x6b1c67) [0x7fc35cf6ac67]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7fc35cf6768d]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 15  GFLOPS: 82.94/228.21    result: MeasureResult(costs=(0.0027913426315789476,), error_no=0, all_cost=1.4726567268371582, timestamp=1607225813.1304166)    [('tile_f', [-1, 1, 1, 4]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3582580
-    No: 16  GFLOPS: 0.00/228.21     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fc35dba3f31]\n  [bt] (3) /workspace/build/libtvm.so(+0x6b1c67) [0x7fc35cf6ac67]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7fc35cf6768d]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 17  GFLOPS: 0.00/228.21     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fc35dba3f31]\n  [bt] (3) /workspace/build/libtvm.so(+0x6b1c67) [0x7fc35cf6ac67]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7fc35cf6768d]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 18  GFLOPS: 0.00/228.21     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fc35dba3f31]\n  [bt] (3) /workspace/build/libtvm.so(+0x6b1c67) [0x7fc35cf6ac67]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7fc35cf6768d]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
-    No: 19  GFLOPS: 17.48/228.21    result: MeasureResult(costs=(0.013246279555555554,), error_no=0, all_cost=1.6513993740081787, timestamp=1607225816.3279276)     [('tile_f', [-1, 8, 64, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 2, 2]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4107668
-    No: 20  GFLOPS: 0.00/228.21     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fc35dba3f31]\n  [bt] (3) /workspace/build/libtvm.so(+0x6b1c67) [0x7fc35cf6ac67]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7fc35cf6768d]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 1   GFLOPS: 458.47/458.47   result: MeasureResult(costs=(0.0005049419962264151,), error_no=0, all_cost=1.5375077724456787, timestamp=1608956281.1430748)    [('tile_f', [-1, 2, 64, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 2, 2]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4881186
+    No: 2   GFLOPS: 0.00/458.47     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f3502b3d7f1]\n  [bt] (3) /workspace/build/libtvm.so(+0x6bf2d7) [0x7f3501ef82d7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7f3501ef4cfd]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 3   GFLOPS: 220.31/458.47   result: MeasureResult(costs=(0.0010508116209150325,), error_no=0, all_cost=1.6342782974243164, timestamp=1608956282.443572)     [('tile_f', [-1, 4, 32, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 1]), ('tile_rc', [-1, 1, 16]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3605182
+    No: 4   GFLOPS: 0.00/458.47     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f3502b3d7f1]\n  [bt] (3) /workspace/build/libtvm.so(+0x6bf2d7) [0x7f3501ef82d7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7f3501ef4cfd]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 5   GFLOPS: 0.00/458.47     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f3502b3d7f1]\n  [bt] (3) /workspace/build/libtvm.so(+0x6bf2d7) [0x7f3501ef82d7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7f3501ef4cfd]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 6   GFLOPS: 0.00/458.47     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f3502b3d7f1]\n  [bt] (3) /workspace/build/libtvm.so(+0x6bf2d7) [0x7f3501ef82d7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7f3501ef4cfd]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 7   GFLOPS: 0.00/458.47     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f3502b3d7f1]\n  [bt] (3) /workspace/build/libtvm.so(+0x6bf2d7) [0x7f3501ef82d7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7f3501ef4cfd]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 8   GFLOPS: 2.18/458.47     result: MeasureResult(costs=(0.10624710275,), error_no=0, all_cost=3.20185923576355, timestamp=1608956285.2293513)      [('tile_f', [-1, 2, 4, 64]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 2, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,2716108
+    No: 9   GFLOPS: 20.91/458.47    result: MeasureResult(costs=(0.0110708657,), error_no=0, all_cost=1.4918665885925293, timestamp=1608956286.1990628)     [('tile_f', [-1, 1, 4, 2]), ('tile_y', [-1, 7, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 2, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1263092
+    No: 10  GFLOPS: 273.32/458.47   result: MeasureResult(costs=(0.0008469947421052631,), error_no=0, all_cost=1.6795475482940674, timestamp=1608956287.2305887)    [('tile_f', [-1, 1, 32, 4]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 16, 1]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,8921130
+    No: 11  GFLOPS: 0.00/458.47     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f3502b3d7f1]\n  [bt] (3) /workspace/build/libtvm.so(+0x6bf2d7) [0x7f3501ef82d7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7f3501ef4cfd]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 12  GFLOPS: 148.81/458.47   result: MeasureResult(costs=(0.0015557267846153847,), error_no=0, all_cost=1.2524280548095703, timestamp=1608956288.236084)     [('tile_f', [-1, 2, 32, 4]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 1]), ('tile_ry', [-1, 1, 3]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,5036371
+    No: 13  GFLOPS: 0.00/458.47     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f3502b3d7f1]\n  [bt] (3) /workspace/build/libtvm.so(+0x6bf2d7) [0x7f3501ef82d7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7f3501ef4cfd]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 14  GFLOPS: 0.00/458.47     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f3502b3d7f1]\n  [bt] (3) /workspace/build/libtvm.so(+0x6bf2d7) [0x7f3501ef82d7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7f3501ef4cfd]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 15  GFLOPS: 108.29/458.47   result: MeasureResult(costs=(0.0021377897021276596,), error_no=0, all_cost=1.2944214344024658, timestamp=1608956289.4396245)    [('tile_f', [-1, 1, 1, 4]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 1, 8]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,3582580
+    No: 16  GFLOPS: 0.00/458.47     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f3502b3d7f1]\n  [bt] (3) /workspace/build/libtvm.so(+0x6bf2d7) [0x7f3501ef82d7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7f3501ef4cfd]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 17  GFLOPS: 0.00/458.47     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f3502b3d7f1]\n  [bt] (3) /workspace/build/libtvm.so(+0x6bf2d7) [0x7f3501ef82d7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7f3501ef4cfd]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 18  GFLOPS: 0.00/458.47     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f3502b3d7f1]\n  [bt] (3) /workspace/build/libtvm.so(+0x6bf2d7) [0x7f3501ef82d7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7f3501ef4cfd]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
+    No: 19  GFLOPS: 15.65/458.47    result: MeasureResult(costs=(0.014792380444444444,), error_no=0, all_cost=1.4618678092956543, timestamp=1608956292.6708024)     [('tile_f', [-1, 8, 64, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 1, 7]), ('tile_rc', [-1, 2, 2]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4107668
+    No: 20  GFLOPS: 0.00/458.47     result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  [bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f3502b3d7f1]\n  [bt] (3) /workspace/build/libtvm.so(+0x6bf2d7) [0x7f3501ef82d7]\n  [bt] (2) /workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x3ed) [0x7f3501ef4cfd]\n  [bt] (1) /workspace/build/libtvm.so(tvm::tir::transform::PrimFunc [...]
 
 
 
@@ -312,8 +312,8 @@ and measure running time.
 
 
     Best config:
-    [('tile_f', [-1, 1, 32, 4]), ('tile_y', [-1, 1, 7, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 16, 1]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 1]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 1)],None,8921130
-    Time cost of this operator: 0.001354
+    [('tile_f', [-1, 2, 64, 1]), ('tile_y', [-1, 1, 1, 7]), ('tile_x', [-1, 1, 7, 1]), ('tile_rc', [-1, 2, 2]), ('tile_ry', [-1, 3, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 1500), ('unroll_explicit', 0)],None,4881186
+    Time cost of this operator: 0.000595
 
 
 
diff --git a/docs/_sources/tutorials/autotvm/tune_relay_arm.rst.txt b/docs/_sources/tutorials/autotvm/tune_relay_arm.rst.txt
index 9ccc545..52b33b0 100644
--- a/docs/_sources/tutorials/autotvm/tune_relay_arm.rst.txt
+++ b/docs/_sources/tutorials/autotvm/tune_relay_arm.rst.txt
@@ -39,7 +39,7 @@ To use the autotvm package in tvm, we need to install some extra dependencies.
 
 .. code-block:: bash
 
-  pip3 install --user psutil xgboost tornado
+  pip3 install --user psutil xgboost tornado cloudpickle
 
 To make TVM run faster during tuning, it is recommended to use cython
 as FFI of TVM. In the root directory of TVM, execute
@@ -155,7 +155,7 @@ The expected output is
 
   INFO:RPCTracker:bind to 0.0.0.0:9190
 
-Register devices to RPC Tracker
+Register Devices to RPC Tracker
 -----------------------------------
 Now we can register our devices to the tracker. The first step is to
 build the TVM runtime for the ARM devices.
diff --git a/docs/_sources/tutorials/autotvm/tune_relay_cuda.rst.txt b/docs/_sources/tutorials/autotvm/tune_relay_cuda.rst.txt
index 282e528..5ad864d 100644
--- a/docs/_sources/tutorials/autotvm/tune_relay_cuda.rst.txt
+++ b/docs/_sources/tutorials/autotvm/tune_relay_cuda.rst.txt
@@ -37,7 +37,7 @@ To use the autotvm package in tvm, we need to install some extra dependencies.
 
 .. code-block:: bash
 
-  pip3 install --user psutil xgboost tornado
+  pip3 install --user psutil xgboost tornado cloudpickle
 
 To make TVM run faster during tuning, it is recommended to use cython
 as FFI of tvm. In the root directory of tvm, execute:
@@ -342,10 +342,10 @@ As a reference baseline, the time cost of MXNet + TensorRT on resnet-18 is 1.30m
 
   Finally, always feel free to ask our community for help on https://discuss.tvm.apache.org
 
+.. _tutorials-autotvm-scale-up-rpc-tracker:
+
 Scale up measurement by using multiple devices
 ----------------------------------------------
-.. _tutorials-autotvm-rpc-tracker:
-
 If you have multiple devices, you can use all of them for measurement.
 TVM uses the RPC Tracker to manage distributed devices.
 The RPC Tracker is a centralized controller node. We can register all devices to
@@ -366,8 +366,8 @@ The expected output is
 
   INFO:RPCTracker:bind to 0.0.0.0:9190
 
-Then open another new terminal for the RPC server. We need to start one server
-for each dedicated device. We use a string key to distinguish the types of devices.
+Then open another new terminal for the RPC server. We need to start one dedicated server
+for each device. We use a string key to distinguish the types of devices.
 You can pick a name you like.
 (Note: For rocm backend, there are some internal errors with the compiler,
 we need to add `--no-fork` to the argument list.)
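 
 With the tracker and servers running, a minimal sketch of pointing the tuner
 at them (assuming devices registered under the key `1080ti`):
 
 .. code-block:: python
 
     measure_option = autotvm.measure_option(
         builder=autotvm.LocalBuilder(timeout=10),
         # ask the tracker for remote devices registered under the given key
         runner=autotvm.RPCRunner(
             "1080ti", host="0.0.0.0", port=9190,
             number=20, repeat=3, timeout=4, min_repeat_ms=150,
         ),
     )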
diff --git a/docs/_sources/tutorials/autotvm/tune_relay_mobile_gpu.rst.txt b/docs/_sources/tutorials/autotvm/tune_relay_mobile_gpu.rst.txt
index b82a4c7..bdfe404 100644
--- a/docs/_sources/tutorials/autotvm/tune_relay_mobile_gpu.rst.txt
+++ b/docs/_sources/tutorials/autotvm/tune_relay_mobile_gpu.rst.txt
@@ -37,7 +37,7 @@ To use the autotvm package in tvm, we need to install some extra dependencies.
 
 .. code-block:: bash
 
-  pip3 install --user psutil xgboost tornado
+  pip3 install --user psutil xgboost tornado cloudpickle
 
 To make TVM run faster during tuning, it is recommended to use cython
 as FFI of tvm. In the root directory of tvm, execute
@@ -129,6 +129,8 @@ We can also load models from MXNet, ONNX and TensorFlow.
 
 
 
+.. _tutorials-autotvm-start-rpc-tracker:
+
 Start RPC Tracker
 -----------------
 TVM uses RPC session to communicate with ARM boards.
@@ -154,7 +156,7 @@ The expected output is
 
   INFO:RPCTracker:bind to 0.0.0.0:9190
 
-Register devices to RPC Tracker
+Register Devices to RPC Tracker
 -----------------------------------
 Now we can register our devices to the tracker. The first step is to
 build the TVM runtime for the ARM devices.
diff --git a/docs/_sources/tutorials/autotvm/tune_simple_template.rst.txt b/docs/_sources/tutorials/autotvm/tune_simple_template.rst.txt
index 5294a0a..9f786b7 100644
--- a/docs/_sources/tutorials/autotvm/tune_simple_template.rst.txt
+++ b/docs/_sources/tutorials/autotvm/tune_simple_template.rst.txt
@@ -31,7 +31,7 @@ This step (installing xgboost) can be skipped as it doesn't need XGBoost
 
 .. code-block:: bash
 
-  pip3 install --user psutil xgboost
+  pip3 install --user psutil xgboost cloudpickle
 
 To make TVM run faster in tuning, it is recommended to use cython
 as FFI of TVM. In the root directory of TVM, execute
@@ -369,16 +369,16 @@ used to get the best config later.
  .. code-block:: none
 
     Get devices for measurement successfully!
-    No: 1   GFLOPS: 0.52/0.52       result: MeasureResult(costs=(0.5179643672,), error_no=0, all_cost=8.699557542800903, timestamp=1607225778.9184623)      [('tile_y', [-1, 64]), ('tile_x', [-1, 1])],None,6
-    No: 2   GFLOPS: 2.05/2.05       result: MeasureResult(costs=(0.1307110214,), error_no=0, all_cost=2.452157735824585, timestamp=1607225781.4836178)      [('tile_y', [-1, 512]), ('tile_x', [-1, 8])],None,39
-    No: 3   GFLOPS: 2.77/2.77       result: MeasureResult(costs=(0.0968108324,), error_no=0, all_cost=2.015434741973877, timestamp=1607225783.5040994)      [('tile_y', [-1, 2]), ('tile_x', [-1, 8])],None,31
-    No: 4   GFLOPS: 7.71/7.71       result: MeasureResult(costs=(0.0348177938,), error_no=0, all_cost=0.9887301921844482, timestamp=1607225784.5313203)     [('tile_y', [-1, 1]), ('tile_x', [-1, 32])],None,50
-    No: 5   GFLOPS: 13.46/13.46     result: MeasureResult(costs=(0.0199451586,), error_no=0, all_cost=0.7833263874053955, timestamp=1607225785.3334467)     [('tile_y', [-1, 256]), ('tile_x', [-1, 64])],None,68
-    No: 6   GFLOPS: 11.91/13.46     result: MeasureResult(costs=(0.0225446656,), error_no=0, all_cost=0.7622959613800049, timestamp=1607225786.1802726)     [('tile_y', [-1, 256]), ('tile_x', [-1, 512])],None,98
-    No: 7   GFLOPS: 0.92/13.46      result: MeasureResult(costs=(0.2913359364,), error_no=0, all_cost=5.074311971664429, timestamp=1607225791.3119547)      [('tile_y', [-1, 128]), ('tile_x', [-1, 2])],None,17
-    No: 8   GFLOPS: 2.37/13.46      result: MeasureResult(costs=(0.1133100596,), error_no=0, all_cost=2.2167930603027344, timestamp=1607225793.595454)      [('tile_y', [-1, 8]), ('tile_x', [-1, 4])],None,23
-    No: 9   GFLOPS: 11.52/13.46     result: MeasureResult(costs=(0.0233022846,), error_no=0, all_cost=0.7279143333435059, timestamp=1607225795.1428313)     [('tile_y', [-1, 256]), ('tile_x', [-1, 32])],None,58
-    No: 10  GFLOPS: 14.67/14.67     result: MeasureResult(costs=(0.0182990712,), error_no=0, all_cost=0.7626948356628418, timestamp=1607225795.9127738)     [('tile_y', [-1, 64]), ('tile_x', [-1, 128])],None,76
+    No: 1   GFLOPS: 0.00/0.00       result: MeasureResult(costs=(RuntimeError('Traceback (most recent call last):\n  [bt] (5) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f3502b3d7f1]\n  [bt] (4) /workspace/build/libtvm.so(+0x1350b92) [0x7f3502b89b92]\n  [bt] (3) /workspace/build/libtvm.so(tvm::runtime::RPCWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const+0x246) [0x7f3502b8c5c6]\n  [bt] (2) /workspace/build/libtvm.so(tvm::runtime::RPCClientSession::Call [...]
+    No: 2   GFLOPS: 5.20/5.20       result: MeasureResult(costs=(0.051649653799999994,), error_no=0, all_cost=1.605952262878418, timestamp=1608956261.9138966)      [('tile_y', [-1, 512]), ('tile_x', [-1, 8])],None,39
+    No: 3   GFLOPS: 2.95/5.20       result: MeasureResult(costs=(0.0910745618,), error_no=0, all_cost=2.4689717292785645, timestamp=1608956263.7452652)     [('tile_y', [-1, 2]), ('tile_x', [-1, 8])],None,31
+    No: 4   GFLOPS: 16.65/16.65     result: MeasureResult(costs=(0.0161258324,), error_no=0, all_cost=0.7027585506439209, timestamp=1608956264.4698565)     [('tile_y', [-1, 1]), ('tile_x', [-1, 32])],None,50
+    No: 5   GFLOPS: 21.77/21.77     result: MeasureResult(costs=(0.012327853,), error_no=0, all_cost=0.9769332408905029, timestamp=1608956265.0378294)      [('tile_y', [-1, 256]), ('tile_x', [-1, 64])],None,68
+    No: 6   GFLOPS: 21.07/21.77     result: MeasureResult(costs=(0.0127378022,), error_no=0, all_cost=0.6480433940887451, timestamp=1608956265.6137564)     [('tile_y', [-1, 256]), ('tile_x', [-1, 512])],None,98
+    No: 7   GFLOPS: 0.86/21.77      result: MeasureResult(costs=(0.3122983762,), error_no=0, all_cost=5.539120435714722, timestamp=1608956270.9430459)      [('tile_y', [-1, 128]), ('tile_x', [-1, 2])],None,17
+    No: 8   GFLOPS: 1.52/21.77      result: MeasureResult(costs=(0.1765732038,), error_no=0, all_cost=3.407959222793579, timestamp=1608956274.1904879)      [('tile_y', [-1, 8]), ('tile_x', [-1, 4])],None,23
+    No: 9   GFLOPS: 19.37/21.77     result: MeasureResult(costs=(0.0138569804,), error_no=0, all_cost=0.6956896781921387, timestamp=1608956274.776456)      [('tile_y', [-1, 256]), ('tile_x', [-1, 32])],None,58
+    No: 10  GFLOPS: 24.11/24.11     result: MeasureResult(costs=(0.0111336066,), error_no=0, all_cost=0.6651310920715332, timestamp=1608956275.326466)      [('tile_y', [-1, 64]), ('tile_x', [-1, 128])],None,76
 
 
 
diff --git a/docs/_sources/tutorials/dev/sg_execution_times.rst.txt b/docs/_sources/tutorials/dev/sg_execution_times.rst.txt
index 423392c..176cdd2 100644
--- a/docs/_sources/tutorials/dev/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorials/dev/sg_execution_times.rst.txt
@@ -5,8 +5,8 @@
 
 Computation times
 =================
-**00:32.270** total execution time for **tutorials_dev** files:
+**00:26.149** total execution time for **tutorials_dev** files:
 
-- **00:31.677**: :ref:`sphx_glr_tutorials_dev_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``)
-- **00:00.396**: :ref:`sphx_glr_tutorials_dev_use_pass_infra.py` (``use_pass_infra.py``)
-- **00:00.198**: :ref:`sphx_glr_tutorials_dev_low_level_custom_pass.py` (``low_level_custom_pass.py``)
+- **00:25.646**: :ref:`sphx_glr_tutorials_dev_bring_your_own_datatypes.py` (``bring_your_own_datatypes.py``)
+- **00:00.332**: :ref:`sphx_glr_tutorials_dev_use_pass_infra.py` (``use_pass_infra.py``)
+- **00:00.171**: :ref:`sphx_glr_tutorials_dev_low_level_custom_pass.py` (``low_level_custom_pass.py``)
diff --git a/docs/_sources/tutorials/frontend/deploy_model_on_android.rst.txt b/docs/_sources/tutorials/frontend/deploy_model_on_android.rst.txt
index 754de4a..8bd3fbc 100644
--- a/docs/_sources/tutorials/frontend/deploy_model_on_android.rst.txt
+++ b/docs/_sources/tutorials/frontend/deploy_model_on_android.rst.txt
@@ -421,7 +421,7 @@ Execute on TVM
 
     TVM prediction top-1: tiger cat
     Evaluate inference time cost...
-    Mean inference time (std dev): 5.42 ms (0.16 ms)
+    Mean inference time (std dev): 5.64 ms (0.06 ms)
 
 
 
diff --git a/docs/_sources/tutorials/frontend/deploy_object_detection_pytorch.rst.txt b/docs/_sources/tutorials/frontend/deploy_object_detection_pytorch.rst.txt
index d140675..6b57773 100644
--- a/docs/_sources/tutorials/frontend/deploy_object_detection_pytorch.rst.txt
+++ b/docs/_sources/tutorials/frontend/deploy_object_detection_pytorch.rst.txt
@@ -121,6 +121,9 @@ Load pre-trained maskrcnn from torchvision and do tracing
       for s, s_orig in zip(new_size, original_size)
     /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/roi_heads.py:372: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
       return torch.tensor(M + 2 * padding).to(torch.float32) / torch.tensor(M).to(torch.float32)
+    /usr/local/lib/python3.6/dist-packages/torch/jit/_trace.py:966: TracerWarning: Output nr 2. of the traced function does not match the corresponding output of the Python function. Detailed error:
+    With rtol=1e-05 and atol=1e-05, found 1 element(s) (out of 2) whose difference(s) exceeded the margin of error (including 0 nan comparisons). The greatest difference was 2.456456422805786e-05 (0.11357344686985016 vs. 0.1135488823056221), which occurred at index 1.
+      _module_class,
 
 
 
@@ -247,7 +250,7 @@ Get boxes with score larger than 0.9
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 2 minutes  6.630 seconds)
+   **Total running time of the script:** ( 1 minutes  8.702 seconds)
 
 
 .. _sphx_glr_download_tutorials_frontend_deploy_object_detection_pytorch.py:
diff --git a/docs/_sources/tutorials/frontend/deploy_prequantized.rst.txt b/docs/_sources/tutorials/frontend/deploy_prequantized.rst.txt
index 94b4250..e7c07f7 100644
--- a/docs/_sources/tutorials/frontend/deploy_prequantized.rst.txt
+++ b/docs/_sources/tutorials/frontend/deploy_prequantized.rst.txt
@@ -299,7 +299,7 @@ We should see identical labels printed.
 
  .. code-block:: none
 
-    PyTorch top3 labels: ['tiger cat', 'Egyptian cat', 'lynx, catamount']
+    PyTorch top3 labels: ['tiger cat', 'lynx, catamount', 'Egyptian cat']
     TVM top3 labels: ['tiger cat', 'Egyptian cat', 'tabby, tabby cat']
 
 
@@ -323,7 +323,7 @@ output values are identical out of 1000 outputs from mobilenet v2.
 
  .. code-block:: none
 
-    132 in 1000 raw floating outputs identical.
+    156 in 1000 raw floating outputs identical.
 
 
 
@@ -350,7 +350,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
  .. code-block:: none
 
-    Elapsed average ms: 19.295305520000003
+    Elapsed average ms: 11.881062060000001
 
 
 
diff --git a/docs/_sources/tutorials/frontend/deploy_prequantized_tflite.rst.txt b/docs/_sources/tutorials/frontend/deploy_prequantized_tflite.rst.txt
index a05a4f0..9474092 100644
--- a/docs/_sources/tutorials/frontend/deploy_prequantized_tflite.rst.txt
+++ b/docs/_sources/tutorials/frontend/deploy_prequantized_tflite.rst.txt
@@ -368,7 +368,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
  .. code-block:: none
 
-    Elapsed average ms: 36.25781896
+    Elapsed average ms: 31.97628599
 
 
 
@@ -401,7 +401,7 @@ Here we give an example of how to measure performance of TVM compiled models.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 2 minutes  35.942 seconds)
+   **Total running time of the script:** ( 1 minutes  59.235 seconds)
 
 
 .. _sphx_glr_download_tutorials_frontend_deploy_prequantized_tflite.py:
diff --git a/docs/_sources/tutorials/frontend/deploy_ssd_gluoncv.rst.txt b/docs/_sources/tutorials/frontend/deploy_ssd_gluoncv.rst.txt
index 0c9638a..a61563a 100644
--- a/docs/_sources/tutorials/frontend/deploy_ssd_gluoncv.rst.txt
+++ b/docs/_sources/tutorials/frontend/deploy_ssd_gluoncv.rst.txt
@@ -195,7 +195,7 @@ Display result
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes  54.470 seconds)
+   **Total running time of the script:** ( 1 minutes  11.935 seconds)
 
 
 .. _sphx_glr_download_tutorials_frontend_deploy_ssd_gluoncv.py:
diff --git a/docs/_sources/tutorials/frontend/from_mxnet.rst.txt b/docs/_sources/tutorials/frontend/from_mxnet.rst.txt
index a610cdd..1e03c39 100644
--- a/docs/_sources/tutorials/frontend/from_mxnet.rst.txt
+++ b/docs/_sources/tutorials/frontend/from_mxnet.rst.txt
@@ -138,6 +138,14 @@ now compile the graph
 
 
 
+.. rst-class:: sphx-glr-script-out
+
+ Out:
+
+ .. code-block:: none
+
+    Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -thread_warp_size=32, workload=('dense_small_batch.cuda', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (1000, 512), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
+
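+A minimal sketch of one common way to avoid this fallback, assuming an
+autotvm tuning log already exists (the `tuning.log` path, and the `mod` and
+`params` objects from earlier in the tutorial, are assumptions here):
+
+.. code-block:: python
+
+    import tvm
+    from tvm import autotvm, relay
+
+    # Hypothetical: reuse records from an earlier tuning run so relay.build
+    # selects tuned schedules instead of the fallback configuration.
+    with autotvm.apply_history_best("tuning.log"):  # assumed log path
+        with tvm.transform.PassContext(opt_level=3):
+            lib = relay.build(mod, target="cuda", params=params)
+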
 
 
 Execute the portable graph on TVM
diff --git a/docs/_sources/tutorials/frontend/from_onnx.rst.txt b/docs/_sources/tutorials/frontend/from_onnx.rst.txt
index 9a20476..21ee522 100644
--- a/docs/_sources/tutorials/frontend/from_onnx.rst.txt
+++ b/docs/_sources/tutorials/frontend/from_onnx.rst.txt
@@ -77,7 +77,11 @@ we skip the pytorch model construction part, and download the saved onnx model
 
 Load a test image
 ---------------------------------------------
-A single cat dominates the examples!
+A single cat dominates the examples! This model takes a single input image of size
+224x224 and outputs an image upscaled by 3x along each axis, i.e. 672x672.
+Re-scale the cat image to fit this input shape, then convert it to `YCbCr`.
+The super-resolution model will then be applied to the luminance (`Y`)
+channel. A sketch of this preprocessing follows the code block below.
 
 
 .. code-block:: default
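+
+A minimal sketch of the preprocessing described above, assuming PIL is
+available and the downloaded cat image is saved as `cat.png` (the file name
+and variable names are illustrative):
+
+.. code-block:: python
+
+    from PIL import Image
+    import numpy as np
+
+    img = Image.open("cat.png").resize((224, 224))  # match the 224x224 input
+    img_ycbcr = img.convert("YCbCr")                # RGB -> YCbCr
+    img_y, img_cb, img_cr = img_ycbcr.split()       # model consumes Y only
+    # NCHW layout: (1, 1, 224, 224)
+    x = np.array(img_y)[np.newaxis, np.newaxis, :, :].astype("float32")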
@@ -107,6 +111,14 @@ A single cat dominates the examples!
 
 Compile the model with relay
 ---------------------------------------------
+Typically ONNX models mix model input values with parameter values, with
+the input having the name `1`. This is model dependent, so you should check
+the documentation for your model to determine the full input and
+parameter name space.
+
+Passing the shape dictionary to the `relay.frontend.from_onnx` method
+tells Relay which ONNX tensors are inputs and which are parameters, and
+provides a static definition of the input size, as sketched after the
+code block below.
 
 
 .. code-block:: default
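+
+A minimal sketch of the call described above, assuming the ONNX input tensor
+is named `1`, `onnx_model` is the loaded model, and `x` holds the preprocessed
+1x1x224x224 image from the earlier sketch:
+
+.. code-block:: python
+
+    from tvm import relay
+
+    shape_dict = {"1": x.shape}  # map the ONNX input name to a static shape
+    mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)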
@@ -130,7 +142,7 @@ Compile the model with relay
 
  .. code-block:: none
 
-    /workspace/docs/../python/tvm/relay/frontend/onnx.py:2737: UserWarning: Mismatched attribute type in ' : kernel_shape'
+    /workspace/docs/../python/tvm/relay/frontend/onnx.py:3075: UserWarning: Mismatched attribute type in ' : kernel_shape'
 
     ==> Context: Bad node spec: input: "1" input: "2" output: "11" op_type: "Conv" attribute { name: "kernel_shape" ints: 5 ints: 5 } attribute { name: "strides" ints: 1 ints: 1 } attribute { name: "pads" ints: 2 ints: 2 ints: 2 ints: 2 } attribute { name: "dilations" ints: 1 ints: 1 } attribute { name: "group" i: 1 }
       warnings.warn(str(e))
@@ -154,7 +166,9 @@ Execute on TVM
 
 Display results
 ---------------------------------------------
-We put input and output image neck to neck
+We put the input and output images side by side. The luminance channel `Y` is the
+output from the model. The chroma channels `Cb` and `Cr` are resized to match using
+simple bicubic interpolation, then the channels are recombined and converted back
+to `RGB` (see the sketch after the code block below).
 
 
 .. code-block:: default
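+
+A minimal sketch of the recombination step, assuming `tvm_output` holds the
+model's 1x1x672x672 result and `img_cb`/`img_cr` come from the earlier
+`YCbCr` split (names are illustrative):
+
+.. code-block:: python
+
+    from PIL import Image
+    import numpy as np
+
+    # Clamp the model output to valid pixel values and rebuild the image.
+    out_y = Image.fromarray(np.uint8(tvm_output[0, 0].clip(0, 255)), mode="L")
+    out_cb = img_cb.resize(out_y.size, Image.BICUBIC)  # bicubic chroma resize
+    out_cr = img_cr.resize(out_y.size, Image.BICUBIC)
+    result = Image.merge("YCbCr", [out_y, out_cb, out_cr]).convert("RGB")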
@@ -188,6 +202,11 @@ into a static shapes at compile time. If this fails, there may still be dynamic
 operations in the model. Not all TVM kernels currently support dynamic shapes,
 please file an issue on discuss.tvm.apache.org if you hit an error with dynamic kernels.
 
+This particular model was built using an older version of ONNX. During the import
+phase the ONNX importer will run the ONNX verifier, which may throw a `Mismatched
+attribute type` warning. Because TVM supports a number of different ONNX versions,
+the Relay model will still be valid.
+
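+If dynamic operations do remain after import (see the note above), one option
+is to try Relay's `DynamicToStatic` pass before compiling; a sketch, not a
+required step for this tutorial's model:
+
+.. code-block:: python
+
+    from tvm import relay
+
+    # Attempt to rewrite dynamic ops into static equivalents where possible.
+    mod = relay.transform.DynamicToStatic()(mod)
+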
 
 .. _sphx_glr_download_tutorials_frontend_from_onnx.py:
 
diff --git a/docs/_sources/tutorials/frontend/from_pytorch.rst.txt b/docs/_sources/tutorials/frontend/from_pytorch.rst.txt
index cad1d0a..808a4bf 100644
--- a/docs/_sources/tutorials/frontend/from_pytorch.rst.txt
+++ b/docs/_sources/tutorials/frontend/from_pytorch.rst.txt
@@ -155,7 +155,7 @@ Compile the graph to llvm target with given input specification.
 
  .. code-block:: none
 
-
    ...47%, 0.01 MB, 39 KB/s, 0 seconds passed
    ...94%, 0.02 MB, 78 KB/s, 0 seconds passed
    ...100%, 0.02 MB, 117 KB/s, 0 seconds passed
+
    ...47%, 0.01 MB, 661 KB/s, 0 seconds passed
    ...94%, 0.02 MB, 1298 KB/s, 0 seconds passed
    ...100%, 0.02 MB, 1919 KB/s, 0 seconds passed
     Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('dense_nopack.x86', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (1000, 512), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
 
 
diff --git a/docs/_sources/tutorials/frontend/from_tensorflow.rst.txt b/docs/_sources/tutorials/frontend/from_tensorflow.rst.txt
index fac03e1..ad4ac5b 100644
--- a/docs/_sources/tutorials/frontend/from_tensorflow.rst.txt
+++ b/docs/_sources/tutorials/frontend/from_tensorflow.rst.txt
@@ -195,1971 +195,10 @@ Results:
 
  .. code-block:: none
 
-    /workspace/docs/../python/tvm/relay/frontend/tensorflow.py:2894: UserWarning: Ignore the passed shape. Shape in graphdef will be used for operator DecodeJpeg/contents.
+    /workspace/docs/../python/tvm/relay/frontend/tensorflow.py:2914: UserWarning: Ignore the passed shape. Shape in graphdef will be used for operator DecodeJpeg/contents.
       "will be used for operator %s." % node.name
     /workspace/docs/../python/tvm/relay/frontend/tensorflow.py:745: UserWarning: DecodeJpeg: It's a pass through, please handle preprocessing before input
       warnings.warn("DecodeJpeg: It's a pass through, please handle preprocessing before input")
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.expand_dims
-    WARNING:root:Attribute T is ignored in relay.sym.expand_dims
-    WARNING:root:Attribute Tdim is ignored in relay.sym.expand_dims
-    WARNING:root:Attribute _node_name is ignored in relay.sym.expand_dims
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.expand_dims
-    WARNING:root:Attribute half_pixel_centers is ignored in relay.sym.resize
-    WARNING:root:Attribute T is ignored in relay.sym.resize
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.resize
-    WARNING:root:Attribute _node_name is ignored in relay.sym.resize
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.resize
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute ksize is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute T is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute T is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute ksize is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute ksize is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute T is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute N is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
-    WARNING:root:Attribute T is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _node_name is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.concatenate
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute ksize is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.concatenate
-    WARNING:root:Attribute N is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _node_name is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute ksize is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
-    WARNING:root:Attribute N is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _node_name is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute ksize is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute T is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
-    WARNING:root:Attribute T is ignored in relay.sym.concatenate
-    WARNING:root:Attribute N is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _node_name is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.concatenate
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    [... ~900 similar lines elided: "WARNING:root:Attribute <attr> is ignored in relay.sym.<op>" is repeated for every node converted from the imported graph, where <attr> is one of T, _output_shapes, _node_name, _target_layout, explicit_paddings, use_cudnn_on_gpu, message, ksize, or N, and <op> is one of conv2d, batch_norm, copy, relu, avg_pool2d, max_pool2d, or concatenate ...]
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute ksize is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
-    WARNING:root:Attribute T is ignored in relay.sym.concatenate
-    WARNING:root:Attribute N is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _node_name is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.concatenate
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute ksize is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.max_pool2d
-    WARNING:root:Attribute explicit_paddings is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.conv2d
-    WARNING:root:Attribute use_cudnn_on_gpu is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.conv2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.conv2d
-    WARNING:root:Attribute T is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _node_name is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.batch_norm
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.copy
-    WARNING:root:Attribute T is ignored in relay.sym.copy
-    WARNING:root:Attribute message is ignored in relay.sym.copy
-    WARNING:root:Attribute _node_name is ignored in relay.sym.copy
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.copy
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.relu
-    WARNING:root:Attribute T is ignored in relay.sym.relu
-    WARNING:root:Attribute _node_name is ignored in relay.sym.relu
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.relu
-    WARNING:root:Attribute N is ignored in relay.sym.concatenate
-    WARNING:root:Attribute T is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _node_name is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.concatenate
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute ksize is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute T is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute _node_name is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.avg_pool2d
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.reshape
-    WARNING:root:Attribute Tshape is ignored in relay.sym.reshape
-    WARNING:root:Attribute T is ignored in relay.sym.reshape
-    WARNING:root:Attribute _node_name is ignored in relay.sym.reshape
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.reshape
-    WARNING:root:Attribute transpose_a is ignored in relay.sym.dense
-    WARNING:root:Attribute transpose_b is ignored in relay.sym.dense
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.dense
-    WARNING:root:Attribute T is ignored in relay.sym.dense
-    WARNING:root:Attribute _node_name is ignored in relay.sym.dense
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.dense
-    WARNING:root:Attribute T is ignored in relay.sym.softmax
-    WARNING:root:Attribute _output_shapes is ignored in relay.sym.softmax
-    WARNING:root:Attribute _node_name is ignored in relay.sym.softmax
-    WARNING:root:Attribute _target_layout is ignored in relay.sym.softmax
     Tensorflow protobuf imported to relay frontend.
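
For context, a minimal sketch of the import call that emits the attribute
warnings above, following the ``from_tensorflow`` tutorial this output
belongs to (``graph_def`` and the input name/shape are assumed context,
not values from this log):

.. code-block:: python

    from tvm import relay

    # TensorFlow-only attributes such as _output_shapes or use_cudnn_on_gpu
    # carry no meaning for Relay, so the frontend logs that it ignores them.
    shape_dict = {"DecodeJpeg/contents": (299, 299, 3)}  # assumed input
    mod, params = relay.frontend.from_tensorflow(graph_def, shape=shape_dict)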
 
 
@@ -2184,6 +223,151 @@ Results:
 
 
 
+.. rst-class:: sphx-glr-script-out
+
+ Out:
+
+ .. code-block:: none
+
+    conv2d NHWC layout is not optimized for x86 with autotvm.
+    [the line above repeats many times, once per conv2d in the network]
+    Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('dense_nopack.x86', ('TENSOR', (1, 2048), 'float32'), ('TENSOR', (1008, 2048), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
+    conv2d NHWC layout is not optimized for x86 with autotvm.
+    [the line above repeats many more times during the second build]
+
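
The repeated NHWC warning comes from autotvm's x86 schedules, which are
written for NCHW. A minimal sketch of one way to avoid it, assuming the
Relay module ``mod`` imported above (this pass is not part of the tutorial
output shown here):

.. code-block:: python

    import tvm
    from tvm import relay

    # Rewrite every conv2d to NCHW; "default" lets the pass pick the
    # matching kernel layout.
    desired_layouts = {"nn.conv2d": ["NCHW", "default"]}
    seq = tvm.transform.Sequential(
        [
            relay.transform.RemoveUnusedFunctions(),
            relay.transform.ConvertLayout(desired_layouts),
        ]
    )
    with tvm.transform.PassContext(opt_level=3):
        mod = seq(mod)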
 
 
 Execute the portable graph on TVM
diff --git a/docs/_sources/tutorials/frontend/sg_execution_times.rst.txt b/docs/_sources/tutorials/frontend/sg_execution_times.rst.txt
index 8eae4e6..7edb568 100644
--- a/docs/_sources/tutorials/frontend/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorials/frontend/sg_execution_times.rst.txt
@@ -5,24 +5,24 @@
 
 Computation times
 =================
-**10:34.564** total execution time for **tutorials_frontend** files:
+**07:19.891** total execution time for **tutorials_frontend** files:
 
-- **02:35.942**: :ref:`sphx_glr_tutorials_frontend_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)
-- **02:06.630**: :ref:`sphx_glr_tutorials_frontend_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``)
-- **01:54.470**: :ref:`sphx_glr_tutorials_frontend_deploy_ssd_gluoncv.py` (``deploy_ssd_gluoncv.py``)
-- **00:38.674**: :ref:`sphx_glr_tutorials_frontend_from_tensorflow.py` (``from_tensorflow.py``)
-- **00:30.038**: :ref:`sphx_glr_tutorials_frontend_deploy_quantized.py` (``deploy_quantized.py``)
-- **00:28.994**: :ref:`sphx_glr_tutorials_frontend_deploy_prequantized.py` (``deploy_prequantized.py``)
-- **00:24.732**: :ref:`sphx_glr_tutorials_frontend_from_tflite.py` (``from_tflite.py``)
-- **00:24.094**: :ref:`sphx_glr_tutorials_frontend_from_darknet.py` (``from_darknet.py``)
-- **00:16.564**: :ref:`sphx_glr_tutorials_frontend_from_caffe2.py` (``from_caffe2.py``)
-- **00:14.846**: :ref:`sphx_glr_tutorials_frontend_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)
-- **00:12.948**: :ref:`sphx_glr_tutorials_frontend_deploy_model_on_android.py` (``deploy_model_on_android.py``)
-- **00:12.326**: :ref:`sphx_glr_tutorials_frontend_from_pytorch.py` (``from_pytorch.py``)
-- **00:09.828**: :ref:`sphx_glr_tutorials_frontend_from_mxnet.py` (``from_mxnet.py``)
-- **00:09.102**: :ref:`sphx_glr_tutorials_frontend_from_coreml.py` (``from_coreml.py``)
-- **00:08.859**: :ref:`sphx_glr_tutorials_frontend_from_keras.py` (``from_keras.py``)
-- **00:03.407**: :ref:`sphx_glr_tutorials_frontend_using_external_lib.py` (``using_external_lib.py``)
-- **00:01.689**: :ref:`sphx_glr_tutorials_frontend_from_onnx.py` (``from_onnx.py``)
-- **00:01.228**: :ref:`sphx_glr_tutorials_frontend_build_gcn.py` (``build_gcn.py``)
-- **00:00.194**: :ref:`sphx_glr_tutorials_frontend_deploy_sparse.py` (``deploy_sparse.py``)
+- **01:59.235**: :ref:`sphx_glr_tutorials_frontend_deploy_prequantized_tflite.py` (``deploy_prequantized_tflite.py``)
+- **01:11.935**: :ref:`sphx_glr_tutorials_frontend_deploy_ssd_gluoncv.py` (``deploy_ssd_gluoncv.py``)
+- **01:08.702**: :ref:`sphx_glr_tutorials_frontend_deploy_object_detection_pytorch.py` (``deploy_object_detection_pytorch.py``)
+- **00:27.229**: :ref:`sphx_glr_tutorials_frontend_from_tensorflow.py` (``from_tensorflow.py``)
+- **00:22.552**: :ref:`sphx_glr_tutorials_frontend_deploy_prequantized.py` (``deploy_prequantized.py``)
+- **00:22.332**: :ref:`sphx_glr_tutorials_frontend_deploy_quantized.py` (``deploy_quantized.py``)
+- **00:19.043**: :ref:`sphx_glr_tutorials_frontend_from_tflite.py` (``from_tflite.py``)
+- **00:18.636**: :ref:`sphx_glr_tutorials_frontend_from_darknet.py` (``from_darknet.py``)
+- **00:13.115**: :ref:`sphx_glr_tutorials_frontend_from_caffe2.py` (``from_caffe2.py``)
+- **00:11.285**: :ref:`sphx_glr_tutorials_frontend_deploy_model_on_rasp.py` (``deploy_model_on_rasp.py``)
+- **00:10.181**: :ref:`sphx_glr_tutorials_frontend_deploy_model_on_android.py` (``deploy_model_on_android.py``)
+- **00:08.842**: :ref:`sphx_glr_tutorials_frontend_from_pytorch.py` (``from_pytorch.py``)
+- **00:07.599**: :ref:`sphx_glr_tutorials_frontend_from_mxnet.py` (``from_mxnet.py``)
+- **00:07.342**: :ref:`sphx_glr_tutorials_frontend_from_coreml.py` (``from_coreml.py``)
+- **00:06.778**: :ref:`sphx_glr_tutorials_frontend_from_keras.py` (``from_keras.py``)
+- **00:02.578**: :ref:`sphx_glr_tutorials_frontend_using_external_lib.py` (``using_external_lib.py``)
+- **00:01.377**: :ref:`sphx_glr_tutorials_frontend_from_onnx.py` (``from_onnx.py``)
+- **00:00.980**: :ref:`sphx_glr_tutorials_frontend_build_gcn.py` (``build_gcn.py``)
+- **00:00.150**: :ref:`sphx_glr_tutorials_frontend_deploy_sparse.py` (``deploy_sparse.py``)
diff --git a/docs/_sources/tutorials/get_started/cross_compilation_and_rpc.rst.txt b/docs/_sources/tutorials/get_started/cross_compilation_and_rpc.rst.txt
index f4add43..26cab3f 100644
--- a/docs/_sources/tutorials/get_started/cross_compilation_and_rpc.rst.txt
+++ b/docs/_sources/tutorials/get_started/cross_compilation_and_rpc.rst.txt
@@ -235,7 +235,7 @@ device and returns the measured cost. Network overhead is excluded.
 
  .. code-block:: none
 
-    1.182e-07 secs/op
+    1.944e-07 secs/op
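
A minimal sketch of how such a number is measured, following the pattern
of this tutorial (the RPC address, target and kernel below are
placeholders, not values from this build):

.. code-block:: python

    import numpy as np
    import tvm
    from tvm import te, rpc

    # Build a trivial vector-add kernel to run on the remote device.
    n = 1024
    A = te.placeholder((n,), name="A")
    B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
    s = te.create_schedule(B.op)
    func = tvm.build(s, [A, B], target="llvm")

    remote = rpc.connect("127.0.0.1", 9090)  # assumed RPC server address
    func.export_library("/tmp/myadd.tar")
    remote.upload("/tmp/myadd.tar")
    rfunc = remote.load_module("myadd.tar")

    ctx = remote.cpu(0)
    a = tvm.nd.array(np.random.uniform(size=n).astype("float32"), ctx)
    b = tvm.nd.array(np.zeros(n, dtype="float32"), ctx)
    # time_evaluator reports only on-device time, so network overhead
    # is excluded from the secs/op figure.
    time_f = rfunc.time_evaluator(rfunc.entry_name, ctx, number=10)
    print("%g secs/op" % time_f(a, b).mean)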
 
 
 
diff --git a/docs/_sources/tutorials/get_started/relay_quick_start.rst.txt b/docs/_sources/tutorials/get_started/relay_quick_start.rst.txt
index 2965b65..a83cb40 100644
--- a/docs/_sources/tutorials/get_started/relay_quick_start.rst.txt
+++ b/docs/_sources/tutorials/get_started/relay_quick_start.rst.txt
@@ -224,7 +224,7 @@ in this example. Then the machine code will be generated as the module library.
 
  .. code-block:: none
 
-
     ...1%, 0.01 MB, 39 KB/s, 0 seconds passed  [...]  ...100%, 0.47 MB, 2118 KB/s, 0 seconds passed
+
     ...1%, 0.01 MB, 314 KB/s, 0 seconds passed  [...]  ...100%, 0.47 MB, 13318 KB/s, 0 seconds passed
     Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -model=unknown -thread_warp_size=32, workload=('dense_small_batch.cuda', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (1000, 512), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
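
A minimal sketch of the build step this fallback message comes from,
following the ``relay_quick_start`` tutorial (the resnet-18 workload and
CUDA target are that tutorial's defaults, assumed here):

.. code-block:: python

    import tvm
    from tvm import relay
    from tvm.relay import testing

    # resnet-18 ends in a (1, 512) x (1000, 512) dense layer; with no tuned
    # config on file, dense_small_batch.cuda falls back as logged above.
    mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="cuda", params=params)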
 
 
diff --git a/docs/_sources/tutorials/get_started/sg_execution_times.rst.txt b/docs/_sources/tutorials/get_started/sg_execution_times.rst.txt
index 7d657b5..d3a7e4e 100644
--- a/docs/_sources/tutorials/get_started/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorials/get_started/sg_execution_times.rst.txt
@@ -5,9 +5,9 @@
 
 Computation times
 =================
-**00:16.931** total execution time for **tutorials_get_started** files:
+**00:13.165** total execution time for **tutorials_get_started** files:
 
-- **00:16.360**: :ref:`sphx_glr_tutorials_get_started_relay_quick_start.py` (``relay_quick_start.py``)
-- **00:00.358**: :ref:`sphx_glr_tutorials_get_started_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)
-- **00:00.126**: :ref:`sphx_glr_tutorials_get_started_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``)
-- **00:00.087**: :ref:`sphx_glr_tutorials_get_started_tvmc_command_line_driver.py` (``tvmc_command_line_driver.py``)
+- **00:12.577**: :ref:`sphx_glr_tutorials_get_started_relay_quick_start.py` (``relay_quick_start.py``)
+- **00:00.347**: :ref:`sphx_glr_tutorials_get_started_tensor_expr_get_started.py` (``tensor_expr_get_started.py``)
+- **00:00.150**: :ref:`sphx_glr_tutorials_get_started_cross_compilation_and_rpc.py` (``cross_compilation_and_rpc.py``)
+- **00:00.091**: :ref:`sphx_glr_tutorials_get_started_tvmc_command_line_driver.py` (``tvmc_command_line_driver.py``)
diff --git a/docs/_sources/tutorials/get_started/tensor_expr_get_started.rst.txt b/docs/_sources/tutorials/get_started/tensor_expr_get_started.rst.txt
index f86234d..8673a9a 100644
--- a/docs/_sources/tutorials/get_started/tensor_expr_get_started.rst.txt
+++ b/docs/_sources/tutorials/get_started/tensor_expr_get_started.rst.txt
@@ -325,7 +325,7 @@ The following code first performs the following steps:
 
  .. code-block:: none
 
-    ['myadd.tvm_meta.json', 'myadd.ptx', 'myadd.so', 'myadd.o']
+    ['myadd.tvm_meta.json', 'myadd.so', 'myadd.ptx', 'myadd.o']
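
A minimal sketch of the save/export steps that produce these files,
following the ``tensor_expr_get_started`` tutorial (``fadd`` is the built
module from earlier in that tutorial; only the nondeterministic
``listdir()`` ordering changes in this hunk):

.. code-block:: python

    from tvm.contrib import cc, utils  # ``utils`` is ``util`` in older TVM

    temp = utils.tempdir()
    fadd.save(temp.relpath("myadd.o"))                        # host object
    fadd.imported_modules[0].save(temp.relpath("myadd.ptx"))  # device code
    cc.create_shared(temp.relpath("myadd.so"), [temp.relpath("myadd.o")])
    print(temp.listdir())  # myadd.tvm_meta.json is written alongside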
 
 
 
diff --git a/docs/_sources/tutorials/index.rst.txt b/docs/_sources/tutorials/index.rst.txt
index 32a19ab..10baf1c 100644
--- a/docs/_sources/tutorials/index.rst.txt
+++ b/docs/_sources/tutorials/index.rst.txt
@@ -970,6 +970,26 @@ AutoScheduler : Template-free Auto Scheduling
 
 .. only:: html
 
+    .. figure:: /tutorials/auto_scheduler/images/thumb/sphx_glr_tune_network_mali_thumb.png
+
+        :ref:`sphx_glr_tutorials_auto_scheduler_tune_network_mali.py`
+
+.. raw:: html
+
+    </div>
+
+
+.. toctree::
+   :hidden:
+
+   /tutorials/auto_scheduler/tune_network_mali
+
+.. raw:: html
+
+    <div class="sphx-glr-thumbcontainer" tooltip="Auto-tuning for specific devices and workloads is critical for getting the best performance. Th...">
+
+.. only:: html
+
     .. figure:: /tutorials/auto_scheduler/images/thumb/sphx_glr_tune_network_x86_thumb.png
 
         :ref:`sphx_glr_tutorials_auto_scheduler_tune_network_x86.py`
diff --git a/docs/_sources/tutorials/language/schedule_primitives.rst.txt b/docs/_sources/tutorials/language/schedule_primitives.rst.txt
index ec5d465..877313c 100644
--- a/docs/_sources/tutorials/language/schedule_primitives.rst.txt
+++ b/docs/_sources/tutorials/language/schedule_primitives.rst.txt
@@ -449,13 +449,13 @@ of computation of `C`.
 
     primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {B: Buffer(B_2: Pointer(float32), float32, [m: int32], [stride: int32], type="auto"),
-                 C: Buffer(C_2: Pointer(float32), float32, [m], [stride_1: int32], type="auto"),
+      buffers = {C: Buffer(C_2: Pointer(float32), float32, [m: int32], [stride: int32], type="auto"),
+                 B: Buffer(B_2: Pointer(float32), float32, [m], [stride_1: int32], type="auto"),
                  A: Buffer(A_2: Pointer(float32), float32, [m], [stride_2: int32], type="auto")}
       buffer_map = {A_1: A, B_1: B, C_1: C} {
       for (i: int32, 0, m) {
-        B_2[(i*stride)] = ((float32*)A_2[(i*stride_2)] + 1f32)
-        C_2[(i*stride_1)] = ((float32*)B_2[(i*stride)]*2f32)
+        B_2[(i*stride_1)] = ((float32*)A_2[(i*stride_2)] + 1f32)
+        C_2[(i*stride)] = ((float32*)B_2[(i*stride_1)]*2f32)
       }
     }
 
@@ -492,12 +492,12 @@ tensor is required.
 
     primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {C: Buffer(C_2: Pointer(float32), float32, [m: int32], [stride: int32], type="auto"),
-                 B: Buffer(B_2: Pointer(float32), float32, [m], [stride_1: int32], type="auto"),
+      buffers = {B: Buffer(B_2: Pointer(float32), float32, [m: int32], [stride: int32], type="auto"),
+                 C: Buffer(C_2: Pointer(float32), float32, [m], [stride_1: int32], type="auto"),
                  A: Buffer(A_2: Pointer(float32), float32, [m], [stride_2: int32], type="auto")}
       buffer_map = {A_1: A, B_1: B, C_1: C} {
       for (i: int32, 0, m) {
-        C_2[(i*stride)] = (((float32*)A_2[(i*stride_2)] + 1f32)*2f32)
+        C_2[(i*stride_1)] = (((float32*)A_2[(i*stride_2)] + 1f32)*2f32)
       }
     }
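
A minimal sketch of the schedules that produce the two TIR dumps above,
following the ``schedule_primitives`` tutorial (names assumed):

.. code-block:: python

    import tvm
    from tvm import te

    m = te.var("m")
    A = te.placeholder((m,), name="A")
    B = te.compute((m,), lambda i: A[i] + 1.0, name="B")
    C = te.compute((m,), lambda i: B[i] * 2.0, name="C")

    # compute_at moves B's computation into C's loop over i (first dump).
    s = te.create_schedule(C.op)
    s[B].compute_at(s[C], C.op.axis[0])
    print(tvm.lower(s, [A, B, C], simple_mode=True))

    # compute_inline folds B into C, so no separate loop over B and no
    # intermediate tensor is required (second dump).
    s2 = te.create_schedule(C.op)
    s2[B].compute_inline()
    print(tvm.lower(s2, [A, B, C], simple_mode=True))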
 
diff --git a/docs/_sources/tutorials/language/sg_execution_times.rst.txt b/docs/_sources/tutorials/language/sg_execution_times.rst.txt
index e5f878a..beb90e1 100644
--- a/docs/_sources/tutorials/language/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorials/language/sg_execution_times.rst.txt
@@ -5,13 +5,13 @@
 
 Computation times
 =================
-**00:04.449** total execution time for **tutorials_language** files:
+**00:03.582** total execution time for **tutorials_language** files:
 
-- **00:01.582**: :ref:`sphx_glr_tutorials_language_intrin_math.py` (``intrin_math.py``)
-- **00:00.773**: :ref:`sphx_glr_tutorials_language_tensorize.py` (``tensorize.py``)
-- **00:00.581**: :ref:`sphx_glr_tutorials_language_scan.py` (``scan.py``)
-- **00:00.542**: :ref:`sphx_glr_tutorials_language_reduction.py` (``reduction.py``)
-- **00:00.319**: :ref:`sphx_glr_tutorials_language_extern_op.py` (``extern_op.py``)
-- **00:00.245**: :ref:`sphx_glr_tutorials_language_schedule_primitives.py` (``schedule_primitives.py``)
-- **00:00.210**: :ref:`sphx_glr_tutorials_language_tuple_inputs.py` (``tuple_inputs.py``)
-- **00:00.198**: :ref:`sphx_glr_tutorials_language_tedd.py` (``tedd.py``)
+- **00:01.247**: :ref:`sphx_glr_tutorials_language_intrin_math.py` (``intrin_math.py``)
+- **00:00.669**: :ref:`sphx_glr_tutorials_language_tensorize.py` (``tensorize.py``)
+- **00:00.457**: :ref:`sphx_glr_tutorials_language_scan.py` (``scan.py``)
+- **00:00.455**: :ref:`sphx_glr_tutorials_language_reduction.py` (``reduction.py``)
+- **00:00.238**: :ref:`sphx_glr_tutorials_language_extern_op.py` (``extern_op.py``)
+- **00:00.181**: :ref:`sphx_glr_tutorials_language_tedd.py` (``tedd.py``)
+- **00:00.173**: :ref:`sphx_glr_tutorials_language_schedule_primitives.py` (``schedule_primitives.py``)
+- **00:00.162**: :ref:`sphx_glr_tutorials_language_tuple_inputs.py` (``tuple_inputs.py``)
diff --git a/docs/_sources/tutorials/language/tensorize.rst.txt b/docs/_sources/tutorials/language/tensorize.rst.txt
index 1f1b6ab..4d05246 100644
--- a/docs/_sources/tutorials/language/tensorize.rst.txt
+++ b/docs/_sources/tutorials/language/tensorize.rst.txt
@@ -308,12 +308,12 @@ The importing needs to happen before the tensorized GEMV is executed.
 
     primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {C: Buffer(C_2: Pointer(float32), float32, [1024, 512], []),
-                 B: Buffer(B_2: Pointer(float32), float32, [512, 64], []),
+      buffers = {B: Buffer(B_2: Pointer(float32), float32, [512, 64], []),
+                 C: Buffer(C_2: Pointer(float32), float32, [1024, 512], []),
                  A: Buffer(A_2: Pointer(float32), float32, [1024, 64], [])}
       buffer_map = {A_1: A, B_1: B, C_1: C} {
-      attr [IterVar(i: int32, (nullptr), "DataPar", "")] "pragma_import_llvm" = "; ModuleID = '/tmp/tmpinr5hwkd/input0.cc'
-    source_filename = "/tmp/tmpinr5hwkd/input0.cc"
+      attr [IterVar(i: int32, (nullptr), "DataPar", "")] "pragma_import_llvm" = "; ModuleID = '/tmp/tmp8hwfzwof/input0.cc'
+    source_filename = "/tmp/tmp8hwfzwof/input0.cc"
     target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
     target triple = "x86_64-pc-linux-gnu"
 
diff --git a/docs/_sources/tutorials/language/tuple_inputs.rst.txt b/docs/_sources/tutorials/language/tuple_inputs.rst.txt
index 9ddc391..dce05da 100644
--- a/docs/_sources/tutorials/language/tuple_inputs.rst.txt
+++ b/docs/_sources/tutorials/language/tuple_inputs.rst.txt
@@ -64,15 +64,15 @@ together in the next schedule procedure.
 
     primfn(A0_1: handle, A1_1: handle, B.v0_1: handle, B.v1_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {B.v1: Buffer(B.v1_2: Pointer(float32), float32, [m: int32, n: int32], [stride: int32, stride_1: int32], type="auto"),
-                 B.v0: Buffer(B.v0_2: Pointer(float32), float32, [m, n], [stride_2: int32, stride_3: int32], type="auto"),
-                 A1: Buffer(A1_2: Pointer(float32), float32, [m, n], [stride_4: int32, stride_5: int32], type="auto"),
+      buffers = {B.v0: Buffer(B.v0_2: Pointer(float32), float32, [m: int32, n: int32], [stride: int32, stride_1: int32], type="auto"),
+                 A1: Buffer(A1_2: Pointer(float32), float32, [m, n], [stride_2: int32, stride_3: int32], type="auto"),
+                 B.v1: Buffer(B.v1_2: Pointer(float32), float32, [m, n], [stride_4: int32, stride_5: int32], type="auto"),
                  A0: Buffer(A0_2: Pointer(float32), float32, [m, n], [stride_6: int32, stride_7: int32], type="auto")}
       buffer_map = {A0_1: A0, A1_1: A1, B.v0_1: B.v0, B.v1_1: B.v1} {
       for (i: int32, 0, m) {
         for (j: int32, 0, n) {
-          B.v0_2[((i*stride_2) + (j*stride_3))] = ((float32*)A0_2[((i*stride_6) + (j*stride_7))] + 2f32)
-          B.v1_2[((i*stride) + (j*stride_1))] = ((float32*)A1_2[((i*stride_4) + (j*stride_5))]*3f32)
+          B.v0_2[((i*stride) + (j*stride_1))] = ((float32*)A0_2[((i*stride_6) + (j*stride_7))] + 2f32)
+          B.v1_2[((i*stride_4) + (j*stride_5))] = ((float32*)A1_2[((i*stride_2) + (j*stride_3))]*3f32)
         }
       }
     }
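
The tuple-output compute that yields the B.v0/B.v1 pair above is, per the tutorial, essentially the following minimal sketch:

.. code-block:: python

    import tvm
    from tvm import te

    m = te.var("m")
    n = te.var("n")
    A0 = te.placeholder((m, n), name="A0")
    A1 = te.placeholder((m, n), name="A1")
    # One te.compute producing two outputs, matching B.v0/B.v1 in the IR.
    B0, B1 = te.compute((m, n), lambda i, j: (A0[i, j] + 2, A1[i, j] * 3), name="B")

    s = te.create_schedule(B0.op)
    print(tvm.lower(s, [A0, A1, B0, B1], simple_mode=True))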
@@ -135,17 +135,17 @@ with :py:func:`te.comm_reducer` as below:
 
     primfn(idx_1: handle, val_1: handle, T.v0_1: handle, T.v1_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {T.v1: Buffer(T.v1_2: Pointer(int32), int32, [m: int32], [stride: int32], type="auto"),
+      buffers = {T.v0: Buffer(T.v0_2: Pointer(int32), int32, [m: int32], [stride: int32], type="auto"),
                  val: Buffer(val_2: Pointer(int32), int32, [m, n: int32], [stride_1: int32, stride_2: int32], type="auto"),
-                 T.v0: Buffer(T.v0_2: Pointer(int32), int32, [m], [stride_3: int32], type="auto"),
+                 T.v1: Buffer(T.v1_2: Pointer(int32), int32, [m], [stride_3: int32], type="auto"),
                  idx: Buffer(idx_2: Pointer(int32), int32, [m, n], [stride_4: int32, stride_5: int32], type="auto")}
       buffer_map = {idx_1: idx, val_1: val, T.v0_1: T.v0, T.v1_1: T.v1} {
       for (i: int32, 0, m) {
-        T.v0_2[(i*stride_3)] = -1
-        T.v1_2[(i*stride)] = -2147483648
+        T.v0_2[(i*stride)] = -1
+        T.v1_2[(i*stride_3)] = -2147483648
         for (k: int32, 0, n) {
-          T.v0_2[(i*stride_3)] = @tir.if_then_else(((int32*)val_2[((i*stride_1) + (k*stride_2))] <= (int32*)T.v1_2[(i*stride)]), (int32*)T.v0_2[(i*stride_3)], (int32*)idx_2[((i*stride_4) + (k*stride_5))], dtype=int32)
-          T.v1_2[(i*stride)] = @tir.if_then_else(((int32*)val_2[((i*stride_1) + (k*stride_2))] <= (int32*)T.v1_2[(i*stride)]), (int32*)T.v1_2[(i*stride)], (int32*)val_2[((i*stride_1) + (k*stride_2))], dtype=int32)
+          T.v0_2[(i*stride)] = @tir.if_then_else(((int32*)val_2[((i*stride_1) + (k*stride_2))] <= (int32*)T.v1_2[(i*stride_3)]), (int32*)T.v0_2[(i*stride)], (int32*)idx_2[((i*stride_4) + (k*stride_5))], dtype=int32)
+          T.v1_2[(i*stride_3)] = @tir.if_then_else(((int32*)val_2[((i*stride_1) + (k*stride_2))] <= (int32*)T.v1_2[(i*stride_3)]), (int32*)T.v1_2[(i*stride_3)], (int32*)val_2[((i*stride_1) + (k*stride_2))], dtype=int32)
         }
       }
     }
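
A minimal sketch of the te.comm_reducer argmax that produces the T.v0/T.v1 pair above (the init values -1 and the int32 minimum match the IR):

.. code-block:: python

    import tvm
    from tvm import te

    def fcombine(x, y):
        lhs = tvm.tir.Select(x[1] >= y[1], x[0], y[0])
        rhs = tvm.tir.Select(x[1] >= y[1], x[1], y[1])
        return lhs, rhs

    def fidentity(t0, t1):
        return tvm.tir.const(-1, t0), tvm.te.min_value(t1)

    argmax = te.comm_reducer(fcombine, fidentity, name="argmax")

    m, n = te.var("m"), te.var("n")
    idx = te.placeholder((m, n), name="idx", dtype="int32")
    val = te.placeholder((m, n), name="val", dtype="int32")
    k = te.reduce_axis((0, n), "k")
    T0, T1 = te.compute((m,), lambda i: argmax((idx[i, k], val[i, k]), axis=k), name="T")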
diff --git a/docs/_sources/tutorials/micro/micro_tflite.rst.txt b/docs/_sources/tutorials/micro/micro_tflite.rst.txt
index bb9567b..122dd85 100644
--- a/docs/_sources/tutorials/micro/micro_tflite.rst.txt
+++ b/docs/_sources/tutorials/micro/micro_tflite.rst.txt
@@ -231,6 +231,9 @@ file.
         c_mod,
         lib_opts=opts["bin_opts"],
         bin_opts=opts["bin_opts"],
+        # Use the microTVM memory manager. If, in your main.cc, you change TVMPlatformMemoryAllocate and
+        # TVMPlatformMemoryFree to use e.g. malloc() and free(), you can omit this extra library.
+        extra_libs=[os.path.join(tvm.micro.build.CRT_ROOT_DIR, "memory")],
     )
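
For orientation, the surrounding call is roughly the following sketch, assuming the 0.8-dev microTVM API; ``workspace``, ``compiler``, ``c_mod``, and ``opts`` are set up earlier in the tutorial:

.. code-block:: python

    import os
    import tvm.micro

    micro_binary = tvm.micro.build_static_runtime(
        workspace,
        compiler,
        c_mod,
        lib_opts=opts["bin_opts"],
        bin_opts=opts["bin_opts"],
        # Pull in the CRT memory manager; unnecessary if main.cc routes
        # TVMPlatformMemoryAllocate/TVMPlatformMemoryFree to malloc()/free().
        extra_libs=[os.path.join(tvm.micro.build.CRT_ROOT_DIR, "memory")],
    )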
 
 
diff --git a/docs/_sources/tutorials/micro/sg_execution_times.rst.txt b/docs/_sources/tutorials/micro/sg_execution_times.rst.txt
index c04245a..d2fc72f 100644
--- a/docs/_sources/tutorials/micro/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorials/micro/sg_execution_times.rst.txt
@@ -5,7 +5,7 @@
 
 Computation times
 =================
-**00:06.048** total execution time for **tutorials_micro** files:
+**00:06.152** total execution time for **tutorials_micro** files:
 
-- **00:05.849**: :ref:`sphx_glr_tutorials_micro_micro_tflite.py` (``micro_tflite.py``)
-- **00:00.199**: :ref:`sphx_glr_tutorials_micro_micro_reference_vm.py` (``micro_reference_vm.py``)
+- **00:05.998**: :ref:`sphx_glr_tutorials_micro_micro_tflite.py` (``micro_tflite.py``)
+- **00:00.154**: :ref:`sphx_glr_tutorials_micro_micro_reference_vm.py` (``micro_reference_vm.py``)
diff --git a/docs/_sources/tutorials/optimize/opt_conv_cuda.rst.txt b/docs/_sources/tutorials/optimize/opt_conv_cuda.rst.txt
index 85d20dc..140945d 100644
--- a/docs/_sources/tutorials/optimize/opt_conv_cuda.rst.txt
+++ b/docs/_sources/tutorials/optimize/opt_conv_cuda.rst.txt
@@ -296,7 +296,7 @@ latency of convolution.
 
  .. code-block:: none
 
-    Convolution: 53.197723 ms
+    Convolution: 19.711014 ms
 
 
 
diff --git a/docs/_sources/tutorials/optimize/opt_conv_tensorcore.rst.txt b/docs/_sources/tutorials/optimize/opt_conv_tensorcore.rst.txt
index 17e2062..c73eb43 100644
--- a/docs/_sources/tutorials/optimize/opt_conv_tensorcore.rst.txt
+++ b/docs/_sources/tutorials/optimize/opt_conv_tensorcore.rst.txt
@@ -624,7 +624,7 @@ be able to run on our build server
 
  .. code-block:: none
 
-    conv2d with tensor core: 8.329637 ms
+    conv2d with tensor core: 6.665305 ms
 
 
 
diff --git a/docs/_sources/tutorials/optimize/opt_gemm.rst.txt b/docs/_sources/tutorials/optimize/opt_gemm.rst.txt
index a213af0..452fea6 100644
--- a/docs/_sources/tutorials/optimize/opt_gemm.rst.txt
+++ b/docs/_sources/tutorials/optimize/opt_gemm.rst.txt
@@ -118,8 +118,8 @@ Then we write a baseline implementation, the simplest way to write a matrix mult
 
  .. code-block:: none
 
-    Numpy running time: 0.006963
-    Baseline: 3.516655
+    Numpy running time: 0.006749
+    Baseline: 5.843904
 
 
 
@@ -206,7 +206,7 @@ fill 32 * 32 * sizeof(float) which is 4KB in the cache whose total size is 32KB
 
  .. code-block:: none
 
-    Opt1: 0.284967
+    Opt1: 0.107367
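
The blocking this hunk times (one 32x32 float tile is 32 * 32 * 4 B = 4 KB, comfortably inside a 32 KB L1) is, in te form, roughly:

.. code-block:: python

    import tvm
    from tvm import te

    M = N = K = 1024
    k = te.reduce_axis((0, K), "k")
    A = te.placeholder((M, K), name="A")
    B = te.placeholder((K, N), name="B")
    C = te.compute((M, N), lambda x, y: te.sum(A[x, k] * B[k, y], axis=k), name="C")

    bn = 32  # block size: one bn x bn float tile occupies 4 KB
    s = te.create_schedule(C.op)
    xo, yo, xi, yi = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
    (kaxis,) = s[C].op.reduce_axis
    ko, ki = s[C].split(kaxis, factor=4)
    s[C].reorder(xo, yo, ko, ki, xi, yi)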
 
 
 
@@ -300,7 +300,7 @@ In this tutorial, we chose to vectorize the inner loop row data since it is cach
 
  .. code-block:: none
 
-    Opt2: 0.321595
+    Opt2: 0.112012
 
 
 
@@ -389,7 +389,7 @@ the access pattern for A matrix is more cache friendly.
 
  .. code-block:: none
 
-    Opt3: 0.111657
+    Opt3: 0.060845
 
 
 
@@ -413,8 +413,8 @@ Here is the generated IR after loop permutation.
 
     primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {B: Buffer(B_2: Pointer(float32), float32, [1024, 1024], []),
-                 C: Buffer(C_2: Pointer(float32), float32, [1024, 1024], []),
+      buffers = {C: Buffer(C_2: Pointer(float32), float32, [1024, 1024], []),
+                 B: Buffer(B_2: Pointer(float32), float32, [1024, 1024], []),
                  A: Buffer(A_2: Pointer(float32), float32, [1024, 1024], [])}
       buffer_map = {A_1: A, B_1: B, C_1: C} {
       for (x.outer: int32, 0, 32) {
@@ -499,7 +499,7 @@ the corresponding value from the packed array.
 
  .. code-block:: none
 
-    Opt4: 0.105409
+    Opt4: 0.062676
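
Array packing, as timed here, rearranges B so the innermost loop reads consecutive elements; a sketch under the same 1024-cubed shapes (index names are illustrative):

.. code-block:: python

    import tvm
    from tvm import te

    M = N = K = 1024
    bn = 32
    k = te.reduce_axis((0, K), "k")
    A = te.placeholder((M, K), name="A")
    B = te.placeholder((K, N), name="B")

    # Repack B into [N//bn, K, bn] so the inner loop walks bn contiguous floats.
    packedB = te.compute((N // bn, K, bn), lambda xb, kk, xi: B[kk, xb * bn + xi], name="packedB")
    C = te.compute(
        (M, N),
        lambda x, y: te.sum(A[x, k] * packedB[y // bn, k, tvm.tir.indexmod(y, bn)], axis=k),
        name="C",
    )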
 
 
 
@@ -609,7 +609,7 @@ write to C when all the block results are ready.
 
  .. code-block:: none
 
-    Opt5: 0.098048
+    Opt5: 0.059428
 
 
 
@@ -725,7 +725,7 @@ Furthermore, we can also utilize multi-core processors to do the thread-level par
 
  .. code-block:: none
 
-    Opt6: 0.032347
+    Opt6: 0.016327
 
 
 
diff --git a/docs/_sources/tutorials/optimize/sg_execution_times.rst.txt b/docs/_sources/tutorials/optimize/sg_execution_times.rst.txt
index eefe5a5..6e5acde 100644
--- a/docs/_sources/tutorials/optimize/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorials/optimize/sg_execution_times.rst.txt
@@ -5,9 +5,9 @@
 
 Computation times
 =================
-**00:27.648** total execution time for **tutorials_optimize** files:
+**00:26.965** total execution time for **tutorials_optimize** files:
 
-- **00:25.018**: :ref:`sphx_glr_tutorials_optimize_opt_gemm.py` (``opt_gemm.py``)
-- **00:01.324**: :ref:`sphx_glr_tutorials_optimize_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``)
-- **00:01.095**: :ref:`sphx_glr_tutorials_optimize_opt_conv_cuda.py` (``opt_conv_cuda.py``)
-- **00:00.212**: :ref:`sphx_glr_tutorials_optimize_opt_matmul_auto_tensorcore.py` (``opt_matmul_auto_tensorcore.py``)
+- **00:24.637**: :ref:`sphx_glr_tutorials_optimize_opt_gemm.py` (``opt_gemm.py``)
+- **00:01.085**: :ref:`sphx_glr_tutorials_optimize_opt_conv_tensorcore.py` (``opt_conv_tensorcore.py``)
+- **00:01.080**: :ref:`sphx_glr_tutorials_optimize_opt_conv_cuda.py` (``opt_conv_cuda.py``)
+- **00:00.163**: :ref:`sphx_glr_tutorials_optimize_opt_matmul_auto_tensorcore.py` (``opt_matmul_auto_tensorcore.py``)
diff --git a/docs/_sources/tutorials/topi/intro_topi.rst.txt b/docs/_sources/tutorials/topi/intro_topi.rst.txt
index 445cb92..e9938a5 100644
--- a/docs/_sources/tutorials/topi/intro_topi.rst.txt
+++ b/docs/_sources/tutorials/topi/intro_topi.rst.txt
@@ -231,7 +231,7 @@ As you can see, scheduled stages of computation have been accumulated and we can
 
  .. code-block:: none
 
-    [stage(a, placeholder(a, 0x190de3e10)), stage(b, placeholder(b, 0x1a4304890)), stage(T_add, compute(T_add, body=[(a[ax0, ax1, ax2] + b[ax1, ax2])], axis=[iter_var(ax0, range(min=0, ext=100)), iter_var(ax1, range(min=0, ext=10)), iter_var(ax2, range(min=0, ext=10))], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[(a[ax0, ax1, ax2]*b[ax1, ax2])], axis=[iter_var(ax0, range(min=0, ext=100)), iter_var(ax1, range(min=0, ext=10)), iter_var(ax2, range( [...]
+    [stage(a, placeholder(a, 0x179c79ba0)), stage(b, placeholder(b, 0x178d329e0)), stage(T_add, compute(T_add, body=[(a[ax0, ax1, ax2] + b[ax1, ax2])], axis=[iter_var(ax0, range(min=0, ext=100)), iter_var(ax1, range(min=0, ext=10)), iter_var(ax2, range(min=0, ext=10))], reduce_axis=[], tag=broadcast, attrs={})), stage(T_multiply, compute(T_multiply, body=[(a[ax0, ax1, ax2]*b[ax1, ax2])], axis=[iter_var(ax0, range(min=0, ext=100)), iter_var(ax1, range(min=0, ext=10)), iter_var(ax2, range( [...]
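
The stage list above comes from broadcasting two topi ops over a (100, 10, 10) and a (10, 10) placeholder; a minimal sketch:

.. code-block:: python

    import tvm
    from tvm import te, topi

    a = te.placeholder((100, 10, 10), name="a")
    b = te.placeholder((10, 10), name="b")
    c = topi.add(a, b)       # -> the T_add stage above
    d = topi.multiply(a, b)  # -> the T_multiply stage above

    s = te.create_schedule([c.op, d.op])
    print(s.stages)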
 
 
 
diff --git a/docs/_sources/tutorials/topi/sg_execution_times.rst.txt b/docs/_sources/tutorials/topi/sg_execution_times.rst.txt
index a5d1531..a3de2c6 100644
--- a/docs/_sources/tutorials/topi/sg_execution_times.rst.txt
+++ b/docs/_sources/tutorials/topi/sg_execution_times.rst.txt
@@ -5,6 +5,6 @@
 
 Computation times
 =================
-**00:00.640** total execution time for **tutorials_topi** files:
+**00:00.576** total execution time for **tutorials_topi** files:
 
-- **00:00.640**: :ref:`sphx_glr_tutorials_topi_intro_topi.py` (``intro_topi.py``)
+- **00:00.576**: :ref:`sphx_glr_tutorials_topi_intro_topi.py` (``intro_topi.py``)
diff --git a/docs/_sources/vta/tutorials/autotvm/sg_execution_times.rst.txt b/docs/_sources/vta/tutorials/autotvm/sg_execution_times.rst.txt
index 7623620..1de3a8a 100644
--- a/docs/_sources/vta/tutorials/autotvm/sg_execution_times.rst.txt
+++ b/docs/_sources/vta/tutorials/autotvm/sg_execution_times.rst.txt
@@ -5,6 +5,6 @@
 
 Computation times
 =================
-**00:07.840** total execution time for **vta_tutorials_autotvm** files:
+**00:06.050** total execution time for **vta_tutorials_autotvm** files:
 
-- **00:07.840**: :ref:`sphx_glr_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``)
+- **00:06.050**: :ref:`sphx_glr_vta_tutorials_autotvm_tune_relay_vta.py` (``tune_relay_vta.py``)
diff --git a/docs/_sources/vta/tutorials/autotvm/tune_relay_vta.rst.txt b/docs/_sources/vta/tutorials/autotvm/tune_relay_vta.rst.txt
index 5735b17..f9de15d 100644
--- a/docs/_sources/vta/tutorials/autotvm/tune_relay_vta.rst.txt
+++ b/docs/_sources/vta/tutorials/autotvm/tune_relay_vta.rst.txt
@@ -30,7 +30,7 @@ To use the autotvm package in tvm, we need to install some extra dependencies.
 
 .. code-block:: bash
 
-  pip3 install --user psutil xgboost tornado mxnet requests "Pillow<7"
+  pip3 install --user psutil xgboost tornado mxnet requests "Pillow<7" cloudpickle
 
To make TVM run faster during tuning, it is recommended to use cython
as the FFI of TVM. In the root directory of TVM, execute
@@ -375,18 +375,6 @@ Finally, we launch tuning jobs and evaluate the end-to-end performance.
 
     def tune_and_evaluate(tuning_opt):
 
-        if env.TARGET != "sim":
-            # Get remote from fleet node
-            remote = autotvm.measure.request_remote(
-                env.TARGET, tracker_host, tracker_port, timeout=10000
-            )
-            # Reconfigure the JIT runtime and FPGA.
-            vta.reconfig_runtime(remote)
-            vta.program_fpga(remote, bitstream=None)
-        else:
-            # In simulation mode, host the RPC server locally.
-            remote = rpc.LocalSession()
-
         # Register VTA tuning tasks
         register_vta_tuning_tasks()
 
@@ -442,6 +430,19 @@ Finally, we launch tuning jobs and evaluate the end-to-end performance.
         print("Tuning...")
         tune_tasks(tasks, **tuning_opt)
 
+        # evaluate with tuning history
+        if env.TARGET != "sim":
+            # Get remote from fleet node
+            remote = autotvm.measure.request_remote(
+                env.TARGET, tracker_host, tracker_port, timeout=10000
+            )
+            # Reconfigure the JIT runtime and FPGA.
+            vta.reconfig_runtime(remote)
+            vta.program_fpga(remote, bitstream=None)
+        else:
+            # In simulation mode, host the RPC server locally.
+            remote = rpc.LocalSession()
+
         # compile kernels with history best records
         with autotvm.tophub.context(target, extra_files=[log_file]):
             # Compile network
@@ -460,9 +461,9 @@ Finally, we launch tuning jobs and evaluate the end-to-end performance.
             # Export library
             print("Upload...")
             temp = utils.tempdir()
-            lib.save(temp.relpath("graphlib.o"))
-            remote.upload(temp.relpath("graphlib.o"))
-            lib = remote.load_module("graphlib.o")
+            lib.export_library(temp.relpath("graphlib.tar"))
+            remote.upload(temp.relpath("graphlib.tar"))
+            lib = remote.load_module("graphlib.tar")
 
             # Generate the graph runtime
             ctx = remote.ext_dev(0) if device == "vta" else remote.cpu(0)
@@ -497,7 +498,7 @@ Finally, we launch tuning jobs and evaluate the end-to-end performance.
  .. code-block:: none
 
     Extract tasks...
-   ...1%, 0.01 MB, 22 KB/s, 0 seconds passed  [... download progress ticks elided ...]  ...100%, 0.73 MB, 1970 KB/s, 0 seconds passed
+   ...1%, 0.01 MB, 32 KB/s, 0 seconds passed  [... download progress ticks elided ...]  ...100%, 0.73 MB, 2855 KB/s, 0 seconds passed
     Extracted 10 conv2d tasks:
     (1, 14, 14, 256, 512, 1, 1, 0, 0, 2, 2)
     (1, 28, 28, 128, 256, 1, 1, 0, 0, 2, 2)
diff --git a/docs/_sources/vta/tutorials/frontend/deploy_classification.rst.txt b/docs/_sources/vta/tutorials/frontend/deploy_classification.rst.txt
index 602c00f..24a47fd 100644
--- a/docs/_sources/vta/tutorials/frontend/deploy_classification.rst.txt
+++ b/docs/_sources/vta/tutorials/frontend/deploy_classification.rst.txt
@@ -243,8 +243,8 @@ The compilation steps are:
 
  .. code-block:: none
 
-   ...12%, 0.01 MB, 42 KB/s, 0 seconds passed  [... download progress ticks elided ...]  ...100%, 0.06 MB, 329 KB/s, 0 seconds passed
-    resnet18_v1 inference graph built in 8.53s!
+   ...12%, 0.01 MB, 42 KB/s, 0 seconds passed  [... download progress ticks elided ...]  ...100%, 0.06 MB, 339 KB/s, 0 seconds passed
+    resnet18_v1 inference graph built in 6.50s!
 
 
 
diff --git a/docs/_sources/vta/tutorials/frontend/sg_execution_times.rst.txt b/docs/_sources/vta/tutorials/frontend/sg_execution_times.rst.txt
index 08db91b..a91794b 100644
--- a/docs/_sources/vta/tutorials/frontend/sg_execution_times.rst.txt
+++ b/docs/_sources/vta/tutorials/frontend/sg_execution_times.rst.txt
@@ -5,6 +5,6 @@
 
 Computation times
 =================
-**00:29.592** total execution time for **vta_tutorials_frontend** files:
+**00:24.644** total execution time for **vta_tutorials_frontend** files:
 
-- **00:29.592**: :ref:`sphx_glr_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``)
+- **00:24.644**: :ref:`sphx_glr_vta_tutorials_frontend_deploy_classification.py` (``deploy_classification.py``)
diff --git a/docs/_sources/vta/tutorials/matrix_multiply.rst.txt b/docs/_sources/vta/tutorials/matrix_multiply.rst.txt
index 4331499..aa5bdcc 100644
--- a/docs/_sources/vta/tutorials/matrix_multiply.rst.txt
+++ b/docs/_sources/vta/tutorials/matrix_multiply.rst.txt
@@ -535,8 +535,8 @@ by the VTA runtime JIT compiler.
 
     primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {B: Buffer(B_2: Pointer(int8), int8, [16, 16, 16, 16], []),
-                 C: Buffer(C_2: Pointer(int8), int8, [1, 16, 1, 16], []),
+      buffers = {C: Buffer(C_2: Pointer(int8), int8, [1, 16, 1, 16], []),
+                 B: Buffer(B_2: Pointer(int8), int8, [16, 16, 16, 16], []),
                  A: Buffer(A_2: Pointer(int8), int8, [1, 16, 1, 16], [])}
       buffer_map = {A_1: A, B_1: B, C_1: C} {
       attr [C_buf: Pointer(int32)] "storage_scope" = "local.acc_buffer";
diff --git a/docs/_sources/vta/tutorials/optimize/convolution_opt.rst.txt b/docs/_sources/vta/tutorials/optimize/convolution_opt.rst.txt
index 95b98e9..72105b5 100644
--- a/docs/_sources/vta/tutorials/optimize/convolution_opt.rst.txt
+++ b/docs/_sources/vta/tutorials/optimize/convolution_opt.rst.txt
@@ -219,14 +219,6 @@ manageable chunks.
 
 
 
-.. rst-class:: sphx-glr-script-out
-
- Out:
-
- .. code-block:: none
-
-    <class 'int'>
-
 
 
 Scheduling the Computation
@@ -456,8 +448,8 @@ below.
 
     primfn(data_1: handle, kernel_1: handle, res_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {res: Buffer(res_2: Pointer(int8), int8, [1, 16, 14, 14, 1, 16], []),
-                 kernel: Buffer(kernel_2: Pointer(int8), int8, [16, 16, 3, 3, 16, 16], []),
+      buffers = {kernel: Buffer(kernel_2: Pointer(int8), int8, [16, 16, 3, 3, 16, 16], []),
+                 res: Buffer(res_2: Pointer(int8), int8, [1, 16, 14, 14, 1, 16], []),
                  data: Buffer(data_2: Pointer(int8), int8, [1, 16, 14, 14, 1, 16], [])}
       buffer_map = {data_1: data, kernel_1: kernel, res_1: res} {
       attr [data_buf: Pointer(int8)] "storage_scope" = "global";
diff --git a/docs/_sources/vta/tutorials/optimize/matrix_multiply_opt.rst.txt b/docs/_sources/vta/tutorials/optimize/matrix_multiply_opt.rst.txt
index dd5dcfc..5b75d4d 100644
--- a/docs/_sources/vta/tutorials/optimize/matrix_multiply_opt.rst.txt
+++ b/docs/_sources/vta/tutorials/optimize/matrix_multiply_opt.rst.txt
@@ -156,14 +156,6 @@ manageable chunks.
 
 
 
-.. rst-class:: sphx-glr-script-out
-
- Out:
-
- .. code-block:: none
-
-    <class 'int'>
-
 
 
 Scheduling the Computation
@@ -197,8 +189,8 @@ Those include:
 
     primfn(data_1: handle, weight_1: handle, res_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {res: Buffer(res_2: Pointer(int8), int8, [1, 64, 1, 16], []),
-                 weight: Buffer(weight_2: Pointer(int8), int8, [64, 64, 16, 16], []),
+      buffers = {weight: Buffer(weight_2: Pointer(int8), int8, [64, 64, 16, 16], []),
+                 res: Buffer(res_2: Pointer(int8), int8, [1, 64, 1, 16], []),
                  data: Buffer(data_2: Pointer(int8), int8, [1, 64, 1, 16], [])}
       buffer_map = {data_1: data, weight_1: weight, res_1: res} {
       attr [data_buf: Pointer(int8)] "storage_scope" = "global";
@@ -502,8 +494,8 @@ and mapping the shift, and clipping computation to the vector ALU.
 
     primfn(data_1: handle, weight_1: handle, res_1: handle) -> ()
       attr = {"global_symbol": "main", "tir.noalias": True}
-      buffers = {weight: Buffer(weight_2: Pointer(int8), int8, [64, 64, 16, 16], []),
-                 res: Buffer(res_2: Pointer(int8), int8, [1, 64, 1, 16], []),
+      buffers = {res: Buffer(res_2: Pointer(int8), int8, [1, 64, 1, 16], []),
+                 weight: Buffer(weight_2: Pointer(int8), int8, [64, 64, 16, 16], []),
                  data: Buffer(data_2: Pointer(int8), int8, [1, 64, 1, 16], [])}
       buffer_map = {data_1: data, weight_1: weight, res_1: res} {
       attr [res_gem: Pointer(int32)] "storage_scope" = "local.acc_buffer";
diff --git a/docs/_sources/vta/tutorials/optimize/sg_execution_times.rst.txt b/docs/_sources/vta/tutorials/optimize/sg_execution_times.rst.txt
index c37a09d..bfad1bf 100644
--- a/docs/_sources/vta/tutorials/optimize/sg_execution_times.rst.txt
+++ b/docs/_sources/vta/tutorials/optimize/sg_execution_times.rst.txt
@@ -5,7 +5,7 @@
 
 Computation times
 =================
-**00:03.753** total execution time for **vta_tutorials_optimize** files:
+**00:02.971** total execution time for **vta_tutorials_optimize** files:
 
-- **00:03.215**: :ref:`sphx_glr_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)
-- **00:00.537**: :ref:`sphx_glr_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``)
+- **00:02.539**: :ref:`sphx_glr_vta_tutorials_optimize_convolution_opt.py` (``convolution_opt.py``)
+- **00:00.432**: :ref:`sphx_glr_vta_tutorials_optimize_matrix_multiply_opt.py` (``matrix_multiply_opt.py``)
diff --git a/docs/_sources/vta/tutorials/sg_execution_times.rst.txt b/docs/_sources/vta/tutorials/sg_execution_times.rst.txt
index 1e2b328..612ccf3 100644
--- a/docs/_sources/vta/tutorials/sg_execution_times.rst.txt
+++ b/docs/_sources/vta/tutorials/sg_execution_times.rst.txt
@@ -5,7 +5,7 @@
 
 Computation times
 =================
-**00:00.975** total execution time for **vta_tutorials** files:
+**00:00.817** total execution time for **vta_tutorials** files:
 
-- **00:00.496**: :ref:`sphx_glr_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``)
-- **00:00.479**: :ref:`sphx_glr_vta_tutorials_vta_get_started.py` (``vta_get_started.py``)
+- **00:00.424**: :ref:`sphx_glr_vta_tutorials_matrix_multiply.py` (``matrix_multiply.py``)
+- **00:00.394**: :ref:`sphx_glr_vta_tutorials_vta_get_started.py` (``vta_get_started.py``)
diff --git a/docs/api/doxygen/annotated.html b/docs/api/doxygen/annotated.html
index 7654691..05f54d9 100644
--- a/docs/api/doxygen/annotated.html
+++ b/docs/api/doxygen/annotated.html
@@ -176,40 +176,42 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <tr id="row_1_1_49_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1ProgramMeasurerNode.html" target="_self">ProgramMeasurerNode</a></td><td class="desc">Measurer that measures the time costs of tvm programs This class combines <a class="el" href="classtvm_1_1auto__scheduler_1_1ProgramBuilder.html" title="Managed refere [...]
 <tr id="row_1_1_50_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1ProgramRunner.html" target="_self">ProgramRunner</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1auto__scheduler_1_1ProgramRunnerNode.html" title="ProgramRunner that runs the built programs and measure the time cost. ">Prog [...]
 <tr id="row_1_1_51_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1ProgramRunnerNode.html" target="_self">ProgramRunnerNode</a></td><td class="desc"><a class="el" href="classtvm_1_1auto__scheduler_1_1ProgramRunner.html" title="Managed reference to ProgramRunnerNode. ">ProgramRunner</a> that runs the built programs and  [...]
-    [... class-index rows row_1_1_52_ (PythonBasedModel) through row_1_1_85_ (TuningOptionsNode) removed here; they are re-added below with ids shifted by two ...]
+<tr id="row_1_1_52_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1PythonBasedMeasureCallback.html" target="_self">PythonBasedMeasureCallback</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1auto__scheduler_1_1PythonBasedMeasureCallbackNode.html" title="A wrapper for measure callback define [...]
+<tr id="row_1_1_53_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1PythonBasedMeasureCallbackNode.html" target="_self">PythonBasedMeasureCallbackNode</a></td><td class="desc">A wrapper for measure callback defined by python code This class will call functions defined in the python </td></tr>
+<tr id="row_1_1_54_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1PythonBasedModel.html" target="_self">PythonBasedModel</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1auto__scheduler_1_1PythonBasedModelNode.html" title="A wrapper for cost model defined by python code This class will cal [...]
+<tr id="row_1_1_55_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1PythonBasedModelNode.html" target="_self">PythonBasedModelNode</a></td><td class="desc">A wrapper for cost model defined by python code This class will call functions defined in the python </td></tr>
+<tr id="row_1_1_56_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1RandomModel.html" target="_self">RandomModel</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1auto__scheduler_1_1RandomModelNode.html" title="The cost model returning random value for all predictions. ">RandomModelNode</a> < [...]
+<tr id="row_1_1_57_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1RandomModelNode.html" target="_self">RandomModelNode</a></td><td class="desc">The cost model returning random value for all predictions </td></tr>
+<tr id="row_1_1_58_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1RecordReader.html" target="_self">RecordReader</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1auto__scheduler_1_1RecordReaderNode.html" title="Log reader to load step logs from a file. ">RecordReaderNode</a> </td></tr>
+<tr id="row_1_1_59_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1RecordReaderNode.html" target="_self">RecordReaderNode</a></td><td class="desc">Log reader to load step logs from a file </td></tr>
+<tr id="row_1_1_60_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1RecordToFile.html" target="_self">RecordToFile</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1auto__scheduler_1_1RecordToFileNode.html" title="Callback for logging the input and results of measurements to file. ">RecordToF [...]
+<tr id="row_1_1_61_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1RecordToFileNode.html" target="_self">RecordToFileNode</a></td><td class="desc">Callback for logging the input and results of measurements to file </td></tr>
+<tr id="row_1_1_62_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1ReorderStep.html" target="_self">ReorderStep</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1auto__scheduler_1_1ReorderStepNode.html" title="Reorder step that corresponds to te::Stage::reorder. ">ReorderStepNode</a> </td></tr>
+<tr id="row_1_1_63_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1ReorderStepNode.html" target="_self">ReorderStepNode</a></td><td class="desc">Reorder step that corresponds to <a class="el" href="classtvm_1_1te_1_1Stage.html#ad96cd240a92df9cafae89cdf2a7e302e" title="Reorder the iteration. ">te::Stage::reorder</a> </td></tr>
+<tr id="row_1_1_64_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1RfactorStep.html" target="_self">RfactorStep</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1auto__scheduler_1_1RfactorStepNode.html" title="Reduction factor step that corresponds to te::Schedule::rfactor. ">RfactorStepNode [...]
+<tr id="row_1_1_65_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1RfactorStepNode.html" target="_self">RfactorStepNode</a></td><td class="desc">Reduction factor step that corresponds to <a class="el" href="classtvm_1_1te_1_1Schedule.html#a34ae85add41bbed0140726d024d08862" title="Factor a reduction axis in tensor&#39;s [...]
+<tr id="row_1_1_66_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1RPCRunner.html" target="_self">RPCRunner</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1auto__scheduler_1_1RPCRunnerNode.html" title="RPCRunner that uses RPC call to measures the time cost of programs on remote devices. Or [...]
+<tr id="row_1_1_67_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1RPCRunnerNode.html" target="_self">RPCRunnerNode</a></td><td class="desc"><a class="el" href="classtvm_1_1auto__scheduler_1_1RPCRunner.html" title="Managed reference to RPCRunnerNode. ">RPCRunner</a> that uses RPC call to measures the time cost of progr [...]
+<tr id="row_1_1_68_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1SearchCallback.html" target="_self">SearchCallback</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1auto__scheduler_1_1SearchCallbackNode.html" title="Callback function to be called by the search process. This interface allo [...]
+<tr id="row_1_1_69_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1SearchCallbackNode.html" target="_self">SearchCallbackNode</a></td><td class="desc">Callback function to be called by the search process. This interface allows to do extra initializations before schedule search or extra check during/after the schedule s [...]
+<tr id="row_1_1_70_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1SearchPolicy.html" target="_self">SearchPolicy</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1auto__scheduler_1_1SearchPolicyNode.html" title="The base class of search policies. ">SearchPolicyNode</a> </td></tr>
+<tr id="row_1_1_71_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1auto__scheduler_1_1SearchPolicyKey.html" target="_self">SearchPolicyKey</a></td><td class="desc">Attribute keys of ops used for <a class="el" href="classtvm_1_1auto__scheduler_1_1SearchPolicy.html" title="Managed reference to SearchPolicyNode. ">SearchPolicy</a> </td></tr>
+<tr id="row_1_1_72_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1SearchPolicyNode.html" target="_self">SearchPolicyNode</a></td><td class="desc">The base class of search policies </td></tr>
+<tr id="row_1_1_73_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1SearchTask.html" target="_self">SearchTask</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1auto__scheduler_1_1SearchTaskNode.html" title="The computation information and hardware parameters for a specific schedule search ta [...]
+<tr id="row_1_1_74_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1SearchTaskNode.html" target="_self">SearchTaskNode</a></td><td class="desc">The computation information and hardware parameters for a specific schedule search task </td></tr>
+<tr id="row_1_1_75_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1SplitStep.html" target="_self">SplitStep</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1auto__scheduler_1_1SplitStepNode.html" title="Split step that corresponds to te::Stage::split with additional support of multiple-leve [...]
+<tr id="row_1_1_76_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1SplitStepNode.html" target="_self">SplitStepNode</a></td><td class="desc">Split step that corresponds to <a class="el" href="classtvm_1_1te_1_1Stage.html#a5a7cd562be59b68a187ad97085a3425d" title="Split the parent by factor, generate. ">te::Stage::split< [...]
+<tr id="row_1_1_77_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1Stage.html" target="_self">Stage</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1auto__scheduler_1_1StageNode.html" title="A op stage in the compute declaration. Similar to te::Stage in include/tvm/te/schedule.h. ">StageNod [...]
+<tr id="row_1_1_78_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1auto__scheduler_1_1StageAttributes.html" target="_self">StageAttributes</a></td><td class="desc">Stage-level attributes </td></tr>
+<tr id="row_1_1_79_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1StageNode.html" target="_self">StageNode</a></td><td class="desc">A op stage in the compute declaration. Similar to <a class="el" href="classtvm_1_1te_1_1Stage.html" title="Stage, contains scheduling for a stage of computation. ">te::Stage</a> in <code> [...]
+<tr id="row_1_1_80_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1State.html" target="_self">State</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1auto__scheduler_1_1StateNode.html" title="A state in the search process. It consists of the current loop structure and a list of transformatio [...]
+<tr id="row_1_1_81_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1StateNode.html" target="_self">StateNode</a></td><td class="desc">A state in the search process. It consists of the current loop structure and a list of transformation steps used to construct it. Each <a class="el" href="classtvm_1_1auto__scheduler_1_1S [...]
+<tr id="row_1_1_82_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1Step.html" target="_self">Step</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1auto__scheduler_1_1StepNode.html" title="The base class of transformation steps. Each step has its corresponding tvm.te schedule primitives..."> [...]
+<tr id="row_1_1_83_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1StepNode.html" target="_self">StepNode</a></td><td class="desc">The base class of transformation steps. Each step has its corresponding <a class="el" href="namespacetvm_1_1te.html" title="Tensor expression language DSL. ">tvm.te</a> schedule primitives  [...]
+<tr id="row_1_1_84_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1StorageAlignStep.html" target="_self">StorageAlignStep</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1auto__scheduler_1_1StorageAlignStepNode.html" title="Storage align step that corresponds to te::Stage::storage_align. "> [...]
+<tr id="row_1_1_85_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1StorageAlignStepNode.html" target="_self">StorageAlignStepNode</a></td><td class="desc">Storage align step that corresponds to <a class="el" href="classtvm_1_1te_1_1Stage.html#aa73e3a269d84c3b4f0a1994371d67bab" title="Set alignment requirement for speci [...]
+<tr id="row_1_1_86_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1TuningOptions.html" target="_self">TuningOptions</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1auto__scheduler_1_1TuningOptionsNode.html" title="Tuning and measurement options. ">TuningOptionsNode</a> </td></tr>
+<tr id="row_1_1_87_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1auto__scheduler_1_1TuningOptionsNode.html" target="_self">TuningOptionsNode</a></td><td class="desc">Tuning and measurement options </td></tr>
 <tr id="row_1_2_" class="even" style="display:none;"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span id="arr_1_2_" class="arrow" onclick="toggleFolder('1_2_')">&#9658;</span><span class="icona"><span class="icon">N</span></span><a class="el" href="namespacetvm_1_1detail.html" target="_self">detail</a></td><td class="desc"></td></tr>
 <tr id="row_1_2_0_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1detail_1_1AttrDocEntry.html" target="_self">AttrDocEntry</a></td><td class="desc"></td></tr>
 <tr id="row_1_2_1_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1detail_1_1AttrDocVisitor.html" target="_self">AttrDocVisitor</a></td><td class="desc"></td></tr>
@@ -271,199 +273,202 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <tr id="row_1_4_13_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1AvgPool1DAttrs.html" target="_self">AvgPool1DAttrs</a></td><td class="desc">Attributes for 1D avg pool operator </td></tr>
 <tr id="row_1_4_14_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1AvgPool2DAttrs.html" target="_self">AvgPool2DAttrs</a></td><td class="desc">Attributes for avg pool operator </td></tr>
 <tr id="row_1_4_15_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1AvgPool3DAttrs.html" target="_self">AvgPool3DAttrs</a></td><td class="desc">Attributes for 3D avg pool operator </td></tr>
-<tr id="row_1_4_16_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1BatchNormAttrs.html" target="_self">BatchNormAttrs</a></td><td class="desc">Attributes used in batch_norm operator </td></tr>
-<tr id="row_1_4_17_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1BatchToSpaceNDAttrs.html" target="_self">BatchToSpaceNDAttrs</a></td><td class="desc">Attributes used in BatchToSpaceND operator </td></tr>
-<tr id="row_1_4_18_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1BiasAddAttrs.html" target="_self">BiasAddAttrs</a></td><td class="desc">Add a 1D Tensor to an axis of a data </td></tr>
-<tr id="row_1_4_19_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1BinaryConv2DAttrs.html" target="_self">BinaryConv2DAttrs</a></td><td class="desc">Attribues used in bitserial convolution operators </td></tr>
-<tr id="row_1_4_20_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1BinaryDenseAttrs.html" target="_self">BinaryDenseAttrs</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_21_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1BitPackAttrs.html" target="_self">BitPackAttrs</a></td><td class="desc">Attributes used in bitpack operators </td></tr>
-<tr id="row_1_4_22_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Call.html" target="_self">Call</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_23_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1CallNode.html" target="_self">CallNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Call.html">Call</a> container </td></tr>
-<tr id="row_1_4_24_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1CallPattern.html" target="_self">CallPattern</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_25_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1CallPatternNode.html" target="_self">CallPatternNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1CallPattern.html">CallPattern</a> container </td></tr>
-<tr id="row_1_4_26_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1CastAttrs.html" target="_self">CastAttrs</a></td><td class="desc">Data type cast </td></tr>
-<tr id="row_1_4_27_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1CastHintAttrs.html" target="_self">CastHintAttrs</a></td><td class="desc">Annotate an expression to be cast into specific data type </td></tr>
-<tr id="row_1_4_28_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Clause.html" target="_self">Clause</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_29_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ClauseNode.html" target="_self">ClauseNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Clause.html">Clause</a> container node </td></tr>
-<tr id="row_1_4_30_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ClipAttrs.html" target="_self">ClipAttrs</a></td><td class="desc">Attributes for Clip operator </td></tr>
-<tr id="row_1_4_31_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1CompilerAttrs.html" target="_self">CompilerAttrs</a></td><td class="desc">Options for the operators used to annotate a compiler </td></tr>
-<tr id="row_1_4_32_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ConcatenateAttrs.html" target="_self">ConcatenateAttrs</a></td><td class="desc">Attributes used in concatenate operators </td></tr>
-<tr id="row_1_4_33_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Constant.html" target="_self">Constant</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_34_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ConstantNode.html" target="_self">ConstantNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Constant.html">Constant</a> tensor type </td></tr>
-<tr id="row_1_4_35_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ConstantPattern.html" target="_self">ConstantPattern</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_36_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ConstantPatternNode.html" target="_self">ConstantPatternNode</a></td><td class="desc">Container for <a class="el" href="classtvm_1_1relay_1_1Constant.html">Constant</a> </td></tr>
-<tr id="row_1_4_37_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ConstructorValue.html" target="_self">ConstructorValue</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_38_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ConstructorValueObj.html" target="_self">ConstructorValueObj</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_39_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Conv1DAttrs.html" target="_self">Conv1DAttrs</a></td><td class="desc">Attributes used in 1D convolution operators </td></tr>
-<tr id="row_1_4_40_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Conv1DTransposeAttrs.html" target="_self">Conv1DTransposeAttrs</a></td><td class="desc">Attributes used in 1D transposed convolution operator </td></tr>
-<tr id="row_1_4_41_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Conv2DAttrs.html" target="_self">Conv2DAttrs</a></td><td class="desc">Attributes used in convolution operators </td></tr>
-<tr id="row_1_4_42_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Conv2DTransposeAttrs.html" target="_self">Conv2DTransposeAttrs</a></td><td class="desc">Attributes used in transposed convolution operator </td></tr>
-<tr id="row_1_4_43_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Conv2DWinogradAttrs.html" target="_self">Conv2DWinogradAttrs</a></td><td class="desc">Attributes used in convolution operators with winograd algorithm </td></tr>
-<tr id="row_1_4_44_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Conv2DWinogradNNPACKWeightTransformAttrs.html" target="_self">Conv2DWinogradNNPACKWeightTransformAttrs</a></td><td class="desc">Attributes used in winograd weight transformation operators </td></tr>
-<tr id="row_1_4_45_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Conv3DAttrs.html" target="_self">Conv3DAttrs</a></td><td class="desc">Attributes used in convolution operators </td></tr>
-<tr id="row_1_4_46_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Conv3DTransposeAttrs.html" target="_self">Conv3DTransposeAttrs</a></td><td class="desc">Attributes used in transposed convolution operator </td></tr>
-<tr id="row_1_4_47_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Conv3DWinogradAttrs.html" target="_self">Conv3DWinogradAttrs</a></td><td class="desc">Attributes used in 3d winograd convolution operators </td></tr>
-<tr id="row_1_4_48_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ConvGemmWeightTransformAttrs.html" target="_self">ConvGemmWeightTransformAttrs</a></td><td class="desc">Attributes used in gemm weight transformation operators </td></tr>
-<tr id="row_1_4_49_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ConvWinogradWeightTransformAttrs.html" target="_self">ConvWinogradWeightTransformAttrs</a></td><td class="desc">Attributes used in winograd weight transformation operators </td></tr>
-<tr id="row_1_4_50_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1CorrelationAttrs.html" target="_self">CorrelationAttrs</a></td><td class="desc">Attributes used in correlation operators </td></tr>
-<tr id="row_1_4_51_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1CropAndResizeAttrs.html" target="_self">CropAndResizeAttrs</a></td><td class="desc">Attributes used in image crop_and_resize operator </td></tr>
-<tr id="row_1_4_52_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DataTypePattern.html" target="_self">DataTypePattern</a></td><td class="desc">A pattern which matches a type in another pattern </td></tr>
-<tr id="row_1_4_53_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DataTypePatternNode.html" target="_self">DataTypePatternNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Pattern.html" title="Pattern is the base type for an ADT match pattern in Relay. ">Pattern</a> for Types </td></tr>
-<tr id="row_1_4_54_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1DebugAttrs.html" target="_self">DebugAttrs</a></td><td class="desc">Options for the debug operators </td></tr>
-<tr id="row_1_4_55_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1DeformableConv2DAttrs.html" target="_self">DeformableConv2DAttrs</a></td><td class="desc">Attributes for DeformableConv2D operator </td></tr>
-<tr id="row_1_4_56_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1DenseAttrs.html" target="_self">DenseAttrs</a></td><td class="desc">Attributes for dense operator </td></tr>
-<tr id="row_1_4_57_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1DeviceCopyAttrs.html" target="_self">DeviceCopyAttrs</a></td><td class="desc">Options for the device copy operators </td></tr>
-<tr id="row_1_4_58_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DFPattern.html" target="_self">DFPattern</a></td><td class="desc">Managed reference to dataflow patterns </td></tr>
-<tr id="row_1_4_59_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DFPatternCallback.html" target="_self">DFPatternCallback</a></td><td class="desc">Managed reference to dataflow pattern callbacks </td></tr>
-<tr id="row_1_4_60_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DFPatternCallbackNode.html" target="_self">DFPatternCallbackNode</a></td><td class="desc">Base type of all dataflow pattern callbacks </td></tr>
-<tr id="row_1_4_61_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DFPatternFunctor.html" target="_self">DFPatternFunctor</a></td><td class="desc">A dynamical functor that dispatches on in the first <a class="el" href="classtvm_1_1relay_1_1DFPattern.html" title="Managed reference to dataflow patterns. ">DFPattern</a> argument </ [...]
-<tr id="row_1_4_62_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DFPatternFunctor_3_01R_07const_01DFPattern_01_6n_00_01Args_8_8_8_08_4.html" target="_self">DFPatternFunctor&lt; R(const DFPattern &amp;n, Args...)&gt;</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_63_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DFPatternNode.html" target="_self">DFPatternNode</a></td><td class="desc">Base type of all dataflow patterns </td></tr>
-<tr id="row_1_4_64_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DFPatternVisitor.html" target="_self">DFPatternVisitor</a></td><td class="desc">A simple visitor wrapper around <a class="el" href="classtvm_1_1relay_1_1DFPatternFunctor.html" title="A dynamical functor that dispatches on in the first DFPattern argument. ">DFPatt [...]
-<tr id="row_1_4_65_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1DilateAttrs.html" target="_self">DilateAttrs</a></td><td class="desc">Attributes used in dilate operator </td></tr>
-<tr id="row_1_4_66_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Dilation2DAttrs.html" target="_self">Dilation2DAttrs</a></td><td class="desc">Attributes used in dilation operators </td></tr>
-<tr id="row_1_4_67_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DominatorPattern.html" target="_self">DominatorPattern</a></td><td class="desc">A pattern which matches a variable length dominator path </td></tr>
-<tr id="row_1_4_68_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DominatorPatternNode.html" target="_self">DominatorPatternNode</a></td><td class="desc">Dominated Graph <a class="el" href="classtvm_1_1relay_1_1Pattern.html" title="Pattern is the base type for an ADT match pattern in Relay. ">Pattern</a> <a class="el" href="cla [...]
-<tr id="row_1_4_69_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1DropoutAttrs.html" target="_self">DropoutAttrs</a></td><td class="desc">Attributes used in dropout operator </td></tr>
-<tr id="row_1_4_70_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ExpandDimsAttrs.html" target="_self">ExpandDimsAttrs</a></td><td class="desc">Attributes used in expand_dims operators </td></tr>
-<tr id="row_1_4_71_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ExprFunctor.html" target="_self">ExprFunctor</a></td><td class="desc">A dynamical functor that dispatches on in the first Expr argument. You can use this as a more powerful Visitor, since it allows you to define function signatures of Visit <a class="el" href="cl [...]
-<tr id="row_1_4_72_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ExprFunctor_3_01R_07const_01Expr_01_6n_00_01Args_8_8_8_08_4.html" target="_self">ExprFunctor&lt; R(const Expr &amp;n, Args...)&gt;</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_73_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ExprMutator.html" target="_self">ExprMutator</a></td><td class="desc">A wrapper around <a class="el" href="classtvm_1_1relay_1_1ExprFunctor.html" title="A dynamical functor that dispatches on in the first Expr argument. You can use this as a more powerfu...">Expr [...]
-<tr id="row_1_4_74_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ExprPattern.html" target="_self">ExprPattern</a></td><td class="desc">A pattern which matches a literal expression </td></tr>
-<tr id="row_1_4_75_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ExprPatternNode.html" target="_self">ExprPatternNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Pattern.html" title="Pattern is the base type for an ADT match pattern in Relay. ">Pattern</a> for Relay Expression </td></tr>
-<tr id="row_1_4_76_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ExprRewriter.html" target="_self">ExprRewriter</a></td><td class="desc">A non-iterating Expression Rewriter </td></tr>
-<tr id="row_1_4_77_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ExprVisitor.html" target="_self">ExprVisitor</a></td><td class="desc">A simple visitor wrapper around <a class="el" href="classtvm_1_1relay_1_1ExprFunctor.html" title="A dynamical functor that dispatches on in the first Expr argument. You can use this as a more p [...]
-<tr id="row_1_4_78_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1FeatureSet.html" target="_self">FeatureSet</a></td><td class="desc">A finite set of Feature </td></tr>
-<tr id="row_1_4_79_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1FIFOBufferAttrs.html" target="_self">FIFOBufferAttrs</a></td><td class="desc">Attributes for FIFO buffer operator </td></tr>
-<tr id="row_1_4_80_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1FixedPointMultiplyAttrs.html" target="_self">FixedPointMultiplyAttrs</a></td><td class="desc">Attributes for FixedPointMultiply operator </td></tr>
-<tr id="row_1_4_81_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Function.html" target="_self">Function</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1relay_1_1FunctionNode.html" title="Relay Function container. ">FunctionNode</a> </td></tr>
-<tr id="row_1_4_82_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1FunctionNode.html" target="_self">FunctionNode</a></td><td class="desc">Relay <a class="el" href="classtvm_1_1relay_1_1Function.html" title="Managed reference to FunctionNode. ">Function</a> container </td></tr>
-<tr id="row_1_4_83_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1GatherAttrs.html" target="_self">GatherAttrs</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_84_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1GetValidCountsAttrs.html" target="_self">GetValidCountsAttrs</a></td><td class="desc">Attributes used in get_valid_counts operator </td></tr>
-<tr id="row_1_4_85_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1GlobalPool2DAttrs.html" target="_self">GlobalPool2DAttrs</a></td><td class="desc">Attributes for global pool operator </td></tr>
-<tr id="row_1_4_86_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1GridSampleAttrs.html" target="_self">GridSampleAttrs</a></td><td class="desc">Attributes used in image grid_sample operator </td></tr>
-<tr id="row_1_4_87_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1GroupNormAttrs.html" target="_self">GroupNormAttrs</a></td><td class="desc">Attributes used in group_norm operator </td></tr>
-<tr id="row_1_4_88_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Id.html" target="_self">Id</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_89_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1IdNode.html" target="_self">IdNode</a></td><td class="desc">The unique identifier of variables </td></tr>
-<tr id="row_1_4_90_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1If.html" target="_self">If</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_91_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1IfNode.html" target="_self">IfNode</a></td><td class="desc">Container of <a class="el" href="classtvm_1_1relay_1_1If.html">If</a> </td></tr>
-<tr id="row_1_4_92_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1InitOpAttrs.html" target="_self">InitOpAttrs</a></td><td class="desc">Attributes that specify a tensor </td></tr>
-<tr id="row_1_4_93_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1InstanceNormAttrs.html" target="_self">InstanceNormAttrs</a></td><td class="desc">Attributes used in instance_norm operator </td></tr>
-<tr id="row_1_4_94_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1InterpreterClosure.html" target="_self">InterpreterClosure</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_95_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1InterpreterClosureObj.html" target="_self">InterpreterClosureObj</a></td><td class="desc">The container type of Closures used by the interpreter </td></tr>
-<tr id="row_1_4_96_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1L2NormalizeAttrs.html" target="_self">L2NormalizeAttrs</a></td><td class="desc">Attributes for L2Normalize operator </td></tr>
-<tr id="row_1_4_97_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1LayerNormAttrs.html" target="_self">LayerNormAttrs</a></td><td class="desc">Attributes used in layer_norm operator </td></tr>
-<tr id="row_1_4_98_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1LayoutTransformAttrs.html" target="_self">LayoutTransformAttrs</a></td><td class="desc">Attributes for LayoutTransform operator </td></tr>
-<tr id="row_1_4_99_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1LeakyReluAttrs.html" target="_self">LeakyReluAttrs</a></td><td class="desc">Attributes for leaky relu operator </td></tr>
-<tr id="row_1_4_100_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Let.html" target="_self">Let</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_101_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1LetNode.html" target="_self">LetNode</a></td><td class="desc">A binding of a sub-network </td></tr>
-<tr id="row_1_4_102_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1LRNAttrs.html" target="_self">LRNAttrs</a></td><td class="desc">Attributes for LRN operator </td></tr>
-<tr id="row_1_4_103_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Match.html" target="_self">Match</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_104_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1MatchNode.html" target="_self">MatchNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Match.html">Match</a> container node </td></tr>
-<tr id="row_1_4_105_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1MatrixSetDiagAttrs.html" target="_self">MatrixSetDiagAttrs</a></td><td class="desc">Attributes used in matrix_set_diag operator </td></tr>
-<tr id="row_1_4_106_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1MaxPool1DAttrs.html" target="_self">MaxPool1DAttrs</a></td><td class="desc">Attributes for 1D max pool operator </td></tr>
-<tr id="row_1_4_107_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1MaxPool2DAttrs.html" target="_self">MaxPool2DAttrs</a></td><td class="desc">Attributes for max pool operator </td></tr>
-<tr id="row_1_4_108_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1MaxPool3DAttrs.html" target="_self">MaxPool3DAttrs</a></td><td class="desc">Attributes for 3D max pool operator </td></tr>
-<tr id="row_1_4_109_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1MeshgridAttrs.html" target="_self">MeshgridAttrs</a></td><td class="desc">Attributes used in meshgrid operators </td></tr>
-<tr id="row_1_4_110_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1MirrorPadAttrs.html" target="_self">MirrorPadAttrs</a></td><td class="desc">Attributes used for the MirrorPadding operator </td></tr>
-<tr id="row_1_4_111_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1MixedModeMutator.html" target="_self">MixedModeMutator</a></td><td class="desc">Non-recursive DFS Graph Traversal for Custom Rewriting Passes </td></tr>
-<tr id="row_1_4_112_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1MixedModeVisitor.html" target="_self">MixedModeVisitor</a></td><td class="desc">A wrapper around <a class="el" href="classtvm_1_1relay_1_1ExprVisitor.html" title="A simple visitor wrapper around ExprFunctor. Recursively visit the content. ">ExprVisitor</a> which [...]
-<tr id="row_1_4_113_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1MultiBoxPriorAttrs.html" target="_self">MultiBoxPriorAttrs</a></td><td class="desc">Attributes used in multibox_prior operators </td></tr>
-<tr id="row_1_4_114_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1MultiBoxTransformLocAttrs.html" target="_self">MultiBoxTransformLocAttrs</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_115_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1NdarraySizeAttrs.html" target="_self">NdarraySizeAttrs</a></td><td class="desc">Attributes for ndarray_size operator </td></tr>
-<tr id="row_1_4_116_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1NonMaximumSuppressionAttrs.html" target="_self">NonMaximumSuppressionAttrs</a></td><td class="desc">Attributes used in non_maximum_suppression operator </td></tr>
-<tr id="row_1_4_117_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1OnDeviceAttrs.html" target="_self">OnDeviceAttrs</a></td><td class="desc">Options for the device annotation operators </td></tr>
-<tr id="row_1_4_118_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1OneHotAttrs.html" target="_self">OneHotAttrs</a></td><td class="desc">Attributes used in one-hot operator </td></tr>
-<tr id="row_1_4_119_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1OpImplementation.html" target="_self">OpImplementation</a></td><td class="desc">Operator implementation class </td></tr>
-<tr id="row_1_4_120_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1OpImplementationNode.html" target="_self">OpImplementationNode</a></td><td class="desc">Operator implementation that includes compute and schedule function </td></tr>
-<tr id="row_1_4_121_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1OpSpecialization.html" target="_self">OpSpecialization</a></td><td class="desc">Operator specialization class </td></tr>
-<tr id="row_1_4_122_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1OpSpecializationNode.html" target="_self">OpSpecializationNode</a></td><td class="desc">Specialized implementations for operators under certain conditions </td></tr>
-<tr id="row_1_4_123_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1OpStrategy.html" target="_self">OpStrategy</a></td><td class="desc">Operator strategy class </td></tr>
-<tr id="row_1_4_124_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1OpStrategyNode.html" target="_self">OpStrategyNode</a></td><td class="desc">Operator strategy to choose implementation </td></tr>
-<tr id="row_1_4_125_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1PadAttrs.html" target="_self">PadAttrs</a></td><td class="desc">Attributes used for the padding operator </td></tr>
-<tr id="row_1_4_126_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Pattern.html" target="_self">Pattern</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Pattern.html" title="Pattern is the base type for an ADT match pattern in Relay. ">Pattern</a> is the base type for an ADT match pattern in Relay </td></tr>
-<tr id="row_1_4_127_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternConstructor.html" target="_self">PatternConstructor</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_128_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternConstructorNode.html" target="_self">PatternConstructorNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1PatternVar.html">PatternVar</a> container node </td></tr>
-<tr id="row_1_4_129_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternFunctor.html" target="_self">PatternFunctor</a></td><td class="desc">A dynamical functor on ADT patterns that dispatches on its first argument. You can use this as a more powerful visitor, since it allows you to define the types of further arguments to Vi [...]
-<tr id="row_1_4_130_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternFunctor_3_01R_07const_01Pattern_01_6n_00_01Args_8_8_8_08_4.html" target="_self">PatternFunctor&lt; R(const Pattern &amp;n, Args...)&gt;</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_131_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternMutator.html" target="_self">PatternMutator</a></td><td class="desc">A wrapper around <a class="el" href="classtvm_1_1relay_1_1ExprFunctor.html" title="A dynamical functor that dispatches on in the first Expr argument. You can use this as a more powerfu.. [...]
-<tr id="row_1_4_132_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternNode.html" target="_self">PatternNode</a></td><td class="desc">Base type for declaring relay pattern </td></tr>
-<tr id="row_1_4_133_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternTuple.html" target="_self">PatternTuple</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_134_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternTupleNode.html" target="_self">PatternTupleNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1PatternVar.html">PatternVar</a> container node </td></tr>
-<tr id="row_1_4_135_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternVar.html" target="_self">PatternVar</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_136_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternVarNode.html" target="_self">PatternVarNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1PatternVar.html">PatternVar</a> container node </td></tr>
-<tr id="row_1_4_137_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternVisitor.html" target="_self">PatternVisitor</a></td><td class="desc">A simple visitor wrapper around <a class="el" href="classtvm_1_1relay_1_1PatternFunctor.html" title="A dynamical functor on ADT patterns that dispatches on its first argument. You can us [...]
-<tr id="row_1_4_138_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternWildcard.html" target="_self">PatternWildcard</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_139_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternWildcardNode.html" target="_self">PatternWildcardNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1PatternWildcard.html">PatternWildcard</a> container node </td></tr>
-<tr id="row_1_4_140_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1PReluAttrs.html" target="_self">PReluAttrs</a></td><td class="desc">Attributes for prelu operator </td></tr>
-<tr id="row_1_4_141_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ProposalAttrs.html" target="_self">ProposalAttrs</a></td><td class="desc">Attributes used in proposal operators </td></tr>
-<tr id="row_1_4_142_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RecClosure.html" target="_self">RecClosure</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_143_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RecClosureObj.html" target="_self">RecClosureObj</a></td><td class="desc">The container type of <a class="el" href="classtvm_1_1relay_1_1RecClosure.html">RecClosure</a> </td></tr>
-<tr id="row_1_4_144_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ReduceAttrs.html" target="_self">ReduceAttrs</a></td><td class="desc">Attributes for Reduce operators </td></tr>
-<tr id="row_1_4_145_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RefCreate.html" target="_self">RefCreate</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_146_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RefCreateNode.html" target="_self">RefCreateNode</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_147_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RefRead.html" target="_self">RefRead</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_148_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RefReadNode.html" target="_self">RefReadNode</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_149_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RefValue.html" target="_self">RefValue</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_150_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1RefValueObj.html" target="_self">RefValueObj</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_151_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RefWrite.html" target="_self">RefWrite</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_152_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RefWriteNode.html" target="_self">RefWriteNode</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_153_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RelayNode.html" target="_self">RelayNode</a></td><td class="desc">This is the base node container of all relay structures </td></tr>
-<tr id="row_1_4_154_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1RepeatAttrs.html" target="_self">RepeatAttrs</a></td><td class="desc">Attributes used in repeat operators </td></tr>
-<tr id="row_1_4_155_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ReshapeAttrs.html" target="_self">ReshapeAttrs</a></td><td class="desc">Attributes used in reshape operators </td></tr>
-<tr id="row_1_4_156_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ReshapeLikeAttrs.html" target="_self">ReshapeLikeAttrs</a></td><td class="desc">Attributes used in MXNet-style reshape_like operators </td></tr>
-<tr id="row_1_4_157_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ReshapeTensorAttrs.html" target="_self">ReshapeTensorAttrs</a></td><td class="desc">Attributes for VM reshape_tensor operator </td></tr>
-<tr id="row_1_4_158_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Resize3dAttrs.html" target="_self">Resize3dAttrs</a></td><td class="desc">Attributes used in image resize3d operator </td></tr>
-<tr id="row_1_4_159_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ResizeAttrs.html" target="_self">ResizeAttrs</a></td><td class="desc">Attributes used in image resize operator </td></tr>
-<tr id="row_1_4_160_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ReverseAttrs.html" target="_self">ReverseAttrs</a></td><td class="desc">Attributes used in reverse operators </td></tr>
-<tr id="row_1_4_161_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ReverseSequenceAttrs.html" target="_self">ReverseSequenceAttrs</a></td><td class="desc">Attributes used in reverse_sequence operators </td></tr>
-<tr id="row_1_4_162_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ROIAlignAttrs.html" target="_self">ROIAlignAttrs</a></td><td class="desc">Attributes used in roi_align operators </td></tr>
-<tr id="row_1_4_163_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ROIPoolAttrs.html" target="_self">ROIPoolAttrs</a></td><td class="desc">Attributes used in roi_pool operators </td></tr>
-<tr id="row_1_4_164_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ScatterAddAttrs.html" target="_self">ScatterAddAttrs</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_165_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ScatterAttrs.html" target="_self">ScatterAttrs</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_166_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ScatterNDAttrs.html" target="_self">ScatterNDAttrs</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_167_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SequenceMaskAttrs.html" target="_self">SequenceMaskAttrs</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_168_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ShapeFuncAttrs.html" target="_self">ShapeFuncAttrs</a></td><td class="desc">Options for the shape function operator </td></tr>
-<tr id="row_1_4_169_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ShapeOfAttrs.html" target="_self">ShapeOfAttrs</a></td><td class="desc">Attributes for ShapeOf operator </td></tr>
-<tr id="row_1_4_170_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ShapePattern.html" target="_self">ShapePattern</a></td><td class="desc">A pattern which matches a type in another pattern </td></tr>
-<tr id="row_1_4_171_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ShapePatternNode.html" target="_self">ShapePatternNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Pattern.html" title="Pattern is the base type for an ADT match pattern in Relay. ">Pattern</a> for Shapes </td></tr>
-<tr id="row_1_4_172_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SliceLikeAttrs.html" target="_self">SliceLikeAttrs</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_173_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SoftmaxAttrs.html" target="_self">SoftmaxAttrs</a></td><td class="desc">Attributes used in softmax operators </td></tr>
-<tr id="row_1_4_174_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SpaceToBatchNDAttrs.html" target="_self">SpaceToBatchNDAttrs</a></td><td class="desc">Attributes used in SpaceToBatchND operator </td></tr>
-<tr id="row_1_4_175_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SparseDenseAttrs.html" target="_self">SparseDenseAttrs</a></td><td class="desc">Attributes for sparse_dense operator </td></tr>
-<tr id="row_1_4_176_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SparseToDenseAttrs.html" target="_self">SparseToDenseAttrs</a></td><td class="desc">Attributes used in sparse_to_dense operator </td></tr>
-<tr id="row_1_4_177_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SparseTransposeAttrs.html" target="_self">SparseTransposeAttrs</a></td><td class="desc">Attributes for sparse_transpose operator </td></tr>
-<tr id="row_1_4_178_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SplitAttrs.html" target="_self">SplitAttrs</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_179_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SqueezeAttrs.html" target="_self">SqueezeAttrs</a></td><td class="desc">Attributes used in squeeze operators </td></tr>
-<tr id="row_1_4_180_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1StackAttrs.html" target="_self">StackAttrs</a></td><td class="desc">Attributes used in stack operators </td></tr>
-<tr id="row_1_4_181_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1StridedSliceAttrs.html" target="_self">StridedSliceAttrs</a></td><td class="desc">Attributes for StridedSlice operator </td></tr>
-<tr id="row_1_4_182_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SubPixelAttrs.html" target="_self">SubPixelAttrs</a></td><td class="desc">Attributes used in subpixel operators </td></tr>
-<tr id="row_1_4_183_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1TakeAttrs.html" target="_self">TakeAttrs</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_184_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TempExpr.html" target="_self">TempExpr</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_185_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TempExprNode.html" target="_self">TempExprNode</a></td><td class="desc">Base class of the temporary expression </td></tr>
-<tr id="row_1_4_186_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1TileAttrs.html" target="_self">TileAttrs</a></td><td class="desc">Attributes used in tile operators </td></tr>
-<tr id="row_1_4_187_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1TopKAttrs.html" target="_self">TopKAttrs</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_188_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1TransposeAttrs.html" target="_self">TransposeAttrs</a></td><td class="desc">Attributes used in transpose operators </td></tr>
-<tr id="row_1_4_189_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Tuple.html" target="_self">Tuple</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_190_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TupleGetItem.html" target="_self">TupleGetItem</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_191_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TupleGetItemNode.html" target="_self">TupleGetItemNode</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_192_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TupleGetItemPattern.html" target="_self">TupleGetItemPattern</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_193_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TupleGetItemPatternNode.html" target="_self">TupleGetItemPatternNode</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_194_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TupleNode.html" target="_self">TupleNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Tuple.html">Tuple</a> container </td></tr>
-<tr id="row_1_4_195_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TuplePattern.html" target="_self">TuplePattern</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_196_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TuplePatternNode.html" target="_self">TuplePatternNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Tuple.html">Tuple</a> container </td></tr>
-<tr id="row_1_4_197_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TypePattern.html" target="_self">TypePattern</a></td><td class="desc">A pattern which matches a type in another pattern </td></tr>
-<tr id="row_1_4_198_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TypePatternNode.html" target="_self">TypePatternNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Pattern.html" title="Pattern is the base type for an ADT match pattern in Relay. ">Pattern</a> for Types </td></tr>
-<tr id="row_1_4_199_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1UpSampling3DAttrs.html" target="_self">UpSampling3DAttrs</a></td><td class="desc">Attributes for upsampling3d operator </td></tr>
-<tr id="row_1_4_200_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1UpSamplingAttrs.html" target="_self">UpSamplingAttrs</a></td><td class="desc">Attributes for upsampling operator </td></tr>
-<tr id="row_1_4_201_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Var.html" target="_self">Var</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_202_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1VarianceAttrs.html" target="_self">VarianceAttrs</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_203_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1VarNode.html" target="_self">VarNode</a></td><td class="desc">Container for <a class="el" href="classtvm_1_1relay_1_1Var.html">Var</a> </td></tr>
-<tr id="row_1_4_204_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1VarPattern.html" target="_self">VarPattern</a></td><td class="desc"></td></tr>
-<tr id="row_1_4_205_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1VarPatternNode.html" target="_self">VarPatternNode</a></td><td class="desc">Container for <a class="el" href="classtvm_1_1relay_1_1Var.html">Var</a> </td></tr>
-<tr id="row_1_4_206_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1WildcardPattern.html" target="_self">WildcardPattern</a></td><td class="desc">A pattern which matches anything </td></tr>
-<tr id="row_1_4_207_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1WildcardPatternNode.html" target="_self">WildcardPatternNode</a></td><td class="desc">Wildcard <a class="el" href="classtvm_1_1relay_1_1Pattern.html" title="Pattern is the base type for an ADT match pattern in Relay. ">Pattern</a> </td></tr>
-<tr id="row_1_4_208_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1YoloReorgAttrs.html" target="_self">YoloReorgAttrs</a></td><td class="desc">Attributes used in yolo reorg operators </td></tr>
+<tr id="row_1_4_16_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1BatchMatmulAttrs.html" target="_self">BatchMatmulAttrs</a></td><td class="desc">Attributes for batch matmul operator </td></tr>
+<tr id="row_1_4_17_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1BatchNormAttrs.html" target="_self">BatchNormAttrs</a></td><td class="desc">Attributes used in batch_norm operator </td></tr>
+<tr id="row_1_4_18_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1BatchToSpaceNDAttrs.html" target="_self">BatchToSpaceNDAttrs</a></td><td class="desc">Attributes used in BatchToSpaceND operator </td></tr>
+<tr id="row_1_4_19_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1BiasAddAttrs.html" target="_self">BiasAddAttrs</a></td><td class="desc">Add a 1D Tensor to an axis of a data </td></tr>
+<tr id="row_1_4_20_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1BinaryConv2DAttrs.html" target="_self">BinaryConv2DAttrs</a></td><td class="desc">Attribues used in bitserial convolution operators </td></tr>
+<tr id="row_1_4_21_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1BinaryDenseAttrs.html" target="_self">BinaryDenseAttrs</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_22_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1BitPackAttrs.html" target="_self">BitPackAttrs</a></td><td class="desc">Attributes used in bitpack operators </td></tr>
+<tr id="row_1_4_23_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Call.html" target="_self">Call</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_24_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1CallNode.html" target="_self">CallNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Call.html">Call</a> container </td></tr>
+<tr id="row_1_4_25_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1CallPattern.html" target="_self">CallPattern</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_26_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1CallPatternNode.html" target="_self">CallPatternNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1CallPattern.html">CallPattern</a> container </td></tr>
+<tr id="row_1_4_27_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1CastAttrs.html" target="_self">CastAttrs</a></td><td class="desc">Data type cast </td></tr>
+<tr id="row_1_4_28_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1CastHintAttrs.html" target="_self">CastHintAttrs</a></td><td class="desc">Annotate an expression to be cast into specific data type </td></tr>
+<tr id="row_1_4_29_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Clause.html" target="_self">Clause</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_30_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ClauseNode.html" target="_self">ClauseNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Clause.html">Clause</a> container node </td></tr>
+<tr id="row_1_4_31_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ClipAttrs.html" target="_self">ClipAttrs</a></td><td class="desc">Attributes for Clip operator </td></tr>
+<tr id="row_1_4_32_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1CompilerAttrs.html" target="_self">CompilerAttrs</a></td><td class="desc">Options for the operators used to annotate a compiler </td></tr>
+<tr id="row_1_4_33_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ConcatenateAttrs.html" target="_self">ConcatenateAttrs</a></td><td class="desc">Attributes used in concatenate operators </td></tr>
+<tr id="row_1_4_34_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Constant.html" target="_self">Constant</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_35_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ConstantNode.html" target="_self">ConstantNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Constant.html">Constant</a> tensor type </td></tr>
+<tr id="row_1_4_36_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ConstantPattern.html" target="_self">ConstantPattern</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_37_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ConstantPatternNode.html" target="_self">ConstantPatternNode</a></td><td class="desc">Container for <a class="el" href="classtvm_1_1relay_1_1Constant.html">Constant</a> </td></tr>
+<tr id="row_1_4_38_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ConstructorValue.html" target="_self">ConstructorValue</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_39_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ConstructorValueObj.html" target="_self">ConstructorValueObj</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_40_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Conv1DAttrs.html" target="_self">Conv1DAttrs</a></td><td class="desc">Attributes used in 1D convolution operators </td></tr>
+<tr id="row_1_4_41_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Conv1DTransposeAttrs.html" target="_self">Conv1DTransposeAttrs</a></td><td class="desc">Attributes used in 1D transposed convolution operator </td></tr>
+<tr id="row_1_4_42_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Conv2DAttrs.html" target="_self">Conv2DAttrs</a></td><td class="desc">Attributes used in convolution operators </td></tr>
+<tr id="row_1_4_43_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Conv2DTransposeAttrs.html" target="_self">Conv2DTransposeAttrs</a></td><td class="desc">Attributes used in transposed convolution operator </td></tr>
+<tr id="row_1_4_44_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Conv2DWinogradAttrs.html" target="_self">Conv2DWinogradAttrs</a></td><td class="desc">Attributes used in convolution operators with winograd algorithm </td></tr>
+<tr id="row_1_4_45_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Conv2DWinogradNNPACKWeightTransformAttrs.html" target="_self">Conv2DWinogradNNPACKWeightTransformAttrs</a></td><td class="desc">Attributes used in winograd weight transformation operators </td></tr>
+<tr id="row_1_4_46_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Conv3DAttrs.html" target="_self">Conv3DAttrs</a></td><td class="desc">Attributes used in convolution operators </td></tr>
+<tr id="row_1_4_47_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Conv3DTransposeAttrs.html" target="_self">Conv3DTransposeAttrs</a></td><td class="desc">Attributes used in transposed convolution operator </td></tr>
+<tr id="row_1_4_48_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Conv3DWinogradAttrs.html" target="_self">Conv3DWinogradAttrs</a></td><td class="desc">Attributes used in 3d winograd convolution operators </td></tr>
+<tr id="row_1_4_49_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ConvGemmWeightTransformAttrs.html" target="_self">ConvGemmWeightTransformAttrs</a></td><td class="desc">Attributes used in gemm weight transformation operators </td></tr>
+<tr id="row_1_4_50_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ConvWinogradWeightTransformAttrs.html" target="_self">ConvWinogradWeightTransformAttrs</a></td><td class="desc">Attributes used in winograd weight transformation operators </td></tr>
+<tr id="row_1_4_51_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1CorrelationAttrs.html" target="_self">CorrelationAttrs</a></td><td class="desc">Attributes used in correlation operators </td></tr>
+<tr id="row_1_4_52_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1CropAndResizeAttrs.html" target="_self">CropAndResizeAttrs</a></td><td class="desc">Attributes used in image crop_and_resize operator </td></tr>
+<tr id="row_1_4_53_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DataTypePattern.html" target="_self">DataTypePattern</a></td><td class="desc">A pattern which matches a type in another pattern </td></tr>
+<tr id="row_1_4_54_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DataTypePatternNode.html" target="_self">DataTypePatternNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Pattern.html" title="Pattern is the base type for an ADT match pattern in Relay. ">Pattern</a> for Types </td></tr>
+<tr id="row_1_4_55_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1DebugAttrs.html" target="_self">DebugAttrs</a></td><td class="desc">Options for the debug operators </td></tr>
+<tr id="row_1_4_56_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1DeformableConv2DAttrs.html" target="_self">DeformableConv2DAttrs</a></td><td class="desc">Attributes for DeformableConv2D operator </td></tr>
+<tr id="row_1_4_57_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1DenseAttrs.html" target="_self">DenseAttrs</a></td><td class="desc">Attributes for dense operator </td></tr>
+<tr id="row_1_4_58_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1DeviceCopyAttrs.html" target="_self">DeviceCopyAttrs</a></td><td class="desc">Options for the device copy operators </td></tr>
+<tr id="row_1_4_59_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DFPattern.html" target="_self">DFPattern</a></td><td class="desc">Managed reference to dataflow patterns </td></tr>
+<tr id="row_1_4_60_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DFPatternCallback.html" target="_self">DFPatternCallback</a></td><td class="desc">Managed reference to dataflow pattern callbacks </td></tr>
+<tr id="row_1_4_61_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DFPatternCallbackNode.html" target="_self">DFPatternCallbackNode</a></td><td class="desc">Base type of all dataflow pattern callbacks </td></tr>
+<tr id="row_1_4_62_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DFPatternFunctor.html" target="_self">DFPatternFunctor</a></td><td class="desc">A dynamical functor that dispatches on in the first <a class="el" href="classtvm_1_1relay_1_1DFPattern.html" title="Managed reference to dataflow patterns. ">DFPattern</a> argument </ [...]
+<tr id="row_1_4_63_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DFPatternFunctor_3_01R_07const_01DFPattern_01_6n_00_01Args_8_8_8_08_4.html" target="_self">DFPatternFunctor&lt; R(const DFPattern &amp;n, Args...)&gt;</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_64_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DFPatternNode.html" target="_self">DFPatternNode</a></td><td class="desc">Base type of all dataflow patterns </td></tr>
+<tr id="row_1_4_65_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DFPatternVisitor.html" target="_self">DFPatternVisitor</a></td><td class="desc">A simple visitor wrapper around <a class="el" href="classtvm_1_1relay_1_1DFPatternFunctor.html" title="A dynamical functor that dispatches on in the first DFPattern argument. ">DFPatt [...]
+<tr id="row_1_4_66_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1DilateAttrs.html" target="_self">DilateAttrs</a></td><td class="desc">Attributes used in dilate operator </td></tr>
+<tr id="row_1_4_67_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Dilation2DAttrs.html" target="_self">Dilation2DAttrs</a></td><td class="desc">Attributes used in dilation operators </td></tr>
+<tr id="row_1_4_68_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DominatorPattern.html" target="_self">DominatorPattern</a></td><td class="desc">A pattern which matches a variable length dominator path </td></tr>
+<tr id="row_1_4_69_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1DominatorPatternNode.html" target="_self">DominatorPatternNode</a></td><td class="desc">Dominated Graph <a class="el" href="classtvm_1_1relay_1_1Pattern.html" title="Pattern is the base type for an ADT match pattern in Relay. ">Pattern</a> <a class="el" href="cla [...]
+<tr id="row_1_4_70_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1DropoutAttrs.html" target="_self">DropoutAttrs</a></td><td class="desc">Attributes used in dropout operator </td></tr>
+<tr id="row_1_4_71_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ExpandDimsAttrs.html" target="_self">ExpandDimsAttrs</a></td><td class="desc">Attributes used in expand_dims operators </td></tr>
+<tr id="row_1_4_72_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ExprFunctor.html" target="_self">ExprFunctor</a></td><td class="desc">A dynamical functor that dispatches on in the first Expr argument. You can use this as a more powerful Visitor, since it allows you to define function signatures of Visit <a class="el" href="cl [...]
+<tr id="row_1_4_73_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ExprFunctor_3_01R_07const_01Expr_01_6n_00_01Args_8_8_8_08_4.html" target="_self">ExprFunctor&lt; R(const Expr &amp;n, Args...)&gt;</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_74_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ExprMutator.html" target="_self">ExprMutator</a></td><td class="desc">A wrapper around <a class="el" href="classtvm_1_1relay_1_1ExprFunctor.html" title="A dynamical functor that dispatches on in the first Expr argument. You can use this as a more powerfu...">Expr [...]
+<tr id="row_1_4_75_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ExprPattern.html" target="_self">ExprPattern</a></td><td class="desc">A pattern which matches a literal expression </td></tr>
+<tr id="row_1_4_76_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ExprPatternNode.html" target="_self">ExprPatternNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Pattern.html" title="Pattern is the base type for an ADT match pattern in Relay. ">Pattern</a> for Relay Expression </td></tr>
+<tr id="row_1_4_77_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ExprRewriter.html" target="_self">ExprRewriter</a></td><td class="desc">A non-iterating Expression Rewriter </td></tr>
+<tr id="row_1_4_78_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ExprVisitor.html" target="_self">ExprVisitor</a></td><td class="desc">A simple visitor wrapper around <a class="el" href="classtvm_1_1relay_1_1ExprFunctor.html" title="A dynamical functor that dispatches on in the first Expr argument. You can use this as a more p [...]
+<tr id="row_1_4_79_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1FeatureSet.html" target="_self">FeatureSet</a></td><td class="desc">A finite set of Feature </td></tr>
+<tr id="row_1_4_80_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1FIFOBufferAttrs.html" target="_self">FIFOBufferAttrs</a></td><td class="desc">Attributes for FIFO buffer operator </td></tr>
+<tr id="row_1_4_81_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1FixedPointMultiplyAttrs.html" target="_self">FixedPointMultiplyAttrs</a></td><td class="desc">Attributes for FixedPointMultiply operator </td></tr>
+<tr id="row_1_4_82_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Function.html" target="_self">Function</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1relay_1_1FunctionNode.html" title="Relay Function container. ">FunctionNode</a> </td></tr>
+<tr id="row_1_4_83_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1FunctionNode.html" target="_self">FunctionNode</a></td><td class="desc">Relay <a class="el" href="classtvm_1_1relay_1_1Function.html" title="Managed reference to FunctionNode. ">Function</a> container </td></tr>
+<tr id="row_1_4_84_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1FunctionPattern.html" target="_self">FunctionPattern</a></td><td class="desc">Managed reference to <a class="el" href="classtvm_1_1relay_1_1FunctionNode.html" title="Relay Function container. ">FunctionNode</a> </td></tr>
+<tr id="row_1_4_85_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1FunctionPatternNode.html" target="_self">FunctionPatternNode</a></td><td class="desc">Relay <a class="el" href="classtvm_1_1relay_1_1Function.html" title="Managed reference to FunctionNode. ">Function</a> container </td></tr>
+<tr id="row_1_4_86_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1GatherAttrs.html" target="_self">GatherAttrs</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_87_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1GetValidCountsAttrs.html" target="_self">GetValidCountsAttrs</a></td><td class="desc">Attributes used in get_valid_counts operator </td></tr>
+<tr id="row_1_4_88_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1GlobalPool2DAttrs.html" target="_self">GlobalPool2DAttrs</a></td><td class="desc">Attributes for global pool operator </td></tr>
+<tr id="row_1_4_89_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1GridSampleAttrs.html" target="_self">GridSampleAttrs</a></td><td class="desc">Attributes used in image grid_sample operator </td></tr>
+<tr id="row_1_4_90_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1GroupNormAttrs.html" target="_self">GroupNormAttrs</a></td><td class="desc">Attributes used in group_norm operator </td></tr>
+<tr id="row_1_4_91_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Id.html" target="_self">Id</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_92_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1IdNode.html" target="_self">IdNode</a></td><td class="desc">The unique identifier of variables </td></tr>
+<tr id="row_1_4_93_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1If.html" target="_self">If</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_94_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1IfNode.html" target="_self">IfNode</a></td><td class="desc">Container of <a class="el" href="classtvm_1_1relay_1_1If.html">If</a> </td></tr>
+<tr id="row_1_4_95_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1InitOpAttrs.html" target="_self">InitOpAttrs</a></td><td class="desc">Attributes that specify a tensor </td></tr>
+<tr id="row_1_4_96_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1InstanceNormAttrs.html" target="_self">InstanceNormAttrs</a></td><td class="desc">Attributes used in instance_norm operator </td></tr>
+<tr id="row_1_4_97_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1InterpreterClosure.html" target="_self">InterpreterClosure</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_98_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1InterpreterClosureObj.html" target="_self">InterpreterClosureObj</a></td><td class="desc">The container type of Closures used by the interpreter </td></tr>
+<tr id="row_1_4_99_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1L2NormalizeAttrs.html" target="_self">L2NormalizeAttrs</a></td><td class="desc">Attributes for L2Normalize operator </td></tr>
+<tr id="row_1_4_100_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1LayerNormAttrs.html" target="_self">LayerNormAttrs</a></td><td class="desc">Attributes used in layer_norm operator </td></tr>
+<tr id="row_1_4_101_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1LayoutTransformAttrs.html" target="_self">LayoutTransformAttrs</a></td><td class="desc">Attributes for LayoutTransform operator </td></tr>
+<tr id="row_1_4_102_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1LeakyReluAttrs.html" target="_self">LeakyReluAttrs</a></td><td class="desc">Attributes for leaky relu operator </td></tr>
+<tr id="row_1_4_103_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Let.html" target="_self">Let</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_104_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1LetNode.html" target="_self">LetNode</a></td><td class="desc">A binding of a sub-network </td></tr>
+<tr id="row_1_4_105_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1LRNAttrs.html" target="_self">LRNAttrs</a></td><td class="desc">Attributes for LRN operator </td></tr>
+<tr id="row_1_4_106_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Match.html" target="_self">Match</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_107_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1MatchNode.html" target="_self">MatchNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Match.html">Match</a> container node </td></tr>
+<tr id="row_1_4_108_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1MatrixSetDiagAttrs.html" target="_self">MatrixSetDiagAttrs</a></td><td class="desc">Attributes used in matrix_set_diag operator </td></tr>
+<tr id="row_1_4_109_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1MaxPool1DAttrs.html" target="_self">MaxPool1DAttrs</a></td><td class="desc">Attributes for 1D max pool operator </td></tr>
+<tr id="row_1_4_110_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1MaxPool2DAttrs.html" target="_self">MaxPool2DAttrs</a></td><td class="desc">Attributes for max pool operator </td></tr>
+<tr id="row_1_4_111_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1MaxPool3DAttrs.html" target="_self">MaxPool3DAttrs</a></td><td class="desc">Attributes for 3D max pool operator </td></tr>
+<tr id="row_1_4_112_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1MeshgridAttrs.html" target="_self">MeshgridAttrs</a></td><td class="desc">Attributes used in meshgrid operators </td></tr>
+<tr id="row_1_4_113_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1MirrorPadAttrs.html" target="_self">MirrorPadAttrs</a></td><td class="desc">Attributes used for the MirrorPadding operator </td></tr>
+<tr id="row_1_4_114_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1MixedModeMutator.html" target="_self">MixedModeMutator</a></td><td class="desc">Non-recursive DFS Graph Traversal for Custom Rewriting Passes </td></tr>
+<tr id="row_1_4_115_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1MixedModeVisitor.html" target="_self">MixedModeVisitor</a></td><td class="desc">A wrapper around <a class="el" href="classtvm_1_1relay_1_1ExprVisitor.html" title="A simple visitor wrapper around ExprFunctor. Recursively visit the content. ">ExprVisitor</a> which [...]
+<tr id="row_1_4_116_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1MultiBoxPriorAttrs.html" target="_self">MultiBoxPriorAttrs</a></td><td class="desc">Attributes used in multibox_prior operators </td></tr>
+<tr id="row_1_4_117_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1MultiBoxTransformLocAttrs.html" target="_self">MultiBoxTransformLocAttrs</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_118_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1NdarraySizeAttrs.html" target="_self">NdarraySizeAttrs</a></td><td class="desc">Attributes for ndarray_size operator </td></tr>
+<tr id="row_1_4_119_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1NonMaximumSuppressionAttrs.html" target="_self">NonMaximumSuppressionAttrs</a></td><td class="desc">Attributes used in non_maximum_suppression operator </td></tr>
+<tr id="row_1_4_120_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1OnDeviceAttrs.html" target="_self">OnDeviceAttrs</a></td><td class="desc">Options for the device annotation operators </td></tr>
+<tr id="row_1_4_121_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1OneHotAttrs.html" target="_self">OneHotAttrs</a></td><td class="desc">Attributes used in one-hot operator </td></tr>
+<tr id="row_1_4_122_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1OpImplementation.html" target="_self">OpImplementation</a></td><td class="desc">Operator implementation class </td></tr>
+<tr id="row_1_4_123_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1OpImplementationNode.html" target="_self">OpImplementationNode</a></td><td class="desc">Operator implementation that includes compute and schedule function </td></tr>
+<tr id="row_1_4_124_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1OpSpecialization.html" target="_self">OpSpecialization</a></td><td class="desc">Operator specialization class </td></tr>
+<tr id="row_1_4_125_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1OpSpecializationNode.html" target="_self">OpSpecializationNode</a></td><td class="desc">Specialized implementations for operators under certain conditions </td></tr>
+<tr id="row_1_4_126_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1OpStrategy.html" target="_self">OpStrategy</a></td><td class="desc">Operator strategy class </td></tr>
+<tr id="row_1_4_127_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1OpStrategyNode.html" target="_self">OpStrategyNode</a></td><td class="desc">Operator strategy to choose implementation </td></tr>
+<tr id="row_1_4_128_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1PadAttrs.html" target="_self">PadAttrs</a></td><td class="desc">Attributes used for the padding operator </td></tr>
+<tr id="row_1_4_129_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Pattern.html" target="_self">Pattern</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Pattern.html" title="Pattern is the base type for an ADT match pattern in Relay. ">Pattern</a> is the base type for an ADT match pattern in Relay </td></tr>
+<tr id="row_1_4_130_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternConstructor.html" target="_self">PatternConstructor</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_131_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternConstructorNode.html" target="_self">PatternConstructorNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1PatternVar.html">PatternVar</a> container node </td></tr>
+<tr id="row_1_4_132_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternFunctor.html" target="_self">PatternFunctor</a></td><td class="desc">A dynamical functor on ADT patterns that dispatches on its first argument. You can use this as a more powerful visitor, since it allows you to define the types of further arguments to Vi [...]
+<tr id="row_1_4_133_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternFunctor_3_01R_07const_01Pattern_01_6n_00_01Args_8_8_8_08_4.html" target="_self">PatternFunctor&lt; R(const Pattern &amp;n, Args...)&gt;</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_134_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternMutator.html" target="_self">PatternMutator</a></td><td class="desc">A wrapper around <a class="el" href="classtvm_1_1relay_1_1ExprFunctor.html" title="A dynamical functor that dispatches on in the first Expr argument. You can use this as a more powerfu.. [...]
+<tr id="row_1_4_135_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternNode.html" target="_self">PatternNode</a></td><td class="desc">Base type for declaring relay pattern </td></tr>
+<tr id="row_1_4_136_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternTuple.html" target="_self">PatternTuple</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_137_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternTupleNode.html" target="_self">PatternTupleNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1PatternVar.html">PatternVar</a> container node </td></tr>
+<tr id="row_1_4_138_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternVar.html" target="_self">PatternVar</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_139_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternVarNode.html" target="_self">PatternVarNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1PatternVar.html">PatternVar</a> container node </td></tr>
+<tr id="row_1_4_140_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternVisitor.html" target="_self">PatternVisitor</a></td><td class="desc">A simple visitor wrapper around <a class="el" href="classtvm_1_1relay_1_1PatternFunctor.html" title="A dynamical functor on ADT patterns that dispatches on its first argument. You can us [...]
+<tr id="row_1_4_141_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternWildcard.html" target="_self">PatternWildcard</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_142_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1PatternWildcardNode.html" target="_self">PatternWildcardNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1PatternWildcard.html">PatternWildcard</a> container node </td></tr>
+<tr id="row_1_4_143_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1PReluAttrs.html" target="_self">PReluAttrs</a></td><td class="desc">Attributes for prelu operator </td></tr>
+<tr id="row_1_4_144_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ProposalAttrs.html" target="_self">ProposalAttrs</a></td><td class="desc">Attributes used in proposal operators </td></tr>
+<tr id="row_1_4_145_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RecClosure.html" target="_self">RecClosure</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_146_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RecClosureObj.html" target="_self">RecClosureObj</a></td><td class="desc">The container type of <a class="el" href="classtvm_1_1relay_1_1RecClosure.html">RecClosure</a> </td></tr>
+<tr id="row_1_4_147_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ReduceAttrs.html" target="_self">ReduceAttrs</a></td><td class="desc">Attributes for Reduce operators </td></tr>
+<tr id="row_1_4_148_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RefCreate.html" target="_self">RefCreate</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_149_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RefCreateNode.html" target="_self">RefCreateNode</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_150_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RefRead.html" target="_self">RefRead</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_151_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RefReadNode.html" target="_self">RefReadNode</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_152_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RefValue.html" target="_self">RefValue</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_153_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1RefValueObj.html" target="_self">RefValueObj</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_154_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RefWrite.html" target="_self">RefWrite</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_155_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RefWriteNode.html" target="_self">RefWriteNode</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_156_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1RelayNode.html" target="_self">RelayNode</a></td><td class="desc">This is the base node container of all relay structures </td></tr>
+<tr id="row_1_4_157_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1RepeatAttrs.html" target="_self">RepeatAttrs</a></td><td class="desc">Attributes used in repeat operators </td></tr>
+<tr id="row_1_4_158_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ReshapeAttrs.html" target="_self">ReshapeAttrs</a></td><td class="desc">Attributes used in reshape operators </td></tr>
+<tr id="row_1_4_159_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ReshapeLikeAttrs.html" target="_self">ReshapeLikeAttrs</a></td><td class="desc">Attributes used in MXNet-style reshape_like operators </td></tr>
+<tr id="row_1_4_160_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ReshapeTensorAttrs.html" target="_self">ReshapeTensorAttrs</a></td><td class="desc">Attributes for VM reshape_tensor operator </td></tr>
+<tr id="row_1_4_161_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1Resize3dAttrs.html" target="_self">Resize3dAttrs</a></td><td class="desc">Attributes used in image resize3d operator </td></tr>
+<tr id="row_1_4_162_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ResizeAttrs.html" target="_self">ResizeAttrs</a></td><td class="desc">Attributes used in image resize operator </td></tr>
+<tr id="row_1_4_163_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ReverseAttrs.html" target="_self">ReverseAttrs</a></td><td class="desc">Attributes used in reverse operators </td></tr>
+<tr id="row_1_4_164_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ReverseSequenceAttrs.html" target="_self">ReverseSequenceAttrs</a></td><td class="desc">Attributes used in reverse_sequence operators </td></tr>
+<tr id="row_1_4_165_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ROIAlignAttrs.html" target="_self">ROIAlignAttrs</a></td><td class="desc">Attributes used in roi_align operators </td></tr>
+<tr id="row_1_4_166_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ROIPoolAttrs.html" target="_self">ROIPoolAttrs</a></td><td class="desc">Attributes used in roi_pool operators </td></tr>
+<tr id="row_1_4_167_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ScatterAddAttrs.html" target="_self">ScatterAddAttrs</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_168_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ScatterAttrs.html" target="_self">ScatterAttrs</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_169_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ScatterNDAttrs.html" target="_self">ScatterNDAttrs</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_170_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SequenceMaskAttrs.html" target="_self">SequenceMaskAttrs</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_171_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ShapeFuncAttrs.html" target="_self">ShapeFuncAttrs</a></td><td class="desc">Options for the shape function operator </td></tr>
+<tr id="row_1_4_172_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1ShapeOfAttrs.html" target="_self">ShapeOfAttrs</a></td><td class="desc">Attributes for ShapeOf operator </td></tr>
+<tr id="row_1_4_173_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ShapePattern.html" target="_self">ShapePattern</a></td><td class="desc">A pattern which matches a type in another pattern </td></tr>
+<tr id="row_1_4_174_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1ShapePatternNode.html" target="_self">ShapePatternNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Pattern.html" title="Pattern is the base type for an ADT match pattern in Relay. ">Pattern</a> for Shapes </td></tr>
+<tr id="row_1_4_175_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SliceLikeAttrs.html" target="_self">SliceLikeAttrs</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_176_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SoftmaxAttrs.html" target="_self">SoftmaxAttrs</a></td><td class="desc">Attributes used in softmax operators </td></tr>
+<tr id="row_1_4_177_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SpaceToBatchNDAttrs.html" target="_self">SpaceToBatchNDAttrs</a></td><td class="desc">Attributes used in SpaceToBatchND operator </td></tr>
+<tr id="row_1_4_178_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SparseDenseAttrs.html" target="_self">SparseDenseAttrs</a></td><td class="desc">Attributes for sparse_dense operator </td></tr>
+<tr id="row_1_4_179_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SparseToDenseAttrs.html" target="_self">SparseToDenseAttrs</a></td><td class="desc">Attributes used in sparse_to_dense operator </td></tr>
+<tr id="row_1_4_180_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SparseTransposeAttrs.html" target="_self">SparseTransposeAttrs</a></td><td class="desc">Attributes for sparse_transpose operator </td></tr>
+<tr id="row_1_4_181_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SplitAttrs.html" target="_self">SplitAttrs</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_182_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SqueezeAttrs.html" target="_self">SqueezeAttrs</a></td><td class="desc">Attributes used in squeeze operators </td></tr>
+<tr id="row_1_4_183_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1StackAttrs.html" target="_self">StackAttrs</a></td><td class="desc">Attributes used in stack operators </td></tr>
+<tr id="row_1_4_184_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1StridedSliceAttrs.html" target="_self">StridedSliceAttrs</a></td><td class="desc">Attributes for StridedSlice operator </td></tr>
+<tr id="row_1_4_185_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1SubPixelAttrs.html" target="_self">SubPixelAttrs</a></td><td class="desc">Attributes used in subpixel operators </td></tr>
+<tr id="row_1_4_186_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1TakeAttrs.html" target="_self">TakeAttrs</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_187_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TempExpr.html" target="_self">TempExpr</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_188_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TempExprNode.html" target="_self">TempExprNode</a></td><td class="desc">Base class of the temporary expression </td></tr>
+<tr id="row_1_4_189_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1TileAttrs.html" target="_self">TileAttrs</a></td><td class="desc">Attributes used in tile operators </td></tr>
+<tr id="row_1_4_190_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1TopKAttrs.html" target="_self">TopKAttrs</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_191_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1TransposeAttrs.html" target="_self">TransposeAttrs</a></td><td class="desc">Attributes used in transpose operators </td></tr>
+<tr id="row_1_4_192_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Tuple.html" target="_self">Tuple</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_193_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TupleGetItem.html" target="_self">TupleGetItem</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_194_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TupleGetItemNode.html" target="_self">TupleGetItemNode</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_195_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TupleGetItemPattern.html" target="_self">TupleGetItemPattern</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_196_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TupleGetItemPatternNode.html" target="_self">TupleGetItemPatternNode</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_197_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TupleNode.html" target="_self">TupleNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Tuple.html">Tuple</a> container </td></tr>
+<tr id="row_1_4_198_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TuplePattern.html" target="_self">TuplePattern</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_199_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TuplePatternNode.html" target="_self">TuplePatternNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Tuple.html">Tuple</a> container </td></tr>
+<tr id="row_1_4_200_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TypePattern.html" target="_self">TypePattern</a></td><td class="desc">A pattern which matches a type in another pattern </td></tr>
+<tr id="row_1_4_201_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1TypePatternNode.html" target="_self">TypePatternNode</a></td><td class="desc"><a class="el" href="classtvm_1_1relay_1_1Pattern.html" title="Pattern is the base type for an ADT match pattern in Relay. ">Pattern</a> for Types </td></tr>
+<tr id="row_1_4_202_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1UpSampling3DAttrs.html" target="_self">UpSampling3DAttrs</a></td><td class="desc">Attributes for upsampling3d operator </td></tr>
+<tr id="row_1_4_203_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1UpSamplingAttrs.html" target="_self">UpSamplingAttrs</a></td><td class="desc">Attributes for upsampling operator </td></tr>
+<tr id="row_1_4_204_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1Var.html" target="_self">Var</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_205_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1VarianceAttrs.html" target="_self">VarianceAttrs</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_206_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1VarNode.html" target="_self">VarNode</a></td><td class="desc">Container for <a class="el" href="classtvm_1_1relay_1_1Var.html">Var</a> </td></tr>
+<tr id="row_1_4_207_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1VarPattern.html" target="_self">VarPattern</a></td><td class="desc"></td></tr>
+<tr id="row_1_4_208_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1VarPatternNode.html" target="_self">VarPatternNode</a></td><td class="desc">Container for <a class="el" href="classtvm_1_1relay_1_1Var.html">Var</a> </td></tr>
+<tr id="row_1_4_209_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1WildcardPattern.html" target="_self">WildcardPattern</a></td><td class="desc">A pattern which matches anything </td></tr>
+<tr id="row_1_4_210_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1relay_1_1WildcardPatternNode.html" target="_self">WildcardPatternNode</a></td><td class="desc">Wildcard <a class="el" href="classtvm_1_1relay_1_1Pattern.html" title="Pattern is the base type for an ADT match pattern in Relay. ">Pattern</a> </td></tr>
+<tr id="row_1_4_211_" class="even" style="display:none;"><td class="entry"><span style="width:48px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structtvm_1_1relay_1_1YoloReorgAttrs.html" target="_self">YoloReorgAttrs</a></td><td class="desc">Attributes used in yolo reorg operators </td></tr>
 <tr id="row_1_5_" class="even" style="display:none;"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span id="arr_1_5_" class="arrow" onclick="toggleFolder('1_5_')">&#9658;</span><span class="icona"><span class="icon">N</span></span><a class="el" href="namespacetvm_1_1runtime.html" target="_self">runtime</a></td><td class="desc"></td></tr>
 <tr id="row_1_5_0_" class="even" style="display:none;"><td class="entry"><span style="width:32px;display:inline-block;">&#160;</span><span id="arr_1_5_0_" class="arrow" onclick="toggleFolder('1_5_0_')">&#9658;</span><span class="icona"><span class="icon">N</span></span><a class="el" href="namespacetvm_1_1runtime_1_1micro__rpc.html" target="_self">micro_rpc</a></td><td class="desc"></td></tr>
 <tr id="row_1_5_0_0_" class="even" style="display:none;"><td class="entry"><span style="width:64px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1runtime_1_1micro__rpc_1_1FrameBuffer.html" target="_self">FrameBuffer</a></td><td class="desc"></td></tr>
@@ -853,16 +858,17 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <tr id="row_1_126_" class="even" style="display:none;"><td class="entry"><span style="width:32px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1TypeVarNode.html" target="_self">TypeVarNode</a></td><td class="desc"><a class="el" href="classtvm_1_1Type.html" title="Managed reference to TypeNode. ">Type</a> parameter in functions </td></tr>
 <tr id="row_1_127_" class="even" style="display:none;"><td class="entry"><span style="width:32px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1TypeVisitor.html" target="_self">TypeVisitor</a></td><td class="desc">A type visitor that recursively visit types </td></tr>
 <tr id="row_1_128_" class="even" style="display:none;"><td class="entry"><span style="width:32px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="classtvm_1_1With.html" target="_self">With</a></td><td class="desc">RAII wrapper function to enter and exit a context object similar to python's with syntax </td></tr>
-<tr id="row_2_" class="even"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structTVMArgs.html" target="_self">TVMArgs</a></td><td class="desc"></td></tr>
-<tr id="row_3_"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structTVMByteArray.html" target="_self">TVMByteArray</a></td><td class="desc">Byte array type used to pass in byte array When kTVMBytes is used as data type </td></tr>
-<tr id="row_4_" class="even"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structTVMFuncRegistry.html" target="_self">TVMFuncRegistry</a></td><td class="desc">A data structure that facilitates function lookup by C-string name </td></tr>
-<tr id="row_5_"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structTVMGraphRuntimeGraphAttr.html" target="_self">TVMGraphRuntimeGraphAttr</a></td><td class="desc"></td></tr>
-<tr id="row_6_" class="even"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structTVMModule.html" target="_self">TVMModule</a></td><td class="desc">Module container of TVM </td></tr>
-<tr id="row_7_"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structTVMMutableFuncRegistry.html" target="_self">TVMMutableFuncRegistry</a></td><td class="desc">A <a class="el" href="structTVMFuncRegistry.html" title="A data structure that facilitates function lookup by C-string name. ">TVMFuncRegistry</a> that supports adding and changing the functions </td></tr>
-<tr id="row_8_" class="even"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structTVMOpParam.html" target="_self">TVMOpParam</a></td><td class="desc">Operator attributes about tvm op </td></tr>
-<tr id="row_9_"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structTVMPackedFunc.html" target="_self">TVMPackedFunc</a></td><td class="desc"></td></tr>
-<tr id="row_10_" class="even"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structTVMParallelGroupEnv.html" target="_self">TVMParallelGroupEnv</a></td><td class="desc">Environment for TVM parallel task </td></tr>
-<tr id="row_11_"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="unionTVMValue.html" target="_self">TVMValue</a></td><td class="desc">Union type of values being passed through API and function calls </td></tr>
+<tr id="row_2_" class="even"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structMemoryManagerInterface.html" target="_self">MemoryManagerInterface</a></td><td class="desc"></td></tr>
+<tr id="row_3_"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structTVMArgs.html" target="_self">TVMArgs</a></td><td class="desc"></td></tr>
+<tr id="row_4_" class="even"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structTVMByteArray.html" target="_self">TVMByteArray</a></td><td class="desc">Byte array type used to pass in byte array When kTVMBytes is used as data type </td></tr>
+<tr id="row_5_"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structTVMFuncRegistry.html" target="_self">TVMFuncRegistry</a></td><td class="desc">A data structure that facilitates function lookup by C-string name </td></tr>
+<tr id="row_6_" class="even"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structTVMGraphRuntimeGraphAttr.html" target="_self">TVMGraphRuntimeGraphAttr</a></td><td class="desc"></td></tr>
+<tr id="row_7_"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structTVMModule.html" target="_self">TVMModule</a></td><td class="desc">Module container of TVM </td></tr>
+<tr id="row_8_" class="even"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structTVMMutableFuncRegistry.html" target="_self">TVMMutableFuncRegistry</a></td><td class="desc">A <a class="el" href="structTVMFuncRegistry.html" title="A data structure that facilitates function lookup by C-string name. ">TVMFuncRegistry</a> that supports adding and changing the functions </td></tr>
+<tr id="row_9_"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structTVMOpParam.html" target="_self">TVMOpParam</a></td><td class="desc">Operator attributes about tvm op </td></tr>
+<tr id="row_10_" class="even"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structTVMPackedFunc.html" target="_self">TVMPackedFunc</a></td><td class="desc"></td></tr>
+<tr id="row_11_"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="structTVMParallelGroupEnv.html" target="_self">TVMParallelGroupEnv</a></td><td class="desc">Environment for TVM parallel task </td></tr>
+<tr id="row_12_" class="even"><td class="entry"><span style="width:16px;display:inline-block;">&#160;</span><span class="icona"><span class="icon">C</span></span><a class="el" href="unionTVMValue.html" target="_self">TVMValue</a></td><td class="desc">Union type of values being passed through API and function calls </td></tr>
 </table>
 </div><!-- directory -->
 </div><!-- contents -->
diff --git a/docs/api/doxygen/auto__schedule_8h_source.html b/docs/api/doxygen/auto__schedule_8h_source.html
index 7438027..a71b867 100644
--- a/docs/api/doxygen/auto__schedule_8h_source.html
+++ b/docs/api/doxygen/auto__schedule_8h_source.html
@@ -97,7 +97,7 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <div class="ttc" id="classtvm_1_1auto__scheduler_1_1TuningOptionsNode_html_a60515c17b530f5c8b806930469bdd22c"><div class="ttname"><a href="classtvm_1_1auto__scheduler_1_1TuningOptionsNode.html#a60515c17b530f5c8b806930469bdd22c">tvm::auto_scheduler::TuningOptionsNode::VisitAttrs</a></div><div class="ttdeci">void VisitAttrs(tvm::AttrVisitor *v)</div><div class="ttdef"><b>Definition:</b> auto_schedule.h:54</div></div>
 <div class="ttc" id="classtvm_1_1auto__scheduler_1_1TuningOptionsNode_html"><div class="ttname"><a href="classtvm_1_1auto__scheduler_1_1TuningOptionsNode.html">tvm::auto_scheduler::TuningOptionsNode</a></div><div class="ttdoc">Tuning and measurement options. </div><div class="ttdef"><b>Definition:</b> auto_schedule.h:37</div></div>
 <div class="ttc" id="classtvm_1_1auto__scheduler_1_1TuningOptionsNode_html_a64abc6eeb90309a01a0a9d415aaf26f4"><div class="ttname"><a href="classtvm_1_1auto__scheduler_1_1TuningOptionsNode.html#a64abc6eeb90309a01a0a9d415aaf26f4">tvm::auto_scheduler::TuningOptionsNode::TVM_DECLARE_FINAL_OBJECT_INFO</a></div><div class="ttdeci">TVM_DECLARE_FINAL_OBJECT_INFO(TuningOptionsNode, Object)</div></div>
-<div class="ttc" id="classtvm_1_1auto__scheduler_1_1ProgramRunner_html"><div class="ttname"><a href="classtvm_1_1auto__scheduler_1_1ProgramRunner.html">tvm::auto_scheduler::ProgramRunner</a></div><div class="ttdoc">Managed reference to ProgramRunnerNode. </div><div class="ttdef"><b>Definition:</b> measure.h:302</div></div>
+<div class="ttc" id="classtvm_1_1auto__scheduler_1_1ProgramRunner_html"><div class="ttname"><a href="classtvm_1_1auto__scheduler_1_1ProgramRunner.html">tvm::auto_scheduler::ProgramRunner</a></div><div class="ttdoc">Managed reference to ProgramRunnerNode. </div><div class="ttdef"><b>Definition:</b> measure.h:331</div></div>
 <div class="ttc" id="classtvm_1_1AttrVisitor_html"><div class="ttname"><a href="classtvm_1_1AttrVisitor.html">tvm::AttrVisitor</a></div><div class="ttdoc">Visitor class for to get the attributesof a AST/IR node. The content is going to be called for each f...</div><div class="ttdef"><b>Definition:</b> reflection.h:52</div></div>
 <div class="ttc" id="classtvm_1_1auto__scheduler_1_1TuningOptionsNode_html_ade4f4a9a5dda968e8a81a43c8d8b597e"><div class="ttname"><a href="classtvm_1_1auto__scheduler_1_1TuningOptionsNode.html#ade4f4a9a5dda968e8a81a43c8d8b597e">tvm::auto_scheduler::TuningOptionsNode::_type_key</a></div><div class="ttdeci">static constexpr const char * _type_key</div><div class="ttdef"><b>Definition:</b> auto_schedule.h:64</div></div>
 <div class="ttc" id="measure_8h_html"><div class="ttname"><a href="measure_8h.html">measure.h</a></div><div class="ttdoc">Distributed measurement infrastructure to measure the runtime costs of tensor programs. These functions are responsible for building the tvm module, uploading it to remote devices, recording the running time costs, and checking the correctness of the output. </div></div>
@@ -105,7 +105,7 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <div class="ttc" id="classtvm_1_1auto__scheduler_1_1SearchPolicy_html"><div class="ttname"><a href="classtvm_1_1auto__scheduler_1_1SearchPolicy.html">tvm::auto_scheduler::SearchPolicy</a></div><div class="ttdoc">Managed reference to SearchPolicyNode. </div><div class="ttdef"><b>Definition:</b> search_policy.h:198</div></div>
 <div class="ttc" id="classtvm_1_1auto__scheduler_1_1TuningOptionsNode_html_af6f3c49598d8377e02df4cd4d43ce732"><div class="ttname"><a href="classtvm_1_1auto__scheduler_1_1TuningOptionsNode.html#af6f3c49598d8377e02df4cd4d43ce732">tvm::auto_scheduler::TuningOptionsNode::num_measure_trials</a></div><div class="ttdeci">int num_measure_trials</div><div class="ttdoc">The number of total measurement trials. </div><div class="ttdef"><b>Definition:</b> auto_schedule.h:40</div></div>
 <div class="ttc" id="classtvm_1_1auto__scheduler_1_1TuningOptionsNode_html_a355d86b2c38f0827ae1b158753d1daa2"><div class="ttname"><a href="classtvm_1_1auto__scheduler_1_1TuningOptionsNode.html#a355d86b2c38f0827ae1b158753d1daa2">tvm::auto_scheduler::TuningOptionsNode::runner</a></div><div class="ttdeci">ProgramRunner runner</div><div class="ttdoc">ProgramRunner which runs the program and measures time costs. </div><div class="ttdef"><b>Definition:</b> auto_schedule.h:50</div></div>
-<div class="ttc" id="classtvm_1_1auto__scheduler_1_1ProgramBuilder_html"><div class="ttname"><a href="classtvm_1_1auto__scheduler_1_1ProgramBuilder.html">tvm::auto_scheduler::ProgramBuilder</a></div><div class="ttdoc">Managed reference to ProgramBuilderNode. </div><div class="ttdef"><b>Definition:</b> measure.h:262</div></div>
+<div class="ttc" id="classtvm_1_1auto__scheduler_1_1ProgramBuilder_html"><div class="ttname"><a href="classtvm_1_1auto__scheduler_1_1ProgramBuilder.html">tvm::auto_scheduler::ProgramBuilder</a></div><div class="ttdoc">Managed reference to ProgramBuilderNode. </div><div class="ttdef"><b>Definition:</b> measure.h:291</div></div>
 <div class="ttc" id="classtvm_1_1auto__scheduler_1_1TuningOptionsNode_html_a3ba0915aabeb33a6adbcdceb4e6a43b9"><div class="ttname"><a href="classtvm_1_1auto__scheduler_1_1TuningOptionsNode.html#a3ba0915aabeb33a6adbcdceb4e6a43b9">tvm::auto_scheduler::TuningOptionsNode::measure_callbacks</a></div><div class="ttdeci">Optional&lt; Array&lt; MeasureCallback &gt; &gt; measure_callbacks</div><div class="ttdoc">MeasureCallback functions to be called after each measure batch. </div><div class="ttd [...]
 <div class="ttc" id="namespacetvm_1_1auto__scheduler_html_a1d6d14e885abf0c3b91e6b3cf542c2de"><div class="ttname"><a href="namespacetvm_1_1auto__scheduler.html#a1d6d14e885abf0c3b91e6b3cf542c2de">tvm::auto_scheduler::AutoSchedule</a></div><div class="ttdeci">std::pair&lt; te::Schedule, Array&lt; te::Tensor &gt; &gt; AutoSchedule(SearchPolicy search_policy, TuningOptions tuning_options)</div><div class="ttdoc">Run schedule search for a given compute declaration. </div></div>
 <div class="ttc" id="classtvm_1_1auto__scheduler_1_1TuningOptions_html"><div class="ttname"><a href="classtvm_1_1auto__scheduler_1_1TuningOptions.html">tvm::auto_scheduler::TuningOptions</a></div><div class="ttdoc">Managed reference to TuningOptionsNode. </div><div class="ttdef"><b>Definition:</b> auto_schedule.h:72</div></div>
diff --git a/docs/api/doxygen/builtin_8h.html b/docs/api/doxygen/builtin_8h.html
index 0c520c2..2f9354c 100644
--- a/docs/api/doxygen/builtin_8h.html
+++ b/docs/api/doxygen/builtin_8h.html
@@ -303,6 +303,9 @@ Functions</h2></td></tr>
 <tr class="memitem:a30dff65bc2c142b57fae7f60e378ff43"><td class="memItemLeft" align="right" valign="top">const Op &amp;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="namespacetvm_1_1tir_1_1builtin.html#a30dff65bc2c142b57fae7f60e378ff43">tvm::tir::builtin::vectorcombine</a> ()</td></tr>
 <tr class="memdesc:a30dff65bc2c142b57fae7f60e378ff43"><td class="mdescLeft">&#160;</td><td class="mdescRight">Concat two vectors.  <a href="namespacetvm_1_1tir_1_1builtin.html#a30dff65bc2c142b57fae7f60e378ff43">More...</a><br /></td></tr>
 <tr class="separator:a30dff65bc2c142b57fae7f60e378ff43"><td class="memSeparator" colspan="2">&#160;</td></tr>
+<tr class="memitem:ab4a648f6e7451af295688f243a215cd7"><td class="memItemLeft" align="right" valign="top">const Op &amp;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="namespacetvm_1_1tir_1_1builtin.html#ab4a648f6e7451af295688f243a215cd7">tvm::tir::builtin::atomic_add</a> ()</td></tr>
+<tr class="memdesc:ab4a648f6e7451af295688f243a215cd7"><td class="mdescLeft">&#160;</td><td class="mdescRight">atomic add instruction, corresponding e.g. to atomicAdd in CUDA  <a href="namespacetvm_1_1tir_1_1builtin.html#ab4a648f6e7451af295688f243a215cd7">More...</a><br /></td></tr>
+<tr class="separator:ab4a648f6e7451af295688f243a215cd7"><td class="memSeparator" colspan="2">&#160;</td></tr>
 </table>
 <a name="details" id="details"></a><h2 class="groupheader">Detailed Description</h2>
 <div class="textblock"><p>TIR builtin intrinsics. </p>
diff --git a/docs/api/doxygen/builtin_8h_source.html b/docs/api/doxygen/builtin_8h_source.html
index 45ca5d0..6a7873b 100644
--- a/docs/api/doxygen/builtin_8h_source.html
+++ b/docs/api/doxygen/builtin_8h_source.html
@@ -89,44 +89,45 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <div class="title">builtin.h</div>  </div>
 </div><!--header-->
 <div class="contents">
-<a href="builtin_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno">    1</span>&#160;<span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno">    2</span>&#160;<span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno">    3</span>&#160;<span class="comment"> * or more con [...]
-<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1cad28cfc7b69fd8745e12a4f0024d6942a"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1cad28cfc7b69fd8745e12a4f0024d6942a">tvm::tir::builtin::kArrNDim</a></div><div class="ttdef"><b>Definition:</b> builtin.h:559</div></div>
-<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1ca0b8af30aa268164148d5bfe1d8c2ba54"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1ca0b8af30aa268164148d5bfe1d8c2ba54">tvm::tir::builtin::kArrAddr</a></div><div class="ttdef"><b>Definition:</b> builtin.h:555</div></div>
+<a href="builtin_8h.html">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno">    1</span>&#160;<span class="comment">/*</span></div><div class="line"><a name="l00002"></a><span class="lineno">    2</span>&#160;<span class="comment"> * Licensed to the Apache Software Foundation (ASF) under one</span></div><div class="line"><a name="l00003"></a><span class="lineno">    3</span>&#160;<span class="comment"> * or more con [...]
+<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1cad28cfc7b69fd8745e12a4f0024d6942a"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1cad28cfc7b69fd8745e12a4f0024d6942a">tvm::tir::builtin::kArrNDim</a></div><div class="ttdef"><b>Definition:</b> builtin.h:564</div></div>
+<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1ca0b8af30aa268164148d5bfe1d8c2ba54"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1ca0b8af30aa268164148d5bfe1d8c2ba54">tvm::tir::builtin::kArrAddr</a></div><div class="ttdef"><b>Definition:</b> builtin.h:560</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a925a45e5bb05e0cbf2daf2ffdbdcf53a"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a925a45e5bb05e0cbf2daf2ffdbdcf53a">tvm::tir::builtin::tvm_storage_sync</a></div><div class="ttdeci">const Op &amp; tvm_storage_sync()</div><div class="ttdoc">See pseudo code. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a15c5e0e0478e0ebff91690f60992cf3f"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a15c5e0e0478e0ebff91690f60992cf3f">tvm::tir::builtin::tvm_stack_alloca</a></div><div class="ttdeci">const Op &amp; tvm_stack_alloca()</div><div class="ttdoc">See pesudo code. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_aa1d19e758595200998a4e1ea39767b6b"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#aa1d19e758595200998a4e1ea39767b6b">tvm::tir::builtin::tvm_thread_allreduce</a></div><div class="ttdeci">const Op &amp; tvm_thread_allreduce()</div><div class="ttdoc">See pesudo code. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_aca44a85c87273dfab1731421f4edd2bf"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#aca44a85c87273dfab1731421f4edd2bf">tvm::tir::builtin::tvm_warp_shuffle</a></div><div class="ttdeci">const Op &amp; tvm_warp_shuffle()</div><div class="ttdoc">See pseudo code. </div></div>
 <div class="ttc" id="namespacetvm_html"><div class="ttname"><a href="namespacetvm.html">tvm</a></div><div class="ttdef"><b>Definition:</b> analyzer.h:36</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a83892dca19e44a96752625c65c38d645"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a83892dca19e44a96752625c65c38d645">tvm::tir::builtin::call_llvm_intrin</a></div><div class="ttdeci">const Op &amp; call_llvm_intrin()</div><div class="ttdoc">Call an LLVM intrinsic with a given intrinsic id and signature from the types of args in the runtime ...</div></div>
-<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1cabf798b873c868b7d77ced30c9907037d"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1cabf798b873c868b7d77ced30c9907037d">tvm::tir::builtin::kArrDeviceType</a></div><div class="ttdef"><b>Definition:</b> builtin.h:565</div></div>
+<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1cabf798b873c868b7d77ced30c9907037d"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1cabf798b873c868b7d77ced30c9907037d">tvm::tir::builtin::kArrDeviceType</a></div><div class="ttdef"><b>Definition:</b> builtin.h:570</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a1e15b04fe89f7899e09e528946aa5bb4"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a1e15b04fe89f7899e09e528946aa5bb4">tvm::tir::builtin::fma</a></div><div class="ttdeci">const Op &amp; fma()</div><div class="ttdoc">Fused multiply add. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a8e3504415c78f3f8fd719a21e5280b38"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a8e3504415c78f3f8fd719a21e5280b38">tvm::tir::builtin::call_llvm_pure_intrin</a></div><div class="ttdeci">const Op &amp; call_llvm_pure_intrin()</div><div class="ttdoc">Call an LLVM pure intrinsic with a given intrinsic id and signature from the types of args in the run...</div></div>
+<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ab4a648f6e7451af295688f243a215cd7"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ab4a648f6e7451af295688f243a215cd7">tvm::tir::builtin::atomic_add</a></div><div class="ttdeci">const Op &amp; atomic_add()</div><div class="ttdoc">atomic add instruction, corresponding e.g. to atomicAdd in CUDA </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ac8e7bc86b8fa81453291ae5299062001"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ac8e7bc86b8fa81453291ae5299062001">tvm::tir::builtin::tvm_global_barrier_kinit</a></div><div class="ttdeci">const Op &amp; tvm_global_barrier_kinit()</div><div class="ttdoc">Initialize the global barrier. Call this at beginning of kernel that need global barrier. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a30dff65bc2c142b57fae7f60e378ff43"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a30dff65bc2c142b57fae7f60e378ff43">tvm::tir::builtin::vectorcombine</a></div><div class="ttdeci">const Op &amp; vectorcombine()</div><div class="ttdoc">Concat two vectors. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ae0470bd69bb03047aae4cb52e1e6e337"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ae0470bd69bb03047aae4cb52e1e6e337">tvm::tir::builtin::tvm_warp_shuffle_up</a></div><div class="ttdeci">const Op &amp; tvm_warp_shuffle_up()</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ac54288cc9f1fee8c26db9bd87ac320ee"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ac54288cc9f1fee8c26db9bd87ac320ee">tvm::tir::builtin::tvm_call_trace_packed</a></div><div class="ttdeci">const Op &amp; tvm_call_trace_packed()</div><div class="ttdoc">See pesudo code. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_afc81da8cbcd7f34ec5e1e80d837ca265"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#afc81da8cbcd7f34ec5e1e80d837ca265">tvm::tir::builtin::tvm_store_matrix_sync</a></div><div class="ttdeci">const Op &amp; tvm_store_matrix_sync()</div><div class="ttdoc">tvm intrinsic for tensor core store operators. </div></div>
-<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1ca96e7b6492b5b174219cf60e19af0857c"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1ca96e7b6492b5b174219cf60e19af0857c">tvm::tir::builtin::kArrStrides</a></div><div class="ttdef"><b>Definition:</b> builtin.h:558</div></div>
+<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1ca96e7b6492b5b174219cf60e19af0857c"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1ca96e7b6492b5b174219cf60e19af0857c">tvm::tir::builtin::kArrStrides</a></div><div class="ttdef"><b>Definition:</b> builtin.h:563</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ac4887bd93ad67619ad290a33e2bdd340"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ac4887bd93ad67619ad290a33e2bdd340">tvm::tir::builtin::call_spirv_pure_glsl450</a></div><div class="ttdeci">const Op &amp; call_spirv_pure_glsl450()</div><div class="ttdoc">Call an SPIRV pure GLSL450 intrinsic. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a6be181be34fba13d129aadc6c9a23f73"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a6be181be34fba13d129aadc6c9a23f73">tvm::tir::builtin::tvm_thread_context</a></div><div class="ttdeci">const Op &amp; tvm_thread_context()</div><div class="ttdoc">See pesudo code Mark the content as thread local context, can get optimized by only call the call onc...</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a700b7018f2c1f1fba8b4e28f264d8bbb"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a700b7018f2c1f1fba8b4e28f264d8bbb">tvm::tir::builtin::address_of</a></div><div class="ttdeci">const Op &amp; address_of()</div><div class="ttdoc">See pesudo code. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a091ef99dc63f6945588dbb81c968ca15"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a091ef99dc63f6945588dbb81c968ca15">tvm::tir::builtin::bitwise_not</a></div><div class="ttdeci">const Op &amp; bitwise_not()</div><div class="ttdoc">Bitwise not operator. </div></div>
-<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1cafdb925cdf50f17a2b96c7ac4faefa1fb"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1cafdb925cdf50f17a2b96c7ac4faefa1fb">tvm::tir::builtin::kArrByteOffset</a></div><div class="ttdef"><b>Definition:</b> builtin.h:563</div></div>
+<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1cafdb925cdf50f17a2b96c7ac4faefa1fb"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1cafdb925cdf50f17a2b96c7ac4faefa1fb">tvm::tir::builtin::kArrByteOffset</a></div><div class="ttdef"><b>Definition:</b> builtin.h:568</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a0c2ebdcec34d7c79dc8480e5dab8547a"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a0c2ebdcec34d7c79dc8480e5dab8547a">tvm::tir::builtin::q_multiply_shift</a></div><div class="ttdeci">const Op &amp; q_multiply_shift()</div><div class="ttdoc">Execute a multiplication between two Q-numbers x and y followed by a right shift s The default roundi...</div></div>
 <div class="ttc" id="ir_2op_8h_html"><div class="ttname"><a href="ir_2op_8h.html">op.h</a></div><div class="ttdoc">Primitive operators(builtin intrinsics) and registry for them. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a7ed64a9fb0a7f575fc63e1e0395e96a6"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a7ed64a9fb0a7f575fc63e1e0395e96a6">tvm::tir::builtin::vectorlow</a></div><div class="ttdeci">const Op &amp; vectorlow()</div><div class="ttdoc">Get the low-level half of the vector. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a0cbd267877168afd5bbea35f0e5d70fe"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a0cbd267877168afd5bbea35f0e5d70fe">tvm::tir::builtin::tvm_mma_sync</a></div><div class="ttdeci">const Op &amp; tvm_mma_sync()</div><div class="ttdoc">tvm intrinsic for tensor core mma_sync operators. </div></div>
-<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1ca344dc1f419339b81024d4d3628083a1e"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1ca344dc1f419339b81024d4d3628083a1e">tvm::tir::builtin::kArrTypeBits</a></div><div class="ttdef"><b>Definition:</b> builtin.h:561</div></div>
+<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1ca344dc1f419339b81024d4d3628083a1e"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1ca344dc1f419339b81024d4d3628083a1e">tvm::tir::builtin::kArrTypeBits</a></div><div class="ttdef"><b>Definition:</b> builtin.h:566</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a7b555bc5cca2f5e7b26c1037bc0001ce"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a7b555bc5cca2f5e7b26c1037bc0001ce">tvm::tir::builtin::reinterpret</a></div><div class="ttdeci">const Op &amp; reinterpret()</div><div class="ttdoc">Reinterpret the value using the target type. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ae2add6e324d391782d367360a68ccf51"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ae2add6e324d391782d367360a68ccf51">tvm::tir::builtin::call_pure_extern</a></div><div class="ttdeci">const Op &amp; call_pure_extern()</div><div class="ttdoc">Call an pure extern C function with given name and signature from the types of args in the runtime en...</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a2172690dd21d7fd50a4fd4d696ea7bb2"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a2172690dd21d7fd50a4fd4d696ea7bb2">tvm::tir::builtin::popcount</a></div><div class="ttdeci">const Op &amp; popcount()</div><div class="ttdoc">Popcount. </div></div>
-<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1ca0c960782c20a4f16cfe203c516760b00"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1ca0c960782c20a4f16cfe203c516760b00">tvm::tir::builtin::kArrTypeLanes</a></div><div class="ttdef"><b>Definition:</b> builtin.h:562</div></div>
+<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1ca0c960782c20a4f16cfe203c516760b00"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1ca0c960782c20a4f16cfe203c516760b00">tvm::tir::builtin::kArrTypeLanes</a></div><div class="ttdef"><b>Definition:</b> builtin.h:567</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a2c13c6e4b2f92e17f357665f9f11736c"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a2c13c6e4b2f92e17f357665f9f11736c">tvm::tir::builtin::tvm_call_packed</a></div><div class="ttdeci">const Op &amp; tvm_call_packed()</div><div class="ttdoc">See pesudo code. </div></div>
 <div class="ttc" id="tir_2expr_8h_html"><div class="ttname"><a href="tir_2expr_8h.html">expr.h</a></div><div class="ttdoc">TIR expressions. </div></div>
-<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1ca4d4a5d54434514fd8b0ce57160059c92"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1ca4d4a5d54434514fd8b0ce57160059c92">tvm::tir::builtin::kArrKindBound_</a></div><div class="ttdef"><b>Definition:</b> builtin.h:566</div></div>
+<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1ca4d4a5d54434514fd8b0ce57160059c92"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1ca4d4a5d54434514fd8b0ce57160059c92">tvm::tir::builtin::kArrKindBound_</a></div><div class="ttdef"><b>Definition:</b> builtin.h:571</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a21d1f0395dca5af4a90cdb42c1d1d154"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a21d1f0395dca5af4a90cdb42c1d1d154">tvm::tir::builtin::likely</a></div><div class="ttdeci">const Op &amp; likely()</div><div class="ttdoc">Marks a condition is likely going to happen. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a23003bd9331efaa58d8420529ea96c0b"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a23003bd9331efaa58d8420529ea96c0b">tvm::tir::builtin::tvm_struct_get</a></div><div class="ttdeci">const Op &amp; tvm_struct_get()</div><div class="ttdoc">See pesudo code. </div></div>
-<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1ca779c07403e11f671e936ec2813ce2304"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1ca779c07403e11f671e936ec2813ce2304">tvm::tir::builtin::kTVMValueContent</a></div><div class="ttdef"><b>Definition:</b> builtin.h:568</div></div>
+<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1ca779c07403e11f671e936ec2813ce2304"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1ca779c07403e11f671e936ec2813ce2304">tvm::tir::builtin::kTVMValueContent</a></div><div class="ttdef"><b>Definition:</b> builtin.h:573</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a28f99e6dd767482765b854ee9fc71f2c"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a28f99e6dd767482765b854ee9fc71f2c">tvm::tir::builtin::tvm_stack_make_array</a></div><div class="ttdeci">const Op &amp; tvm_stack_make_array()</div><div class="ttdoc">Allocate a NDArray(DLTensor) on stack, return the handle. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a543f1fc334d2bc830add972895a03f17"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a543f1fc334d2bc830add972895a03f17">tvm::tir::builtin::prefetch</a></div><div class="ttdeci">const Op &amp; prefetch()</div><div class="ttdoc">Prefetch a cacheline. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a92624d2aa5c435cd7a0ea8efb698a115"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a92624d2aa5c435cd7a0ea8efb698a115">tvm::tir::builtin::tvm_throw_last_error</a></div><div class="ttdeci">const Op &amp; tvm_throw_last_error()</div><div class="ttdoc">See pesudo code. </div></div>
@@ -134,9 +135,9 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a3e84c73dbbcf7f97008ac84c169feae9"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a3e84c73dbbcf7f97008ac84c169feae9">tvm::tir::builtin::tvm_access_ptr</a></div><div class="ttdeci">const Op &amp; tvm_access_ptr()</div><div class="ttdoc">Get head access address with memory access pattern info. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_abd540cb73407771ecfb4f78722ce5a1b"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#abd540cb73407771ecfb4f78722ce5a1b">tvm::tir::builtin::tvm_stack_make_shape</a></div><div class="ttdeci">const Op &amp; tvm_stack_make_shape()</div><div class="ttdoc">Allocate a shape tuple on stack, return the handle. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ae741e67259cd4b844a8934f2e2704adc"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ae741e67259cd4b844a8934f2e2704adc">tvm::tir::builtin::if_then_else</a></div><div class="ttdeci">const Op &amp; if_then_else()</div><div class="ttdoc">Same as select, used for unsafe memory access. </div></div>
-<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1caa73457ed97931251f1762cb319adc858"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1caa73457ed97931251f1762cb319adc858">tvm::tir::builtin::kTVMValueKindBound_</a></div><div class="ttdef"><b>Definition:</b> builtin.h:569</div></div>
+<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1caa73457ed97931251f1762cb319adc858"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1caa73457ed97931251f1762cb319adc858">tvm::tir::builtin::kTVMValueKindBound_</a></div><div class="ttdef"><b>Definition:</b> builtin.h:574</div></div>
 <div class="ttc" id="classtvm_1_1Op_html"><div class="ttname"><a href="classtvm_1_1Op.html">tvm::Op</a></div><div class="ttdoc">Managed reference class to OpNode. </div><div class="ttdef"><b>Definition:</b> op.h:165</div></div>
-<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1ca57f69fd3d141caaa7e2e72fda7d6a1da"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1ca57f69fd3d141caaa7e2e72fda7d6a1da">tvm::tir::builtin::kArrShape</a></div><div class="ttdef"><b>Definition:</b> builtin.h:557</div></div>
+<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1ca57f69fd3d141caaa7e2e72fda7d6a1da"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1ca57f69fd3d141caaa7e2e72fda7d6a1da">tvm::tir::builtin::kArrShape</a></div><div class="ttdef"><b>Definition:</b> builtin.h:562</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_afc4086a245ded9076de226ae802ced32"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#afc4086a245ded9076de226ae802ced32">tvm::tir::builtin::tvm_warp_activemask</a></div><div class="ttdeci">const Op &amp; tvm_warp_activemask()</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a668eaad07b6c46238f2bf758e61b58a5"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a668eaad07b6c46238f2bf758e61b58a5">tvm::tir::builtin::call_extern</a></div><div class="ttdeci">const Op &amp; call_extern()</div><div class="ttdoc">Call an extern C function with given name and signature from the types of args in the runtime environ...</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_af103ae0715d4ebcbaccd49d2b6a12afe"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#af103ae0715d4ebcbaccd49d2b6a12afe">tvm::tir::builtin::shift_right</a></div><div class="ttdeci">const Op &amp; shift_right()</div><div class="ttdoc">Right shift. </div></div>
@@ -145,7 +146,7 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a8d5e173f1a16740172a9ad6f2aa85a08"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a8d5e173f1a16740172a9ad6f2aa85a08">tvm::tir::builtin::tvm_bmma_sync</a></div><div class="ttdeci">const Op &amp; tvm_bmma_sync()</div><div class="ttdoc">tvm intrinsic for tensor core bmma_sync operators. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a0cd2ac37b80c498ded412572146ecc67"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a0cd2ac37b80c498ded412572146ecc67">tvm::tir::builtin::bitwise_xor</a></div><div class="ttdeci">const Op &amp; bitwise_xor()</div><div class="ttdoc">Bitwise xor operator. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a6aeb24a28d19cdc60e4e1fa7b420d7fd"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a6aeb24a28d19cdc60e4e1fa7b420d7fd">tvm::tir::builtin::tvm_static_handle</a></div><div class="ttdeci">const Op &amp; tvm_static_handle()</div><div class="ttdoc">Create a function local static handle that iniitalizes to nullptr. can be used to cache function loca...</div></div>
-<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1c"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1c">tvm::tir::builtin::TVMStructFieldKind</a></div><div class="ttdeci">TVMStructFieldKind</div><div class="ttdoc">The kind of structure field info used in intrinsic. </div><div class="ttdef"><b>Definition:</b> builtin.h:553</div></div>
+<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1c"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1c">tvm::tir::builtin::TVMStructFieldKind</a></div><div class="ttdeci">TVMStructFieldKind</div><div class="ttdoc">The kind of structure field info used in intrinsic. </div><div class="ttdef"><b>Definition:</b> builtin.h:558</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a0e633f53c50e14d7e2fc07636a223309"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a0e633f53c50e14d7e2fc07636a223309">tvm::tir::builtin::bitwise_and</a></div><div class="ttdeci">const Op &amp; bitwise_and()</div><div class="ttdoc">Bitwise and operator. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a6f53be295396c301082696ca0c113501"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a6f53be295396c301082696ca0c113501">tvm::tir::builtin::isnan</a></div><div class="ttdeci">const Op &amp; isnan()</div><div class="ttdoc">Check if value is nan. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_af6d1c48570e10287683d58f22e4de98f"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#af6d1c48570e10287683d58f22e4de98f">tvm::tir::builtin::tvm_warp_shuffle_down</a></div><div class="ttdeci">const Op &amp; tvm_warp_shuffle_down()</div></div>
@@ -155,9 +156,9 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a0117a4a76af962576a6a3bbf32f97b36"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a0117a4a76af962576a6a3bbf32f97b36">tvm::tir::builtin::tvm_call_packed_lowered</a></div><div class="ttdeci">const Op &amp; tvm_call_packed_lowered()</div><div class="ttdoc">Lowered version of call packed, the space of value and type codes are explicitly allocated. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a322ae63444ed4e5fcf7247aa93f8bb7c"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a322ae63444ed4e5fcf7247aa93f8bb7c">tvm::tir::builtin::large_uint_imm</a></div><div class="ttdeci">const Op &amp; large_uint_imm()</div><div class="ttdoc">See pesudo code. </div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a21c2ad8b095dcbefa786394981ea0b71"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a21c2ad8b095dcbefa786394981ea0b71">tvm::tir::builtin::tvm_context_id</a></div><div class="ttdeci">const Op &amp; tvm_context_id()</div><div class="ttdoc">Return a unique context id, used for hint of workspace separation. Different context id ganrantees no...</div></div>
-<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1ca9076fb1a58386bac2e0f1fdae9cab844"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1ca9076fb1a58386bac2e0f1fdae9cab844">tvm::tir::builtin::kArrData</a></div><div class="ttdef"><b>Definition:</b> builtin.h:556</div></div>
+<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1ca9076fb1a58386bac2e0f1fdae9cab844"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1ca9076fb1a58386bac2e0f1fdae9cab844">tvm::tir::builtin::kArrData</a></div><div class="ttdef"><b>Definition:</b> builtin.h:561</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_a26472adf05d821f1929cfbc02bc3c231"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#a26472adf05d821f1929cfbc02bc3c231">tvm::tir::builtin::shift_left</a></div><div class="ttdeci">const Op &amp; shift_left()</div><div class="ttdoc">Left shift. </div></div>
-<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1ca5ce842cabb26975681dd561c5132af1b"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1ca5ce842cabb26975681dd561c5132af1b">tvm::tir::builtin::kArrTypeCode</a></div><div class="ttdef"><b>Definition:</b> builtin.h:560</div></div>
+<div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_ad3b90c881b67ebe8e8fe68f14143bb1ca5ce842cabb26975681dd561c5132af1b"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#ad3b90c881b67ebe8e8fe68f14143bb1ca5ce842cabb26975681dd561c5132af1b">tvm::tir::builtin::kArrTypeCode</a></div><div class="ttdef"><b>Definition:</b> builtin.h:565</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_aa6e23eac98abb8378b9837011a5c04b5"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#aa6e23eac98abb8378b9837011a5c04b5">tvm::tir::builtin::tvm_call_trace_packed_lowered</a></div><div class="ttdeci">const Op &amp; tvm_call_trace_packed_lowered()</div><div class="ttdoc">Lowered version of trace intrinsic, the space of value and type codes are explicitly allocated...</div></div>
 <div class="ttc" id="namespacetvm_1_1tir_1_1builtin_html_aa5b0e90771b35d78b6c07c0054abe023"><div class="ttname"><a href="namespacetvm_1_1tir_1_1builtin.html#aa5b0e90771b35d78b6c07c0054abe023">tvm::tir::builtin::isnullptr</a></div><div class="ttdeci">const Op &amp; isnullptr()</div><div class="ttdoc">See pesudo code. </div></div>
 </div><!-- fragment --></div><!-- contents -->
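Every handle in this listing follows the same pattern: a zero-argument accessor returning const Op&, which is then wrapped in a tir::Call carrying the desired result type. A short sketch using two of the documented builtins; operand conventions are assumed from the brief descriptions above:

    #include <tvm/tir/builtin.h>
    #include <tvm/tir/expr.h>

    using namespace tvm;

    // popcount: count the set bits of x ("Popcount" above).
    PrimExpr PopCount(PrimExpr x) {
      return tir::Call(x.dtype(), tir::builtin::popcount(), {x});
    }

    // if_then_else: select-like guard, "used for unsafe memory access" above.
    PrimExpr Guarded(PrimExpr cond, PrimExpr then_v, PrimExpr else_v) {
      return tir::Call(then_v.dtype(), tir::builtin::if_then_else(),
                       {cond, then_v, else_v});
    }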
diff --git a/docs/api/doxygen/c__runtime__api_8h.html b/docs/api/doxygen/c__runtime__api_8h.html
index 2c74401..8b17183 100644
--- a/docs/api/doxygen/c__runtime__api_8h.html
+++ b/docs/api/doxygen/c__runtime__api_8h.html
@@ -107,7 +107,7 @@ Include dependency graph for c_runtime_api.h:</div>
 </div><div class="textblock"><div class="dynheader">
 This graph shows which files directly or indirectly include this file:</div>
 <div class="dyncontent">
-<div class="center"><iframe scrolling="no" frameborder="0" src="c__runtime__api_8h__dep__incl.svg" width="3431" height="1274"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
+<div class="center"><iframe scrolling="no" frameborder="0" src="c__runtime__api_8h__dep__incl.svg" width="3548" height="1274"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe>
 </div>
 </div>
 </div>
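c_runtime_api.h sits at the root of the include graph shown above; NDArray, PackedFunc and the rest build on its C surface. A tiny sketch against that C API (error handling elided); TVMArrayAlloc/TVMArrayFree are its documented allocation entry points:

    #include <tvm/runtime/c_runtime_api.h>

    // Allocate and free a 2x3 float32 tensor on CPU via the C runtime API.
    int AllocDemo(void) {
      int64_t shape[2] = {2, 3};
      DLTensor* t = NULL;
      // dtype: code 2 (float), 32 bits, 1 lane; device: kDLCPU, id 0.
      if (TVMArrayAlloc(shape, 2, 2, 32, 1, kDLCPU, 0, &t) != 0) return -1;
      return TVMArrayFree(t);
    }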
diff --git a/docs/api/doxygen/c__runtime__api_8h__dep__incl.svg b/docs/api/doxygen/c__runtime__api_8h__dep__incl.svg
index b7253b7..58e1dc7 100644
--- a/docs/api/doxygen/c__runtime__api_8h__dep__incl.svg
+++ b/docs/api/doxygen/c__runtime__api_8h__dep__incl.svg
@@ -4,1121 +4,1107 @@
 <!-- Generated by graphviz version 2.38.0 (20140413.2041)
  -->
 <!-- Title: include/tvm/runtime/c_runtime_api.h Pages: 1 -->
-<svg width="2573pt" height="955pt"
- viewBox="0.00 0.00 2572.50 955.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<svg width="2661pt" height="955pt"
+ viewBox="0.00 0.00 2661.00 955.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
 <g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 951)">
 <title>include/tvm/runtime/c_runtime_api.h</title>
-<polygon fill="white" stroke="none" points="-4,4 -4,-951 2568.5,-951 2568.5,4 -4,4"/>
+<polygon fill="white" stroke="none" points="-4,4 -4,-951 2657,-951 2657,4 -4,4"/>
 <!-- Node1 -->
 <g id="node1" class="node"><title>Node1</title>
-<polygon fill="#bfbfbf" stroke="black" points="513,-916.5 513,-946.5 626,-946.5 626,-916.5 513,-916.5"/>
-<text text-anchor="start" x="521" y="-934.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
-<text text-anchor="middle" x="569.5" y="-923.5" font-family="Helvetica,sans-Serif" font-size="10.00">/c_runtime_api.h</text>
+<polygon fill="#bfbfbf" stroke="black" points="1586.5,-916.5 1586.5,-946.5 1699.5,-946.5 1699.5,-916.5 1586.5,-916.5"/>
+<text text-anchor="start" x="1594.5" y="-934.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
+<text text-anchor="middle" x="1643" y="-923.5" font-family="Helvetica,sans-Serif" font-size="10.00">/c_runtime_api.h</text>
 </g>
 <!-- Node2 -->
 <g id="node2" class="node"><title>Node2</title>
 <g id="a_node2"><a xlink:href="compute__dag_8h.html" target="_top" xlink:title="The auto&#45;scheduler&#39;s computational graph and related program analyses. ">
-<polygon fill="white" stroke="black" points="442,-380.5 442,-410.5 591,-410.5 591,-380.5 442,-380.5"/>
-<text text-anchor="start" x="450" y="-398.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/auto_scheduler</text>
-<text text-anchor="middle" x="516.5" y="-387.5" font-family="Helvetica,sans-Serif" font-size="10.00">/compute_dag.h</text>
+<polygon fill="white" stroke="black" points="582.5,-380.5 582.5,-410.5 731.5,-410.5 731.5,-380.5 582.5,-380.5"/>
+<text text-anchor="start" x="590.5" y="-398.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/auto_scheduler</text>
+<text text-anchor="middle" x="657" y="-387.5" font-family="Helvetica,sans-Serif" font-size="10.00">/compute_dag.h</text>
 </a>
 </g>
 </g>
 <!-- Node1&#45;&gt;Node2 -->
 <g id="edge1" class="edge"><title>Node1&#45;&gt;Node2</title>
-<path fill="none" stroke="midnightblue" d="M502.88,-926.473C436.279,-920.892 340.534,-908.108 317.5,-880 189.196,-723.433 435.424,-472.753 501.036,-410.726"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="502.645,-929.965 512.893,-927.277 503.205,-922.988 502.645,-929.965"/>
+<path fill="none" stroke="midnightblue" d="M1576.28,-930.493C1290.81,-930.182 184.147,-926.005 127,-880 22.8948,-796.192 16.6086,-689.243 95,-581 153.164,-500.687 443.411,-436.548 582.475,-409.887"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1576.48,-933.993 1586.48,-930.503 1576.48,-926.993 1576.48,-933.993"/>
 </g>
 <!-- Node7 -->
 <g id="node7" class="node"><title>Node7</title>
 <g id="a_node7"><a xlink:href="node_8h.html" target="_top" xlink:title="Definitions and helper macros for IR/AST nodes. ">
-<polygon fill="white" stroke="black" points="1617,-386 1617,-405 1752,-405 1752,-386 1617,-386"/>
-<text text-anchor="middle" x="1684.5" y="-393" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/node/node.h</text>
+<polygon fill="white" stroke="black" points="1289.5,-386 1289.5,-405 1424.5,-405 1424.5,-386 1289.5,-386"/>
+<text text-anchor="middle" x="1357" y="-393" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/node/node.h</text>
 </a>
 </g>
 </g>
 <!-- Node1&#45;&gt;Node7 -->
 <g id="edge6" class="edge"><title>Node1&#45;&gt;Node7</title>
-<path fill="none" stroke="midnightblue" d="M636.452,-929.311C968.7,-922.982 2424.5,-890.027 2424.5,-798.5 2424.5,-798.5 2424.5,-798.5 2424.5,-595.5 2424.5,-456.976 1934.81,-412.137 1752.13,-400.229"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="635.961,-925.82 626.029,-929.508 636.094,-932.819 635.961,-925.82"/>
+<path fill="none" stroke="midnightblue" d="M1707.96,-913.137C1730.22,-905.264 1754.42,-894.416 1774,-880 1804.57,-857.486 1806.16,-844.735 1827,-813 1883.88,-726.394 1897.33,-673.176 1850,-581 1833.15,-548.182 1831.36,-534.958 1801,-514 1708.8,-450.356 1666.33,-475.54 1558,-447 1499.9,-431.693 1432.06,-414.866 1391.92,-405.017"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1706.57,-909.911 1698.23,-916.438 1708.82,-916.54 1706.57,-909.911"/>
 </g>
-<!-- Node12 -->
-<g id="node12" class="node"><title>Node12</title>
-<g id="a_node12"><a xlink:href="tir_2expr_8h.html" target="_top" xlink:title="TIR expressions. ">
-<polygon fill="white" stroke="red" points="1017,-56.5 1017,-75.5 1134,-75.5 1134,-56.5 1017,-56.5"/>
-<text text-anchor="middle" x="1075.5" y="-63.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/tir/expr.h</text>
+<!-- Node11 -->
+<g id="node11" class="node"><title>Node11</title>
+<g id="a_node11"><a xlink:href="tir_2expr_8h.html" target="_top" xlink:title="TIR expressions. ">
+<polygon fill="white" stroke="red" points="1415.5,-56.5 1415.5,-75.5 1532.5,-75.5 1532.5,-56.5 1415.5,-56.5"/>
+<text text-anchor="middle" x="1474" y="-63.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/tir/expr.h</text>
+</a>
+</g>
+</g>
+<!-- Node1&#45;&gt;Node11 -->
+<g id="edge133" class="edge"><title>Node1&#45;&gt;Node11</title>
+<path fill="none" stroke="midnightblue" d="M1575.74,-929.593C1286.59,-925.531 160.744,-907.974 91,-880 40.6087,-859.788 0,-852.794 0,-798.5 0,-798.5 0,-798.5 0,-662.5 0,-567.825 76,-558.175 76,-463.5 76,-463.5 76,-463.5 76,-260.5 76,-123.058 1149.93,-77.8972 1415.27,-68.8369"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1576.02,-933.097 1586.07,-929.738 1576.12,-926.098 1576.02,-933.097"/>
+</g>
+<!-- Node22 -->
+<g id="node22" class="node"><title>Node22</title>
+<g id="a_node22"><a xlink:href="reflection_8h.html" target="_top" xlink:title="Reflection and serialization of compiler IR/AST nodes. ">
+<polygon fill="white" stroke="black" points="1326.5,-453 1326.5,-472 1481.5,-472 1481.5,-453 1326.5,-453"/>
+<text text-anchor="middle" x="1404" y="-460" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/node/reflection.h</text>
 </a>
 </g>
 </g>
-<!-- Node1&#45;&gt;Node12 -->
-<g id="edge136" class="edge"><title>Node1&#45;&gt;Node12</title>
-<path fill="none" stroke="midnightblue" d="M502.555,-925.494C431.677,-918.95 325.789,-905.229 294.5,-880 263.947,-855.364 260.5,-837.748 260.5,-798.5 260.5,-798.5 260.5,-798.5 260.5,-327.5 260.5,-177.412 376.746,-155.142 520.5,-112 612.049,-84.5251 893.328,-72.6455 1016.59,-68.6618"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="502.472,-929.001 512.745,-926.408 503.098,-922.029 502.472,-929.001"/>
+<!-- Node1&#45;&gt;Node22 -->
+<g id="edge36" class="edge"><title>Node1&#45;&gt;Node22</title>
+<path fill="none" stroke="midnightblue" d="M1687.58,-911.938C1726.03,-893.233 1779.03,-860.336 1803,-813 1823.51,-772.484 1932.31,-676.778 1774,-514 1734.73,-473.623 1578.33,-464.792 1481.88,-463.285"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1685.69,-908.957 1678.14,-916.388 1688.68,-915.288 1685.69,-908.957"/>
 </g>
 <!-- Node24 -->
 <g id="node24" class="node"><title>Node24</title>
-<g id="a_node24"><a xlink:href="reflection_8h.html" target="_top" xlink:title="Reflection and serialization of compiler IR/AST nodes. ">
-<polygon fill="white" stroke="black" points="1217,-453 1217,-472 1372,-472 1372,-453 1217,-453"/>
-<text text-anchor="middle" x="1294.5" y="-460" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/node/reflection.h</text>
+<g id="a_node24"><a xlink:href="serialization_8h.html" target="_top" xlink:title="include/tvm/node/serialization.h">
+<polygon fill="white" stroke="black" points="1495,-788 1495,-807 1663,-807 1663,-788 1495,-788"/>
+<text text-anchor="middle" x="1579" y="-795" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/node/serialization.h</text>
 </a>
 </g>
 </g>
 <!-- Node1&#45;&gt;Node24 -->
-<g id="edge38" class="edge"><title>Node1&#45;&gt;Node24</title>
-<path fill="none" stroke="midnightblue" d="M636.606,-926.175C779.599,-915.664 1108.01,-884.224 1190.5,-813 1299.51,-718.88 1296.91,-516.03 1294.97,-472.031"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="635.947,-922.714 626.226,-926.925 636.452,-929.695 635.947,-922.714"/>
+<g id="edge39" class="edge"><title>Node1&#45;&gt;Node24</title>
+<path fill="none" stroke="midnightblue" d="M1631.65,-907.1C1617.18,-877.246 1592.86,-827.085 1583.17,-807.097"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1628.64,-908.903 1636.15,-916.374 1634.94,-905.849 1628.64,-908.903"/>
+</g>
+<!-- Node25 -->
+<g id="node25" class="node"><title>Node25</title>
+<g id="a_node25"><a xlink:href="relay_2qnn_2transform_8h.html" target="_top" xlink:title="include/tvm/relay/qnn\l/transform.h">
+<polygon fill="white" stroke="black" points="1956.5,-849.5 1956.5,-879.5 2075.5,-879.5 2075.5,-849.5 1956.5,-849.5"/>
+<text text-anchor="start" x="1964.5" y="-867.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/relay/qnn</text>
+<text text-anchor="middle" x="2016" y="-856.5" font-family="Helvetica,sans-Serif" font-size="10.00">/transform.h</text>
+</a>
+</g>
+</g>
+<!-- Node1&#45;&gt;Node25 -->
+<g id="edge40" class="edge"><title>Node1&#45;&gt;Node25</title>
+<path fill="none" stroke="midnightblue" d="M1709.77,-919.737C1770.42,-909.837 1862.32,-894.533 1942,-880 1946.64,-879.154 1951.43,-878.26 1956.25,-877.347"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1708.87,-916.337 1699.56,-921.399 1710,-923.245 1708.87,-916.337"/>
 </g>
 <!-- Node26 -->
 <g id="node26" class="node"><title>Node26</title>
-<g id="a_node26"><a xlink:href="serialization_8h.html" target="_top" xlink:title="include/tvm/node/serialization.h">
-<polygon fill="white" stroke="black" points="692.5,-788 692.5,-807 860.5,-807 860.5,-788 692.5,-788"/>
-<text text-anchor="middle" x="776.5" y="-795" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/node/serialization.h</text>
+<g id="a_node26"><a xlink:href="c__backend__api_8h.html" target="_top" xlink:title="TVM runtime backend API. ">
+<polygon fill="white" stroke="black" points="2093.5,-849.5 2093.5,-879.5 2206.5,-879.5 2206.5,-849.5 2093.5,-849.5"/>
+<text text-anchor="start" x="2101.5" y="-867.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
+<text text-anchor="middle" x="2150" y="-856.5" font-family="Helvetica,sans-Serif" font-size="10.00">/c_backend_api.h</text>
 </a>
 </g>
 </g>
 <!-- Node1&#45;&gt;Node26 -->
 <g id="edge41" class="edge"><title>Node1&#45;&gt;Node26</title>
-<path fill="none" stroke="midnightblue" d="M615.072,-912.542C635.151,-903.841 658.641,-892.556 678.5,-880 714.224,-857.413 751.194,-823.163 767.664,-807.226"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="613.651,-909.343 605.817,-916.476 616.39,-915.785 613.651,-909.343"/>
+<path fill="none" stroke="midnightblue" d="M1709.87,-924.988C1796.53,-917.353 1952.15,-901.973 2084,-880 2087.01,-879.499 2090.09,-878.949 2093.19,-878.367"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1709.3,-921.523 1699.65,-925.88 1709.91,-928.497 1709.3,-921.523"/>
 </g>
-<!-- Node27 -->
-<g id="node27" class="node"><title>Node27</title>
-<g id="a_node27"><a xlink:href="relay_2qnn_2transform_8h.html" target="_top" xlink:title="include/tvm/relay/qnn\l/transform.h">
-<polygon fill="white" stroke="black" points="327,-849.5 327,-879.5 446,-879.5 446,-849.5 327,-849.5"/>
-<text text-anchor="start" x="335" y="-867.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/relay/qnn</text>
-<text text-anchor="middle" x="386.5" y="-856.5" font-family="Helvetica,sans-Serif" font-size="10.00">/transform.h</text>
+<!-- Node29 -->
+<g id="node29" class="node"><title>Node29</title>
+<g id="a_node29"><a xlink:href="crt_2packed__func_8h.html" target="_top" xlink:title="Type&#45;erased function used across TVM API. ">
+<polygon fill="white" stroke="black" points="2220.5,-648.5 2220.5,-678.5 2333.5,-678.5 2333.5,-648.5 2220.5,-648.5"/>
+<text text-anchor="start" x="2228.5" y="-666.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
+<text text-anchor="middle" x="2277" y="-655.5" font-family="Helvetica,sans-Serif" font-size="10.00">/crt/packed_func.h</text>
 </a>
 </g>
 </g>
-<!-- Node1&#45;&gt;Node27 -->
-<g id="edge42" class="edge"><title>Node1&#45;&gt;Node27</title>
-<path fill="none" stroke="midnightblue" d="M520.105,-912.955C490.626,-902.485 453.691,-889.366 426.131,-879.577"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="519.197,-916.347 529.792,-916.396 521.54,-909.751 519.197,-916.347"/>
+<!-- Node1&#45;&gt;Node29 -->
+<g id="edge49" class="edge"><title>Node1&#45;&gt;Node29</title>
+<path fill="none" stroke="midnightblue" d="M1709.8,-928.241C1856.37,-922.804 2196.65,-907.37 2241,-880 2300.42,-843.33 2304.07,-813.206 2323,-746 2326.74,-732.738 2327.97,-727.848 2323,-715 2317.43,-700.623 2305.66,-687.825 2295.42,-678.71"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1709.46,-924.752 1699.59,-928.616 1709.71,-931.747 1709.46,-924.752"/>
 </g>
-<!-- Node28 -->
-<g id="node28" class="node"><title>Node28</title>
-<g id="a_node28"><a xlink:href="c__backend__api_8h.html" target="_top" xlink:title="TVM runtime backend API. ">
-<polygon fill="white" stroke="black" points="119,-849.5 119,-879.5 232,-879.5 232,-849.5 119,-849.5"/>
-<text text-anchor="start" x="127" y="-867.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
-<text text-anchor="middle" x="175.5" y="-856.5" font-family="Helvetica,sans-Serif" font-size="10.00">/c_backend_api.h</text>
+<!-- Node30 -->
+<g id="node30" class="node"><title>Node30</title>
+<g id="a_node30"><a xlink:href="graph__runtime_8h.html" target="_top" xlink:title="Tiny graph runtime that can run graph containing only tvm PackedFunc. ">
+<polygon fill="white" stroke="black" points="2261.5,-581.5 2261.5,-611.5 2376.5,-611.5 2376.5,-581.5 2261.5,-581.5"/>
+<text text-anchor="start" x="2269.5" y="-599.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
+<text text-anchor="middle" x="2319" y="-588.5" font-family="Helvetica,sans-Serif" font-size="10.00">/crt/graph_runtime.h</text>
 </a>
 </g>
 </g>
-<!-- Node1&#45;&gt;Node28 -->
-<g id="edge43" class="edge"><title>Node1&#45;&gt;Node28</title>
-<path fill="none" stroke="midnightblue" d="M502.594,-920.964C437.166,-911.421 334.781,-895.907 246.5,-880 241.795,-879.152 236.923,-878.237 232.036,-877.292"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="502.37,-924.468 512.77,-922.443 503.377,-917.541 502.37,-924.468"/>
+<!-- Node1&#45;&gt;Node30 -->
+<g id="edge47" class="edge"><title>Node1&#45;&gt;Node30</title>
+<path fill="none" stroke="midnightblue" d="M1709.87,-929.93C1850.71,-927.959 2173.29,-919.171 2275,-880 2330.46,-858.643 2381,-857.926 2381,-798.5 2381,-798.5 2381,-798.5 2381,-729.5 2381,-682.66 2348.62,-634.584 2330.66,-611.52"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1709.6,-926.433 1699.64,-930.065 1709.69,-933.433 1709.6,-926.433"/>
 </g>
 <!-- Node31 -->
 <g id="node31" class="node"><title>Node31</title>
-<g id="a_node31"><a xlink:href="crt_2packed__func_8h.html" target="_top" xlink:title="Type&#45;erased function used across TVM API. ">
-<polygon fill="white" stroke="black" points="43,-648.5 43,-678.5 156,-678.5 156,-648.5 43,-648.5"/>
-<text text-anchor="start" x="51" y="-666.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
-<text text-anchor="middle" x="99.5" y="-655.5" font-family="Helvetica,sans-Serif" font-size="10.00">/crt/packed_func.h</text>
+<g id="a_node31"><a xlink:href="runtime_2crt_2memory_8h.html" target="_top" xlink:title="An implementation of a dynamic memory allocator for microcontrollers. ">
+<polygon fill="white" stroke="black" points="2409.5,-849.5 2409.5,-879.5 2522.5,-879.5 2522.5,-849.5 2409.5,-849.5"/>
+<text text-anchor="start" x="2417.5" y="-867.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
+<text text-anchor="middle" x="2466" y="-856.5" font-family="Helvetica,sans-Serif" font-size="10.00">/crt/memory.h</text>
 </a>
 </g>
 </g>
 <!-- Node1&#45;&gt;Node31 -->
-<g id="edge50" class="edge"><title>Node1&#45;&gt;Node31</title>
-<path fill="none" stroke="midnightblue" d="M502.521,-930.49C390.825,-929.283 172.544,-921.495 109.5,-880 46.6408,-838.627 52.6541,-787.855 71.5,-715 74.9144,-701.801 82.6033,-688.373 89.0615,-678.685"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="502.63,-933.991 512.662,-930.585 502.695,-926.991 502.63,-933.991"/>
+<g id="edge48" class="edge"><title>Node1&#45;&gt;Node31</title>
+<path fill="none" stroke="midnightblue" d="M1710.2,-929.014C1842.06,-925.43 2143.76,-913.935 2395,-880 2399.61,-879.377 2404.38,-878.631 2409.15,-877.811"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1709.77,-925.524 1699.86,-929.288 1709.95,-932.521 1709.77,-925.524"/>
 </g>
 <!-- Node32 -->
 <g id="node32" class="node"><title>Node32</title>
-<g id="a_node32"><a xlink:href="graph__runtime_8h.html" target="_top" xlink:title="Tiny graph runtime that can run graph containing only tvm PackedFunc. ">
-<polygon fill="white" stroke="black" points="0,-581.5 0,-611.5 115,-611.5 115,-581.5 0,-581.5"/>
-<text text-anchor="start" x="8" y="-599.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
-<text text-anchor="middle" x="57.5" y="-588.5" font-family="Helvetica,sans-Serif" font-size="10.00">/crt/graph_runtime.h</text>
+<g id="a_node32"><a xlink:href="platform_8h.html" target="_top" xlink:title="The virtual memory manager for micro&#45;controllers. ">
+<polygon fill="white" stroke="black" points="2201.5,-715.5 2201.5,-745.5 2314.5,-745.5 2314.5,-715.5 2201.5,-715.5"/>
+<text text-anchor="start" x="2209.5" y="-733.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
+<text text-anchor="middle" x="2258" y="-722.5" font-family="Helvetica,sans-Serif" font-size="10.00">/crt/platform.h</text>
 </a>
 </g>
 </g>
 <!-- Node1&#45;&gt;Node32 -->
-<g id="edge49" class="edge"><title>Node1&#45;&gt;Node32</title>
-<path fill="none" stroke="midnightblue" d="M502.924,-930.434C384.738,-929.232 143.982,-921.574 72.5,-880 34.0688,-857.649 14.5,-842.958 14.5,-798.5 14.5,-798.5 14.5,-798.5 14.5,-729.5 14.5,-684.67 37.1994,-635.302 49.6131,-611.655"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="502.908,-933.934 512.939,-930.523 502.97,-926.934 502.908,-933.934"/>
+<g id="edge50" class="edge"><title>Node1&#45;&gt;Node32</title>
+<path fill="none" stroke="midnightblue" d="M1709.97,-928.784C1852.64,-924.535 2176.72,-911.497 2215,-880 2256.04,-846.233 2259.32,-775.866 2258.7,-745.714"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1709.5,-925.297 1699.61,-929.087 1709.7,-932.294 1709.5,-925.297"/>
 </g>
 <!-- Node33 -->
 <g id="node33" class="node"><title>Node33</title>
 <g id="a_node33"><a xlink:href="data__type_8h.html" target="_top" xlink:title="include/tvm/runtime\l/data_type.h">
-<polygon fill="white" stroke="black" points="1069,-782.5 1069,-812.5 1182,-812.5 1182,-782.5 1069,-782.5"/>
-<text text-anchor="start" x="1077" y="-800.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
[Diff hunk of an auto-generated Doxygen include-dependency graph SVG omitted: the graph was regenerated, so every node box and edge in the hunk carries new layout coordinates and renumbered edge IDs, with no change to the underlying markup conventions. The node labels on both sides of the hunk reference the TVM headers include/tvm/runtime/{data_type,ndarray,packed_func,device_api,module,serializer,object}.h, include/tvm/runtime/vm/memory_manager.h, include/tvm/runtime/crt/{func_registry,module}.h, include/tvm/support/parallel_for.h, include/tvm/auto_scheduler/{cost_model,feature,search_task,search_policy,transform_step}.h, include/tvm/ir/{adt,expr,span,type,env_func}.h, include/tvm/relay/{base,interpreter}.h, include/tvm/tir/{var,function}.h, include/tvm/te/autodiff.h, and include/tvm/target/{tag,target,target_kind}.h; node boxes for include/tvm/ir/attrs.h and include/tvm/driver/driver_api.h appear only on the removed side of this hunk.]
-<polygon fill="midnightblue" stroke="midnightblue" points="1126.26,-771.865 1127.96,-782.323 1133.16,-773.09 1126.26,-771.865"/>
-</g>
-<!-- Node33&#45;&gt;Node18 -->
-<g id="edge52" class="edge"><title>Node33&#45;&gt;Node18</title>
-<path fill="none" stroke="midnightblue" d="M1192.18,-794.029C1404.75,-785.276 2052.5,-751.468 2052.5,-664.5 2052.5,-664.5 2052.5,-664.5 2052.5,-595.5 2052.5,-483.214 2067.73,-455.942 2076.5,-344 2078.58,-317.431 2080.3,-285.932 2081.07,-271.072"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1191.94,-790.536 1182.09,-794.439 1192.23,-797.53 1191.94,-790.536"/>
-</g>
-<!-- Node33&#45;&gt;Node24 -->
-<g id="edge53" class="edge"><title>Node33&#45;&gt;Node24</title>
-<path fill="none" stroke="midnightblue" d="M1141.09,-773.669C1156.96,-750.195 1182.08,-712.448 1202.5,-679 1220.34,-649.779 1226.16,-643.084 1240.5,-612 1264.29,-560.439 1284.91,-495.169 1291.94,-472.072"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1138,-771.98 1135.29,-782.22 1143.8,-775.91 1138,-771.98"/>
+<path fill="none" stroke="midnightblue" d="M2291.62,-639.867C2297.7,-630.459 2304.57,-619.833 2309.9,-611.577"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="2288.6,-638.097 2286.11,-648.396 2294.48,-641.896 2288.6,-638.097"/>
+</g>
+<!-- Node32&#45;&gt;Node29 -->
+<g id="edge51" class="edge"><title>Node32&#45;&gt;Node29</title>
+<path fill="none" stroke="midnightblue" d="M2264.96,-705.697C2267.62,-696.587 2270.57,-686.493 2272.89,-678.577"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="2261.57,-704.816 2262.12,-715.396 2268.29,-706.78 2261.57,-704.816"/>
+</g>
+<!-- Node33&#45;&gt;Node11 -->
+<g id="edge91" class="edge"><title>Node33&#45;&gt;Node11</title>
+<path fill="none" stroke="midnightblue" d="M1778.6,-777.887C1795.75,-769.227 1815.52,-758.154 1832,-746 1866.14,-720.816 1870.72,-709.707 1900,-679 2042.67,-529.368 2184.8,-417.946 2070,-246 2008.11,-153.296 1964.71,-144.213 1858,-112 1746.19,-78.2477 1608.97,-69.5885 1532.54,-67.5043"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1776.79,-774.874 1769.38,-782.441 1779.89,-781.15 1776.79,-774.874"/>
+</g>
+<!-- Node33&#45;&gt;Node15 -->
+<g id="edge92" class="edge"><title>Node33&#45;&gt;Node15</title>
+<path fill="none" stroke="midnightblue" d="M1764.7,-775.819C1836.95,-717.607 2033.67,-543.546 2075,-344 2100.16,-222.544 1911.14,-158.681 1829.9,-137.068"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1762.16,-773.366 1756.52,-782.337 1766.52,-778.84 1762.16,-773.366"/>
+</g>
+<!-- Node33&#45;&gt;Node17 -->
+<g id="edge53" class="edge"><title>Node33&#45;&gt;Node17</title>
+<path fill="none" stroke="midnightblue" d="M1751.05,-773.302C1755.21,-764.937 1759.4,-755.269 1762,-746 1798.1,-617.504 1806.54,-579.441 1790,-447 1781.13,-375.988 1741.73,-297.328 1727.85,-271.352"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1747.82,-771.939 1746.29,-782.423 1754.02,-775.18 1747.82,-771.939"/>
+</g>
+<!-- Node33&#45;&gt;Node22 -->
+<g id="edge54" class="edge"><title>Node33&#45;&gt;Node22</title>
+<path fill="none" stroke="midnightblue" d="M1741.59,-771.994C1750.85,-706.892 1773.71,-533.265 1757,-514 1722.12,-473.775 1574.76,-464.806 1481.81,-463.251"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1738.09,-771.756 1740.13,-782.152 1745.01,-772.752 1738.09,-771.756"/>
 </g>
 <!-- Node34 -->
 <g id="node34" class="node"><title>Node34</title>
 <g id="a_node34"><a xlink:href="structural__equal_8h.html" target="_top" xlink:title="Structural equality comparison. ">
-<polygon fill="white" stroke="black" points="1610.5,-514.5 1610.5,-544.5 1758.5,-544.5 1758.5,-514.5 1610.5,-514.5"/>
-<text text-anchor="start" x="1618.5" y="-532.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/node/structural</text>
-<text text-anchor="middle" x="1684.5" y="-521.5" font-family="Helvetica,sans-Serif" font-size="10.00">_equal.h</text>
+<polygon fill="white" stroke="red" points="1434,-514.5 1434,-544.5 1582,-544.5 1582,-514.5 1434,-514.5"/>
+<text text-anchor="start" x="1442" y="-532.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/node/structural</text>
+<text text-anchor="middle" x="1508" y="-521.5" font-family="Helvetica,sans-Serif" font-size="10.00">_equal.h</text>
 </a>
 </g>
 </g>
 <!-- Node33&#45;&gt;Node34 -->
-<g id="edge54" class="edge"><title>Node33&#45;&gt;Node34</title>
-<path fill="none" stroke="midnightblue" d="M1192.29,-787.007C1297.04,-769.154 1501.78,-721.978 1635.5,-612 1657.69,-593.752 1672.35,-562.659 1679.47,-544.612"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1191.45,-783.598 1182.16,-788.694 1192.6,-790.503 1191.45,-783.598"/>
+<g id="edge55" class="edge"><title>Node33&#45;&gt;Node34</title>
+<path fill="none" stroke="midnightblue" d="M1689.37,-778.469C1627.22,-752.46 1522.37,-697.888 1478,-612 1466.37,-589.495 1482.93,-561.429 1495.68,-544.792"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1688.38,-781.847 1698.96,-782.398 1691.03,-775.369 1688.38,-781.847"/>
 </g>
 <!-- Node35 -->
 <g id="node35" class="node"><title>Node35</title>
 <g id="a_node35"><a xlink:href="structural__hash_8h.html" target="_top" xlink:title="include/tvm/node/structural\l_hash.h">
-<polygon fill="white" stroke="black" points="1444.5,-514.5 1444.5,-544.5 1592.5,-544.5 1592.5,-514.5 1444.5,-514.5"/>
-<text text-anchor="start" x="1452.5" y="-532.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/node/structural</text>
-<text text-anchor="middle" x="1518.5" y="-521.5" font-family="Helvetica,sans-Serif" font-size="10.00">_hash.h</text>
+<polygon fill="white" stroke="red" points="1600,-514.5 1600,-544.5 1748,-544.5 1748,-514.5 1600,-514.5"/>
+<text text-anchor="start" x="1608" y="-532.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/node/structural</text>
+<text text-anchor="middle" x="1674" y="-521.5" font-family="Helvetica,sans-Serif" font-size="10.00">_hash.h</text>
 </a>
 </g>
 </g>
 <!-- Node33&#45;&gt;Node35 -->
 <g id="edge58" class="edge"><title>Node33&#45;&gt;Node35</title>
-<path fill="none" stroke="midnightblue" d="M1154.99,-776.543C1231.02,-725.078 1432.35,-588.81 1497.7,-544.58"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1152.67,-773.883 1146.35,-782.386 1156.6,-779.68 1152.67,-773.883"/>
+<path fill="none" stroke="midnightblue" d="M1731.13,-772.521C1728.79,-764.15 1726.21,-754.679 1724,-746 1704.77,-670.442 1684.6,-579.208 1677.07,-544.67"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1727.83,-773.721 1733.92,-782.394 1734.57,-771.819 1727.83,-773.721"/>
 </g>
 <!-- Node33&#45;&gt;Node36 -->
-<g id="edge62" class="edge"><title>Node33&#45;&gt;Node36</title>
-<path fill="none" stroke="midnightblue" d="M1058.94,-784.994C987.06,-772.568 873.769,-752.983 806.044,-741.275"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1058.35,-788.444 1068.8,-786.698 1059.54,-781.546 1058.35,-788.444"/>
+<g id="edge61" class="edge"><title>Node33&#45;&gt;Node36</title>
+<path fill="none" stroke="midnightblue" d="M1671.06,-782.332C1548.25,-771.921 679.027,-741.467 447.509,-733.449"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1670.93,-785.837 1681.23,-783.36 1671.63,-778.872 1670.93,-785.837"/>
 </g>
 <!-- Node33&#45;&gt;Node37 -->
-<g id="edge90" class="edge"><title>Node33&#45;&gt;Node37</title>
-<path fill="none" stroke="midnightblue" d="M1103.6,-775.245C1075.58,-748.044 1027.81,-701.68 1003.95,-678.525"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1101.33,-777.92 1110.95,-782.374 1106.21,-772.897 1101.33,-777.92"/>
+<g id="edge87" class="edge"><title>Node33&#45;&gt;Node37</title>
+<path fill="none" stroke="midnightblue" d="M1671.07,-782.07C1311.9,-736.782 1208.5,-827.463 860,-746 799.168,-731.78 733.49,-697.933 699.116,-678.627"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1670.86,-785.572 1681.23,-783.378 1671.76,-778.629 1670.86,-785.572"/>
 </g>
 <!-- Node45 -->
 <g id="node45" class="node"><title>Node45</title>
 <g id="a_node45"><a xlink:href="bytecode_8h.html" target="_top" xlink:title="The bytecode for Relay virtual machine. ">
-<polygon fill="white" stroke="black" points="687,-581.5 687,-611.5 800,-611.5 800,-581.5 687,-581.5"/>
-<text text-anchor="start" x="695" y="-599.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
-<text text-anchor="middle" x="743.5" y="-588.5" font-family="Helvetica,sans-Serif" font-size="10.00">/vm/bytecode.h</text>
+<polygon fill="white" stroke="black" points="694.5,-581.5 694.5,-611.5 807.5,-611.5 807.5,-581.5 694.5,-581.5"/>
+<text text-anchor="start" x="702.5" y="-599.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
+<text text-anchor="middle" x="751" y="-588.5" font-family="Helvetica,sans-Serif" font-size="10.00">/vm/bytecode.h</text>
 </a>
 </g>
 </g>
 <!-- Node33&#45;&gt;Node45 -->
-<g id="edge91" class="edge"><title>Node33&#45;&gt;Node45</title>
-<path fill="none" stroke="midnightblue" d="M1089.11,-777.542C1012.85,-737.817 837.807,-646.629 770.528,-611.58"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1087.85,-780.832 1098.33,-782.348 1091.08,-774.624 1087.85,-780.832"/>
+<g id="edge88" class="edge"><title>Node33&#45;&gt;Node45</title>
+<path fill="none" stroke="midnightblue" d="M1671.19,-782.301C1549.32,-771.465 968.173,-783.538 893,-746 842.445,-720.755 851.941,-687.973 812,-648 798.784,-634.773 782.394,-621.348 769.933,-611.667"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1670.91,-785.791 1681.23,-783.385 1671.66,-778.832 1670.91,-785.791"/>
 </g>
 <!-- Node34&#45;&gt;Node7 -->
 <g id="edge56" class="edge"><title>Node34&#45;&gt;Node7</title>
-<path fill="none" stroke="midnightblue" d="M1684.5,-504.317C1684.5,-474.376 1684.5,-424.911 1684.5,-405.097"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1681,-504.374 1684.5,-514.374 1688,-504.374 1681,-504.374"/>
+<path fill="none" stroke="midnightblue" d="M1508.5,-504.144C1507.59,-486.359 1503.57,-462.708 1490,-447 1470.54,-424.481 1439.93,-411.973 1412.98,-405.04"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1505,-504.413 1508.72,-514.335 1512,-504.262 1505,-504.413"/>
 </g>
-<!-- Node34&#45;&gt;Node11 -->
-<g id="edge55" class="edge"><title>Node34&#45;&gt;Node11</title>
-<path fill="none" stroke="midnightblue" d="M1606.94,-512.185C1565.09,-502.417 1518.43,-489.669 1500.5,-478 1453.38,-447.327 1467.46,-410.918 1420.5,-380 1371.86,-347.974 1344.39,-374.057 1294.5,-344 1242.81,-312.853 1249.97,-280.572 1200.5,-246 1167.88,-223.2 1147.64,-237.116 1118.5,-210 1096.01,-189.074 1083.07,-153.693 1078,-137.311"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1606.37,-515.644 1616.9,-514.475 1607.94,-508.822 1606.37,-515.644"/>
-</g>
-<!-- Node34&#45;&gt;Node24 -->
-<g id="edge57" class="edge"><title>Node34&#45;&gt;Node24</title>
-<path fill="none" stroke="midnightblue" d="M1600.23,-513.793C1505.25,-497.722 1406.31,-481.582 1347.22,-472.009"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1599.85,-517.279 1610.29,-515.499 1601.02,-510.377 1599.85,-517.279"/>
+<!-- Node34&#45;&gt;Node22 -->
+<g id="edge57" class="edge"><title>Node34&#45;&gt;Node22</title>
+<path fill="none" stroke="midnightblue" d="M1476.76,-508.975C1457.04,-496.649 1432.55,-481.345 1417.61,-472.007"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1475.1,-512.064 1485.43,-514.396 1478.81,-506.128 1475.1,-512.064"/>
 </g>
 <!-- Node35&#45;&gt;Node7 -->
-<g id="edge60" class="edge"><title>Node35&#45;&gt;Node7</title>
-<path fill="none" stroke="midnightblue" d="M1544.1,-508.14C1581.04,-478.772 1647.68,-425.78 1673.69,-405.097"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1541.91,-505.41 1536.26,-514.374 1546.27,-510.889 1541.91,-505.41"/>
-</g>
-<!-- Node35&#45;&gt;Node11 -->
-<g id="edge59" class="edge"><title>Node35&#45;&gt;Node11</title>
-<path fill="none" stroke="midnightblue" d="M1493.31,-507.726C1483.29,-498.977 1471.92,-488.419 1462.5,-478 1426.24,-437.918 1435.11,-410.509 1390.5,-380 1339.6,-345.19 1308.86,-376.573 1256.5,-344 1207.3,-313.391 1217.81,-279.458 1170.5,-246 1135.33,-221.126 1105.73,-244.918 1080.5,-210 1064.56,-187.94 1069.72,-153.126 1073.29,-137.098"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1491.13,-510.465 1500.99,-514.318 1495.68,-505.151 1491.13,-510.465"/>
-</g>
-<!-- Node35&#45;&gt;Node24 -->
-<g id="edge61" class="edge"><title>Node35&#45;&gt;Node24</title>
-<path fill="none" stroke="midnightblue" d="M1460.5,-511.671C1416.72,-498.967 1358.47,-482.064 1323.89,-472.029"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1459.59,-515.05 1470.17,-514.476 1461.54,-508.328 1459.59,-515.05"/>
-</g>
-<!-- Node36&#45;&gt;Node15 -->
-<g id="edge89" class="edge"><title>Node36&#45;&gt;Node15</title>
-<path fill="none" stroke="midnightblue" d="M751.329,-705.371C751.304,-686.975 747.963,-662.422 732.5,-648 646.224,-567.53 568.899,-675.551 469.5,-612 403.489,-569.795 374.5,-541.85 374.5,-463.5 374.5,-463.5 374.5,-463.5 374.5,-126.5 374.5,-69.4427 549.97,-34.5332 647.301,-19.5582"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="747.831,-705.266 751.027,-715.367 754.827,-705.478 747.831,-705.266"/>
-</g>
-<!-- Node36&#45;&gt;Node24 -->
-<g id="edge63" class="edge"><title>Node36&#45;&gt;Node24</title>
-<path fill="none" stroke="midnightblue" d="M769.526,-707.545C799.901,-675.464 860.755,-615.818 923.5,-581 1031.76,-520.926 1175.13,-486.65 1247.87,-472.02"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="766.756,-705.383 762.478,-715.076 771.866,-710.167 766.756,-705.383"/>
+<g id="edge59" class="edge"><title>Node35&#45;&gt;Node7</title>
+<path fill="none" stroke="midnightblue" d="M1640,-509.498C1607.7,-491.902 1557.44,-465.708 1512,-447 1469.86,-429.654 1419.42,-414.152 1387.79,-405.035"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1638.57,-512.704 1649.02,-514.442 1641.93,-506.566 1638.57,-512.704"/>
+</g>
+<!-- Node35&#45;&gt;Node22 -->
+<g id="edge60" class="edge"><title>Node35&#45;&gt;Node22</title>
+<path fill="none" stroke="midnightblue" d="M1605.75,-512.069C1552.8,-499.323 1481.55,-482.169 1439.43,-472.029"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1605.2,-515.538 1615.75,-514.476 1606.84,-508.733 1605.2,-515.538"/>
+</g>
+<!-- Node36&#45;&gt;Node14 -->
+<g id="edge86" class="edge"><title>Node36&#45;&gt;Node14</title>
+<path fill="none" stroke="midnightblue" d="M368.77,-708.036C359.917,-699.235 349.769,-688.825 341,-679 287.375,-618.916 114,-477.034 114,-396.5 114,-396.5 114,-396.5 114,-126.5 114,-65.8226 546.262,-28.1034 716.38,-15.6083"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="366.568,-710.779 376.15,-715.299 371.478,-705.79 366.568,-710.779"/>
+</g>
+<!-- Node36&#45;&gt;Node22 -->
+<g id="edge62" class="edge"><title>Node36&#45;&gt;Node22</title>
+<path fill="none" stroke="midnightblue" d="M457.634,-727.718C636.793,-721.807 1124.19,-698.025 1256,-612 1298.77,-584.088 1279.38,-549.596 1316,-514 1335.45,-495.093 1363.35,-480.659 1382.49,-472.158"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="457.436,-724.222 447.554,-728.042 457.661,-731.219 457.436,-724.222"/>
 </g>
 <!-- Node36&#45;&gt;Node37 -->
-<g id="edge64" class="edge"><title>Node36&#45;&gt;Node37</title>
-<path fill="none" stroke="midnightblue" d="M810.978,-712.85C850.107,-702.252 900.229,-688.677 937.398,-678.611"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="810.018,-709.483 801.281,-715.476 811.848,-716.24 810.018,-709.483"/>
+<g id="edge63" class="edge"><title>Node36&#45;&gt;Node37</title>
+<path fill="none" stroke="midnightblue" d="M457.629,-714.251C506.445,-703.078 571.996,-688.075 618.309,-677.475"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="456.756,-710.86 447.789,-716.503 458.318,-717.683 456.756,-710.86"/>
 </g>
 <!-- Node36&#45;&gt;Node43 -->
-<g id="edge85" class="edge"><title>Node36&#45;&gt;Node43</title>
-<path fill="none" stroke="midnightblue" d="M717.919,-709.451C704.527,-699.452 689.78,-687.6 679.583,-678.577"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="715.894,-712.307 726.029,-715.396 720.033,-706.661 715.894,-712.307"/>
+<g id="edge82" class="edge"><title>Node36&#45;&gt;Node43</title>
+<path fill="none" stroke="midnightblue" d="M342.349,-711.544C316.726,-701.143 286.296,-688.238 264.676,-678.577"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="341.322,-714.904 351.906,-715.396 343.939,-708.411 341.322,-714.904"/>
 </g>
 <!-- Node36&#45;&gt;Node44 -->
-<g id="edge87" class="edge"><title>Node36&#45;&gt;Node44</title>
-<path fill="none" stroke="midnightblue" d="M691.19,-712.58C654.808,-702.022 608.525,-688.592 574.128,-678.611"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="690.592,-716.05 701.171,-715.476 692.543,-709.328 690.592,-716.05"/>
+<g id="edge84" class="edge"><title>Node36&#45;&gt;Node44</title>
+<path fill="none" stroke="midnightblue" d="M400.765,-705.991C404.581,-696.804 408.829,-686.578 412.153,-678.577"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="397.462,-704.818 396.859,-715.396 403.927,-707.504 397.462,-704.818"/>
 </g>
 <!-- Node37&#45;&gt;Node3 -->
-<g id="edge65" class="edge"><title>Node37&#45;&gt;Node3</title>
-<path fill="none" stroke="midnightblue" d="M922.391,-657.751C843.199,-651.105 717.253,-636.95 677.5,-612 640.061,-588.503 620.5,-574.702 620.5,-530.5 620.5,-530.5 620.5,-530.5 620.5,-461.5 620.5,-409.938 669.444,-365.626 698.505,-343.86"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="922.51,-661.273 932.763,-658.602 923.082,-654.296 922.51,-661.273"/>
-</g>
-<!-- Node37&#45;&gt;Node11 -->
-<g id="edge67" class="edge"><title>Node37&#45;&gt;Node11</title>
-<path fill="none" stroke="midnightblue" d="M1025.98,-642.815C1037.16,-634.824 1048.17,-624.494 1054.5,-612 1141.52,-440.269 1046.89,-370.517 1066.5,-179 1068.02,-164.117 1071.42,-147.019 1073.58,-137.023"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1023.86,-640.018 1017.47,-648.467 1027.74,-645.849 1023.86,-640.018"/>
-</g>
-<!-- Node37&#45;&gt;Node21 -->
-<g id="edge66" class="edge"><title>Node37&#45;&gt;Node21</title>
-<path fill="none" stroke="midnightblue" d="M1055.79,-649.347C1080.33,-641.776 1106.91,-630.038 1126.5,-612 1147.42,-592.741 1216.36,-401.895 1234.5,-380 1301.43,-299.219 1411.55,-236.276 1463.15,-209.52"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1054.77,-645.997 1046.15,-652.146 1056.73,-652.719 1054.77,-645.997"/>
+<g id="edge64" class="edge"><title>Node37&#45;&gt;Node3</title>
+<path fill="none" stroke="midnightblue" d="M741.612,-647.645C799.453,-630.35 874,-595.459 874,-530.5 874,-530.5 874,-530.5 874,-461.5 874,-414.638 840.938,-366.949 822.34,-343.83"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="740.406,-644.349 731.759,-650.472 742.337,-651.078 740.406,-644.349"/>
 </g>
-<!-- Node37&#45;&gt;Node24 -->
-<g id="edge76" class="edge"><title>Node37&#45;&gt;Node24</title>
-<path fill="none" stroke="midnightblue" d="M1050.86,-645.289C1075.33,-637.069 1103.16,-625.945 1126.5,-612 1196.68,-570.075 1264.77,-497.006 1286.97,-472.1"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1049.74,-641.975 1041.32,-648.409 1051.91,-648.629 1049.74,-641.975"/>
+<!-- Node37&#45;&gt;Node22 -->
+<g id="edge73" class="edge"><title>Node37&#45;&gt;Node22</title>
+<path fill="none" stroke="midnightblue" d="M741.546,-647.577C783.652,-637.923 839.163,-624.809 888,-612 1075.55,-562.808 1298.97,-495.52 1376.16,-472.016"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="740.656,-644.19 731.688,-649.83 742.216,-651.014 740.656,-644.19"/>
 </g>
 <!-- Node38 -->
 <g id="node38" class="node"><title>Node38</title>
 <g id="a_node38"><a xlink:href="node_2container_8h.html" target="_top" xlink:title="Array/Map container in the DSL graph. ">
-<polygon fill="white" stroke="red" points="1683,-587 1683,-606 1838,-606 1838,-587 1683,-587"/>
-<text text-anchor="middle" x="1760.5" y="-594" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/node/container.h</text>
+<polygon fill="white" stroke="red" points="1092.5,-587 1092.5,-606 1247.5,-606 1247.5,-587 1092.5,-587"/>
+<text text-anchor="middle" x="1170" y="-594" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/node/container.h</text>
 </a>
 </g>
 </g>
 <!-- Node37&#45;&gt;Node38 -->
-<g id="edge68" class="edge"><title>Node37&#45;&gt;Node38</title>
-<path fill="none" stroke="midnightblue" d="M1056.42,-658.098C1173.79,-650.193 1424.37,-632.49 1635.5,-612 1653.55,-610.248 1673.03,-608.108 1691.13,-606.016"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1055.9,-654.625 1046.16,-658.787 1056.37,-661.609 1055.9,-654.625"/>
+<g id="edge65" class="edge"><title>Node37&#45;&gt;Node38</title>
+<path fill="none" stroke="midnightblue" d="M741.873,-653.719C837.099,-641.214 1010.65,-618.425 1104.89,-606.049"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="741.242,-650.271 731.783,-655.044 742.153,-657.212 741.242,-650.271"/>
 </g>
 <!-- Node37&#45;&gt;Node39 -->
-<g id="edge77" class="edge"><title>Node37&#45;&gt;Node39</title>
-<path fill="none" stroke="midnightblue" d="M922.868,-655.155C846.294,-646.38 716.961,-630.538 606.5,-612 601.785,-611.209 596.906,-610.333 592.014,-609.415"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="922.524,-658.638 932.856,-656.294 923.317,-651.683 922.524,-658.638"/>
+<g id="edge74" class="edge"><title>Node37&#45;&gt;Node39</title>
+<path fill="none" stroke="midnightblue" d="M607.84,-654.007C505.769,-641.099 313.389,-616.771 217.834,-604.687"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="607.669,-657.513 618.029,-655.296 608.547,-650.568 607.669,-657.513"/>
 </g>
 <!-- Node37&#45;&gt;Node40 -->
-<g id="edge78" class="edge"><title>Node37&#45;&gt;Node40</title>
-<path fill="none" stroke="midnightblue" d="M982.955,-638.403C982.505,-629.37 982.796,-619.408 983.825,-611.577"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="979.47,-638.736 983.822,-648.396 986.444,-638.131 979.47,-638.736"/>
+<g id="edge75" class="edge"><title>Node37&#45;&gt;Node40</title>
+<path fill="none" stroke="midnightblue" d="M614.848,-645.58C579.975,-635.022 537.107,-621.592 506.476,-611.611"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="613.871,-648.941 624.455,-648.476 615.891,-642.239 613.871,-648.941"/>
 </g>
 <!-- Node41 -->
 <g id="node41" class="node"><title>Node41</title>
 <g id="a_node41"><a xlink:href="executable_8h.html" target="_top" xlink:title="The Relay virtual machine executable. ">
-<polygon fill="white" stroke="black" points="763,-514.5 763,-544.5 876,-544.5 876,-514.5 763,-514.5"/>
-<text text-anchor="start" x="771" y="-532.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
-<text text-anchor="middle" x="819.5" y="-521.5" font-family="Helvetica,sans-Serif" font-size="10.00">/vm/executable.h</text>
+<polygon fill="white" stroke="black" points="542.5,-514.5 542.5,-544.5 655.5,-544.5 655.5,-514.5 542.5,-514.5"/>
+<text text-anchor="start" x="550.5" y="-532.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
+<text text-anchor="middle" x="599" y="-521.5" font-family="Helvetica,sans-Serif" font-size="10.00">/vm/executable.h</text>
 </a>
 </g>
 </g>
 <!-- Node37&#45;&gt;Node41 -->
-<g id="edge83" class="edge"><title>Node37&#45;&gt;Node41</title>
-<path fill="none" stroke="midnightblue" d="M945.035,-644.273C927.166,-635.846 906.947,-624.842 890.5,-612 865.06,-592.136 841.754,-562.174 829.199,-544.65"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="943.584,-647.458 954.133,-648.442 946.499,-641.094 943.584,-647.458"/>
+<g id="edge80" class="edge"><title>Node37&#45;&gt;Node41</title>
+<path fill="none" stroke="midnightblue" d="M642.47,-642.358C631.834,-634.201 620.973,-623.883 614,-612 601.622,-590.906 599.069,-561.887 598.75,-544.771"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="640.718,-645.41 650.874,-648.426 644.815,-639.734 640.718,-645.41"/>
 </g>
 <!-- Node42 -->
 <g id="node42" class="node"><title>Node42</title>
 <g id="a_node42"><a xlink:href="runtime_2vm_2vm_8h.html" target="_top" xlink:title="The Relay virtual machine runtime. ">
-<polygon fill="white" stroke="black" points="848,-447.5 848,-477.5 961,-477.5 961,-447.5 848,-447.5"/>
-<text text-anchor="start" x="856" y="-465.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
-<text text-anchor="middle" x="904.5" y="-454.5" font-family="Helvetica,sans-Serif" font-size="10.00">/vm/vm.h</text>
+<polygon fill="white" stroke="black" points="542.5,-447.5 542.5,-477.5 655.5,-477.5 655.5,-447.5 542.5,-447.5"/>
+<text text-anchor="start" x="550.5" y="-465.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
+<text text-anchor="middle" x="599" y="-454.5" font-family="Helvetica,sans-Serif" font-size="10.00">/vm/vm.h</text>
 </a>
 </g>
 </g>
 <!-- Node37&#45;&gt;Node42 -->
-<g id="edge84" class="edge"><title>Node37&#45;&gt;Node42</title>
-<path fill="none" stroke="midnightblue" d="M953.455,-642.737C942.069,-634.695 930.614,-624.355 923.5,-612 898.338,-568.303 900.401,-505.554 902.876,-477.723"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="951.802,-645.839 962.074,-648.433 955.661,-639.999 951.802,-645.839"/>
+<g id="edge81" class="edge"><title>Node37&#45;&gt;Node42</title>
+<path fill="none" stroke="midnightblue" d="M635.397,-643.681C600.402,-624.488 552.053,-590.998 533,-545 527.727,-532.271 526.898,-526.353 533,-514 540.796,-498.219 556.151,-486.01 570.089,-477.552"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="633.956,-646.879 644.428,-648.483 637.242,-640.698 633.956,-646.879"/>
 </g>
 <!-- Node38&#45;&gt;Node7 -->
-<g id="edge72" class="edge"><title>Node38&#45;&gt;Node7</title>
-<path fill="none" stroke="midnightblue" d="M1777.53,-579.8C1786.21,-570.612 1795.89,-558.235 1800.5,-545 1805.03,-531.988 1805.73,-526.746 1800.5,-514 1779.49,-462.819 1724.23,-422.025 1698.3,-405.047"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1774.92,-577.458 1770.31,-586.995 1779.86,-582.417 1774.92,-577.458"/>
+<g id="edge69" class="edge"><title>Node38&#45;&gt;Node7</title>
+<path fill="none" stroke="midnightblue" d="M1168.5,-576.538C1167.13,-545.45 1169.1,-483.463 1202,-447 1224.18,-422.414 1258.46,-409.753 1289.15,-403.252"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1165.03,-577.055 1169.13,-586.822 1172.02,-576.626 1165.03,-577.055"/>
 </g>
 <!-- Node38&#45;&gt;Node9 -->
-<g id="edge69" class="edge"><title>Node38&#45;&gt;Node9</title>
-<path fill="none" stroke="midnightblue" d="M1788.83,-582.327C1805.08,-573.612 1824.9,-560.871 1838.5,-545 1953.5,-410.826 1989.09,-184.35 1995.4,-137.241"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1787.05,-579.308 1779.74,-586.986 1790.24,-585.537 1787.05,-579.308"/>
+<g id="edge66" class="edge"><title>Node38&#45;&gt;Node9</title>
+<path fill="none" stroke="midnightblue" d="M1161.53,-577.422C1138.03,-524.626 1076.58,-366.776 1121,-246 1143.6,-184.568 1220.22,-151.267 1263.18,-137.028"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1158.47,-579.143 1165.8,-586.791 1164.84,-576.24 1158.47,-579.143"/>
 </g>
 <!-- Node38&#45;&gt;Node10 -->
-<g id="edge70" class="edge"><title>Node38&#45;&gt;Node10</title>
-<path fill="none" stroke="midnightblue" d="M1783.11,-581.017C1795.05,-572.119 1808.79,-559.63 1816.5,-545 1864.92,-453.177 1869.77,-413.427 1843.5,-313 1831.83,-268.409 1799.04,-222.865 1784.54,-204.241"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1781,-578.219 1774.85,-586.841 1785.04,-583.941 1781,-578.219"/>
+<g id="edge67" class="edge"><title>Node38&#45;&gt;Node10</title>
+<path fill="none" stroke="midnightblue" d="M1210.31,-584.027C1272.24,-565.673 1393.73,-526.76 1490,-478 1550.68,-447.266 1777,-299.602 1789,-277 1795.46,-264.83 1794.04,-258.822 1789,-246 1782.07,-228.39 1765.79,-213.281 1754.34,-204.293"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1209.15,-580.722 1200.54,-586.899 1211.12,-587.438 1209.15,-580.722"/>
 </g>
-<!-- Node38&#45;&gt;Node12 -->
-<g id="edge75" class="edge"><title>Node38&#45;&gt;Node12</title>
-<path fill="none" stroke="midnightblue" d="M1768.22,-577.093C1774.31,-559.421 1779.91,-532.722 1767.5,-514 1682.98,-386.513 1593.68,-433.807 1450.5,-380 1402.74,-362.053 1385.69,-369.5 1341.5,-344 1284.79,-311.274 1286.52,-281.498 1231.5,-246 1198.4,-224.642 1173.29,-242.131 1150.5,-210 1125.21,-174.357 1164.61,-149.696 1142.5,-112 1132.43,-94.8367 1112.92,-82.8684 1097.54,-75.6127"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1764.82,-576.187 1764.52,-586.778 1771.36,-578.685 1764.82,-576.187"/>
+<!-- Node38&#45;&gt;Node11 -->
+<g id="edge72" class="edge"><title>Node38&#45;&gt;Node11</title>
+<path fill="none" stroke="midnightblue" d="M1157.04,-579.006C1101.42,-508.149 885.452,-232.181 878,-210 873.612,-196.94 872.588,-191.671 878,-179 894.915,-139.396 908.756,-129.734 948,-112 1030.06,-74.9182 1296.05,-68.2498 1415.26,-67.1532"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1154.34,-581.232 1163.27,-586.936 1159.84,-576.909 1154.34,-581.232"/>
 </g>
-<!-- Node38&#45;&gt;Node18 -->
-<g id="edge71" class="edge"><title>Node38&#45;&gt;Node18</title>
-<path fill="none" stroke="midnightblue" d="M1846.79,-585.579C1902.89,-577.473 1969.38,-564.194 1990.5,-545 2033.78,-505.667 2071.86,-313.967 2079.93,-271.04"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1846.23,-582.124 1836.81,-586.983 1847.2,-589.056 1846.23,-582.124"/>
+<!-- Node38&#45;&gt;Node17 -->
+<g id="edge68" class="edge"><title>Node38&#45;&gt;Node17</title>
+<path fill="none" stroke="midnightblue" d="M1182.41,-578.264C1205.48,-547.458 1257.99,-482.832 1317,-447 1363.14,-418.983 1385.28,-436.227 1433,-411 1452.58,-400.652 1452,-389.508 1472,-380 1539.39,-347.955 1569.96,-378.753 1636,-344 1671.53,-325.302 1703.17,-288.139 1716.42,-271.235"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1179.33,-576.54 1176.23,-586.669 1184.97,-580.69 1179.33,-576.54"/>
 </g>
 <!-- Node38&#45;&gt;Node34 -->
-<g id="edge73" class="edge"><title>Node38&#45;&gt;Node34</title>
-<path fill="none" stroke="midnightblue" d="M1742.29,-579.928C1729.72,-569.179 1713.15,-555.002 1700.97,-544.589"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1740.38,-582.894 1750.25,-586.734 1744.93,-577.574 1740.38,-582.894"/>
+<g id="edge70" class="edge"><title>Node38&#45;&gt;Node34</title>
+<path fill="none" stroke="midnightblue" d="M1224.45,-585.029C1281.56,-574.047 1371.62,-556.726 1434.89,-544.559"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1223.69,-581.61 1214.54,-586.936 1225.02,-588.484 1223.69,-581.61"/>
 </g>
 <!-- Node38&#45;&gt;Node35 -->
-<g id="edge74" class="edge"><title>Node38&#45;&gt;Node35</title>
-<path fill="none" stroke="midnightblue" d="M1718.94,-584.338C1677.97,-573.332 1615.2,-556.474 1570.84,-544.559"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1718.05,-587.722 1728.61,-586.936 1719.86,-580.961 1718.05,-587.722"/>
+<g id="edge71" class="edge"><title>Node38&#45;&gt;Node35</title>
+<path fill="none" stroke="midnightblue" d="M1257.92,-586.341C1343.22,-577.128 1476.23,-561.897 1591,-545 1593.89,-544.574 1596.84,-544.126 1599.82,-543.662"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1257.33,-582.884 1247.76,-587.434 1258.08,-589.844 1257.33,-582.884"/>
 </g>
 <!-- Node40&#45;&gt;Node37 -->
-<g id="edge79" class="edge"><title>Node40&#45;&gt;Node37</title>
-<path fill="none" stroke="midnightblue" d="M996.044,-621.565C996.495,-630.596 996.206,-640.56 995.178,-648.396"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="999.528,-621.236 995.175,-611.577 992.555,-621.842 999.528,-621.236"/>
+<g id="edge76" class="edge"><title>Node40&#45;&gt;Node37</title>
+<path fill="none" stroke="midnightblue" d="M527.322,-614.472C562.172,-625.024 604.962,-638.43 635.546,-648.396"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="528.304,-611.112 517.72,-611.577 526.284,-617.814 528.304,-611.112"/>
 </g>
 <!-- Node40&#45;&gt;Node41 -->
-<g id="edge80" class="edge"><title>Node40&#45;&gt;Node41</title>
-<path fill="none" stroke="midnightblue" d="M943.256,-577.819C915.931,-567.371 881.807,-554.323 856.315,-544.577"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="942.022,-581.094 952.613,-581.396 944.522,-574.555 942.022,-581.094"/>
+<g id="edge77" class="edge"><title>Node40&#45;&gt;Node41</title>
+<path fill="none" stroke="midnightblue" d="M504.867,-576.853C525.738,-566.576 551.226,-554.025 570.414,-544.577"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="503.067,-573.838 495.642,-581.396 506.16,-580.118 503.067,-573.838"/>
 </g>
 <!-- Node40&#45;&gt;Node42 -->
-<g id="edge82" class="edge"><title>Node40&#45;&gt;Node42</title>
-<path fill="none" stroke="midnightblue" d="M974.904,-572.834C957.281,-545.466 928.187,-500.284 913.531,-477.525"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="972.047,-574.861 980.404,-581.374 977.932,-571.071 972.047,-574.861"/>
+<g id="edge79" class="edge"><title>Node40&#45;&gt;Node42</title>
+<path fill="none" stroke="midnightblue" d="M471.965,-571.385C476.65,-553.724 485.223,-530.122 500,-514 514.993,-497.643 536.494,-485.791 555.63,-477.697"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="468.469,-570.952 469.53,-581.493 475.275,-572.591 468.469,-570.952"/>
 </g>
 <!-- Node41&#45;&gt;Node42 -->
-<g id="edge81" class="edge"><title>Node41&#45;&gt;Node42</title>
-<path fill="none" stroke="midnightblue" d="M846.088,-508.168C859.086,-498.228 874.404,-486.514 886.092,-477.577"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="843.761,-505.541 837.944,-514.396 848.013,-511.102 843.761,-505.541"/>
+<g id="edge78" class="edge"><title>Node41&#45;&gt;Node42</title>
+<path fill="none" stroke="midnightblue" d="M599,-504.108C599,-495.154 599,-485.323 599,-477.577"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="595.5,-504.396 599,-514.396 602.5,-504.396 595.5,-504.396"/>
 </g>
 <!-- Node43&#45;&gt;Node36 -->
-<g id="edge86" class="edge"><title>Node43&#45;&gt;Node36</title>
-<path fill="none" stroke="midnightblue" d="M699.04,-684.518C712.429,-694.514 727.18,-706.367 737.386,-715.396"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="701.068,-681.665 690.933,-678.577 696.93,-687.311 701.068,-681.665"/>
+<g id="edge83" class="edge"><title>Node43&#45;&gt;Node36</title>
+<path fill="none" stroke="midnightblue" d="M285.578,-682.426C311.192,-692.823 341.626,-705.728 363.263,-715.396"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="286.609,-679.068 276.025,-678.577 283.992,-685.561 286.609,-679.068"/>
 </g>
 <!-- Node44&#45;&gt;Node42 -->
-<g id="edge88" class="edge"><title>Node44&#45;&gt;Node42</title>
-<path fill="none" stroke="midnightblue" d="M495.892,-642.1C475.273,-625.362 454.02,-601.216 469.5,-581 515.513,-520.909 740.493,-484.528 847.604,-470.346"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="493.928,-645.006 503.982,-648.347 498.206,-639.465 493.928,-645.006"/>
+<g id="edge85" class="edge"><title>Node44&#45;&gt;Node42</title>
+<path fill="none" stroke="midnightblue" d="M405.718,-639.262C398.591,-622.476 392.537,-599.583 401,-581 426.68,-524.616 493.848,-493.435 542.851,-477.607"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="402.555,-640.761 409.936,-648.362 408.906,-637.817 402.555,-640.761"/>
 </g>
 <!-- Node45&#45;&gt;Node41 -->
-<g id="edge92" class="edge"><title>Node45&#45;&gt;Node41</title>
-<path fill="none" stroke="midnightblue" d="M767.606,-574.883C779.155,-565.005 792.69,-553.429 803.041,-544.577"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="765.316,-572.236 759.991,-581.396 769.865,-577.556 765.316,-572.236"/>
+<g id="edge89" class="edge"><title>Node45&#45;&gt;Node41</title>
+<path fill="none" stroke="midnightblue" d="M708.689,-577.407C684.423,-567.029 654.411,-554.195 631.917,-544.577"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="707.447,-580.682 718.018,-581.396 710.2,-574.246 707.447,-580.682"/>
 </g>
 <!-- Node45&#45;&gt;Node42 -->
-<g id="edge93" class="edge"><title>Node45&#45;&gt;Node42</title>
-<path fill="none" stroke="midnightblue" d="M740.092,-571.206C738.987,-553.454 740.578,-529.815 753.5,-514 776.325,-486.066 815.118,-473.488 847.668,-467.87"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="736.628,-571.749 741.051,-581.376 743.597,-571.092 736.628,-571.749"/>
+<g id="edge90" class="edge"><title>Node45&#45;&gt;Node42</title>
+<path fill="none" stroke="midnightblue" d="M740.031,-572.101C730.801,-554.387 716.204,-530.388 698,-514 680.506,-498.251 657.201,-486.043 637.598,-477.579"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="736.985,-573.836 744.595,-581.208 743.243,-570.699 736.985,-573.836"/>
 </g>
 <!-- Node46&#45;&gt;Node7 -->
-<g id="edge109" class="edge"><title>Node46&#45;&gt;Node7</title>
-<path fill="none" stroke="midnightblue" d="M1505.32,-863.967C1671.57,-863.338 2090.5,-849.996 2090.5,-731.5 2090.5,-731.5 2090.5,-731.5 2090.5,-595.5 2090.5,-446.747 1868.03,-409.177 1752.46,-399.695"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1505.21,-860.468 1495.22,-863.991 1505.23,-867.468 1505.21,-860.468"/>
+<g id="edge106" class="edge"><title>Node46&#45;&gt;Node7</title>
+<path fill="none" stroke="midnightblue" d="M1153.35,-846.604C1207.81,-828.277 1276,-793.042 1276,-731.5 1276,-731.5 1276,-731.5 1276,-662.5 1276,-565.004 1276.45,-535.663 1317,-447 1324.54,-430.523 1338.55,-414.677 1347.83,-405.261"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1152.14,-843.316 1143.71,-849.724 1154.3,-849.976 1152.14,-843.316"/>
 </g>
 <!-- Node46&#45;&gt;Node9 -->
-<g id="edge100" class="edge"><title>Node46&#45;&gt;Node9</title>
-<path fill="none" stroke="midnightblue" d="M1505.37,-862.139C1694.7,-857.928 2225.34,-843.514 2297.5,-813 2346.9,-792.11 2386.5,-785.135 2386.5,-731.5 2386.5,-731.5 2386.5,-731.5 2386.5,-595.5 2386.5,-502.177 2453.5,-489.823 2453.5,-396.5 2453.5,-396.5 2453.5,-396.5 2453.5,-260.5 2453.5,-223.306 2462.26,-203.755 2434.5,-179 2406.36,-153.908 2162.16,-137.529 2051.09,-131.312"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1505.2,-858.642 1495.28,-862.361 1505.35,-865.641 1505.2,-858.642"/>
+<g id="edge97" class="edge"><title>Node46&#45;&gt;Node9</title>
+<path fill="none" stroke="midnightblue" d="M1074.5,-840.13C1070.4,-831.749 1066.1,-822.116 1063,-813 1011.89,-662.575 988,-622.37 988,-463.5 988,-463.5 988,-463.5 988,-327.5 988,-206.581 1158.22,-155.267 1244.33,-137.045"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1071.5,-841.954 1079.15,-849.287 1077.75,-838.786 1071.5,-841.954"/>
 </g>
 <!-- Node46&#45;&gt;Node10 -->
-<g id="edge101" class="edge"><title>Node46&#45;&gt;Node10</title>
-<path fill="none" stroke="midnightblue" d="M1505.44,-861.208C1660.49,-855.484 2034.88,-839.114 2085.5,-813 2119.99,-795.208 2127.99,-782.382 2141.5,-746 2180.18,-641.831 2220.57,-329.717 2147.5,-246 2107.53,-200.208 1927.63,-194.404 1834.21,-194.576"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1504.98,-857.723 1495.11,-861.586 1505.23,-864.718 1504.98,-857.723"/>
+<g id="edge98" class="edge"><title>Node46&#45;&gt;Node10</title>
+<path fill="none" stroke="midnightblue" d="M1153.81,-862.796C1315.54,-860.502 1724.35,-851.006 1855,-813 1927.48,-791.916 2004,-806.981 2004,-731.5 2004,-731.5 2004,-731.5 2004,-528.5 2004,-367.033 1811.38,-237.644 1756.46,-204.031"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1153.48,-859.3 1143.52,-862.937 1153.57,-866.3 1153.48,-859.3"/>
 </g>
-<!-- Node46&#45;&gt;Node14 -->
-<g id="edge131" class="edge"><title>Node46&#45;&gt;Node14</title>
-<path fill="none" stroke="midnightblue" d="M1371.7,-862.566C1247.14,-859.951 984.635,-850.079 902.5,-813 884.159,-804.72 887.656,-790.678 869.5,-782 764.247,-731.692 723.816,-769.259 609.5,-746 459.404,-715.461 298.5,-750.671 298.5,-597.5 298.5,-597.5 298.5,-597.5 298.5,-126.5 298.5,-70.3457 367.126,-34.819 404.656,-19.5888"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1371.8,-866.069 1381.87,-862.77 1371.94,-859.07 1371.8,-866.069"/>
+<!-- Node46&#45;&gt;Node13 -->
+<g id="edge128" class="edge"><title>Node46&#45;&gt;Node13</title>
+<path fill="none" stroke="midnightblue" d="M1020.03,-863.152C856.408,-861.664 440.078,-853.852 308,-813 161.32,-767.632 38,-751.036 38,-597.5 38,-597.5 38,-597.5 38,-126.5 38,-73.8728 202.031,-35.677 286.289,-19.5285"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1020.41,-866.655 1030.44,-863.241 1020.47,-859.656 1020.41,-866.655"/>
+</g>
+<!-- Node46&#45;&gt;Node16 -->
+<g id="edge99" class="edge"><title>Node46&#45;&gt;Node16</title>
+<path fill="none" stroke="midnightblue" d="M1153.67,-861.978C1361.25,-856.967 1984.18,-839.759 2018,-813 2047.61,-789.572 2042,-769.26 2042,-731.5 2042,-731.5 2042,-731.5 2042,-461.5 2042,-413.697 2020.83,-359.147 2011.9,-338.261"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1153.43,-858.483 1143.52,-862.222 1153.6,-865.481 1153.43,-858.483"/>
 </g>
 <!-- Node46&#45;&gt;Node17 -->
-<g id="edge102" class="edge"><title>Node46&#45;&gt;Node17</title>
-<path fill="none" stroke="midnightblue" d="M1505.27,-863.32C1683.76,-862.266 2162.26,-855.467 2218.5,-813 2249.18,-789.831 2247.5,-769.947 2247.5,-731.5 2247.5,-731.5 2247.5,-731.5 2247.5,-461.5 2247.5,-414.946 2250.64,-359.222 2251.95,-338.144"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1505.2,-859.821 1495.22,-863.375 1505.24,-866.82 1505.2,-859.821"/>
+<g id="edge100" class="edge"><title>Node46&#45;&gt;Node17</title>
+<path fill="none" stroke="midnightblue" d="M1154.01,-864.323C1308.91,-865.343 1687.38,-862.517 1803,-813 1893.09,-774.417 1966,-762.501 1966,-664.5 1966,-664.5 1966,-664.5 1966,-528.5 1966,-394.957 1796.99,-299.149 1740.87,-271.018"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1153.72,-860.821 1143.69,-864.247 1153.66,-867.821 1153.72,-860.821"/>
 </g>
-<!-- Node46&#45;&gt;Node18 -->
-<g id="edge103" class="edge"><title>Node46&#45;&gt;Node18</title>
-<path fill="none" stroke="midnightblue" d="M1505.78,-863.268C1641.96,-861.922 1945.31,-854.231 2039.5,-813 2088.63,-791.491 2128.5,-785.135 2128.5,-731.5 2128.5,-731.5 2128.5,-731.5 2128.5,-394.5 2128.5,-345.361 2099.24,-291.584 2086.89,-271.083"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1505.45,-859.77 1495.48,-863.36 1505.51,-866.77 1505.45,-859.77"/>
+<!-- Node46&#45;&gt;Node20 -->
+<g id="edge109" class="edge"><title>Node46&#45;&gt;Node20</title>
+<path fill="none" stroke="midnightblue" d="M1020.35,-864.403C908.786,-864.471 677.853,-858.425 489,-813 339.884,-777.133 255.077,-807.269 171,-679 163.447,-667.477 164.448,-660.12 171,-648 184.894,-622.3 209.122,-635.846 226,-612 247.547,-581.557 246,-567.797 246,-530.5 246,-530.5 246,-530.5 246,-327.5 246,-240.845 527.848,-209.994 670.423,-199.931"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1020.46,-867.903 1030.45,-864.379 1020.44,-860.903 1020.46,-867.903"/>
 </g>
 <!-- Node46&#45;&gt;Node22 -->
-<g id="edge112" class="edge"><title>Node46&#45;&gt;Node22</title>
-<path fill="none" stroke="midnightblue" d="M1505.47,-861.581C1686.46,-856.076 2176.33,-838.942 2244.5,-813 2299.38,-792.113 2348.5,-790.224 2348.5,-731.5 2348.5,-731.5 2348.5,-731.5 2348.5,-595.5 2348.5,-453.919 2353.17,-418.574 2354.5,-277 2354.63,-263.223 2355.84,-259.712 2354.5,-246 2353.03,-231.057 2349.19,-213.978 2346.71,-204.002"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1505.18,-858.089 1495.29,-861.889 1505.39,-865.085 1505.18,-858.089"/>
+<g id="edge107" class="edge"><title>Node46&#45;&gt;Node22</title>
+<path fill="none" stroke="midnightblue" d="M1153.63,-853.331C1194.28,-845.585 1246.6,-832.808 1290,-813 1347.32,-786.84 1406,-794.508 1406,-731.5 1406,-731.5 1406,-731.5 1406,-595.5 1406,-548.974 1404.74,-493.234 1404.22,-472.148"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1152.76,-849.933 1143.56,-855.193 1154.03,-856.816 1152.76,-849.933"/>
 </g>
 <!-- Node46&#45;&gt;Node24 -->
-<g id="edge110" class="edge"><title>Node46&#45;&gt;Node24</title>
-<path fill="none" stroke="midnightblue" d="M1430.82,-839.528C1412.43,-782.575 1363.85,-634.867 1316.5,-514 1310.71,-499.219 1303.18,-482.373 1298.6,-472.355"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1427.6,-840.932 1433.99,-849.378 1434.26,-838.786 1427.6,-840.932"/>
-</g>
-<!-- Node46&#45;&gt;Node26 -->
-<g id="edge111" class="edge"><title>Node46&#45;&gt;Node26</title>
-<path fill="none" stroke="midnightblue" d="M1371.75,-858.774C1266.06,-851.01 1053.77,-834.27 874.5,-813 860.153,-811.298 844.67,-809.151 830.342,-807.036"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1371.72,-862.282 1381.95,-859.52 1372.23,-855.3 1371.72,-862.282"/>
+<g id="edge108" class="edge"><title>Node46&#45;&gt;Node24</title>
+<path fill="none" stroke="midnightblue" d="M1154.04,-855.31C1232.48,-845.633 1366.26,-828.84 1481,-813 1494.54,-811.13 1509.13,-809.023 1522.8,-807.007"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1153.31,-851.873 1143.81,-856.57 1154.16,-858.821 1153.31,-851.873"/>
 </g>
 <!-- Node46&#45;&gt;Node36 -->
-<g id="edge126" class="edge"><title>Node46&#45;&gt;Node36</title>
-<path fill="none" stroke="midnightblue" d="M1371.83,-859.773C1264.25,-853.131 1057.45,-837.593 988.5,-813 964.58,-804.469 963.73,-792.262 940.5,-782 897.203,-762.874 845.083,-749.694 806.234,-741.606"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1371.73,-863.273 1381.92,-860.388 1372.15,-856.286 1371.73,-863.273"/>
+<g id="edge123" class="edge"><title>Node46&#45;&gt;Node36</title>
+<path fill="none" stroke="midnightblue" d="M1020.27,-863.572C921.557,-862.193 731.523,-853.829 576,-813 516.085,-797.271 450.848,-764.49 416.108,-745.647"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1020.36,-867.073 1030.4,-863.693 1020.44,-860.074 1020.36,-867.073"/>
 </g>
 <!-- Node46&#45;&gt;Node37 -->
-<g id="edge127" class="edge"><title>Node46&#45;&gt;Node37</title>
-<path fill="none" stroke="midnightblue" d="M1371.55,-862.137C1276.3,-858.954 1107.84,-848.347 1059.5,-813 1013.95,-779.689 996.984,-709.072 991.636,-678.784"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1371.56,-865.639 1381.66,-862.456 1371.78,-858.642 1371.56,-865.639"/>
+<g id="edge124" class="edge"><title>Node46&#45;&gt;Node37</title>
+<path fill="none" stroke="midnightblue" d="M1020.37,-860.953C933.707,-856.423 788.378,-844.284 746,-813 700.479,-779.395 682.901,-708.939 677.272,-678.739"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1020.21,-864.45 1030.38,-861.457 1020.56,-857.459 1020.21,-864.45"/>
 </g>
 <!-- Node46&#45;&gt;Node38 -->
-<g id="edge104" class="edge"><title>Node46&#45;&gt;Node38</title>
-<path fill="none" stroke="midnightblue" d="M1454.69,-841.253C1486.6,-798.726 1561.67,-705.06 1643.5,-648 1671.27,-628.639 1707.63,-614.382 1732.4,-606.044"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1451.87,-839.171 1448.72,-849.286 1457.49,-843.347 1451.87,-839.171"/>
+<g id="edge101" class="edge"><title>Node46&#45;&gt;Node38</title>
+<path fill="none" stroke="midnightblue" d="M1153.09,-846.143C1169.18,-838.644 1184.49,-828.011 1194,-813 1237.73,-744.003 1190.08,-637.136 1174.55,-606.177"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1151.38,-843.065 1143.56,-850.206 1154.13,-849.504 1151.38,-843.065"/>
 </g>
 <!-- Node46&#45;&gt;Node40 -->
-<g id="edge125" class="edge"><title>Node46&#45;&gt;Node40</title>
-<path fill="none" stroke="midnightblue" d="M1371.73,-848.984C1340.57,-840.772 1303.59,-828.914 1272.5,-813 1257.73,-805.438 1068.16,-658.578 1007.81,-611.73"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1371.07,-852.428 1381.63,-851.522 1372.81,-845.647 1371.07,-852.428"/>
+<g id="edge122" class="edge"><title>Node46&#45;&gt;Node40</title>
+<path fill="none" stroke="midnightblue" d="M1020.18,-859.138C927.976,-852.326 767.099,-837.359 714,-813 613.559,-766.923 611.906,-719.912 528,-648 513.49,-635.564 496.753,-621.744 484.485,-611.71"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1020.02,-862.636 1030.25,-859.87 1020.53,-855.654 1020.02,-862.636"/>
 </g>
 <!-- Node46&#45;&gt;Node41 -->
-<g id="edge128" class="edge"><title>Node46&#45;&gt;Node41</title>
-<path fill="none" stroke="midnightblue" d="M1371.81,-864.187C1288.76,-863.06 1143.67,-854.507 1027.5,-813 1003.23,-804.329 1000.28,-795.773 978.5,-782 884.29,-722.425 865.676,-698.881 766.5,-648 728.536,-628.523 701.353,-647.379 677.5,-612 652.235,-574.526 713.131,-552.163 762.907,-540.595"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1371.83,-867.688 1381.87,-864.292 1371.91,-860.688 1371.83,-867.688"/>
+<g id="edge125" class="edge"><title>Node46&#45;&gt;Node41</title>
+<path fill="none" stroke="midnightblue" d="M1019.9,-860.623C941.355,-855.873 817.71,-843.637 784,-813 727.834,-761.955 790.132,-704.982 740,-648 712.087,-616.274 684.557,-638.939 652,-612 629.504,-593.385 613.209,-562.71 605.007,-544.776"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1020,-864.136 1030.19,-861.219 1020.41,-857.147 1020,-864.136"/>
 </g>
 <!-- Node46&#45;&gt;Node42 -->
-<g id="edge130" class="edge"><title>Node46&#45;&gt;Node42</title>
-<path fill="none" stroke="midnightblue" d="M1411.64,-843.383C1317.47,-772.838 1003.58,-537.716 923.317,-477.595"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1409.71,-846.304 1419.81,-849.498 1413.9,-840.702 1409.71,-846.304"/>
+<g id="edge127" class="edge"><title>Node46&#45;&gt;Node42</title>
+<path fill="none" stroke="midnightblue" d="M1039.47,-845.624C990.05,-825.72 911.844,-790.422 853,-746 805.803,-710.37 810.778,-682.848 763,-648 732.153,-625.501 708.851,-641.814 685,-612 657.173,-577.217 687.687,-551.724 664,-514 654.425,-498.751 638.529,-486.279 624.854,-477.545"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1038.37,-848.953 1048.95,-849.398 1040.95,-842.449 1038.37,-848.953"/>
 </g>
 <!-- Node46&#45;&gt;Node44 -->
-<g id="edge129" class="edge"><title>Node46&#45;&gt;Node44</title>
-<path fill="none" stroke="midnightblue" d="M1371.5,-861.461C1254.19,-857.109 1016.78,-844.594 940.5,-813 920.363,-804.66 922.369,-790.959 902.5,-782 812.579,-741.454 777.145,-776.989 683.5,-746 632.075,-728.982 576.902,-696.995 547.289,-678.56"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1371.67,-864.97 1381.79,-861.833 1371.92,-857.974 1371.67,-864.97"/>
+<g id="edge126" class="edge"><title>Node46&#45;&gt;Node44</title>
+<path fill="none" stroke="midnightblue" d="M1020.2,-864.216C933.186,-863.26 777.378,-855.031 652,-813 560.988,-782.49 469.718,-709.348 433.864,-678.549"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1020.44,-867.718 1030.46,-864.299 1020.49,-860.719 1020.44,-867.718"/>
 </g>
 <!-- Node47 -->
 <g id="node47" class="node"><title>Node47</title>
 <g id="a_node47"><a xlink:href="functor_8h.html" target="_top" xlink:title="Defines the Functor data structures. ">
-<polygon fill="white" stroke="red" points="1482.5,-587 1482.5,-606 1626.5,-606 1626.5,-587 1482.5,-587"/>
-<text text-anchor="middle" x="1554.5" y="-594" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/node/functor.h</text>
+<polygon fill="white" stroke="red" points="1487,-587 1487,-606 1631,-606 1631,-587 1487,-587"/>
+<text text-anchor="middle" x="1559" y="-594" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/node/functor.h</text>
 </a>
 </g>
 </g>
 <!-- Node46&#45;&gt;Node47 -->
-<g id="edge105" class="edge"><title>Node46&#45;&gt;Node47</title>
-<path fill="none" stroke="midnightblue" d="M1441.19,-839.296C1446.73,-797.548 1462.24,-710.476 1500.5,-648 1511.23,-630.486 1529.64,-615.138 1541.94,-606.09"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1437.7,-838.997 1439.94,-849.353 1444.65,-839.863 1437.7,-838.997"/>
+<g id="edge102" class="edge"><title>Node46&#45;&gt;Node47</title>
+<path fill="none" stroke="midnightblue" d="M1153.52,-859.845C1237.76,-854.132 1377.13,-840.736 1420,-813 1501.86,-760.04 1545.39,-639.982 1556.26,-606.385"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1153.28,-856.353 1143.53,-860.504 1153.74,-863.338 1153.28,-856.353"/>
 </g>
 <!-- Node48 -->
 <g id="node48" class="node"><title>Node48</title>
 <g id="a_node48"><a xlink:href="runtime_2container_8h.html" target="_top" xlink:title="Common POD(plain old data) container types. ">
-<polygon fill="white" stroke="red" points="1628,-715.5 1628,-745.5 1741,-745.5 1741,-715.5 1628,-715.5"/>
-<text text-anchor="start" x="1636" y="-733.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
-<text text-anchor="middle" x="1684.5" y="-722.5" font-family="Helvetica,sans-Serif" font-size="10.00">/container.h</text>
+<polygon fill="white" stroke="red" points="902.5,-715.5 902.5,-745.5 1015.5,-745.5 1015.5,-715.5 902.5,-715.5"/>
+<text text-anchor="start" x="910.5" y="-733.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
+<text text-anchor="middle" x="959" y="-722.5" font-family="Helvetica,sans-Serif" font-size="10.00">/container.h</text>
 </a>
 </g>
 </g>
 <!-- Node46&#45;&gt;Node48 -->
-<g id="edge113" class="edge"><title>Node46&#45;&gt;Node48</title>
-<path fill="none" stroke="midnightblue" d="M1505.41,-856.363C1563.66,-848.791 1642.18,-834.86 1664.5,-813 1682.61,-795.262 1685.34,-763.986 1685.23,-745.769"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1504.64,-852.933 1495.15,-857.658 1505.51,-859.877 1504.64,-852.933"/>
+<g id="edge110" class="edge"><title>Node46&#45;&gt;Node48</title>
+<path fill="none" stroke="midnightblue" d="M1062.21,-842.764C1051.96,-833.885 1040.14,-823.222 1030,-813 1007.67,-790.482 983.88,-762.176 970.289,-745.538"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1060.15,-845.611 1070.02,-849.46 1064.71,-840.296 1060.15,-845.611"/>
 </g>
 <!-- Node49 -->
 <g id="node49" class="node"><title>Node49</title>
 <g id="a_node49"><a xlink:href="runtime_2memory_8h.html" target="_top" xlink:title="Runtime memory management. ">
-<polygon fill="white" stroke="black" points="1543,-782.5 1543,-812.5 1656,-812.5 1656,-782.5 1543,-782.5"/>
-<text text-anchor="start" x="1551" y="-800.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
-<text text-anchor="middle" x="1599.5" y="-789.5" font-family="Helvetica,sans-Serif" font-size="10.00">/memory.h</text>
+<polygon fill="white" stroke="black" points="1072.5,-782.5 1072.5,-812.5 1185.5,-812.5 1185.5,-782.5 1072.5,-782.5"/>
+<text text-anchor="start" x="1080.5" y="-800.5" font-family="Helvetica,sans-Serif" font-size="10.00">include/tvm/runtime</text>
+<text text-anchor="middle" x="1129" y="-789.5" font-family="Helvetica,sans-Serif" font-size="10.00">/memory.h</text>
 </a>
 </g>
 </g>
 <!-- Node46&#45;&gt;Node49 -->
-<g id="edge119" class="edge"><title>Node46&#45;&gt;Node49</title>
-<path fill="none" stroke="midnightblue" d="M1482.98,-845.544C1508.74,-835.143 1540.7,-822.238 1564.63,-812.577"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1481.4,-842.407 1473.43,-849.396 1484.02,-848.898 1481.4,-842.407"/>
+<g id="edge116" class="edge"><title>Node46&#45;&gt;Node49</title>
+<path fill="none" stroke="midnightblue" d="M1101.62,-840.867C1107.7,-831.459 1114.57,-820.833 1119.9,-812.577"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1098.6,-839.097 1096.11,-849.396 1104.48,-842.896 1098.6,-839.097"/>
 </g>
-<!-- Node47&#45;&gt;Node12 -->
-<g id="edge108" class="edge"><title>Node47&#45;&gt;Node12</title>
-<path fill="none" stroke="midnightblue" d="M1507.56,-584.022C1483.67,-576.245 1455.53,-563.866 1435.5,-545 1375.89,-488.854 1416.82,-431.934 1353.5,-380 1306.15,-341.163 1274.53,-372.831 1220.5,-344 1105.95,-282.871 1053.85,-264.664 1008.5,-143 1003.69,-130.09 1002.02,-124.159 1008.5,-112 1017.79,-94.5648 1037.17,-82.6843 1052.71,-75.5289"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1506.57,-587.38 1517.16,-586.98 1508.63,-580.69 1506.57,-587.38"/>
+<!-- Node47&#45;&gt;Node11 -->
+<g id="edge105" class="edge"><title>Node47&#45;&gt;Node11</title>
+<path fill="none" stroke="midnightblue" d="M1637.6,-585.399C1686.2,-577.359 1742.06,-564.23 1757,-545 1765.45,-534.12 1760.84,-527.233 1757,-514 1699.19,-314.631 1522.89,-118.9 1482.37,-75.7633"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1637,-581.951 1627.68,-586.989 1638.1,-588.863 1637,-581.951"/>
 </g>
 <!-- Node47&#45;&gt;Node34 -->
-<g id="edge106" class="edge"><title>Node47&#45;&gt;Node34</title>
-<path fill="none" stroke="midnightblue" d="M1580.82,-582.342C1602.7,-571.4 1633.78,-555.86 1656.2,-544.652"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1579.14,-579.266 1571.76,-586.869 1582.27,-585.527 1579.14,-579.266"/>
+<g id="edge103" class="edge"><title>Node47&#45;&gt;Node34</title>
+<path fill="none" stroke="midnightblue" d="M1545.8,-578.673C1537.5,-568.094 1526.91,-554.602 1519.05,-544.589"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1543.2,-581.027 1552.12,-586.734 1548.7,-576.706 1543.2,-581.027"/>
 </g>
 <!-- Node47&#45;&gt;Node35 -->
-<g id="edge107" class="edge"><title>Node47&#45;&gt;Node35</title>
-<path fill="none" stroke="midnightblue" d="M1544.65,-577.713C1538.87,-567.287 1531.68,-554.301 1526.3,-544.589"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1541.74,-579.682 1549.64,-586.734 1547.86,-576.29 1541.74,-579.682"/>
+<g id="edge104" class="edge"><title>Node47&#45;&gt;Node35</title>
+<path fill="none" stroke="midnightblue" d="M1583.32,-581.753C1602.65,-570.827 1629.59,-555.603 1649.07,-544.589"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1581.49,-578.766 1574.51,-586.734 1584.94,-584.86 1581.49,-578.766"/>
 </g>
 <!-- Node48&#45;&gt;Node7 -->
-<g id="edge115" class="edge"><title>Node48&#45;&gt;Node7</title>
-<path fill="none" stroke="midnightblue" d="M1751.23,-717.795C1814.68,-702.378 1900.5,-668.418 1900.5,-597.5 1900.5,-597.5 1900.5,-597.5 1900.5,-528.5 1900.5,-452.25 1801.29,-418.724 1737.34,-405.034"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1750.2,-714.44 1741.25,-720.113 1751.78,-721.259 1750.2,-714.44"/>
+<g id="edge112" class="edge"><title>Node48&#45;&gt;Node7</title>
+<path fill="none" stroke="midnightblue" d="M959.041,-705.285C960.297,-662.504 968.38,-572.549 1012,-514 1076.55,-427.36 1208.47,-404.023 1289.18,-398.089"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="955.541,-705.268 958.849,-715.333 962.539,-705.402 955.541,-705.268"/>
 </g>
-<!-- Node48&#45;&gt;Node22 -->
-<g id="edge116" class="edge"><title>Node48&#45;&gt;Node22</title>
-<path fill="none" stroke="midnightblue" d="M1751.19,-716.44C1790.15,-707.842 1839.88,-695.18 1882.5,-679 2004.04,-632.864 2039.99,-624.945 2142.5,-545 2236.36,-471.799 2268.33,-451.472 2319.5,-344 2342.88,-294.902 2344.77,-227.555 2344.66,-204.026"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1750.05,-713.107 1741.02,-718.647 1751.53,-719.947 1750.05,-713.107"/>
+<!-- Node48&#45;&gt;Node20 -->
+<g id="edge113" class="edge"><title>Node48&#45;&gt;Node20</title>
+<path fill="none" stroke="midnightblue" d="M945.388,-706.503C931.508,-680.834 912,-637.621 912,-597.5 912,-597.5 912,-597.5 912,-394.5 912,-357.306 912.576,-344.625 893,-313 860.664,-260.762 796.957,-220.787 767.124,-204.065"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="942.447,-708.415 950.379,-715.438 948.558,-705.001 942.447,-708.415"/>
 </g>
 <!-- Node48&#45;&gt;Node37 -->
-<g id="edge117" class="edge"><title>Node48&#45;&gt;Node37</title>
-<path fill="none" stroke="midnightblue" d="M1617.62,-723.245C1481.32,-710.498 1173.54,-681.712 1046.25,-669.807"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1617.64,-726.762 1627.92,-724.208 1618.29,-719.792 1617.64,-726.762"/>
+<g id="edge114" class="edge"><title>Node48&#45;&gt;Node37</title>
+<path fill="none" stroke="midnightblue" d="M892.371,-714.251C843.555,-703.078 778.004,-688.075 731.691,-677.475"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="891.682,-717.683 902.211,-716.503 893.244,-710.86 891.682,-717.683"/>
 </g>
 <!-- Node48&#45;&gt;Node38 -->
-<g id="edge114" class="edge"><title>Node48&#45;&gt;Node38</title>
-<path fill="none" stroke="midnightblue" d="M1697.75,-706.489C1714.91,-676.683 1743.99,-626.172 1755.55,-606.097"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1694.59,-704.961 1692.63,-715.374 1700.66,-708.454 1694.59,-704.961"/>
+<g id="edge111" class="edge"><title>Node48&#45;&gt;Node38</title>
+<path fill="none" stroke="midnightblue" d="M990.079,-710.057C1036.7,-680.889 1122.85,-626.997 1156.26,-606.097"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="988.202,-707.103 981.581,-715.374 991.915,-713.037 988.202,-707.103"/>
 </g>
 <!-- Node48&#45;&gt;Node42 -->
-<g id="edge118" class="edge"><title>Node48&#45;&gt;Node42</title>
-<path fill="none" stroke="midnightblue" d="M1633.87,-712.234C1489.53,-663.011 1077.91,-522.636 945.83,-477.595"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1632.79,-715.565 1643.39,-715.48 1635.05,-708.94 1632.79,-715.565"/>
+<g id="edge115" class="edge"><title>Node48&#45;&gt;Node42</title>
+<path fill="none" stroke="midnightblue" d="M932.079,-708.972C921.115,-700.17 908.563,-689.508 898,-679 857.737,-638.948 857.385,-619.892 816,-581 779.401,-546.606 768.884,-538.427 725,-514 698.437,-499.214 666.65,-486.507 641.65,-477.537"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="930.078,-711.852 940.094,-715.307 934.419,-706.36 930.078,-711.852"/>
 </g>
 <!-- Node49&#45;&gt;Node7 -->
-<g id="edge121" class="edge"><title>Node49&#45;&gt;Node7</title>
-<path fill="none" stroke="midnightblue" d="M1666.14,-784.585C1764.88,-765.007 1938.5,-722.356 1938.5,-664.5 1938.5,-664.5 1938.5,-664.5 1938.5,-528.5 1938.5,-444.389 1827.48,-413.798 1752.23,-402.724"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1665.38,-781.167 1656.24,-786.52 1666.73,-788.036 1665.38,-781.167"/>
+<g id="edge118" class="edge"><title>Node49&#45;&gt;Node7</title>
+<path fill="none" stroke="midnightblue" d="M1116.3,-773.445C1096.03,-733.568 1060.83,-649.42 1083,-581 1105.66,-511.057 1119.78,-489.183 1180,-447 1212.19,-424.449 1254.31,-411.91 1289.38,-404.968"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1113.23,-775.131 1120.96,-782.372 1119.43,-771.888 1113.23,-775.131"/>
 </g>
-<!-- Node49&#45;&gt;Node24 -->
-<g id="edge122" class="edge"><title>Node49&#45;&gt;Node24</title>
-<path fill="none" stroke="midnightblue" d="M1582.19,-774.243C1543.57,-725.142 1446.55,-604.696 1354.5,-514 1338.59,-498.328 1318.44,-482.011 1306.01,-472.306"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1579.57,-776.582 1588.5,-782.294 1585.08,-772.264 1579.57,-776.582"/>
+<!-- Node49&#45;&gt;Node22 -->
+<g id="edge119" class="edge"><title>Node49&#45;&gt;Node22</title>
+<path fill="none" stroke="midnightblue" d="M1147.97,-774.196C1177.33,-739.623 1235.44,-670.933 1284,-612 1327.42,-559.302 1378.66,-495.258 1397.02,-472.251"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1144.96,-772.337 1141.15,-782.224 1150.29,-776.869 1144.96,-772.337"/>
 </g>
 <!-- Node49&#45;&gt;Node38 -->
-<g id="edge120" class="edge"><title>Node49&#45;&gt;Node38</title>
-<path fill="none" stroke="midnightblue" d="M1601.06,-772.217C1603.1,-755.137 1607.77,-732.329 1618.5,-715 1650.84,-662.762 1714.54,-622.787 1744.38,-606.065"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1597.55,-772.208 1600.07,-782.498 1604.51,-772.881 1597.55,-772.208"/>
+<g id="edge117" class="edge"><title>Node49&#45;&gt;Node38</title>
+<path fill="none" stroke="midnightblue" d="M1134,-772.244C1143.15,-727.823 1162.21,-635.314 1168.17,-606.36"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1130.54,-771.688 1131.95,-782.188 1137.39,-773.1 1130.54,-771.688"/>
 </g>
 <!-- Node49&#45;&gt;Node40 -->
-<g id="edge124" class="edge"><title>Node49&#45;&gt;Node40</title>
-<path fill="none" stroke="midnightblue" d="M1538.66,-779.719C1504.91,-770.215 1462.25,-757.882 1424.5,-746 1277.4,-699.697 1104.5,-638.678 1028.58,-611.547"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1537.81,-783.115 1548.38,-782.448 1539.7,-776.375 1537.81,-783.115"/>
+<g id="edge121" class="edge"><title>Node49&#45;&gt;Node40</title>
+<path fill="none" stroke="midnightblue" d="M1062.41,-792.383C989.169,-786.661 868.514,-773.699 768,-746 694.071,-725.628 675.861,-716.551 609,-679 588.458,-667.463 586.108,-660.278 566,-648 543.769,-634.426 517.454,-621.125 497.544,-611.583"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1062.19,-795.876 1072.43,-793.143 1062.72,-788.896 1062.19,-795.876"/>
 </g>
 <!-- Node49&#45;&gt;Node48 -->
-<g id="edge123" class="edge"><title>Node49&#45;&gt;Node48</title>
-<path fill="none" stroke="midnightblue" d="M1626.09,-776.168C1639.09,-766.228 1654.4,-754.514 1666.09,-745.577"/>
-<polygon fill="midnightblue" stroke="midnightblue" points="1623.76,-773.541 1617.94,-782.396 1628.01,-779.102 1623.76,-773.541"/>
+<g id="edge120" class="edge"><title>Node49&#45;&gt;Node48</title>
+<path fill="none" stroke="midnightblue" d="M1082.76,-778.819C1055.43,-768.371 1021.31,-755.323 995.815,-745.577"/>
+<polygon fill="midnightblue" stroke="midnightblue" points="1081.52,-782.094 1092.11,-782.396 1084.02,-775.555 1081.52,-782.094"/>
 </g>
 </g>
 </svg>
diff --git a/docs/api/doxygen/classes.html b/docs/api/doxygen/classes.html
index faff824..d7edf4d 100644
--- a/docs/api/doxygen/classes.html
+++ b/docs/api/doxygen/classes.html
@@ -90,187 +90,188 @@ var searchBox = new SearchBox("searchBox", "search",false,'Search');
 <div class="qindex"><a class="qindex" href="#letter_A">A</a>&#160;|&#160;<a class="qindex" href="#letter_B">B</a>&#160;|&#160;<a class="qindex" href="#letter_C">C</a>&#160;|&#160;<a class="qindex" href="#letter_D">D</a>&#160;|&#160;<a class="qindex" href="#letter_E">E</a>&#160;|&#160;<a class="qindex" href="#letter_F">F</a>&#160;|&#160;<a class="qindex" href="#letter_G">G</a>&#160;|&#160;<a class="qindex" href="#letter_H">H</a>&#160;|&#160;<a class="qindex" href="#letter_I">I</a>&#160;|& [...]
 <table class="classindex">
 <tr><td rowspan="2" valign="bottom"><a name="letter_A"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">&#160;&#160;A&#160;&#160;</div></td></tr></table>
-</td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CostModel.html">CostModel</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1IRModuleNode.html">IRModuleNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1PlaceholderOpNode.html">PlaceholderOpNode</a> (<a class="el" href="names [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CostModelNode.html">CostModelNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1IterAdapter.html">IterAdapter</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1PointerType.html">PointerType</a> (<a clas [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AccessAnalyzer.html">AccessAnalyzer</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1CropAndResizeAttrs.html">CropAndResizeAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1Iterator.htm [...]
+</td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CostModel.html">CostModel</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1IRModule.html">IRModule</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1PlaceholderOp.html">PlaceholderOp</a> (<a class="el" href="namespacetvm_1_1te.ht [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CostModelNode.html">CostModelNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1IRModuleNode.html">IRModuleNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1PlaceholderOpNode.html">PlaceholderOpNode</a> (<a class="el" href [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AccessAnalyzer.html">AccessAnalyzer</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1CropAndResizeAttrs.html">CropAndResizeAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1IterAdapter.html">It [...]
 <tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AccessAnalyzerNode.html">AccessAnalyzerNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td rowspan="2" valign="bottom"><a name="letter_D"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">&#160;&#160;D&#160;&#160;</div></td></tr></table>
-</td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1IteratorNode.html">IteratorNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1PragmaStep.html">PragmaStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1Sta [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AdaptivePool2DAttrs.html">AdaptivePool2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1auto__scheduler_1_1AttachMapNode_1_1IterKeyHash.html">AttachMapNode::IterKeyHash</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1au [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AdaptivePool3DAttrs.html">AdaptivePool3DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1DataProducer.html">DataProducer</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMapExpr.html">IterMapExpr</a> (<a class="el" href="name [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Add.html">Add</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1DataProducerNode.html">DataProducerNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMapExprNode.html">IterMapExprNode</a> (<a class="el" href="namespacetvm_1_1arith.html" [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AddNode.html">AddNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1DataType.html">DataType</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMark.html">IterMark</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arit [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ADT.html">ADT</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DataTypePattern.html">DataTypePattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMarkNode.html">IterMarkNode</a> (<a class="el" href="namespacetvm_1_1a [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ADTObj.html">ADTObj</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DataTypePatternNode.html">DataTypePatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterSplitExpr.html">IterSplitExpr</a> (<a class="el" href="n [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AffineGridAttrs.html">AffineGridAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DebugAttrs.html">DebugAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterSplitExprNode.html">IterSplitExprNode</a> (<a class="el" hre [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Allocate.html">Allocate</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DeformableConv2DAttrs.html">DeformableConv2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterSumExpr.html">IterSumExpr</a> (<a class="el" href="namespac [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AllocateNode.html">AllocateNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DenseAttrs.html">DenseAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterSumExprNode.html">IterSumExprNode</a> (<a class="el" href="namespacetvm_1 [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1vm_1_1Allocator.html">Allocator</a> (<a class="el" href="namespacetvm_1_1runtime_1_1vm.html">tvm::runtime::vm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1DenseMapNode.html">DenseMapNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IterVar.html">IterVar</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::t [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AllocStorageAttrs.html">AllocStorageAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1qnn_1_1DequantizeAttrs.html">DequantizeAttrs</a> (<a class="el" href="namespacetvm_1_1relay_1_1qnn.html">tvm::relay::qnn</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1IterVarAttr.html">IterVarAttr</a>  [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AllocTensorAttrs.html">AllocTensorAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1DeviceAPI.html">DeviceAPI</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1IterVarAttrNode.html">IterVarAttrNode</a> (<a class="el" href= [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1AltPattern.html">AltPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DeviceCopyAttrs.html">DeviceCopyAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IterVarNode.html">IterVarNode</a> (<a class="el" href="namespacetvm [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1AltPatternNode.html">AltPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPattern.html">DFPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1IterVarRelation.html">IterVarRelation</a> (<a class="el" href="namespacet [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1Analyzer.html">Analyzer</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternCallback.html">DFPatternCallback</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1IterVarRelationNode.html">IterVarRelationNode</a> (<a class="el" href [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1And.html">And</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternCallbackNode.html">DFPatternCallbackNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td rowspan="2" valign="bottom"><a name="letter_L"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">&#1 [...]
-</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1ProducerRealizeNode.html">ProducerRealizeNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1vm_1_1StorageObj.html">StorageObj</a> (<a class="el" href="namespacetvm_1_1runtime_1_1vm.html">tvm::runtime::vm</a>)&#160;&#160;&#160;</td></tr>
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AndNode.html">AndNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternFunctor.html">DFPatternFunctor</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1ProducerStore.html">ProducerStore</a> (<a class="el" href="namespacetvm_1_1tir [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AnnotationStep.html">AnnotationStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternFunctor_3_01R_07const_01DFPattern_01_6n_00_01Args_8_8_8_08_4.html">DFPatternFunctor&lt; R(const DFPattern &amp;n, Args...)&gt;</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160; [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AnnotationStepNode.html">AnnotationStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternNode.html">DFPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1LayerNormAttrs.html">Lay [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Any.html">Any</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternVisitor.html">DFPatternVisitor</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Layout.html">Layout</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&# [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AnyNode.html">AnyNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1Diagnostic.html">Diagnostic</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LayoutAxis.html">LayoutAxis</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td> [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ArangeAttrs.html">ArangeAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticBuilder.html">DiagnosticBuilder</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LayoutNode.html">LayoutNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::ti [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ArgsortAttrs.html">ArgsortAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticContext.html">DiagnosticContext</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1LayoutTransformAttrs.html">LayoutTransformAttrs</a> (<a class="el" href="namespac [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticContextNode.html">DiagnosticContextNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LE.html">LE</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#1 [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1SimpleObjAllocator_1_1ArrayHandler.html">SimpleObjAllocator::ArrayHandler</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticNode.html">DiagnosticNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1LeakyReluAttrs.html">LeakyReluAttrs</a>  [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ArrayNode.html">ArrayNode</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticRenderer.html">DiagnosticRenderer</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1tir_1_1LENode.html">LENode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AssertStmt.html">AssertStmt</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticRendererNode.html">DiagnosticRendererNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Let.html">Let</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;& [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AssertStmtNode.html">AssertStmtNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1DictAttrs.html">DictAttrs</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Let.html">Let</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;< [...]
-</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1SubPixelAttrs.html">SubPixelAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td></tr>
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AttachMap.html">AttachMap</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1DictAttrsNode.html">DictAttrsNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LetNode.html">LetNode</a> (<a class="el" href="namespacetvm_1_1tir.htm [...]
+</td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1Iterator.html">Iterator</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1PointerTypeNode.html">PointerTypeNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1auto__scheduler_1_1StageAttributes.html">StageAttributes</a> (<a class="e [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AdaptivePool2DAttrs.html">AdaptivePool2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1IteratorNode.html">IteratorNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1PragmaStep.htm [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AdaptivePool3DAttrs.html">AdaptivePool3DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1DataProducer.html">DataProducer</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1auto__scheduler_1_1AttachMapNode_1_1IterKeyHash.html">AttachMapNode [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Add.html">Add</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1DataProducerNode.html">DataProducerNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMapExpr.html">IterMapExpr</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::ar [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AddNode.html">AddNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1DataType.html">DataType</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMapExprNode.html">IterMapExprNode</a> (<a class="el" href="namespacetvm_1_1arith.h [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ADT.html">ADT</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DataTypePattern.html">DataTypePattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMark.html">IterMark</a> (<a class="el" href="namespacetvm_1_1arith.htm [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ADTObj.html">ADTObj</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DataTypePatternNode.html">DataTypePatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterMarkNode.html">IterMarkNode</a> (<a class="el" href="nam [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AffineGridAttrs.html">AffineGridAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DebugAttrs.html">DebugAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterSplitExpr.html">IterSplitExpr</a> (<a class="el" href="names [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Allocate.html">Allocate</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DeformableConv2DAttrs.html">DeformableConv2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterSplitExprNode.html">IterSplitExprNode</a> (<a class="el" hr [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AllocateNode.html">AllocateNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DenseAttrs.html">DenseAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterSumExpr.html">IterSumExpr</a> (<a class="el" href="namespacetvm_1_1arith. [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1vm_1_1Allocator.html">Allocator</a> (<a class="el" href="namespacetvm_1_1runtime_1_1vm.html">tvm::runtime::vm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1DenseMapNode.html">DenseMapNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IterSumExprNode.html">IterSumExprNode</a> (<a class="el" href="namespacetvm_1 [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AllocStorageAttrs.html">AllocStorageAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1qnn_1_1DequantizeAttrs.html">DequantizeAttrs</a> (<a class="el" href="namespacetvm_1_1relay_1_1qnn.html">tvm::relay::qnn</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IterVar.html">IterVar</a> (<a cla [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1AllocTensorAttrs.html">AllocTensorAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1DeviceAPI.html">DeviceAPI</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1IterVarAttr.html">IterVarAttr</a> (<a class="el" href="namespa [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1AltPattern.html">AltPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DeviceCopyAttrs.html">DeviceCopyAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1IterVarAttrNode.html">IterVarAttrNode</a> (<a class="el" href="names [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1AltPatternNode.html">AltPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPattern.html">DFPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IterVarNode.html">IterVarNode</a> (<a class="el" href="namespacetvm_1_1t [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1Analyzer.html">Analyzer</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternCallback.html">DFPatternCallback</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1IterVarRelation.html">IterVarRelation</a> (<a class="el" href="namesp [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1And.html">And</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternCallbackNode.html">DFPatternCallbackNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1IterVarRelationNode.html">IterVarRelationNode</a> (<a class="el" href="namesp [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AndNode.html">AndNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternFunctor.html">DFPatternFunctor</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td rowspan="2" valign="bottom"><a name="letter_L"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">&#160 [...]
+</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1ProducerRealizeNode.html">ProducerRealizeNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1StorageAlignStepNode.html">StorageAlignStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td></tr>
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AnnotationStep.html">AnnotationStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternFunctor_3_01R_07const_01DFPattern_01_6n_00_01Args_8_8_8_08_4.html">DFPatternFunctor&lt; R(const DFPattern &amp;n, Args...)&gt;</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160; [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AnnotationStepNode.html">AnnotationStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternNode.html">DFPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1L2NormalizeAttrs.html">L [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Any.html">Any</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DFPatternVisitor.html">DFPatternVisitor</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1LayerNormAttrs.html">LayerNormAttrs</a> (<a class="el" href="namespacetvm_1_1relay. [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AnyNode.html">AnyNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1Diagnostic.html">Diagnostic</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Layout.html">Layout</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td vali [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ArangeAttrs.html">ArangeAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticBuilder.html">DiagnosticBuilder</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LayoutAxis.html">LayoutAxis</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::ti [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ArgsortAttrs.html">ArgsortAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticContext.html">DiagnosticContext</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LayoutNode.html">LayoutNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm:: [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Array.html">Array</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticContextNode.html">DiagnosticContextNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1LayoutTransformAttrs.html">LayoutTransformAttrs</a> (<a class="el" href="namespace [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1SimpleObjAllocator_1_1ArrayHandler.html">SimpleObjAllocator::ArrayHandler</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticNode.html">DiagnosticNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LE.html">LE</a> (<a class="el" href="namesp [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ArrayNode.html">ArrayNode</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticRenderer.html">DiagnosticRenderer</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1LeakyReluAttrs.html">LeakyReluAttrs</a> (<a class="el" href="namespacetvm_1_1rel [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AssertStmt.html">AssertStmt</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1DiagnosticRendererNode.html">DiagnosticRendererNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1tir_1_1LENode.html">LENode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>) [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1AssertStmtNode.html">AssertStmtNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1DictAttrs.html">DictAttrs</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Let.html">Let</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;< [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AttachMap.html">AttachMap</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1DictAttrsNode.html">DictAttrsNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Let.html">Let</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm:: [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AttachMapNode.html">AttachMapNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DilateAttrs.html">DilateAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LetNode.html">LetNode</a> (<a class="el" [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1detail_1_1AttrDocEntry.html">AttrDocEntry</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Dilation2DAttrs.html">Dilation2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1LetNode.html">LetNode</a> (<a class="el" href="namespacetv [...]
+</td><td rowspan="2" valign="bottom"><a name="letter_T"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">&#160;&#160;T&#160;&#160;</div></td></tr></table>
 </td></tr>
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1AttachMapNode.html">AttachMapNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DilateAttrs.html">DilateAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1LetNode.html">LetNode</a> (<a class="e [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1detail_1_1AttrDocEntry.html">AttrDocEntry</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Dilation2DAttrs.html">Dilation2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LetStmt.html">LetStmt</a> (<a class="el" href="namespacetvm_ [...]
-</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1TakeAttrs.html">TakeAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td></tr>
-<tr><td valign="top"><a class="el" href="classtvm_1_1detail_1_1AttrDocVisitor.html">AttrDocVisitor</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Div.html">Div</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LetStmtNode.html">LetStmtNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::t [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1AttrError.html">AttrError</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1DivNode.html">DivNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LinkedParam.html">LinkedParam</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1detail_1_1AttrExistVisitor.html">AttrExistVisitor</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DominatorPattern.html">DominatorPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LinkedParamNode.html">LinkedParamNode</a> (<a class [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1AttrFieldInfo.html">AttrFieldInfo</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1DominatorPatternNode.html">DominatorPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Load.html">Load</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</ [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1AttrFieldInfoNode.html">AttrFieldInfoNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1DropoutAttrs.html">DropoutAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1LoadNode.html">LoadNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir< [...]
    [Generated Doxygen C++ API class index, alphabetical table (letters A through O in this hunk): the table rows are regenerated and reflowed for this docs build. Most rows are truncated to " [...]" by the archive; the readable portions of the updated rows include index links for tvm::tir::LinkedParam / LinkedParamNode, tvm::auto_scheduler::RandomModel, RecordReader, and RfactorStep, tvm::relay::RecClosureObj, BatchMatmulAttrs, and FunctionPattern / FunctionPatternNode, and the global struct MemoryManagerInterface, alongside repositioned rows for existing classes (AttrVisitor, Buffer, ExprFunctor, MeasureCallback, Target, and others). Remainder of the generated HTML diff omitted.]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Call.html">Call</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1GENode.html">GENode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectHash.html">ObjectHash</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160 [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Call.html">Call</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GetValidCountsAttrs.html">GetValidCountsAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ObjectPtr.html">ObjectPtr</a> (<a class="el" href="namespacetvm_1_1 [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CallNode.html">CallNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GlobalPool2DAttrs.html">GlobalPool2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectPtrEqual.html">ObjectPtrEqual</a> (<a class="el" href="namespa [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1CallNode.html">CallNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1GlobalTypeVar.html">GlobalTypeVar</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectPtrHash.html">ObjectPtrHash</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::ru [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1CallPattern.html">CallPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1GlobalTypeVarNode.html">GlobalTypeVarNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">ObjectRef</a> (<a class="el" href="namespacetvm_1_1runtime.html">tv [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1CallPatternNode.html">CallPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1GlobalVar.html">GlobalVar</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectTypeChecker.html">ObjectTypeChecker</a> (<a class="el" href="namespacetvm_1_1runtime [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1CanonicalSimplifier.html">CanonicalSimplifier</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1GlobalVarNode.html">GlobalVarNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectTypeChecker_3_01Array_3_01T_01_4_01_4.html">ObjectTypeChecker&lt; Ar [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Cast.html">Cast</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GridSampleAttrs.html">GridSampleAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectTypeChecker_3_01Map_3_01K_00_01V_01_4_01_4.html">ObjectTypeChecker&lt; Map [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1CastAttrs.html">CastAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GroupNormAttrs.html">GroupNormAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1OnDeviceAttrs.html">OnDeviceAttrs</a> (<a class="el" href="namespac [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1CastHintAttrs.html">CastHintAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1GT.html">GT</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1OneHotAttrs.html">OneHotAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::re [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CastNode.html">CastNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1GTNode.html">GTNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1Op.html">Op</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a cl [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Clause.html">Clause</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td rowspan="2" valign="bottom"><a name="letter_H"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">&#160;&#160;H&#160;&#160;</div></td></tr></table>
-</td><td valign="top"><a class="el" href="classtvm_1_1OpAttrMap.html">OpAttrMap</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1SelectVisitAttrs_3_01T_00_01TraitName_00_01false_01_4.html">SelectVisitAttrs&lt; T, TraitName, false &gt;</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1TypePattern.html">Ty [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ClauseNode.html">ClauseNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1Operation.html">Operation</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1SeqStmt.html">SeqStmt</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#1 [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ClipAttrs.html">ClipAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1SimpleObjAllocator_1_1Handler.html">SimpleObjAllocator::Handler</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1OperationNode.html">OperationNode</a>  [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Closure.html">Closure</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1SEqualReducer_1_1Handler.html">SEqualReducer::Handler</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpImplementation.html">OpImplementation</a> (<a class="el" href="namespacet [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ClosureObj.html">ClosureObj</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1SHashReducer_1_1Handler.html">SHashReducer::Handler</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpImplementationNode.html">OpImplementationNode</a> (<a class="el" href [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CmpOpNode.html">CmpOpNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structdmlc_1_1serializer_1_1Handler_3_01DLContext_01_4.html">Handler&lt; DLContext &gt;</a> (<a class="el" href="namespacedmlc_1_1serializer.html">dmlc::serializer</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1OpNode.html">OpNode</a> (<a class="el" hre [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CommReducer.html">CommReducer</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structdmlc_1_1serializer_1_1Handler_3_01DLDataType_01_4.html">Handler&lt; DLDataType &gt;</a> (<a class="el" href="namespacedmlc_1_1serializer.html">dmlc::serializer</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1OpRegEntry.html">OpRegEntry</a> (<a  [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CommReducerNode.html">CommReducerNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1HardwareParams.html">HardwareParams</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpSpecialization.html">OpSpecializa [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1CompilerAttrs.html">CompilerAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1HardwareParamsNode.html">HardwareParamsNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpSpecializationNode.htm [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeAtStep.html">ComputeAtStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1HybridOp.html">HybridOp</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpStrategy.html">OpStrategy</a> (<a class="el" href="n [...]
+</td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1FuseStepNode.html">FuseStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1NotNode.html">NotNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ScatterAddAttrs.html">ScatterAddAttrs</a> (<a class=" [...]
+<tr><td rowspan="2" valign="bottom"><a name="letter_G"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">&#160;&#160;G&#160;&#160;</div></td></tr></table>
+</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1NullOptType.html">NullOptType</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ScatterAttrs.html">ScatterAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1TypedEnvFunc.html">TypedEnvFunc</a> (<a class="el" href="namespacetvm. [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CacheReadStep.html">CacheReadStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td rowspan="2" valign="bottom"><a name="letter_O"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">&#160;&#160;O&#160;&#160;</div></td></tr></table>
+</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ScatterNDAttrs.html">ScatterNDAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1TypedEnvFunc_3_01R_07Args_8_8_8_08_4.html">TypedEnvFunc&lt; R(Args...)&gt;</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td></tr>
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CacheReadStepNode.html">CacheReadStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GatherAttrs.html">GatherAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1Schedule.html">Schedule</a> (<a c [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CacheWriteStep.html">CacheWriteStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1GE.html">GE</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ObjAllocatorBase.html">ObjAllocatorBase</a> (<a class="el"  [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1CacheWriteStepNode.html">CacheWriteStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1GenericFunc.html">GenericFunc</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Object.html">Object</a> (<a class="el" href="namespa [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Call.html">Call</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1GenericFuncNode.html">GenericFuncNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectEqual.html">ObjectEqual</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#16 [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Call.html">Call</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1GENode.html">GENode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectHash.html">ObjectHash</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a> [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CallNode.html">CallNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GetValidCountsAttrs.html">GetValidCountsAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ObjectPtr.html">ObjectPtr</a> (<a class="el" href="namespacetvm_1 [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1CallNode.html">CallNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GlobalPool2DAttrs.html">GlobalPool2DAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectPtrEqual.html">ObjectPtrEqual</a> (<a class="el" href="n [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1CallPattern.html">CallPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1GlobalTypeVar.html">GlobalTypeVar</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectPtrHash.html">ObjectPtrHash</a> (<a class="el" href="namespacetvm_1_1runtime.html">t [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1CallPatternNode.html">CallPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1GlobalTypeVarNode.html">GlobalTypeVarNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ObjectRef.html">ObjectRef</a> (<a class="el" href="namespacetvm_1_1runtime. [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1CanonicalSimplifier.html">CanonicalSimplifier</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1GlobalVar.html">GlobalVar</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectTypeChecker.html">ObjectTypeChecker</a> (<a class="el" href="namespacetvm_1_ [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Cast.html">Cast</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1GlobalVarNode.html">GlobalVarNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectTypeChecker_3_01Array_3_01T_01_4_01_4.html">ObjectTypeChecker&lt; Array&lt; T &gt; &gt;</a> (<a class="e [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1CastAttrs.html">CastAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GridSampleAttrs.html">GridSampleAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1ObjectTypeChecker_3_01Map_3_01K_00_01V_01_4_01_4.html">ObjectTy [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1CastHintAttrs.html">CastHintAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1GroupNormAttrs.html">GroupNormAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1OnDeviceAttrs.html">OnDeviceAttrs</a> (<a class="el" href=" [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CastNode.html">CastNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1GT.html">GT</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1OneHotAttrs.html">OneHotAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#1 [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Clause.html">Clause</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1GTNode.html">GTNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1Op.html">Op</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a  [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ClauseNode.html">ClauseNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td rowspan="2" valign="bottom"><a name="letter_H"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">&#160;&#160;H&#160;&#160;</div></td></tr></table>
+</td><td valign="top"><a class="el" href="classtvm_1_1OpAttrMap.html">OpAttrMap</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1SelectVisitAttrs.html">SelectVisitAttrs</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1TypePattern.html">TypePattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ClipAttrs.html">ClipAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1Operation.html">Operation</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1SelectVisitAttrs_3_01T_00_01TraitName_00_01false_01_4.html">SelectVisitAttrs&lt; T, Tr [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Closure.html">Closure</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1SimpleObjAllocator_1_1Handler.html">SimpleObjAllocator::Handler</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1OperationNode.html">OperationNode</a> [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1ClosureObj.html">ClosureObj</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1SEqualReducer_1_1Handler.html">SEqualReducer::Handler</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpImplementation.html">OpImplementation</a> (<a class="el" href="name [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CmpOpNode.html">CmpOpNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1SHashReducer_1_1Handler.html">SHashReducer::Handler</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpImplementationNode.html">OpImplementationNode</a> (<a class="el" href="namespacetvm [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CommReducer.html">CommReducer</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structdmlc_1_1serializer_1_1Handler_3_01DLContext_01_4.html">Handler&lt; DLContext &gt;</a> (<a class="el" href="namespacedmlc_1_1serializer.html">dmlc::serializer</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1OpNode.html">OpNode</a> (<a class="el" [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1tir_1_1CommReducerNode.html">CommReducerNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structdmlc_1_1serializer_1_1Handler_3_01DLDataType_01_4.html">Handler&lt; DLDataType &gt;</a> (<a class="el" href="namespacedmlc_1_1serializer.html">dmlc::serializer</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1OpRegEntry.html">OpRegEntry< [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1CompilerAttrs.html">CompilerAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1HardwareParams.html">HardwareParams</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpSpecialization.html">OpSpecial [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeAtStep.html">ComputeAtStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1HardwareParamsNode.html">HardwareParamsNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeAtStepNode.html">ComputeAtStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1HybridOp.html">HybridOp</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpStrategy.html">OpStrategy</a> (<a class="el" [...]
 </td></tr>
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeAtStepNode.html">ComputeAtStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1HybridOpNode.html">HybridOpNode</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpStrategyNode.html">OpStrategyNode</a [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeDAG.html">ComputeDAG</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td rowspan="2" valign="bottom"><a name="letter_I"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">&#160;&#160;I&#160;&#160;</div></td></tr></table>
-</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Optional.html">Optional</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ShapePatternNode.html">ShapePatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1micro__rpc_1_1Unframer.html">Unframer</a> (<a class="el" hr [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeDAGNode.html">ComputeDAGNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Or.html">Or</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1SHashReducer.html">SHashReducer</a> (<a class="el" href="namespacetvm. [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeInlineStep.html">ComputeInlineStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Id.html">Id</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1OrNode.html">OrNode</a> (<a class="el" href="namesp [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeInlineStepNode.html">ComputeInlineStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1IdNode.html">IdNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td rowspan="2" valign="bottom"><a name="letter_P"></a><table border="0" cellspacing="0" ce [...]
-</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1ShuffleNode.html">ShuffleNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td rowspan="2" valign="bottom"><a name="letter_V"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">&#160;&#160;V&#160;&#160;</div></td></tr></table>
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeDAG.html">ComputeDAG</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1te_1_1HybridOpNode.html">HybridOpNode</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1OpStrategyNode.html">OpStrategyNode</a> (<a class="e [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeDAGNode.html">ComputeDAGNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td rowspan="2" valign="bottom"><a name="letter_I"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">&#160;&#160;I&#160;&#160;</div></td></tr></table>
+</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1Optional.html">Optional</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ShapePattern.html">ShapePattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1micro__rpc_1_1Unframer.html">Unframer</a> (<a class="el" href="name [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeInlineStep.html">ComputeInlineStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Or.html">Or</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ShapePatternNode.html">ShapePatternNode</a> (<a class=" [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeInlineStepNode.html">ComputeInlineStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Id.html">Id</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1OrNode.html">OrNode</a> (<a class="el" href [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1te_1_1ComputeOp.html">ComputeOp</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1IdNode.html">IdNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td rowspan="2" valign="bottom"><a name="letter_P"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">&#160;&#160;P&#160;&#160 [...]
+</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1Shuffle.html">Shuffle</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td rowspan="2" valign="bottom"><a name="letter_V"></a><table border="0" cellspacing="0" cellpadding="0"><tr><td><div class="ah">&#160;&#160;V&#160;&#160;</div></td></tr></table>
 </td></tr>
-<tr><td valign="top"><a class="el" href="classtvm_1_1te_1_1ComputeOp.html">ComputeOp</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1If.html">If</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1SimpleObjAllocator.html">SimpleObjAllocator</a> (<a class="el" href="namespacetvm_1_1runtime.html">tv [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1te_1_1ComputeOpNode.html">ComputeOpNode</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1IfNode.html">IfNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1PackedFunc.html">PackedFunc</a> (<a class="el" href="namespacetvm_1_1runtime.html">tv [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeRootStep.html">ComputeRootStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IfThenElse.html">IfThenElse</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter.html">PackedFun [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeRootStepNode.html">ComputeRootStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IfThenElseNode.html">IfThenElseNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter [...]
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ConcatenateAttrs.html">ConcatenateAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplSEqualReduce.html">ImplSEqualReduce</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter_3_01PrimExpr_01_4.h [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Constant.html">Constant</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplSEqualReduce_3_01T_00_01true_01_4.html">ImplSEqualReduce&lt; T, true &gt;</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverte [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ConstantNode.html">ConstantNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplSHashReduce.html">ImplSHashReduce</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter_3_01tvm_1_1Integer_01_4.html"> [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ConstantPattern.html">ConstantPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplSHashReduce_3_01T_00_01true_01_4.html">ImplSHashReduce&lt; T, true &gt;</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncV [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ConstantPatternNode.html">ConstantPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplVisitAttrs.html">ImplVisitAttrs</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1micro__rpc_1_1PacketFieldSizeBytes.html">Pac [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ConstIntBound.html">ConstIntBound</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplVisitAttrs_3_01T_00_01true_01_4.html">ImplVisitAttrs&lt; T, true &gt;</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1PadAttrs.html">PadA [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ConstIntBoundAnalyzer.html">ConstIntBoundAnalyzer</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1IncompleteType.html">IncompleteType</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1Pass.html">Pass</a> (<a class="el" href="namespacetvm_1_1transform. [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ConstIntBoundNode.html">ConstIntBoundNode</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1IncompleteTypeNode.html">IncompleteTypeNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassContext.html">PassContext</a> (<a class="el" href="namespacetvm [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ConstraintContext.html">ConstraintContext</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1InitOpAttrs.html">InitOpAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassContextNode.html">PassContextNode</a> (<a class="el [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1Constructor.html">Constructor</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1InplaceArrayBase.html">InplaceArrayBase</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassInfo.html">PassInfo</a> (<a class="el" href="namespacetvm_1_1transform.ht [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1ConstructorNode.html">ConstructorNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1InstanceNormAttrs.html">InstanceNormAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassInfoNode.html">PassInfoNode</a> (<a class="el" href="namespacetvm_1_ [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ConstructorValue.html">ConstructorValue</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1vm_1_1Instruction.html">Instruction</a> (<a class="el" href="namespacetvm_1_1runtime_1_1vm.html">tvm::runtime::vm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassNode.html">PassNode</a> (<a cla [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1te_1_1ComputeOpNode.html">ComputeOpNode</a> (<a class="el" href="namespacetvm_1_1te.html">tvm::te</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1If.html">If</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1ShuffleNode.html">ShuffleNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#1 [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeRootStep.html">ComputeRootStep</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1IfNode.html">IfNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1PackedFunc.html">PackedFunc</a> (<a class=" [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1auto__scheduler_1_1ComputeRootStepNode.html">ComputeRootStepNode</a> (<a class="el" href="namespacetvm_1_1auto__scheduler.html">tvm::auto_scheduler</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IfThenElse.html">IfThenElse</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter.html">P [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ConcatenateAttrs.html">ConcatenateAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1tir_1_1IfThenElseNode.html">IfThenElseNode</a> (<a class="el" href="namespacetvm_1_1tir.html">tvm::tir</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter_3_01Optional_3_01T_01_4_01_4.html [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Constant.html">Constant</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplSEqualReduce.html">ImplSEqualReduce</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter_3_01PrimExpr_01_4.html">PackedFuncVa [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ConstantNode.html">ConstantNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplSEqualReduce_3_01T_00_01true_01_4.html">ImplSEqualReduce&lt; T, true &gt;</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValue [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ConstantPattern.html">ConstantPattern</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplSHashReduce.html">ImplSHashReduce</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1PackedFuncValueConverter_3_01tvm_1_1Integer_01_4. [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ConstantPatternNode.html">ConstantPatternNode</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplSHashReduce_3_01T_00_01true_01_4.html">ImplSHashReduce&lt; T, true &gt;</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1Pac [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ConstIntBound.html">ConstIntBound</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplVisitAttrs.html">ImplVisitAttrs</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1micro__rpc_1_1PacketFieldSizeBytes.html">PacketFieldSize [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ConstIntBoundAnalyzer.html">ConstIntBoundAnalyzer</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1detail_1_1ImplVisitAttrs_3_01T_00_01true_01_4.html">ImplVisitAttrs&lt; T, true &gt;</a> (<a class="el" href="namespacetvm_1_1detail.html">tvm::detail</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1Pad [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ConstIntBoundNode.html">ConstIntBoundNode</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1IncompleteType.html">IncompleteType</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1Pass.html">Pass</a> (<a class="el" href="namespacetvm_1_1transform.html">tv [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1arith_1_1ConstraintContext.html">ConstraintContext</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1IncompleteTypeNode.html">IncompleteTypeNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassContext.html">PassContext</a> (<a class="el" href="namespacetvm [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1Constructor.html">Constructor</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1InitOpAttrs.html">InitOpAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassContextNode.html">PassContextNode</a> (<a class="el" href="namespacetvm_1_1transform.htm [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1ConstructorNode.html">ConstructorNode</a> (<a class="el" href="namespacetvm.html">tvm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1InplaceArrayBase.html">InplaceArrayBase</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassInfo.html">PassInfo</a> (<a class="el" href="namespacetvm_1_1tran [...]
+<tr><td valign="top"><a class="el" href="classtvm_1_1relay_1_1ConstructorValue.html">ConstructorValue</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1relay_1_1InstanceNormAttrs.html">InstanceNormAttrs</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassInfoNode.html">PassInfoNode</a> (<a class [...]
+<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ConstructorValueObj.html">ConstructorValueObj</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="structtvm_1_1runtime_1_1vm_1_1Instruction.html">Instruction</a> (<a class="el" href="namespacetvm_1_1runtime_1_1vm.html">tvm::runtime::vm</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1transform_1_1PassNode.html">PassNode</a>  [...]
 </td></tr>
-<tr><td valign="top"><a class="el" href="structtvm_1_1relay_1_1ConstructorValueObj.html">ConstructorValueObj</a> (<a class="el" href="namespacetvm_1_1relay.html">tvm::relay</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IntConstraints.html">IntConstraints</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1Pattern.html">Pattern</a> (<a class="el" href="na [...]
-<tr><td valign="top"><a class="el" href="classtvm_1_1runtime_1_1NDArray_1_1Container.html">NDArray::Container</a> (<a class="el" href="namespacetvm_1_1runtime.html">tvm::runtime</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1arith_1_1IntConstraintsNode.html">IntConstraintsNode</a> (<a class="el" href="namespacetvm_1_1arith.html">tvm::arith</a>)&#160;&#160;&#160;</td><td valign="top"><a class="el" href="classtvm_1_1relay_1_1PatternConstructor.html">PatternCons [...]
[Diff hunk truncated by the plain-text archive: regenerated rows of the Doxygen
compound-index table (docs/api/doxygen class index). The rebuilt index re-aligns
entries including tvm::runtime NDArray::Container / NDArray::ContainerBase, the
tvm::relay Conv1DAttrs through Conv3DWinogradAttrs and Conv*WeightTransformAttrs
structs, the tvm::arith IntConstraints*, IntGroupBounds*, IntSet* classes, and the
tvm::relay Pattern* class family, followed by the table's closing markup and the
A-Z quick-index bar. Each source line was cut off at the archive's line-length
limit, so the full rows are not reproduced here.]
diff --git a/docs/api/doxygen/classtvm_1_1BaseAttrsNode.html b/docs/api/doxygen/classtvm_1_1BaseAttrsNode.html
index 2af0b84..25d1558 100644
--- a/docs/api/doxygen/classtvm_1_1BaseAttrsNode.html
... 50061 lines suppressed ...