Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/05/30 05:27:04 UTC

[GitHub] [tvm-rfcs] cgerum commented on a diff in pull request #72: [RFC] DietCode: An Auto-Scheduler for Dynamic Tensor Programs

cgerum commented on code in PR #72:
URL: https://github.com/apache/tvm-rfcs/pull/72#discussion_r884430096


##########
rfcs/0072-dynamic-autoscheduler.md:
##########
@@ -0,0 +1,207 @@
+- Feature Name: DietCode: An Auto-Scheduler for Dynamic Tensor Programs
+- Start Date: (2022-05-10)
+- RFC PR: [apache/tvm-rfcs#xx](https://github.com/apache/tvm-rfcs/pull/xx)
+- GitHub Issue: [apache/tvm#yy](https://github.com/apache/tvm/pull/yy)
+
+# Summary
+[summary]: #summary
+
+We propose to integrate DietCode, an auto-scheduler for dynamic tensor programs,
+into AutoTIR. DietCode offers the following features:
+- A shape-generic search space to cover possible shapes in dynamic shape
+  workloads.
+- A dynamic-shape aware cost model to judge the quality of schedule candidates.
+- Enhancement to the TVM CUDA codegen for imperfect tiling.
+
+DietCode was published at MLSys 2022, so please see [the
+paper](https://proceedings.mlsys.org/paper/2022/hash/fa7cdfad1a5aaf8370ebeda47a1ff1c3-Abstract.html)
+for more details and evaluation. The latest DietCode codebase is also publicly
+available [here](https://github.com/UofT-EcoSystem/DietCode).
+
+# Motivation
+[motivation]: #motivation
+
+Achieving high performance for compute-intensive operators in machine learning
+workloads is a crucial but challenging task. Many machine learning and system
+practitioners rely on vendor libraries or auto-schedulers to do the job. While
+the former requires significant engineering effort, the latter, as currently
+implemented in TVM, only supports static-shape workloads. It is difficult, if
+not impractical, to apply the existing auto-scheduler directly to
+**dynamic-shape workloads**, as doing so leads to extremely long tuning times.
+
+We observe that the key challenge faced by existing auto-schedulers when
+handling a dynamic-shape workload is that they cannot construct a conclusive search
+space for all the possible shapes of the workload, because their search space is
+shape-dependent. To address this, this RFC aims to add dynamic-shape support to
+AutoTIR by integrating the DietCode framework, which constructs **a shape-generic
+search space and cost model** to auto-schedule dynamic-shape workloads
+efficiently.
+
+Our evaluation shows that DietCode has the following key strengths when
+auto-scheduling an entire model end-to-end: 
+
+1. reduces auto-scheduling time by up to 5.88x compared with the current
+   auto-scheduler on 8 uniformly sampled dynamic shapes, and
+1. delivers up to 69.5% better performance than the auto-scheduler and 18.6%
+   better performance than the vendor library. These advantages make DietCode an
+   efficient and practical solution for dynamic-shape workloads.
+
+
+# Guide-Level Explanation
+[guide-level-explanation]: #guide-level-explanation
+
+The existing experiments were largely conducted with the auto-scheduler. However,
+having synced with the AutoTIR team over the past several quarters, we plan to
+integrate this RFC into MetaSchedule (AutoTIR), because it provides a more
+systematic interface and a cleaner integration path with fewer hacks.
+
+To give an example of the additional information users are required to feed to
+the system:
+
+```python
+# A symbolic shape constraint
+T = tir.ShapeVar('T')
+# The candidate values of `T`
+T_vals = list(range(1, 128))
+
+# The tuning task over the dynamic Dense workload: `shape_vars` declares the
+# symbolic dimensions and `wkl_insts` enumerates their candidate values.
+task = Task(func=Dense,
+            args=(16*T, 768, 2304),
+            shape_vars=(T,),
+            wkl_insts=(T_vals,),
+            wkl_inst_weights=([1. for _ in T_vals],))
+```
+
+To enable auto-scheduling for dynamic-shape workloads, users only need to:
+1. Have `ShapeVar` in the TE/TensorIR computation (see the sketch below).
+2. Specify the weight/distribution of each shape value.
+
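+As an illustrative sketch of step 1 (following the `ShapeVar` interface from the
+example above; the exact spelling may change during the MetaSchedule
+integration), the dense workload can be expressed in TE with the dynamic
+dimension left symbolic:
+
+```python
+from tvm import te, tir
+
+# The dynamic sequence-length dimension stays symbolic in the computation.
+T = tir.ShapeVar('T')
+
+# A dense computation over the shapes from the task above:
+# A is (16*T, 768), B is kept in (2304, 768) layout.
+A = te.placeholder((16 * T, 768), name='A')
+B = te.placeholder((2304, 768), name='B')
+k = te.reduce_axis((0, 768), name='k')
+C = te.compute((16 * T, 2304),
+               lambda i, j: te.sum(A[i, k] * B[j, k], axis=k),
+               name='C')
+```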

Review Comment:
   Does DietCode have Relay integration? If it does, could you provide an example of how dynamic shapes and `wkl_insts` are identified in Relay? Otherwise, this should probably be mentioned in the drawbacks and/or future possibilities.
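   For example, a purely hypothetical sketch of the kind of Relay-level example
   that would help (only the dynamic dimension below uses existing API via
   `relay.Any()`; how the candidate shape values get attached is exactly the
   open question):
   
   ```python
   import tvm
   from tvm import relay
   
   # A dense layer whose leading dimension is dynamic at the Relay level.
   x = relay.var("x", shape=(relay.Any(), 768), dtype="float32")
   w = relay.var("w", shape=(2304, 768), dtype="float32")
   y = relay.nn.dense(x, w)
   mod = tvm.IRModule.from_expr(relay.Function([x, w], y))
   
   # Open question: when tuning tasks are extracted from `mod`, where do the
   # candidate values of the dynamic dimension (the `wkl_insts` above) and
   # their weights come from?
   ```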


