Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/01/01 08:47:47 UTC

[GitHub] [incubator-tvm] MarisaKirisame edited a comment on issue #4468: [RFC] Data-flow Analysis Functionality on TVM IR

URL: https://github.com/apache/incubator-tvm/issues/4468#issuecomment-570034759
 
 
   @DKXXXL
   1. While it is true that the tensor operations are the bottleneck, the high-level code dictates which tensors get computed. For example, you can do dead code elimination to remove unnecessary tensor operations (a real need right now), or deforestation (which requires an effect analysis) to fuse tensor operations into a single graph, which can then be further optimized by all the graph-level passes. Right now we have dead code elimination, but it is simply incorrect - it assumes there are no effects, and @jroesch needs it fixed (see the toy sketch after this list).
   2. I don't think there is much value in CFA. However, we do need pointer analysis, because the AD pass creates references (including references to closures that themselves modify references) everywhere, and AAM brings pointer analysis for free.
   3. Again, AAM brings pointer analysis for free, because AAM maps every variable to a location (which is exactly what pointer analysis does). Also, classical data-flow frameworks do not handle closures; otherwise we would just stick with one of them. For backward analysis I don't see a problem: the idea of AAM is to map every variable to a location drawn from a finite set, and you can run your backward transitions over those locations in just the same way (a minimal sketch of that store abstraction also follows the list).
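   To illustrate the dead-code-elimination point in (1): here is a toy sketch in plain Python over a made-up let-language - not actual Relay code - of why a DCE that assumes purity is unsound. Dropping an unused let binding whose right-hand side writes a reference changes the observable behavior of the program; an effect analysis would mark that binding as impure and keep it.

       # Toy sketch, not TVM/Relay code: a tiny let-language with reference
       # writes, showing why effect-unaware dead code elimination is unsound.
       #
       # An expression is one of:
       #   ("let", var, value_expr, body_expr)  bind var, then run body_expr
       #   ("ref_write", ref_name, int)         effect: store int into the ref
       #   ("ref_read", ref_name)               read the ref
       #   ("const", int)
       #   ("var", name)

       def free_vars(expr):
           tag = expr[0]
           if tag == "var":
               return {expr[1]}
           if tag == "let":
               _, v, val, body = expr
               return free_vars(val) | (free_vars(body) - {v})
           return set()   # ref_write / ref_read / const bind no variables

       def naive_dce(expr):
           """Remove let bindings whose variable is unused in the body.
           WRONG: it assumes the bound expression has no side effects."""
           if expr[0] == "let":
               _, v, val, body = expr
               body = naive_dce(body)
               if v not in free_vars(body):
                   return body          # drops val, even if val is a ref_write!
               return ("let", v, naive_dce(val), body)
           return expr

       # `_` is unused, but the write to "r" is observable through the later read.
       prog = ("let", "_", ("ref_write", "r", 42), ("ref_read", "r"))
       print(naive_dce(prog))   # ("ref_read", "r") -- the write to r is gone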
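   And a minimal sketch of the AAM point in (3) (again plain Python with hypothetical names, not an actual TVM pass): the abstraction allocates every reference at its syntactic creation site, so the finite abstract store already answers the pointer-analysis question "which ref cells can this variable denote". A backward analysis just runs its transfer functions over the same finite set of locations.

       # Toy sketch of an AAM-style store abstraction (hypothetical, not TVM code).
       # Abstract addresses are syntactic sites, so the store is finite by construction.
       from collections import defaultdict

       # Environment: variable name -> abstract address (here: the binding site label).
       # Store: abstract address -> set of abstract values that may live there.
       env = {}
       store = defaultdict(set)

       def bind(var, site, abstract_value):
           """Bind var at program point `site`; join the value into the store."""
           env[var] = site
           store[site].add(abstract_value)

       # Two refs created by (say) the AD pass, each abstracted by its creation site.
       bind("grad_cell", "ref@site12", ("ref", "ref@site12"))
       bind("acc_cell",  "ref@site40", ("ref", "ref@site40"))

       # Pointer-analysis question: which ref cells may `grad_cell` denote?
       # Answer: just read the finite store at its abstract address.
       print(store[env["grad_cell"]])   # {('ref', 'ref@site12')}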
   Happy new year!
