Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/03/31 16:49:19 UTC

[GitHub] [tvm] YuhengHuang42 commented on issue #7563: Accuracy drop when use batch and opt_level=3

YuhengHuang42 commented on issue #7563:
URL: https://github.com/apache/tvm/issues/7563#issuecomment-811245830


   Hi, I'm interested in this bug and did some experiments. Here are some findings:
   
   1. If you do the FuseOps transform first on the relay graph, i.e.
   
   ```
   import tvm
   from tvm.relay import transform

   seq = tvm.transform.Sequential(
       [
           transform.SimplifyInference(),
           transform.FuseOps(),
       ]
   )
   with tvm.transform.PassContext(opt_level=opt_level):
       mod = seq(mod)
   # build the model
   # ...
   ```
   
   Then the final result is correct.
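   (To decide whether a build is "correct", I compare its outputs against a reference build numerically. A minimal sketch of that comparison, using plain NumPy with synthetic arrays standing in for the real module outputs:)
   
   ```
   import numpy as np

   def outputs_match(reference, candidate, rtol=1e-4, atol=1e-5):
       """Return (ok, max_abs_diff) for two output tensors."""
       diff = np.abs(reference - candidate)
       ok = bool(np.allclose(reference, candidate, rtol=rtol, atol=atol))
       return ok, float(diff.max())

   # Synthetic data standing in for model outputs:
   ref = np.array([0.1, 0.5, 0.4])
   good = ref + 1e-6   # matches within tolerance
   bad = ref + 0.05    # the kind of drift an accuracy drop shows
   print(outputs_match(ref, good)[0])  # True
   print(outputs_match(ref, bad)[0])   # False
   ```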
   
   2. If you use opt_level=4 to build the model, then the final result is also correct:
   
   ```
   opt_level = 4
   with tvm.transform.PassContext(opt_level=opt_level):
       lib = relay.build(mod, target='llvm', params=param)
   ```
   
   This seems pretty weird to me, so I disabled some passes to try to dig deeper.
   
   ```
   disabled_pass = ["CombineParallelConv2D", "CombineParallelDense", "CombineParallelBatchMatmul", "FastMath"]
   opt_level = 4
   with tvm.transform.PassContext(opt_level=opt_level, disabled_pass=disabled_pass):
       lib = relay.build(mod, target='llvm', params=param)
   ```
   
   As far as I know, these four passes are the only additional ones enabled at opt_level=4. However, disabling them doesn't change the outcome: the result is still correct.
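   (A systematic way to continue this would be to toggle the candidate passes one at a time and re-check correctness. A hedged sketch of that loop, where `build_and_check` is a hypothetical callback that would wrap `relay.build` in a `PassContext` with the given `disabled_pass` list and compare outputs; here it is stubbed out for illustration:)
   
   ```
   def find_culprit_passes(candidates, build_and_check):
       """Disable one candidate pass at a time; collect the passes
       whose removal makes the result correct."""
       culprits = []
       for p in candidates:
           if build_and_check(disabled_pass=[p]):
               culprits.append(p)
       return culprits

   candidates = [
       "CombineParallelConv2D",
       "CombineParallelDense",
       "CombineParallelBatchMatmul",
       "FastMath",
   ]

   # Stub standing in for a real relay.build + run + compare step;
   # it pretends "FastMath" is the pass that breaks accuracy.
   def fake_check(disabled_pass):
       return "FastMath" in disabled_pass

   print(find_culprit_passes(candidates, fake_check))  # ['FastMath']
   ```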
   
   My environment:
   
Built from source at commit 2988a08e3ff4a8956ac9b23e662374f6d8f7f4d9,
   
   OS: macOS 10.15.7
   
   As I'm new to TVM, I'm stuck here and can't dig deeper at present. I hope this information helps you find the root cause of the bug.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org