Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2019/11/22 22:30:17 UTC

[GitHub] [incubator-tvm] zxy844288792 opened a new pull request #4404: [AutoTVM] select model with the most tuned schedules

zxy844288792 opened a new pull request #4404: [AutoTVM] select model with the most tuned schedules
URL: https://github.com/apache/incubator-tvm/pull/4404
 
 
   Recently I noticed that GPU performance drops significantly for conv2d with an untuned workload. I did some investigation, and the results are shown below:
   test script: https://gist.github.com/zxy844288792/af05f12895ceaa2ba754ff22116b341d
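   For reference, below is a minimal reproduction sketch of the profiling setup, assuming TVM 0.6-era APIs and the debug graph runtime. The linked gist is the authoritative script; the input/weight shapes and target here are assumptions inferred from the output shape in the tables.

      import numpy as np
      import tvm
      from tvm import relay
      from tvm.contrib.debugger import debug_runtime

      # Single untuned conv2d whose output shape matches the (1, 32, 512, 512)
      # node in the profile tables. Input and weight shapes are assumptions.
      data = relay.var("data", shape=(1, 32, 512, 512), dtype="float32")
      weight = relay.var("weight", shape=(32, 32, 3, 3), dtype="float32")
      out = relay.nn.conv2d(data, weight, padding=(1, 1))
      mod = relay.Module.from_expr(relay.Function([data, weight], out))

      with relay.build_config(opt_level=3):
          graph, lib, params = relay.build(mod, target="cuda")

      ctx = tvm.gpu(0)
      m = debug_runtime.create(graph, lib, ctx)
      m.set_input("data", np.random.uniform(size=(1, 32, 512, 512)).astype("float32"))
      m.set_input("weight", np.random.uniform(size=(32, 32, 3, 3)).astype("float32"))
      m.run()  # the debug runtime prints the per-node time table shown below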
   
   After Commit [Bump up CUDA log version in tophub.py] 888a3c35cf53f6af6cb9bcca4ff3e917ea6fafca 
   Node Name        Ops              Time(us)  Time(%)  Shape              Inputs  Outputs  
   ---------        ---              --------  -------  -----              ------  -------  
   fused_nn_conv2d  fused_nn_conv2d  15119.7   100.0    (1, 32, 512, 512)  2       1        
   Total_time       -                15119.7   -        -                  -       -   
   
   Before Commit [Bump up CUDA log version in tophub.py] 888a3c35cf53f6af6cb9bcca4ff3e917ea6fafca 
   Node Name        Ops              Time(us)  Time(%)  Shape              Inputs  Outputs  
   ---------        ---              --------  -------  -----              ------  -------  
   fused_nn_conv2d  fused_nn_conv2d  68.82     100.0    (1, 32, 512, 512)  2       1 
   
   The performance drops by roughly 200x. The commit [Bump up CUDA log version in tophub.py] only modifies the tuned schedule file, adding one more line of schedules for conv2d_transpose_nchw. The root cause of this issue is that, for an untuned workload, we try to find the most similar tuned workload in the file and mimic its parameters. The previous logic used the "model" of the last line of the tuned schedule file as the reference. Since the newly added line has "model" = v100, we end up using the tuned schedules for v100 to mimic the parameters. However, there is only one tuned schedule for v100, so it does not help here.
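   For illustration, here is a minimal sketch of the previous fallback behavior, assuming the tuned schedule file has already been parsed into a list of records that each expose a target model (the names below are hypothetical, not the actual tophub.py API):

      # Hypothetical sketch of the previous behavior: if the requested model has
      # no record in the log, fall back to whichever model the LAST line uses.
      def pick_reference_model_old(records, requested_model):
          last_model = None
          for rec in records:
              if rec.target_model == requested_model:
                  return requested_model        # exact match found
              last_model = rec.target_model     # remember the last line's model
          # No exact match: mimic the last line's model, even if that model only
          # has a single tuned schedule (the newly appended v100 line).
          return last_model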
   
   The change I made is that, instead of using the last line as the reference, we first count the number of tuned schedules per model, and when we need to mimic parameters we choose the model with the most tuned schedules.
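   A minimal sketch of the proposed selection, under the same hypothetical record layout as above:

      from collections import Counter

      # Hypothetical sketch of the new behavior: prefer the requested model if it
      # has records; otherwise pick the model with the most tuned schedules.
      def pick_reference_model_new(records, requested_model):
          counts = Counter(rec.target_model for rec in records)
          if counts.get(requested_model):
              return requested_model            # exact match still wins
          if not counts:
              return None                       # empty log: nothing to mimic
          # Choose the model with the most tuned schedules instead of the
          # single-schedule model on the last line.
          return counts.most_common(1)[0][0]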
   
   After applying the above logic, the performance goes back to the normal level, as shown below:
   Node Name        Ops              Time(us)  Time(%)  Shape              Inputs  Outputs  
   ---------        ---              --------  -------  -----              ------  -------  
   fused_nn_conv2d  fused_nn_conv2d  68.675    100.0    (1, 32, 512, 512)  2       1 
   
   @merrymercy @eqy 
   I would like to hear feedback on whether the change here is valid.
   
   Thanks for contributing to TVM!   Please refer to the guideline https://docs.tvm.ai/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @ them in the pull request thread.
   
