Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/12/01 13:12:16 UTC

[GitHub] [tvm] Meteorix opened a new issue #7008: [RELAY][BUG]type inference is slow

Meteorix opened a new issue #7008:
URL: https://github.com/apache/tvm/issues/7008


   For a large model, TVM compilation is really slow. I profiled it with `perf` and found that type inference accounts for most of the time (a rough reproduction sketch follows the trace below).
   ```
   # Children      Self  Command  Shared Object                         Symbol
   # ........  ........  .......  ....................................  .....................................................................................................................................................................
   #
       93.18%     0.00%  python   libtvm.so                             [.] tvm::relay::PatternRewriter::Rewrite
               |
               ---tvm::relay::PatternRewriter::Rewrite
                  |
                   --93.17%--tvm::relay::InferTypeWithModule
                             |
                              --93.05%--tvm::transform::Pass::operator()
                                        tvm::transform::PassNode::operator()
                                        tvm::transform::ModulePassNode::operator()
                                        tvm::runtime::PackedFunc::operator()<tvm::IRModule, tvm::transform::PassContext>
                                        std::function<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>::operator()
                                        std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), void tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::IRModule, tvm::transform::PassContext)>::AssignTypedLambda<tv
                                        void tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::IRModule, tvm::transform::PassContext)>::AssignTypedLambda<tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassCont
                                        |
                                         --93.03%--tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1}::operator()
                                                   |
                                                   |--79.48%--tvm::relay::TypeInferencer::Infer
                                                   |          |
                                                   |          |--49.03%--tvm::relay::TypeInferencer::GetType
   ```
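   A rough sketch of the kind of script such a measurement can be reproduced with (the model, input shape, and script name are illustrative assumptions, not the exact workload from this report):

   ```python
   # Illustrative reproduction sketch -- model and shapes are assumptions.
   import torch
   import torchvision
   import tvm
   from tvm import relay

   # Trace a reasonably large PyTorch model and import it into Relay.
   model = torchvision.models.resnet50().eval()
   inp = torch.randn(1, 3, 224, 224)
   scripted = torch.jit.trace(model, inp)
   mod, params = relay.frontend.from_pytorch(scripted, [("input0", (1, 3, 224, 224))])

   # Run as: perf record -g python repro.py ; then inspect with perf report.
   with tvm.transform.PassContext(opt_level=3):
       lib = relay.build(mod, target="llvm", params=params)
   ```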
   
   From my understanding, ``PatternRewriter`` rewrites every function in a module, and each time it does so it calls ``InferType``, which re-infers every function in the module. This should be incremental. Is there any reason why the ``incremental`` inference is commented out? https://github.com/apache/tvm/blob/main/src/relay/transforms/type_infer.cc#L805
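   A minimal sketch of why that pattern is quadratic (``rewrite_one`` below is a hypothetical stand-in for a single pattern rewrite, not the actual ``PatternRewriter`` code):

   ```python
   import tvm
   from tvm import relay

   def rewrite_all(mod, rewrite_one):
       """Rewrite each function, re-running full-module type inference
       after every rewrite: with N functions this type-checks O(N^2)
       function bodies instead of only the ones that actually changed."""
       for gv in mod.get_global_vars():
           mod[gv] = rewrite_one(mod[gv])
           # InferType re-checks the *whole* module, not just mod[gv].
           mod = relay.transform.InferType()(mod)
       return mod
   ```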
   





[GitHub] [tvm] Meteorix closed issue #7008: [RELAY][BUG]type inference is slow

Posted by GitBox <gi...@apache.org>.
Meteorix closed issue #7008:
URL: https://github.com/apache/tvm/issues/7008


   





[GitHub] [tvm] junrushao1994 commented on issue #7008: [RELAY][BUG]type inference is slow

Posted by GitBox <gi...@apache.org>.
junrushao1994 commented on issue #7008:
URL: https://github.com/apache/tvm/issues/7008#issuecomment-752811155


   If there is no actionable item, I suggest we merge this particular issue into relay improvements and close this thread.





[GitHub] [tvm] t-vi commented on issue #7008: [RELAY][BUG]type inference is slow

Posted by GitBox <gi...@apache.org>.
t-vi commented on issue #7008:
URL: https://github.com/apache/tvm/issues/7008#issuecomment-737327528


   #6900 is my attempt at getting incremental type inference into the PyTorch frontend. It will need some serious cleanup, but I think it is useful. It removes the O(N²) cost of re-checking everything in a linearly growing model and empirically reduces the translation time for BERT by ~3x.
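   A rough sketch of the idea (this is not the code from #6900; ``infer_type_of_new_expr`` is a hypothetical helper): instead of re-inferring the whole, growing module after every converted operator, only the newly built sub-expression gets wrapped and type-checked on its own.

   ```python
   import tvm
   from tvm import relay

   def infer_type_of_new_expr(expr):
       """Type-check only the freshly built expression by wrapping it in a
       throwaway function/module. Assumes its free vars carry type
       annotations, as frontend-created vars do, so the cost stays
       proportional to the new expression rather than the whole model."""
       func = relay.Function(relay.analysis.free_vars(expr), expr)
       mod = tvm.IRModule.from_expr(func)
       mod = relay.transform.InferType()(mod)
       return mod["main"].body.checked_type
   ```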





[GitHub] [tvm] Meteorix commented on issue #7008: [RELAY][BUG]type inference is slow

Posted by GitBox <gi...@apache.org>.
Meteorix commented on issue #7008:
URL: https://github.com/apache/tvm/issues/7008#issuecomment-737631890


   @t-vi Thanks, I have read your PR and the discussion thread. But when I use `relay.build`, it still goes through the C++ build pipeline and then the C++ type inference.





[GitHub] [tvm] jroesch commented on issue #7008: [RELAY][BUG]type inference is slow

Posted by GitBox <gi...@apache.org>.
jroesch commented on issue #7008:
URL: https://github.com/apache/tvm/issues/7008#issuecomment-739588404


   @Meteorix the type inferencer in its current form has some issues being incrementalized; the reason it is so slow is that people are using it in a way that is O(N^2) in the number of nodes, like @t-vi mentioned. It wasn't originally designed to be used this way and probably needs to be refactored, but doing that is relatively complicated. There are some other issues I would like to address at the same time, and that work probably won't happen until Q1 2021.

