Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/06/11 14:53:03 UTC

[GitHub] [incubator-tvm] mbrookhart commented on pull request #5755: Edit onnx parser to infer values in post order

mbrookhart commented on pull request #5755:
URL: https://github.com/apache/incubator-tvm/pull/5755#issuecomment-642711335


   Yes, that's the idea, and why the Hugging Face model import speeds up so much. It seems there is significant overhead each time we call MCJIT, though, which is why BERT-Squad, with many ops but fewer infer_value calls, slows down.
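   The tradeoff can be sketched with a toy cost model (illustrative only: the function and all numbers below are hypothetical, not TVM internals). If every infer_value call pays a fixed JIT-compilation cost on top of the actual evaluation, a model with fewer calls can still import more slowly when that fixed per-call overhead is large:

   ```python
   # Hypothetical back-of-envelope cost model, not TVM code: total time
   # spent in infer_value during import, assuming each call pays a fixed
   # JIT-compilation overhead plus the cost of evaluating the expression.
   def import_time(n_infer_value_calls, jit_overhead_ms, eval_ms_per_call):
       """Total milliseconds spent inferring values during model import."""
       return n_infer_value_calls * (jit_overhead_ms + eval_ms_per_call)

   # Illustrative numbers only: a large fixed overhead per JIT invocation
   # can dominate even when there are fewer infer_value calls overall.
   few_calls_high_overhead = import_time(50, 20.0, 1.0)    # 1050.0 ms
   many_calls_low_overhead = import_time(500, 0.5, 1.0)    # 750.0 ms
   ```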


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org