Posted to discuss-archive@tvm.apache.org by Jhagege via Apache TVM Discuss <no...@discuss.tvm.ai> on 2020/11/09 10:42:43 UTC

[Apache TVM Discuss] [Questions] Difference with ONNX


Hi, I just learned about TVM and it seems like a very interesting project.
Does it have similar design goals to ONNX, i.e. portability and efficient inference on different target hardware?
I would be glad to understand whether they sit at different layers of the inference stack, and how their philosophies differ, if at all.
I hope the question is not too trivial/basic :)

Thanks much for any insights.





---
[Visit Topic](https://discuss.tvm.apache.org/t/difference-with-onnx/8416/1) to respond.

You are receiving this because you enabled mailing list mode.

To unsubscribe from these emails, [click here](https://discuss.tvm.apache.org/email/unsubscribe/160676eb67c42e35c6daeaf17f3a466b6574fa87b0ca2461bab9610f78e0702e).

[Apache TVM Discuss] [Questions] Difference with ONNX

Posted by Sky via Apache TVM Discuss <no...@discuss.tvm.ai>.

I believe ONNX is an exchange format for porting DL models from one framework to another. TVM is more like a DL compiler: it takes a model from any of several frameworks and compiles it for different hardware targets. During compilation it also tries to optimize the model using scheduling and various other techniques.
There are many other features associated with TVM, such as AutoTVM and microTVM (µTVM).





---
[Visit Topic](https://discuss.tvm.apache.org/t/difference-with-onnx/8416/2) to respond.
