Posted to discuss-archive@tvm.apache.org by yulongl via Apache TVM Discuss <no...@discuss.tvm.ai> on 2021/05/24 20:23:13 UTC

[Apache TVM Discuss] [Questions] TVM performance vs. ONNX


Hi, 

I am following this post to compile and tune the ResNet50 model on CPU:
https://tvm.apache.org/docs/tutorials/get_started/auto_tuning_with_python.html#sphx-glr-tutorials-get-started-auto-tuning-with-python-py

I am using the same parameters except setting "trials" to 1500 instead of 10 to get a better tuning result. I also specified the -mcpu parameter based on my CPU model.
On an Intel i9 I got:

w/o tuning, mean time: 35.59ms/iter, std: 2.4

w/  tuning, mean time: 22.9ms/iter, std: 1.3

However, when I run the same ONNX model through ONNX Runtime, I got:
mean time: 22.9ms/iter, std: 0.9
If I turn on GraphOptimization in ONNX Runtime, I get: mean time: 13.5ms/iter, std: 0.34

It seems that, using the same model: 1. the TVM runtime is slower than the ONNX runtime, and 2. tuning does not provide a better optimization than ONNX GraphOptimization either. I also tested VGG16 and an Intel Xeon CPU, and the results are consistent.
Is this expected? Or are there other optimizations that can be applied in TVM to further improve performance?

P.S. I also tried first optimizing the model with ONNX Runtime and then compiling/tuning it in TVM, but I got errors, so I could not compare that with ONNX.
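For context, mean/std per-iteration numbers like those above can be collected with a simple timing loop; this is only a sketch, and the `run` callable standing in for the TVM or ONNX Runtime inference call is an assumption:

```python
import time
import statistics

def benchmark(run, warmup=5, iters=50):
    """Time `run()` and report (mean, std) in ms per iteration."""
    for _ in range(warmup):          # discard warm-up iterations
        run()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        run()
        samples.append((time.perf_counter() - t0) * 1000.0)  # seconds -> ms
    return statistics.mean(samples), statistics.stdev(samples)

# Dummy workload standing in for a real model inference call:
mean_ms, std_ms = benchmark(lambda: sum(range(10000)))
print(f"mean time: {mean_ms:.2f}ms/iter, std: {std_ms:.2f}")
```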





---
[Visit Topic](https://discuss.tvm.apache.org/t/tvm-performance-vs-onnx/10078/1) to respond.

You are receiving this because you enabled mailing list mode.

To unsubscribe from these emails, [click here](https://discuss.tvm.apache.org/email/unsubscribe/c4a4df7addfd3282fad59321f06d972970e5086feee6a1e428478c66ea7465d9).

[Apache TVM Discuss] [Questions] TVM performance vs. ONNX

Posted by Chenfan via Apache TVM Discuss <no...@discuss.tvm.ai>.

[quote="yulongl, post:1, topic:10078"]
https://tvm.apache.org/docs/tutorials/get_started/auto_tuning_with_python.html#sphx-glr-tutorials-get-started-auto-tuning-with-python-py
[/quote]

What 'target' are you using when testing with TVM? On your i9 CPU, you can try:
```python
target = "llvm -mcpu=skylake-avx512"
```
or a higher -mcpu option according to your CPU architecture.
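As a rough illustration, a tiny helper could build such a target string from a CPU family name. The mapping below is an illustrative assumption, not an exhaustive list; check `llc -mcpu=help` for the names your LLVM build actually supports:

```python
# Hypothetical helper: pick an -mcpu flag for the LLVM target string
# based on a CPU family name. The mapping is an illustrative assumption.
MCPU_BY_ARCH = {
    "skylake": "skylake-avx512",
    "cascadelake": "cascadelake",
    "icelake": "icelake-client",
}

def llvm_target(arch):
    """Return a TVM LLVM target string, falling back to plain 'llvm'."""
    mcpu = MCPU_BY_ARCH.get(arch)
    return f"llvm -mcpu={mcpu}" if mcpu else "llvm"

print(llvm_target("skylake"))  # llvm -mcpu=skylake-avx512
```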





---
[Visit Topic](https://discuss.tvm.apache.org/t/tvm-performance-vs-onnx/10078/2) to respond.
