Posted to discuss-archive@tvm.apache.org by venkataraju koppada via TVM Discuss <no...@discuss.tvm.ai> on 2020/04/15 13:19:12 UTC

[TVM Discuss] [Questions] Can we schedule a bunch of ops on the CPU and some others on the GPU while running inference in TVM?


Hi Experts,

I have just started looking into the TVM framework.
I am exploring how to get the best latency numbers using TVM.

As part of this, I wanted to know: is there any way a user can attach device info per op?
Also, can a user create multiple graphs (for example, one for an object detection model and another for a classification model) and schedule them in one application using TVM?
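
To make the first question concrete, here is a rough sketch of what I am hoping is possible. I am assuming an annotation API along the lines of `relay.annotation.on_device`, a dict-style target for a heterogeneous build, and a graph runtime that accepts one context per device; I have not verified the exact signatures, so please correct me if this is not how it works:

```python
# Sketch only -- based on my reading of the Relay heterogeneous execution
# tests; the exact API names and signatures are assumptions on my part.
import tvm
from tvm import relay
from tvm.contrib import graph_runtime

shape = (1, 64, 56, 56)
x = relay.var("x", shape=shape)
y = relay.var("y", shape=shape)

# Hope: keep the add on the CPU, but let the multiply run on the GPU.
add = relay.add(x, y)
add = relay.annotation.on_device(add, tvm.cpu(0))   # assumed per-op annotation
mul = relay.multiply(add, relay.const(2.0))
mul = relay.annotation.on_device(mul, tvm.gpu(0))

func = relay.Function([x, y], mul)
mod = tvm.IRModule.from_expr(func)

# Assumed: a dict target maps device names to backends for a heterogeneous
# build. There may also be a fallback-device option I am missing here.
targets = {"cpu": "llvm", "cuda": "cuda"}
graph, lib, params = relay.build(mod, target=targets)

# Assumed: the runtime takes one context per device used in the graph.
module = graph_runtime.create(graph, lib, [tvm.cpu(0), tvm.gpu(0)])
```

For the second question, I imagine the object detection and classification models could be built separately and loaded as two runtime modules in the same process, but I would like to confirm whether that is the recommended way.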

Thanks and Regards,
Raju

---
[Visit Topic](https://discuss.tvm.ai/t/can-we-schedule-bunch-of-ops-in-cpu-and-other-some-in-gpu-while-running-inference-in-tvm/6377/1) to respond.
