Posted to discuss-archive@tvm.apache.org by yanyu1268 via Apache TVM Discuss <no...@discuss.tvm.ai> on 2021/11/25 23:46:53 UTC

[Apache TVM Discuss] [Questions] How to do heterogeneous execution on cpu and gpu?


Hello,

I have read some posts on the forum, but I am still confused.
1) I want to use Relay to build a simple network and run it heterogeneously, with some ops on the GPU and the others on the CPU. There seem to be two different ways:

* One is through `relay.annotation.on_device`, `relay.device_copy`, and `relay.transform.RewriteAnnotatedOps`. After that the Relay graph is rewritten and I can call `relay.build`. But my TVM version is 0.8 and this does not seem to work. Is my usage wrong? I am not sure how to do this in the current version.
* The other way is part of BYOC, but I just want to try heterogeneous execution on GPU and CPU, so BYOC does not seem to be needed?

2) I want to check what difference heterogeneous execution makes in the graph JSON. I have read some of the code for the JSON reader and the graph executor. My guess is that with heterogeneous execution the JSON will contain some `tvm_op` nodes whose `func_name` is `"__copy"` to copy data between devices, and that `device_index` will denote which device each node should execute on. Is my guess correct?
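As I understand it, the annotation-rewrite pass conceptually does something like the following. This is only a toy Python model of the idea, not TVM code (the names and data layout here are made up): walk the ops in topological order and insert an explicit copy node wherever a value produced on one device is consumed on another:

```python
def insert_device_copies(nodes):
    """Toy model of an annotation-rewrite pass: given (name, device, inputs)
    ops in topological order, insert a copy node wherever a value crosses a
    device boundary, so every op only reads values on its own device."""
    device_of = {}  # value name -> device it lives on
    out = []
    for name, device, inputs in nodes:
        fixed_inputs = []
        for inp in inputs:
            if device_of[inp] != device:
                copy_name = f"{inp}_copy_to_{device}"
                if copy_name not in device_of:
                    # Stands in for a device_copy / "__copy" node.
                    out.append((copy_name, device, [inp]))
                    device_of[copy_name] = device
                fixed_inputs.append(copy_name)
            else:
                fixed_inputs.append(inp)
        out.append((name, device, fixed_inputs))
        device_of[name] = device
    return out

graph = [
    ("x",    "cpu", []),
    ("conv", "gpu", ["x"]),      # needs x copied cpu -> gpu
    ("relu", "gpu", ["conv"]),   # same device, no copy needed
    ("sum",  "cpu", ["relu"]),   # needs relu copied gpu -> cpu
]
rewritten = insert_device_copies(graph)
```

Here `rewritten` contains the four original ops plus two inserted copy nodes, one at each device boundary.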
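Concretely, I imagine the JSON would look roughly like this. This is a hand-written sketch from my reading of the graph executor code, not output of an actual build; the node names are made up and real graph JSON has more fields (`inputs`, `dltype`, `shape`, etc.):

```json
{
  "nodes": [
    {"op": "tvm_op", "name": "fused_conv", "attrs": {"func_name": "fused_conv"}},
    {"op": "tvm_op", "name": "copy0",      "attrs": {"func_name": "__copy"}},
    {"op": "tvm_op", "name": "fused_add",  "attrs": {"func_name": "fused_add"}}
  ],
  "attrs": {
    "device_index": ["list_int", [2, 2, 1]]
  }
}
```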

I'm new to TVM, any help or suggestions are massively appreciated!





---
[Visit Topic](https://discuss.tvm.apache.org/t/how-to-do-heterogeneous-execution-on-cpu-and-gpu/11561/1) to respond.

You are receiving this because you enabled mailing list mode.

To unsubscribe from these emails, click the unsubscribe link in the original message.


Posted by wrongtest via Apache TVM Discuss <no...@discuss.tvm.ai>.

If you are using the `relay.build()` -> `graph_executor.GraphModule` path, the point I remember is that you should pass a multi-target dict as the `target` argument of `relay.build` and pass a device list into `GraphModule`, like:

```python
lib = relay.build(relay_mod, target={"cpu": "llvm", "gpu": "cuda"}, params=params)
m = graph_executor.GraphModule(lib["default"](tvm.cpu(), tvm.gpu()))
```
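As I understand it, the executor resolves each node's `device_index` code against the devices you hand to `GraphModule` by device type rather than by position. A toy lookup to illustrate the matching, not actual TVM code (the codes are the DLPack device type enum, where CPU is 1 and CUDA is 2):

```python
# Device type codes from DLPack, as used by TVM: kDLCPU = 1, kDLCUDA = 2.
KDL_CPU, KDL_CUDA = 1, 2

# Suppose the graph JSON's "device_index" attribute assigns one code per node,
# e.g. conv and relu on the GPU and a final reduction on the CPU:
device_index = [KDL_CUDA, KDL_CUDA, KDL_CPU]

# Toy stand-in for the devices passed to GraphModule, keyed by device type:
devices = {KDL_CPU: "tvm.cpu(0)", KDL_CUDA: "tvm.gpu(0)"}

# The executor places each node on the device whose type code matches:
placement = [devices[code] for code in device_index]
```

So whichever of `tvm.cpu()` / `tvm.gpu()` you pass, each node runs on the device whose type matches its `device_index` entry.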





---
[Visit Topic](https://discuss.tvm.apache.org/t/how-to-do-heterogeneous-execution-on-cpu-and-gpu/11561/3) to respond.



Posted by wrongtest via Apache TVM Discuss <no...@discuss.tvm.ai>.

Hi~ Can this unittest case help you?
https://github.com/apache/tvm/blob/be03d62e5b0afd607964365bc73e94f72fdfaaef/tests/python/relay/test_vm.py#L1071





---
[Visit Topic](https://discuss.tvm.apache.org/t/how-to-do-heterogeneous-execution-on-cpu-and-gpu/11561/2) to respond.
