Posted to discuss-archive@tvm.apache.org by Aurel333 via TVM Discuss <no...@discuss.tvm.ai> on 2020/08/24 13:13:07 UTC

[TVM Discuss] [Questions] [OpenCL] async memory transfer and double buffering


Hello, I am working with OpenCL and trying to create a new schedule optimized for a custom device. Since my device has a large shared memory, I tried enabling double buffering in the conv2d_direct schedule for CUDA.

When I checked the source code of the generated kernels, I noticed that the memory cost is indeed doubled, but there is no asynchronous memory transfer to actually leverage the double buffering. That led me to wonder: is double buffering fully supported in TVM for OpenCL?





---
[Visit Topic](https://discuss.tvm.ai/t/opencl-async-memory-transfer-and-double-buffering/7706/1) to respond.



Posted by Aurel333 via TVM Discuss <no...@discuss.tvm.ai>.

I have dug into this further and now I understand why there is no asynchronous memory access: TVM's OpenCL backend was designed with GPUs in mind, and a GPU simply switches to another warp when the active one stalls on a memory access, so explicit asynchronous transfers are unnecessary there.

While this is completely justified for CUDA, I think there should be asynchronous memory access for OpenCL, since it is meant to target generic devices; people who want to use the OpenCL backend for their own device will likely run into performance issues because of this.
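
For reference, OpenCL C does expose explicit asynchronous work-group copies that a generated kernel could emit. A hand-written sketch of what that might look like (not TVM output; the tile size and the elided compute are hypothetical):

```c
#define TILE 256

/* OpenCL C kernel sketch: overlap the async copy of the next tile
 * into local memory with computation on the current tile. */
__kernel void conv_tile(__global const float *in,
                        __global float *out,
                        int ntiles) {
    __local float buf[2][TILE];
    event_t ev[2];

    /* prologue: start loading tile 0 into the first buffer half */
    ev[0] = async_work_group_copy(buf[0], in, TILE, 0);
    for (int i = 0; i < ntiles; ++i) {
        if (i + 1 < ntiles)
            ev[(i + 1) & 1] = async_work_group_copy(
                buf[(i + 1) & 1], in + (i + 1) * TILE, TILE, 0);
        wait_group_events(1, &ev[i & 1]);
        /* ... compute on buf[i & 1], write results to out ... */
    }
}
```

On a GPU the copy may be implemented synchronously anyway, but on an accelerator with a DMA engine this is exactly where double buffering pays off.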

Besides, I noticed that TVM doesn't detect accelerator or custom OpenCL devices. Would you be interested in a pull request to fix this?
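
On the detection side, the host API already distinguishes these device classes; a sketch of the query (requires an OpenCL SDK to build, and the function name here is illustrative, not TVM's):

```c
#include <CL/cl.h>
#include <stdio.h>

/* Query the device class; a backend could use this to pick a
 * scheduling strategy for accelerators and custom devices. */
void print_device_class(cl_device_id dev) {
    cl_device_type type;
    clGetDeviceInfo(dev, CL_DEVICE_TYPE, sizeof(type), &type, NULL);
    if (type & CL_DEVICE_TYPE_GPU)
        printf("gpu\n");
    else if (type & CL_DEVICE_TYPE_ACCELERATOR)
        printf("accelerator\n");
    else if (type & CL_DEVICE_TYPE_CUSTOM)   /* OpenCL 1.2+ */
        printf("custom\n");
    else if (type & CL_DEVICE_TYPE_CPU)
        printf("cpu\n");
}
```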





---
[Visit Topic](https://discuss.tvm.ai/t/opencl-async-memory-transfer-and-double-buffering/7706/2) to respond.
