Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/04/28 19:29:10 UTC

[GitHub] [tvm] masahi commented on a change in pull request #7935: [SPARSE] Improve sparse performance on ROCM

masahi commented on a change in pull request #7935:
URL: https://github.com/apache/tvm/pull/7935#discussion_r622475973



##########
File path: python/tvm/topi/cuda/sparse.py
##########
@@ -170,6 +170,16 @@ def gen_ir(data, w_data, w_indices, w_indptr, out):
         # TODO(tkonolige): separate implementation for large block sizes
         ib = tvm.tir.ir_builder.create()
 
+        if tvm.target.Target.current(allow_none=False).kind.name == "rocm":

Review comment:
      I've never tested this kernel on vulkan, but since our vulkan and opencl targets do not have a default `thread_warp_size` specified, I'm pretty sure trying to use warp instructions there wouldn't work. Moreover, even if we had a default `thread_warp_size` for vulkan, for example, lowering warp instructions requires dedicated codegen support that doesn't exist for vulkan or opencl.
   
   So for now, I think `use_warp_storage` should be True only for CUDA.
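   
   A minimal sketch of the gating this suggests, assuming only what appears above (`tvm.target.Target.current` and `kind.name` from the diff, plus TVM's `thread_warp_size` target attribute); the `with` scope and the fallback value are illustrative, not the PR's actual code:

```python
import tvm

# Enter a target scope so Target.current() has something to return;
# "cuda" here is just an example target string.
with tvm.target.Target("cuda"):
    target = tvm.target.Target.current(allow_none=False)
    # Per the review: only CUDA has both a default thread_warp_size and
    # the codegen support needed to lower warp intrinsics, so enable
    # warp storage there only (rocm/vulkan/opencl take the fallback path).
    use_warp_storage = target.kind.name == "cuda"
    warp_size = target.thread_warp_size if use_warp_storage else 1
```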
   
   



