Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/02/10 07:23:42 UTC

[GitHub] [tvm] yzh119 edited a comment on pull request #10207: Support sub warp reduction for CUDA target.

yzh119 edited a comment on pull request #10207:
URL: https://github.com/apache/tvm/pull/10207#issuecomment-1034575574


   There are some issues to be solved:
   
   Consider the following case:
   ```python
   from tvm.script import tir as T
   
   @T.prim_func
   def reduce(a: T.handle, b: T.handle, n: T.int32) -> None:
       A = T.match_buffer(a, [1024, 4, 8])
       B = T.match_buffer(b, [1024, 4])
   
       # B[i, j] = sum over k of A[i, j, k]; k is the reduction ("R") axis
       for i, j, k in T.grid(1024, 4, 8):
           with T.block("reduce"):
               vi, vj, vk = T.axis.remap("SSR", [i, j, k])
               with T.init():
                   B[vi, vj] = 0.0
               B[vi, vj] = B[vi, vj] + A[vi, vj, vk]
   ```
   If we bind `j` to `threadIdx.y` and `k` to `threadIdx.x`, different values of `j` may be mapped to the same warp, so we need a different mask for each `j` to keep the reduction groups apart, as sketched below.
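   
   For illustration, here is a minimal CUDA sketch of the kind of masked sub-warp reduction this case needs (the kernel, its name, and the exact lowering are hypothetical, not what this PR emits). With `blockDim = (8, 4, 1)`, all four values of `j` share one 32-thread warp, so each `j` gets its own 8-lane mask:
   
   ```cuda
   // Hypothetical sketch: sub-warp reduction with a per-group shuffle mask.
   // blockDim = (8, 4, 1): threadIdx.x spans the reduction axis k (8 lanes),
   // threadIdx.y spans j, so all four j groups live in one 32-thread warp.
   __global__ void reduce_kernel(const float* A, float* B) {
       int i = blockIdx.x;   // one block per value of i
       int j = threadIdx.y;
       int k = threadIdx.x;
       float val = A[(i * 4 + j) * 8 + k];
   
       // Each j owns a contiguous 8-lane slice of the warp; the mask names
       // only this group's lanes, and width = 8 keeps the shuffle inside it.
       unsigned mask = 0xffu << (j * 8);
       for (int offset = 4; offset > 0; offset >>= 1) {
           val += __shfl_down_sync(mask, val, offset, 8);
       }
   
       if (k == 0) {
           B[i * 4 + j] = val;  // lane 0 of each group holds the group's sum
       }
   }
   ```
   
   Launched as `reduce_kernel<<<1024, dim3(8, 4, 1)>>>(A, B)`, each 8-lane group then reduces its own `(i, j)` row without touching the other groups in the same warp.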
   
   Another thing worth noting: shuffle-down can only reduce within a single warp (it cannot cross warp boundaries), so when `blockDim.y * blockDim.z != 1` the warp size must be a multiple of `blockDim.x`; otherwise a reduction group along `threadIdx.x` would straddle two warps.
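   
   As a sketch of that constraint (a hypothetical host-side check, not TVM's actual code):
   
   ```cuda
   #include <cassert>
   #include <cuda_runtime.h>
   
   // Hypothetical check: a shuffle-based reduction along threadIdx.x must stay
   // inside one warp, so when the block has extra y/z dimensions, blockDim.x
   // has to divide the warp size (32 on current NVIDIA GPUs).
   void CheckBlockShape(dim3 block) {
       constexpr unsigned kWarpSize = 32;
       if (block.y * block.z != 1) {
           assert(kWarpSize % block.x == 0 &&
                  "a reduction group along threadIdx.x would straddle two warps");
       }
   }
   ```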

