Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/02/10 06:14:45 UTC

[GitHub] [tvm] yzh119 commented on pull request #10207: Support sub warp reduction for CUDA target.

yzh119 commented on pull request #10207:
URL: https://github.com/apache/tvm/pull/10207#issuecomment-1034535980


   Sure, below is the measured time of the kernel:
   ```python
   @T.prim_func
   def reduce(a: T.handle, b: T.handle, n: T.int32) -> None:
       A = T.match_buffer(a, [1048576, n])
       B = T.match_buffer(b, [1048576])
   
       for i, j in T.grid(1048576, n):
           with T.block("reduce"):
               vi, vj = T.axis.remap("SR", [i, j])
               with T.init():
                   B[vi] = 0.
               B[vi] = B[vi] + A[vi, vj]
   ```
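For reference, the semantics of the prim_func above are just a per-row sum; a minimal plain-Python equivalent (the name `reduce_ref` is hypothetical, for illustration only) would be:

```python
def reduce_ref(A):
    # B[vi] = sum over vj of A[vi, vj], with B[vi] initialized to 0,
    # matching the init/update pattern in the "reduce" block above.
    return [sum(row) for row in A]

# e.g. reduce_ref([[1.0, 2.0], [3.0, 4.0]]) gives [3.0, 7.0]
```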
   with n varied over 2, 4, 8, 16, and 32.
   
   | n                      | 2                  | 4                 | 8                  | 16                 | 32                 |
   |------------------------|--------------------|-------------------|--------------------|--------------------|--------------------|
   | shuffle-down time (ms) | 1.840511957804362  | 1.877586046854655 | 2.1820863087972007 | 2.2471348444620767 | 2.1001497904459634 |
   | shared mem time (ms)   | 1.7892122268676758 | 1.922925313313802 | 2.053538958231608  | 2.0630757013956704 | 2.1170775095621743 |
   
   There is some variance across multiple runs.
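For readers unfamiliar with the shuffle-down variant being timed: a sub-warp reduction of width n (a power of two, here 2-32) sums the lanes in log2(n) steps, where at each step lane k adds the value held by lane k + offset (what `__shfl_down_sync` provides on CUDA). A hedged plain-Python model of that data flow (the function name is hypothetical; real code runs the lanes in lockstep on the GPU):

```python
def subwarp_shfl_down_sum(lanes):
    # Model of a shuffle-down tree reduction over one sub-warp.
    # `lanes` holds one value per lane; after log2(n) halving steps,
    # lane 0 holds the sum of the whole sub-warp.
    n = len(lanes)
    assert n & (n - 1) == 0, "sub-warp width must be a power of two"
    vals = list(lanes)
    offset = n // 2
    while offset > 0:
        for k in range(n - offset):
            # Each lane k reads lane k + offset and accumulates;
            # ascending k never reads an already-updated slot.
            vals[k] += vals[k + offset]
        offset //= 2
    return vals[0]

# e.g. subwarp_shfl_down_sum([1, 2, 3, 4]) gives 10
```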

