Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/09/11 06:41:45 UTC

[GitHub] [incubator-tvm] lsy643 edited a comment on pull request #6108: Fix CUDA Compute Function For `get_valid_counts` and `nms`

lsy643 edited a comment on pull request #6108:
URL: https://github.com/apache/incubator-tvm/pull/6108#issuecomment-690903238


   @yongwww 
   I have added a test case for the NMS CUDA version in `test_op_level5.py`, with test data assumed to be the output of `get_valid_counts`.
   
   Since there is no `rearrange_indices_out` for the NMS CUDA version, I only compare it with the LLVM version:
   1. For test data with shape `[1, 5, 6]`
      - CUDA time: 90us
      - LLVM time: 32us
   
   2. For test data with shape `[1, 20000, 6]`
      - CUDA time: 6230us
      - LLVM time: 219209us
   
   The LLVM inference time on the large dataset is far too high; there the CUDA version is roughly 35x faster.
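   
   For reference, a minimal sketch of how per-run times like these can be collected with TVM's `time_evaluator`; here `f`, `ctx`, and the `tvm_*` arrays refer to the build sketch at the end of this comment, and `number=10` is an illustrative setting, not necessarily what produced the numbers above:
   ```python
   # Average the kernel over several runs to smooth out launch overhead.
   evaluator = f.time_evaluator(f.entry_name, ctx, number=10)
   mean_us = evaluator(tvm_data, tvm_valid_count, tvm_indices, tvm_out).mean * 1e6
   print("mean inference time: %.1f us" % mean_us)
   ```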
   
   The test data I use:
   ```python
   import numpy as np
   
   data_length = 20000
   np_valid_count = np.array([data_length]).astype("int32")
   np_indices = np.array([list(range(data_length))]).astype("int32")
   
   # Five base boxes in [class_id, score, x1, y1, x2, y2] layout
   np_data = np.array([[[0, 0.8, 1, 20, 25, 45], [1, 0.7, 30, 60, 50, 80],
                        [0, 0.4, 4, 21, 19, 40], [2, 0.9, 35, 61, 52, 79],
                        [1, 0.5, 100, 60, 70, 110]]]).astype("float32")
   # Integer division: `20000 / 5` would pass a float to repeat()
   np_data = np_data.repeat(data_length // 5, axis=1)
   ```
   
   The compute and schedule functions I use:
   ```python
   use_cuda = False
   if use_cuda:
       device = 'cuda'
       fcompute = topi.cuda.non_max_suppression
       fschedule = topi.cuda.schedule_nms
   else:
       device = 'llvm'
       fcompute = topi.vision.non_max_suppression
       fschedule = topi.generic.schedule_nms
   ```
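   
   For completeness, a hedged sketch of how these pieces could be wired together and run; the NMS argument list follows the 2020-era topi signature and the parameter values (`iou_threshold=0.5`, `force_suppress=True`, etc.) are illustrative, not necessarily what the test uses:
   ```python
   import numpy as np
   import tvm
   from tvm import te
   
   ctx = tvm.context(device, 0)
   
   # Placeholders matching the numpy test data above
   data = te.placeholder(np_data.shape, name="data", dtype="float32")
   valid_count = te.placeholder(np_valid_count.shape, name="valid_count", dtype="int32")
   indices = te.placeholder(np_indices.shape, name="indices", dtype="int32")
   
   with tvm.target.create(device):
       # max_output_size=-1 and top_k=-1 disable those limits; values are illustrative
       out = fcompute(data, valid_count, indices, -1, 0.5, True, -1,
                      return_indices=False)
       s = fschedule(out)
   
   f = tvm.build(s, [data, valid_count, indices, out], device)
   
   tvm_data = tvm.nd.array(np_data, ctx)
   tvm_valid_count = tvm.nd.array(np_valid_count, ctx)
   tvm_indices = tvm.nd.array(np_indices, ctx)
   tvm_out = tvm.nd.array(np.zeros(np_data.shape, dtype="float32"), ctx)
   f(tvm_data, tvm_valid_count, tvm_indices, tvm_out)
   ```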
   

