Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/01/07 19:27:09 UTC

[GitHub] [tvm] anijain2305 edited a comment on pull request #7123: Parallelize cumsum in get_valid_counts

anijain2305 edited a comment on pull request #7123:
URL: https://github.com/apache/tvm/pull/7123#issuecomment-756327373


   > I don't think this is valid if num_anchors is zero; it could lead to undefined behavior. Could you wrap that in a `with ib.if_scope(num_anchors > 0)` and see if that fixes the problem?
   > 
   > https://github.com/apache/tvm/blob/9815ae2d9e17eece1a1009eb6436c80f931c734e/python/tvm/topi/cuda/nms.py#L209-L214
   
   
   ~~~
   @@ -210,8 +211,9 @@ def get_valid_indices_ir(valid_boxes, valid_count, valid_indices):
            bx = te.thread_axis("blockIdx.x")
            ib.scope_attr(bx, "thread_extent", batch_size)
            with ib.if_scope(bx < batch_size):
   -            valid_count[bx] = valid_indices[(bx + 1) * num_anchors - 1]
   -            valid_indices[(bx + 1) * num_anchors - 1] = 0
   +            with ib.if_scope(num_anchors > 0):
   +                valid_count[bx] = valid_indices[(bx + 1) * num_anchors - 1]
   +                valid_indices[(bx + 1) * num_anchors - 1] = 0
   
        with ib.for_range(0, lim, dtype="int64") as l2_width:
            width = 2 << (lim - l2_width - 1)
   ~~~
   
   I tried this yesterday. Unfortunately, this is not the source of the problem: with the guard in place, the test still failed.
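
   The concern behind the suggested guard can be sketched outside of TVM. This is a hypothetical illustration (plain Python, not IR builder code) of why the index expression in `get_valid_indices_ir` is suspect when `num_anchors` is zero:

   ~~~python
   # Hypothetical sketch: the flattened index used in get_valid_indices_ir.
   # With num_anchors == 0, the expression evaluates to -1, which in the
   # generated CUDA code would be an out-of-bounds buffer access
   # (undefined behavior), hence the proposed ib.if_scope guard.
   def last_anchor_index(bx, num_anchors):
       return (bx + 1) * num_anchors - 1

   print(last_anchor_index(0, 0))   # -1: out of bounds when there are no anchors
   print(last_anchor_index(2, 16))  # 47: last anchor slot of batch element 2
   ~~~

   As the follow-up notes, the guard alone did not fix the failing test, so the out-of-bounds read is at most part of the story.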


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org