Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/04/22 11:31:24 UTC

[GitHub] [tvm] masahi opened a new pull request #7910: [Relay] Shape func fix for all_class_nms and where op

masahi opened a new pull request #7910:
URL: https://github.com/apache/tvm/pull/7910


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] mbrookhart edited a comment on pull request #7910: [Relay] Shape func fix for all_class_nms and where op

mbrookhart edited a comment on pull request #7910:
URL: https://github.com/apache/tvm/pull/7910#issuecomment-825041882


   The subgraph that's causing the where issues is this:
   ```
     %5 = take(%p3, 1 /* ty=int32 */) /* ty=int64 */;
     %6 = add(4 /* ty=int64 */, %5) /* ty=int64 */;
     %7 = where(False /* ty=bool */, %6, 4 /* ty=int64 */) /* ty=int64 */;
     %8 = take(%p4, %7, axis=1) /* ty=Tensor[(?), float32] */;
   ```
   And this unit test reproduces it:
   ```
   diff --git a/tests/python/relay/test_any.py b/tests/python/relay/test_any.py
   index ef02b6f10..7b21c01b2 100644
   --- a/tests/python/relay/test_any.py
   +++ b/tests/python/relay/test_any.py
   @@ -1512,6 +1512,22 @@ def test_any_where():
            any_dims(2), any_dims(2), any_dims(2), (3, 4), (3, 1), (1, 4), y_np_shape_invalid=(2, 4)
        )
    
   +    # Test scalar where in a dynamically shaped graph
   +    x_np = np.random.randn(2).astype("int64")
   +    y_np = np.random.randn(2, 6).astype("float32")
   +    expected = y_np[:, 4]
   +    x = relay.var("x", shape=any_dims(1), dtype="int64")
   +    y = relay.var("y", shape=any_dims(2), dtype="float32")
   +
   +    left = relay.take(x, relay.const(1, dtype="int32")) + relay.const(4, "int64")
   +    right = relay.const(4, "int64")
   +    where = relay.where(relay.const(False, "bool"), left, right)
   +    z = relay.take(y, where, axis=1)
   +
   +    mod = tvm.IRModule()
   +    mod["main"] = relay.Function([x, y], z)
   +    check_result([x_np, y_np], mod, expected)
   +
    
    @tvm.testing.uses_gpu
    def test_non_max_suppression():
   ```
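   The scalar-`where` pattern above can be sanity-checked against plain NumPy semantics. The following is a rough sketch (variable names and the `assert` are mine, not from the PR) of what the subgraph is expected to compute:
   ```python
   import numpy as np

   # NumPy sketch of the scalar-`where` subgraph above:
   # take(x, 1) + 4 on one branch, the constant 4 on the other,
   # selected by a scalar boolean and then used to index axis 1.
   x = np.random.randn(2).astype("int64")
   y = np.random.randn(2, 6).astype("float32")

   left = x[1] + np.int64(4)            # %6 = add(4, take(%p3, 1))
   right = np.int64(4)                  # the constant branch
   idx = np.where(False, left, right)   # %7: scalar where -> scalar index
   z = np.take(y, idx, axis=1)          # %8 = take(%p4, %7, axis=1)

   assert np.array_equal(z, y[:, 4])    # condition is False, so idx == 4
   ```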





[GitHub] [tvm] mbrookhart commented on pull request #7910: [Relay] Shape func fix for all_class_nms and where op

mbrookhart commented on pull request #7910:
URL: https://github.com/apache/tvm/pull/7910#issuecomment-825041882


   The subgraph that's causing the where issues is this:
   ```
     %p0: Tensor[(?), int64]
     %p1: Tensor[(2), int64]
     %0 = less(%p0, 0 /* ty=int64 */) /* ty=Tensor[(?), bool] */;
     %1 = take(%p1, 0 /* ty=int32 */) /* ty=int64 */;
     %2 = add(%p0, %1) /* ty=Tensor[(?), int64] */;
     %3 = where(%0, %2, %p0) /* ty=Tensor[(?), int64] */;
   ```
   And this unit test reproduces it:
   ```
   diff --git a/tests/python/relay/test_any.py b/tests/python/relay/test_any.py
   index ef02b6f10..7b21c01b2 100644
   --- a/tests/python/relay/test_any.py
   +++ b/tests/python/relay/test_any.py
   @@ -1512,6 +1512,22 @@ def test_any_where():
            any_dims(2), any_dims(2), any_dims(2), (3, 4), (3, 1), (1, 4), y_np_shape_invalid=(2, 4)
        )
    
   +    # Test scalar where in a dynamically shaped graph
   +    x_np = np.random.randn(2).astype("int64")
   +    y_np = np.random.randn(2, 6).astype("float32")
   +    expected = y_np[:, 4]
   +    x = relay.var("x", shape=any_dims(1), dtype="int64")
   +    y = relay.var("y", shape=any_dims(2), dtype="float32")
   +
   +    left = relay.take(x, relay.const(1, dtype="int32")) + relay.const(4, "int64")
   +    right = relay.const(4, "int64")
   +    where = relay.where(relay.const(False, "bool"), left, right)
   +    z = relay.take(y, where, axis=1)
   +
   +    mod = tvm.IRModule()
   +    mod["main"] = relay.Function([x, y], z)
   +    check_result([x_np, y_np], mod, expected)
   +
    
    @tvm.testing.uses_gpu
    def test_non_max_suppression():
   ```
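   Unlike the scalar case in the edited comment, the subgraph here is the tensor form of `where`, normalizing possibly negative indices against a dimension size. A rough NumPy sketch (the concrete values are mine, purely illustrative):
   ```python
   import numpy as np

   # NumPy sketch of the tensor-`where` subgraph above: indices in %p0
   # that are negative get the dimension size take(%p1, 0) added to them.
   p0 = np.array([-1, 0, 2, -3], dtype="int64")  # hypothetical index tensor
   p1 = np.array([6, 4], dtype="int64")          # hypothetical shape tensor

   neg = p0 < np.int64(0)                   # %0 = less(%p0, 0)
   shifted = p0 + p1[0]                     # %2 = add(%p0, take(%p1, 0))
   normalized = np.where(neg, shifted, p0)  # %3 = where(%0, %2, %p0)
   # -1 -> 5 and -3 -> 3; non-negative indices pass through unchanged.
   ```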





[GitHub] [tvm] mbrookhart commented on pull request #7910: [Relay] Shape func fix for all_class_nms and where op

mbrookhart commented on pull request #7910:
URL: https://github.com/apache/tvm/pull/7910#issuecomment-825793458


   Thanks @masahi 





[GitHub] [tvm] mbrookhart merged pull request #7910: [Relay] Shape func fix for all_class_nms and where op

mbrookhart merged pull request #7910:
URL: https://github.com/apache/tvm/pull/7910


   





[GitHub] [tvm] mbrookhart commented on pull request #7910: [Relay] Shape func fix for all_class_nms and where op

mbrookhart commented on pull request #7910:
URL: https://github.com/apache/tvm/pull/7910#issuecomment-825042408


   I find it interesting that if I freeze the weights, I still fail to compile the all_class_non_max_suppression function:
   ```
   ---------------------------------------------------------------
   An internal invariant was violated during the execution of TVM.
   Please read TVM's error reporting guidelines.
   More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
   ---------------------------------------------------------------
   
     Check failed: arg.dtype() == value.dtype() (int32 vs. int64) : 
   Error during compile function
   -----------------------------
   #[version = "0.0.5"]
   fn (%p0: Tensor[(1, 1344, 4), float32], %p1: Tensor[(1, 1, 1344), float32], %p2: int64, %p3: float32, %p4: float32, Primitive=1) -> (Tensor[(1344, 3), int64], Tensor[(1), int64]) {
     vision.all_class_non_max_suppression(%p0, %p1, %p2, %p3, %p4, meta[relay.attrs.AllClassNonMaximumSuppressionAttrs][0]) /* ty=(Tensor[(1344, 3), int64], Tensor[(1), int64]) */
   }
   ```
   
   I'm not sure why that would be; probably something isn't constant-folding correctly.
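   The check that fires is a dtype-consistency invariant between a primitive function's declared parameter type and the value bound to it. A hypothetical Python sketch (function name and message shape are mine, not TVM's actual code) of that kind of invariant:
   ```python
   import numpy as np

   def bind_arg(declared_dtype: str, value: np.ndarray) -> np.ndarray:
       """Hypothetical sketch of the invariant reported above: the value
       bound to a primitive-function parameter must match the declared
       dtype exactly; no implicit int32 -> int64 promotion is applied."""
       if value.dtype != np.dtype(declared_dtype):
           raise TypeError(
               f"Check failed: arg.dtype() == value.dtype() "
               f"({value.dtype} vs. {declared_dtype})"
           )
       return value

   # %p2 is declared int64; a constant-folded int32 value trips the check.
   ok = bind_arg("int64", np.array(1344, dtype="int64"))   # passes
   try:
       bind_arg("int64", np.array(1344, dtype="int32"))    # raises
       mismatch_caught = False
   except TypeError:
       mismatch_caught = True
   ```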





[GitHub] [tvm] masahi commented on pull request #7910: [Relay] Shape func fix for all_class_nms and where op

masahi commented on pull request #7910:
URL: https://github.com/apache/tvm/pull/7910#issuecomment-825178134


   > I find it interesting that if I freeze the weights, I still fail to compile the all_class_non_max_suppression function
   
   Indeed, it is weird that `freeze_params=True` breaks compilation. I have no idea where in `all_class_non_max_suppression` an int32 vs. int64 issue could arise.





[GitHub] [tvm] masahi commented on pull request #7910: [Relay] Shape func fix for all_class_nms and where op

masahi commented on pull request #7910:
URL: https://github.com/apache/tvm/pull/7910#issuecomment-825170666


   > And this unit test reproduces it:
   
   @mbrookhart Thanks! Added the test for the `where` op's scalar shape func.
   





[GitHub] [tvm] mbrookhart commented on pull request #7910: [Relay] Shape func fix for all_class_nms and where op

mbrookhart commented on pull request #7910:
URL: https://github.com/apache/tvm/pull/7910#issuecomment-825001640


   I don't like putting full models in CI; we have a few, and they're the slowest part of running the frontend tests. It looks like all of the full-model tests get their models from torchvision, though. I'll take a look at the model and see if I can find a unit test we could include.

