Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/04/28 23:03:42 UTC

[GitHub] [tvm] ganler opened a new pull request, #11174: [fix] vec * mat in matmul in onnx converter

ganler opened a new pull request, #11174:
URL: https://github.com/apache/tvm/pull/11174

   This should fix https://github.com/apache/tvm/issues/10651 for the ONNX converter.
   
   cc: @masahi 


[GitHub] [tvm] ganler commented on a diff in pull request #11174: [fix] vec * mat in matmul in onnx converter

Posted by GitBox <gi...@apache.org>.
ganler commented on code in PR #11174:
URL: https://github.com/apache/tvm/pull/11174#discussion_r861375353


##########
python/tvm/relay/frontend/onnx.py:
##########
@@ -324,6 +324,9 @@ def flatten_to_nd(x, x_shape, nd=3):
             0,
         )
         return _op.reshape(output, fold_constant(final_shape))
+    elif a_rank == 1:
+        return _op.multiply(inputs[0], inputs[1])

Review Comment:
   The [ONNX](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#MatMul-13) spec says MatMul simply follows NumPy's semantics (PyTorch also largely follows NumPy).
   
   NumPy's [doc](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html) describes the relevant case:
   
   > If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed.
   
   So basically: (N) * (N, K) -> (K).
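   
   A minimal NumPy sketch of that promotion rule (the shapes below are hypothetical, chosen only for illustration):
   
   ```python
   import numpy as np
   
   a = np.ones(3)         # 1-D "vec", shape (N,) = (3,)
   b = np.ones((3, 4))    # matrix, shape (N, K) = (3, 4)
   
   # a is promoted to shape (1, 3), the matrix product gives (1, 4),
   # and the prepended 1 is removed, leaving shape (K,) = (4,).
   print(np.matmul(a, b).shape)  # -> (4,)
   ```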



[GitHub] [tvm] ganler commented on pull request #11174: [fix] vec * mat in matmul in onnx converter

Posted by GitBox <gi...@apache.org>.
ganler commented on PR #11174:
URL: https://github.com/apache/tvm/pull/11174#issuecomment-1113850267

   @masahi CI finally passed! I think we can also close the issue after the merge. :-)


[GitHub] [tvm] masahi merged pull request #11174: [fix] vec * mat in matmul in onnx converter

Posted by GitBox <gi...@apache.org>.
masahi merged PR #11174:
URL: https://github.com/apache/tvm/pull/11174


[GitHub] [tvm] masahi commented on a diff in pull request #11174: [fix] vec * mat in matmul in onnx converter

Posted by GitBox <gi...@apache.org>.
masahi commented on code in PR #11174:
URL: https://github.com/apache/tvm/pull/11174#discussion_r861372925


##########
python/tvm/relay/frontend/onnx.py:
##########
@@ -324,6 +324,9 @@ def flatten_to_nd(x, x_shape, nd=3):
             0,
         )
         return _op.reshape(output, fold_constant(final_shape))
+    elif a_rank == 1:
+        return _op.multiply(inputs[0], inputs[1])

Review Comment:
   So `op.multiply` with shapes `(3), (3, 1)` does vec * mat?



[GitHub] [tvm] ganler commented on a diff in pull request #11174: [fix] vec * mat in matmul in onnx converter

Posted by GitBox <gi...@apache.org>.
ganler commented on code in PR #11174:
URL: https://github.com/apache/tvm/pull/11174#discussion_r861382672


##########
python/tvm/relay/frontend/onnx.py:
##########
@@ -324,6 +324,9 @@ def flatten_to_nd(x, x_shape, nd=3):
             0,
         )
         return _op.reshape(output, fold_constant(final_shape))
+    elif a_rank == 1:
+        return _op.multiply(inputs[0], inputs[1])

Review Comment:
   @masahi Oh sorry, after double-checking your comment, my previous fix using `_op.multiply` is actually incorrect: its output shape is `[3, 1]`, not `[1]` (compilation passed, but the output shape is wrong). Just fixed that with `_op.nn.matmul`.
   
   I also noticed that TVM's matmul is stricter than NumPy's: a 1-D `vec` is not accepted as `tensor_a`. Hence, I need an unsqueeze/squeeze trick here...
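   
   A rough Relay-level sketch of that unsqueeze/squeeze trick (shapes and variable names here are illustrative, not the exact PR code):
   
   ```python
   import tvm
   from tvm import relay
   
   # Hypothetical 1-D x 2-D case: a has shape (N,) = (3,), b has shape (N, K) = (3, 4).
   a = relay.var("a", shape=(3,), dtype="float32")
   b = relay.var("b", shape=(3, 4), dtype="float32")
   
   # nn.matmul does not accept a 1-D tensor_a, so prepend a unit axis,
   # do the matmul, then squeeze the unit axis away (mirroring NumPy's rule).
   a2d = relay.expand_dims(a, axis=0)   # (1, 3)
   prod = relay.nn.matmul(a2d, b)       # (1, 4)
   out = relay.squeeze(prod, axis=[0])  # (4,)
   
   mod = tvm.IRModule.from_expr(relay.Function([a, b], out))
   mod = relay.transform.InferType()(mod)
   print(mod["main"].ret_type)          # Tensor[(4), float32]
   ```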



[GitHub] [tvm] ganler commented on a diff in pull request #11174: [fix] vec * mat in matmul in onnx converter

Posted by GitBox <gi...@apache.org>.
ganler commented on code in PR #11174:
URL: https://github.com/apache/tvm/pull/11174#discussion_r861374363


##########
python/tvm/relay/frontend/onnx.py:
##########
@@ -324,6 +324,9 @@ def flatten_to_nd(x, x_shape, nd=3):
             0,
         )
         return _op.reshape(output, fold_constant(final_shape))
+    elif a_rank == 1:
+        return _op.multiply(inputs[0], inputs[1])

Review Comment:
   Yes, this is also valid in the ONNX standard. Both PyTorch and ONNXRuntime accept it.



[GitHub] [tvm] ganler commented on a diff in pull request #11174: [fix] vec * mat in matmul in onnx converter

Posted by GitBox <gi...@apache.org>.
ganler commented on code in PR #11174:
URL: https://github.com/apache/tvm/pull/11174#discussion_r861374363


##########
python/tvm/relay/frontend/onnx.py:
##########
@@ -324,6 +324,9 @@ def flatten_to_nd(x, x_shape, nd=3):
             0,
         )
         return _op.reshape(output, fold_constant(final_shape))
+    elif a_rank == 1:
+        return _op.multiply(inputs[0], inputs[1])

Review Comment:
   ~~Yes, this is also valid in the ONNX standard. Both PyTorch and ONNXRuntime accept it.~~



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org