Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/03/17 07:10:38 UTC
[GitHub] [tvm] ganler opened a new issue #10651: [Bug][ONNX] MatMul in dense_alter_op
ganler opened a new issue #10651:
URL: https://github.com/apache/tvm/issues/10651
Here's a simple `MatMul` operation where:
- data tensor's shape: [3]
- weight tensor's shape: [3, 1]
```python
import torch


class Net(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(3, 1))

    def forward(self, x):
        return torch.matmul(x, self.weight)


net = Net().eval()
i = torch.zeros((3), dtype=torch.float)
o = net(i)
print(i.shape, o.shape)

with torch.no_grad():
    torch.onnx.export(net, (i), "output.onnx", verbose=True, opset_version=14)
```
### Expected behavior
The model should be compilable by TVM: it follows the ONNX specification and can be executed by both PyTorch and ONNX Runtime.
### Actual behavior
```
...
  File "/home/ganler/Documents/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 81, in cfun
    rv = local_pyfunc(*pyargs)
  File "/home/ganler/Documents/tvm/python/tvm/relay/op/nn/_nn.py", line 112, in alter_op_layout_dense
    return topi.nn.dense_alter_layout(attrs, inputs, tinfos, out_type)
  File "/home/ganler/miniconda3/lib/python3.8/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/home/ganler/Documents/tvm/python/tvm/target/generic_func.py", line 286, in dispatch_func
    return dispatch_dict[k](*args, **kwargs)
  File "/home/ganler/Documents/tvm/python/tvm/topi/x86/dense_alter_op.py", line 48, in _alter_dense_layout
    M, K = get_const_tuple(data_tensor.shape)
ValueError: not enough values to unpack (expected 2, got 1)
```
This happens because the optimization at
https://github.com/apache/tvm/blob/894772975ab33443cf25f40d9f1e2f7b96224978/python/tvm/topi/x86/dense_alter_op.py#L48
assumes the data tensor's rank is 2. Perhaps we simply need to apply `unsqueeze()` to the data tensor when converting such ONNX models.
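For reference, the `unsqueeze()` idea mirrors what NumPy's `matmul` does implicitly for a rank-1 first operand (it prepends a 1 to the shape, multiplies, then removes it). A minimal NumPy-only sketch, not TVM code, using the shapes from this report:

```python
import numpy as np

data = np.zeros((3,), dtype=np.float32)             # rank-1 data tensor, shape [3]
weight = np.random.randn(3, 1).astype(np.float32)   # weight tensor, shape [3, 1]

# Direct matmul works because NumPy promotes the rank-1 operand internally:
out = np.matmul(data, weight)                       # shape (1,)

# The explicit "unsqueeze" a converter could insert to make the data rank 2:
data_2d = np.expand_dims(data, axis=0)              # [3] -> [1, 3]
out_2d = np.matmul(data_2d, weight)                 # shape (1, 1)

assert out.shape == (1,)
assert out_2d.shape == (1, 1)
assert np.allclose(out, out_2d[0])                  # same values, explicit rank 2
```

After such a promotion the data tensor has an `M` dimension, so the `M, K = get_const_tuple(data_tensor.shape)` unpacking in `_alter_dense_layout` would succeed.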
### Environment
TVM at commit 678e76b3efd57b171940f0017bee89451e381785, built with LLVM.
cc: @masahi @junrushao1994 @kevinthesun
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
[GitHub] [tvm] ganler commented on issue #10651: [Bug][ONNX] MatMul in dense_alter_op
ganler commented on issue #10651:
URL: https://github.com/apache/tvm/issues/10651#issuecomment-1070442633
> What happens if you use the PT frontend?
It seems that does not work either (though I am not very familiar with the `from_pytorch` API, so I may have made a mistake in the following code).
```python
import torch


class Net(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(3, 1))

    def forward(self, x):
        return torch.matmul(x, self.weight)


net = Net().eval()

import tvm
from tvm import relay
from tvm.relay.frontend import from_pytorch

i = torch.zeros((3,), dtype=torch.float)  # example input for tracing
scripted_model = torch.jit.trace(net, i).eval()
mod = from_pytorch(scripted_model, [("x", (3,))])
with tvm.transform.PassContext(opt_level=4):
    relay.build_module.create_executor("graph", mod, tvm.cpu(), target='llvm').evaluate()
```
Log:
```
  File "test.py", line 87, in <module>
    relay.build_module.create_executor("graph", mod, tvm.cpu(), target='llvm').evaluate()
  File "/home/ganler/Documents/tvm/python/tvm/relay/backend/interpreter.py", line 171, in evaluate
    return self._make_executor()
  File "/home/ganler/Documents/tvm/python/tvm/relay/build_module.py", line 618, in _make_executor
    self.mod = InferType()(self.mod)
  File "/home/ganler/Documents/tvm/python/tvm/ir/transform.py", line 161, in __call__
    return _ffi_transform_api.RunPass(self, mod)
  File "/home/ganler/Documents/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
2: TVMFuncCall
1: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::transform::Pass, tvm::IRModule)>::AssignTypedLambda<tvm::transform::$_6>(tvm::transform::$_6, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
0: tvm::runtime::TVMMovableArgValueWithContext_::operator tvm::IRModule<tvm::IRModule>() const
4: TVMFuncCall
3: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::transform::Pass, tvm::IRModule)>::AssignTypedLambda<tvm::transform::$_6>(tvm::transform::$_6, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
2: tvm::runtime::TVMMovableArgValueWithContext_::operator tvm::IRModule<tvm::IRModule>() const
1: tvm::runtime::TVMMovableArgValue_::operator tvm::IRModule<tvm::IRModule, void>() const
0: tvm::IRModule tvm::runtime::TVMPODValue_::AsObjectRef<tvm::IRModule>() const
File "/home/ganler/Documents/tvm/include/tvm/runtime/packed_func.h", line 777
TVMError: In function transform.RunPass(0: transform.Pass, 1: IRModule) -> IRModule: error while converting argument 1: [02:31:28] /home/ganler/Documents/tvm/include/tvm/runtime/packed_func.h:1863:
---------------------------------------------------------------
An error occurred during the execution of TVM.
For more information, please see: https://tvm.apache.org/docs/errors.html
---------------------------------------------------------------
Check failed: (!checked_type.defined()) is false: Expected IRModule, but got Array
```
[GitHub] [tvm] ganler commented on issue #10651: [Bug][ONNX] MatMul in dense_alter_op
ganler commented on issue #10651:
URL: https://github.com/apache/tvm/issues/10651#issuecomment-1071109404
Interesting. I think there are basically two big ideas for working around this:
1. avoid transforming such a `MatMul` into `Dense`;
2. add an unsqueeze so that [3] -> [3, 1], fitting the requirement of the existing code base.
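Idea 1 could amount to an early-out in the alter-layout hook, since returning `None` from such a hook leaves the original op untouched. A minimal standalone sketch (`alter_dense_layout_sketch` is a hypothetical stand-in, not the actual TVM function):

```python
def alter_dense_layout_sketch(data_shape):
    """Skip the layout rewrite unless the data tensor is exactly rank 2."""
    if len(data_shape) != 2:
        return None  # returning None means "no rewrite", avoiding the crash
    M, K = data_shape  # now safe: exactly two values to unpack
    return (M, K)


# Rank-1 data (the failing case from this issue) is simply skipped:
assert alter_dense_layout_sketch((3,)) is None
# Rank-2 data goes through the rewrite path as before:
assert alter_dense_layout_sketch((4, 3)) == (4, 3)
```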
[GitHub] [tvm] masahi commented on issue #10651: [Bug][ONNX] MatMul in dense_alter_op
masahi commented on issue #10651:
URL: https://github.com/apache/tvm/issues/10651#issuecomment-1070494750
Yeah, with this code I get the same error as with ONNX:
```python
import torch


class Net(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(3, 1))

    def forward(self, x):
        return torch.matmul(x, self.weight)


net = Net().eval()

import tvm
from tvm import relay
from tvm.relay.frontend import from_pytorch

scripted_model = torch.jit.trace(net, torch.randn((3,))).eval()
mod, params = from_pytorch(scripted_model, [("x", (3,))])
with tvm.transform.PassContext(opt_level=4):
    relay.build_module.create_executor("graph", mod, tvm.cpu(), target='llvm').evaluate()
```
```
  File "/home/masa/projects/dev/tvm/python/tvm/topi/x86/dense_alter_op.py", line 48, in _alter_dense_layout
    M, K = get_const_tuple(data_tensor.shape)
ValueError: not enough values to unpack (expected 2, got 1)
```
[GitHub] [tvm] masahi commented on issue #10651: [Bug][ONNX] MatMul in dense_alter_op
masahi commented on issue #10651:
URL: https://github.com/apache/tvm/issues/10651#issuecomment-1070424517
What happens if you use the PT frontend?
[GitHub] [tvm] masahi commented on issue #10651: [Bug][ONNX] MatMul in dense_alter_op
masahi commented on issue #10651:
URL: https://github.com/apache/tvm/issues/10651#issuecomment-1070429862
I think the relay `dense` op's type relation should reject such input shapes; I wonder why it doesn't...
[GitHub] [tvm] ganler commented on issue #10651: [Bug][ONNX] MatMul in dense_alter_op
ganler commented on issue #10651:
URL: https://github.com/apache/tvm/issues/10651#issuecomment-1070433145
> I think relay `dense` op type rel should reject such input shapes, I wonder why it doesn't...
It seems TVM tries to convert this `MatMul` operation into a `dense` operation...
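For context on that conversion: Relay's `nn.dense` takes `data: (M, K)` and `weight: (N, K)` and computes `data @ weight.T`, so converting `MatMul` to `dense` transposes the second operand. A NumPy-only sketch of the relationship, with the rank-1 case showing where the `(M, K)` unpacking breaks (the variable names are illustrative):

```python
import numpy as np

x2d = np.random.randn(2, 3).astype(np.float32)   # dense data: (M, K)
w = np.random.randn(3, 1).astype(np.float32)     # MatMul weight: (K, N)

# dense(data, weight) = data @ weight.T, with weight stored as (N, K),
# so the MatMul weight gets transposed during conversion:
dense_weight = w.T                               # (N, K) = (1, 3)
assert np.allclose(np.matmul(x2d, w), x2d @ dense_weight.T)

# With the rank-1 data from this report there is no M dimension to unpack,
# which is exactly the failure seen in _alter_dense_layout:
x1d = np.random.randn(3).astype(np.float32)
try:
    M, K = x1d.shape
except ValueError as e:
    assert "expected 2, got 1" in str(e)
```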