Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/02/18 21:57:26 UTC
[GitHub] [tvm] jwfromm edited a comment on pull request #10321: [ONNX] only broadcast matmul if the shape has changed
URL: https://github.com/apache/tvm/pull/10321#issuecomment-1045234006
We might have to poke around a little more to figure out the issue. I wasn't able to replicate a failure using a standalone script with a redundant `broadcast_to` call inserted. For example, this runs fine:
```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

x = relay.var('x', shape=[10], dtype='float32')
y = relay.var('y', shape=[10], dtype='float32')
val = relay.const(5, dtype='float32')

# Redundant broadcast: x already has y's shape.
x2 = relay.broadcast_to(x, relay.shape_of(y))
out = x2 + val

mod = tvm.IRModule.from_expr(out)
with relay.build_config(opt_level=3):
    lib = relay.build(mod, target="llvm")
gmod = graph_executor.GraphModule(lib["default"](tvm.cpu()))

x_np = np.random.normal(size=[10]).astype('float32')
y_np = np.random.normal(size=[10]).astype('float32')
gmod.set_input('x', x_np)
gmod.set_input('y', y_np)
gmod.run()
print(gmod.get_output(0))
```
Do we expect that to trigger the issue, or is it more nuanced?