Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2019/11/06 08:35:15 UTC
[GitHub] [incubator-tvm] FinnWeng opened a new issue #4265: [RELAY][Bug] output type assignment does not work for tf.range() in TVM
URL: https://github.com/apache/incubator-tvm/issues/4265
This issue occurs when converting TensorFlow code that uses `tf.range()`.
My environment is:
development: Python 3.6, TensorFlow 1.14
conversion to TVM: the tvmai/demo-gpu container
The code that triggers the issue is:
```python
my_tensor = tf.reshape(tf.range(1, 256 + 1, 1, dtype=tf.float32), [1, 256])
```
The error log is:
```
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /usr/tvm/build/libtvm.so(TVMFuncCall+0x61) [0x7f8c084d70f1]
[bt] (7) /usr/tvm/build/libtvm.so(+0xb1d64b) [0x7f8c083e264b]
[bt] (6) /usr/tvm/build/libtvm.so(tvm::relay::ModuleNode::FromExpr(tvm::relay::Expr const&, tvm::Map<tvm::relay::GlobalVar, tvm::relay::Function, void, void> const&, tvm::Map<tvm::relay::GlobalTypeVar, tvm::relay::TypeData, void, void> const&)+0x17b) [0x7f8c083e236b]
[bt] (5) /usr/tvm/build/libtvm.so(tvm::relay::ModuleNode::Add(tvm::relay::GlobalVar const&, tvm::relay::Function const&, bool)+0x344) [0x7f8c083df2d4]
[bt] (4) /usr/tvm/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::relay::Module const&, tvm::relay::GlobalVar const&)+0x1fd) [0x7f8c082c7ced]
[bt] (3) /usr/tvm/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::relay::Expr)+0x55) [0x7f8c082c6e65]
[bt] (2) /usr/tvm/build/libtvm.so(tvm::relay::TypeSolver::Solve()+0x4e1) [0x7f8c08306781]
[bt] (1) /usr/tvm/build/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), void tvm::runtime::TypedPackedFunc<bool (tvm::Array<tvm::relay::Type, void> const&, int, tvm::Attrs const&, tvm::relay::TypeReporter const&)>::AssignTypedLambda<bool (*)(tvm::Array<tvm::relay::Type, void> const&, int, tvm::Attrs const&, tvm::relay::TypeReporter const&)>(bool (*)(tvm::Array<tvm::relay::Type, void> const&, int, tvm::Attrs const&, tvm::relay::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0xd4) [0x7f8c0809dc74]
[bt] (0) /usr/tvm/build/libtvm.so(tvm::relay::BroadcastRel(tvm::Array<tvm::relay::Type, void> const&, int, tvm::Attrs const&, tvm::relay::TypeReporter const&)+0xb7c) [0x7f8c080d0efc]
File "/usr/tvm/src/relay/ir/error.cc", line 133
TVMError:
Error(s) have occurred. The program has been annotated with them:
In `main`:
v0.0.4
fn () {
%0 = arange(1f, 257f, 1f, start=meta[relay.Constant][0], stop=meta[relay.Constant][1], step=meta[relay.Constant][2], dtype="int32") unable to unify: `int32` and `float32`; ;
%1 = reshape(%0, newshape=[1, 256]);
multiply(%1, meta[relay.Constant][3]) an internal invariant was violated while typechecking your program [08:25:28] /usr/tvm/src/relay/op/type_relations.cc:121: Check failed: t0->dtype == t1->dtype (int32 vs. float32) :
;
}
// meta data omitted. you can use show_meta_data=True to include meta data
```
I worked around the issue as follows:
```python
my_tensor = tf.cast(tf.reshape(tf.range(1, 256 + 1, 1), [1, 256]), tf.float32)
```
The code above converts fine, so I suspect the issue is in how the `dtype` argument of `tf.range()` is handled during conversion.
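For reference, the two snippets are meant to compute the same tensor; only where the float conversion happens differs. A minimal sketch of that equivalence, using NumPy as a stand-in for the TensorFlow ops (the real code runs through TensorFlow and the TVM Relay frontend):

```python
import numpy as np

# Original form: the dtype is set up front on the range op, which is
# what the Relay converter mis-handles (it still infers int32).
direct = np.arange(1, 256 + 1, 1, dtype=np.float32).reshape(1, 256)

# Workaround: build the range with the default integer dtype, then
# cast to float32 afterwards.
workaround = np.arange(1, 256 + 1, 1).reshape(1, 256).astype(np.float32)

# Both paths yield the same float32 tensor of shape (1, 256).
assert direct.dtype == workaround.dtype == np.float32
assert direct.shape == (1, 256)
assert np.array_equal(direct, workaround)
```

This suggests the workaround does not change the computed values, only the point at which the dtype is fixed.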
Thanks!