Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/03/19 11:04:32 UTC
[GitHub] [incubator-tvm] cchung100m commented on issue #5073:
[Relay][Frontend][ONNX] operator support NonZero
URL: https://github.com/apache/incubator-tvm/pull/5073#issuecomment-601119011
Hi @kazum,
It seems that I need to add a CUDA schedule for `argwhere` in this PR. I would appreciate it if you could guide me through that, many thanks. :)
```
=================================== FAILURES ===================================
_________________________________ test_nonzero _________________________________
def test_nonzero():
    def verify_nonzero(indata, outdata, dtype):
        node = helper.make_node('NonZero',
                                inputs=['X'],
                                outputs=['Y'],)
        graph = helper.make_graph([node],
                                  "nonzero_test",
                                  inputs=[helper.make_tensor_value_info("X", TensorProto.INT64, list(indata.shape))],
                                  outputs=[helper.make_tensor_value_info("Y", TensorProto.INT64, list(outdata.shape))])
        model = helper.make_model(graph, producer_name='nonzero_test')
        onnx_out = get_onnxruntime_output(model, indata, dtype)
        for target, ctx in ctx_list():
            tvm_out = get_tvm_output_with_vm(model, indata, target, ctx, opset=9)
            tvm.testing.assert_allclose(onnx_out, tvm_out, rtol=1e-05, atol=1e-05)
    input_data = np.array([[1, 0], [1, 1]], dtype=np.int64)
    result = np.array((np.nonzero(input_data)))  # expected output [[0, 1, 1], [0, 0, 1]]
>   verify_nonzero(input_data, result, dtype=np.int64)
tests/python/frontend/onnx/test_forward.py:2251:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/python/frontend/onnx/test_forward.py:2246: in verify_nonzero
    tvm_out = get_tvm_output_with_vm(model, indata, target, ctx, opset=9)
tests/python/frontend/onnx/test_forward.py:54: in get_tvm_output_with_vm
    ex = relay.create_executor('vm', mod=mod, ctx=ctx, target=target)
python/tvm/relay/build_module.py:414: in create_executor
    return VMExecutor(mod, ctx, target)
python/tvm/relay/backend/vm.py:247: in __init__
    self.executable = compile(mod, target)
python/tvm/relay/backend/vm.py:68: in compile
    compiler.lower(mod, target, target_host)
python/tvm/relay/backend/vm.py:134: in lower
    self._lower(mod, target, target_host)
tvm/_ffi/_cython/./packed_func.pxi:308: in tvm._ffi._cy3.core.PackedFuncBase.__call__
    ???
tvm/_ffi/_cython/./packed_func.pxi:243: in tvm._ffi._cy3.core.FuncCall
    ???
tvm/_ffi/_cython/./packed_func.pxi:232: in tvm._ffi._cy3.core.FuncCall3
    ???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ???
E tvm._ffi.base.TVMError: Traceback (most recent call last):
E [bt] (8) /workspace/build/libtvm.so(tvm::relay::vm::VMFunctionCompiler::VisitExpr_(tvm::relay::CallNode const*)+0x3d5) [0x7f8cb0e7ef55]
E [bt] (7) /workspace/build/libtvm.so(tvm::relay::OpMatch<void>::operator()(tvm::relay::Call const&)+0xef) [0x7f8cb0e7c27f]
E [bt] (6) /workspace/build/libtvm.so(tvm::relay::vm::VMFunctionCompiler::VisitExpr_(tvm::relay::CallNode const*)::{lambda(tvm::Array<tvm::RelayExpr, void> const&, tvm::Attrs const&, tvm::Array<tvm::Type, void> const&)#1}::operator()(tvm::Array<tvm::RelayExpr, void> const&, tvm::Attrs const&, tvm::Array<tvm::Type, void> const&) const+0x13a) [0x7f8cb0e7e12a]
E [bt] (5) /workspace/build/libtvm.so(tvm::relay::vm::VMFunctionCompiler::EmitInvokeTVMOp(tvm::relay::Function const&, tvm::RelayExpr const&, tvm::RelayExpr const&)+0x8f3) [0x7f8cb0e7d633]
E [bt] (4) /workspace/build/libtvm.so(tvm::relay::CompileEngineImpl::Lower(tvm::relay::CCacheKey const&)+0x20) [0x7f8cb0e4ef20]
E [bt] (3) /workspace/build/libtvm.so(tvm::relay::CompileEngineImpl::LowerInternal(tvm::relay::CCacheKey const&)+0x61e) [0x7f8cb0e4e17e]
E [bt] (2) /workspace/build/libtvm.so(tvm::relay::ScheduleGetter::Create(tvm::relay::Function const&)+0xe8f) [0x7f8cb0e4d3ef]
E [bt] (1) /workspace/build/libtvm.so(tvm::relay::OpImplementation::Schedule(tvm::Attrs const&, tvm::Array<tvm::te::Tensor, void> const&, tvm::Target const&)+0xb1) [0x7f8cb0e8f231]
E [bt] (0) /workspace/build/libtvm.so(+0xc5e14b) [0x7f8cb0f8714b]
E File "tvm/_ffi/_cython/./packed_func.pxi", line 54, in tvm._ffi._cy3.core.tvm_callback
E File "/workspace/python/tvm/relay/op/strategy/generic.py", line 738, in schedule_argwhere
E return topi.generic.schedule_argwhere(outs)
E File "/workspace/topi/python/topi/generic/search.py", line 35, in schedule_argwhere
E return _default_schedule(outs, False)
E File "/workspace/topi/python/topi/generic/vision.py", line 29, in _default_schedule
E raise RuntimeError("schedule not registered for '%s'" % target)
E RuntimeError: schedule not registered for 'cuda'
tvm/_ffi/_cython/./base.pxi:159: TVMError
```
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
With regards,
Apache Git Services