Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/03/15 10:07:35 UTC

[GitHub] [incubator-tvm] cchung100m opened a new pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

URL: https://github.com/apache/incubator-tvm/pull/5073
 
 
   Hi @tqchen @vinx13 @zhiics
   
   Following issue #4568, this PR adds the NonZero operator and a test case for the ONNX frontend. I would appreciate it if you could help review/manage this PR, thanks.
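   
   For context, ONNX NonZero returns the indices of the non-zero elements as an `(ndim, num_nonzero)` tensor, which is the same layout `np.nonzero` produces once its per-dimension index arrays are stacked. A quick illustration, mirroring the input used in the new test case:
   
   ```python
   import numpy as np
   
   x = np.array([[1, 0], [1, 1]], dtype=np.int64)
   # np.nonzero returns one index array per dimension; stacking them
   # yields the (ndim, num_nonzero) layout that ONNX NonZero specifies.
   print(np.array(np.nonzero(x)))
   # [[0 1 1]
   #  [0 0 1]]
   ```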


[GitHub] [incubator-tvm] cchung100m commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r393634206
 
 

 ##########
 File path: tests/python/frontend/onnx/test_forward.py
 ##########
 @@ -30,6 +30,30 @@
 import scipy
 
 
+def get_tvm_output_with_vm(graph_def, input_data, target, ctx,
+                           opset=None, mode=None):
+    """ Generic function to execute and get tvm output with vm executor"""
+    if isinstance(input_data, list):
+        input_names = {}
+        shape_dict = {}
+        dtype_dict = {}
+        for i, _ in enumerate(input_data):
+            input_names[i] = graph_def.graph.input[i].name
+            shape_dict[input_names[i]] = input_data[i].shape
+            dtype_dict[input_names[i]] = input_data[i].dtype
+    else:
+        input_names = graph_def.graph.input[0].name
+        shape_dict = {input_names: input_data.shape}
+        dtype_dict = {input_names: input_data.dtype}
+
+    mod, params = relay.frontend.from_onnx(graph_def, shape_dict, opset=opset)
+
+    ex = relay.create_executor(mode, mod=mod, ctx=tvm.cpu(), target=target)
+    indata = tvm.nd.array(input_data)
+    result = ex.evaluate()(indata)
+    return result.asnumpy().transpose()
 
 Review comment:
   The output shape of argwhere differs from that of ONNX NonZero. I moved the transpose() to [here](https://github.com/apache/incubator-tvm/pull/5073/files#diff-03149f7671cff8be6734838f7707a24dR1455).


[GitHub] [incubator-tvm] masahi commented on issue #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#issuecomment-601120936
 
 
   You don't need to test on CUDA. Just do not use `ctx_list()`.
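   
   For example, a minimal sketch (reusing `get_tvm_output_with_vm` and the tensors built in `verify_nonzero` from this PR) that checks the result on the CPU target only:
   
   ```python
   # Sketch: pin the NonZero test to the CPU backend instead of
   # iterating over every enabled device with ctx_list().
   target, ctx = 'llvm', tvm.cpu(0)
   tvm_out = get_tvm_output_with_vm(model, indata, target, ctx, opset=9)
   tvm.testing.assert_allclose(onnx_out, tvm_out, rtol=1e-05, atol=1e-05)
   ```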


[GitHub] [incubator-tvm] tqchen commented on issue #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#issuecomment-599731754
 
 
   @kazum @masahi can you please follow up?


[GitHub] [incubator-tvm] jwfromm commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r393310531
 
 

 ##########
 File path: tests/python/frontend/onnx/test_forward.py
 ##########
 @@ -30,7 +30,8 @@
 import scipy
 
 
-def get_tvm_output(graph_def, input_data, target, ctx, output_shape=None, output_dtype='float32', opset=None):
+def get_tvm_output(graph_def, input_data, target, ctx, output_shape=None, output_dtype='float32',
+                   opset=None, mode=None):
 
 Review comment:
   Just curious, why does `mode` need to be added? Does `NonZero` only work in the VM executor? If so, maybe we can just use the VM exclusively for testing; I'm not sure there's a need to support both.


[GitHub] [incubator-tvm] cchung100m commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r393634952
 
 

 ##########
 File path: tests/python/frontend/onnx/test_forward.py
 ##########
 @@ -30,6 +30,30 @@
 import scipy
 
 
+def get_tvm_output_with_vm(graph_def, input_data, target, ctx,
+                           opset=None, mode=None):
+    """ Generic function to execute and get tvm output with vm executor"""
+    if isinstance(input_data, list):
+        input_names = {}
+        shape_dict = {}
+        dtype_dict = {}
+        for i, _ in enumerate(input_data):
+            input_names[i] = graph_def.graph.input[i].name
+            shape_dict[input_names[i]] = input_data[i].shape
+            dtype_dict[input_names[i]] = input_data[i].dtype
+    else:
+        input_names = graph_def.graph.input[0].name
+        shape_dict = {input_names: input_data.shape}
+        dtype_dict = {input_names: input_data.dtype}
+
+    mod, params = relay.frontend.from_onnx(graph_def, shape_dict, opset=opset)
+
+    ex = relay.create_executor(mode, mod=mod, ctx=tvm.cpu(), target=target)
 
 Review comment:
   Thanks for the review; I've updated it.


[GitHub] [incubator-tvm] cchung100m commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r393634681
 
 

 ##########
 File path: tests/python/frontend/onnx/test_forward.py
 ##########
 @@ -30,6 +30,30 @@
 import scipy
 
 
+def get_tvm_output_with_vm(graph_def, input_data, target, ctx,
+                           opset=None, mode=None):
+    """ Generic function to execute and get tvm output with vm executor"""
+    if isinstance(input_data, list):
 
 Review comment:
   Thanks for the suggestions; I've updated the part you mentioned.


[GitHub] [incubator-tvm] kazum commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r394392382
 
 

 ##########
 File path: tests/python/frontend/onnx/test_forward.py
 ##########
 @@ -30,22 +30,54 @@
 import scipy
 
 
-def get_tvm_output(graph_def, input_data, target, ctx, output_shape=None, output_dtype='float32', opset=None):
-    """ Generic function to execute and get tvm output"""
-    target = 'llvm'
+def get_input_data_shape_dict(graph_def, input_data):
     if isinstance(input_data, list):
         input_names = {}
         shape_dict = {}
-        dtype_dict = {}
         for i, _ in enumerate(input_data):
             input_names[i] = graph_def.graph.input[i].name
             shape_dict[input_names[i]] = input_data[i].shape
-            dtype_dict[input_names[i]] = input_data[i].dtype
     else:
         input_names = graph_def.graph.input[0].name
         shape_dict = {input_names: input_data.shape}
+
+    return input_names, shape_dict
+
+
+def get_input_data_dtype_dict(graph_def, input_data):
+    if isinstance(input_data, list):
+        input_names = {}
+        dtype_dict = {}
+        for i, _ in enumerate(input_data):
+            input_names[i] = graph_def.graph.input[i].name
+            dtype_dict[input_names[i]] = input_data[i].dtype
+    else:
+        input_names = graph_def.graph.input[0].name
         dtype_dict = {input_names: input_data.dtype}
 
+    return input_names, dtype_dict
+
+
+def get_tvm_output_with_vm(graph_def, input_data, target, ctx,
+                           opset=None):
+    """ Generic function to execute and get tvm output with vm executor"""
+
+    _, shape_dict = get_input_data_shape_dict(graph_def, input_data)
+
+    mod, params = relay.frontend.from_onnx(graph_def, shape_dict, opset=opset)
+
+    ex = relay.create_executor('vm', mod=mod, ctx=ctx, target=target)
+    indata = tvm.nd.array(input_data)
+    result = ex.evaluate()(indata)
+    return result.asnumpy()
+
+
+def get_tvm_output(graph_def, input_data, target, ctx, output_shape=None, output_dtype='float32', opset=None):
 
 Review comment:
   Okay, I'm fine with keeping both for now :)


[GitHub] [incubator-tvm] cchung100m commented on issue #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#issuecomment-601119011
 
 
   Hi @kazum 
   
   It seems that I need to add a CUDA schedule for `argwhere` in this PR. I would appreciate it if you could guide me through it, many thanks. :)
   
    ```
    =================================== FAILURES ===================================
    _________________________________ test_nonzero _________________________________
    
        def test_nonzero():
    
            def verify_nonzero(indata, outdata, dtype):
                node = helper.make_node('NonZero',
                                        inputs=['X'],
                                        outputs=['Y'],)
    
                graph = helper.make_graph([node],
                                          "nonzero_test",
                                          inputs=[helper.make_tensor_value_info("X", TensorProto.INT64, list(indata.shape))],
                                          outputs=[helper.make_tensor_value_info("Y", TensorProto.INT64, list(outdata.shape))])
    
                model = helper.make_model(graph, producer_name='nonzero_test')
    
                onnx_out = get_onnxruntime_output(model, indata, dtype)
    
                for target, ctx in ctx_list():
                    tvm_out = get_tvm_output_with_vm(model, indata, target, ctx, opset=9)
                    tvm.testing.assert_allclose(onnx_out, tvm_out, rtol=1e-05, atol=1e-05)
    
            input_data = np.array([[1, 0], [1, 1]], dtype=np.int64)
            result = np.array((np.nonzero(input_data)))  # expected output [[0, 1, 1], [0, 0, 1]]
    >       verify_nonzero(input_data, result, dtype=np.int64)
    
    tests/python/frontend/onnx/test_forward.py:2251:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    tests/python/frontend/onnx/test_forward.py:2246: in verify_nonzero
        tvm_out = get_tvm_output_with_vm(model, indata, target, ctx, opset=9)
    tests/python/frontend/onnx/test_forward.py:54: in get_tvm_output_with_vm
        ex = relay.create_executor('vm', mod=mod, ctx=ctx, target=target)
    python/tvm/relay/build_module.py:414: in create_executor
        return VMExecutor(mod, ctx, target)
    python/tvm/relay/backend/vm.py:247: in __init__
        self.executable = compile(mod, target)
    python/tvm/relay/backend/vm.py:68: in compile
        compiler.lower(mod, target, target_host)
    python/tvm/relay/backend/vm.py:134: in lower
        self._lower(mod, target, target_host)
    tvm/_ffi/_cython/./packed_func.pxi:308: in tvm._ffi._cy3.core.PackedFuncBase.__call__
        ???
    tvm/_ffi/_cython/./packed_func.pxi:243: in tvm._ffi._cy3.core.FuncCall
        ???
    tvm/_ffi/_cython/./packed_func.pxi:232: in tvm._ffi._cy3.core.FuncCall3
        ???
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    >   ???
    E   tvm._ffi.base.TVMError: Traceback (most recent call last):
    E     [bt] (8) /workspace/build/libtvm.so(tvm::relay::vm::VMFunctionCompiler::VisitExpr_(tvm::relay::CallNode const*)+0x3d5) [0x7f8cb0e7ef55]
    E     [bt] (7) /workspace/build/libtvm.so(tvm::relay::OpMatch<void>::operator()(tvm::relay::Call const&)+0xef) [0x7f8cb0e7c27f]
    E     [bt] (6) /workspace/build/libtvm.so(tvm::relay::vm::VMFunctionCompiler::VisitExpr_(tvm::relay::CallNode const*)::{lambda(tvm::Array<tvm::RelayExpr, void> const&, tvm::Attrs const&, tvm::Array<tvm::Type, void> const&)#1}::operator()(tvm::Array<tvm::RelayExpr, void> const&, tvm::Attrs const&, tvm::Array<tvm::Type, void> const&) const+0x13a) [0x7f8cb0e7e12a]
    E     [bt] (5) /workspace/build/libtvm.so(tvm::relay::vm::VMFunctionCompiler::EmitInvokeTVMOp(tvm::relay::Function const&, tvm::RelayExpr const&, tvm::RelayExpr const&)+0x8f3) [0x7f8cb0e7d633]
    E     [bt] (4) /workspace/build/libtvm.so(tvm::relay::CompileEngineImpl::Lower(tvm::relay::CCacheKey const&)+0x20) [0x7f8cb0e4ef20]
    E     [bt] (3) /workspace/build/libtvm.so(tvm::relay::CompileEngineImpl::LowerInternal(tvm::relay::CCacheKey const&)+0x61e) [0x7f8cb0e4e17e]
    E     [bt] (2) /workspace/build/libtvm.so(tvm::relay::ScheduleGetter::Create(tvm::relay::Function const&)+0xe8f) [0x7f8cb0e4d3ef]
    E     [bt] (1) /workspace/build/libtvm.so(tvm::relay::OpImplementation::Schedule(tvm::Attrs const&, tvm::Array<tvm::te::Tensor, void> const&, tvm::Target const&)+0xb1) [0x7f8cb0e8f231]
    E     [bt] (0) /workspace/build/libtvm.so(+0xc5e14b) [0x7f8cb0f8714b]
    E     File "tvm/_ffi/_cython/./packed_func.pxi", line 54, in tvm._ffi._cy3.core.tvm_callback
    E     File "/workspace/python/tvm/relay/op/strategy/generic.py", line 738, in schedule_argwhere
    E       return topi.generic.schedule_argwhere(outs)
    E     File "/workspace/topi/python/topi/generic/search.py", line 35, in schedule_argwhere
    E       return _default_schedule(outs, False)
    E     File "/workspace/topi/python/topi/generic/vision.py", line 29, in _default_schedule
    E       raise RuntimeError("schedule not registered for '%s'" % target)
    E   RuntimeError: schedule not registered for 'cuda'
    
    tvm/_ffi/_cython/./base.pxi:159: TVMError
    ```


[GitHub] [incubator-tvm] masahi commented on issue #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#issuecomment-601291532
 
 
   Thanks @cchung100m @kazum 


[GitHub] [incubator-tvm] kazum commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r393506803
 
 

 ##########
 File path: tests/python/frontend/onnx/test_forward.py
 ##########
 @@ -30,6 +30,30 @@
 import scipy
 
 
+def get_tvm_output_with_vm(graph_def, input_data, target, ctx,
+                           opset=None, mode=None):
+    """ Generic function to execute and get tvm output with vm executor"""
+    if isinstance(input_data, list):
+        input_names = {}
+        shape_dict = {}
+        dtype_dict = {}
+        for i, _ in enumerate(input_data):
+            input_names[i] = graph_def.graph.input[i].name
+            shape_dict[input_names[i]] = input_data[i].shape
+            dtype_dict[input_names[i]] = input_data[i].dtype
+    else:
+        input_names = graph_def.graph.input[0].name
+        shape_dict = {input_names: input_data.shape}
+        dtype_dict = {input_names: input_data.dtype}
+
+    mod, params = relay.frontend.from_onnx(graph_def, shape_dict, opset=opset)
+
+    ex = relay.create_executor(mode, mod=mod, ctx=tvm.cpu(), target=target)
 
 Review comment:
   `tvm.cpu()` should be `ctx`?
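   
   That is, a sketch of the suggested one-line change (the later revision of the diff applies it):
   
   ```python
   ex = relay.create_executor(mode, mod=mod, ctx=ctx, target=target)
   ```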


[GitHub] [incubator-tvm] kazum commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r394006951
 
 

 ##########
 File path: tests/python/frontend/onnx/test_forward.py
 ##########
 @@ -30,22 +30,54 @@
 import scipy
 
 
-def get_tvm_output(graph_def, input_data, target, ctx, output_shape=None, output_dtype='float32', opset=None):
-    """ Generic function to execute and get tvm output"""
-    target = 'llvm'
+def get_input_data_shape_dict(graph_def, input_data):
     if isinstance(input_data, list):
         input_names = {}
         shape_dict = {}
-        dtype_dict = {}
         for i, _ in enumerate(input_data):
             input_names[i] = graph_def.graph.input[i].name
             shape_dict[input_names[i]] = input_data[i].shape
-            dtype_dict[input_names[i]] = input_data[i].dtype
     else:
         input_names = graph_def.graph.input[0].name
         shape_dict = {input_names: input_data.shape}
+
+    return input_names, shape_dict
+
+
+def get_input_data_dtype_dict(graph_def, input_data):
+    if isinstance(input_data, list):
+        input_names = {}
+        dtype_dict = {}
+        for i, _ in enumerate(input_data):
+            input_names[i] = graph_def.graph.input[i].name
+            dtype_dict[input_names[i]] = input_data[i].dtype
+    else:
+        input_names = graph_def.graph.input[0].name
         dtype_dict = {input_names: input_data.dtype}
 
+    return input_names, dtype_dict
+
+
+def get_tvm_output_with_vm(graph_def, input_data, target, ctx,
+                           opset=None):
+    """ Generic function to execute and get tvm output with vm executor"""
+
+    _, shape_dict = get_input_data_shape_dict(graph_def, input_data)
+
+    mod, params = relay.frontend.from_onnx(graph_def, shape_dict, opset=opset)
+
+    ex = relay.create_executor('vm', mod=mod, ctx=ctx, target=target)
+    indata = tvm.nd.array(input_data)
+    result = ex.evaluate()(indata)
+    return result.asnumpy()
+
+
+def get_tvm_output(graph_def, input_data, target, ctx, output_shape=None, output_dtype='float32', opset=None):
 
 Review comment:
   How about using vm for all the tests?


[GitHub] [incubator-tvm] masahi commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r393390063
 
 

 ##########
 File path: tests/python/frontend/onnx/test_forward.py
 ##########
 @@ -30,6 +30,30 @@
 import scipy
 
 
+def get_tvm_output_with_vm(graph_def, input_data, target, ctx,
+                           opset=None, mode=None):
+    """ Generic function to execute and get tvm output with vm executor"""
+    if isinstance(input_data, list):
+        input_names = {}
+        shape_dict = {}
+        dtype_dict = {}
+        for i, _ in enumerate(input_data):
+            input_names[i] = graph_def.graph.input[i].name
+            shape_dict[input_names[i]] = input_data[i].shape
+            dtype_dict[input_names[i]] = input_data[i].dtype
+    else:
+        input_names = graph_def.graph.input[0].name
+        shape_dict = {input_names: input_data.shape}
+        dtype_dict = {input_names: input_data.dtype}
+
+    mod, params = relay.frontend.from_onnx(graph_def, shape_dict, opset=opset)
+
+    ex = relay.create_executor(mode, mod=mod, ctx=tvm.cpu(), target=target)
+    indata = tvm.nd.array(input_data)
+    result = ex.evaluate()(indata)
+    return result.asnumpy().transpose()
 
 Review comment:
   Why transpose?


[GitHub] [incubator-tvm] cchung100m commented on issue #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#issuecomment-601209817
 
 
   Hi @masahi @kazum @jwfromm 
   
   Thanks for all the reviews and suggestions. :)


[GitHub] [incubator-tvm] cchung100m commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r393633686
 
 

 ##########
 File path: tests/python/frontend/onnx/test_forward.py
 ##########
 @@ -30,6 +30,30 @@
 import scipy
 
 
+def get_tvm_output_with_vm(graph_def, input_data, target, ctx,
+                           opset=None, mode=None):
 
 Review comment:
   Thanks for the suggestions; I've updated it.


[GitHub] [incubator-tvm] masahi commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r393390492
 
 

 ##########
 File path: tests/python/frontend/onnx/test_forward.py
 ##########
 @@ -30,6 +30,30 @@
 import scipy
 
 
+def get_tvm_output_with_vm(graph_def, input_data, target, ctx,
+                           opset=None, mode=None):
+    """ Generic function to execute and get tvm output with vm executor"""
+    if isinstance(input_data, list):
 
 Review comment:
   This if/else is duplicated from get_tvm_output; move it to a common function.


[GitHub] [incubator-tvm] kazum commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r394006892
 
 

 ##########
 File path: tests/python/frontend/onnx/test_forward.py
 ##########
 @@ -30,22 +30,54 @@
 import scipy
 
 
-def get_tvm_output(graph_def, input_data, target, ctx, output_shape=None, output_dtype='float32', opset=None):
-    """ Generic function to execute and get tvm output"""
-    target = 'llvm'
+def get_input_data_shape_dict(graph_def, input_data):
     if isinstance(input_data, list):
         input_names = {}
         shape_dict = {}
-        dtype_dict = {}
         for i, _ in enumerate(input_data):
             input_names[i] = graph_def.graph.input[i].name
             shape_dict[input_names[i]] = input_data[i].shape
-            dtype_dict[input_names[i]] = input_data[i].dtype
     else:
         input_names = graph_def.graph.input[0].name
         shape_dict = {input_names: input_data.shape}
+
+    return input_names, shape_dict
+
+
+def get_input_data_dtype_dict(graph_def, input_data):
 
 Review comment:
   This function is not used anywhere.


[GitHub] [incubator-tvm] kazum commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r393505835
 
 

 ##########
 File path: python/tvm/relay/frontend/onnx.py
 ##########
 @@ -1443,6 +1443,16 @@ def _impl_v11(cls, inputs, attr, params):
         return _op.image.resize(inputs[0], out_size, layout, method, coord_trans)
 
 
+class NonZero(OnnxOpConverter):
+    """Operator converter for NonZero
+    """
+    @classmethod
+    def _impl_v9(cls, inputs, attr, params):
+        if len(inputs) > 1:
+            raise ValueError("Expect 1 input only")
+        return AttrCvt(op_name='argwhere')(inputs, attr, params)
 
 Review comment:
   The output shape of argwhere differs from that of ONNX NonZero. A transpose is needed here.
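   
   A minimal sketch of such a fix, assuming the converter keeps `AttrCvt` and uses Relay's `_op.transpose` (the exact wording of the merged code lives in the PR itself):
   
   ```python
   class NonZero(OnnxOpConverter):
       """Operator converter for NonZero."""
       @classmethod
       def _impl_v9(cls, inputs, attr, params):
           if len(inputs) > 1:
               raise ValueError("Expect 1 input only")
           # argwhere yields indices with shape (num_nonzero, ndim);
           # ONNX NonZero expects (ndim, num_nonzero), hence the transpose.
           output = AttrCvt(op_name='argwhere')(inputs, attr, params)
           return _op.transpose(output, axes=(1, 0))
   ```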


[GitHub] [incubator-tvm] cchung100m commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r393635064
 
 

 ##########
 File path: python/tvm/relay/frontend/onnx.py
 ##########
 @@ -1443,6 +1443,16 @@ def _impl_v11(cls, inputs, attr, params):
         return _op.image.resize(inputs[0], out_size, layout, method, coord_trans)
 
 
+class NonZero(OnnxOpConverter):
+    """Operator converter for NonZero
+    """
+    @classmethod
+    def _impl_v9(cls, inputs, attr, params):
+        if len(inputs) > 1:
+            raise ValueError("Expect 1 input only")
+        return AttrCvt(op_name='argwhere')(inputs, attr, params)
 
 Review comment:
   Thanks for the review; I've updated it.


[GitHub] [incubator-tvm] cchung100m commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r394358064
 
 

 ##########
 File path: tests/python/frontend/onnx/test_forward.py
 ##########
 @@ -30,22 +30,54 @@
 import scipy
 
 
-def get_tvm_output(graph_def, input_data, target, ctx, output_shape=None, output_dtype='float32', opset=None):
-    """ Generic function to execute and get tvm output"""
-    target = 'llvm'
+def get_input_data_shape_dict(graph_def, input_data):
     if isinstance(input_data, list):
         input_names = {}
         shape_dict = {}
-        dtype_dict = {}
         for i, _ in enumerate(input_data):
             input_names[i] = graph_def.graph.input[i].name
             shape_dict[input_names[i]] = input_data[i].shape
-            dtype_dict[input_names[i]] = input_data[i].dtype
     else:
         input_names = graph_def.graph.input[0].name
         shape_dict = {input_names: input_data.shape}
+
+    return input_names, shape_dict
+
+
+def get_input_data_dtype_dict(graph_def, input_data):
 
 Review comment:
   Hi @kazum 
   Thanks for the review; I'll remove it here.


[GitHub] [incubator-tvm] cchung100m commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r393361403
 
 

 ##########
 File path: tests/python/frontend/onnx/test_forward.py
 ##########
 @@ -30,7 +30,8 @@
 import scipy
 
 
-def get_tvm_output(graph_def, input_data, target, ctx, output_shape=None, output_dtype='float32', opset=None):
+def get_tvm_output(graph_def, input_data, target, ctx, output_shape=None, output_dtype='float32',
+                   opset=None, mode=None):
 
 Review comment:
   Yes, that makes sense. I will put it into a separate function instead of adding it to `get_tvm_output`.


[GitHub] [incubator-tvm] masahi merged pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073
 
 
   


[GitHub] [incubator-tvm] masahi commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r393389794
 
 

 ##########
 File path: tests/python/frontend/onnx/test_forward.py
 ##########
 @@ -30,6 +30,30 @@
 import scipy
 
 
+def get_tvm_output_with_vm(graph_def, input_data, target, ctx,
+                           opset=None, mode=None):
 
 Review comment:
   Remove `mode` and always use the VM executor.


[GitHub] [incubator-tvm] cchung100m commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r394366255
 
 

 ##########
 File path: tests/python/frontend/onnx/test_forward.py
 ##########
 @@ -30,22 +30,54 @@
 import scipy
 
 
-def get_tvm_output(graph_def, input_data, target, ctx, output_shape=None, output_dtype='float32', opset=None):
-    """ Generic function to execute and get tvm output"""
-    target = 'llvm'
+def get_input_data_shape_dict(graph_def, input_data):
     if isinstance(input_data, list):
         input_names = {}
         shape_dict = {}
-        dtype_dict = {}
         for i, _ in enumerate(input_data):
             input_names[i] = graph_def.graph.input[i].name
             shape_dict[input_names[i]] = input_data[i].shape
-            dtype_dict[input_names[i]] = input_data[i].dtype
     else:
         input_names = graph_def.graph.input[0].name
         shape_dict = {input_names: input_data.shape}
+
+    return input_names, shape_dict
+
+
+def get_input_data_dtype_dict(graph_def, input_data):
+    if isinstance(input_data, list):
+        input_names = {}
+        dtype_dict = {}
+        for i, _ in enumerate(input_data):
+            input_names[i] = graph_def.graph.input[i].name
+            dtype_dict[input_names[i]] = input_data[i].dtype
+    else:
+        input_names = graph_def.graph.input[0].name
         dtype_dict = {input_names: input_data.dtype}
 
+    return input_names, dtype_dict
+
+
+def get_tvm_output_with_vm(graph_def, input_data, target, ctx,
+                           opset=None):
+    """ Generic function to execute and get tvm output with vm executor"""
+
+    _, shape_dict = get_input_data_shape_dict(graph_def, input_data)
+
+    mod, params = relay.frontend.from_onnx(graph_def, shape_dict, opset=opset)
+
+    ex = relay.create_executor('vm', mod=mod, ctx=ctx, target=target)
+    indata = tvm.nd.array(input_data)
+    result = ex.evaluate()(indata)
+    return result.asnumpy()
+
+
+def get_tvm_output(graph_def, input_data, target, ctx, output_shape=None, output_dtype='float32', opset=None):
 
 Review comment:
   Hi @kazum 
   I'd like to keep both `relay.create_executor` and `relay.build` in this PR.
   
   I cannot replace `relay.build` with `relay.create_executor` directly, because doing so produces many errors like the ones below:
   
    ```
      File "/tvm/tests/python/frontend/onnx/test_forward.py", line 2282, in <module>
        test_flatten()
      File "/tvm/tests/python/frontend/onnx/test_forward.py", line 374, in test_flatten
        tvm_out = get_tvm_output(model, x, target, ctx, ref_shape, 'float32')
      File "/tvm/tests/python/frontend/onnx/test_forward.py", line 70, in get_tvm_output
        result = ex.evaluate()(indata)
      File "/tvm/python/tvm/relay/backend/vm.py", line 256, in _vm_wrapper
        return self.vm.run(*args)
      File "/tvm/python/tvm/runtime/vm.py", line 366, in run
        return self.invoke("main", *args, **kwargs)
      File "/tvm/python/tvm/runtime/vm.py", line 348, in invoke
        return self._invoke(func_name)
      File "/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 213, in __call__
        raise get_last_ffi_error()
    
    tvm._ffi.base.TVMError: Traceback (most recent call last):
      [bt] (7) 8   ???                                 0x00007fff54230930 0x0 + 140734604970288
      [bt] (6) 7   _ctypes.cpython-37m-darwin.so       0x00000001104dc2bf ffi_call_unix64 + 79
      [bt] (5) 6   libtvm.dylib                        0x0000000125071f78 TVMFuncCall + 72
      [bt] (4) 5   libtvm.dylib                        0x00000001250a4e3f std::__1::__function::__func<tvm::runtime::vm::VirtualMachine::GetFunction(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::$_0, std::__1::allocator<tvm::runtime::vm::VirtualMachine::GetFunction(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::$_0>, void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>::operator()(tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&) + 735
      [bt] (3) 4   libtvm.dylib                        0x00000001250a199a tvm::runtime::vm::VirtualMachine::RunLoop() + 7610
      [bt] (2) 3   libtvm.dylib                        0x00000001250a310f tvm::runtime::vm::VirtualMachine::InvokePacked(long long, tvm::runtime::PackedFunc const&, long long, long long, std::__1::vector<tvm::runtime::ObjectRef, std::__1::allocator<tvm::runtime::ObjectRef> > const&) + 1039
      [bt] (1) 2   libtvm.dylib                        0x000000012507b396 std::__1::__function::__func<tvm::runtime::WrapPackedFunc(int (*)(TVMValue*, int*, int, TVMValue*, int*), tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::$_0, std::__1::allocator<tvm::runtime::WrapPackedFunc(int (*)(TVMValue*, int*, int, TVMValue*, int*), tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::$_0>, void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>::operator()(tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&) + 310
      [bt] (0) 1   libtvm.dylib                        0x0000000124666af9 dmlc::LogMessageFatal::~LogMessageFatal() + 57
      File "/tvm/src/runtime/library_module.cc", line 89
    TVMError: Check failed: ret == 0 (-1 vs. 0) : Assert fail: (((tvm_struct_get(arg0, 0, 5) == (uint8)2) && (tvm_struct_get(arg0, 0, 6) == (uint8)32)) && (tvm_struct_get(arg0, 0, 7) == (uint16)1)), arg0.dtype is expected to be float32
    ```
    ```
      File "/tvm/tests/python/frontend/onnx/test_forward.py", line 2128, in verify_lstm
        output_dtype=['float32', 'float32', 'float32'])
      File "/tvm/tests/python/frontend/onnx/test_forward.py", line 69, in get_tvm_output
        indata = tvm.nd.array(input_data)
      File "/tvm/python/tvm/runtime/ndarray.py", line 487, in array
        return empty(arr.shape, arr.dtype, ctx).copyfrom(arr)
      File "/tvm/python/tvm/runtime/ndarray.py", line 270, in empty
        dtype = DataType(dtype)
      File "/tvm/python/tvm/_ffi/runtime_ctypes.py", line 101, in __init__
        raise ValueError("Do not know how to handle type %s" % type_str)
    
    ValueError: Do not know how to handle type object
    ```
   
   Maybe we can open another PR to address the above issues and replace `relay.build` with `relay.create_executor`?
