Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/11/03 10:16:58 UTC

[GitHub] [tvm] shengxinhu opened a new pull request #9438: [Frontend][ONNX] Support ONNX Scan operator

shengxinhu opened a new pull request #9438:
URL: https://github.com/apache/tvm/pull/9438


   Support the ONNX Scan operator.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] mbrookhart commented on pull request #9438: [Frontend][ONNX] Support ONNX Scan operator

mbrookhart commented on pull request #9438:
URL: https://github.com/apache/tvm/pull/9438#issuecomment-984817101


   @shengxinhu My apologies, I missed this one, I should have reviewed it three weeks ago, it seems. Any chance you can rebase to resolve the conflicts?





[GitHub] [tvm] shengxinhu commented on a change in pull request #9438: [Frontend][ONNX] Support ONNX Scan operator

shengxinhu commented on a change in pull request #9438:
URL: https://github.com/apache/tvm/pull/9438#discussion_r745221748



##########
File path: python/tvm/relay/frontend/onnx.py
##########
@@ -3125,6 +3125,223 @@ def _impl_v1(cls, inputs, attr, params):
         return ret
 
 
+class Scan(OnnxOpConverter):
+    """Operator converter for Scan"""
+
+    @classmethod
+    def _impl_v8(cls, inputs, attr, params):
+        new_inputs = inputs[1:]
+        batch_num = infer_shape(inputs[1])[0]
+        out = []
+        for i in range(batch_num):
+            v9_inputs = [
+                _op.take(new_inputs[j], _expr.const(i), axis=0) for j in range(len(new_inputs))
+            ]
+            results = cls._impl_v9(v9_inputs, attr, params)
+            results = [_op.expand_dims(results[j], axis=0) for j in range(len(results))]
+            if i == 0:
+                out = results
+            else:
+                out = [_op.concatenate([out[j], results[j]], axis=0) for j in range(len(results))]
+
+        out = _expr.TupleWrapper(_expr.Tuple(out), len(out))
+        return out
+
+    @classmethod
+    def _impl_v9(cls, inputs, attr, params):
+        body = attr.get("body")
+        num_scan_inputs = attr.get("num_scan_inputs")
+        num_all_inputs = len(inputs)
+        num_state_inputs = len(body.input) - num_scan_inputs
+        num_state_outputs = num_state_inputs
+        num_all_outputs = len(body.output)
+        num_scan_outputs = num_all_outputs - num_state_outputs
+        scan_input_axes = attr.get("scan_input_axes", [0] * num_scan_inputs)
+        scan_input_directions = attr.get("scan_input_directions", [0] * num_scan_inputs)
+        scan_output_axes = attr.get("scan_output_axes", [0] * num_scan_outputs)
+        scan_output_directions = attr.get("scan_output_directions", [0] * num_scan_outputs)
+        # the loop count is the same for all scan inputs, so get it from the first scan input
+        # strided_slice does not support dynamic axes, so assume input shapes are static
+        max_loop_count = infer_shape(inputs[num_state_inputs])[scan_input_axes[0]]
+
+        # Create a copy of the body function to prevent the original
+        # from being modified.
+        body = copy.copy(attr["body"])
+
+        # Loop inputs will be packed as
+        # [iter_count, loop_deps, scan_outputs]
+        def cond_fn(*loop_inputs):
+            i = loop_inputs[0]
+            return _op.less(i, relay.const(max_loop_count, "int32"))
+
+        # Get the current graph proto and create a clone for the subgraph
+        graph_scope = GraphProto.current
+        subgraph_scope = GraphProto(
+            graph_scope._shape, graph_scope._dtype, graph_scope._freeze_params
+        )
+        # Load nodes from outer graph into inner graph.
+        subgraph_scope._nodes = graph_scope._nodes.copy()
+
+        # Create a list of variables for each value updated in the loop.
+        def get_var(name, val, scan=False):
+            checked_type = infer_type(val)
+            if hasattr(checked_type, "type_annotation"):
+                checked_type = checked_type.type_annotation
+            if hasattr(checked_type, "checked_type"):
+                checked_type = checked_type.checked_type
+            shape = get_const_tuple(checked_type.shape)
+            actual_shape = []
+            for dim in shape:
+                if isinstance(dim, int) and dim == 0:
+                    actual_shape.append(_ty.Any())
+                else:
+                    actual_shape.append(dim)
+            if scan:
+                return _expr.var(name, shape=[_ty.Any()] + actual_shape, dtype=checked_type.dtype)
+
+            return _expr.var(name, shape=actual_shape, dtype=checked_type.dtype)
+
+        # Construct variables and initial empty tensors for any scan outputs.
+        # To do this, we'll figure out the output shapes of the body subgraph by importing
+        # it and doing type inference.
+        scan_output_vars = []
+        scan_output_init = []
+        if num_scan_outputs > 0:
+            with subgraph_scope:
+                loop_outputs = subgraph_scope.from_onnx(
+                    body, graph_scope.opset, get_output_expr=True
+                )
+            loop_outputs = _expr.TupleWrapper(loop_outputs, len(body.output))
+
+        for i in range(num_scan_outputs):
+            name, _, _, _ = get_info(body.output[i + num_state_outputs])
+            output_node = infer_type(loop_outputs[i + num_state_outputs])
+            shape = list(get_const_tuple(output_node.checked_type.shape))
+            shape.insert(scan_output_axes[i], max_loop_count)
+            dtype = output_node.checked_type.dtype
+            scan_output_vars.append(_expr.var(name, shape=shape, dtype=dtype))
+            scan_output_init.append(_op.zeros(shape, dtype))
+

Review comment:
       It seems there is a more elegant method (similar to the existing Loop operator implementation):
   scan_output_vars.append(_expr.var(name, shape=[_ty.Any()] * len(shape), dtype=dtype))
   scan_output_init.append(_op.reshape(_expr.const(np.array([]).astype(dtype)), [0] + [1] * (len(shape) - 1)))
   so the following _op.strided_slice could be removed, as the scan_output_init is empty.
   But I did not make it work, could you take a look?
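For context, the Scan semantics this converter implements can be sketched in plain NumPy. This is a hypothetical illustration of the v9 "scan with a sum body" case (one state variable, one scan input, scan axis 0), not the TVM code itself:

```python
import numpy as np

def scan_sum(initial_state, scan_input):
    """NumPy sketch of ONNX Scan with an Add body:
    state = state + x each iteration; each state is also a scan-output slice."""
    state = initial_state
    slices = []
    for x in scan_input:      # iterate along the scan axis (axis 0)
        state = state + x     # the body subgraph: a single Add
        slices.append(state)  # one scan-output slice per iteration
    return state, np.stack(slices)

final, out = scan_sum(np.zeros(2), np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]))
# final == [9., 12.]; out == [[1., 2.], [4., 6.], [9., 12.]]
```

The converter above builds the same computation as a Relay while-loop, with the scan outputs accumulated by concatenation.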







[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #9438: [Frontend][ONNX] Support ONNX Scan operator

AndrewZhaoLuo commented on a change in pull request #9438:
URL: https://github.com/apache/tvm/pull/9438#discussion_r743389017



##########
File path: tests/python/frontend/onnx/test_forward.py
##########
@@ -5774,6 +5774,139 @@ def repeat(N, D):
     )
 
 
+@tvm.testing.parametrize_targets

Review comment:
       Interesting. Well, the tests fail, so maybe you can try running locally with the latest versions of onnx and onnxruntime.
   
   We should pass the official tests to guarantee correctness with the onnx spec.








[GitHub] [tvm] masahi commented on pull request #9438: [Frontend][ONNX] Support ONNX Scan operator

masahi commented on pull request #9438:
URL: https://github.com/apache/tvm/pull/9438#issuecomment-987534002


   The change seems to have a lot of duplication with the `Loop` converter. Can we clean things up?





[GitHub] [tvm] mbrookhart edited a comment on pull request #9438: [Frontend][ONNX] Support ONNX Scan operator

mbrookhart edited a comment on pull request #9438:
URL: https://github.com/apache/tvm/pull/9438#issuecomment-984817101


   @shengxinhu My apologies, I missed this one, I should have reviewed it three weeks ago, it seems. Any chance you can rebase to resolve the conflicts?
   
   Edit: NVM, it was a simple "two prs added lines to the end of the test list" issue, I could fix that in the GUI





[GitHub] [tvm] masahi commented on pull request #9438: [Frontend][ONNX] Support ONNX Scan operator

masahi commented on pull request #9438:
URL: https://github.com/apache/tvm/pull/9438#issuecomment-989438211


   I mean, just looking at the two implementations, it's obvious that you started by copy-pasting the Loop converter (even the comments are identical). Can we refactor the common code into a helper function, or is it difficult to cleanly extract the common bits?
   
   This is not just for the usual code-hygiene reasons. I believe there are some issues in the Loop converter with preserving static shape information. I've never seen a practical ONNX model with a `Loop` op that can be successfully compiled by TVM - I've seen / heard that ONNX models converted from TF ssd mobilenet v1, MaskRCNN, and mlperf DLRM have `Loop` ops, and all of them failed to compile with TVM. So I don't want to see the Loop converter implementation in question duplicated for other ops.
   
   See for example this issue regarding ssd mobilenet v1: https://github.com/apache/tvm/issues/8284#issuecomment-864467897 Note that the original tf ssd mobilenet v1 model compiles and runs fine when imported directly by the TF frontend, so I can only assume that this is an ONNX converter issue.
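The refactor being requested here can be illustrated with a stripped-down sketch: hoist logic that the Loop and Scan converters both duplicate (such as the shape handling inside their `get_var` helpers) into one module-level function that both call. The names and the `"?"` placeholder below are hypothetical stand-ins, not the real relay API:

```python
# Hypothetical sketch of sharing code between the Loop and Scan converters.
# "?" stands in for relay's dynamic dimension (_ty.Any()).

def loop_var_shape(shape):
    """Shared helper: zero-sized dims become dynamic placeholders."""
    return ["?" if isinstance(d, int) and d == 0 else d for d in shape]

def scan_var_shape(shape):
    """Scan reuses the shared helper and prepends a dynamic scan axis."""
    return ["?"] + loop_var_shape(shape)

print(loop_var_shape([0, 3]))  # ['?', 3]
print(scan_var_shape([0, 3]))  # ['?', '?', 3]
```

Either converter then builds its loop variables from the shared helper instead of carrying its own copy, so a shape-inference fix lands in one place.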





[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #9438: [Frontend][ONNX] Support ONNX Scan operator

AndrewZhaoLuo commented on a change in pull request #9438:
URL: https://github.com/apache/tvm/pull/9438#discussion_r742545158



##########
File path: tests/python/frontend/onnx/test_forward.py
##########
@@ -5774,6 +5774,139 @@ def repeat(N, D):
     )
 
 
+@tvm.testing.parametrize_targets

Review comment:
       In unsupported_onnx_tests, can you remove
   
       "test_scan9_sum",
       "test_scan_sum",
   
   and rerun `pytest ... test_onnx_nodes::test_scan_sum` and `pytest ... test_onnx_nodes::test_scan9_sum`?







[GitHub] [tvm] shengxinhu commented on a change in pull request #9438: [Frontend][ONNX] Support ONNX Scan operator

shengxinhu commented on a change in pull request #9438:
URL: https://github.com/apache/tvm/pull/9438#discussion_r742555529



##########
File path: tests/python/frontend/onnx/test_forward.py
##########
@@ -5774,6 +5774,139 @@ def repeat(N, D):
     )
 
 
+@tvm.testing.parametrize_targets

Review comment:
       I have removed them from unsupported_onnx_tests, but could not run test_onnx_nodes[test_scan_sum] and test_onnx_nodes[test_scan9_sum] in the docker ci_gpu image; it said they were not found.







[GitHub] [tvm] masahi commented on pull request #9438: [Frontend][ONNX] Support ONNX Scan operator

masahi commented on pull request #9438:
URL: https://github.com/apache/tvm/pull/9438#issuecomment-989635047


   ok, no problem. 





[GitHub] [tvm] shengxinhu commented on pull request #9438: [Frontend][ONNX] Support ONNX Scan operator

shengxinhu commented on pull request #9438:
URL: https://github.com/apache/tvm/pull/9438#issuecomment-989590115


   Hi @masahi , thanks for your explanations. I did indeed copy some code from the Loop converter implementation. I think only the get_var function could be extracted - do you think that is necessary? If Loop is questionable, Scan is likely to be the same. I also have not seen an ONNX model with the Scan operator; I implemented it just for a task assigned to me.











[GitHub] [tvm] shengxinhu commented on pull request #9438: [Frontend][ONNX] Support ONNX Scan operator

shengxinhu commented on pull request #9438:
URL: https://github.com/apache/tvm/pull/9438#issuecomment-989422905


   Hi @masahi , I didn't get your point about the duplication with the Loop converter, could you point it out specifically?





[GitHub] [tvm] masahi merged pull request #9438: [Frontend][ONNX] Support ONNX Scan operator

masahi merged pull request #9438:
URL: https://github.com/apache/tvm/pull/9438


   

