Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/12/13 09:28:06 UTC

[GitHub] [tvm] chunit-quic opened a new pull request #9723: [Frontend] Add Span filling for frontends to Relay

chunit-quic opened a new pull request #9723:
URL: https://github.com/apache/tvm/pull/9723


   * Add a common span-filling feature for TF 1/2, TFLite, and PyTorch.
   * Add test cases for span filling in each frontend.
   * Expose Tuple and TupleGetItem to the Python end.
   
   Hi community,
   
   Here is a pull request about span filling for frontends -> Relay
   (frontends: TF 1 and 2, TFLite and PyTorch).
   This feature could help users track the conversion more precisely.
   I would like to describe how it works and the current status below. :D
   
   1. One to many conversion
   First, although there is already a set_span function for TensorFlow 1 and 2, some spans still end up empty from time to time.
   One of the reasons is that an op conversion might be a one-to-many conversion.
   In this situation the intermediate ops end up with empty span strings.
   Take the [pack](https://github.com/apache/tvm/blob/2b35cfd6ddb73afecd3f550f33881e1fdc7c3267/python/tvm/relay/frontend/tensorflow_ops.py#L1535) conversion for example: several expand_dims ops might be added before the concatenate.
   By running an ExprMutator over the expression each time an op is converted, we can get a fully span-tagged Relay IR.
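   The mutator idea can be sketched with a small, self-contained model. This is plain Python, not TVM's actual API: the Node class and this set_span helper are simplified illustrations of the set_span added to python/tvm/relay/frontend/common.py in this patch, and the suffix string is illustrative only.

```python
# Minimal sketch (NOT TVM code): model a frontend's one-to-many op
# conversion and tag every produced node with a span derived from the
# source op's name, suffixing all but the first node visited.
class Node:
    """Toy stand-in for a Relay expression node."""
    def __init__(self, op, inputs=(), span=None):
        self.op = op
        self.inputs = list(inputs)
        self.span = span

def set_span(expr, node_name, suffix="_PART_"):
    """Post-order walk that fills empty spans, mimicking an ExprMutator."""
    counter = 0
    def visit(node):
        nonlocal counter
        for child in node.inputs:
            visit(child)
        if node.span is None:  # only fill ops that are still untagged
            node.span = node_name if counter == 0 else f"{node_name}{suffix}{counter - 1}"
            counter += 1
    visit(expr)
    return expr

# "stack" converts to several expand_dims feeding one concatenate.
a = Node("expand_dims")
b = Node("expand_dims")
out = set_span(Node("concatenate", [a, b]), "stack")
print(a.span, b.span, out.span)  # stack stack_PART_0 stack_PART_1
```

   Whether the base name should land on the first or the last node of such a group is a design choice; the sketch puts it on the first node visited.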
   
   Here is a simple example.
   Before this change, the test case in this patch (tensorflow/test_forward.py:320) is converted to the following Relay expressions:
   
   > def @main(%input: Tensor[(?, ?, 3, 1), float32]) {
   >   %113 = shape_of(%input, dtype="int32") /* Shape */;
   >   %114 = strided_slice(%113, begin=[0], end=[1], strides=[1], axes=None);
   >   %115 = squeeze(%114) /* strided_slice */;
   >   %116 = expand_dims(%115, axis=0);
   >   %117 = expand_dims(3, axis=0);
   >   %118 = expand_dims(3, axis=0);
   >   %119 = (%116, %117, %118);
   >   %120 = concatenate(%119) /* stack */;
   >   dyn.reshape(%input, %120, newshape=[]) /* output */
   > }
   
   With this patch we can obtain the following format.
   
   > def @main(%input: Tensor[(?, ?, 3, 1), float32]) {
   >   %10 = shape_of(%input, dtype="int32") /* Shape */;
   >   %11 = strided_slice(%10, begin=[0], end=[1], strides=[1], axes=None) /* strided_slice */;
   >   %12 = squeeze(%11) /* strided_slice_DERIVED_0 */;
   >   %13 = expand_dims(%12, axis=0) /* stack */;
   >   %14 = expand_dims(3, axis=0) /* stack_DERIVED_0 */;
   >   %15 = expand_dims(3, axis=0) /* stack_DERIVED_1 */;
   >   %16 = (%13, %14, %15) /* stack_DERIVED_2 */;
   >   %17 = concatenate(%16) /* stack_DERIVED_3 */;
   >   dyn.reshape(%input, %17, newshape=[]) /* output */
   > }
   
   Note that it slightly differs from the original format. The "stack" span is tagged on the expand_dims at the very beginning, which mirrors the conversion steps of the _pack op.
   
   2. span naming for each frontend
  2.1. TensorFlow (1 and 2) naming: kept the same as before.
  2.2. TFLite naming: a combination of the op's position index and its output tensor name(s).
      The op position index is enough to map back to the TFLite model,
      and the output tensor name should be helpful when users search for the op in Netron.
  2.3. PyTorch naming: because PyTorch provides two kinds of graph, jit._trace.TopLevelTracedModule and _C.Graph, two key attributes, kind() and scopeName(), are recorded in the span.
      scopeName() is used to map a Relay expression back to its original PyTorch module in jit._trace.TopLevelTracedModule and _C.Graph.
      Combined with kind(), the position of a node can be located precisely in _C.Graph.
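   As a rough illustration of the naming schemes in 2.2 and 2.3: the helper names and separator characters below are assumptions for illustration only, not the exact strings this patch emits.

```python
# Hypothetical helpers sketching the span-naming schemes described above.
# Function names and separators are illustrative assumptions.

def tflite_span_name(op_index, output_tensor_names):
    # Combine the op's position index in the TFLite model with its
    # output tensor name(s), so the op is easy to find in Netron.
    return f"{op_index}:{';'.join(output_tensor_names)}"

def torch_span_name(kind, scope_name):
    # Record both kind() and scopeName() of a _C.Graph node so a Relay
    # expression can be mapped back to the original PyTorch module.
    return f"{scope_name}|{kind}" if scope_name else kind

print(tflite_span_name(3, ["conv2d_out"]))              # 3:conv2d_out
print(torch_span_name("aten::linear", "__module.fc1"))  # __module.fc1|aten::linear
```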
   
   3. Limitation
  A few models in test_functional_models.py are still under investigation.
   
   4. Trivial
   Several test cases are attached; they should serve as quick verification during review.
   
   Thank you for reading. Any comment is appreciated. :)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] chunit-quic commented on pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
chunit-quic commented on pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#issuecomment-992451852


   Hi @lixiaoquan,
   
   It's good advice and easy to implement. :D
   If tagging the original name on the final expression is a better philosophy for users, I can change to this way after collecting more comments from other reviewers.





[GitHub] [tvm] rebel-shshin edited a comment on pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
rebel-shshin edited a comment on pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#issuecomment-1022781774


   Hi @chunit-quic, thanks for the fast reaction.
   
   My case is a bit different from yours. My network has a single LSTM layer with some dense layers before and after the LSTM. The model returns three outputs, which are the output, cell state, and hidden state of the LSTM. However, the converted Relay graph has two LSTM layers: the first stands for the output and the second for the cell and hidden state. This is really weird because all of them should be generated from the same LSTM layer.





[GitHub] [tvm] chunit-quic edited a comment on pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
chunit-quic edited a comment on pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#issuecomment-1001838817


   Hi @FrozenGene, 
   Thanks for your positive feedback! :D
   
   > However, we have one unresolved comment,
   
   Currently I would prefer to merge this one first if possible.
   The unresolved part you mean should be the discussion with @jroesch and @mbs-octoml in src/printer/relay_text_printer.cc.
   Since I haven't gotten further replies from them, to the best of my knowledge it might be good to have one more PR to reach (or modify) what we want, like the formal printer, or collecting and exposing spans to the Python end.





[GitHub] [tvm] FrozenGene commented on pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#issuecomment-1001869069


   Thanks @chunit-quic @mbs-octoml @jroesch. It is merged now. @chunit-quic Let us make new PRs to resolve the remaining discussion.





[GitHub] [tvm] chunit-quic commented on a change in pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
chunit-quic commented on a change in pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#discussion_r769717976



##########
File path: src/printer/relay_text_printer.cc
##########
@@ -389,12 +389,21 @@ Doc RelayTextPrinter::VisitExpr_(const TupleNode* op) {
   if (op->fields.size() == 1) {
     doc << ",";
   }
-  return doc << ")";
+  doc << ")";
+  if (op->span.defined()) {

Review comment:
       Hi @jroesch
   
   It seems that there is a more suitable printer for this part.
   Would you mind sharing that feature with me? Sorry, I just followed the existing format without checking it more carefully.
   Once we have a conclusion about which one should be used this time, I can modify my current code. :D







[GitHub] [tvm] chunit-quic commented on pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
chunit-quic commented on pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#issuecomment-997486966


   Hi @mbs-octoml, @jroesch,
   
   Just a gentle ping. Should I modify anything more, or would it be fine to be merged? Thanks :)









[GitHub] [tvm] FrozenGene commented on pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#issuecomment-1001836832


   As @mbs-octoml approved, ideally we could merge it now. However, we have one unresolved comment, @chunit-quic do you want to file a new pr or do you want to resolve it in this pr?







[GitHub] [tvm] lixiaoquan commented on pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
lixiaoquan commented on pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#issuecomment-992346440


   I just feel that in the `one to many` case, the original tensor/layer name should be attached to the last node in the group, because that's where the computational result (and tensor type) matches between the original graph and the Relay IR. And since we may need to find the last node of a group frequently, keeping the original layer's name there can make it easier.
   
   For example, an LSTM can become thousands of nodes in Relay IR. If its name is not attached to the last node, we'll have to search layer_DERIVED_xxxx many times to find the end.
   
   ```text
   def @main(%input: Tensor[(?, ?, 3, 1), float32]) {
   %10 = shape_of(%input, dtype="int32") /* Shape */;
   %11 = strided_slice(%10, begin=[0], end=[1], strides=[1], axes=None) /* strided_slice_PART_0 */;
   %12 = squeeze(%11) /* strided_slice */;
   %13 = expand_dims(%12, axis=0) /* stack_PART_0 */;
   %14 = expand_dims(3, axis=0) /* stack_PART_1 */;
   %15 = expand_dims(3, axis=0) /* stack_PART_2 */;
   %16 = (%13, %14, %15) /* stack_PART_3 */;
   %17 = concatenate(%16) /* stack */;
   dyn.reshape(%input, %17, newshape=[]) /* output */
   }
   ```
   







[GitHub] [tvm] chunit-quic commented on a change in pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
chunit-quic commented on a change in pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#discussion_r769682892



##########
File path: python/tvm/relay/frontend/common.py
##########
@@ -954,3 +955,53 @@ def try_resolve_var_to_const(x, graph_params):
         return _op.const(value, dtype)
 
     return x
+
+
+def set_span(sym, node_name):
+    """Set up the sapn for while converting OP"""
+
+    class SpanFiller(ExprMutator):
+        """SpanFiller"""
+
+        def __init__(self, node_name, surfix_str="_PART_"):

Review comment:
       Fixed









[GitHub] [tvm] rebel-shshin edited a comment on pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
rebel-shshin edited a comment on pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#issuecomment-1020952881


   @chunit-quic @FrozenGene @mbs-octoml 
   Hi guys, I think I found a bug when converting a PyTorch LSTM layer to a Relay graph.
   The LSTM layer appears twice in the converted Relay graph even though I have only one LSTM layer.
   This weird behavior goes away if I comment out the following two lines in python/tvm/relay/frontend/pytorch.py:
   `span_str, empty_counter = self._get_torch_span(op_node, empty_counter)`
   `relay_out = set_span(relay_out, span_str)`
   
   Do you have any idea?





[GitHub] [tvm] chunit-quic commented on pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
chunit-quic commented on pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#issuecomment-1021984727


   Thank you @FrozenGene. For your reference, here is the reversion PR. :)
   https://github.com/apache/tvm/pull/10072





[GitHub] [tvm] chunit-quic commented on a change in pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
chunit-quic commented on a change in pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#discussion_r769709917



##########
File path: src/printer/relay_text_printer.cc
##########
@@ -389,12 +389,21 @@ Doc RelayTextPrinter::VisitExpr_(const TupleNode* op) {
   if (op->fields.size() == 1) {
     doc << ",";
   }
-  return doc << ")";
+  doc << ")";
+  if (op->span.defined()) {

Review comment:
       Hi @mbs-octoml 
   Thank you for reviewing and giving this PR positive feedback! :D
   
   About the comments in this part, please kindly correct me if I am wrong.
   
   > nit:: can you leave a warning comment that we'll probably need to protect this by some kind of 'include_spans' or 'verbose' printer flag
   
   If I didn't misunderstand, it would be nice to have a flag to control span printing (after all, it can sometimes be super long).
   In the latest commit I added a bool flag, with true as the default value, to control it (src/printer/text_printer.h:116).
   Although a "/* */" is still left by this implementation... is it basically what we want?
   
   > would you be up for doing the span suffix printing in the VisitExpr override?
   
   About this part, do you mean adding span printing to those printers that currently lack it, like ScalarLiteral, PrintExpr, and VisitExpr_(const IfNode* op)?
   If so, I did try to browse them at first, yet it seems hard to track and verify them comprehensively at a glance, since sometimes we even need to check their C++ global registrations and the Python end.
   If it's fine with you, perhaps one more PR for this enhancement would be better? I could also think about which test cases would help me check those kinds of printers. :D







[GitHub] [tvm] chunit-quic commented on pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
chunit-quic commented on pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#issuecomment-1001873030


   Thanks @FrozenGene. Sure thing! :D
   Once we get more precise information from @mbs-octoml and @jroesch, we can start this work.









[GitHub] [tvm] FrozenGene commented on pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#issuecomment-1021936021


   > Hi, @rebel-shshin I'm surprised by the lstm case. After checking the file test_lstm.py in the pytorch folder, the output IR graph does change with set_span mutator.
   > 
   > Hi, @FrozenGene @mbs-octoml Since it might take me a while to investigate it, perhaps it would be better to revert it? I am preparing a PR to revert the whole change of this PR. I will submit the change if reversion is OK to you guys. Thank you!
   
   OK. Please submit the revert PR.











[GitHub] [tvm] FrozenGene commented on pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#issuecomment-1001836001


   I like this PR, which could make us support heterogeneous execution in a more fine-grained way. Thanks @chunit-quic





[GitHub] [tvm] mbs-octoml commented on pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
mbs-octoml commented on pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#issuecomment-1004285286


   Hi @chunit-quic, sorry the line went dead over the break, glad to see this merged. I don't have any outstanding requests for you.






[GitHub] [tvm] jroesch commented on a change in pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
jroesch commented on a change in pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#discussion_r769291698



##########
File path: src/printer/relay_text_printer.cc
##########
@@ -389,12 +389,21 @@ Doc RelayTextPrinter::VisitExpr_(const TupleNode* op) {
   if (op->fields.size() == 1) {
     doc << ",";
   }
-  return doc << ")";
+  doc << ")";
+  if (op->span.defined()) {

Review comment:
       Ideally this should be replaced by the different printer I described to you the other day, IIRC. I think we should bring back the old printing mode via a Renderer implementation.







[GitHub] [tvm] rebel-shshin commented on pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
rebel-shshin commented on pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#issuecomment-1020952881


   @chunit-quic @FrozenGene @mbs-octoml 
   Hi guys, I think I found a bug when converting a PyTorch LSTM layer to a Relay graph.
   The LSTM layer appears twice in the converted Relay graph even though I have only one LSTM layer.
   This weird behavior goes away if I comment out the following two lines in python/tvm/relay/frontend/pytorch.py:
   `span_str, empty_counter = self._get_torch_span(op_node, empty_counter)`
   `relay_out = set_span(relay_out, span_str)`
   
   Do you have any idea??





[GitHub] [tvm] chunit-quic commented on pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
chunit-quic commented on pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#issuecomment-1022059117


   Hi @rebel-shshin, 
   Pardon me for forgetting to confirm the details with you.
   The following snapshot is what I get from the single-LSTM-layer result of test_lstm.py.
   ![image](https://user-images.githubusercontent.com/82346784/151144335-54235e2b-3849-42c5-b8aa-1c2c709421ae.png)
   On the left-hand side, with span filling, four more expressions pop out:
   two more tuples (%36, %37) appear in the while loop, and a Nil (%44) followed by (%45 = %39(0, %44, %states, %input)), which is the LSTM body.
   
   Is it the same as what you get? If not, would you mind sharing your model and conversion file with me? Thank you. :)





[GitHub] [tvm] chunit-quic commented on pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
chunit-quic commented on pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#issuecomment-1021880566


   Hi @rebel-shshin,
   I'm surprised by the LSTM case.
   After checking the file test_lstm.py in the pytorch folder, the output IR graph does change with the set_span mutator.
   
   Hi @FrozenGene @mbs-octoml,
   Since it might take me a while to investigate, perhaps it would be better to revert? I am preparing a PR that reverts this PR's whole change, and I will submit it if reverting is OK with you.
   Thank you!
   
   





[GitHub] [tvm] chunit-quic commented on pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
chunit-quic commented on pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#issuecomment-1001563660


   Hi @mbs-octoml, @jroesch,
   Hope you guys had a great vacation! And just a gentle ping again. :D
   
   Hi @masahi, 
   I saw that you merged a lot of PRs recently in the closed-PR section. Would you mind taking a look at this PR as well, please? :)





[GitHub] [tvm] rebel-shshin edited a comment on pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
rebel-shshin edited a comment on pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#issuecomment-1022781774


   Hi @chunit-quic, thanks for the fast reaction.
   
   My case is a little different from yours. My network has a single-layer LSTM with some dense layers before and after it. The model returns three outputs: the output and the hidden states of the LSTM. However, the converted Relay graph has two LSTM layers; the first stands for the output and the second for the hidden states. This is really weird because both outputs should be generated from the same LSTM layer.





[GitHub] [tvm] chunit-quic commented on pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
chunit-quic commented on pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#issuecomment-1023281804


   Hi @rebel-shshin,
   
   Thank you for the detailed information. I found some clues, but I still need some time to pinpoint the problem precisely. I will build a test case similar to yours and give it a try. :D





[GitHub] [tvm] mbs-octoml commented on a change in pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
mbs-octoml commented on a change in pull request #9723:
URL: https://github.com/apache/tvm/pull/9723#discussion_r768889160



##########
File path: python/tvm/relay/frontend/common.py
##########
@@ -954,3 +955,53 @@ def try_resolve_var_to_const(x, graph_params):
         return _op.const(value, dtype)
 
     return x
+
+
+def set_span(sym, node_name):
+    """Set up the sapn for while converting OP"""
+
+    class SpanFiller(ExprMutator):
+        """SpanFiller"""
+
+        def __init__(self, node_name, surfix_str="_PART_"):

Review comment:
       nit: suffix_str
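[Editor's note] For readers following along, the span-filling idea in the hunk above can be sketched with a toy expression tree. This is an illustrative stand-in only: the real set_span in common.py walks Relay expressions with ExprMutator, while the `Expr` class and `fill` method below are hypothetical, kept minimal so the sketch is self-contained.

```python
class Expr:
    """Toy stand-in for a Relay expression node."""

    def __init__(self, op, args=()):
        self.op = op            # operator name, e.g. "expand_dims"
        self.args = list(args)  # child expressions
        self.span = None        # span tag, filled in below


class SpanFiller:
    """Tag the root with the source op's name and any untagged
    intermediate expressions with a numbered _PART_ suffix."""

    def __init__(self, node_name, suffix_str="_PART_"):
        self.node_name = node_name
        self.suffix_str = suffix_str
        self.counter = 0

    def fill(self, expr, is_root=True):
        # Post-order traversal: children first, so intermediates produced
        # by a one-to-many conversion (e.g. the expand_dims ops emitted
        # under a Pack conversion) also receive a span.
        for arg in expr.args:
            if arg.span is None:
                self.fill(arg, is_root=False)
        if expr.span is None:
            if is_root:
                expr.span = self.node_name
            else:
                expr.span = f"{self.node_name}{self.suffix_str}{self.counter}"
                self.counter += 1
        return expr


def set_span(sym, node_name):
    """Mimics the frontend helper: tag every untagged sub-expression."""
    return SpanFiller(node_name).fill(sym)


# Example mirroring the Pack conversion: two expand_dims feed a concatenate.
e1, e2 = Expr("expand_dims"), Expr("expand_dims")
pack = set_span(Expr("concatenate", [e1, e2]), "Pack")
```

After the call, `pack.span` is `"Pack"` while the intermediate nodes carry `"Pack_PART_0"` and `"Pack_PART_1"`, so no expression is left with an empty span.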

##########
File path: src/printer/relay_text_printer.cc
##########
@@ -389,12 +389,21 @@ Doc RelayTextPrinter::VisitExpr_(const TupleNode* op) {
   if (op->fields.size() == 1) {
     doc << ",";
   }
-  return doc << ")";
+  doc << ")";
+  if (op->span.defined()) {

Review comment:
       nit: can you leave a warning comment that we'll probably need to guard this with some kind of 'include_spans' or 'verbose' printer flag? But at this stage I'm happy to have them all!

##########
File path: src/printer/relay_text_printer.cc
##########
@@ -389,12 +389,21 @@ Doc RelayTextPrinter::VisitExpr_(const TupleNode* op) {
   if (op->fields.size() == 1) {
     doc << ",";
   }
-  return doc << ")";
+  doc << ")";
+  if (op->span.defined()) {

Review comment:
       would you be up for doing the span suffix printing in the VisitExpr override? I think we might as well do it for all the node types uniformly.
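[Editor's note] The uniform approach suggested here can be sketched in Python pseudocode. This is a hypothetical illustration, not the actual C++ RelayTextPrinter: the span suffix is appended once in a shared `visit` wrapper instead of being repeated in each per-node printing method.

```python
class Expr:
    """Toy stand-in for a Relay expression node."""

    def __init__(self, op, args=(), span=None):
        self.op = op
        self.args = list(args)
        self.span = span


class TextPrinter:
    def visit(self, expr):
        doc = self.visit_expr(expr)       # node-specific printing
        if expr.span is not None:         # uniform span suffix for all nodes
            doc += f" /* {expr.span} */"
        return doc

    def visit_expr(self, expr):
        # Only tuples and leaf ops are handled in this minimal sketch.
        if expr.op == "tuple":
            return "(" + ", ".join(self.visit(a) for a in expr.args) + ")"
        return expr.op


text = TextPrinter().visit(
    Expr("tuple",
         [Expr("squeeze", span="strided_slice"), Expr("shape_of")],
         span="Pack")
)
# text == "(squeeze /* strided_slice */, shape_of) /* Pack */"
```

Because the suffix lives in `visit`, every node type gets the span annotation for free, which is the uniformity the review comment asks for.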







[GitHub] [tvm] FrozenGene merged pull request #9723: [Frontend] Add Span filling for frontends to Relay

Posted by GitBox <gi...@apache.org>.
FrozenGene merged pull request #9723:
URL: https://github.com/apache/tvm/pull/9723


   

