Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/03/09 22:22:18 UTC

[GitHub] [incubator-tvm] trevor-m opened a new pull request #5018: Look for TupleType instead of TupleNode in LayoutRewriter

trevor-m opened a new pull request #5018: Look for TupleType instead of TupleNode in LayoutRewriter
URL: https://github.com/apache/incubator-tvm/pull/5018
 
 
   When getting the input shapes from the args, LayoutRewriter checked for tuples only by testing whether the node is an instance of TupleNode. With that check, the following Relay program hits an error during LayoutRewriter, because the arg is a relay.Var whose type is a TupleTypeNode rather than the expected TensorTypeNode.
   
   ```
   def @main(%input_0: Tensor[(1, 2, 6, 6), float32], %input_1: Tensor[(1, 3, 6, 6), float32]) -> Tensor[(1, 5, 6, 6), float32] {
     %0 = (%input_0, %input_1);
     %1 = fn (%x: (Tensor[(1, 2, 6, 6), float32], Tensor[(1, 3, 6, 6), float32])) -> Tensor[(1, 5, 6, 6), float32] {
       concatenate(%x, axis=1) /* ty=Tensor[(1, 5, 6, 6), float32] */
     };
     %1(%0) /* ty=Tensor[(1, 5, 6, 6), float32] */
   }
   ```
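   The gist of the fix can be sketched with a minimal stdlib-only model. All class and function names below are hypothetical stand-ins, not TVM's actual API; the point is only to show why dispatching on the expression kind (TupleNode) misses a Var whose *type* is a tuple type, while dispatching on the checked type (TupleType) covers both cases:

```python
# Hypothetical stand-ins for Relay's type and expression nodes (not TVM code).
class TensorType:
    def __init__(self, shape):
        self.shape = shape

class TupleType:
    def __init__(self, fields):
        self.fields = fields

class Expr:
    def __init__(self, checked_type):
        self.checked_type = checked_type

class TupleNode(Expr):
    # A literal tuple expression; its checked type is a TupleType.
    def __init__(self, fields):
        super().__init__(TupleType([f.checked_type for f in fields]))
        self.fields = fields

class Var(Expr):
    # A variable; it can carry *any* type, including a TupleType.
    pass

def input_shapes_buggy(arg):
    # Old behaviour: dispatch on the expression kind. A tuple-typed Var
    # falls through to the tensor branch and fails (TupleType has no shape).
    if isinstance(arg, TupleNode):
        return [f.checked_type.shape for f in arg.fields]
    return [arg.checked_type.shape]

def input_shapes_fixed(arg):
    # Fixed behaviour: dispatch on the checked type, which is a TupleType
    # both for a literal tuple and for a tuple-typed Var.
    if isinstance(arg.checked_type, TupleType):
        return [t.shape for t in arg.checked_type.fields]
    return [arg.checked_type.shape]
```

   With this model, a Var typed as `(Tensor[(1, 2, 6, 6)], Tensor[(1, 3, 6, 6)])` raises in the buggy version but yields both shapes in the fixed one.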
   
   @zhiics @anijain2305 Could you please review?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-tvm] trevor-m closed pull request #5018: Look for TupleType instead of TupleNode in LayoutRewriter

Posted by GitBox <gi...@apache.org>.
trevor-m closed pull request #5018: Look for TupleType instead of TupleNode in LayoutRewriter
URL: https://github.com/apache/incubator-tvm/pull/5018
 
 
   


[GitHub] [incubator-tvm] trevor-m edited a comment on issue #5018: Look for TupleType instead of TupleNode in LayoutRewriter

Posted by GitBox <gi...@apache.org>.
trevor-m edited a comment on issue #5018: Look for TupleType instead of TupleNode in LayoutRewriter
URL: https://github.com/apache/incubator-tvm/pull/5018#issuecomment-597257018
 
 
   > IMHO, I don't think we need to test this functionality in the partitioning pass, as it is a unit test of layout itself. Instead, you can directly construct the "partitioned" graph, run the AlterOpLayout pass, and assert whatever IR it should emit.
   
   Hi Zhi, I don't think it is possible to create the partitioned graph without actually using partitioning. I recreated the subgraph below:
   ```
   input_type = relay.TensorType((1, 5, 6, 6), "float32")
   x = relay.var("x", relay.TupleType([input_type, input_type]))
   out = relay.concatenate(x, axis=1)
   func = relay.Function([x], out)
   ```
   However, the `relay.concatenate` Python API does not accept a Var node as the input:
   ```
     File "test_pass_alter_op_layout.py", line 1053, in test_concatenate
       out = relay.concatenate(x, axis=1)
   
     File "/data/neo-ai-tvm/python/tvm/relay/op/tensor.py", line 888, in concatenate
       data = list(data)
   
   TypeError: 'Var' object is not iterable
   ```
   
   Edit: I was able to bypass this by calling `tvm.relay.op._make.concatenate`. Updating PR...
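   For context, the wrapper-versus-constructor distinction can be illustrated with a stdlib-only sketch (hypothetical names, not TVM code): the high-level wrapper normalizes its input with `list(data)`, which requires an iterable, while a low-level constructor can take the expression as-is.

```python
class Var:
    """Stand-in for relay.Var; deliberately not iterable."""
    pass

def concatenate_wrapper(data, axis):
    # Mirrors the wrapper's normalization step: it assumes `data` is a
    # sequence of expressions, so a bare Var raises TypeError here.
    data = list(data)
    return ("concatenate", data, axis)

def concatenate_make(data, axis):
    # A low-level constructor that accepts the expression directly,
    # whatever its kind, analogous to bypassing the wrapper.
    return ("concatenate", data, axis)
```

   Passing `Var()` to `concatenate_wrapper` raises `TypeError: 'Var' object is not iterable`, while `concatenate_make` accepts it, which is why bypassing the wrapper unblocks the test.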



[GitHub] [incubator-tvm] anijain2305 commented on issue #5018: Look for TupleType instead of TupleNode in LayoutRewriter

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on issue #5018: Look for TupleType instead of TupleNode in LayoutRewriter
URL: https://github.com/apache/incubator-tvm/pull/5018#issuecomment-599183145
 
 
   https://github.com/apache/incubator-tvm/pull/5066
   
   The above PR has the same line changes. It might be a good idea to try that one first.


[GitHub] [incubator-tvm] tqchen commented on issue #5018: Look for TupleType instead of TupleNode in LayoutRewriter

Posted by GitBox <gi...@apache.org>.
tqchen commented on issue #5018: Look for TupleType instead of TupleNode in LayoutRewriter
URL: https://github.com/apache/incubator-tvm/pull/5018#issuecomment-606245916
 
 
   @trevor-m @anijain2305 @zhiics please follow up



[GitHub] [incubator-tvm] trevor-m commented on issue #5018: Look for TupleType instead of TupleNode in LayoutRewriter

Posted by GitBox <gi...@apache.org>.
trevor-m commented on issue #5018: Look for TupleType instead of TupleNode in LayoutRewriter
URL: https://github.com/apache/incubator-tvm/pull/5018#issuecomment-606272480
 
 
   Closing since https://github.com/apache/incubator-tvm/pull/5066 also solves this issue.


[GitHub] [incubator-tvm] anijain2305 commented on issue #5018: Look for TupleType instead of TupleNode in LayoutRewriter

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on issue #5018: Look for TupleType instead of TupleNode in LayoutRewriter
URL: https://github.com/apache/incubator-tvm/pull/5018#issuecomment-596832368
 
 
   Trevor, can you please add a test case? The implementation looks good, but it needs a test.
