Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/04/23 16:54:15 UTC

[GitHub] [incubator-tvm] lhutton1 opened a new pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

lhutton1 opened a new pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422


   * This patch means that you can specify additional layouts (e.g. the kernel layout), rather than using the layout chosen by default during conversion; see the usage sketch below.
   * This is particularly useful for external codegen, for example when a 3rd-party library needs to target a specific kernel layout.
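
   A minimal usage sketch (this mirrors the updated docs discussed later in this thread; `mod` is assumed to be a parsed Relay module):

   ```python
   import tvm
   from tvm import relay

   # Map each heavily-layout sensitive op to its desired layouts.
   # For nn.conv2d the expected order is [data_layout, kernel_layout].
   desired_layouts = {'nn.conv2d': ['NCHW', 'OIHW']}

   seq = tvm.transform.Sequential([relay.transform.RemoveUnusedFunctions(),
                                   relay.transform.ConvertLayout(desired_layouts)])
   with relay.transform.PassContext(opt_level=3):
       mod = seq(mod)
   ```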
   





[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
comaniac commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420445637



##########
File path: docs/dev/convert_layout.rst
##########
@@ -116,16 +117,21 @@ These steps happen for each operator in sequence, where ConvertLayout pass keeps
         data_layout = attrs['data_layout']
         kernel_layout = attrs['kernel_layout']
         data, weight = inputs
-        assert desired_layout == 'NCHW', \
-                "Currently only transformation to NCHW layout is supported."
+
+        # Use the first entry in desired layouts which specifies the data layout.
+        # The expected ordering of layouts for this operator is defined by this function.
+        desired_data_layout = str(desired_layouts[0])
+
         if desired_layout == 'NCHW':
             new_attrs = dict(attrs)
-            new_attrs['data_layout'] = desired_layout
+            new_attrs['data_layout'] = desired_data_layout
             new_attrs['kernel_layout'] = 'OIHW'
             # Actual insertion of layout transforms is taken care internally
             # by ConvertLayout pass.
             return relay.nn.conv2d(data, weight, **new_attrs)
-        return None
+        else:
+            assert "Layout %s is not yet supported" % desired_data_layout
+            return None

Review comment:
       I'm fine with either of the proposals I mentioned before, but the one without an if-block might be better considering @anijain2305's comment. In that case, you naturally don't have to return None.
   
   ```
   assert layout == 'NCHW'
   # Do something.
   return relay.nn.conv2d(data, weight, **new_attrs)
   ```







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
comaniac commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420477165



##########
File path: docs/dev/convert_layout.rst
##########
@@ -218,24 +224,49 @@ Second example is for a lightly-layout sensitive operator - batch normalization.
 
 ConvertLayout pass is extremely easy to use. The pass is not a part of default relay.build pipeline. The intended usage is to call it between the framework-to-relay parser and relay.build module call.
 
+In order to specify the layouts to convert to, we create a mapping of heavily-layout sensitive operators to a list of the desired layouts for that operator.
+
 .. code-block:: python
 
     # TFlite framework to Relay parser - Default layout is NHWC
     mod, params = relay.frontend.from_tflite(tflite_model,
                                              shape_dict=shape_dict,
                                              dtype_dict=dtype_dict)
 
+    # We assume our model's heavily-layout sensitive operators only consist of nn.conv2d
+    desired_layouts = {'nn.conv2d': ['NCHW']}

Review comment:
       It's fine to me, though. What I care about is whether we can have the following function signatures, for example:
   
   ```
   def convert_qnn_conv2d(desired_layouts: Tuple[str, str])
   def convert_qnn_pool(desired_layouts: Tuple[str])
   ```
   
    But this is a minor point, so feel free to merge this PR without it.







[GitHub] [incubator-tvm] lhutton1 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
lhutton1 commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420434693



##########
File path: tests/python/relay/test_pass_convert_op_layout.py
##########
@@ -625,3 +685,5 @@ def expected():
     test_qnn_conv_requantize_convert_layout()
     test_qnn_conv_concat_convert_layout()
     test_qnn_conv_add_convert_layout()
+    test_conv_convert_kernel_layout()
+    test_default_keyword()

Review comment:
       I think that's good to check, will add that.







[GitHub] [incubator-tvm] lhutton1 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
lhutton1 commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420688732



##########
File path: python/tvm/relay/op/nn/_nn.py
##########
@@ -141,20 +142,29 @@ def convert_conv2d(attrs, inputs, tinfos, desired_layout):
     from tvm import relay
     data, weight = inputs
     new_attrs = dict(attrs)
-    new_attrs['data_layout'] = desired_layout
-    if desired_layout == 'NCHW':
+    desired_data_layout = str(desired_layouts[0])
+    assert desired_data_layout != "default", "Data layout cannot be default"
+    new_attrs['data_layout'] = desired_data_layout
+
+    if len(desired_layouts) > 1:
+        desired_kernel_layout = str(desired_layouts[1])
+        if desired_kernel_layout != "default":
+            new_attrs['kernel_layout'] = desired_kernel_layout
+            return relay.nn.conv2d(data, weight, **new_attrs)
+
+    # Handle default kernel layouts
+    if desired_data_layout == 'NCHW':
         new_attrs['kernel_layout'] = 'OIHW'
         return relay.nn.conv2d(data, weight, **new_attrs)
-    elif desired_layout == 'NHWC':
+    elif desired_data_layout == 'NHWC':
         # Check for depthwise convolution.
         if is_depthwise_conv2d(data.shape, attrs['data_layout'], weight.shape,
                                attrs['kernel_layout'], attrs['groups']):
             new_attrs['kernel_layout'] = 'HWOI'
         else:
             new_attrs['kernel_layout'] = 'HWIO'
         return relay.nn.conv2d(data, weight, **new_attrs)
-    else:
-        assert "Layout %s is not yet supported." % (desired_layout)
+    assert "Layout %s is not yet supported." % (desired_data_layout)
     return None

Review comment:
       Apologies, I forgot to mention: although the return is unreachable, I added it to silence the linter. In this case I think it's needed, or should we disable the check for this function?







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
comaniac commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420442683



##########
File path: docs/dev/convert_layout.rst
##########
@@ -218,24 +224,49 @@ Second example is for a lightly-layout sensitive operator - batch normalization.
 
 ConvertLayout pass is extremely easy to use. The pass is not a part of default relay.build pipeline. The intended usage is to call it between the framework-to-relay parser and relay.build module call.
 
+In order to specify the layouts to convert to, we create a mapping of heavily-layout sensitive operators to a list of the desired layouts for that operator.
+
 .. code-block:: python
 
     # TFlite framework to Relay parser - Default layout is NHWC
     mod, params = relay.frontend.from_tflite(tflite_model,
                                              shape_dict=shape_dict,
                                              dtype_dict=dtype_dict)
 
+    # We assume our model's heavily-layout sensitive operators only consist of nn.conv2d
+    desired_layouts = {'nn.conv2d': ['NCHW']}

Review comment:
       I agree with you that enforcing "default" would make the code clearer, and I personally like this style more. In this case we can also refine the implementation as follows (take conv2d as an example):
   
   ```
   assert len(desired_layouts) == 2
   desired_data_layout, desired_kernel_layout = desired_layouts
   ```







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
comaniac commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420935901



##########
File path: python/tvm/relay/op/nn/_nn.py
##########
@@ -141,20 +142,29 @@ def convert_conv2d(attrs, inputs, tinfos, desired_layout):
     from tvm import relay
     data, weight = inputs
     new_attrs = dict(attrs)
-    new_attrs['data_layout'] = desired_layout
-    if desired_layout == 'NCHW':
+    desired_data_layout = str(desired_layouts[0])
+    assert desired_data_layout != "default", "Data layout cannot be default"
+    new_attrs['data_layout'] = desired_data_layout
+
+    if len(desired_layouts) > 1:
+        desired_kernel_layout = str(desired_layouts[1])
+        if desired_kernel_layout != "default":
+            new_attrs['kernel_layout'] = desired_kernel_layout
+            return relay.nn.conv2d(data, weight, **new_attrs)
+
+    # Handle default kernel layouts
+    if desired_data_layout == 'NCHW':
         new_attrs['kernel_layout'] = 'OIHW'
         return relay.nn.conv2d(data, weight, **new_attrs)
-    elif desired_layout == 'NHWC':
+    elif desired_data_layout == 'NHWC':
         # Check for depthwise convolution.
         if is_depthwise_conv2d(data.shape, attrs['data_layout'], weight.shape,
                                attrs['kernel_layout'], attrs['groups']):
             new_attrs['kernel_layout'] = 'HWOI'
         else:
             new_attrs['kernel_layout'] = 'HWIO'
         return relay.nn.conv2d(data, weight, **new_attrs)
-    else:
-        assert "Layout %s is not yet supported." % (desired_layout)
+    assert "Layout %s is not yet supported." % (desired_data_layout)
     return None

Review comment:
       We should not disable the checker anyway. In this case you have two options.
   
   1. Move the assertion out of the branch. This solution puts the last block at the top level to avoid an inconsistent return statement, but I do not like it visually, so I prefer the second one.
   
   ```python
   if desired_data_layout == 'NCHW':
     # do something
     return
   assert desired_data_layout == 'NHWC'
   # do something
   return
   ```
   
   2. Throw an exception instead of an assertion. In this case the linter knows everything after `raise` is unreachable and won't complain.
   
   ```python
   if desired_data_layout == 'NCHW':
     # do something
     return
   if desired_data_layout == 'NHWC': # PEP8 standard suggests not using else when previous block has a return statement. Since TVM linter doesn't enforce this point, I'm fine with both.
     # do something
     return
   raise RuntimeError(...)
   ```
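   
   For concreteness, here is a minimal sketch of option 2 applied to the `convert_conv2d` body quoted above (a sketch only, assuming the `desired_layouts` interface from this PR; the "default"-kernel handling and the depthwise check are elided):
   
   ```python
   def convert_conv2d(attrs, inputs, tinfos, desired_layouts):
       from tvm import relay
       data, weight = inputs
       new_attrs = dict(attrs)
       desired_data_layout = str(desired_layouts[0])
       new_attrs['data_layout'] = desired_data_layout
   
       if desired_data_layout == 'NCHW':
           new_attrs['kernel_layout'] = 'OIHW'
           return relay.nn.conv2d(data, weight, **new_attrs)
       if desired_data_layout == 'NHWC':
           new_attrs['kernel_layout'] = 'HWIO'
           return relay.nn.conv2d(data, weight, **new_attrs)
       # Raising (rather than a truthy bare assert) tells the linter this
       # path terminates, so no inconsistent-return warning is emitted.
       raise RuntimeError('Layout %s is not yet supported' % desired_data_layout)
   ```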







[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420475487



##########
File path: docs/dev/convert_layout.rst
##########
@@ -218,24 +224,49 @@ Second example is for a lightly-layout sensitive operator - batch normalization.
 
 ConvertLayout pass is extremely easy to use. The pass is not a part of default relay.build pipeline. The intended usage is to call it between the framework-to-relay parser and relay.build module call.
 
+In order to specify the layouts to convert to, we create a mapping of heavily-layout sensitive operators to a list of the desired layouts for that operator.
+
 .. code-block:: python
 
     # TFlite framework to Relay parser - Default layout is NHWC
     mod, params = relay.frontend.from_tflite(tflite_model,
                                              shape_dict=shape_dict,
                                              dtype_dict=dtype_dict)
 
+    # We assume our model's heavily-layout sensitive operators only consist of nn.conv2d
+    desired_layouts = {'nn.conv2d': ['NCHW']}

Review comment:
       Big +1 for converting the list to a tuple.
   
   @lhutton1  What do you think? Are you ok with the changes? Thanks for being patient.







[GitHub] [incubator-tvm] lhutton1 commented on pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
lhutton1 commented on pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#issuecomment-623353852


   @anijain2305 could you take another look when you have time?





[GitHub] [incubator-tvm] lhutton1 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
lhutton1 commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420644527



##########
File path: docs/dev/convert_layout.rst
##########
@@ -218,24 +224,49 @@ Second example is for a lightly-layout sensitive operator - batch normalization.
 
 ConvertLayout pass is extremely easy to use. The pass is not a part of default relay.build pipeline. The intended usage is to call it between the framework-to-relay parser and relay.build module call.
 
+In order to specify the layouts to convert to, we create a mapping of heavily-layout sensitive operators to a list of the desired layouts for that operator.
+
 .. code-block:: python
 
     # TFlite framework to Relay parser - Default layout is NHWC
     mod, params = relay.frontend.from_tflite(tflite_model,
                                              shape_dict=shape_dict,
                                              dtype_dict=dtype_dict)
 
+    # We assume our model's heavily-layout sensitive operators only consist of nn.conv2d
+    desired_layouts = {'nn.conv2d': ['NCHW']}

Review comment:
       No problem, I'm happy with either, although Tuple might make slightly more sense. I think it might be easier to do this as a separate PR though.







[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420479959



##########
File path: docs/dev/convert_layout.rst
##########
@@ -218,24 +224,49 @@ Second example is for a lightly-layout sensitive operator - batch normalization.
 
 ConvertLayout pass is extremely easy to use. The pass is not a part of default relay.build pipeline. The intended usage is to call it between the framework-to-relay parser and relay.build module call.
 
+In order to specify the layouts to convert to, we create a mapping of heavily-layout sensitive operators to a list of the desired layouts for that operator.
+
 .. code-block:: python
 
     # TFlite framework to Relay parser - Default layout is NHWC
     mod, params = relay.frontend.from_tflite(tflite_model,
                                              shape_dict=shape_dict,
                                              dtype_dict=dtype_dict)
 
+    # We assume our model's heavily-layout sensitive operators only consist of nn.conv2d
+    desired_layouts = {'nn.conv2d': ['NCHW']}

Review comment:
       I see. That makes sense.
   
    For this PR, I think we can start with a plain Python dict, which maps to Map<String, Array<String>>, for now (the current state of the PR). We can then send a separate PR for stricter type checking.
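    
    A small sketch of what that dict can contain under this scheme (entries may differ in length per op; the layouts shown match ones used elsewhere in this thread):
    
    ```python
    desired_layouts = {
        'nn.conv2d': ['NCHW', 'OIHW'],  # [data_layout, kernel_layout]
        'qnn.conv2d': ['NCHW'],         # data layout only; kernel layout defaults
    }
    relay.transform.ConvertLayout(desired_layouts)
    ```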
   
   







[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420426034



##########
File path: docs/dev/convert_layout.rst
##########
@@ -116,16 +117,21 @@ These steps happen for each operator in sequence, where ConvertLayout pass keeps
         data_layout = attrs['data_layout']
         kernel_layout = attrs['kernel_layout']
         data, weight = inputs
-        assert desired_layout == 'NCHW', \
-                "Currently only transformation to NCHW layout is supported."
+
+        # Use the first entry in desired layouts which specifies the data layout.
+        # The expected ordering of layouts for this operator is defined by this function.
+        desired_data_layout = str(desired_layouts[0])
+
         if desired_layout == 'NCHW':
             new_attrs = dict(attrs)
-            new_attrs['data_layout'] = desired_layout
+            new_attrs['data_layout'] = desired_data_layout
             new_attrs['kernel_layout'] = 'OIHW'
             # Actual insertion of layout transforms is taken care internally
             # by ConvertLayout pass.
             return relay.nn.conv2d(data, weight, **new_attrs)
-        return None
+        else:
+            assert "Layout %s is not yet supported" % desired_data_layout
+            return None

Review comment:
       One way to do this is to move the assert to L124.
   
   ~~~
   assert desired_layout == ...
   if desired_layout == 'NCHW':
       .....
   return None
   ~~~

##########
File path: python/tvm/relay/op/nn/_nn.py
##########
@@ -141,20 +142,29 @@ def convert_conv2d(attrs, inputs, tinfos, desired_layout):
     from tvm import relay
     data, weight = inputs
     new_attrs = dict(attrs)
-    new_attrs['data_layout'] = desired_layout
-    if desired_layout == 'NCHW':
+    desired_data_layout = str(desired_layouts[0])
+    assert desired_data_layout != "default", "Data layout cannot be default"
+    new_attrs['data_layout'] = desired_data_layout
+
+    if len(desired_layouts) > 1:
+        desired_kernel_layout = str(desired_layouts[1])
+        if desired_kernel_layout != "default":
+            new_attrs['kernel_layout'] = desired_kernel_layout
+            return relay.nn.conv2d(data, weight, **new_attrs)
+
+    # Handle default kernel layouts
+    if desired_data_layout == 'NCHW':
         new_attrs['kernel_layout'] = 'OIHW'
         return relay.nn.conv2d(data, weight, **new_attrs)
-    elif desired_layout == 'NHWC':
+    elif desired_data_layout == 'NHWC':
         # Check for depthwise convolution.
         if is_depthwise_conv2d(data.shape, attrs['data_layout'], weight.shape,
                                attrs['kernel_layout'], attrs['groups']):
             new_attrs['kernel_layout'] = 'HWOI'
         else:
             new_attrs['kernel_layout'] = 'HWIO'
         return relay.nn.conv2d(data, weight, **new_attrs)
-    else:
-        assert "Layout %s is not yet supported." % (desired_layout)
+    assert "Layout %s is not yet supported." % (desired_data_layout)
     return None

Review comment:
       Ditto

##########
File path: docs/dev/convert_layout.rst
##########
@@ -218,24 +224,49 @@ Second example is for a lightly-layout sensitive operator - batch normalization.
 
 ConvertLayout pass is extremely easy to use. The pass is not a part of default relay.build pipeline. The intended usage is to call it between the framework-to-relay parser and relay.build module call.
 
+In order to specify the layouts to convert to, we create a mapping of heavily-layout sensitive operators to a list of the desired layouts for that operator.
+
 .. code-block:: python
 
     # TFlite framework to Relay parser - Default layout is NHWC
     mod, params = relay.frontend.from_tflite(tflite_model,
                                              shape_dict=shape_dict,
                                              dtype_dict=dtype_dict)
 
+    # We assume our model's heavily-layout sensitive operators only consist of nn.conv2d
+    desired_layouts = {'nn.conv2d': ['NCHW']}

Review comment:
       How about having a stricter requirement here?
   
   The user must specify "default":
   ~~~
   desired_layouts = {'nn.conv2d': ['NCHW', 'default']}
   ~~~
   
   @comaniac What do you think? We can add a comment in the Conv2d convert layout function to specify what default means.

##########
File path: tests/python/relay/test_pass_convert_op_layout.py
##########
@@ -625,3 +685,5 @@ def expected():
     test_qnn_conv_requantize_convert_layout()
     test_qnn_conv_concat_convert_layout()
     test_qnn_conv_add_convert_layout()
+    test_conv_convert_kernel_layout()
+    test_default_keyword()

Review comment:
       A soft suggestion: does it make sense to add a test that exercises conversion for two operators simultaneously, like conv3d and conv2d in the same network, or qnn.conv2d and nn.conv2d in the same network? A sketch follows.
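   
   A hedged sketch of what such a test could look like, following the pattern of the tests above and reusing this file's `run_opt_pass` helper (the test name, shapes, and constants are illustrative, not this PR's actual code):
   
   ```python
   def test_conv_and_qnn_conv_convert_layout():
       def before():
           x = relay.var("x", shape=(1, 56, 56, 64), dtype='int8')
           w1 = relay.var("w1", shape=(3, 3, 64, 64), dtype='int8')
           w2 = relay.var("w2", shape=(3, 3, 64, 64), dtype='int8')
           y = relay.qnn.op.conv2d(x, w1,
                                   relay.const(1, 'int32'), relay.const(1, 'int32'),
                                   relay.const(1, 'float32'), relay.const(1, 'float32'),
                                   channels=64, kernel_size=(3, 3), padding=(1, 1),
                                   data_layout='NHWC', kernel_layout='HWIO')
           y = relay.cast(y, 'int8')
           y = relay.nn.conv2d(y, w2, channels=64, kernel_size=(3, 3), padding=(1, 1),
                               data_layout='NHWC', kernel_layout='HWIO')
           return relay.Function(analysis.free_vars(y), y)
   
       desired = {'nn.conv2d': ['NCHW', 'default'],
                  'qnn.conv2d': ['NCHW', 'default']}
       a = run_opt_pass(before(), transform.ConvertLayout(desired))
       # expected() would mirror before() with the layout_transform ops inserted,
       # as in the other tests in this file; compare with structural_equal.
   ```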







[GitHub] [incubator-tvm] lhutton1 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
lhutton1 commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420642814



##########
File path: docs/dev/convert_layout.rst
##########
@@ -116,16 +117,21 @@ These steps happen for each operator in sequence, where ConvertLayout pass keeps
         data_layout = attrs['data_layout']
         kernel_layout = attrs['kernel_layout']
         data, weight = inputs
-        assert desired_layout == 'NCHW', \
-                "Currently only transformation to NCHW layout is supported."
+
+        # Use the first entry in desired layouts which specifies the data layout.
+        # The expected ordering of layouts for this operator is defined by this function.
+        desired_data_layout = str(desired_layouts[0])
+
         if desired_layout == 'NCHW':
             new_attrs = dict(attrs)
-            new_attrs['data_layout'] = desired_layout
+            new_attrs['data_layout'] = desired_data_layout
             new_attrs['kernel_layout'] = 'OIHW'
             # Actual insertion of layout transforms is taken care internally
             # by ConvertLayout pass.
             return relay.nn.conv2d(data, weight, **new_attrs)
-        return None
+        else:
+            assert "Layout %s is not yet supported" % desired_data_layout
+            return None

Review comment:
       I'll go with the assert, thanks







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
comaniac commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420443258



##########
File path: docs/dev/convert_layout.rst
##########
@@ -218,24 +224,49 @@ Second example is for a lightly-layout sensitive operator - batch normalization.
 
 ConvertLayout pass is extremely easy to use. The pass is not a part of default relay.build pipeline. The intended usage is to call it between the framework-to-relay parser and relay.build module call.
 
+In order to specify the layouts to convert to, we create a mapping of heavily-layout sensitive operators to a list of the desired layouts for that operator.
+
 .. code-block:: python
 
     # TFlite framework to Relay parser - Default layout is NHWC
     mod, params = relay.frontend.from_tflite(tflite_model,
                                              shape_dict=shape_dict,
                                              dtype_dict=dtype_dict)
 
+    # We assume our model's heavily-layout sensitive operators only consist of nn.conv2d
+    desired_layouts = {'nn.conv2d': ['NCHW']}

Review comment:
       In fact, if this is the case, we can even consider using a tuple instead of a list to be more type specific (`List[str]` vs. `Tuple[str, str]`).







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
comaniac commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r419595355



##########
File path: docs/dev/convert_layout.rst
##########
@@ -116,6 +117,7 @@ These steps happen for each operator in sequence, where ConvertLayout pass keeps
         data_layout = attrs['data_layout']
         kernel_layout = attrs['kernel_layout']
         data, weight = inputs
+        desired_layout = str(desired_layouts[0])
         assert desired_layout == 'NCHW', \
                 "Currently only transformation to NCHW layout is supported."
         if desired_layout == 'NCHW':

Review comment:
       This check seems unnecessary because Line 121 already has an assertion. Either remove this branch or move the assertion to the else branch.

##########
File path: docs/dev/convert_layout.rst
##########
@@ -218,24 +220,40 @@ Second example is for a lightly-layout sensitive operator - batch normalization.
 
 ConvertLayout pass is extremely easy to use. The pass is not a part of default relay.build pipeline. The intended usage is to call it between the framework-to-relay parser and relay.build module call.
 
+In order to specify the layouts to convert to, we create a mapping of heavily-layout sensitive operators to a list of the desired layouts for that operator.
+
 .. code-block:: python
 
     # TFlite framework to Relay parser - Default layout is NHWC
     mod, params = relay.frontend.from_tflite(tflite_model,
                                              shape_dict=shape_dict,
                                              dtype_dict=dtype_dict)
 
+    # We assume our model's heavily-layout sensitive operators only consist of nn.conv2d
+    desired_layouts = {'nn.conv2d': ['NCHW']}
+
     # Convert the layout to NCHW
     # RemoveUnunsedFunctions is used to clean up the graph.
     seq = tvm.transform.Sequential([relay.transform.RemoveUnusedFunctions(),
-                                      relay.transform.ConvertLayout('NCHW')])
+                                      relay.transform.ConvertLayout(desired_layouts)])
     with relay.transform.PassContext(opt_level=3):
         mod = seq(mod)
 
     # Call relay compilation
     with relay.build_config(opt_level=3):
          graph, lib, params = relay.build(mod, target, params=params)
 
+
+The example above only considers data layout, the kernel layout is automatically converted to one that is supported by TVM. If we wish to also convert to a specific kernel layout this can be done like so:

Review comment:
       s/, the kernel/, because the kernel.

##########
File path: docs/dev/convert_layout.rst
##########
@@ -116,6 +117,7 @@ These steps happen for each operator in sequence, where ConvertLayout pass keeps
         data_layout = attrs['data_layout']
         kernel_layout = attrs['kernel_layout']
         data, weight = inputs
+        desired_layout = str(desired_layouts[0])

Review comment:
       It's confusing that you only use the first layout in this list. Although I later found the explanation in this doc, it'd be better to add 1-2 sentences here.

##########
File path: docs/dev/convert_layout.rst
##########
@@ -218,24 +220,40 @@ Second example is for a lightly-layout sensitive operator - batch normalization.
 
 ConvertLayout pass is extremely easy to use. The pass is not a part of default relay.build pipeline. The intended usage is to call it between the framework-to-relay parser and relay.build module call.
 
+In order to specify the layouts to convert to, we create a mapping of heavily-layout sensitive operators to a list of the desired layouts for that operator.
+
 .. code-block:: python
 
     # TFlite framework to Relay parser - Default layout is NHWC
     mod, params = relay.frontend.from_tflite(tflite_model,
                                              shape_dict=shape_dict,
                                              dtype_dict=dtype_dict)
 
+    # We assume our model's heavily-layout sensitive operators only consist of nn.conv2d
+    desired_layouts = {'nn.conv2d': ['NCHW']}
+
     # Convert the layout to NCHW
     # RemoveUnunsedFunctions is used to clean up the graph.
     seq = tvm.transform.Sequential([relay.transform.RemoveUnusedFunctions(),
-                                      relay.transform.ConvertLayout('NCHW')])
+                                      relay.transform.ConvertLayout(desired_layouts)])
     with relay.transform.PassContext(opt_level=3):
         mod = seq(mod)
 
     # Call relay compilation
     with relay.build_config(opt_level=3):
          graph, lib, params = relay.build(mod, target, params=params)
 
+
+The example above only considers data layout, the kernel layout is automatically converted to one that is supported by TVM. If we wish to also convert to a specific kernel layout this can be done like so:
+
+.. code-block:: python
+
+    desired_layouts = {'nn.conv2d': ['NCHW', 'HWIO']}
+    pass = relay.transform.ConvertLayout(desired_layouts)
+
+
+If we wish to select the default choice for a specific layout then the layout should be declared as "default". In the first example, the kernel layout is implicitly defined as "default".

Review comment:
    1. Better to explain the order. Imagine that a new user wants to use this pass: which function/doc should they refer to in order to construct this list (e.g., [data layout, kernel layout])? A simple version could say that conv2d is always [data, kernel] and that you can check XXX for others.
   
   2. Better to explicitly write down an example saying that `{'nn.conv2d': ['NCHW', 'default']}` is equivalent to `{'nn.conv2d': ['NCHW', 'OIHW']}`.
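    
    For concreteness, the equivalence example being requested could read (a sketch; per the diff above, 'OIHW' is what "default" resolves to for an NCHW conv2d):
    
    ```python
    # These two specifications behave identically for nn.conv2d:
    relay.transform.ConvertLayout({'nn.conv2d': ['NCHW', 'default']})
    relay.transform.ConvertLayout({'nn.conv2d': ['NCHW', 'OIHW']})
    ```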

##########
File path: python/tvm/relay/op/nn/_nn.py
##########
@@ -141,7 +142,17 @@ def convert_conv2d(attrs, inputs, tinfos, desired_layout):
     from tvm import relay
     data, weight = inputs
     new_attrs = dict(attrs)
+    desired_layout = str(desired_layouts[0])

Review comment:
       Since we now have `desired_kernel_layout`, would it be better to rename this one to `desired_data_layout` (and likewise for the other conv2ds)?

##########
File path: python/tvm/relay/qnn/op/layout_conversions.py
##########
@@ -22,7 +22,7 @@
 
 
 @reg.register_convert_op_layout("qnn.conv2d")
-def convert_qnn_conv2d(attrs, inputs, tinfos, desired_layout):
+def convert_qnn_conv2d(attrs, inputs, tinfos, layouts):

Review comment:
       Why not `desired_layouts` to be consistent with others?

##########
File path: python/tvm/relay/qnn/op/layout_conversions.py
##########
@@ -43,11 +44,21 @@ def convert_qnn_conv2d(attrs, inputs, tinfos, desired_layout):
     """
     # pylint: disable=import-outside-toplevel
     from tvm import relay
+    desired_layout = str(layouts[0])
     assert desired_layout == 'NCHW', \

Review comment:
       ditto. The assertion can be merged with the if-condition.

##########
File path: tests/python/relay/test_pass_convert_op_layout.py
##########
@@ -606,7 +606,37 @@ def expected():
         return y
 
     a = before()
-    a = run_opt_pass(a, transform.ConvertLayout('NCHW'))
+    a = run_opt_pass(a, transform.ConvertLayout({'qnn.conv2d': ['NCHW']}))
+    b = run_opt_pass(expected(), transform.InferType())
+
+    assert tvm.ir.structural_equal(a, b), "Actual = \n" + str(a)
+
+
+def test_conv_convert_kernel_layout():
+    """ Check that convolution kernel layout is correctly transformed. """
+    def before():
+        x = relay.var("x", shape=(1, 56, 56, 64))
+        weight = relay.var("weight", shape=(3, 3, 64, 64))
+        y = relay.nn.conv2d(x, weight, channels=64, kernel_size=(3, 3), padding=(1, 1),
+                            data_layout='NHWC', kernel_layout='HWIO')
+        y = relay.Function(analysis.free_vars(y), y)
+        return y
+
+    def expected():
+        x = relay.var("x", shape=(1, 56, 56, 64))
+        w = relay.var("weight", shape=(3, 3, 64, 64))
+        w = relay.layout_transform(w, 'HWIO', 'OHWI')
+        y = relay.nn.conv2d(x, w,
+                            channels=64,
+                            kernel_size=(3, 3),
+                            padding=(1, 1),
+                            data_layout='NHWC',
+                            kernel_layout='OHWI')
+        y = relay.Function(analysis.free_vars(y), y)
+        return y
+
+    a = before()
+    a = run_opt_pass(a, transform.ConvertLayout({'nn.conv2d': ['NHWC', 'OHWI']}))

Review comment:
       Should also test "default" since it is a reserved word for this pass.

##########
File path: python/tvm/relay/transform/transform.py
##########
@@ -324,7 +324,7 @@ def AlterOpLayout():
     return _ffi_api.AlterOpLayout()
 
 
-def ConvertLayout(desired_layout):
+def ConvertLayout(layouts):

Review comment:
       ditto. Why not `desired_layouts`?







[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420475487



##########
File path: docs/dev/convert_layout.rst
##########
@@ -218,24 +224,49 @@ Second example is for a lightly-layout sensitive operator - batch normalization.
 
 ConvertLayout pass is extremely easy to use. The pass is not a part of default relay.build pipeline. The intended usage is to call it between the framework-to-relay parser and relay.build module call.
 
+In order to specify the layouts to convert to, we create a mapping of heavily-layout sensitive operators to a list of the desired layouts for that operator.
+
 .. code-block:: python
 
     # TFlite framework to Relay parser - Default layout is NHWC
     mod, params = relay.frontend.from_tflite(tflite_model,
                                              shape_dict=shape_dict,
                                              dtype_dict=dtype_dict)
 
+    # We assume our model's heavily-layout sensitive operators only consist of nn.conv2d
+    desired_layouts = {'nn.conv2d': ['NCHW']}

Review comment:
       Edit - Actually, Tuple might not work, @comaniac, because for pool we might have a Tuple of 1 element, but for conv2d we have 2.
   
   @lhutton1  What do you think? Are you ok with the changes? Thanks for being patient.







[GitHub] [incubator-tvm] lhutton1 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
lhutton1 commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r413999028



##########
File path: include/tvm/relay/op_attr_types.h
##########
@@ -158,13 +158,15 @@ using FTVMAlterOpLayout = runtime::TypedPackedFunc<
  * \param tinfos An array of placeholders, use for getting the inferred shape
  *               and dtype of the inputs.
  * \param desired_layout The desired layout.
+ * \param additional_layouts Specify additional layouts, e.g. kernel_layout.
  * \return new_expr The modified expression.
  */
 using FTVMConvertOpLayout = runtime::TypedPackedFunc<
   Expr(const Attrs& attrs,
        const Array<Expr>& args,
        const Array<te::Tensor>& tinfos,
-       const std::string& desired_layout)>;
+       const std::string& desired_layout,

Review comment:
       Thanks, I hope I understood correctly. So the idea is that someone using this pass would specify a map of operator -> [layouts] e.g. `relay.transform.ConvertLayout({"nn.conv2d": ["NCHW", "IOHW"], ...})`? My intention for doing it the additional_layouts way was to minimise the impact on the current state of the pass, although I like your idea.
   
   My only concern with using a list would be an operator that had, say, 3 different input tensors with different layouts (I don't think one exists currently; could this pop up in the future?). For example: data, kernel, some_other. How could we specify a preferred layout for data and some_other whilst leaving kernel set to default?







[GitHub] [incubator-tvm] lhutton1 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
lhutton1 commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r414025113



##########
File path: include/tvm/relay/op_attr_types.h
##########
@@ -158,13 +158,15 @@ using FTVMAlterOpLayout = runtime::TypedPackedFunc<
  * \param tinfos An array of placeholders, use for getting the inferred shape
  *               and dtype of the inputs.
  * \param desired_layout The desired layout.
+ * \param additional_layouts Specify additional layouts, e.g. kernel_layout.
  * \return new_expr The modified expression.
  */
 using FTVMConvertOpLayout = runtime::TypedPackedFunc<
   Expr(const Attrs& attrs,
        const Array<Expr>& args,
        const Array<te::Tensor>& tinfos,
-       const std::string& desired_layout)>;
+       const std::string& desired_layout,

Review comment:
       Sounds good to me :)







[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420481027



##########
File path: docs/dev/convert_layout.rst
##########
@@ -116,16 +117,21 @@ These steps happen for each operator in sequence, where ConvertLayout pass keeps
         data_layout = attrs['data_layout']
         kernel_layout = attrs['kernel_layout']
         data, weight = inputs
-        assert desired_layout == 'NCHW', \
-                "Currently only transformation to NCHW layout is supported."
+
+        # Use the first entry in desired layouts which specifies the data layout.
+        # The expected ordering of layouts for this operator is defined by this function.
+        desired_data_layout = str(desired_layouts[0])
+
         if desired_layout == 'NCHW':
             new_attrs = dict(attrs)
-            new_attrs['data_layout'] = desired_layout
+            new_attrs['data_layout'] = desired_data_layout
             new_attrs['kernel_layout'] = 'OIHW'
             # Actual insertion of layout transforms is taken care internally
             # by ConvertLayout pass.
             return relay.nn.conv2d(data, weight, **new_attrs)
-        return None
+        else:
+            assert "Layout %s is not yet supported" % desired_data_layout
+            return None

Review comment:
       I am fine with either way.







[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r414011046



##########
File path: include/tvm/relay/op_attr_types.h
##########
@@ -158,13 +158,15 @@ using FTVMAlterOpLayout = runtime::TypedPackedFunc<
  * \param tinfos An array of placeholders, use for getting the inferred shape
  *               and dtype of the inputs.
  * \param desired_layout The desired layout.
+ * \param additional_layouts Specify additional layouts, e.g. kernel_layout.
  * \return new_expr The modified expression.
  */
 using FTVMConvertOpLayout = runtime::TypedPackedFunc<
   Expr(const Attrs& attrs,
        const Array<Expr>& args,
        const Array<te::Tensor>& tinfos,
-       const std::string& desired_layout)>;
+       const std::string& desired_layout,

Review comment:
       You are correct in understanding my proposal.
   
       That's interesting. Although we don't have any operator with three different layouts today, for that case we could use a string "default" (or maybe a better name) that leaves the layout of the tensor at that index unchanged.
   
   Therefore, we can be stricter while calling the ConvertLayout pass: the user must define the layouts for all the tensors, and if they do not care about a certain tensor's layout, they must still say "default". See the sketch below.
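   
   A short illustration of the idea (the op name is hypothetical; only the first and third tensors get explicit layouts, while the kernel keeps whatever layout it already has):
   
   ```python
   # [data_layout, kernel_layout, some_other_layout] for a hypothetical 3-input op
   desired_layouts = {'nn.hypothetical_op': ['NCHW', 'default', 'NCHW']}
   relay.transform.ConvertLayout(desired_layouts)
   ```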







[GitHub] [incubator-tvm] lhutton1 commented on pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
lhutton1 commented on pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#issuecomment-623970891


   There seems to be an unrelated issue with the depthwise convolution check when using the default NHWC kernel layout: `if is_depthwise_conv2d(data.shape, attrs['data_layout'], weight.shape, attrs['kernel_layout'], attrs['groups'])`, `AttributeError: <class 'tvm.relay.expr.Var'> has no attribute shape`.
   
   For now I'll change the test to use NCHW, but I'll raise a ticket.
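   
   A guess at the cause (an assumption on my part, not this PR's fix): `data` and `weight` are relay Vars here, which carry their shape in the inferred type rather than in a `.shape` attribute, so the check could read shapes from the types instead, e.g.:
   
   ```python
   # Sketch: take shapes from the inferred types (valid once types are known)
   # instead of the nonexistent .shape attribute on Var.
   data_shape = data.checked_type.shape
   weight_shape = weight.checked_type.shape
   if is_depthwise_conv2d(data_shape, attrs['data_layout'],
                          weight_shape, attrs['kernel_layout'], attrs['groups']):
       new_attrs['kernel_layout'] = 'HWOI'
   ```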





[GitHub] [incubator-tvm] lhutton1 commented on issue #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
lhutton1 commented on issue #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#issuecomment-618515093


   cc @anijain2305 






[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r413969744



##########
File path: include/tvm/relay/op_attr_types.h
##########
@@ -158,13 +158,15 @@ using FTVMAlterOpLayout = runtime::TypedPackedFunc<
  * \param tinfos An array of placeholders, use for getting the inferred shape
  *               and dtype of the inputs.
  * \param desired_layout The desired layout.
+ * \param additional_layouts Specify additional layouts, e.g. kernel_layout.
  * \return new_expr The modified expression.
  */
 using FTVMConvertOpLayout = runtime::TypedPackedFunc<
   Expr(const Attrs& attrs,
        const Array<Expr>& args,
        const Array<te::Tensor>& tinfos,
-       const std::string& desired_layout)>;
+       const std::string& desired_layout,

Review comment:
       Does it make sense to convert desired_layout to a Map from `op_name` to a list of layouts?
   For example, it can look like
   ~~~
   "nn.conv2d" : ["NCHW", "IOHW"]
   "nn.conv3d" : ["NCDHW", "IODHW"]
   ~~~
   
    Though I agree that the current interface is very restrictive, the additional-layouts approach seems a little awkward. With the above proposal, as we define convert_layout for each operator, we have a fixed-length list that we can use.
   







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
comaniac commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420265752



##########
File path: docs/dev/convert_layout.rst
##########
@@ -116,16 +117,21 @@ These steps happen for each operator in sequence, where ConvertLayout pass keeps
         data_layout = attrs['data_layout']
         kernel_layout = attrs['kernel_layout']
         data, weight = inputs
-        assert desired_layout == 'NCHW', \
-                "Currently only transformation to NCHW layout is supported."
+
+        # Use the first entry in desired layouts which specifies the data layout.
+        # The expected ordering of layouts for this operator is defined by this function.
+        desired_data_layout = str(desired_layouts[0])
+
         if desired_layout == 'NCHW':
             new_attrs = dict(attrs)
-            new_attrs['data_layout'] = desired_layout
+            new_attrs['data_layout'] = desired_data_layout
             new_attrs['kernel_layout'] = 'OIHW'
             # Actual insertion of layout transforms is taken care internally
             # by ConvertLayout pass.
             return relay.nn.conv2d(data, weight, **new_attrs)
-        return None
+        else:
+            assert "Layout %s is not yet supported" % desired_data_layout
+            return None

Review comment:
       This return is unreachable.

##########
File path: python/tvm/relay/op/nn/_nn.py
##########
@@ -216,15 +227,24 @@ def convert_conv3d(attrs, inputs, tinfos, desired_layout):
     from tvm import relay
     data, weight = inputs
     new_attrs = dict(attrs)
-    new_attrs['data_layout'] = desired_layout
-    if desired_layout == 'NCDHW':
+    desired_data_layout = str(desired_layouts[0])
+    assert desired_data_layout != "default", "Data layout cannot be default"
+    new_attrs['data_layout'] = desired_data_layout
+
+    if len(desired_layouts) > 1:
+        desired_kernel_layout = str(desired_layouts[1])
+        if desired_kernel_layout != "default":
+            new_attrs['kernel_layout'] = desired_kernel_layout
+            return relay.nn.conv3d(data, weight, **new_attrs)
+
+    # Handle default kernel layouts
+    if desired_data_layout == 'NCDHW':
         new_attrs['kernel_layout'] = 'OIDHW'
         return relay.nn.conv3d(data, weight, **new_attrs)
-    elif desired_layout == "NDHWC":
+    elif desired_data_layout == "NDHWC":
         new_attrs['kernel_layout'] = 'DHWIO'
         return relay.nn.conv3d(data, weight, **new_attrs)
-    else:
-        assert "Layout %s is not yet supported" % desired_layout
+    assert "Layout %s is not yet supported" % desired_data_layout
     return None

Review comment:
       ditto.

##########
File path: docs/dev/convert_layout.rst
##########
@@ -218,24 +224,49 @@ Second example is for a lightly-layout sensitive operator - batch normalization.
 
 ConvertLayout pass is extremely easy to use. The pass is not a part of default relay.build pipeline. The intended usage is to call it between the framework-to-relay parser and relay.build module call.
 
+In order to specify the layouts to convert to, we create a mapping of heavily-layout sensitive operators to a list of the desired layouts for that operator.
+
 .. code-block:: python
 
     # TFlite framework to Relay parser - Default layout is NHWC
     mod, params = relay.frontend.from_tflite(tflite_model,
                                              shape_dict=shape_dict,
                                              dtype_dict=dtype_dict)
 
+    # We assume our model's heavily-layout sensitive operators only consist of nn.conv2d
+    desired_layouts = {'nn.conv2d': ['NCHW']}
+
     # Convert the layout to NCHW
     # RemoveUnunsedFunctions is used to clean up the graph.
     seq = tvm.transform.Sequential([relay.transform.RemoveUnusedFunctions(),
-                                      relay.transform.ConvertLayout('NCHW')])
+                                      relay.transform.ConvertLayout(desired_layouts)])
     with relay.transform.PassContext(opt_level=3):
         mod = seq(mod)
 
     # Call relay compilation
     with relay.build_config(opt_level=3):
          graph, lib, params = relay.build(mod, target, params=params)
 
+
+The example above only considers data layout because the kernel layout is automatically converted to one that is supported by TVM. If we wish to also convert to a specific kernel layout this can be done like so:
+
+.. code-block:: python
+
+    desired_layouts = {'nn.conv2d': ['NCHW', 'HWIO']}
+    pass = relay.transform.ConvertLayout(desired_layouts)
+
+
+The ordering of layouts is defined by the implementation of `register_convert_op_layout("OPNAME")`, you can refer to the docstring which should explicitly state the expected layout. In the example above its [data_layout, kernel_layout].

Review comment:
       s/its/it's

##########
File path: python/tvm/relay/op/nn/_nn.py
##########
@@ -141,20 +142,29 @@ def convert_conv2d(attrs, inputs, tinfos, desired_layout):
     from tvm import relay
     data, weight = inputs
     new_attrs = dict(attrs)
-    new_attrs['data_layout'] = desired_layout
-    if desired_layout == 'NCHW':
+    desired_data_layout = str(desired_layouts[0])
+    assert desired_data_layout != "default", "Data layout cannot be default"
+    new_attrs['data_layout'] = desired_data_layout
+
+    if len(desired_layouts) > 1:
+        desired_kernel_layout = str(desired_layouts[1])
+        if desired_kernel_layout != "default":
+            new_attrs['kernel_layout'] = desired_kernel_layout
+            return relay.nn.conv2d(data, weight, **new_attrs)
+
+    # Handle default kernel layouts
+    if desired_data_layout == 'NCHW':
         new_attrs['kernel_layout'] = 'OIHW'
         return relay.nn.conv2d(data, weight, **new_attrs)
-    elif desired_layout == 'NHWC':
+    elif desired_data_layout == 'NHWC':
         # Check for depthwise convolution.
         if is_depthwise_conv2d(data.shape, attrs['data_layout'], weight.shape,
                                attrs['kernel_layout'], attrs['groups']):
             new_attrs['kernel_layout'] = 'HWOI'
         else:
             new_attrs['kernel_layout'] = 'HWIO'
         return relay.nn.conv2d(data, weight, **new_attrs)
-    else:
-        assert "Layout %s is not yet supported." % (desired_layout)
+    assert "Layout %s is not yet supported." % (desired_data_layout)
     return None

Review comment:
       ditto.

##########
File path: python/tvm/relay/qnn/op/layout_conversions.py
##########
@@ -43,11 +44,22 @@ def convert_qnn_conv2d(attrs, inputs, tinfos, desired_layout):
     """
     # pylint: disable=import-outside-toplevel
     from tvm import relay
-    assert desired_layout == 'NCHW', \
-            "Currently only transformation to NCHW layout is supported."
-    if desired_layout == 'NCHW':
+    desired_data_layout = str(desired_layouts[0])
+    assert desired_data_layout != "default", "Data layout cannot be default"
+
+    if desired_data_layout == 'NCHW':
         new_attrs = dict(attrs)
-        new_attrs['data_layout'] = desired_layout
-        new_attrs['kernel_layout'] = 'OIHW'
+        new_attrs['data_layout'] = desired_data_layout
+
+        desired_kernel_layout = "default"
+        if len(desired_layouts) > 1:
+            desired_kernel_layout = str(desired_layouts[1])
+
+        if desired_kernel_layout != "default":
+            new_attrs['kernel_layout'] = desired_kernel_layout
+        else:
+            new_attrs['kernel_layout'] = 'OIHW'
+
         return relay.qnn.op.conv2d(*inputs, **new_attrs)
+    assert "Layout %s is not yet supported" % desired_data_layout
     return None

Review comment:
       ditto.







[GitHub] [incubator-tvm] lhutton1 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

Posted by GitBox <gi...@apache.org>.
lhutton1 commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420434153



##########
File path: docs/dev/convert_layout.rst
##########
@@ -116,16 +117,21 @@ These steps happen for each operator in sequence, where ConvertLayout pass keeps
         data_layout = attrs['data_layout']
         kernel_layout = attrs['kernel_layout']
         data, weight = inputs
-        assert desired_layout == 'NCHW', \
-                "Currently only transformation to NCHW layout is supported."
+
+        # Use the first entry in desired layouts which specifies the data layout.
+        # The expected ordering of layouts for this operator is defined by this function.
+        desired_data_layout = str(desired_layouts[0])
+
         if desired_layout == 'NCHW':
             new_attrs = dict(attrs)
-            new_attrs['data_layout'] = desired_layout
+            new_attrs['data_layout'] = desired_data_layout
             new_attrs['kernel_layout'] = 'OIHW'
             # Actual insertion of layout transforms is taken care internally
             # by ConvertLayout pass.
             return relay.nn.conv2d(data, weight, **new_attrs)
-        return None
+        else:
+            assert "Layout %s is not yet supported" % desired_data_layout
+            return None

Review comment:
       I think this goes back to @comaniac's original comment about unnecessary branching after an assert; just removing the unnecessary return should be enough?
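   
   For reference, a minimal sketch of the docs snippet with that suggestion applied (one possible shape, assumed rather than taken from the PR):
   ~~~
   # Assumes `from tvm import relay` as in the surrounding doc example.
   def convert_conv2d(attrs, inputs, tinfos, desired_layouts):
       data, weight = inputs
       desired_data_layout = str(desired_layouts[0])
       assert desired_data_layout == 'NCHW', \
               "Layout %s is not yet supported" % desired_data_layout
       new_attrs = dict(attrs)
       new_attrs['data_layout'] = desired_data_layout
       new_attrs['kernel_layout'] = 'OIHW'
       # Actual insertion of layout transforms is handled by the ConvertLayout pass.
       return relay.nn.conv2d(data, weight, **new_attrs)
   ~~~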



