Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/04/03 15:05:33 UTC

[GitHub] [incubator-tvm] shoubhik opened a new pull request #5230: Adding support for TFLite QnnSubtract operator.

shoubhik opened a new pull request #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230
 
 
   * Adding TFLite quantized subtract operator with test case.
   * Adding fix for depthwise conv2d if input channel is 1 with test case.
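
   A minimal sketch of what the new quantized subtract looks like from the Relay side (not taken from the PR's test case): this assumes the TFLite QnnSubtract converter lowers to `relay.qnn.op.subtract` with the same signature as the other QNN binary ops such as `qnn.add`, and the shapes and quantization parameters below are made up for illustration.
   ```
   from tvm import relay

   # Two quantized uint8 tensors; scales and zero points are illustrative values.
   lhs = relay.var("lhs", shape=(1, 8), dtype="uint8")
   rhs = relay.var("rhs", shape=(1, 8), dtype="uint8")

   out = relay.qnn.op.subtract(
       lhs, rhs,
       lhs_scale=relay.const(0.02, "float32"),
       lhs_zero_point=relay.const(128, "int32"),
       rhs_scale=relay.const(0.02, "float32"),
       rhs_zero_point=relay.const(128, "int32"),
       output_scale=relay.const(0.04, "float32"),
       output_zero_point=relay.const(128, "int32"),
   )
   func = relay.Function([lhs, rhs], out)
   print(func)
   ```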

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-tvm] anijain2305 merged pull request #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
anijain2305 merged pull request #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230
 
 
   


[GitHub] [incubator-tvm] shoubhik commented on issue #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
shoubhik commented on issue #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-609875364
 
 
   @FrozenGene @yzhliu Can you take a look at the fix for depthwise conv2d with 1 input channel?


[GitHub] [incubator-tvm] anijain2305 commented on issue #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on issue #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-611072742
 
 
   @FrozenGene It is not possible to use depthwise conv inside Relay here. `groups` is 1 in this case, and the Relay code only considers a conv as depthwise when `groups > 1`. So, I don't think there are 2 choices here. Let us know if you are ok with this.


[GitHub] [incubator-tvm] FrozenGene edited a comment on issue #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
FrozenGene edited a comment on issue #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-611310639
 
 
   > Thanks for the reply. Can you please share what two things we are comparing? I am still confused. Please give me an example with the input shape, depth multiplier, groups, etc.
   
   I am referring to the issue we talked about, https://github.com/tensorflow/tensorflow/issues/23563. For example, if we feed an input of shape (1, 224, 224, 1), the TensorFlow graph handles it with a normal convolution, i.e. `input -> convolution`. However, TFLite changes that convolution into a depthwise convolution with a multiplier greater than 1, and the input channel will be 1. Like this picture from the TensorFlow issue:
   ![image](https://user-images.githubusercontent.com/7287321/78855497-a5766f80-7a56-11ea-92d3-96590ffb4fae.png)
   I suspect that if we keep the original convolution instead of changing it to a depthwise convolution with a multiplier greater than 1, like this:
   ![image](https://user-images.githubusercontent.com/7287321/78855555-c8088880-7a56-11ea-8fe7-c4bb0183a998.png)
   we could get better performance.
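
   (For concreteness, a rough Relay-level sketch, not taken from the PR, of the "keep the original convolution" form described above, assuming a (1, 224, 224, 1) NHWC input; the 3x3 kernel and 32 output filters are made-up numbers.)
   ```
   from tvm import relay

   data = relay.var("data", shape=(1, 224, 224, 1), dtype="float32")
   # HWIO weights: 3x3 kernel, 1 input channel, 32 output filters (illustrative).
   weight = relay.var("weight", shape=(3, 3, 1, 32), dtype="float32")

   conv = relay.nn.conv2d(
       data, weight,
       channels=32,
       kernel_size=(3, 3),
       data_layout="NHWC",
       kernel_layout="HWIO",
   )
   ```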
   


[GitHub] [incubator-tvm] FrozenGene commented on issue #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on issue #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-611288745
 
 
   > @FrozenGene It is not possible to use depthwise conv inside Relay here. `groups` is 1 in this case, and the Relay code only considers a conv as depthwise when `groups > 1`. So, I don't think there are 2 choices here. Let us know if you are ok with this.
   
   Yes. My point is that before we decide whether to change this, could we do a performance comparison, so that we know whether depthwise with multiplier > 1 or the normal convolution performs better.


[GitHub] [incubator-tvm] anijain2305 edited a comment on issue #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
anijain2305 edited a comment on issue #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-610167373
 
 
   @FrozenGene If we have C = 1, then depthwise conv becomes a normal conv. There is basically nothing to accumulate across input channels, and depth_multiplier becomes equal to the number of output channels. What do you think? Is the change good with you?
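
   (A small self-contained NumPy illustration of this claim, independent of TVM; all shapes are made up. The depthwise loop restricts each output channel to a single input channel, while the normal-conv loop sums over all input channels, and the two coincide exactly when C = 1.)
   ```
   import numpy as np

   # Illustrative shapes only: one input channel, depth_multiplier = 4.
   H, W, C, M, KH, KW = 8, 8, 1, 4, 3, 3
   x = np.random.rand(H, W, C)            # HWC input (batch dimension dropped)
   k = np.random.rand(KH, KW, C, M)       # depthwise weights; same shape as HWIO when C == 1
   OH, OW = H - KH + 1, W - KW + 1

   dw = np.empty((OH, OW, C * M))         # depthwise: output (c, m) sees only input channel c
   nc = np.empty((OH, OW, C * M))         # normal conv: each output sums over all C input channels
   for i in range(OH):
       for j in range(OW):
           win = x[i:i + KH, j:j + KW, :]
           for c in range(C):
               for m in range(M):
                   dw[i, j, c * M + m] = np.sum(win[:, :, c] * k[:, :, c, m])
           for o in range(C * M):
               # Reusing the depthwise weights as ordinary filters only works because C == 1.
               nc[i, j, o] = np.sum(win * k[:, :, :, o])

   assert np.allclose(dw, nc)             # nothing to accumulate across channels when C == 1
   ```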


[GitHub] [incubator-tvm] FrozenGene edited a comment on issue #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
FrozenGene edited a comment on issue #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-611288745
 
 
   > @FrozenGene It is not possible to use depthwise conv inside Relay here. `groups` is 1 in this case, and the Relay code only considers a conv as depthwise when `groups > 1`. So, I don't think there are 2 choices here. Let us know if you are ok with this.
   
   Yes. My point is that before we decide whether to change this, could we do a performance comparison, so that we know whether depthwise with multiplier > 1 or the normal convolution performs better.


[GitHub] [incubator-tvm] anijain2305 edited a comment on issue #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
anijain2305 edited a comment on issue #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-608595058
 
 
   @FrozenGene @yzhliu Can you take a look as well? The idea is that if input channel == 1, the depthwise conv becomes a simple conv.


[GitHub] [incubator-tvm] FrozenGene commented on issue #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on issue #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-610737656
 
 
   > @FrozenGene If we have C = 1, then depthwise conv becomes a normal conv. There is basically nothing to accumulate across input channels, and depth_multiplier becomes equal to the number of output channels. What do you think? Is the change good with you?
   
   I think it is ok. I have run into this before: when the input channel is 1 (for example, a grayscale image), we get a depthwise convolution with a multiplier greater than 1 rather than a normal convolution. Would you mind doing a performance comparison? I expect we could get better performance when we use the normal convolution.


[GitHub] [incubator-tvm] anijain2305 commented on issue #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on issue #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-611308252
 
 
   Thanks for the reply. Can you please share what two things we are comparing? I am still confused. Please give me an example with the input shape, depth multiplier, groups, etc.


[GitHub] [incubator-tvm] shoubhik commented on issue #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
shoubhik commented on issue #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-611724091
 
 
   > > Correct. So here, TFLite converts Conv to depthwise conv, where depthwise conv has input_channels = groups = 1 and depth multiplier > 1
   > > This PR converts this type of depthwise conv back to normal conv. Exactly the same way that you are suggesting. (There is not even a way to run Relay depthwise conv with groups = 1)
   > > Let us know if that is ok.
   > 
   > Yes. Convolution should be better, I expect. However, I wish we could have a performance comparison between these two approaches, so that we can decide whether to keep the depthwise convolution with multiplier greater than 1 or, as this PR does, convert back to a normal convolution.
   
   @FrozenGene I think I have not clarified why we are converting the kernel layout from HWOI to HWIO when the number of input channels is 1.
   
   First, let us take the current scenario as an example:
   ```
   params['kernel_layout'] = 'HWOI'
   ```
   
   If you take a network like the one you mentioned before, with a depthwise convolution with input_channel = 1, and try to parse it into a Relay graph using the `from_tflite` API, you get an error like this:
   
   ```
   v0.0.4
   fn (%input: Tensor[(1, 76, 64, 1), uint8], %v_param_1: Tensor[(64, 1), uint8], %v_param_2: Tensor[(1, 64, 1), uint8], %v_param_3: Tensor[(9, 5, 1, 96), uint8], ...) {
     %0 = qnn.subtract(%input, %v_param_1, ...);
     %1 = qnn.mul(%0, %v_param_2, ...);
     %2 = qnn.conv2d(%1, %v_param_3, ..., data_layout="NHWC", kernel_layout="HWOI", out_dtype="int32") in particular dimension 2 conflicts 96 does not match 1dimension 3 conflicts 1 does not match 96; unable to unify: `Tensor[(9, 5, 96, 1), uint8]` and `Tensor[(9, 5, 1, 96), uint8]`; ;
   ```
   The change 
   ```
   params['kernel_layout'] = 'HWIO' if input_c == 1 else 'HWOI'
   ```
   fixes this issue.
   
   This error happens because input_channel is 1, and the [Conv2dRel](https://github.com/apache/incubator-tvm/blob/master/src/relay/op/nn/convolution.h#L132-L391) function fails, since its depthwise handling is guarded by checks for `param->groups > 1`. So this fix is mainly to get the current implementation working.
   
   In order to do the benchmarking you are suggesting, we would need to change the underlying `Conv2dRel` functionality, which I think should perhaps be part of another PR and an RFC.
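
   (To summarize the fix in code: a slightly expanded sketch of the one-line change above; `input_c` and `params` follow the snippets above, and the surrounding `convert_conv` logic is omitted.)
   ```
   # The frontend builds a (9, 5, 1, 96) weight tensor in the example above.
   if input_c == 1:
       # Read the weights as HWIO: I = 1, O = 96, i.e. an ordinary conv2d with
       # 96 filters over a single channel, which Conv2dRel accepts with groups = 1.
       params['kernel_layout'] = 'HWIO'
   else:
       # Usual depthwise case, handled as before.
       params['kernel_layout'] = 'HWOI'
   ```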
   
   


[GitHub] [incubator-tvm] FrozenGene commented on issue #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on issue #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-611884419
 
 
   > > > Correct. So here, TFLite converts Conv to depthwise conv, where depthwise conv has input_channels = groups = 1 and depth multiplier > 1
   > > > This PR converts this type of depthwise conv back to normal conv. Exactly the same way that you are suggesting. (There is not even a way to run Relay depthwise conv with groups = 1)
   > > > Let us know if that is ok.
   > > 
   > > 
   > > Yes. Convolution should be better, I expect. However, I wish we could have a performance comparison between these two approaches, so that we can decide whether to keep the depthwise convolution with multiplier greater than 1 or, as this PR does, convert back to a normal convolution.
   > 
   > @FrozenGene I think I have not clarified why we are converting the kernel layout from HWOI to HWIO when the number of input channels is 1.
   > 
   > First, let us take the current scenario as an example:
   > 
   > ```
   > params['kernel_layout'] = 'HWOI'
   > ```
   > 
   > If you take a network like the one you mentioned before, with a depthwise convolution with input_channel = 1, and try to parse it into a Relay graph using the `from_tflite` API, you get an error like this:
   > 
   > ```
   > v0.0.4
   > fn (%input: Tensor[(1, 76, 64, 1), uint8], %v_param_1: Tensor[(64, 1), uint8], %v_param_2: Tensor[(1, 64, 1), uint8], %v_param_3: Tensor[(9, 5, 1, 96), uint8], ...) {
   >   %0 = qnn.subtract(%input, %v_param_1, ...);
   >   %1 = qnn.mul(%0, %v_param_2, ...);
   >   %2 = qnn.conv2d(%1, %v_param_3, ..., data_layout="NHWC", kernel_layout="HWOI", out_dtype="int32") in particular dimension 2 conflicts 96 does not match 1dimension 3 conflicts 1 does not match 96; unable to unify: `Tensor[(9, 5, 96, 1), uint8]` and `Tensor[(9, 5, 1, 96), uint8]`; ;
   > ```
   > 
   > The change
   > 
   > ```
   > params['kernel_layout'] = 'HWIO' if input_c == 1 else 'HWOI'
   > ```
   > 
   > fixes this issue.
   > 
   > This error happens because input_channel is 1, and the [Conv2dRel](https://github.com/apache/incubator-tvm/blob/master/src/relay/op/nn/convolution.h#L132-L391) function fails, since its depthwise handling is guarded by checks for `param->groups > 1`. So this fix is mainly to get the current implementation working.
   > 
   > In order to do the benchmarking you are suggesting, we would need to change the underlying `Conv2dRel` functionality, which I think should perhaps be part of another PR and an RFC.
   
   Oh, I understand it fully now. Thanks for clarifying. If we don't change Conv2dRel, then yes, we cannot do the benchmark comparison currently. I am fine with this change.


[GitHub] [incubator-tvm] FrozenGene commented on issue #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on issue #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-611330944
 
 
   > Correct. So here, TFLite converts Conv to depthwise conv, where depthwise conv has input_channels = groups = 1 and depth multiplier > 1
   > 
   > 
   > 
   > This PR converts this type of depthwise conv back to normal conv. Exactly the same way that you are suggesting. (There is not even a way to run Relay depthwise conv with groups = 1)
   > 
   > 
   > 
   > Let us know if that is ok.
   
   Yes. Convolution should be better, I expect. However, I wish we could have a performance comparison between these two approaches, so that we can decide whether to keep the depthwise convolution with multiplier greater than 1 or, as this PR does, convert back to a normal convolution.


[GitHub] [incubator-tvm] anijain2305 commented on issue #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on issue #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-611885516
 
 
   Thanks @shoubhik @FrozenGene. This is merged!


[GitHub] [incubator-tvm] anijain2305 commented on issue #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on issue #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-608595058
 
 
   @FrozenGene Can you take a look as well? The idea is that if input channel == 1, the depthwise conv becomes a simple conv.


[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on a change in pull request #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#discussion_r403206076
 
 

 ##########
 File path: python/tvm/relay/frontend/tflite.py
 ##########
 @@ -1318,7 +1317,9 @@ def convert_conv(self, op, conv_type):
         if is_depthwise_conv:
 
 Review comment:
   Can we move the condition of input channels == 1 to where we compute `is_depthwise_conv`?
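
   (A rough sketch of what this suggestion might look like; `input_c` is an assumed name, and the real condition in `convert_conv` may differ since the weight handling would have to follow.)
   ```
   # Treat a "depthwise" TFLite conv over a single input channel as an ordinary
   # convolution up front, instead of special-casing input_c == 1 later when the
   # kernel layout is chosen.
   is_depthwise_conv = conv_type == 'depthwise' and input_c > 1
   ```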


[GitHub] [incubator-tvm] FrozenGene commented on issue #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on issue #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-611310639
 
 
   > Thanks for the reply. Can you please share what two things we are comparing? I am still confused. Please give me an example with the input shape, depth multiplier, groups, etc.
   
   I am referring to the issue we talked about, https://github.com/tensorflow/tensorflow/issues/23563. For example, if we feed an input of shape (1, 224, 224, 1), the TensorFlow graph handles it with a normal convolution, i.e. `input -> convolution`. However, TFLite changes that convolution into a depthwise convolution with a multiplier greater than 1, and the input channel will be 1. Like this picture from the TensorFlow issue:
   ![image](https://user-images.githubusercontent.com/7287321/78855497-a5766f80-7a56-11ea-92d3-96590ffb4fae.png)
   I suspect that if we keep the original convolution instead of changing it to a depthwise convolution with a multiplier greater than 1, like this:
   ![image](https://user-images.githubusercontent.com/7287321/78855555-c8088880-7a56-11ea-8fe7-c4bb0183a998.png)
   we could get better performance.
   


[GitHub] [incubator-tvm] shoubhik commented on issue #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
shoubhik commented on issue #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-608610213
 
 
   The context for the change is this - https://github.com/tensorflow/tensorflow/issues/23563


[GitHub] [incubator-tvm] anijain2305 commented on issue #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on issue #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-610167373
 
 
   @FrozenGene If we have C = 1, then depthwise conv becomes a normal conv. There is basically nothing to accumulate across input channels, and depth_multiplier becomes equal to the number of output channels. What do you think? Is the change good?


[GitHub] [incubator-tvm] anijain2305 commented on issue #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on issue #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-611319144
 
 
   Correct. So here, TFLite converts Conv to depthwise conv, where the depthwise conv has input_channels = groups = 1 and depth multiplier > 1.
   
   This PR converts this type of depthwise conv back to a normal conv, exactly the same way that you are suggesting. (There is not even a way to run a Relay depthwise conv with groups = 1.)
   
   Let us know if that is ok.


[GitHub] [incubator-tvm] shoubhik commented on issue #5230: Adding support for TFLite QnnSubtract operator.

Posted by GitBox <gi...@apache.org>.
shoubhik commented on issue #5230: Adding support for TFLite QnnSubtract operator.
URL: https://github.com/apache/incubator-tvm/pull/5230#issuecomment-608540108
 
 
   @anijain2305 and @zhiics, can you take a look at this PR?
