Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/09/29 19:07:12 UTC

[GitHub] [tvm] apeskov opened a new pull request #9152: Pytorch qnn frontend. Avoid explicit pad in case of 'zero_point == 0'

apeskov opened a new pull request #9152:
URL: https://github.com/apache/tvm/pull/9152


   An explicit pad is not required when the zero point is zero.
   
   Moreover, specifying an explicit pad at the frontend level looks suspicious: the `qnn.conv2d` semantics already provide proper handling of padding with a non-trivial zero point:
   https://github.com/apache/tvm/blob/725ae75af4997ff3a5107cc82d64609773de23a0/src/relay/qnn/op/convolution.cc#L224-L226
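   The equivalence the PR relies on can be illustrated outside TVM. Under affine quantization a stored value `q` represents `scale * (q - zero_point)`, so padding the quantized tensor with the zero point corresponds to padding the real-valued tensor with 0, and padding with a literal 0 is only correct when `zero_point == 0`. A minimal NumPy sketch (illustrative only, not TVM API):

```python
import numpy as np

# Affine quantization: real value = scale * (q - zero_point).
scale, zero_point = 0.5, 3
q = np.array([4, 7, 5], dtype=np.int32)

# Padding the quantized tensor with the zero point...
q_padded = np.pad(q, 1, constant_values=zero_point)

# ...is equivalent to padding the real-valued tensor with 0.
real_padded = np.pad(scale * (q - zero_point), 1, constant_values=0.0)
assert np.allclose(scale * (q_padded - zero_point), real_padded)

# Only when zero_point == 0 does padding with a literal 0 coincide
# with this, which is why the explicit pad can be dropped in that case.
```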


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] apeskov commented on pull request #9152: Pytorch qnn frontend. Avoid explicit pad in case of 'zero_point == 0'

apeskov commented on pull request #9152:
URL: https://github.com/apache/tvm/pull/9152#issuecomment-983028013


   @masahi Thank you for the answer. Could you please specify which particular BYOC backend doesn't support pads for qnn.conv2d? Or do you mean that they don't support padding with a non-zero zero point? Does this patch conflict with that limitation?
   
   I still need this PR. Otherwise I will have to write a pass that merges the pad back into the convolution.





[GitHub] [tvm] masahi closed pull request #9152: Pytorch qnn frontend. Avoid explicit pad in case of 'zero_point == 0'

masahi closed pull request #9152:
URL: https://github.com/apache/tvm/pull/9152


   





[GitHub] [tvm] masahi commented on pull request #9152: Pytorch qnn frontend. Avoid explicit pad in case of 'zero_point == 0'

masahi commented on pull request #9152:
URL: https://github.com/apache/tvm/pull/9152#issuecomment-983085744


   This is not about a limitation of BYOC. Originally you said that the `qnn.conv2d` semantics already handle padding with a non-trivial zero point, but this is not enforced in BYOC cases, where that lowering code is not used.
   
   I think the explicit padding makes the requirement to pad by the zero point clearer. In fact, QNN didn't properly pad with the zero point until someone pointed out that bug. So I believe it is better to make this requirement obvious than to rely on each BYOC backend to properly handle padding by the zero point.
   
   For your purpose, there is already a pass that folds pad into conv2d: https://github.com/apache/tvm/blob/78657e1f8b2c97c3acc389e2b757c6ac8174388d/src/relay/transforms/fold_explicit_padding.cc#L41. You can extend that pass to handle `qnn.conv2d`.
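   The suggested extension can be sketched abstractly. The sketch below is plain Python over a toy op list, not the Relay pattern API; the function name and graph representation are hypothetical. The key point it encodes: `nn.pad` followed by `qnn.conv2d` may only be folded into the conv's `padding` attribute when the pad value equals the input zero point, since that is the only case where the fold preserves quantized semantics.

```python
# Hypothetical, simplified folding pass (not TVM API): merge an explicit
# pad node into the following conv's `padding` attribute, mirroring what
# an extended FoldExplicitPadding could do for qnn.conv2d.

def fold_pad_into_conv(ops, input_zero_point):
    """ops: a linear list of op dicts, e.g.
    {"op": "pad", "pad_width": (1, 1), "pad_value": 0} followed by
    {"op": "qnn.conv2d", "padding": (0, 0)}."""
    out = []
    i = 0
    while i < len(ops):
        node = ops[i]
        nxt = ops[i + 1] if i + 1 < len(ops) else None
        if (node["op"] == "pad"
                # Fold is only sound when the pad value is the zero point.
                and node["pad_value"] == input_zero_point
                and nxt is not None
                and nxt["op"] == "qnn.conv2d"):
            # Add the pad widths onto the conv's padding attribute.
            merged = dict(nxt)
            merged["padding"] = tuple(
                p + c for p, c in zip(node["pad_width"], nxt["padding"]))
            out.append(merged)
            i += 2
        else:
            out.append(node)
            i += 1
    return out
```

   A real implementation would use the Relay dataflow pattern matcher, as the existing SimplifyExplicitPad rewrite in fold_explicit_padding.cc does.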





[GitHub] [tvm] masahi commented on pull request #9152: Pytorch qnn frontend. Avoid explicit pad in case of 'zero_point == 0'

masahi commented on pull request #9152:
URL: https://github.com/apache/tvm/pull/9152#issuecomment-930603972


   An explicit pad is necessary for BYOC use cases where we don't use QNN lowering, so the pad value needs to be available at the QNN graph level.





[GitHub] [tvm] apeskov commented on pull request #9152: Pytorch qnn frontend. Avoid explicit pad in case of 'zero_point == 0'

apeskov commented on pull request #9152:
URL: https://github.com/apache/tvm/pull/9152#issuecomment-983032474


   In general, the current situation, where we try to work around a limitation of a particular BYOC runtime at the frontend level, is quite suspicious. From my point of view, the frontend should convert the model as is. If a BYOC backend has a limitation, it should apply converter passes to eliminate it.

