Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/10/28 21:31:16 UTC

[GitHub] [incubator-tvm] masahi commented on pull request #6782: [Torch, QNN] Support dynamic quantization flow to enable importing quantized transformer models

masahi commented on pull request #6782:
URL: https://github.com/apache/incubator-tvm/pull/6782#issuecomment-718220692


   > Overall looks good. Is this enough to run qBERT? I am surprised that we don't need to work on requantize here
   
   Yes, this is enough. The dynamic quantization flow replaces the fp32 dense with a runtime qparam calculation + int8 dense, leaving everything else fp32.
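
   As a rough sketch of that flow (the names and NumPy helpers here are hypothetical, for illustration only, not the actual Relay ops), the runtime qparam calculation derives the scale and zero point from the observed activation range and quantizes the input for the int8 dense:

```python
import numpy as np

def dynamic_qparams(x_fp32, qmin=-128, qmax=127):
    # Derive scale/zero point at runtime from the observed fp32 range,
    # so no calibration data is needed (asymmetric int8 quantization).
    rmin = min(float(x_fp32.min()), 0.0)
    rmax = max(float(x_fp32.max()), 0.0)
    scale = (rmax - rmin) / (qmax - qmin)
    if scale == 0.0:  # guard against an all-zero input
        scale = 1.0
    zero_point = int(round(qmin - rmin / scale))
    return scale, zero_point

def quantize_int8(x_fp32, scale, zero_point, qmin=-128, qmax=127):
    # Quantize the fp32 activation so it can feed an int8 dense.
    q = np.round(x_fp32 / scale) + zero_point
    return np.clip(q, qmin, qmax).astype(np.int8)
```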
   
   The output of the `linear_dynamic` op is fp32, so I don't think we need requantize. PyTorch just casts the int32 output to fp32 and multiplies it by the input and weight scales, which I followed here. The corresponding implementation is here:
   https://github.com/pytorch/FBGEMM/blob/master/include/fbgemm/OutputProcessing-inl.h#L232
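
   In other words, instead of requantizing the int32 accumulator back to int8 with output qparams, the result is dequantized directly to fp32, so no output scale or zero point needs to be known ahead of time. A minimal sketch of that step (it mirrors the linked FBGEMM path but is not its actual code; weights are assumed symmetrically quantized, i.e. weight zero point 0, and the accumulator is assumed to already include the activation zero-point correction):

```python
import numpy as np

def dequantize_output(acc_int32, input_scale, weight_scale, bias_fp32=None):
    # Cast the int32 GEMM accumulator to fp32 and scale by the product of
    # the input and weight scales; the op's output therefore stays fp32.
    out = acc_int32.astype(np.float32) * (input_scale * weight_scale)
    return out if bias_fp32 is None else out + bias_fp32
```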


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org