Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/12/07 10:52:14 UTC

[GitHub] [tvm] Goose-Bomb commented on issue #13545: [Bug][FQ2I] Failed to run FakeQuantizationToInteger on QDQ ONNX model

Goose-Bomb commented on issue #13545:
URL: https://github.com/apache/tvm/issues/13545#issuecomment-1340775536

   > Ok reproduced, I'll take a look.
   > 
   > BTW, this is an interesting model. How did you quantize it?
   
   The model is quantized in PyTorch using Torch.FX, then exported with PyTorch's ONNX exporter. (However, the ONNX exporter currently has only limited support for quantized models; some ops, such as quantized `Concat`, are not supported yet, and the PyTorch team is working on a [universal quant handler](https://github.com/pytorch/pytorch/issues/87508) for the ONNX exporter.)
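   
   A minimal sketch of that workflow (assuming PyTorch 1.13-era `torch.ao.quantization` APIs; the toy model, shapes, and random calibration data below are placeholders for the real ones):
   
   ```python
   import torch
   import torch.nn as nn
   from torch.ao.quantization import get_default_qconfig_mapping
   from torch.ao.quantization.quantize_fx import convert_fx, prepare_fx
   
   # Hypothetical toy model standing in for the real network.
   model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3)).eval()
   example_inputs = (torch.randn(1, 3, 32, 32),)
   
   # 1. Insert observers according to the qconfig mapping.
   qconfig_mapping = get_default_qconfig_mapping("qnnpack")
   prepared = prepare_fx(model, qconfig_mapping, example_inputs)
   
   # 2. Calibrate on representative data (random tensors here for brevity).
   with torch.no_grad():
       for _ in range(8):
           prepared(torch.randn(1, 3, 32, 32))
   
   # 3. Replace observers with actual quantize/dequantize ops.
   quantized = convert_fx(prepared)
   
   # 4. Export; supported quantized ops are lowered to ONNX QDQ node pairs.
   torch.onnx.export(quantized, example_inputs, "model_qdq.onnx", opset_version=13)
   ```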
   
   The exported model follows the QDQ (QuantizeLinear/DequantizeLinear) pattern, one of the quantization representations used by ONNX Runtime's [quantizer](https://onnxruntime.ai/docs/performance/quantization.html).
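   
   At the graph level, QDQ means every quantized tensor is wrapped in a `QuantizeLinear` -> `DequantizeLinear` pair, and consumers such as TVM's FQ2I pass pattern-match those pairs (plus the float op between them) back into integer kernels. A tiny hand-built example of the pattern using the `onnx.helper` API (names, shapes, and the scale/zero-point values are illustrative):
   
   ```python
   import onnx
   from onnx import TensorProto, helper
   
   # Per-tensor quantization parameters (illustrative values).
   scale = helper.make_tensor("scale", TensorProto.FLOAT, [], [0.02])
   zero_point = helper.make_tensor("zp", TensorProto.INT8, [], [0])
   
   # QDQ pair around a float Relu: float -> int8 -> float -> Relu.
   q = helper.make_node("QuantizeLinear", ["x", "scale", "zp"], ["x_q"])
   dq = helper.make_node("DequantizeLinear", ["x_q", "scale", "zp"], ["x_dq"])
   relu = helper.make_node("Relu", ["x_dq"], ["y"])
   
   graph = helper.make_graph(
       [q, dq, relu],
       "qdq_example",
       [helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 4])],
       [helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 4])],
       initializer=[scale, zero_point],
   )
   model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
   onnx.checker.check_model(model)
   ```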

