Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/01/26 15:38:27 UTC

[GitHub] jinhuang415 commented on issue #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

URL: https://github.com/apache/incubator-mxnet/pull/9552#issuecomment-360818295
 
 
   Hi @reminisce, may I ask a few questions:
   (1) Do we always need to compute the min/max range of the weight parameters at run time? Since weights are static, has pre-computing their min/max range offline been considered as a way to improve performance? If the run-time calculation is required, do we have any tests or statistics on how much overhead it adds?
   (2) From "quantization_github.pptx", it looks like model accuracy drops slightly once the number of calibration batches grows beyond a certain point. Intuitively, more calibration batches should capture more accurate ranges, so accuracy should improve with larger calibration sets. Do we have any insight into why accuracy drops as the number of calibration batches increases?
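   As context for question (1), here is a minimal numpy sketch of symmetric min/max int8 quantization (illustrative only, not the PR's actual implementation; the function name and symmetric-range choice are my assumptions). It shows why the range of a static weight tensor can be computed once offline, while activations would still need run-time or calibrated ranges:

   ```python
   import numpy as np

   def minmax_quantize(x, x_min=None, x_max=None):
       """Quantize float32 x to int8 using a [min, max] range.

       If x_min/x_max are None, they are computed at run time (the
       overhead that question (1) asks about). For static weights
       they can be pre-computed once and passed in.
       """
       if x_min is None:
           x_min = float(x.min())  # run-time pass over the tensor
       if x_max is None:
           x_max = float(x.max())  # run-time pass over the tensor
       # symmetric range around zero
       scale = 127.0 / max(abs(x_min), abs(x_max))
       q = np.clip(np.round(x * scale), -127, 127).astype(np.int8)
       return q, scale

   # Weights are static, so their range can be pre-computed offline:
   w = np.random.randn(64, 64).astype(np.float32)
   w_min, w_max = float(w.min()), float(w.max())
   q_w, w_scale = minmax_quantize(w, w_min, w_max)  # no run-time min/max pass
   ```

   With pre-computed ranges, the per-inference cost for weights reduces to the (cacheable) quantization itself, avoiding an extra full pass over each weight tensor.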

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services