Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/08/05 08:40:20 UTC

[GitHub] xinyu-intel commented on issue #11747: Flaky test test_quantization_mkldnn.test_requantize_int32_to_int8

URL: https://github.com/apache/incubator-mxnet/issues/11747#issuecomment-410505095
 
 
   @KellenSunderland @marcoabreu @reminisce Hi, I think we should set the absolute error to one [ULP](https://en.wikipedia.org/wiki/Unit_in_the_last_place): when we check whether the int8 results are equal, the absolute tolerance should be 1 (see the sketch below).
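   A minimal sketch of the comparison I have in mind, using NumPy's testing helper in place of whatever helper the test actually calls (the arrays are made up for illustration):

   ```python
   import numpy as np

   # Hypothetical int8 outputs from the MKLDNN and NumPy requantize paths;
   # the first elements land on opposite sides of a rounding boundary.
   qdata_mkldnn = np.array([63, -12, 101], dtype='int8')
   qdata_np     = np.array([64, -12, 101], dtype='int8')

   # Cast to int32 before comparing so the difference cannot overflow int8,
   # then allow an absolute error of 1, i.e. one quantization step.
   np.testing.assert_allclose(qdata_mkldnn.astype('int32'),
                              qdata_np.astype('int32'),
                              atol=1)
   ```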
   In this case, we convert int32 data to float32 and then to int8. In the last step, `(np.sign(data) * np.minimum(np.abs(data) * scale + 0.5, quantized_range)).astype('int8')`, adding 0.5 to round half away from zero before truncating can introduce an error of one ULP.
   For example, the MKLDNN float 63.4999 will not round up to 64 after adding 0.5, but the NumPy float 63.5000 will. So a float32 absolute error of only 1e-4 can expand to an int8 error of 1 after the conversion, as the sketch below demonstrates.
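   A self-contained demonstration of that effect; `quantize_to_int8` and the scale of 1.0 are made up for illustration, but the rounding expression is the one quoted above:

   ```python
   import numpy as np

   def quantize_to_int8(data, scale, quantized_range=127):
       # Same scheme as the test: scale, add 0.5 to round half away
       # from zero, clip to the int8 range, then truncate toward zero.
       return (np.sign(data) * np.minimum(np.abs(data) * scale + 0.5,
                                          quantized_range)).astype('int8')

   scale = np.float32(1.0)
   # Two float32 inputs that differ by only ~1e-4 (well within the usual
   # float tolerance) land on opposite sides of the rounding boundary.
   a = quantize_to_int8(np.float32(63.4999), scale)  # 63.9999 truncates to 63
   b = quantize_to_int8(np.float32(63.5000), scale)  # 64.0 truncates to 64
   print(a, b)  # 63 64 -> the int8 results differ by 1
   ```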
   I'll open a PR to fix it.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services