Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/02/05 14:35:06 UTC

[GitHub] xinyu-intel commented on issue #13697: [MKLDNN] Enable signed int8 support for convolution.

URL: https://github.com/apache/incubator-mxnet/pull/13697#issuecomment-460659526
 
 
   @reminisce 
   I tested resnet152 and inception-bn on Tesla V100 and resnet152 looks good. However, there is a quantization bug with inception-bn because  #13297 enabled quantized_concat on CPU side. It seems that #14060 is fixing this bug. Below are accuracy numbers of these two models(after apply #14060 ).
   
   |MODE|ResNet152|Inception-bn|
   |:-----:|:----------:|:----------:|
   |FP32|77.18%/93.00%|72.38%/90.61%|
   |online|75.46%/92.24%|72.08%/90.31%|
   |5 batch naive|75.41%/92.10%|71.81%/90.29%|
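
As a quick sanity check, the per-mode accuracy drop implied by the table can be computed directly. This is a minimal Python sketch, not part of the PR: the numbers are the first (top) figures copied from each cell above, and `accuracy_drop` is a hypothetical helper introduced only for illustration.

```python
def accuracy_drop(fp32_pct, quantized_pct):
    """Return the absolute accuracy drop in percentage points."""
    return round(fp32_pct - quantized_pct, 2)

# First percentage of each cell in the table above.
results = {
    "ResNet152":    {"FP32": 77.18, "online": 75.46, "5 batch naive": 75.41},
    "Inception-bn": {"FP32": 72.38, "online": 72.08, "5 batch naive": 71.81},
}

for model, scores in results.items():
    fp32 = scores["FP32"]
    for mode in ("online", "5 batch naive"):
        print(f"{model} {mode}: -{accuracy_drop(fp32, scores[mode])} points")
```

So both calibration modes stay well within two percentage points of the FP32 baseline, with inception-bn losing the least.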

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services