Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/07/02 15:19:41 UTC

[GitHub] [incubator-mxnet] tianylijun edited a comment on issue #15432: [quantization] is there any plan to support 6bits quantization , as 6 bits quantization more efficient than 8 bits on arm cpu

tianylijun edited a comment on issue #15432: [quantization] is there any plan to support 6bits quantization , as 6 bits quantization more efficient than 8 bits on arm cpu
URL: https://github.com/apache/incubator-mxnet/issues/15432#issuecomment-507723673
 
 
   > Thanks for the proposal.
   > Do you have any technical details about why 6 bits is more efficient than 8 bits?
   
   Since current ARM SIMD (NEON) does not support int32 += int8 * int8 (except on Cortex-A55/A75), an 8-bit MAC overflows when accumulated in int16 inside a 4x4 GEMM block: 127 * 127 * 4 = 64516 exceeds the int16 range. The int8 operands therefore have to be widened to int16 first and accumulated as int32 += int16 * int16. With 6-bit values the partial sums stay within int16 inside a 4x4 block (at most 32 * 32 * 4 = 4096), so 6-bit quantization can use int16 += int8 * int8 MACs directly, which is more efficient than the 8-bit path.
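   
   To make the arithmetic concrete, here is a minimal sketch of the two inner-loop patterns using ARM NEON intrinsics (arm_neon.h). The helper names (`mac_8bit`, `mac_6bit`) and the four-accumulations-per-4x4-block assumption are mine for illustration, not MXNet code; the sketch only shows why the 6-bit MAC needs fewer instructions.
   
   ```c
   #include <arm_neon.h>
   #include <stdint.h>
   
   /*
    * 8-bit path: a single int8*int8 product can reach 127*127 = 16129, and
    * four of them (64516) already exceed the int16 range (32767). The operands
    * are therefore sign-extended to int16 first and accumulated as
    * int32 += int16 * int16, costing extra widening work on every step.
    * The 8 products land in 4 int32 lanes, which get reduced at the end.
    */
   static inline int32x4_t mac_8bit(int32x4_t acc, int8x8_t a, int8x8_t b)
   {
       int16x8_t a16 = vmovl_s8(a);                                  /* int8 -> int16 */
       int16x8_t b16 = vmovl_s8(b);
       acc = vmlal_s16(acc, vget_low_s16(a16),  vget_low_s16(b16));  /* int32 += int16*int16 */
       acc = vmlal_s16(acc, vget_high_s16(a16), vget_high_s16(b16));
       return acc;
   }
   
   /*
    * 6-bit path: values fit in [-32, 31], so four products sum to at most
    * 32*32*4 = 4096, which stays inside int16. The MAC can therefore run
    * directly as int16 += int8 * int8 (VMLAL.S8), one instruction per step,
    * with a single widening add into the int32 accumulator per block.
    */
   static inline int16x8_t mac_6bit(int16x8_t acc, int8x8_t a, int8x8_t b)
   {
       return vmlal_s8(acc, a, b);   /* int16 += int8*int8 */
   }
   ```
   
   (On Cortex-A55/A75 the ARMv8.2 dot-product extension lets int8 products accumulate straight into int32 via SDOT, which is why those cores are the exception above.)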

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services