Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/10/18 23:25:34 UTC

[GitHub] [incubator-mxnet] anirudh2290 commented on issue #16531: Just use fp16 is slower than mixed_precision! Why?

anirudh2290 commented on issue #16531: Just use fp16 is slower than mixed_precision! Why?
URL: https://github.com/apache/incubator-mxnet/issues/16531#issuecomment-544003405
 
 
   By "slower" do you mean convergence? Can you elaborate on your use case: are you running it for training or inference? Can you provide a reproducible script? It is possible that convergence is slower with all-FP16 because certain reduce operators need to run in FP32 to retain good accuracy.
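   A minimal sketch (not from this thread, and plain NumPy rather than MXNet) of why mixed-precision tools keep reduction operators in FP32: naively accumulating many small FP16 values stalls once the running sum's spacing (ULP) exceeds the addends, while an FP32 accumulator over the same FP16 inputs stays accurate. All names here are illustrative.

```python
import numpy as np

# 20000 copies of ~0.01 stored as FP16; the true sum is ~200.
x = np.full(20000, 0.01, dtype=np.float16)

acc16 = np.float16(0.0)  # pure-FP16 reduce: round to FP16 after every add
acc32 = np.float32(0.0)  # AMP-style reduce: FP32 accumulator over FP16 inputs
for v in x:
    acc16 = np.float16(acc16 + v)  # stalls once ULP(acc16) outweighs v
    acc32 += np.float32(v)

print(float(acc16))  # far below 200: the FP16 accumulator saturated
print(float(acc32))  # close to 200
```

   The FP16 accumulator stops growing around 32, because at that magnitude the FP16 spacing (0.03125) is more than twice the addend, so each addition rounds back to the same value. This is the kind of accuracy loss that keeping reduce ops in FP32 avoids.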

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services