Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/02/17 01:21:23 UTC

[GitHub] eric-haibin-lin commented on issue #13709: Why FP16 training speed is too slow on Tesla T4 in Gluon?

eric-haibin-lin commented on issue #13709: Why FP16 training speed is too slow on Tesla T4 in Gluon?
URL: https://github.com/apache/incubator-mxnet/issues/13709#issuecomment-464405403
 
 
   Were you using self-attention blocks with the batch_dot operator? There was an FP16 improvement for it in https://github.com/apache/incubator-mxnet/pull/13716
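
   For context, a minimal sketch of the kind of FP16 batch_dot usage the question refers to: attention scores computed as batch_dot(Q, K^T) on half-precision inputs. The shapes, variable names, and random inputs below are illustrative assumptions, not taken from the issue.

      import mxnet as mx

      # Run on GPU if one is available (FP16 speedups need GPU TensorCores)
      ctx = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu()

      batch, seq_len, dim = 8, 128, 64  # illustrative sizes
      q = mx.nd.random.uniform(shape=(batch, seq_len, dim), ctx=ctx).astype('float16')
      k = mx.nd.random.uniform(shape=(batch, seq_len, dim), ctx=ctx).astype('float16')

      # batch_dot(Q, K^T) -> (batch, seq_len, seq_len) scaled attention scores in FP16
      scores = mx.nd.batch_dot(q, k, transpose_b=True) / (dim ** 0.5)
      print(scores.dtype, scores.shape)

   If a model's self-attention spends most of its time in this batch_dot call, the FP16 optimization in the PR linked above is the relevant change to test against.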

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services