Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/11/07 17:21:21 UTC

[GitHub] szhengac opened a new issue #11796: Batch_dot does not support FP16 well

URL: https://github.com/apache/incubator-mxnet/issues/11796
 
 
   The `batch_dot` operator does not support FP16 well and can make training slower than using FP32. This was tested with the [Transformer](https://github.com/dmlc/gluon-nlp/blob/master/scripts/nmt/train_transformer.py) model in GluonNLP. FP16 support for this operator has already been added in [NVIDIA's MXNet container](https://docs.nvidia.com/deeplearning/dgx/mxnet-release-notes/rel_17.11.html#rel_17.11), so I think it would be good to enable it in master as well.
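   
   For reference, a minimal benchmarking sketch of the kind that exposes the issue. The shapes below are hypothetical (roughly matching a Transformer attention computation) and not taken from the original report:
   
   ```python
   import time
   import mxnet as mx
   
   def bench_batch_dot(dtype, ctx=mx.gpu(0), reps=100):
       # Example shapes: (batch, seq_len, dim) x (batch, dim, seq_len).
       a = mx.nd.random.uniform(shape=(32, 64, 512), dtype=dtype, ctx=ctx)
       b = mx.nd.random.uniform(shape=(32, 512, 64), dtype=dtype, ctx=ctx)
       mx.nd.waitall()                      # finish asynchronous setup work
       start = time.time()
       for _ in range(reps):
           c = mx.nd.batch_dot(a, b)
       mx.nd.waitall()                      # block until all kernels complete
       return (time.time() - start) / reps
   
   for dt in ('float32', 'float16'):
       print(dt, bench_batch_dot(dt))
   ```
   
   On hardware with Tensor Cores one would expect the FP16 case to be at least as fast as FP32; the report is that it is instead slower in the master build.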
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services