Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/01/29 22:15:26 UTC

[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #17466: Bert gemms true fp16

URL: https://github.com/apache/incubator-mxnet/pull/17466#discussion_r372660973
 
 

 ##########
 File path: docs/static_site/src/pages/api/faq/env_var.md
 ##########
 @@ -353,6 +353,10 @@ If ctypes is used, it must be `mxnet._ctypes.ndarray.NDArrayBase`.
   - Values: 0(false) or 1(true) ```(default=1)```
   - This variable controls whether to use the MKL-DNN backend in fused RNN operator for CPU context. There are two fusion implementations of RNN operator in MXNet. The MKL-DNN implementation has a better performance than the naive one, but the latter is more stable in the backward operation currently.
 
+* MXNET_FC_TRUE_FP16
+  - Values: 0(false) or 1(true) ```(default=0)```
+  - If this variable is set to true, MXNet will perform true FP16 computation in cuBLAS GEMMs when the input datatype is float16.
 
 Review comment:
   Could you describe it in slightly more detail? I don't think the average user will understand what "true fp16" means. Maybe something like this:
   If this variable is set to true, MXNet will perform fp16 accumulation when using cuBLAS and the input datatype is set to float16. This can increase the speed of the computation, but might result in a loss of accuracy, which makes this setting useful mainly for inference use cases.
   
   @eric-haibin-lin What do you think?
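  The accuracy tradeoff mentioned above can be illustrated without MXNet or a GPU. The sketch below (plain NumPy, not MXNet's actual cuBLAS path) compares accumulating a long dot-product-style sum in fp32 versus rounding every partial sum to fp16, which is the kind of behavior MXNET_FC_TRUE_FP16=1 would request from cuBLAS:

```python
import numpy as np

# 4096 terms of fp16(0.1); the exact fp16 value of 0.1 is 0.0999755859375.
x = np.float16(0.1)
n = 4096

# fp32 accumulation (MXNet's default for fp16 inputs): partial sums are
# kept in float32, so each addition here is exact for these values.
acc32 = np.float32(0.0)
for _ in range(n):
    acc32 = np.float32(acc32 + np.float32(x))

# fp16 accumulation: every partial sum is rounded back to fp16. Once the
# running sum reaches 256, the fp16 spacing (0.25) exceeds twice the
# addend, so adding 0.0999... rounds back to the same value and the sum
# stalls at 256.0.
acc16 = np.float16(0.0)
for _ in range(n):
    acc16 = np.float16(acc16 + x)

print(float(acc32), float(acc16))  # 409.5 vs 256.0
```

  This is a deliberately extreme case, but it shows why fp16 accumulation is mainly acceptable for inference, where small per-layer errors do not compound through gradient updates.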

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services