Posted to commits@mxnet.apache.org by sr...@apache.org on 2019/02/05 04:53:23 UTC
[incubator-mxnet] branch master updated: modifying SyncBN doc for FP16 use case (#14041)
This is an automated email from the ASF dual-hosted git repository.
srochel pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 3f6778b modifying SyncBN doc for FP16 use case (#14041)
3f6778b is described below
commit 3f6778b0e88c88d010f40cb3555c822430496acb
Author: Manu Seth <22...@users.noreply.github.com>
AuthorDate: Mon Feb 4 20:53:05 2019 -0800
modifying SyncBN doc for FP16 use case (#14041)
LGTM
---
python/mxnet/gluon/contrib/nn/basic_layers.py | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/python/mxnet/gluon/contrib/nn/basic_layers.py b/python/mxnet/gluon/contrib/nn/basic_layers.py
index 28fea15..56f0809 100644
--- a/python/mxnet/gluon/contrib/nn/basic_layers.py
+++ b/python/mxnet/gluon/contrib/nn/basic_layers.py
@@ -165,7 +165,10 @@ class SyncBatchNorm(BatchNorm):
Standard BN [1]_ implementation only normalize the data within each device.
SyncBN normalizes the input within the whole mini-batch.
- We follow the sync-onece implmentation described in the paper [2]_.
+ We follow the implementation described in the paper [2]_.
+
+ Note: Current implementation of SyncBN does not support FP16 training.
+ For FP16 inference, use standard nn.BatchNorm instead of SyncBN.
Parameters
----------
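
The restriction added in this docstring (no FP16 training with SyncBN) stems from the general hazard of accumulating batch statistics in half precision. As an illustrative sketch only -- not MXNet code and not part of this commit -- the following NumPy snippet shows how a float16 accumulator can overflow while computing the mean of perfectly representable values, whereas a float32 accumulator is fine:

```python
import numpy as np

# Illustrative sketch (assumption: plain NumPy, not MXNet internals).
# Values around 100 are exactly representable in float16, but summing
# 4096 of them exceeds the float16 maximum (~65504), so the running
# sum overflows to inf before the division by N.
x = np.random.RandomState(0).normal(loc=100.0, scale=0.1, size=4096)

mean32 = x.astype(np.float32).mean(dtype=np.float32)  # float32 accumulator
mean16 = x.astype(np.float16).mean(dtype=np.float16)  # float16 accumulator

print(float(mean32))  # close to 100.0
print(float(mean16))  # overflows: not finite
```

Frameworks typically avoid this by keeping reductions in float32 even when activations are float16; standard nn.BatchNorm does so for inference, which is why the note recommends it over SyncBN in the FP16 case.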