Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/01/26 22:52:22 UTC

[GitHub] zhanghang1989 opened a new issue #9580: BatchNorm Not Backward Correctly in Evaluation Mode

URL: https://github.com/apache/incubator-mxnet/issues/9580
 
 
   ## Description
   
   I found that backward in evaluation mode is already supported in the source code (https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/batch_norm.cu#L368-L375), but that code path is never taken because the wrong context is passed during execution.
   
   ## Steps to reproduce
   Add ``LOG(INFO) << "SetupFlags: ctx.is_train : " << ctx.is_train;`` to the ``SetupFlags`` function at https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/batch_norm.cu#L578.
   
   Compile MXNet with ``USE_CUDNN=0`` to make sure cuDNN is not called.
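   For reference, a make-based build from the repo root along these lines should produce such a binary (the CUDA path and job count are illustrative; adjust for your setup):

   ```shell
   # build MXNet with CUDA but without cuDNN, so the native
   # batch_norm.cu kernels (not the cuDNN wrappers) are exercised
   make -j"$(nproc)" USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=0
   ```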
   
   ## Minimum reproducible example
   ```python
   import mxnet as mx
   from mxnet import autograd
   from mxnet.gluon import nn

   B, C, H, W = 2, 3, 8, 8  # example batch/channel/spatial sizes

   x = mx.nd.random.uniform(0, 1, shape=(B, C, H, W)).square().as_in_context(mx.gpu(0))
   layer1 = nn.BatchNorm(in_channels=C)
   layer1.initialize(ctx=mx.gpu(0))
   with autograd.record(train_mode=False):
       y1 = layer1(x)
       loss1 = y1.sum()
   loss1.backward()
   ```
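   In evaluation mode BatchNorm normalizes with the stored running statistics, which are constants, so the gradient with respect to the input should reduce to the per-channel scale ``gamma / sqrt(running_var + eps)``. A NumPy sketch of that expected gradient, checked against a finite difference (illustrative only, not MXNet code; names like ``batchnorm_eval`` are hypothetical):

   ```python
   import numpy as np

   def batchnorm_eval(x, gamma, beta, running_mean, running_var, eps=1e-5):
       # eval-mode forward: normalize with the stored running statistics
       inv_std = 1.0 / np.sqrt(running_var + eps)
       return gamma * (x - running_mean) * inv_std + beta

   def batchnorm_eval_grad_x(x, gamma, running_var, eps=1e-5):
       # running stats are constants in eval mode, so dy/dx is just the
       # per-channel scale gamma / sqrt(running_var + eps)
       return np.broadcast_to(gamma / np.sqrt(running_var + eps), x.shape).copy()

   # numeric check with a central finite difference (shape (N, C) for brevity)
   rng = np.random.default_rng(0)
   x = rng.uniform(size=(2, 3)) ** 2
   gamma = rng.uniform(1.0, 2.0, size=3)
   beta = rng.uniform(size=3)
   mean = rng.uniform(size=3)
   var = rng.uniform(0.5, 1.5, size=3)

   h = 1e-6
   num = (batchnorm_eval(x + h, gamma, beta, mean, var)
          - batchnorm_eval(x - h, gamma, beta, mean, var)) / (2 * h)
   ana = batchnorm_eval_grad_x(x, gamma, var)
   print(np.allclose(num, ana, atol=1e-4))  # True
   ```

   Because the eval-mode forward is affine in ``x``, the finite difference matches the analytic gradient to rounding error; a backward pass that instead takes the training-mode path would differ from this.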
   
   ## Output from the terminal
   ```
   [22:51:47] src/operator/nn/batch_norm.cu:579: SetupFlags: ctx.is_train : 0
   [22:51:47] src/operator/nn/batch_norm.cu:579: SetupFlags: ctx.is_train : 1
   ```
   The forward pass correctly runs with ``ctx.is_train : 0``, but the backward pass runs with ``ctx.is_train : 1``, so the evaluation-mode backward path is never taken.
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services