Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/05/28 21:17:37 UTC

[GitHub] [incubator-mxnet] nofaviv commented on issue #14357: [Bug] Batchnorm running_var behaves differently when using gpu vs. cpu

URL: https://github.com/apache/incubator-mxnet/issues/14357#issuecomment-496693485
 
 
   I have just encountered the same or a similar problem. I train on a GPU using the Python API, but my inference target machine has no GPU and runs the C++ API on a CPU. There, my inference results are all **nan**. By grouping all intermediate symbols into the output, I could see that the NaNs first appear right after the first batch-normalization layer. When I run the same C++ code on a machine with a GPU, everything is OK.
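   A minimal NumPy sketch (illustrative only, not code from this issue) of how a corrupted `running_var` would produce exactly this symptom: BatchNorm at inference computes `gamma * (x - running_mean) / sqrt(running_var + eps) + beta`, so if a GPU/CPU mismatch leaves `running_var` with a value below `-eps`, the square root yields NaN and every downstream layer sees NaN. The function name and values here are hypothetical.

   ```python
   import numpy as np

   def batchnorm_infer(x, gamma, beta, running_mean, running_var, eps=1e-5):
       # Inference-mode batch normalization using the stored running statistics.
       # A negative running_var (beyond -eps) makes sqrt() return NaN,
       # which then propagates through all subsequent layers.
       return gamma * (x - running_mean) / np.sqrt(running_var + eps) + beta

   x = np.array([0.5, -1.0, 2.0])
   # Healthy statistics: finite output.
   good = batchnorm_infer(x, 1.0, 0.0, 0.0, np.array([1.0, 1.0, 1.0]))
   # One corrupted variance entry: NaN appears immediately after this layer.
   with np.errstate(invalid="ignore"):
       bad = batchnorm_infer(x, 1.0, 0.0, 0.0, np.array([1.0, -0.1, 1.0]))

   print(np.isfinite(good).all())  # True
   print(np.isnan(bad).any())      # True
   ```

   Dumping the `running_var` aux arrays loaded on the CPU side and checking them for non-finite or negative entries would be one way to confirm whether this is what is happening.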

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services