Posted to commits@mxnet.apache.org by gi...@git.apache.org on 2017/08/18 20:07:30 UTC

[GitHub] ptrendx commented on issue #7475: Paradox VRAM demand as a function of batch size: Low batch size, high VRAM demand

ptrendx commented on issue #7475: Paradox VRAM demand as a function of batch size: Low batch size, high VRAM demand
URL: https://github.com/apache/incubator-mxnet/issues/7475#issuecomment-323449186
 
 
   I assume you are running with cuDNN. Do you have convolutions in your network? If so, the most probable explanation is that for smaller batch sizes cuDNN chooses a different convolution algorithm (because it is faster at that size), and that algorithm has different requirements on the additional workspace memory it uses.
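 
   A minimal sketch of one way to keep memory use more predictable across batch sizes, assuming the Python API and the cudnn_tune/workspace arguments of mx.sym.Convolution (the parameter values here are illustrative, not a recommendation):
 
       import mxnet as mx
 
       data = mx.sym.Variable('data')
       # Ask cuDNN to pick among algorithms that fit in a limited
       # per-operator scratch space (workspace is given in MB) instead
       # of the fastest algorithm regardless of extra memory.
       conv = mx.sym.Convolution(data=data, num_filter=64, kernel=(3, 3),
                                 pad=(1, 1), cudnn_tune='limited_workspace',
                                 workspace=256, name='conv1')
 
   Setting the environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT=0 before starting training should likewise disable the autotuning search, if you prefer to control this globally rather than per layer.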
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services