Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/05/28 05:59:48 UTC

[GitHub] kaleidoscopical commented on issue #11062: how to manually occupy all gpu memory like tensorflow?

URL: https://github.com/apache/incubator-mxnet/issues/11062#issuecomment-392427318
 
 
   Thanks for replying :)
   
   For example, I have a GPU card with 12GB of memory, and one of my running programs occupies 11.5GB of it. In addition, my code makes several dynamic memory allocations at runtime.
   
   If I accidentally run another program on the same GPU card, both programs will fail with "Out of Memory". So I wonder whether there is a safer way to handle this.
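   One commonly suggested workaround (a sketch, not an official MXNet API): since MXNet allocates GPU memory on demand rather than up front, a process can claim a block early by allocating one large NDArray and holding a reference to it, similar in spirit to TensorFlow's `per_process_gpu_memory_fraction`. The helper names below (`reserve_bytes`, `grab_gpu_memory`) are hypothetical, and the allocation is guarded so the sketch also runs on a machine without mxnet or a GPU.

   ```python
   # Sketch: TensorFlow-style up-front GPU memory reservation for an MXNet
   # process. Assumption (not from this thread): holding a reference to a
   # large NDArray keeps that memory claimed for the process.

   def reserve_bytes(total_bytes, fraction=0.95):
       """Bytes to claim up front, analogous to TF's memory fraction."""
       if not 0.0 < fraction <= 1.0:
           raise ValueError("fraction must be in (0, 1]")
       return int(total_bytes * fraction)

   def grab_gpu_memory(total_bytes, fraction=0.95):
       """Allocate one large float32 NDArray on gpu(0) and return it;
       the caller must keep the returned handle alive."""
       n = reserve_bytes(total_bytes, fraction)
       try:
           import mxnet as mx
           # one float32 element is 4 bytes
           return mx.nd.zeros((n // 4,), ctx=mx.gpu(0), dtype='float32')
       except ImportError:
           return None  # mxnet not installed; nothing reserved

   # Example: claim roughly 95% of a 12GB card at program start.
   # block = grab_gpu_memory(12 * 1024**3, fraction=0.95)
   ```

   Note this only prevents other processes from grabbing the memory; the reserving program still has to fit its own dynamic allocations inside what it claimed.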

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services