Posted to commits@mxnet.apache.org by gi...@git.apache.org on 2017/08/07 06:00:50 UTC

[GitHub] edmBernard commented on issue #7350: Multi-Training-Task on the same GPU card

URL: https://github.com/apache/incubator-mxnet/issues/7350#issuecomment-320575891
 
 
   Do you get 27 samples/sec for each of the two trainings?
   A GPU has finite compute capacity: if one training job already uses 100% of it, the second one has to wait its turn, even though both jobs can share the GPU memory. You can check this with nvidia-smi (see the sketch after the output below).
   ```
   +-----------------------------------------------------------------------------+
   | NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
   |-------------------------------+----------------------+----------------------+
   | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
   | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
   |===============================+======================+======================|
   |   0  GeForce GTX 1070    Off  | 0000:01:00.0     Off |                  N/A |
   | 29%   44C    P8    14W / 151W |      0MiB /  8113MiB |      0%      Default |
   +-------------------------------+----------------------+----------------------+
                                                                  ^
                                                                  |
                                                           GPU-Util
   ```
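 
   To watch whether the first job already saturates the card before you launch the second one, you can poll the same counters programmatically. A minimal sketch in Python, assuming `nvidia-smi` is on the PATH (the one-second interval and the print format are my choices, not from this thread):
   ```
   import subprocess
   import time

   # Poll nvidia-smi once per second and report compute utilization and
   # memory usage per GPU. index, utilization.gpu, memory.used and
   # memory.total are standard nvidia-smi query fields.
   while True:
       out = subprocess.check_output([
           "nvidia-smi",
           "--query-gpu=index,utilization.gpu,memory.used,memory.total",
           "--format=csv,noheader,nounits",
       ]).decode()
       for line in out.strip().splitlines():
           idx, util, used, total = [f.strip() for f in line.split(",")]
           print("GPU %s: %s%% util, %s/%s MiB" % (idx, util, used, total))
       time.sleep(1)
   ```
   If utilization already sits near 100% with a single job, a second job on the same card will mostly time-slice with the first instead of adding throughput, which is why each run drops to roughly half speed.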
 