Posted to dev@singa.apache.org by "wangwei (JIRA)" <ji...@apache.org> on 2016/01/12 03:30:39 UTC

[jira] [Created] (SINGA-131) Implement and optimize hybrid training using both CPU and GPU

wangwei created SINGA-131:
-----------------------------

             Summary: Implement and optimize hybrid training using both CPU and GPU
                 Key: SINGA-131
                 URL: https://issues.apache.org/jira/browse/SINGA-131
             Project: Singa
          Issue Type: Improvement
            Reporter: wangwei


We previously discussed implementing hybrid training with researchers from Stanford:
http://mail-archives.apache.org/mod_mbox/singa-dev/201507.mbox/%3CCAJz0iLsd5iSCqqVU4QHLKzMO2o%2BFt-40kN8RgWkYhDn%3D6Qqqbw%40mail.gmail.com%3E.
Now that GPU training is supported, we can move on to this feature.

The distributed training framework lends itself naturally to hybrid training with CPU and GPU. The first n workers would be assigned GPU cards (n is the number of cards configured by the user), and the remaining workers would run on the CPU.
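
As a rough illustration of that assignment (the struct and function names below are hypothetical, not SINGA's actual worker API), the mapping from worker id to device could look like this:

#include <cstdio>
#include <string>
#include <vector>

struct WorkerContext {
  int worker_id;
  std::string device;   // "GPU:<card id>" or "CPU"
};

// Workers 0..num_gpu_cards-1 each get one GPU card; the rest run on CPU.
std::vector<WorkerContext> AssignDevices(int num_workers, int num_gpu_cards) {
  std::vector<WorkerContext> contexts;
  for (int id = 0; id < num_workers; ++id) {
    WorkerContext ctx;
    ctx.worker_id = id;
    ctx.device = id < num_gpu_cards ? "GPU:" + std::to_string(id) : "CPU";
    contexts.push_back(ctx);
  }
  return contexts;
}

int main() {
  // Example: 6 workers, 2 GPU cards configured by the user.
  for (const auto& ctx : AssignDevices(6, 2))
    std::printf("worker %d -> %s\n", ctx.worker_id, ctx.device.c_str());
  return 0;
}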

Some code may need updating and optimization to handle memory transfer between GPU workers and CPU workers. Most of it is in worker.cc, param.cc and stub.cc.
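
For example, a GPU worker keeps its gradients in device memory, so they must be copied to host memory before CPU-side param/stub code can read them. A hedged sketch of that copy is below; SyncGradientToCPU and UpdateParam are made-up names for illustration, while cudaMalloc/cudaMemcpy are the standard CUDA runtime calls such a path would rely on:

#include <cstdio>
#include <cuda_runtime.h>
#include <vector>

// Hypothetical stand-in for the stub/param-side update, which expects the
// gradient to already live in host memory.
void UpdateParam(const float* grad_host, size_t count) {
  std::printf("updating param with %zu gradient values (host memory)\n", count);
}

void SyncGradientToCPU(const float* grad_device, size_t count) {
  std::vector<float> grad_host(count);
  // Copy the gradient from device memory to host memory so CPU-side code
  // can aggregate it.
  cudaMemcpy(grad_host.data(), grad_device, count * sizeof(float),
             cudaMemcpyDeviceToHost);
  UpdateParam(grad_host.data(), count);
}

int main() {
  const size_t count = 1024;
  float* grad_device = nullptr;
  cudaMalloc(&grad_device, count * sizeof(float));
  cudaMemset(grad_device, 0, count * sizeof(float));
  SyncGradientToCPU(grad_device, count);
  cudaFree(grad_device);
  return 0;
}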

Automatic tuning of the workload between GPU and CPU workers could be designed and implemented in this ticket or a new one.
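
One possible tuning strategy (only a sketch for discussion, not a decided design) is to split the global mini-batch in proportion to each worker's measured throughput, so the slow CPU workers receive smaller slices than the GPU workers:

#include <cstdio>
#include <numeric>
#include <vector>

// Partition a global mini-batch across workers in proportion to their
// measured throughput (e.g. images/sec from the previous epoch).
std::vector<int> PartitionBatch(int global_batch,
                                const std::vector<double>& images_per_sec) {
  double total = std::accumulate(images_per_sec.begin(),
                                 images_per_sec.end(), 0.0);
  std::vector<int> slice(images_per_sec.size());
  int assigned = 0;
  for (size_t i = 0; i < images_per_sec.size(); ++i) {
    slice[i] = static_cast<int>(global_batch * images_per_sec[i] / total);
    assigned += slice[i];
  }
  // Give rounding leftovers to the last worker so the slices sum up exactly.
  slice.back() += global_batch - assigned;
  return slice;
}

int main() {
  // Example: two GPU workers (~500 images/s) and two CPU workers (~50 images/s).
  std::vector<double> throughput = {500, 500, 50, 50};
  std::vector<int> batch = PartitionBatch(256, throughput);
  for (size_t i = 0; i < batch.size(); ++i)
    std::printf("worker %zu batch size: %d\n", i, batch[i]);
  return 0;
}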



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)