Posted to dev@singa.apache.org by "wangwei (JIRA)" <ji...@apache.org> on 2016/10/06 08:17:20 UTC

[jira] [Closed] (SINGA-131) Implement and optimize hybrid training using both CPU and GPU

     [ https://issues.apache.org/jira/browse/SINGA-131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

wangwei closed SINGA-131.
-------------------------
    Resolution: Fixed

> Implement and optimize hybrid training using both CPU and GPU
> -------------------------------------------------------------
>
>                 Key: SINGA-131
>                 URL: https://issues.apache.org/jira/browse/SINGA-131
>             Project: Singa
>          Issue Type: Improvement
>            Reporter: wangwei
>              Labels: CPU, GPU, hybrid
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> We previously discussed implementing hybrid training with researchers from Stanford:
> http://mail-archives.apache.org/mod_mbox/singa-dev/201507.mbox/%3CCAJz0iLsd5iSCqqVU4QHLKzMO2o%2BFt-40kN8RgWkYhDn%3D6Qqqbw%40mail.gmail.com%3E.
> Now that GPU training is supported, we can move on to this feature.
> The distributed training framework is a natural fit for hybrid training with CPU and GPU. The first n workers would each be assigned a GPU card (n is the number of cards configured by the user), and the remaining workers would run on the CPU (see the sketch below).
> Some code may need updates and optimization to handle memory transfers between GPU workers and CPU workers; most of the changes are in worker.cc, param.cc and stub.cc.
> Automatic tuning of the workload between GPU and CPU could be designed and implemented in this ticket or a new one; a simple heuristic for it is also sketched below.
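>
> As a rough illustration of the placement scheme, here is a minimal standalone C++ sketch. It is not SINGA code; the names WorkerPlacement and AssignDevices are hypothetical and only show the intended mapping of workers to devices.
>
>   // Hypothetical sketch (not SINGA code): the first num_gpus workers each
>   // get one GPU card; the remaining workers run on the CPU.
>   #include <cstdio>
>   #include <string>
>   #include <vector>
>
>   struct WorkerPlacement {
>     int worker_id;
>     std::string device;  // "gpu:<k>" or "cpu"
>   };
>
>   std::vector<WorkerPlacement> AssignDevices(int num_workers, int num_gpus) {
>     std::vector<WorkerPlacement> plan;
>     for (int w = 0; w < num_workers; ++w) {
>       if (w < num_gpus)
>         plan.push_back({w, "gpu:" + std::to_string(w)});
>       else
>         plan.push_back({w, "cpu"});
>     }
>     return plan;
>   }
>
>   int main() {
>     // e.g. 6 workers in total, 2 GPU cards configured by the user
>     for (const auto& p : AssignDevices(6, 2))
>       std::printf("worker %d -> %s\n", p.worker_id, p.device.c_str());
>     return 0;
>   }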
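>
> For the workload tuning, one possible heuristic (again only a hypothetical sketch, not SINGA code) is to split each minibatch between the GPU and CPU workers in proportion to their measured throughput:
>
>   #include <cstdio>
>
>   // Returns how many examples of a minibatch to give the GPU workers,
>   // given measured throughputs (examples/sec); CPU workers take the rest.
>   int GpuShare(int batch_size, double gpu_rate, double cpu_rate) {
>     double share = batch_size * gpu_rate / (gpu_rate + cpu_rate);
>     return static_cast<int>(share + 0.5);  // round to nearest example
>   }
>
>   int main() {
>     const int batch = 256;
>     // made-up throughputs for illustration: GPU 2000/s, CPU 250/s
>     int gpu = GpuShare(batch, 2000.0, 250.0);
>     std::printf("GPU: %d examples, CPU: %d examples\n", gpu, batch - gpu);
>     return 0;
>   }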



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)