Posted to dev@horn.apache.org by "Lee Dongjin (JIRA)" <ji...@apache.org> on 2016/02/03 03:30:39 UTC

[jira] [Assigned] (HORN-8) Implementation of Parameter Server

     [ https://issues.apache.org/jira/browse/HORN-8?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lee Dongjin reassigned HORN-8:
------------------------------

    Assignee: Lee Dongjin

> Implementation of Parameter Server
> ----------------------------------
>
>                 Key: HORN-8
>                 URL: https://issues.apache.org/jira/browse/HORN-8
>             Project: Apache Horn
>          Issue Type: Improvement
>            Reporter: Edward J. Yoon
>            Assignee: Lee Dongjin
>
> The current implementation works in a synchronous way, as shown below (SmallLayeredNeuralNetworkTrainer.java, 101 lines):
> {code}
> task0        task1        task2
>       compute updates locally
> -------------------------------- send updates to master task
> -------------------------------- merge updates and broadcast them to every task
>       compute updates locally
> -------------------------------- send updates to master task
> -------------------------------- merge updates and broadcast them to every task
>
>                ...
>       (Loop until convergence)
> {code}
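>
> As a rough illustration (a minimal thread-based sketch, not the actual
> trainer code; the class and field names below are made up for the
> example), the scheme above behaves like a barrier-synchronized loop:
> every task blocks until the merge-and-broadcast step has finished, so
> the slowest task gates every iteration.
> {code}
> import java.util.concurrent.BrokenBarrierException;
> import java.util.concurrent.CyclicBarrier;
>
> public class SyncSgdSketch {
>     static final int TASKS = 3, ITERATIONS = 5;
>     static final double[] updates = new double[TASKS]; // one slot per task
>     static volatile double weight = 0.0;               // shared model (one parameter)
>
>     // Barrier action = the master step: merge updates, "broadcast" the result.
>     static final CyclicBarrier barrier = new CyclicBarrier(TASKS, () -> {
>         double merged = 0;
>         for (double u : updates) merged += u / TASKS;  // average the updates
>         weight -= merged;                              // apply to the model
>     });
>
>     public static void main(String[] args) throws Exception {
>         Thread[] workers = new Thread[TASKS];
>         for (int t = 0; t < TASKS; t++) {
>             final int id = t;
>             workers[t] = new Thread(() -> {
>                 try {
>                     for (int i = 0; i < ITERATIONS; i++) {
>                         updates[id] = Math.random() * 0.01; // "compute updates locally"
>                         barrier.await(); // send to master, merge, broadcast
>                     }
>                 } catch (InterruptedException | BrokenBarrierException e) {
>                     Thread.currentThread().interrupt();
>                 }
>             });
>             workers[t].start();
>         }
>         for (Thread w : workers) w.join();
>         System.out.println("final weight = " + weight);
>     }
> }
> {code}
>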
> By separating the master, we can support asynchronous parallel SGD. My idea is simply to use task0 (a BSPTask) as a server daemon. For this issue ticket, a single master is enough at the moment.
> {code}
> task0     |          task1                          ....   taskN
>           |
>           |
>           |   compute updates locally
>           |
>  Receive  |<------ push updates to master task
>  Update1  |                     
>           +------> fetch updates
>           |
>           |
>           |
>  Receive  |<------------------------------------ ..
>  Update2  |
>           +------------------------------------> ..
>           |
>           |
> {code}
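>
> As a minimal sketch of this design (hypothetical names; plain threads
> stand in for real BSP tasks, so this is not the proposed implementation
> itself), one server thread plays the role of task0 while the workers
> push updates and fetch the current parameters whenever they are ready,
> with no global barrier between iterations:
> {code}
> import java.util.concurrent.BlockingQueue;
> import java.util.concurrent.LinkedBlockingQueue;
>
> public class AsyncParameterServerSketch {
>     static final int WORKERS = 3, PUSHES_PER_WORKER = 5;
>
>     // Updates pushed by the workers, consumed by the single master (task0).
>     static final BlockingQueue<Double> inbox = new LinkedBlockingQueue<>();
>     // Current model, published by the master; workers fetch it at will.
>     // Only the server thread writes it, so the compound update is safe.
>     static volatile double weight = 0.0;
>
>     public static void main(String[] args) throws Exception {
>         Thread server = new Thread(() -> {      // task0: the server daemon
>             try {
>                 for (int n = 0; n < WORKERS * PUSHES_PER_WORKER; n++) {
>                     weight -= inbox.take();     // receive one update, apply immediately
>                 }
>             } catch (InterruptedException e) {
>                 Thread.currentThread().interrupt();
>             }
>         });
>         server.start();
>
>         Thread[] workers = new Thread[WORKERS];
>         for (int t = 0; t < WORKERS; t++) {
>             workers[t] = new Thread(() -> {
>                 for (int i = 0; i < PUSHES_PER_WORKER; i++) {
>                     double fetched = weight;                // "fetch updates" (may be stale)
>                     double update = fetched * 0.001 + Math.random() * 0.01; // fake local gradient
>                     inbox.offer(update);                    // "push updates to master task"
>                 }
>             });
>             workers[t].start();
>         }
>         for (Thread w : workers) w.join();
>         server.join();
>         System.out.println("final weight = " + weight);
>     }
> }
> {code}
> Because pushes are applied as they arrive, a worker may compute its next
> update against slightly stale parameters; that staleness is the usual
> trade-off asynchronous parallel SGD accepts in exchange for removing the
> barrier.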



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

Re: [jira] [Updated] (HORN-8) Implementation of Parameter Server

Posted by "Edward J. Yoon" <ed...@apache.org>.
Yup, we need to close it.

On Wednesday, 3 February 2016, Zachary Jaffee <zi...@case.edu> wrote:

> Should we also close the activation function ticket (HORN-9), or is there
> more to do there?


-- 
Best Regards, Edward J. Yoon

Re: [jira] [Assigned] (HORN-8) Implementation of Parameter Server

Posted by Zachary Jaffee <zi...@case.edu>.
Should we also close the activation function ticket (HORN-9), or is there
more to do there?


-- 
Zach Jaffee
B.S. Computer Science
Case Western Reserve University Class of 2017
Operations Director | WRUW FM 91.1 Cleveland
Secretary | Recruitment Chair | Phi Kappa Theta Fraternity
(917) 881-0646
zjaffee.com
github.com/ZJaffee