Posted to dev@horn.apache.org by "Lee Dongjin (JIRA)" <ji...@apache.org> on 2016/02/03 03:30:39 UTC
[jira] [Resolved] (HORN-8) Implementation of Parameter Server
[ https://issues.apache.org/jira/browse/HORN-8?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Lee Dongjin resolved HORN-8.
----------------------------
Resolution: Fixed
> Implementation of Parameter Server
> ----------------------------------
>
> Key: HORN-8
> URL: https://issues.apache.org/jira/browse/HORN-8
> Project: Apache Horn
> Issue Type: Improvement
> Reporter: Edward J. Yoon
> Assignee: Lee Dongjin
>
> The current implementation works in a synchronous way, like below (SmallLayeredNeuralNetworkTrainer.java, 101 lines):
> {code}
> task0 task1 task2
> compute updates locally
> -------------------------------- sends updates to master task
> -------------------------------- merge updates and broadcast it to every tasks
> compute updates locally
> -------------------------------- sends updates to master task
> -------------------------------- merge updates and broadcast it to every tasks
>
> ...
> (Loop until convergence)
> {code}
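A minimal sketch of this synchronous scheme (hypothetical class and method names, not the actual SmallLayeredNeuralNetworkTrainer code): every task computes a local update, the master merges (here, averages) them, and the merged update is applied everywhere before the next iteration starts.

```java
import java.util.Arrays;

// Hypothetical sketch: synchronous merge-and-broadcast SGD.
// All tasks wait at the barrier until the master has merged every update.
public class SyncSgdSketch {
    static final int TASKS = 3;
    static final int DIM = 4;

    // Master-side merge: average the per-task updates.
    static double[] merge(double[][] updates) {
        double[] merged = new double[DIM];
        for (double[] u : updates)
            for (int i = 0; i < DIM; i++)
                merged[i] += u[i] / TASKS;
        return merged;
    }

    public static void main(String[] args) {
        double[] weights = new double[DIM];
        for (int iter = 0; iter < 2; iter++) {          // loop until convergence
            double[][] updates = new double[TASKS][DIM];
            for (int t = 0; t < TASKS; t++)             // compute updates locally
                Arrays.fill(updates[t], 0.1 * (t + 1));
            double[] merged = merge(updates);           // merge updates at master
            for (int i = 0; i < DIM; i++)               // broadcast to every task
                weights[i] += merged[i];
        }
        System.out.println(Arrays.toString(weights));
    }
}
```

The key cost is the implicit barrier: no task can start the next iteration until the slowest task has sent its update and the merge has been broadcast.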
> By separating the master, we can support asynchronous parallel SGD. My idea is simply to use task0 (a BSPTask) as the server daemon. For this issue ticket, a single master is enough at this moment.
> {code}
> task0   | task1  ....  taskN
>         |
>         |
>         | compute updates locally
>         |
> Receive |<------ push updates to master task
> Update1 |
>         +------> fetch updates
>         |
>         |
>         |
> Receive |<------------------------------------ ..
> Update2 |
>         +------------------------------------> ..
>         |
>         |
> {code}
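The push/fetch protocol above could look roughly like this (a hypothetical sketch with invented names, not Horn's actual API): task0 plays the parameter-server role, and each worker pushes its local update and fetches the current parameters without waiting for the other workers, i.e. no global barrier.

```java
import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: asynchronous parallel SGD with task0 as a
// single-master parameter server. Workers push and fetch independently.
public class AsyncSgdSketch {
    static final int DIM = 4;

    // task0's role: hold the parameters and apply pushed updates.
    static class ParameterServer {
        private final double[] weights = new double[DIM];

        synchronized void push(double[] update) {       // Receive + Update
            for (int i = 0; i < DIM; i++) weights[i] += update[i];
        }

        synchronized double[] fetch() {                 // fetch current state
            return weights.clone();
        }
    }

    public static void main(String[] args) throws Exception {
        ParameterServer server = new ParameterServer();
        ExecutorService workers = Executors.newFixedThreadPool(3);
        for (int t = 0; t < 3; t++) {
            workers.submit(() -> {
                double[] update = new double[DIM];
                Arrays.fill(update, 0.1);               // compute updates locally
                server.push(update);                    // push updates to master task
                double[] current = server.fetch();      // fetch merged parameters
                // ... continue training with `current`, no barrier needed
            });
        }
        workers.shutdown();
        workers.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(Arrays.toString(server.fetch()));
    }
}
```

With a single master the server's `push`/`fetch` methods are the only synchronization points, so a slow worker delays only itself, which is exactly the property the synchronous loop above lacks.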
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)