Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2016/01/08 09:11:40 UTC

[jira] [Assigned] (SPARK-1270) An optimized gradient descent implementation

     [ https://issues.apache.org/jira/browse/SPARK-1270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-1270:
-----------------------------------

    Assignee: Apache Spark

> An optimized gradient descent implementation
> --------------------------------------------
>
>                 Key: SPARK-1270
>                 URL: https://issues.apache.org/jira/browse/SPARK-1270
>             Project: Spark
>          Issue Type: Improvement
>          Components: MLlib
>    Affects Versions: 1.0.0
>            Reporter: Xusen Yin
>            Assignee: Apache Spark
>              Labels: GradientDescent, MLLib
>
> The current implementation of GradientDescent is inefficient in some respects, especially on high-latency networks. I propose a new implementation, GradientDescentWithLocalUpdate, which follows a parallelism model inspired by Jeff Dean's DistBelief and Eric Xing's SSP (Stale Synchronous Parallel). With a few modifications to runMiniBatchSGD, GradientDescentWithLocalUpdate can outperform the original sequential version by about 4x without sacrificing accuracy, and it can easily be adopted by most classification and regression algorithms in MLlib.
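
A minimal Scala sketch of the local-update idea described above, not the actual SPARK-1270 patch: every name here (LocalUpdateSGDSketch, rounds, localSteps, the toy y = 2x data) is invented for illustration, and the real proposal modifies MLlib's runMiniBatchSGD rather than standing alone. The point it shows is the communication pattern: each partition takes several local SGD steps on its own data between synchronizations, so one network round trip covers many updates instead of one.

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical sketch of "gradient descent with local updates".
// Not the SPARK-1270 implementation; names and data are made up.
object LocalUpdateSGDSketch {

  // Squared-loss gradient for one labeled point: (w . x - y) * x
  def gradient(w: Array[Double], x: Array[Double], y: Double): Array[Double] = {
    val pred = (w, x).zipped.map(_ * _).sum
    x.map(_ * (pred - y))
  }

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("LocalUpdateSGDSketch").setMaster("local[4]"))

    // Toy regression data: y = 2 * x, spread over four partitions.
    val data = sc.parallelize(
      (1 to 1000).map(i => (Array(i / 1000.0), 2.0 * i / 1000.0)), 4).cache()

    var weights = Array(0.0)
    val rounds = 10       // synchronization rounds (one network round trip each)
    val localSteps = 20   // local SGD sweeps per partition between round trips
    val stepSize = 0.5

    for (_ <- 1 to rounds) {
      val bcW = sc.broadcast(weights)
      // Each partition refines its own copy of the weights locally ...
      val (sumW, parts) = data.mapPartitions { iter =>
        val points = iter.toArray
        var w = bcW.value.clone()
        for (_ <- 1 to localSteps; (x, y) <- points) {
          val g = gradient(w, x, y)
          w = (w, g).zipped.map((wi, gi) => wi - stepSize * gi / points.length)
        }
        Iterator((w, 1))
      }.reduce { case ((w1, n1), (w2, n2)) =>
        // ... and the driver averages the per-partition results once per round.
        ((w1, w2).zipped.map(_ + _), n1 + n2)
      }
      weights = sumW.map(_ / parts)
    }

    println(s"learned weight: ${weights(0)} (expected ~2.0)")
    sc.stop()
  }
}

Averaging per-partition weights once per round is the simplest possible synchronization rule; DistBelief and SSP relax it further with asynchronous or bounded-staleness updates, which is where the claimed speedup on high-latency networks comes from.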



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org