Posted to dev@spark.apache.org by WangJianfei <wa...@otcaix.iscas.ac.cn> on 2016/11/30 08:51:43 UTC

Why don't we implement some adaptive learning rate methods, such as AdaDelta and Adam?

Hi devs,
    Adaptive learning rate methods normally converge faster than standard
SGD, so why don't we implement them?
See this link for more details:
http://sebastianruder.com/optimizing-gradient-descent/index.html#adadelta
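
To make the idea concrete, the Adam update described on that page boils down
to roughly the following (a plain-Scala sketch on arrays, not Spark code; the
object name and helper are mine, the defaults are the ones suggested in the
Adam paper):

object AdamSketch {
  val beta1 = 0.9
  val beta2 = 0.999
  val epsilon = 1e-8
  val lr = 0.001 // base step size

  // One Adam step over a parameter vector w, given its gradient and the
  // running first/second moment estimates m and v; t is the step count (>= 1).
  def adamStep(w: Array[Double], grad: Array[Double],
               m: Array[Double], v: Array[Double], t: Int): Unit = {
    var i = 0
    while (i < w.length) {
      m(i) = beta1 * m(i) + (1 - beta1) * grad(i)             // first-moment (mean) estimate
      v(i) = beta2 * v(i) + (1 - beta2) * grad(i) * grad(i)   // second-moment estimate
      val mHat = m(i) / (1 - math.pow(beta1, t))              // bias correction
      val vHat = v(i) / (1 - math.pow(beta2, t))
      w(i) -= lr * mHat / (math.sqrt(vHat) + epsilon)         // per-coordinate adaptive step
      i += 1
    }
  }
}

The point is that the effective step size is scaled per coordinate by the
gradient history, which is exactly what plain SGD with a single global step
size lacks.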





Re: Why don't we implement some adaptive learning rate methods, such as AdaDelta and Adam?

Posted by Liang-Chi Hsieh <vi...@gmail.com>.
Hi,

There is a plan to add this to Spark ML. Please check out
https://issues.apache.org/jira/browse/SPARK-18023; you can also follow that
JIRA for the latest updates.
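
In the meantime, mllib's mini-batch SGD already separates the step rule into
an Updater, so an adaptive rule can at least be sketched against that
interface. Below is a rough AdaGrad-style illustration only (the class name,
the epsilon constant and the in-instance accumulator are mine, not from the
JIRA); note that actually wiring a custom Updater into GradientDescent may
require building against mllib internals, since the optimizer's constructor
is not public.

import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.mllib.optimization.Updater

// AdaGrad-style updater sketch: per-coordinate step sizes derived from the
// running sum of squared gradients. Regularization is ignored for brevity,
// and keeping the accumulator in the instance assumes the updater is invoked
// serially on the driver, as mllib's mini-batch SGD does.
class AdaGradUpdater extends Updater {
  private var accum: Array[Double] = _
  private val epsilon = 1e-8

  override def compute(
      weightsOld: Vector,
      gradient: Vector,
      stepSize: Double,
      iter: Int,
      regParam: Double): (Vector, Double) = {
    val w = weightsOld.toArray.clone()
    val g = gradient.toArray
    if (accum == null) accum = Array.fill(w.length)(0.0)
    var i = 0
    while (i < w.length) {
      accum(i) += g(i) * g(i)                                   // running sum of squared gradients
      w(i) -= stepSize * g(i) / (math.sqrt(accum(i)) + epsilon) // per-coordinate scaled step
      i += 1
    }
    (Vectors.dense(w), 0.0) // second element is the regularization value (none here)
  }
}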



-----
Liang-Chi Hsieh | @viirya 
Spark Technology Center 
http://www.spark.tc/ 


Re: Why don't we implement some adaptive learning rate methods, such as AdaDelta and Adam?

Posted by WangJianfei <wa...@otcaix.iscas.ac.cn>.
Yes, thank you. I know the implementation is very simple, but I want to know
why Spark MLlib hasn't implemented this yet.





Re: Why don't we implement some adaptive learning rate methods, such as AdaDelta and Adam?

Posted by Nick Pentreath <ni...@gmail.com>.
Check out https://github.com/VinceShieh/Spark-AdaOptimizer.

On Wed, 30 Nov 2016 at 10:52 WangJianfei <wa...@otcaix.iscas.ac.cn>
wrote:

> Hi devs,
>     Adaptive learning rate methods normally converge faster than standard
> SGD, so why don't we implement them?
> See this link for more details:
> http://sebastianruder.com/optimizing-gradient-descent/index.html#adadelta