Posted to issues@spark.apache.org by "Gang Bai (JIRA)" <ji...@apache.org> on 2014/06/17 13:57:02 UTC

[jira] [Commented] (SPARK-2163) Set ``setConvergenceTol'' with a parameter of type Double instead of Int

    [ https://issues.apache.org/jira/browse/SPARK-2163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14033703#comment-14033703 ] 

Gang Bai commented on SPARK-2163:
---------------------------------

I've created a pull request on GitHub for this. https://github.com/apache/spark/pull/1104

> Set ``setConvergenceTol'' with a parameter of type Double instead of Int
> ------------------------------------------------------------------------
>
>                 Key: SPARK-2163
>                 URL: https://issues.apache.org/jira/browse/SPARK-2163
>             Project: Spark
>          Issue Type: Improvement
>          Components: MLlib
>    Affects Versions: 1.0.0
>            Reporter: Gang Bai
>
> The class LBFGS in mllib.optimization currently provides a {{setConvergenceTol(tolerance: Int)}} method for setting the convergence tolerance. The tolerance parameter is of type {{Int}}. The specified tolerance is then used as a parameter when calling {{LBFGS.runLBFGS}}, where the corresponding parameter {{convergenceTol}} is of type {{Double}}.
> The Int parameter may cause a problem when one creates an optimizer and sets a Double-valued tolerance, e.g.:
> {code:borderStyle=solid}
> override val optimizer = new LBFGS(gradient, updater)
>       .setNumCorrections(9)
>       .setConvergenceTol(1e-4)  // *type mismatch here*
>       .setMaxNumIterations(100)
>       .setRegParam(1.0)
> {code}
> IMHO there is no need for the tolerance to be of type Int. Let's change it to a Double parameter and eliminate the type mismatch problem.
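
For reference, a minimal sketch of what a Double-typed setter could look like, following the builder pattern used in the example above. The field name and class details here are illustrative only and are not taken from the actual patch in the pull request:

{code:borderStyle=solid}
class LBFGS(private var gradient: Gradient, private var updater: Updater)
  extends Optimizer {

  // Illustrative default; callers override it via the setter below.
  private var convergenceTol: Double = 1e-4

  /**
   * Set the convergence tolerance as a Double, so that calls such as
   * setConvergenceTol(1e-4) compile without a type mismatch.
   */
  def setConvergenceTol(tolerance: Double): this.type = {
    this.convergenceTol = tolerance
    this
  }

  // ... the tolerance would then be passed straight through to
  // LBFGS.runLBFGS, whose convergenceTol parameter is already a Double.
}
{code}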



--
This message was sent by Atlassian JIRA
(v6.2#6252)