Posted to issues@spark.apache.org by "Devesh Parekh (JIRA)" <ji...@apache.org> on 2015/04/15 01:57:59 UTC

[jira] [Comment Edited] (SPARK-2505) Weighted Regularizer

    [ https://issues.apache.org/jira/browse/SPARK-2505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14495170#comment-14495170 ] 

Devesh Parekh edited comment on SPARK-2505 at 4/14/15 11:57 PM:
----------------------------------------------------------------

Can you describe a case where you would want the weights to be anything other than 0 for the intercept and lambda for everything else? The unregularized-intercept use case comes up very often, so the API for this case should be very simple.
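A minimal sketch of that common case, assuming a hypothetical per-coefficient penalty-weight vector (the names here are illustrative, not an actual MLlib API):

    // Hypothetical layout: coefficients are (w_1, ..., w_n, intercept).
    val lambda = 0.1
    val numFeatures = 3
    // lambda for every feature weight, 0 for the intercept so it stays unregularized
    val penaltyWeights: Array[Double] = Array.fill(numFeatures)(lambda) :+ 0.0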


was (Author: dparekh):
Can you describe a case where you would want the weights to be anything other than 0 for the intercept and lambda for everything else?

> Weighted Regularizer
> --------------------
>
>                 Key: SPARK-2505
>                 URL: https://issues.apache.org/jira/browse/SPARK-2505
>             Project: Spark
>          Issue Type: New Feature
>          Components: MLlib
>            Reporter: DB Tsai
>
> The current implementation of regularization in the linear models uses `Updater`, and this design has a couple of issues, as follows.
> 1) It penalizes all the weights, including the intercept. In a typical machine learning training process, people don't penalize the intercept.
> 2) The `Updater` contains the adaptive step size logic for gradient descent, and we would like to clean this up by separating the regularization logic out of the updater into a regularizer, so that in the LBFGS optimizer we don't need the trick for getting the loss and gradient of the objective function.
> In this work, a weighted regularizer will be implemented, and users can exclude the intercept or any weight from regularization by setting that term's penalty weight to zero (see the sketch below). Since the regularizer will return a tuple of the loss and the gradient, the adaptive step size logic and the L1 soft-thresholding in `Updater` will be moved to the SGD optimizer.
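For illustration, a minimal sketch of such a regularizer, assuming a weighted L2 penalty and hypothetical names (this is not the actual MLlib implementation):

    // Weighted L2 regularizer: loss = 0.5 * sum_i p_i * w_i^2, gradient_i = p_i * w_i.
    // A zero penalty weight p_i excludes coefficient i (e.g. the intercept).
    class WeightedL2Regularizer(penaltyWeights: Array[Double]) {
      // Returns the (loss, gradient) tuple the optimizer adds to the data loss.
      def compute(coefficients: Array[Double]): (Double, Array[Double]) = {
        require(coefficients.length == penaltyWeights.length)
        var loss = 0.0
        val gradient = new Array[Double](coefficients.length)
        var i = 0
        while (i < coefficients.length) {
          loss += 0.5 * penaltyWeights(i) * coefficients(i) * coefficients(i)
          gradient(i) = penaltyWeights(i) * coefficients(i)
          i += 1
        }
        (loss, gradient)
      }
    }

An L1 regularizer would return its loss and subgradient analogously, with the soft-thresholding step living in the SGD optimizer rather than in the regularizer, per the description above.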



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org