Posted to issues@flink.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2016/05/11 07:18:12 UTC

[jira] [Commented] (FLINK-1979) Implement Loss Functions

    [ https://issues.apache.org/jira/browse/FLINK-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15279692#comment-15279692 ] 

ASF GitHub Bot commented on FLINK-1979:
---------------------------------------

Github user skavulya commented on the pull request:

    https://github.com/apache/flink/pull/656#issuecomment-218380988
  
    @tillrohrmann I updated the loss functions and regularization penalties based on your branch. I was not sure whether to update the gradient descent algorithm to pass the regularization penalty as a parameter in this branch, so I kept the GradientDescent API as-is.
    
    Let me know if you would like me to change the GradientDescent API to GradientDescent().setRegularizationPenalty(L1Regularization), and I will update my branch, squash the commits, and create a PR. https://github.com/apache/flink/compare/master...skavulya:loss-functions
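    The fluent API being proposed above could look roughly like the following sketch. This is a hypothetical illustration, not the actual Flink ML code: the trait name, method signatures, and the L1 subgradient helper are assumptions inferred from the discussion, with only GradientDescent().setRegularizationPenalty(L1Regularization) taken from the comment itself.

    ```scala
    // Hypothetical sketch of the proposed pluggable regularization penalty.
    trait RegularizationPenalty {
      // Contribution of the penalty to one component of the weight gradient.
      def regGradient(weight: Double, regParam: Double): Double
    }

    object L1Regularization extends RegularizationPenalty {
      // Subgradient of lambda * |w| with respect to w is lambda * sign(w).
      def regGradient(weight: Double, regParam: Double): Double =
        regParam * math.signum(weight)
    }

    class GradientDescent {
      private var penalty: RegularizationPenalty = _

      // Setter returns this so calls can be chained, matching the
      // GradientDescent().setRegularizationPenalty(...) style in the comment.
      def setRegularizationPenalty(p: RegularizationPenalty): GradientDescent = {
        penalty = p
        this
      }
    }

    object GradientDescent {
      // Companion apply enables the parameterless GradientDescent() call site.
      def apply(): GradientDescent = new GradientDescent
    }
    ```

    Under this sketch, swapping L1 for another penalty would only change the argument to setRegularizationPenalty, leaving the solver untouched.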


> Implement Loss Functions
> ------------------------
>
>                 Key: FLINK-1979
>                 URL: https://issues.apache.org/jira/browse/FLINK-1979
>             Project: Flink
>          Issue Type: Improvement
>          Components: Machine Learning Library
>            Reporter: Johannes Günther
>            Assignee: Johannes Günther
>            Priority: Minor
>              Labels: ML
>
> For convex optimization problems, optimizer methods like SGD rely on a pluggable implementation of a loss function and its first derivative.
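> The pluggable loss-function contract described above can be sketched as follows. This is a minimal illustration, not the actual Flink ML interface: the trait and method names are assumptions, and squared loss is used only as a concrete example of a loss paired with its first derivative.
>
> ```scala
> // Hypothetical sketch of a pluggable loss function and its first derivative.
> trait LossFunction {
>   def loss(prediction: Double, label: Double): Double
>   // First derivative of the loss with respect to the prediction,
>   // which is what SGD needs to form its update step.
>   def gradient(prediction: Double, label: Double): Double
> }
>
> object SquaredLoss extends LossFunction {
>   // L(p, y) = 0.5 * (p - y)^2
>   def loss(prediction: Double, label: Double): Double = {
>     val diff = prediction - label
>     0.5 * diff * diff
>   }
>   // dL/dp = p - y
>   def gradient(prediction: Double, label: Double): Double =
>     prediction - label
> }
> ```
>
> An optimizer written against the trait stays agnostic to which loss is plugged in, which is the point of the issue.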



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)