Posted to issues@flink.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2016/03/29 21:40:25 UTC

[jira] [Commented] (FLINK-2157) Create evaluation framework for ML library

    [ https://issues.apache.org/jira/browse/FLINK-2157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15216721#comment-15216721 ] 

ASF GitHub Bot commented on FLINK-2157:
---------------------------------------

Github user rawkintrevo commented on the pull request:

    https://github.com/apache/flink/pull/871#issuecomment-203067582
  
    Continued from the mailing list: Till already mentioned that having R-squared built into MLR was just a convenience method; in practice it is not a good approach for a number of reasons.
    
    Also, what is the holdup on this PR? What needs to be done, and what remains to be decided? Having some model scoring would be very handy.
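
    As a rough illustration of why R-squared need not live inside MLR itself, it can be computed as a standalone function over (truth, prediction) pairs. This is a minimal sketch in plain Scala collections, not the FlinkML API; the function name and signature are hypothetical:

    ```scala
    // Hypothetical standalone R-squared, computed outside the model.
    // Uses plain Scala Seq rather than Flink's DataSet for brevity.
    def rSquared(pairs: Seq[(Double, Double)]): Double = {
      val mean  = pairs.map(_._1).sum / pairs.length
      // Residual sum of squares: (y - yHat)^2 over all pairs.
      val ssRes = pairs.map { case (y, yHat) => math.pow(y - yHat, 2) }.sum
      // Total sum of squares: (y - mean)^2 over all pairs.
      val ssTot = pairs.map { case (y, _) => math.pow(y - mean, 2) }.sum
      1.0 - ssRes / ssTot
    }
    ```

    Decoupling the score from the model this way is what a general evaluation framework would provide for any Predictor, not just MLR.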


> Create evaluation framework for ML library
> ------------------------------------------
>
>                 Key: FLINK-2157
>                 URL: https://issues.apache.org/jira/browse/FLINK-2157
>             Project: Flink
>          Issue Type: New Feature
>          Components: Machine Learning Library
>            Reporter: Till Rohrmann
>            Assignee: Theodore Vasiloudis
>              Labels: ML
>             Fix For: 1.0.0
>
>
> Currently, FlinkML lacks the means to evaluate the performance of trained models. It would be great to add some {{Evaluators}} which can calculate a score based on the true and predicted labels. This could also be used by cross validation to choose the right hyperparameters.
> Possible scores include the F1 score [1], the zero-one loss, etc.
> Resources
> [1] [http://en.wikipedia.org/wiki/F1_score]
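
The scores mentioned in the description can be sketched as plain Scala functions over (truth, prediction) pairs. The names below are hypothetical and the eventual {{Evaluators}} API may differ; this only shows the metric definitions:

```scala
// Hypothetical score functions over (truth, prediction) pairs.
// Zero-one loss: fraction of predictions that differ from the truth.
def zeroOneLoss(pairs: Seq[(Double, Double)]): Double =
  pairs.count { case (t, p) => t != p }.toDouble / pairs.length

// F1 score for a binary problem, given the label treated as positive.
// F1 = 2 * TP / (2 * TP + FP + FN)
def f1Score(pairs: Seq[(Double, Double)], positive: Double = 1.0): Double = {
  val tp = pairs.count { case (t, p) => t == positive && p == positive }
  val fp = pairs.count { case (t, p) => t != positive && p == positive }
  val fn = pairs.count { case (t, p) => t == positive && p != positive }
  2.0 * tp / (2 * tp + fp + fn)
}
```

In a Flink setting these would operate on a DataSet of pairs rather than a Seq, but the score definitions are the same.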



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)