Posted to issues@spark.apache.org by "David (JIRA)" <ji...@apache.org> on 2014/09/04 13:57:51 UTC

[jira] [Commented] (SPARK-1473) Feature selection for high dimensional datasets

    [ https://issues.apache.org/jira/browse/SPARK-1473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121276#comment-14121276 ] 

David commented on SPARK-1473:
------------------------------

Hi all,

I am Dr. David Martinez and this is my first comment on this project. We implemented all the feature selection methods described in
•Brown, G., Pocock, A., Zhao, M. J., & Luján, M. (2012). Conditional
 likelihood maximisation: a unifying framework for information theoretic
 feature selection. The Journal of Machine Learning Research, 13, 27-66.

added further optimizations, and left the framework open so that additional criteria can be plugged in (a small illustrative sketch of one such criterion follows below). We opened a pull request in the past but did not finish it. You can
have a look at our GitHub repository:
https://github.com/LIDIAgroup/SparkFeatureSelection
We would like to finish that pull request.
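
As a quick, purely illustrative sketch (not the actual SparkFeatureSelection code, which computes these quantities over RDDs), here is one criterion covered by the Brown et al. framework, minimum-redundancy-maximum-relevance (mRMR), written as plain local Scala over discrete features; all names in it are made up for the example:

{code}
// Local sketch of greedy mRMR selection over discrete features.
// A real implementation would compute the same counts distributively.
object MrmrSketch {

  // Empirical mutual information I(X;Y) between two discrete variables.
  def mutualInfo(x: Seq[Int], y: Seq[Int]): Double = {
    val n = x.size.toDouble
    val pxy = x.zip(y).groupBy(identity).map { case (k, vs) => k -> vs.size / n }
    val px  = x.groupBy(identity).map { case (k, vs) => k -> vs.size / n }
    val py  = y.groupBy(identity).map { case (k, vs) => k -> vs.size / n }
    pxy.map { case ((a, b), p) => p * math.log(p / (px(a) * py(b))) }.sum
  }

  // Greedily pick k features maximising relevance to the label minus
  // average redundancy with the features already selected.
  def selectMrmr(features: Seq[Seq[Int]], label: Seq[Int], k: Int): Seq[Int] = {
    val relevance = features.map(f => mutualInfo(f, label))
    var selected = Vector.empty[Int]
    var remaining = features.indices.toSet
    while (selected.size < k && remaining.nonEmpty) {
      val best = remaining.maxBy { i =>
        val redundancy =
          if (selected.isEmpty) 0.0
          else selected.map(j => mutualInfo(features(i), features(j))).sum / selected.size
        relevance(i) - redundancy
      }
      selected :+= best
      remaining -= best
    }
    selected
  }

  def main(args: Array[String]): Unit = {
    // Tiny toy dataset: three discrete features, binary label.
    val label = Seq(0, 0, 1, 1, 1, 0)
    val features = Seq(
      Seq(0, 0, 1, 1, 1, 0), // identical to the label: highly relevant
      Seq(0, 0, 1, 1, 1, 0), // redundant copy of feature 0
      Seq(0, 1, 0, 1, 0, 1)  // weakly related to the label
    )
    features.zipWithIndex.foreach { case (f, i) =>
      println(f"I(f$i; label) = ${mutualInfo(f, label)}%.3f")
    }
    // The redundant copy's score collapses once its twin has been selected.
    println("selected: " + selectMrmr(features, label, 2))
  }
}
{code}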

> Feature selection for high dimensional datasets
> -----------------------------------------------
>
>                 Key: SPARK-1473
>                 URL: https://issues.apache.org/jira/browse/SPARK-1473
>             Project: Spark
>          Issue Type: New Feature
>          Components: MLlib
>            Reporter: Ignacio Zendejas
>            Assignee: Alexander Ulanov
>            Priority: Minor
>              Labels: features
>
> For classification tasks involving large feature spaces on the order of tens of thousands of features or more (e.g., text classification with n-grams, where n > 1), it is often useful to rank and filter out irrelevant features, thereby reducing the feature space by at least one or two orders of magnitude without impacting performance on key evaluation metrics (accuracy/precision/recall).
> A flexible feature evaluation interface needs to be designed, and at least two methods should be implemented, with Information Gain being a priority as it has been shown to be amongst the most reliable.
> Special consideration should be given in the design to wrapper methods (see the research papers below), which are more practical for lower-dimensional data.
> Relevant research:
> * Brown, G., Pocock, A., Zhao, M. J., & Luján, M. (2012). Conditional
> likelihood maximisation: a unifying framework for information theoretic
> feature selection. *The Journal of Machine Learning Research*, *13*, 27-66.
> * Forman, George. "An extensive empirical study of feature selection metrics for text classification." The Journal of machine learning research 3 (2003): 1289-1305.

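Regarding the flexible evaluation interface and the Information Gain ranking mentioned in the issue description above, here is a minimal sketch of what a pluggable scorer could look like. It is only an illustration under our own assumptions; none of the names (FeatureScorer, InfoGain, FeatureFilter) exist in MLlib:

{code}
// Hypothetical pluggable scoring interface: a criterion scores each feature
// against the labels, and a filter keeps the top-k features.
trait FeatureScorer {
  // Higher score means a more relevant feature.
  def score(feature: Seq[Int], labels: Seq[Int]): Double
}

// Information gain of a discrete feature X with respect to the class C:
// IG(C; X) = H(C) - H(C | X).
object InfoGain extends FeatureScorer {
  private def entropy(xs: Seq[Int]): Double = {
    val n = xs.size.toDouble
    xs.groupBy(identity).values.map { g =>
      val p = g.size / n
      -p * math.log(p)
    }.sum
  }

  def score(feature: Seq[Int], labels: Seq[Int]): Double = {
    val n = labels.size.toDouble
    // H(C | X): entropy of the labels within each feature value, weighted by P(X = x).
    val condEntropy = feature.zip(labels).groupBy(_._1).values.map { group =>
      (group.size / n) * entropy(group.map(_._2))
    }.sum
    entropy(labels) - condEntropy
  }
}

// Rank every feature with the given scorer and keep the indices of the top k.
object FeatureFilter {
  def selectTopK(features: Seq[Seq[Int]], labels: Seq[Int],
                 scorer: FeatureScorer, k: Int): Seq[Int] =
    features.zipWithIndex
      .map { case (f, i) => (i, scorer.score(f, labels)) }
      .sortBy(-_._2)
      .take(k)
      .map(_._1)
}
{code}

For instance, keeping the 1,000 highest-scoring term-presence features with FeatureFilter.selectTopK(termPresence, classLabels, InfoGain, 1000) would be the kind of one-to-two-orders-of-magnitude reduction the issue describes; other criteria would simply be additional FeatureScorer implementations.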


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
