Posted to issues@spark.apache.org by "Xiangrui Meng (JIRA)" <ji...@apache.org> on 2015/05/07 19:42:59 UTC
[jira] [Updated] (SPARK-7443) MLlib 1.4 QA plan
[ https://issues.apache.org/jira/browse/SPARK-7443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Xiangrui Meng updated SPARK-7443:
---------------------------------
Description:
TODO: create JIRAs for each task and assign them accordingly.
h2. API
* Check API compliance using java-compliance-checker
* Audit new public APIs (from the generated html doc)
** Scala (do not forget to check the object doc)
** Java compatibility
** Python API coverage
* audit Pipeline APIs
** feature transformers
** tree models
** elastic-net
** ML attributes
** developer APIs
* graduate spark.ml from alpha
** remove AlphaComponent annotations
** remove MiMa excludes for spark.ml
h2. Algorithms and performance
* online LDA
* ElasticNet
* Bernoulli naive Bayes
* PMML
** scoring using PMML evaluator vs. MLlib models
* ALS.recommendAll
* save/load
* perf-tests in Python
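Of the items above, ALS.recommendAll is the easiest to characterize for perf testing: it amounts to scoring every user against every item with the learned factor matrices and keeping the top k items per user. A minimal NumPy sketch of that computation (the factor values here are made-up toy numbers, not MLlib output, and `recommend_all` is an illustrative function, not the MLlib API):

```python
import numpy as np

# Hypothetical learned factors: 3 users x 2 features, 4 items x 2 features.
user_factors = np.array([[1.0, 0.0],
                         [0.0, 1.0],
                         [0.5, 0.2]])
item_factors = np.array([[0.9, 0.1],
                         [0.1, 0.9],
                         [0.5, 0.5],
                         [0.0, 1.0]])

def recommend_all(users, items, k):
    """Return the top-k item indices for every user, best score first."""
    scores = users @ items.T                  # predicted rating matrix
    return np.argsort(-scores, axis=1)[:, :k] # highest-scoring items first

print(recommend_all(user_factors, item_factors, k=2))
```

The dense user-by-item score matrix is exactly why this method needs perf tests: its cost grows as users x items, so a real implementation blocks the computation rather than materializing the full matrix.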
h2. Documentation and example code
* create JIRAs for the user guide section of each new algorithm and assign them to the corresponding authors
* create example code for major components
** cross validation in python
** pipeline with complex feature transformations (scala/java/python)
** elastic-net (possibly with cross validation)
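For the Python cross-validation example flagged above, the core mechanic is k-fold index splitting: MLlib's CrossValidator applies it to DataFrames and a Pipeline, but the splitting itself can be sketched without Spark (the function and names below are illustrative, not MLlib APIs):

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Yield (train_idx, test_idx) pairs covering n rows in k folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)         # shuffle once so folds are random
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# Each row lands in exactly one test fold across the k splits.
splits = list(kfold_indices(10, k=5))
```

A user-guide example would fit a pipeline on each train split, evaluate on the matching test split, and average the metric across folds to pick hyperparameters such as the elastic-net mixing parameter.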
was:
TODO: create JIRAs for each task and assign them accordingly.
h2. API
* Check API compliance using java-compliance-checker
* Audit new public APIs (from the generated html doc)
** Scala (do not forget to check the object doc)
** Java compatibility
** Python API coverage
* audit Pipeline APIs
** feature transformers
** tree models
** elastic-net
** ML attributes
** developer APIs
* graduate spark.ml from alpha
** remove AlphaComponent annotations
** remove MiMa excludes for spark.ml
h2. Algorithms and performance
* online LDA
* ElasticNet
* Bernoulli naive Bayes
* PMML
** scoring using PMML evaluator vs. MLlib models
* ALS.recommendAll
* save/load
* perf-tests in Python
h2. Documentation
* create JIRAs for the user guide section of each new algorithm and assign them to the corresponding authors
* create example code for major components
** cross validation in python
** pipeline with complex feature transformations (scala/java/python)
** elastic-net (possibly with cross validation)
> MLlib 1.4 QA plan
> -----------------
>
> Key: SPARK-7443
> URL: https://issues.apache.org/jira/browse/SPARK-7443
> Project: Spark
> Issue Type: Umbrella
> Components: ML, MLlib
> Affects Versions: 1.4.0
> Reporter: Xiangrui Meng
> Assignee: Xiangrui Meng
> Priority: Critical
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)