Posted to issues@spark.apache.org by "Joseph K. Bradley (JIRA)" <ji...@apache.org> on 2016/04/15 23:56:25 UTC

[jira] [Updated] (SPARK-4591) Algorithm/model parity audit for spark.ml (Scala)

     [ https://issues.apache.org/jira/browse/SPARK-4591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph K. Bradley updated SPARK-4591:
-------------------------------------
    Summary: Algorithm/model parity audit for spark.ml (Scala)  (was: Algorithm/model parity in spark.ml (Scala))

> Algorithm/model parity audit for spark.ml (Scala)
> -------------------------------------------------
>
>                 Key: SPARK-4591
>                 URL: https://issues.apache.org/jira/browse/SPARK-4591
>             Project: Spark
>          Issue Type: Umbrella
>          Components: ML
>            Reporter: Xiangrui Meng
>            Priority: Critical
>
> This is an umbrella JIRA for porting spark.mllib implementations to use the DataFrame-based API defined under spark.ml.  We want to achieve feature parity for the next release.
> Subtasks cover major algorithm groups.  To pick up a review subtask, please:
> * Comment that you are working on it.
> * Compare the public APIs of spark.ml vs. spark.mllib.
> * Comment on all missing items within spark.ml: algorithms, models, methods, features, etc.
> * Check for existing JIRAs covering those items.  If none exists, create one and link it in your comment.
> This does *not* include:
> * Python: We can compare Scala vs. Python in spark.ml itself.
> * single-Row prediction: [SPARK-10413]
> Also, this does not yet include the following items (though it eventually will):
> * User-facing:
> ** Streaming ML
> ** evaluation
> ** pmml
> ** stat
> ** linalg [SPARK-13944]
> * Developer-facing:
> ** optimization
> ** random, rdd
> ** util
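
The API-comparison step above can be sketched as a simple set diff. This is a minimal, hypothetical illustration (the method lists below are invented examples, not the real API surfaces): in an actual audit the signatures would be collected from the spark.mllib and spark.ml scaladocs, and renames such as train vs. fit would have to be reconciled by hand before filing JIRAs.

```scala
object ParityAudit {
  // Hypothetical public API surfaces, written as "Class.method" strings.
  // A real audit would gather these from the scaladocs or via reflection.
  val mllibApi: Set[String] = Set(
    "LogisticRegression.setThreshold",
    "LogisticRegression.train",
    "KMeans.setInitialModel",
    "KMeans.train"
  )

  val mlApi: Set[String] = Set(
    "LogisticRegression.setThreshold",
    "LogisticRegression.fit",
    "KMeans.fit"
  )

  // Items present in spark.mllib with no counterpart in spark.ml.
  // Note this is only a first pass: train/fit are a rename, not a gap,
  // so the raw diff still needs manual review before JIRAs are filed.
  def missingInMl: Set[String] = mllibApi.diff(mlApi)
}
```

Each item that survives manual review of the diff would then get a comment on this umbrella and either a link to an existing JIRA or a new one.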



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
