Posted to issues@spark.apache.org by "Joseph K. Bradley (JIRA)" <ji...@apache.org> on 2016/12/13 21:41:59 UTC
[jira] [Comment Edited] (SPARK-4591) Algorithm/model parity for spark.ml (Scala)
[ https://issues.apache.org/jira/browse/SPARK-4591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15746350#comment-15746350 ]
Joseph K. Bradley edited comment on SPARK-4591 at 12/13/16 9:41 PM:
--------------------------------------------------------------------
Oh, I see I should reorg how subtasks are done. Editing now...
was (Author: josephkb):
Good point. It should be. I'll add it.
> Algorithm/model parity for spark.ml (Scala)
> -------------------------------------------
>
> Key: SPARK-4591
> URL: https://issues.apache.org/jira/browse/SPARK-4591
> Project: Spark
> Issue Type: Umbrella
> Components: ML
> Reporter: Xiangrui Meng
> Priority: Critical
>
> This is an umbrella JIRA for porting spark.mllib implementations to use the DataFrame-based API defined under spark.ml. We want to achieve feature parity for the next release.
> Subtasks cover major algorithm groups. To pick up a review subtask, please:
> * Comment that you are working on it.
> * Compare the public APIs of spark.ml vs. spark.mllib.
> * Comment on all missing items within spark.ml: algorithms, models, methods, features, etc.
> * Check for existing JIRAs covering those items. If there is no existing JIRA, create one, and link it to your comment.
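> The comparison step above amounts to diffing the public surface of two classes. As a rough sketch only (the `OldApi`/`NewApi` classes below are hypothetical stand-ins, not actual Spark classes; real audits were done by reading the spark.mllib and spark.ml Scaladoc side by side), one could use Java reflection to list public methods present in the old API but missing from the new one:

```scala
// Hypothetical stand-in for a spark.mllib class and its spark.ml counterpart.
// In a real audit these would be e.g. a model class from each package.
class OldApi {
  def train(): Unit = ()
  def predict(): Double = 0.0
  def save(): Unit = ()
}

class NewApi {
  def fit(): Unit = ()       // spark.ml uses fit() on Estimators rather than train()
  def predict(): Double = 0.0
  def save(): Unit = ()
}

// Collect the names of the public methods declared directly on a class.
def publicMethods(c: Class[_]): Set[String] =
  c.getDeclaredMethods
    .filter(m => java.lang.reflect.Modifier.isPublic(m.getModifiers))
    .map(_.getName)
    .toSet

// Methods the old API exposes that the new API does not: candidates for
// "missing items" to comment on and file JIRAs for.
val missingInNew = publicMethods(classOf[OldApi]) -- publicMethods(classOf[NewApi])
println(missingInNew)
```

> A name-only diff like this misses renamed or restructured functionality (as `train` vs. `fit` here shows), so it is a starting checklist, not a substitute for reading the APIs.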
> This does *not* include:
> * Python: We can compare Scala vs. Python in spark.ml itself.
> * single-Row prediction: [SPARK-10413]
> Also, this does not include the following items (but will eventually):
> * User-facing:
> ** Streaming ML
> ** evaluation
> ** pmml
> ** stat
> ** linalg [SPARK-13944]
> * Developer-facing:
> ** optimization
> ** random, rdd
> ** util
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org