Posted to issues@spark.apache.org by "Matei Zaharia (JIRA)" <ji...@apache.org> on 2014/11/06 07:53:34 UTC

[jira] [Resolved] (SPARK-565) Integrate spark in scala standard collection API

     [ https://issues.apache.org/jira/browse/SPARK-565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matei Zaharia resolved SPARK-565.
---------------------------------
    Resolution: Won't Fix

FYI, I'm going to close this because we've locked down the API for 1.X, and it's pretty clear that Spark's API can't fully fit into the Scala collections API (which has a lot of things we don't have, and vice versa). This is something we can investigate later, but it's unlikely that we'll want to bind the API to Scala even if we change pieces of it in the future.

> Integrate spark in scala standard collection API
> ------------------------------------------------
>
>                 Key: SPARK-565
>                 URL: https://issues.apache.org/jira/browse/SPARK-565
>             Project: Spark
>          Issue Type: New Feature
>            Reporter: tjhunter
>
> This is more a meta-bug / wish-list item than a real bug.
> Scala 2.9 provides an API for parallel collections that might be interesting to leverage, but mostly, as a user, I would like to be able to write a function like:
> def contrived_example(xs: Seq[Int]) = xs.map(_ * 2).sum
> and not have to care whether xs is an array, a Scala parallel collection, or an RDD. Given that RDDs already implement most of the Seq API, it seems mostly a matter of standardization. I am probably missing some subtle details here.
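As a rough illustration of the kind of unification the reporter is asking for, here is a minimal sketch built around a hypothetical typeclass. Note that MapSum is not part of Spark or the Scala standard library; it only shows one way the same function body could serve both a Seq and an RDD.

    import scala.reflect.ClassTag
    import org.apache.spark.rdd.RDD

    // Hypothetical typeclass (not part of Spark or the Scala standard library)
    // abstracting over containers that support map and an integer sum.
    trait MapSum[C[_]] {
      def map[A, B: ClassTag](c: C[A])(f: A => B): C[B]
      def sumInt(c: C[Int]): Int
    }

    object MapSum {
      // Instance for ordinary Scala sequences; the ClassTag is simply unused here.
      implicit val seqMapSum: MapSum[Seq] = new MapSum[Seq] {
        def map[A, B: ClassTag](c: Seq[A])(f: A => B): Seq[B] = c.map(f)
        def sumInt(c: Seq[Int]): Int = c.sum
      }
      // Instance for Spark RDDs; RDD.map requires the ClassTag, and fold
      // avoids the Double result type of RDD.sum().
      implicit val rddMapSum: MapSum[RDD] = new MapSum[RDD] {
        def map[A, B: ClassTag](c: RDD[A])(f: A => B): RDD[B] = c.map(f)
        def sumInt(c: RDD[Int]): Int = c.fold(0)(_ + _)
      }
    }

    // The issue's contrived example, written once for any container with a MapSum instance.
    def contrivedExample[C[_]](xs: C[Int])(implicit ms: MapSum[C]): Int =
      ms.sumInt(ms.map(xs)(_ * 2))

With both instances in implicit scope, contrivedExample(Seq(1, 2, 3)) and contrivedExample(someRdd) would compile against the same body. The catch, as the resolution above notes, is that the full Scala collections API is much larger than this and diverges from what RDDs can support, so a complete integration is much harder than this toy case suggests.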



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org