Posted to issues@spark.apache.org by "Éderson Cássio (JIRA)" <ji...@apache.org> on 2017/11/21 11:48:00 UTC
[jira] [Commented] (SPARK-22568) Split pair RDDs by keys - an efficient (maybe?) substitute to groupByKey
[ https://issues.apache.org/jira/browse/SPARK-22568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260616#comment-16260616 ]
Éderson Cássio commented on SPARK-22568:
----------------------------------------
Thanks for your response.
How can I filter by each distinct key without knowing the keys in advance? It's common for the keys to be determined by some processing routine. The only way I can filter by all of them is to do a {{collect}} first; before that, I don't even know how many there are.
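To make the pattern concrete, here is a minimal sketch of the workaround described in the issue (collect the distinct keys, then one {{filter}} per key). Names like {{categorized}} are hypothetical stand-ins for any pair RDD whose keys are only known at runtime:

```scala
import org.apache.spark.rdd.RDD

// `categorized` is a hypothetical RDD[(String, Int)] produced earlier;
// its set of keys is not known until runtime.
categorized.cache() // the RDD is scanned once per key below

// Steps (a) + (b): learn the set of keys -- this forces a job and a collect.
val keys: Array[String] = categorized.keys.distinct().collect()

// Step (c): one filter (and hence one pass over the cached data) per key.
val byKey: Map[String, RDD[(String, Int)]] =
  keys.map(k => k -> categorized.filter { case (key, _) => key == k }).toMap
```

The cost is the driver-side {{collect}} plus one full scan of the cached RDD per distinct key, which is exactly what the proposed "local repartition" would avoid.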
> Split pair RDDs by keys - an efficient (maybe?) substitute to groupByKey
> ------------------------------------------------------------------------
>
> Key: SPARK-22568
> URL: https://issues.apache.org/jira/browse/SPARK-22568
> Project: Spark
> Issue Type: New Feature
> Components: Spark Core
> Affects Versions: 2.2.0
> Reporter: Éderson Cássio
> Labels: features, performance, usability
>
> Sorry for any mistakes in filling out this big form... it's my first issue here :)
> Recently, I had the need to separate an RDD by some categorization. I was able to accomplish that in a few ways.
> First, the obvious: mapping each element to a pair, with the key being the category of the element. Then, using the good ol' {{groupByKey}}.
> Heeding the advice to avoid {{groupByKey}}, I failed to find a more efficient alternative. I ended up (a) obtaining the distinct list of element categories, (b) {{collect}}ing them and (c) making a call to {{filter}} for each category. Of course, before all that I {{cache}}d my initial RDD.
> So, I started to speculate: maybe it would be possible to make a number of RDDs from an initial pair RDD _without the need to shuffle the data_. It could be done by a kind of _local repartition_: first, each partition is split into several partitions by key; then the master groups the partitions with the same key into a new RDD. The operation returns a List or array containing the new RDDs.
> It's just a conjecture; I don't know whether it would be feasible in the current Spark Core architecture. But it would be great if it could be done.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org