Posted to issues@spark.apache.org by "koert kuipers (JIRA)" <ji...@apache.org> on 2014/10/22 18:55:34 UTC

[jira] [Comment Edited] (SPARK-3655) Secondary sort

    [ https://issues.apache.org/jira/browse/SPARK-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14180155#comment-14180155 ] 

koert kuipers edited comment on SPARK-3655 at 10/22/14 4:54 PM:
----------------------------------------------------------------

i am not sure repartitionAndSortWithinPartitions does what i want. what i want to do, for a given RDD[(K, V)], is use the sort-based shuffle to group by key but sort by (K, V), so that for each key the values come out sorted in the resulting RDD.

i could do something like map an RDD[(K, V)] to an RDD[((K, V), V)] and then use sortByKey, which does result in the values being sorted for each key, but if i do that i have no guarantee that all values for a given key end up in the same partition.
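
for illustration, a rough, untested sketch of the composite-key idea described above, assuming a Spark version that has repartitionAndSortWithinPartitions; a custom partitioner that hashes only K would be one way to keep all values for a key in the same partition while sorting by (K, V). KeyPartitioner and secondarySort are made-up names, not anything in Spark itself:

import org.apache.spark.Partitioner
import org.apache.spark.SparkContext._  // implicit conversions to OrderedRDDFunctions (pre-1.3)
import org.apache.spark.rdd.RDD
import scala.reflect.ClassTag

// hypothetical partitioner that only looks at K inside the composite (K, V) key,
// so all records for a given key land in the same partition
class KeyPartitioner(val numPartitions: Int) extends Partitioner {
  def getPartition(key: Any): Int = key match {
    case (k, _) =>
      val h = k.hashCode % numPartitions
      if (h < 0) h + numPartitions else h
  }
}

// hypothetical helper: shuffle by K only, but sort within each partition by (K, V)
def secondarySort[K : Ordering, V : Ordering : ClassTag](
    rdd: RDD[(K, V)], numPartitions: Int): RDD[((K, V), V)] =
  rdd.map { case (k, v) => ((k, v), v) }
     .repartitionAndSortWithinPartitions(new KeyPartitioner(numPartitions))
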

maybe i am missing something...
best, koert


was (Author: koert):
i am not sure repartitionAndSortWithinPartitions does what i want. what i want to do, for a given RDD[(K, V)], is use the sort-based shuffle to group by key but sort by (K, V), so that for each key the values come out sorted in the resulting RDD.

> Secondary sort
> --------------
>
>                 Key: SPARK-3655
>                 URL: https://issues.apache.org/jira/browse/SPARK-3655
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core
>    Affects Versions: 1.1.0
>            Reporter: koert kuipers
>            Priority: Minor
>
> Now that Spark has a sort-based shuffle, can we expect a secondary sort soon? There are some use cases where getting a sorted iterator of values per key is helpful.
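
for the "sorted iterator of values per key" use case mentioned in the issue description, here is a rough, untested sketch of how it could be recovered from the output of the secondarySort sketch above, by grouping consecutive records within each already-sorted partition. groupSortedByKey is a made-up name:

import org.apache.spark.rdd.RDD
import scala.collection.mutable.ArrayBuffer

// hypothetical helper: within each partition, records for a key are consecutive
// and already sorted by (K, V), so consecutive grouping yields sorted values per key
def groupSortedByKey[K, V](sorted: RDD[((K, V), V)]): RDD[(K, Seq[V])] =
  sorted.mapPartitions { iter =>
    new Iterator[(K, Seq[V])] {
      private val buffered = iter.buffered
      def hasNext: Boolean = buffered.hasNext
      def next(): (K, Seq[V]) = {
        val key = buffered.head._1._1
        val values = ArrayBuffer[V]()
        while (buffered.hasNext && buffered.head._1._1 == key) {
          values += buffered.next()._2
        }
        (key, values)
      }
    }
  }
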


