Posted to issues@spark.apache.org by "Robert Ormandi (JIRA)" <ji...@apache.org> on 2016/04/27 00:40:13 UTC

[jira] [Commented] (SPARK-5997) Increase partition count without performing a shuffle

    [ https://issues.apache.org/jira/browse/SPARK-5997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259101#comment-15259101 ] 

Robert Ormandi commented on SPARK-5997:
---------------------------------------

Would it solve the problem if the method simply split each partition into N new ones uniformly? That way we would end up with N x originalNumberOfPartitions partitions, each containing approximately originalNumberOfObjectsPerPartition / N objects. N could be a parameter of the method.
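
A minimal sketch of that idea, written as a custom narrow-dependency RDD (the names SplitRDD and SplitPartition are hypothetical -- nothing like this exists in Spark Core today): each child partition re-reads its parent partition and keeps every N-th element, so nothing is shuffled over the network, at the cost of computing each parent partition N times unless it is cached.

{code:scala}
import scala.reflect.ClassTag

import org.apache.spark.{Dependency, NarrowDependency, Partition, TaskContext}
import org.apache.spark.rdd.RDD

// Hypothetical child partition: slice `slice` (in 0 until n) of the given parent partition.
case class SplitPartition(index: Int, parentPartition: Partition, slice: Int, n: Int)
  extends Partition

// Sketch only: exposes n child partitions per parent partition through a narrow
// dependency, so no data moves between executors. Each child iterates its parent
// partition and keeps every n-th element, which means the parent partition is
// recomputed n times unless the parent RDD is cached.
class SplitRDD[T: ClassTag](parent: RDD[T], n: Int)
  extends RDD[T](parent.context, Nil) {

  override def getPartitions: Array[Partition] =
    parent.partitions.flatMap { p =>
      (0 until n).map(i => SplitPartition(p.index * n + i, p, i, n): Partition)
    }

  override def getDependencies: Seq[Dependency[_]] = Seq(
    new NarrowDependency[T](parent) {
      // Child partition id / n identifies the single parent partition it reads.
      override def getParents(partitionId: Int): Seq[Int] = Seq(partitionId / n)
    }
  )

  override def compute(split: Partition, context: TaskContext): Iterator[T] = {
    val sp = split.asInstanceOf[SplitPartition]
    parent.iterator(sp.parentPartition, context)
      .zipWithIndex
      .collect { case (elem, i) if i % sp.n == sp.slice => elem }
  }
}
{code}

With something along those lines, new SplitRDD(rdd, 4) would turn 8 partitions into 32 without a shuffle; whether the repeated reads of each parent partition are acceptable is exactly the trade-off such an API would have to expose.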

> Increase partition count without performing a shuffle
> -----------------------------------------------------
>
>                 Key: SPARK-5997
>                 URL: https://issues.apache.org/jira/browse/SPARK-5997
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Andrew Ash
>
> When decreasing partition count with rdd.repartition() or rdd.coalesce(), the user has the ability to choose whether or not to perform a shuffle.  However, when increasing the partition count, there is no such option -- a shuffle always occurs.
> This Jira is to create a {{rdd.repartition(largeNum, shuffle=false)}} call that performs a repartition to a higher partition count without a shuffle.
> The motivating use case is to decrease the size of an individual partition enough that .toLocalIterator puts significantly less memory pressure on the driver, since it loads one partition at a time into the driver.
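
For readers less familiar with the current API, a small sketch of the behaviour described above (object and app names are arbitrary): coalesce can decrease the partition count without a shuffle, repartition to a larger count always shuffles, and toLocalIterator pulls one partition at a time into the driver.

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

object RepartitionDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("RepartitionDemo").setMaster("local[4]"))

    val rdd = sc.parallelize(1 to 1000000, 8)   // 8 initial partitions

    // Decreasing the partition count: the caller can opt out of the shuffle.
    val fewer = rdd.coalesce(2, shuffle = false)

    // Increasing the partition count: repartition(n) is coalesce(n, shuffle = true),
    // so a shuffle always occurs -- there is currently no shuffle = false variant.
    val more = rdd.repartition(64)

    // Motivating use case from the description: toLocalIterator loads one
    // partition at a time into the driver, so smaller partitions mean less
    // driver memory pressure.
    more.toLocalIterator.take(5).foreach(println)

    println(s"fewer: ${fewer.partitions.length}, more: ${more.partitions.length}")

    sc.stop()
  }
}
{code}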



