Posted to issues@spark.apache.org by "Nicholas Chammas (Jira)" <ji...@apache.org> on 2021/12/20 19:37:00 UTC

[jira] (SPARK-5997) Increase partition count without performing a shuffle

    [ https://issues.apache.org/jira/browse/SPARK-5997 ]


    Nicholas Chammas deleted a comment on SPARK-5997:
    --------------------------------------------------

was (Author: nchammas):
[~tenstriker] - I believe that in your case you should be able to set {{spark.sql.files.maxRecordsPerFile}} to an appropriate limit. Spark will not shuffle the data, but it will still split your output across multiple files.
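
For reference, a minimal sketch of that workaround. Only the {{spark.sql.files.maxRecordsPerFile}} config key comes from the comment above; the application name, paths, and record limit are made up for illustration:

{code:scala}
// Sketch of the workaround from the comment above. The app name, paths,
// and record limit are hypothetical; only the config key is Spark's own.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("maxRecordsPerFile-sketch")
  .getOrCreate()

// Cap each output file at 1,000,000 records. Spark enforces this at write
// time by rolling over to a new file -- no shuffle is performed.
spark.conf.set("spark.sql.files.maxRecordsPerFile", "1000000")

val df = spark.read.parquet("/tmp/input")  // hypothetical input path
df.write.parquet("/tmp/output")            // output split across files
{code}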

> Increase partition count without performing a shuffle
> -----------------------------------------------------
>
>                 Key: SPARK-5997
>                 URL: https://issues.apache.org/jira/browse/SPARK-5997
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Andrew Ash
>            Priority: Major
>
> When decreasing the partition count with rdd.repartition() or rdd.coalesce(), the user can choose whether or not to perform a shuffle. However, when increasing the partition count, there is no such choice -- a shuffle always occurs.
> This Jira is to create an {{rdd.repartition(largeNum, shuffle=false)}} call that repartitions to a higher partition count without a shuffle.
> The motivating use case is to make individual partitions small enough that {{.toLocalIterator}} puts significantly less memory pressure on the driver, since it loads one partition at a time into the driver (see the sketch below).
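
A minimal sketch of the behavior described in the quoted issue, assuming an existing {{SparkContext}} named {{sc}}; the data and partition counts are arbitrary:

{code:scala}
// Sketch of the current RDD API behavior discussed in this issue.
// Assumes an existing SparkContext named sc; all numbers are arbitrary.
val rdd = sc.parallelize(1 to 1000000, 10)  // 10 partitions

// Decreasing the partition count: the caller can opt out of the shuffle.
val fewer = rdd.coalesce(5, shuffle = false)

// Asking coalesce for *more* partitions without a shuffle is a no-op:
// the result still has 10 partitions.
val stillTen = rdd.coalesce(100, shuffle = false)

// Increasing the partition count therefore always shuffles today.
val more = rdd.repartition(100)

// The motivating use case: toLocalIterator pulls one partition at a time
// into the driver, so smaller partitions mean less driver memory pressure.
more.toLocalIterator.foreach { x =>
  // process one element at a time on the driver
}
{code}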



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org