Posted to issues@spark.apache.org by "Noritaka Sekiyama (Jira)" <ji...@apache.org> on 2020/06/28 04:28:00 UTC
[jira] [Updated] (SPARK-32112) Easier way to repartition/coalesce DataFrames based on the number of parallel tasks that Spark can process at the same time
[ https://issues.apache.org/jira/browse/SPARK-32112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Noritaka Sekiyama updated SPARK-32112:
--------------------------------------
Summary: Easier way to repartition/coalesce DataFrames based on the number of parallel tasks that Spark can process at the same time (was: Add a method to calculate the number of parallel tasks that Spark can process at the same time)
> Easier way to repartition/coalesce DataFrames based on the number of parallel tasks that Spark can process at the same time
> ---------------------------------------------------------------------------------------------------------------------------
>
> Key: SPARK-32112
> URL: https://issues.apache.org/jira/browse/SPARK-32112
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Affects Versions: 3.0.0
> Reporter: Noritaka Sekiyama
> Priority: Major
>
> Repartitioning/coalescing is important for optimizing a Spark application's performance; however, many users struggle to determine the right number of partitions.
> This issue proposes adding a method that calculates the number of parallel tasks that Spark can process at the same time.
> It will help Spark users determine the optimal number of partitions.
> Expected use-cases:
> - repartitioning with the calculated number of parallel tasks (see the sketch below)
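> A minimal sketch of the idea using APIs that already exist today: defaultParallelism (the total executor core count on most cluster managers) divided by spark.task.cpus approximates the number of concurrent task slots. The object name and sample data are illustrative, not the API proposed here:
> {code:scala}
> import org.apache.spark.sql.SparkSession
>
> object Spark32112Sketch {
>   def main(args: Array[String]): Unit = {
>     val spark = SparkSession.builder().appName("SPARK-32112-sketch").getOrCreate()
>     val sc = spark.sparkContext
>
>     // Approximate the number of tasks Spark can run concurrently:
>     // defaultParallelism is typically the total executor core count,
>     // and each task needs spark.task.cpus cores (1 by default).
>     val taskCpus = sc.getConf.getInt("spark.task.cpus", 1)
>     val parallelTasks = sc.defaultParallelism / taskCpus
>
>     // Expected use-case: repartition so every task slot stays busy.
>     val df = spark.range(1000000L).toDF("id")
>     val repartitioned = df.repartition(parallelTasks)
>
>     println(s"parallel tasks: $parallelTasks, " +
>       s"partitions: ${repartitioned.rdd.getNumPartitions}")
>
>     spark.stop()
>   }
> }
> {code}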