Posted to issues@spark.apache.org by "Lijia Liu (JIRA)" <ji...@apache.org> on 2017/09/28 10:12:00 UTC

[jira] [Comment Edited] (SPARK-22144) ExchangeCoordinator will not combine the partitions of a 0-sized pre-shuffle

    [ https://issues.apache.org/jira/browse/SPARK-22144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16183884#comment-16183884 ] 

Lijia Liu edited comment on SPARK-22144 at 9/28/17 10:11 AM:
-------------------------------------------------------------

[~yhuai] Would you please look at this when you have time?


was (Author: liutang123):
[~yhuai]: Please look at this issue!

> ExchangeCoordinator will not combine the partitions of a 0-sized pre-shuffle
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-22144
>                 URL: https://issues.apache.org/jira/browse/SPARK-22144
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.2.0
>         Environment: Spark version: 2.2
> master: yarn
> deploy-mode: cluster
>            Reporter: Lijia Liu
>
> A simple case:
> spark.conf.set("spark.sql.adaptive.enabled", "true")
> val df = spark.range(0, 0, 1, 10).selectExpr("id as key1").groupBy("key1").count()
> val exchange = df.queryExecution.executedPlan.collect{case e: org.apache.spark.sql.execution.exchange.ShuffleExchange => e}(0)
> println(exchange.outputPartitioning.numPartitions) // The value will be spark.sql.shuffle.partitions, i.e. ExchangeCoordinator does not take effect. At the same time, a job with spark.sql.shuffle.partitions tasks will be submitted.
> In my opinion, when the data is empty, this job is useless and superfluous.
> This job wastes resources, especially when spark.sql.shuffle.partitions is set very large.
> So, as far as I'm concerned, when the number of pre-shuffle partitions is 0, the number of post-shuffle partitions should be 0 instead of spark.sql.shuffle.partitions (see the sketch after this quoted description).
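
To make the proposed behavior concrete, below is a minimal, self-contained Scala sketch of coalescing logic with the zero-partition guard. It is illustrative only: the names CoalesceSketch, estimatePartitionStartIndices, and the size-based coalescing loop here are a simplified model of what an exchange coordinator might do, not Spark's actual ExchangeCoordinator code.

    // Hypothetical sketch: given per-partition byte sizes reported by the
    // pre-shuffle stage, compute post-shuffle partition start indices.
    // Proposed guard: empty (or all-empty) pre-shuffle input yields 0
    // post-shuffle partitions instead of spark.sql.shuffle.partitions.
    object CoalesceSketch {
      def estimatePartitionStartIndices(
          preShufflePartitionSizes: Array[Long],
          targetPostShuffleInputSize: Long): Array[Int] = {
        if (preShufflePartitionSizes.isEmpty || preShufflePartitionSizes.sum == 0L) {
          return Array.empty[Int]  // nothing to shuffle: no post-shuffle partitions
        }
        // Greedily pack consecutive pre-shuffle partitions until the running
        // size would exceed the target, then start a new post-shuffle partition.
        val startIndices = scala.collection.mutable.ArrayBuffer(0)
        var currentSize = 0L
        for (i <- preShufflePartitionSizes.indices) {
          if (currentSize + preShufflePartitionSizes(i) > targetPostShuffleInputSize
              && currentSize > 0L) {
            startIndices += i
            currentSize = 0L
          }
          currentSize += preShufflePartitionSizes(i)
        }
        startIndices.toArray
      }

      def main(args: Array[String]): Unit = {
        // Empty pre-shuffle: proposed result is 0 post-shuffle partitions.
        println(estimatePartitionStartIndices(Array.empty[Long], 64L * 1024 * 1024).length) // 0
        // Non-empty pre-shuffle: partitions coalesced toward the target size.
        println(estimatePartitionStartIndices(Array(10L, 20L, 30L), 25L).mkString(",")) // 0,1,2
      }
    }

Under this sketch's assumptions, the empty-input reproduction above would yield 0 post-shuffle partitions rather than spark.sql.shuffle.partitions, so no superfluous tasks would need to be scheduled for empty data.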



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org