Posted to issues@spark.apache.org by "Apache Spark (Jira)" <ji...@apache.org> on 2021/07/01 03:34:00 UTC

[jira] [Commented] (SPARK-35961) Only use local shuffle reader for REBALANCE_PARTITIONS_BY_NONE without CustomShuffleReaderExec

    [ https://issues.apache.org/jira/browse/SPARK-35961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17372351#comment-17372351 ] 

Apache Spark commented on SPARK-35961:
--------------------------------------

User 'ulysses-you' has created a pull request for this issue:
https://github.com/apache/spark/pull/33165

> Only use local shuffle reader for REBALANCE_PARTITIONS_BY_NONE without CustomShuffleReaderExec
> ----------------------------------------------------------------------------------------------
>
>                 Key: SPARK-35961
>                 URL: https://issues.apache.org/jira/browse/SPARK-35961
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>    Affects Versions: 3.2.0
>            Reporter: XiDuo You
>            Priority: Major
>
> After [SPARK-35725](https://issues.apache.org/jira/browse/SPARK-35725), we may split a partition if it is skewed. So the partition-number check `bytesByPartitionId.length == partitionSpecs.size` can be wrong when some partitions are coalesced while others are split into smaller ones.
> Note that this is unlikely to happen in the real world, since the rebalance uses round-robin partitioning.
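> A minimal, self-contained sketch (hypothetical data, not the actual Spark code) of how the length check can be fooled when a skewed partition is split while other partitions are coalesced:
>
> ```scala
> object LengthCheckSketch extends App {
>   // 4 map-output partitions, with partition 3 heavily skewed.
>   val bytesByPartitionId = Array(10L, 10L, 10L, 100L)
>
>   // After AQE: partitions 0 and 1 are coalesced into one spec, and the
>   // skewed partition 3 is split into two specs. The spec count is 4
>   // again, so the check passes even though the partitioning changed.
>   val partitionSpecs = Seq("coalesced(0..1)", "partition(2)", "skewSplit(3a)", "skewSplit(3b)")
>
>   assert(bytesByPartitionId.length == partitionSpecs.size) // passes, but wrongly implies "unchanged"
> }
> ```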
> On the other hand, after [SPARK-34899](https://issues.apache.org/jira/browse/SPARK-34899), we fall back to the original plan if partitions cannot be coalesced. So the assumption that the shuffle stage has a `CustomShuffleReaderExec` with no effect is always false for the `REBALANCE_PARTITIONS_BY_NONE` shuffle origin. That is, if no rule took effect, there is no `CustomShuffleReaderExec` at all.
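> A hedged sketch of the condition in the title (a hypothetical helper, not the actual patch; see the linked PR for the real change): only use the local shuffle reader when the shuffle origin is `REBALANCE_PARTITIONS_BY_NONE` and no `CustomShuffleReaderExec` sits on top of the stage:
>
> ```scala
> import org.apache.spark.sql.execution.SparkPlan
> import org.apache.spark.sql.execution.adaptive.{CustomShuffleReaderExec, ShuffleQueryStageExec}
> import org.apache.spark.sql.execution.exchange.REBALANCE_PARTITIONS_BY_NONE
>
> object LocalReaderCondition {
>   // Hypothetical helper: the local shuffle reader is safe only when the
>   // shuffle came from a round-robin rebalance and no AQE reader rule
>   // took effect, i.e. no CustomShuffleReaderExec wraps the stage.
>   def canUseLocalShuffleReader(stage: ShuffleQueryStageExec, parent: SparkPlan): Boolean =
>     stage.shuffle.shuffleOrigin == REBALANCE_PARTITIONS_BY_NONE &&
>       !parent.isInstanceOf[CustomShuffleReaderExec]
> }
> ```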



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org