Posted to issues@spark.apache.org by "Apache Spark (Jira)" <ji...@apache.org> on 2021/03/10 01:46:00 UTC
[jira] [Commented] (SPARK-34682) Regression in "operating on canonicalized plan" check in CustomShuffleReaderExec
[ https://issues.apache.org/jira/browse/SPARK-34682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17298477#comment-17298477 ]
Apache Spark commented on SPARK-34682:
--------------------------------------
User 'andygrove' has created a pull request for this issue:
https://github.com/apache/spark/pull/31793
> Regression in "operating on canonicalized plan" check in CustomShuffleReaderExec
> --------------------------------------------------------------------------------
>
> Key: SPARK-34682
> URL: https://issues.apache.org/jira/browse/SPARK-34682
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 3.1.1
> Reporter: Andy Grove
> Priority: Minor
> Fix For: 3.2.0, 3.1.2
>
>
> In Spark 3.0.2, if I attempt to execute a canonicalized version of CustomShuffleReaderExec, I get the error "operating on canonicalized plan", as expected.
> There is a regression in Spark 3.1.1 where this check can never be reached, because a new call to sendDriverMetrics was added before the check. That method fails when operating on a canonicalized plan because it assumes the presence of metrics that a canonicalized plan does not have.
> {code:java}
> private lazy val shuffleRDD: RDD[_] = {
>   sendDriverMetrics()
>   shuffleStage.map { stage =>
>     stage.shuffle.getShuffleRDD(partitionSpecs.toArray)
>   }.getOrElse {
>     throw new IllegalStateException("operating on canonicalized plan")
>   }
> }
> {code}
> The specific error looks like this:
> {code:java}
> java.util.NoSuchElementException: key not found: numPartitions
>   at scala.collection.immutable.Map$EmptyMap$.apply(Map.scala:101)
>   at scala.collection.immutable.Map$EmptyMap$.apply(Map.scala:99)
>   at org.apache.spark.sql.execution.adaptive.CustomShuffleReaderExec.sendDriverMetrics(CustomShuffleReaderExec.scala:122)
>   at org.apache.spark.sql.execution.adaptive.CustomShuffleReaderExec.shuffleRDD$lzycompute(CustomShuffleReaderExec.scala:182)
>   at org.apache.spark.sql.execution.adaptive.CustomShuffleReaderExec.shuffleRDD(CustomShuffleReaderExec.scala:181)
>   at org.apache.spark.sql.execution.adaptive.CustomShuffleReaderExec.doExecuteColumnar(CustomShuffleReaderExec.scala:196)
> {code}
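> For reference, here is a minimal standalone sketch (not Spark code) of the failure mode the stack trace shows: canonicalization drops the metrics, so the lookup goes to the empty map and throws.
> {code:java}
> // Standalone sketch of the failure seen above: a canonicalized plan carries
> // no metrics, so looking up "numPartitions" hits Map$EmptyMap$.apply and
> // throws NoSuchElementException.
> val metrics = Map.empty[String, Long]         // metrics are absent on a canonicalized plan
> val numPartitions = metrics("numPartitions")  // java.util.NoSuchElementException: key not found: numPartitions
> {code}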
> I think the fix is simply to avoid calling sendDriverMetrics when the plan is canonicalized; I am planning to create a PR for this. A rough sketch of what I have in mind is below.
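> A minimal sketch of the change (not the final patch; it assumes shuffleStage is empty only for a canonicalized plan, which is what the existing getOrElse branch already relies on):
> {code:java}
> // Sketch only: send driver metrics only when we actually have a shuffle stage,
> // i.e. not when operating on a canonicalized copy of the plan, so the existing
> // IllegalStateException guard is reached first.
> private lazy val shuffleRDD: RDD[_] = {
>   shuffleStage.map { stage =>
>     sendDriverMetrics()
>     stage.shuffle.getShuffleRDD(partitionSpecs.toArray)
>   }.getOrElse {
>     throw new IllegalStateException("operating on canonicalized plan")
>   }
> }
> {code}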
--
This message was sent by Atlassian Jira
(v8.3.4#803005)