Posted to issues@spark.apache.org by "Dongjoon Hyun (Jira)" <ji...@apache.org> on 2020/12/17 06:53:00 UTC

[jira] [Comment Edited] (SPARK-33822) TPCDS Q5 fails if spark.sql.adaptive.enabled=true

    [ https://issues.apache.org/jira/browse/SPARK-33822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17250832#comment-17250832 ] 

Dongjoon Hyun edited comment on SPARK-33822 at 12/17/20, 6:52 AM:
------------------------------------------------------------------

Hi, [~cloud_fan] and [~maropu]. Is this a known issue of AQE? I first hit this failure on the `master` branch, and then found that `branch-3.1` and Apache Spark 3.0.1 fail in the same way.

Also, cc [~hyukjin.kwon] since he is the release manager for Apache Spark 3.2.0.
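
For reference, the same query runs fine with AQE off, so a quick way to narrow this down is to diff the two plans. A minimal sketch (same query file as in the description below; the path is local to my machine, so adjust as needed):

{code}
# Load the TPCDS Q5 text once, then compare the plan with AQE off vs. on.
query = spark.sparkContext.wholeTextFiles("/Users/dongjoon/data/query/q5.sql").take(1)[0][1]

spark.conf.set("spark.sql.adaptive.enabled", "false")
spark.sql(query).explain()   # plan without AQE; executing it returns the expected rows

spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.sql(query).explain()   # AdaptiveSparkPlan; executing it hits the BroadcastExchange error below
{code}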




was (Author: dongjoon):
Hi, [~cloud_fan] and [~maropu]. Is this a known issue of AQE?

Also, cc [~hyukjin.kwon] since he is the release manager for Apache Spark 3.2.0.



> TPCDS Q5 fails if spark.sql.adaptive.enabled=true
> -------------------------------------------------
>
>                 Key: SPARK-33822
>                 URL: https://issues.apache.org/jira/browse/SPARK-33822
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.0.1, 3.1.0, 3.2.0
>            Reporter: Dongjoon Hyun
>            Priority: Major
>
> {code}
> >>> tables = ['call_center', 'catalog_page', 'catalog_returns', 'catalog_sales', 'customer', 'customer_address', 'customer_demographics', 'date_dim', 'household_demographics', 'income_band', 'inventory', 'item', 'promotion', 'reason', 'ship_mode', 'store', 'store_returns', 'store_sales', 'time_dim', 'warehouse', 'web_page', 'web_returns', 'web_sales', 'web_site']
> >>> for t in tables:
> ...     spark.sql("CREATE TABLE %s USING PARQUET LOCATION '/Users/dongjoon/data/10g/%s'" % (t, t))
> >>> spark.sql(spark.sparkContext.wholeTextFiles("/Users/dongjoon/data/query/q5.sql").take(1)[0][1]).show(10000)
> +---------------+--------------------+-------------+-----------+-------------+
> |        channel|                  id|        sales|    returns|       profit|
> +---------------+--------------------+-------------+-----------+-------------+
> |           null|                null|1143646603.07|30617460.71|-317540732.87|
> |catalog channel|                null| 393609478.06| 9451732.79| -44801262.72|
> |catalog channel|catalog_pageAAAAA...|         0.00|   39037.48|    -25330.29|
> ...
> +---------------+--------------------+-------------+-----------+-------------+
> >>> sql("set spark.sql.adaptive.enabled=true")
> >>> spark.sql(spark.sparkContext.wholeTextFiles("/Users/dongjoon/data/query/q5.sql").take(1)[0][1]).show(10000)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/Users/dongjoon/APACHE/spark-release/spark-3.0.1-bin-hadoop3.2/python/pyspark/sql/dataframe.py", line 440, in show
>     print(self._jdf.showString(n, 20, vertical))
>   File "/Users/dongjoon/APACHE/spark-release/spark-3.0.1-bin-hadoop3.2/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
>   File "/Users/dongjoon/APACHE/spark-release/spark-3.0.1-bin-hadoop3.2/python/pyspark/sql/utils.py", line 128, in deco
>     return f(*a, **kw)
>   File "/Users/dongjoon/APACHE/spark-release/spark-3.0.1-bin-hadoop3.2/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 328, in get_return_value
> py4j.protocol.Py4JJavaError: An error occurred while calling o160.showString.
> : java.lang.UnsupportedOperationException: BroadcastExchange does not support the execute() code path.
> 	at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.doExecute(BroadcastExchangeExec.scala:190)
> 	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:175)
> 	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:213)
> 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> 	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:210)
> 	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:171)
> 	at org.apache.spark.sql.execution.exchange.ReusedExchangeExec.doExecute(Exchange.scala:61)
> 	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:175)
> 	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:213)
> 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> 	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:210)
> 	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:171)
> 	at org.apache.spark.sql.execution.adaptive.QueryStageExec.doExecute(QueryStageExec.scala:115)
> 	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:175)
> 	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:213)
> 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> 	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:210)
> 	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:171)
> 	at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:316)
> 	at org.apache.spark.sql.execution.SparkPlan.executeCollectIterator(SparkPlan.scala:392)
> 	at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.$anonfun$relationFuture$1(BroadcastExchangeExec.scala:120)
> 	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$1(SQLExecution.scala:182)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> {code}
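>
> For anyone hitting this before a fix, a session-level workaround sketch (not a fix) is to switch AQE back off for the session:
> {code}
> spark.sql("set spark.sql.adaptive.enabled=false")
> spark.sql(spark.sparkContext.wholeTextFiles("/Users/dongjoon/data/query/q5.sql").take(1)[0][1]).show(10000)
> {code}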


