Posted to issues@spark.apache.org by "Karuppayya (Jira)" <ji...@apache.org> on 2020/05/28 05:55:00 UTC
[jira] [Created] (SPARK-31850) DetermineTableStats rule computes stats multiple times
Karuppayya created SPARK-31850:
----------------------------------
Summary: DetermineTableStats rule computes stats multiple times
Key: SPARK-31850
URL: https://issues.apache.org/jira/browse/SPARK-31850
Project: Spark
Issue Type: Bug
Components: SQL
Affects Versions: 3.0.0
Reporter: Karuppayya
The DetermineTableStats rule computes stats multiple times.
There are a few rules that invoke
org.apache.spark.sql.catalyst.analysis.Analyzer#executeSameContext
* This method causes the logical plan to go through the analysis phase again
* It is invoked from rule(s) that belong to a batch which runs until a *Fixed point*, which implies that the analysis phase can run multiple times
* Example: https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala#L2138
* This is a cause for concern especially in the *DetermineTableStats* rule, where computing stats can be expensive for a large table (and the computation happens multiple times due to the fixed-point nature of the batch from which analysis is triggered)
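The behavior above can be illustrated without Spark. The following is a minimal, self-contained sketch (toy names, not Spark's actual API) of a fixed-point batch executor: every iteration of the batch re-invokes every rule, so a rule that does expensive work (standing in for stats computation) fires once per iteration rather than once overall.

{code:scala}
// Toy fixed-point "batch" executor. The "plan" is just an Int that other
// rules keep transforming until it stops changing. All names are illustrative.
object FixedPointSketch {
  var statsComputations = 0 // counts invocations of the "expensive" rule

  // Stands in for DetermineTableStats: does expensive work but returns
  // the plan unchanged, yet is still invoked on every batch iteration.
  def determineTableStats(plan: Int): Int = {
    statsComputations += 1
    plan
  }

  // A rule that actually changes the plan for a few iterations,
  // keeping the batch away from its fixed point.
  def otherRule(plan: Int): Int = if (plan < 3) plan + 1 else plan

  // Run the batch until a fixed point: every iteration runs every rule.
  def executeBatch(start: Int): Int = {
    var plan = start
    var changed = true
    while (changed) {
      val next = otherRule(determineTableStats(plan))
      changed = next != plan
      plan = next
    }
    plan
  }

  def main(args: Array[String]): Unit = {
    executeBatch(0)
    // The expensive rule fired on every iteration (4 times here),
    // even though it never changed the plan itself.
    println(statsComputations)
  }
}
{code}

Starting from 0, the batch needs three plan-changing iterations plus one stabilizing iteration, so the expensive rule fires four times for a single analysis.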
Repro steps
{code:scala}
// Create a Parquet-backed table, then analyze a first query over it.
spark.sql("create table c(id INT, name STRING) STORED AS PARQUET")
val df = spark.sql("select count(id) id from c group by name order by id")
// Triggering analysis invokes DetermineTableStats.
df.queryExecution.analyzed
{code}
Note:
* There is no log line in DetermineTableStats to indicate that a stats computation happened; a log line needs to be added, or a debugger attached, to observe it
* The above can be reproduced with the first query on a newly created table
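One possible mitigation (a hedged sketch, not Spark's actual implementation: all names here are hypothetical) is to memoize the computed stats per table, so that repeated analysis passes over the same plan reuse the first computation instead of rescanning the table:

{code:scala}
import scala.collection.mutable

// Illustrative only: caches a per-table "size" so repeated lookups during
// re-analysis do not repeat the expensive computation.
object CachedStatsSketch {
  var expensiveCalls = 0 // counts actual computations, for demonstration

  private val cache = mutable.Map.empty[String, Long]

  // Placeholder for the expensive work (in Spark, roughly the file-size
  // scan that DetermineTableStats performs for a table without stats).
  private def computeSizeInBytes(table: String): Long = {
    expensiveCalls += 1
    table.length.toLong * 1024 // fake "size", illustrative only
  }

  // getOrElseUpdate computes at most once per key, then serves the cache.
  def statsFor(table: String): Long =
    cache.getOrElseUpdate(table, computeSizeInBytes(table))

  def main(args: Array[String]): Unit = {
    statsFor("c"); statsFor("c"); statsFor("c") // repeated analysis passes
    println(expensiveCalls) // the computation ran only once
  }
}
{code}

Whether caching is safe in the real rule depends on invalidation (e.g. the table changing between passes), which this sketch ignores.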
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org