Posted to issues@spark.apache.org by "Yuming Wang (JIRA)" <ji...@apache.org> on 2019/05/09 13:17:00 UTC

[jira] [Commented] (SPARK-27631) Avoid repeatedly calculating table statistics when AUTO_SIZE_UPDATE_ENABLED is enabled

    [ https://issues.apache.org/jira/browse/SPARK-27631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836370#comment-16836370 ] 

Yuming Wang commented on SPARK-27631:
-------------------------------------

Related code paths:

{noformat}
https://github.com/apache/spark/blob/a2a41b7bf2bfdcd1cff242013716ac7bd84bdacd/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala#L1148-L1156
https://github.com/apache/hive/blob/rel/release-2.3.4/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java#L2048-L2052
https://github.com/apache/hive/blob/rel/release-2.3.4/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java#L271-L307
https://github.com/apache/hive/blob/rel/release-2.3.4/metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java#L269-L275
{noformat}
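For context, the MetaStoreUtils link above points at Hive's fast stats update, which refreshes {{totalSize}} during every {{alter_table}} unless the caller's environment context asks it not to (via {{StatsSetupConst.DO_NOT_UPDATE_STATS}}). A minimal self-contained Java sketch of that check, with simplified stand-in names rather than Hive's real API:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of Hive 2.3's fast stats update inside alter_table
// (see MetaStoreUtils.updateTableStatsFast linked above). The names
// here are illustrative stand-ins, not Hive's actual signatures.
class FastStatsSketch {
    // Mirrors the value of StatsSetupConst.DO_NOT_UPDATE_STATS
    static final String DO_NOT_UPDATE_STATS = "DO_NOT_UPDATE_STATS";

    // Refreshes totalSize in the table parameters unless the caller's
    // environment context asks the metastore to skip the fast update --
    // the opt-out a client could use to avoid writing statistics twice.
    static Map<String, String> updateTableStatsFast(Map<String, String> params,
                                                    long newTotalSize,
                                                    Map<String, String> envContext) {
        Map<String, String> updated = new HashMap<>(params);
        if ("true".equals(envContext.get(DO_NOT_UPDATE_STATS))) {
            return updated; // caller will compute and set the stats itself
        }
        updated.put("totalSize", Long.toString(newTotalSize));
        return updated;
    }
}
```

With an empty context the size is written (the first {{Updated size of table t1 to 2}} in the log below); with the skip flag set, the metastore leaves the parameters alone and Spark's own size calculation would be the only writer.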

> Avoid repeatedly calculating table statistics when AUTO_SIZE_UPDATE_ENABLED is enabled
> --------------------------------------------------------------------------------------
>
>                 Key: SPARK-27631
>                 URL: https://issues.apache.org/jira/browse/SPARK-27631
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.0.0
>            Reporter: Yuming Wang
>            Priority: Major
>
> How to reproduce:
> {code:java}
> build/sbt clean package -Phive -Phadoop-3.2
> export SPARK_PREPEND_CLASSES=true
> bin/spark-shell --conf spark.hadoop.hive.metastore.schema.verification=false --conf spark.hadoop.datanucleus.schema.autoCreateAll=true --conf spark.sql.statistics.size.autoUpdate.enabled=true{code}
> {code:java}
> sc.setLogLevel("INFO")
> spark.sql("create table t1(id int) using hive")
> spark.sql("insert into t1 values(1)")
> {code}
> {noformat}
> 19/05/03 21:38:53 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 468 ms on localhost (executor driver) (1/1)
> 19/05/03 21:38:53 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
> 19/05/03 21:38:53 INFO DAGScheduler: ResultStage 0 (sql at <console>:24) finished in 0.670 s
> 19/05/03 21:38:53 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
> 19/05/03 21:38:53 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage finished
> 19/05/03 21:38:53 INFO DAGScheduler: Job 0 finished: sql at <console>:24, took 0.710944 s
> 19/05/03 21:38:53 INFO FileFormatWriter: Write Job a1db667b-ff3a-454f-a7d1-a4d79d343e6b committed.
> 19/05/03 21:38:53 INFO FileFormatWriter: Finished processing stats for write job a1db667b-ff3a-454f-a7d1-a4d79d343e6b.
> 19/05/03 21:38:53 INFO HiveMetaStore: 0: get_table : db=default tbl=t1
> 19/05/03 21:38:53 INFO audit: ugi=root	ip=unknown-ip-addr	cmd=get_table : db=default tbl=t1
> 19/05/03 21:38:53 INFO HiveMetaStore: 0: get_table : db=default tbl=t1
> 19/05/03 21:38:53 INFO audit: ugi=root	ip=unknown-ip-addr	cmd=get_table : db=default tbl=t1
> 19/05/03 21:38:53 INFO SessionState: Could not get hdfsEncryptionShim, it is only applicable to hdfs filesystem.
> 19/05/03 21:38:53 INFO SessionState: Could not get hdfsEncryptionShim, it is only applicable to hdfs filesystem.
> 19/05/03 21:38:53 INFO HiveMetaStore: 0: alter_table: db=default tbl=t1 newtbl=t1
> 19/05/03 21:38:53 INFO audit: ugi=root	ip=unknown-ip-addr	cmd=alter_table: db=default tbl=t1 newtbl=t1
> 19/05/03 21:38:53 INFO log: Updating table stats fast for t1
> 19/05/03 21:38:53 INFO log: Updated size of table t1 to 2
> 19/05/03 21:38:53 INFO HiveMetaStore: 0: get_database: default
> 19/05/03 21:38:53 INFO audit: ugi=root	ip=unknown-ip-addr	cmd=get_database: default
> 19/05/03 21:38:53 INFO HiveMetaStore: 0: get_table : db=default tbl=t1
> 19/05/03 21:38:53 INFO audit: ugi=root	ip=unknown-ip-addr	cmd=get_table : db=default tbl=t1
> 19/05/03 21:38:53 INFO HiveMetaStore: 0: get_table : db=default tbl=t1
> 19/05/03 21:38:53 INFO audit: ugi=root	ip=unknown-ip-addr	cmd=get_table : db=default tbl=t1
> 19/05/03 21:38:53 INFO HiveMetaStore: 0: get_database: default
> 19/05/03 21:38:53 INFO audit: ugi=root	ip=unknown-ip-addr	cmd=get_database: default
> 19/05/03 21:38:53 INFO HiveMetaStore: 0: get_table : db=default tbl=t1
> 19/05/03 21:38:53 INFO audit: ugi=root	ip=unknown-ip-addr	cmd=get_table : db=default tbl=t1
> 19/05/03 21:38:53 INFO HiveMetaStore: 0: get_table : db=default tbl=t1
> 19/05/03 21:38:53 INFO audit: ugi=root	ip=unknown-ip-addr	cmd=get_table : db=default tbl=t1
> 19/05/03 21:38:53 INFO CommandUtils: Starting to calculate the total file size under path Some(file:/root/opensource/spark/spark-warehouse/t1).
> 19/05/03 21:38:53 INFO CommandUtils: It took 3 ms to calculate the total file size under path Some(file:/root/opensource/spark/spark-warehouse/t1).
> 19/05/03 21:38:53 INFO HiveMetaStore: 0: get_database: default
> 19/05/03 21:38:53 INFO audit: ugi=root	ip=unknown-ip-addr	cmd=get_database: default
> 19/05/03 21:38:53 INFO HiveMetaStore: 0: get_table : db=default tbl=t1
> 19/05/03 21:38:53 INFO audit: ugi=root	ip=unknown-ip-addr	cmd=get_table : db=default tbl=t1
> 19/05/03 21:38:53 INFO HiveMetaStore: 0: get_table : db=default tbl=t1
> 19/05/03 21:38:53 INFO audit: ugi=root	ip=unknown-ip-addr	cmd=get_table : db=default tbl=t1
> 19/05/03 21:38:53 INFO HiveMetaStore: 0: get_table : db=default tbl=t1
> 19/05/03 21:38:53 INFO audit: ugi=root	ip=unknown-ip-addr	cmd=get_table : db=default tbl=t1
> 19/05/03 21:38:53 INFO HiveMetaStore: 0: alter_table: db=default tbl=t1 newtbl=t1
> 19/05/03 21:38:53 INFO audit: ugi=root	ip=unknown-ip-addr	cmd=alter_table: db=default tbl=t1 newtbl=t1
> 19/05/03 21:38:53 INFO log: Updating table stats fast for t1
> 19/05/03 21:38:53 INFO log: Updated size of table t1 to 2
> {noformat}
> The log shows that {{Updated size of table t1 to 2}} was executed twice: once when the metastore's {{alter_table}} ran its fast stats update after the insert, and again after {{CommandUtils}} recalculated the total file size because {{spark.sql.statistics.size.autoUpdate.enabled}} is true.
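The sequence in the log can be modeled minimally: the insert path first triggers Hive's fast stats update inside {{alter_table}}, and then the auto-size-update setting makes Spark recompute the size and issue a second {{alter_table}}. A toy Java sketch (hypothetical names, not real Spark or Hive APIs) of why the size ends up written twice:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of the two stats writers visible in the log: Hive's
// fast-stats path and Spark's auto-size-update path. All names are
// hypothetical; this only illustrates the duplicated work.
class DoubleUpdateSketch {
    static final AtomicInteger statsWrites = new AtomicInteger();

    // Hive metastore: alter_table refreshes totalSize "fast".
    static void hiveAlterTable() { statsWrites.incrementAndGet(); }

    // Spark: CommandUtils scans the table location, then alters again.
    static void sparkAutoSizeUpdate() { statsWrites.incrementAndGet(); }

    // Insert path with spark.sql.statistics.size.autoUpdate.enabled=true
    static int insertIntoTable() {
        hiveAlterTable();       // first "Updated size of table t1 to 2"
        sparkAutoSizeUpdate();  // second "Updated size of table t1 to 2"
        return statsWrites.get();
    }
}
```

Avoiding the duplication would mean suppressing one of the two writers, e.g. having Spark tell the metastore to skip its fast update when Spark is going to compute the size itself.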



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
