Posted to issues@spark.apache.org by "Yuming Wang (JIRA)" <ji...@apache.org> on 2018/10/16 07:17:00 UTC

[jira] [Created] (SPARK-25740) Setting some configurations requires invalidateStatsCache

Yuming Wang created SPARK-25740:
-----------------------------------

             Summary: Setting some configurations requires invalidateStatsCache
                 Key: SPARK-25740
                 URL: https://issues.apache.org/jira/browse/SPARK-25740
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 3.0.0
            Reporter: Yuming Wang


How to reproduce:
{code:sql}
# spark-sql (first session)
create table t1 (a int) stored as parquet;
create table t2 (a int) stored as parquet;
insert into table t1 values (1);
insert into table t2 values (1);
explain select * from t1, t2 where t1.a = t2.a;
exit;

# spark-sql (second session)
explain select * from t1, t2 where t1.a = t2.a;
-- SortMergeJoin
set spark.sql.statistics.fallBackToHdfs=true;
explain select * from t1, t2 where t1.a = t2.a;
-- still SortMergeJoin, but it should be BroadcastHashJoin
exit;

# spark-sql (third session)
set spark.sql.statistics.fallBackToHdfs=true;
explain select * from t1, t2 where t1.a = t2.a;
-- BroadcastHashJoin
{code}
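The statistics a relation reports are computed once and then memoized on the plan (that memo is what {{LogicalPlanStats.invalidateStatsCache}} clears), and within a session the resolved relation itself is reused from the session catalog's relation cache, so flipping {{spark.sql.statistics.fallBackToHdfs}} in the second session never reaches the already-computed size. Below is a self-contained toy model of that memoization, not Spark source; the class and field names are illustrative only:
{code:scala}
// Toy model: a relation that memoizes its Statistics the first time they are
// read. Changing the config afterwards is invisible until the cache is cleared.
case class Statistics(sizeInBytes: BigInt)

class ToyConf {
  var fallBackToHdfs: Boolean = false
}

class ToyRelation(conf: ToyConf) {
  private var statsCache: Option[Statistics] = None

  def stats: Statistics = statsCache.getOrElse {
    // Without the HDFS fallback the size defaults to a huge value (no stats in
    // the metastore), which steers the planner towards SortMergeJoin.
    val computed =
      if (conf.fallBackToHdfs) Statistics(BigInt(8))        // tiny table on disk
      else Statistics(BigInt(Long.MaxValue))                 // "unknown" default
    statsCache = Some(computed)
    computed
  }

  def invalidateStatsCache(): Unit = statsCache = None
}

object StaleStatsDemo extends App {
  val conf = new ToyConf
  val t1   = new ToyRelation(conf)

  println(t1.stats.sizeInBytes)  // Long.MaxValue -> SortMergeJoin
  conf.fallBackToHdfs = true
  println(t1.stats.sizeInBytes)  // still Long.MaxValue: the memoized value wins
  t1.invalidateStatsCache()
  println(t1.stats.sizeInBytes)  // 8 -> small enough for BroadcastHashJoin
}
{code}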
We need {{LogicalPlanStats.invalidateStatsCache}} to be called so that the cached statistics are recomputed, but it seems the only thing we can do when executing the SET command is to invalidate all cached table relations:
{code}
// Keys whose value feeds into table size statistics; changing them should
// force cached statistics to be recomputed.
val invalidateAllCachedTablesKeys = Set(
  SQLConf.ENABLE_FALL_BACK_TO_HDFS_FOR_STATS.key,
  SQLConf.DEFAULT_SIZE_IN_BYTES.key
)

sparkSession.conf.set(key, value)
// Drop the cached table relations so their statistics are rebuilt with the
// new value on the next access.
if (invalidateAllCachedTablesKeys.contains(key)) {
  sparkSession.sessionState.catalog.invalidateAllCachedTables()
}
{code}
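Invalidating every cached table relation is coarser than calling {{invalidateStatsCache}} on just the affected plans, since all cached relations will have to be re-resolved on their next access, but from the SET command path only the session catalog is readily reachable, so clearing its relation cache seems to be the simplest way to make the new value take effect.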


