Posted to issues@spark.apache.org by "Herman van Hovell (JIRA)" <ji...@apache.org> on 2016/12/19 19:40:58 UTC

[jira] [Resolved] (SPARK-18700) getCached in HiveMetastoreCatalog not thread safe cause driver OOM

     [ https://issues.apache.org/jira/browse/SPARK-18700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Herman van Hovell resolved SPARK-18700.
---------------------------------------
       Resolution: Fixed
         Assignee: Li Yuanjian
    Fix Version/s: 2.2.0
                   2.1.1

> getCached in HiveMetastoreCatalog not thread safe cause driver OOM
> ------------------------------------------------------------------
>
>                 Key: SPARK-18700
>                 URL: https://issues.apache.org/jira/browse/SPARK-18700
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.6.1, 2.0.0, 2.1.1
>            Reporter: Li Yuanjian
>            Assignee: Li Yuanjian
>             Fix For: 2.1.1, 2.2.0
>
>
>     In our Spark SQL platform, each query uses the same HiveContext and runs in its own thread, and new data is appended to tables as new partitions every 30 minutes. After a new partition is added to table T, we must call refreshTable to evict T's entry from cachedDataSourceTables so that the new partition becomes visible to queries. 
>     For a table with many partitions and files (far more than spark.sql.sources.parallelPartitionDiscovery.threshold), the next query of table T starts a job that fetches all FileStatus objects in the listLeafFiles function. Because of the huge number of files, this job runs for several seconds, and during that window concurrent queries of table T also observe a cache miss and start their own jobs to fetch the FileStatus objects, because getCached is not thread safe. The duplicated listings eventually cause a driver OOM.
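For illustration, below is a minimal Scala sketch of the check-then-act race described above. The names (TableName, CachedRelation, buildRelation, getCachedRacy) are hypothetical stand-ins, not the actual HiveMetastoreCatalog code; the real cache is keyed and populated differently, but the race pattern is the same.

    import java.util.concurrent.ConcurrentHashMap

    // Hypothetical stand-ins for illustration only -- not Spark's real types.
    case class TableName(db: String, table: String)
    case class CachedRelation(leafFileCount: Long)

    object GetCachedRaceSketch {
      private val cache = new ConcurrentHashMap[TableName, CachedRelation]()

      // Stands in for the expensive listLeafFiles job: lists every leaf file
      // of the table and materializes the FileStatus results on the driver.
      private def buildRelation(t: TableName): CachedRelation = {
        // ... several seconds of work for a table with many partitions ...
        CachedRelation(leafFileCount = 0L)
      }

      // Check-then-act without synchronization: N threads that all miss the
      // cache each run buildRelation, so one refreshTable can fan out into
      // N concurrent listing jobs whose results pile up in driver memory.
      def getCachedRacy(t: TableName): CachedRelation = {
        val hit = cache.get(t)
        if (hit != null) {
          hit
        } else {
          val rel = buildRelation(t) // duplicated under contention
          cache.put(t, rel)
          rel
        }
      }

      // One way to close the race (Scala 2.12+ SAM conversion): computeIfAbsent
      // runs the loader at most once per key, so concurrent queries of the
      // same table share a single listing job.
      def getCachedAtomic(t: TableName): CachedRelation =
        cache.computeIfAbsent(t, (k: TableName) => buildRelation(k))
    }

In the atomic variant, concurrent callers that miss on the same table block until the first listing finishes instead of repeating it, trading duplicated work for waiting; the actual fix merged for this ticket serializes concurrent misses for the same table in a similar spirit.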



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org