Posted to issues@spark.apache.org by "Sean Zhong (JIRA)" <ji...@apache.org> on 2016/06/17 17:23:05 UTC

[jira] [Commented] (SPARK-15340) Limit the size of the map used to cache JobConfs to avoid OOM

    [ https://issues.apache.org/jira/browse/SPARK-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336511#comment-15336511 ] 

Sean Zhong commented on SPARK-15340:
------------------------------------

[~DoingDone9]

I did some tests and didn't see the OOM you observed. Can you elaborate on how the OOM happens?

1. What are the error message and stack trace when the OOM happens?
2. Are you running Apache Spark in local (single-machine) mode or in cluster mode?
3. What are the driver's configuration settings (for example, spark.driver.memory)?
4. Is it a PermGen (Metaspace) OOM or a heap-space OOM? (See the heap-dump commands after this list if you need to capture one.)
5. Which client are you using to connect to the Thrift JDBC server?
6. Do you have a reproducible script so that we can try to reproduce the problem in our environment?
7. Is your Spark deployment running on YARN?
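
If a heap dump isn't already available, a minimal way to capture one (assuming a standard HotSpot JVM; the dump path below is just an example) is to start the driver with

    --driver-java-options "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/driver-oom.hprof"

or to dump a live driver process with jmap:

    jmap -dump:live,format=b,file=/tmp/driver-oom.hprof <driver-pid>

A tool such as Eclipse MAT can then show how many JobConf instances are retained and what is keeping them alive.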


 

> Limit the size of the map used to cache JobConfs to avoid OOM
> -------------------------------------------------------------
>
>                 Key: SPARK-15340
>                 URL: https://issues.apache.org/jira/browse/SPARK-15340
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.5.0, 1.6.0
>            Reporter: Zhongshuai Pei
>            Priority: Critical
>
> When I run TPC-DS (ORC) through the JDBC server, the driver always OOMs.
> I found tens of thousands of JobConf instances in the heap dump, and these JobConf objects cannot be recycled (garbage-collected), so we should limit the size of the map used to cache JobConfs.
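
The ticket doesn't specify how the bound should be enforced. As a minimal sketch (not Spark's actual implementation; JobConfCache, getOrCreate, and MAX_CACHED_JOBCONFS are hypothetical names), an LRU map capped at a fixed size would let evicted JobConfs be garbage-collected instead of accumulating:

    import java.util.{Collections, LinkedHashMap, Map => JMap}
    import org.apache.hadoop.mapred.JobConf

    object JobConfCache {
      // Hypothetical bound; the right value depends on workload and driver heap.
      private val MAX_CACHED_JOBCONFS = 1000

      // Access-ordered LinkedHashMap: removeEldestEntry evicts the
      // least-recently-used JobConf once the cap is exceeded, so evicted
      // entries become eligible for garbage collection.
      private val cache: JMap[String, JobConf] = Collections.synchronizedMap(
        new LinkedHashMap[String, JobConf](64, 0.75f, true) {
          override def removeEldestEntry(eldest: JMap.Entry[String, JobConf]): Boolean =
            size() > MAX_CACHED_JOBCONFS
        })

      // Return the cached JobConf for `key`, creating and caching it if absent.
      def getOrCreate(key: String, create: => JobConf): JobConf = {
        val cached = cache.get(key)
        if (cached != null) cached
        else {
          val conf = create
          cache.put(key, conf)
          conf
        }
      }
    }

Guava's CacheBuilder (with maximumSize or softValues) would give the same bound with less code, and soft values would additionally let the JVM reclaim cached entries under memory pressure.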



