Posted to issues@spark.apache.org by "Yin Huai (JIRA)" <ji...@apache.org> on 2015/07/03 07:10:04 UTC

[jira] [Resolved] (SPARK-8776) Increase the default MaxPermSize

     [ https://issues.apache.org/jira/browse/SPARK-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yin Huai resolved SPARK-8776.
-----------------------------
       Resolution: Fixed
    Fix Version/s: 1.5.0
                   1.4.1

Issue resolved by pull request 7196
[https://github.com/apache/spark/pull/7196]

> Increase the default MaxPermSize
> --------------------------------
>
>                 Key: SPARK-8776
>                 URL: https://issues.apache.org/jira/browse/SPARK-8776
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Yin Huai
>             Fix For: 1.4.1, 1.5.0
>
>
> Since 1.4.0, Spark SQL has used isolated class loaders to separate the Hive dependencies for the metastore from those for execution, which increases PermGen memory consumption. How about we increase the default size from 128m to 256m? It seems the change we need to make is https://github.com/apache/spark/blob/3c0156899dc1ec1f7dfe6d7c8af47fa6dc7d00bf/launcher/src/main/java/org/apache/spark/launcher/AbstractCommandBuilder.java#L139.
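
For context, the fix amounts to changing a hard-coded JVM flag where the launcher assembles the java command line. Below is a minimal sketch of that kind of change; the class and method names are illustrative stand-ins, not the actual Spark launcher source:

    // Illustrative sketch only -- names here are hypothetical; the real change
    // lives in launcher/src/main/java/org/apache/spark/launcher/AbstractCommandBuilder.java.
    import java.util.ArrayList;
    import java.util.List;

    class LauncherCommandSketch {
        List<String> buildJavaCommand() {
            List<String> cmd = new ArrayList<>();
            cmd.add("java");
            // The isolated Hive class loaders added in 1.4.0 load extra classes
            // into PermGen, so the old 128m default can be exhausted.
            cmd.add("-XX:MaxPermSize=256m");  // was 128m
            return cmd;
        }
    }

Note that -XX:MaxPermSize only affects pre-Java-8 HotSpot JVMs; Java 8 removed PermGen in favor of Metaspace, where the flag is ignored.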



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org