Posted to issues@spark.apache.org by "Patrick Wendell (JIRA)" <ji...@apache.org> on 2014/06/21 22:58:24 UTC

[jira] [Commented] (SPARK-1392) Local spark-shell Runs Out of Memory With Default Settings

    [ https://issues.apache.org/jira/browse/SPARK-1392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039946#comment-14039946 ] 

Patrick Wendell commented on SPARK-1392:
----------------------------------------

I mentioned this on the pull request, but I think this was an instance of SPARK-1777. I'm running some tests locally on the pull request there to determine whether that was the case.

> Local spark-shell Runs Out of Memory With Default Settings
> ----------------------------------------------------------
>
>                 Key: SPARK-1392
>                 URL: https://issues.apache.org/jira/browse/SPARK-1392
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 0.9.0
>         Environment: OS X 10.9.2, Java 1.7.0_51, Scala 2.10.3
>            Reporter: Pat McDonough
>
> Using the spark-0.9.0 Hadoop2 binary from the project download page, running the spark-shell locally in its out-of-the-box configuration, and attempting to cache all the attached data, Spark OOMs with: java.lang.OutOfMemoryError: GC overhead limit exceeded
> You can work around the issue by either decreasing spark.storage.memoryFraction or increasing SPARK_MEM, e.g. as sketched below.
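>
> For example (a minimal sketch, not part of the original report: it assumes the Spark 0.9-era launch scripts, where the SPARK_MEM environment variable sizes the JVM heap, SPARK_JAVA_OPTS passes -D system properties, and spark.storage.memoryFraction defaults to 0.6):
>
>     # Reserve less of the heap for the block cache (0.4 instead of the
>     # assumed 0.6 default), or raise the overall JVM heap, before
>     # launching the shell:
>     export SPARK_JAVA_OPTS="-Dspark.storage.memoryFraction=0.4"
>     export SPARK_MEM=4g
>     ./bin/spark-shell
>
> Either knob alone may be enough; shrinking the cache fraction leaves more headroom for execution, while raising SPARK_MEM grows the heap outright.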



--
This message was sent by Atlassian JIRA
(v6.2#6252)