Posted to issues@spark.apache.org by "Andrew Or (JIRA)" <ji...@apache.org> on 2014/06/27 22:56:25 UTC

[jira] [Comment Edited] (SPARK-1777) Pass "cached" blocks directly to disk if memory is not large enough

    [ https://issues.apache.org/jira/browse/SPARK-1777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14046388#comment-14046388 ] 

Andrew Or edited comment on SPARK-1777 at 6/27/14 8:56 PM:
-----------------------------------------------------------

An easy way to reproduce this: run the Spark shell in local mode with default settings:
<code>
sc.parallelize(1 to 20 * 1000 * 1000, 1).persist().count()
</code>
Spark tries to unroll the entire partition to check whether it fits in the cache, but by then it's too late.
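(As a rough sanity check, not part of the original comment: 20 million boxed Ints, at about 16 bytes per java.lang.Integer plus a 4-byte array reference each, come to roughly 400 MB once unrolled, far more than the storage memory available under the small default local-mode heap, so the unroll itself can blow the JVM heap.)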




> Pass "cached" blocks directly to disk if memory is not large enough
> -------------------------------------------------------------------
>
>                 Key: SPARK-1777
>                 URL: https://issues.apache.org/jira/browse/SPARK-1777
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Patrick Wendell
>            Assignee: Andrew Or
>             Fix For: 1.1.0
>
>         Attachments: spark-1777-design-doc.pdf
>
>
> Currently in Spark we entirely unroll a partition and then check whether it will cause us to exceed the storage limit. This has an obvious problem - if the partition itself is enough to push us over the storage limit (and eventually over the JVM heap), it will cause an OOM.
> This can happen in cases where a single partition is very large or when someone is running examples locally with a small heap.
> https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/CacheManager.scala#L106
> We should think a bit about the most elegant way to fix this - it shares some similarities with the external aggregation code.
> A simple idea is to periodically check the size of the buffer as we are unrolling and see if we are over the memory limit. If we are, we could prepend the existing buffer to the iterator and write the whole thing out to disk.
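
A minimal sketch of that idea, for illustration only; `memoryLimit`, `estimateSize`, and `writeToDisk` are hypothetical stand-ins for Spark's real memory accounting and disk store, not the actual CacheManager API:
<code>
import scala.collection.mutable.ArrayBuffer

// Returns Some(values) if the whole partition fit in memory, or None if it
// was spilled to disk partway through the unroll.
def unrollSafely[T](values: Iterator[T],
                    memoryLimit: Long,
                    estimateSize: ArrayBuffer[T] => Long,
                    writeToDisk: Iterator[T] => Unit): Option[Seq[T]] = {
  val buffer = new ArrayBuffer[T]
  var count = 0L
  while (values.hasNext) {
    buffer += values.next()
    count += 1
    // Size estimation is expensive, so only check periodically.
    if (count % 1000 == 0 && estimateSize(buffer) > memoryLimit) {
      // Over the limit: prepend what has been unrolled so far to the rest
      // of the iterator and write the whole partition out to disk.
      writeToDisk(buffer.iterator ++ values)
      return None
    }
  }
  Some(buffer)  // The whole partition fit in memory.
}
</code>
The periodic check (every 1000 elements here) amortizes the cost of size estimation over the unroll, while bounding how far past the limit the buffer can grow before spilling.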



--
This message was sent by Atlassian JIRA
(v6.2#6252)