Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2015/04/29 20:43:06 UTC

[jira] [Commented] (SPARK-7214) Unrolling never evicts blocks when MemoryStore is nearly full

    [ https://issues.apache.org/jira/browse/SPARK-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14519915#comment-14519915 ] 

Apache Spark commented on SPARK-7214:
-------------------------------------

User 'woggle' has created a pull request for this issue:
https://github.com/apache/spark/pull/5784

> Unrolling never evicts blocks when MemoryStore is nearly full
> -------------------------------------------------------------
>
>                 Key: SPARK-7214
>                 URL: https://issues.apache.org/jira/browse/SPARK-7214
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager
>            Reporter: Charles Reiss
>            Priority: Minor
>
> When less than spark.storage.unrollMemoryThreshold (default 1MB) is left in the MemoryStore, new blocks that are computed with unrollSafely (e.g. any cached RDD split) will always fail unrolling, even if old blocks could be dropped to accommodate them.
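
The failure mode described above can be sketched with a toy model (the class and method names below are hypothetical, not Spark's actual MemoryStore API): the initial unroll reservation consults only free space and never considers dropping existing cached blocks.

```python
# Toy model of the reported behavior, NOT Spark's actual MemoryStore code.
# It illustrates a reservation path that checks only free space and never
# evicts cached blocks to make room.

UNROLL_THRESHOLD = 1 * 1024 * 1024  # spark.storage.unrollMemoryThreshold default (1MB)

class ToyMemoryStore:
    def __init__(self, max_memory):
        self.max_memory = max_memory
        self.blocks = {}  # block_id -> size, for cached (droppable) blocks

    def used(self):
        return sum(self.blocks.values())

    def free(self):
        return self.max_memory - self.used()

    def cache_block(self, block_id, size):
        self.blocks[block_id] = size

    def try_reserve_unroll(self, amount):
        # The problematic pattern: only free space is consulted, so the
        # reservation fails even when dropping blocks could make room.
        return self.free() >= amount

store = ToyMemoryStore(max_memory=10 * 1024 * 1024)            # 10 MB store
store.cache_block("rdd_0_0", 10 * 1024 * 1024 - 512 * 1024)    # leaves 512 KB free

# Less than 1 MB is free, so the initial unroll reservation fails --
# even though dropping rdd_0_0 would free about 9.5 MB.
assert store.free() < UNROLL_THRESHOLD
assert store.try_reserve_unroll(UNROLL_THRESHOLD) is False
```

In this sketch, a fix would have try_reserve_unroll fall back to evicting droppable blocks before giving up, which is the direction the linked pull request takes.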



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org