Posted to issues@spark.apache.org by "Andrew Or (JIRA)" <ji...@apache.org> on 2015/10/17 00:13:05 UTC

[jira] [Resolved] (SPARK-7214) Unrolling never evicts blocks when MemoryStore is nearly full

     [ https://issues.apache.org/jira/browse/SPARK-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or resolved SPARK-7214.
------------------------------
          Resolution: Fixed
            Assignee: Andrew Or
       Fix Version/s: 1.6.0
    Target Version/s: 1.6.0

Indirectly fixed through https://github.com/apache/spark/pull/9000 (SPARK-10956)

> Unrolling never evicts blocks when MemoryStore is nearly full
> -------------------------------------------------------------
>
>                 Key: SPARK-7214
>                 URL: https://issues.apache.org/jira/browse/SPARK-7214
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager
>            Reporter: Charles Reiss
>            Assignee: Andrew Or
>            Priority: Minor
>             Fix For: 1.6.0
>
>
> When less than spark.storage.unrollMemoryThreshold (default 1MB) is left in the MemoryStore, new blocks that are computed with unrollSafely (e.g. any cached RDD split) will always fail to unroll, even though old blocks could be dropped to accommodate them.
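
The failure mode described above can be sketched as follows. This is a hypothetical simplification, not Spark's actual MemoryStore code: the names `reserveUnrollMemory` and `freeMemory` are illustrative. The point is that the pre-fix reservation check only compares the request against currently free memory and never considers evicting existing cached blocks:

```scala
// Minimal sketch of the pre-fix unroll reservation (illustrative names,
// not Spark's actual API).
object UnrollSketch {
  // Default value of spark.storage.unrollMemoryThreshold
  val unrollMemoryThreshold: Long = 1L * 1024 * 1024 // 1MB

  // Pre-fix behavior: the initial reservation succeeds only if enough
  // memory is already free. Existing cached blocks are never dropped to
  // make room, so a nearly full store always rejects the unroll.
  def reserveUnrollMemory(freeMemory: Long): Boolean =
    freeMemory >= unrollMemoryThreshold

  def main(args: Array[String]): Unit = {
    val freeMemory = 512L * 1024 // store nearly full: only 512KB free
    val evictableBlocksExist = true // old blocks could be dropped, but aren't
    val succeeded = reserveUnrollMemory(freeMemory)
    println(s"unroll succeeded: $succeeded, evictable blocks: $evictableBlocksExist")
  }
}
```

Under this sketch the reservation fails whenever free memory dips below the threshold, regardless of how much evictable cached data the store holds, which matches the reported behavior.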



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org