Posted to issues@spark.apache.org by "Patrick Wendell (JIRA)" <ji...@apache.org> on 2014/03/30 06:13:21 UTC

[jira] [Updated] (SPARK-1201) Do not materialize partitions whenever possible in BlockManager

     [ https://issues.apache.org/jira/browse/SPARK-1201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Patrick Wendell updated SPARK-1201:
-----------------------------------

    Reporter: Patrick Wendell  (was: Patrick Cogan)

> Do not materialize partitions whenever possible in BlockManager
> ---------------------------------------------------------------
>
>                 Key: SPARK-1201
>                 URL: https://issues.apache.org/jira/browse/SPARK-1201
>             Project: Apache Spark
>          Issue Type: New Feature
>          Components: Block Manager, Spark Core
>            Reporter: Patrick Wendell
>            Assignee: Andrew Or
>             Fix For: 1.0.0
>
>
> This is a slightly more complex version of SPARK-942, where we try to avoid unrolling iterators in other situations where it is possible. SPARK-942 focused on the case where the DISK_ONLY storage level was used. There are other cases, though, such as when data is stored serialized in memory but there is not enough memory left to store the RDD.
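
The trade-off the issue describes can be sketched as follows: fully unrolling a partition's iterator (e.g. via `toArray`) requires holding every element in memory at once, while consuming the iterator element by element keeps peak memory bounded. This is a minimal standalone sketch, not BlockManager code; `drainToSink` and the names below are hypothetical and only illustrate the idea of streaming a partition instead of materializing it.

```scala
// Hypothetical sketch: materializing vs. streaming a partition's iterator.
object UnrollSketch {
  // Streaming consumption: each element is handed to a sink (e.g. a disk
  // writer or serializer) one at a time, so peak memory stays O(1)
  // regardless of how large the partition is.
  def drainToSink[T](iter: Iterator[T])(sink: T => Unit): Long = {
    var count = 0L
    iter.foreach { elem => sink(elem); count += 1 }
    count
  }

  def main(args: Array[String]): Unit = {
    val partition = Iterator.range(0, 1000000)
    // Materializing alternative (what SPARK-1201 wants to avoid when possible):
    //   val all = partition.toArray   // O(n) memory up front
    val n = drainToSink(partition)(_ => ()) // sink is a no-op here
    println(n)
  }
}
```

The point of the change in the issue is to prefer the streaming path whenever the destination (disk, or a serialized in-memory store under memory pressure) does not require the whole partition to exist in memory at once.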



--
This message was sent by Atlassian JIRA
(v6.2#6252)