Posted to issues@spark.apache.org by "Patrick Wendell (JIRA)" <ji...@apache.org> on 2014/09/16 19:43:34 UTC

[jira] [Resolved] (SPARK-1201) Do not materialize partitions whenever possible in BlockManager

     [ https://issues.apache.org/jira/browse/SPARK-1201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Patrick Wendell resolved SPARK-1201.
------------------------------------
    Resolution: Duplicate

This was solved by SPARK-1777.

> Do not materialize partitions whenever possible in BlockManager
> ---------------------------------------------------------------
>
>                 Key: SPARK-1201
>                 URL: https://issues.apache.org/jira/browse/SPARK-1201
>             Project: Spark
>          Issue Type: New Feature
>          Components: Block Manager, Spark Core
>            Reporter: Patrick Wendell
>            Assignee: Andrew Or
>
> This is a slightly more complex version of SPARK-942, where we try to avoid unrolling iterators in other situations where it is possible. SPARK-942 focused on the case where the DISK_ONLY storage level was used. There are other cases, though, such as when data is stored serialized in memory but there is not enough memory left to store the RDD.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org