Posted to issues@spark.apache.org by "Andrew Or (JIRA)" <ji...@apache.org> on 2015/10/07 23:42:26 UTC

[jira] [Updated] (SPARK-10000) Consolidate cache memory management and execution memory management

     [ https://issues.apache.org/jira/browse/SPARK-10000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or updated SPARK-10000:
------------------------------
    Description: 
Memory management in Spark is currently broken down into two disjoint regions: one for execution and one for storage. The sizes of these regions are statically configured and fixed for the duration of the application.
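
For reference, the static split described above comes from two legacy settings. The defaults below are from the Spark 1.x line and are shown only to illustrate the fixed partitioning (exact values may differ by version):

    # spark-defaults.conf -- legacy static memory regions
    spark.storage.memoryFraction   0.6    # heap fraction reserved for cached blocks
    spark.shuffle.memoryFraction   0.2    # heap fraction reserved for execution (shuffles, joins, aggregations)

With these defaults, an application that never caches anything still leaves the 60% storage region idle.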

There are several limitations to this approach. It requires user expertise to avoid unnecessary spilling, and there are no sensible defaults that work well for all workloads. As a Spark user, I want Spark to manage memory more intelligently so that I do not need to worry about how to statically partition the execution (shuffle) memory fraction and the cache memory fraction. Most importantly, because the storage region stays reserved even when empty, applications that do not use caching can use only a small fraction of the heap space for execution, resulting in suboptimal performance.
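
A minimal sketch of the desired behavior is below. This is illustrative only and not Spark's actual MemoryManager API; the names (UnifiedMemoryPool, acquireExecution, acquireStorage, storageTarget) are hypothetical. The idea is a single pool in which execution may borrow memory that storage is not using, evicting cached blocks down to a soft storage target when necessary:

    // Illustrative sketch only -- not Spark's real MemoryManager.
    // One pool: execution may borrow whatever storage is not using,
    // and may evict cached blocks down to a soft storage target.
    class UnifiedMemoryPool(maxBytes: Long, storageTarget: Long) {
      private var executionUsed = 0L
      private var storageUsed   = 0L

      /** Grant up to `requested` bytes to execution, evicting cached
        * blocks above the soft target if the pool is otherwise full. */
      def acquireExecution(requested: Long): Long = synchronized {
        var free = maxBytes - executionUsed - storageUsed
        if (free < requested) {
          val evictable = math.max(0L, storageUsed - storageTarget)
          val toEvict   = math.min(evictable, requested - free)
          storageUsed  -= toEvict  // stands in for real block eviction
          free         += toEvict
        }
        val granted = math.min(requested, free)
        executionUsed += granted
        granted
      }

      /** Storage may use only what execution has not claimed. */
      def acquireStorage(requested: Long): Boolean = synchronized {
        val free = maxBytes - executionUsed - storageUsed
        if (requested <= free) { storageUsed += requested; true }
        else false
      }

      def releaseExecution(bytes: Long): Unit = synchronized {
        executionUsed = math.max(0L, executionUsed - bytes)
      }
    }

Under a scheme like this, a job with no cached blocks can claim essentially the whole pool for execution, while a cache-heavy job keeps at least storageTarget bytes of cached data resident under execution pressure.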



  was:
As a Spark user, I want Spark to manage the memory more intelligently so I do not need to worry about how to statically partition the execution (shuffle) memory fraction and cache memory fraction.



> Consolidate cache memory management and execution memory management
> -------------------------------------------------------------------
>
>                 Key: SPARK-10000
>                 URL: https://issues.apache.org/jira/browse/SPARK-10000
>             Project: Spark
>          Issue Type: Story
>          Components: Block Manager, Spark Core
>            Reporter: Reynold Xin
>


