Posted to issues@spark.apache.org by "Josh Rosen (JIRA)" <ji...@apache.org> on 2016/03/16 01:49:33 UTC
[jira] [Created] (SPARK-13921) Store serialized blocks as multiple chunks in MemoryStore
Josh Rosen created SPARK-13921:
----------------------------------
Summary: Store serialized blocks as multiple chunks in MemoryStore
Key: SPARK-13921
URL: https://issues.apache.org/jira/browse/SPARK-13921
Project: Spark
Issue Type: Improvement
Components: Block Manager
Reporter: Josh Rosen
Assignee: Josh Rosen
Instead of storing serialized blocks in individual ByteBuffers, the BlockManager should be capable of storing a serialized block in multiple chunks, each occupying a separate ByteBuffer.
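To illustrate the idea, here is a minimal sketch (in Java, not Spark's actual Scala code; the class and method names are hypothetical) of an output stream that accumulates bytes in fixed-size chunks rather than one contiguous, doubling array, and exposes the result as a list of ByteBuffers with the final chunk trimmed to its actual length:

```java
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of chunked block storage; not the real Spark API.
class ChunkedOutputStream extends OutputStream {
    private final int chunkSize;
    private final List<byte[]> chunks = new ArrayList<>();
    private int posInChunk;  // write position within the last chunk

    ChunkedOutputStream(int chunkSize) {
        this.chunkSize = chunkSize;
        this.posInChunk = chunkSize;  // forces allocation on first write
    }

    @Override
    public void write(int b) {
        if (posInChunk == chunkSize) {   // last chunk full: allocate a new one
            chunks.add(new byte[chunkSize]);
            posInChunk = 0;
        }
        chunks.get(chunks.size() - 1)[posInChunk++] = (byte) b;
    }

    /** Exact number of bytes written so far. */
    long size() {
        return chunks.isEmpty()
            ? 0
            : (long) (chunks.size() - 1) * chunkSize + posInChunk;
    }

    /**
     * Expose the data as a list of ByteBuffers. The final chunk is copied
     * down to the bytes actually written, so no backing-array space is
     * wasted once the stream is sealed.
     */
    List<ByteBuffer> toChunks() {
        List<ByteBuffer> out = new ArrayList<>();
        for (int i = 0; i < chunks.size(); i++) {
            if (i == chunks.size() - 1) {
                out.add(ByteBuffer.wrap(Arrays.copyOf(chunks.get(i), posInChunk)));
            } else {
                out.add(ByteBuffer.wrap(chunks.get(i)));
            }
        }
        return out;
    }
}
```

Because each chunk is allocated at a fixed size, growing the stream never requires copying previously written bytes, and the total footprint overshoots the block size by at most one chunk.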
This change will help to improve the efficiency of memory allocation and the accuracy of memory accounting when serializing blocks. Our current serialization code uses a {{ByteBufferOutputStream}}, which doubles and re-allocates its backing byte array; this increases the peak memory requirements during serialization (since we need to hold extra memory while expanding the array). In addition, we currently don't account for the extra wasted space at the end of the ByteBuffer's backing array, so a 129 megabyte serialized block may actually consume 256 megabytes of memory. After switching to storing blocks in multiple chunks, we'll be able to efficiently trim the backing buffers so that no space is wasted.
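The arithmetic behind the 129 MB example can be checked directly. The snippet below (illustrative only; the 4 MB chunk size is an assumption, not a value from this ticket) compares the capacity a doubling buffer ends up with against fixed-size chunks:

```java
// Illustrative arithmetic, not Spark code.
long blockBytes = 129L * 1024 * 1024;   // a 129 MB serialized block

// A doubling byte-array buffer grows 1, 2, 4, ... bytes until the block fits.
long doublingCapacity = 1;
while (doublingCapacity < blockBytes) doublingCapacity *= 2;
// During the final re-allocation the old and new arrays are live at once.
long doublingPeak = doublingCapacity + doublingCapacity / 2;

// Chunked storage rounds up only to the next chunk boundary.
long chunkBytes = 4L * 1024 * 1024;     // hypothetical 4 MB chunks
long chunkedCapacity = ((blockBytes + chunkBytes - 1) / chunkBytes) * chunkBytes;

System.out.println(doublingCapacity / (1024 * 1024)); // 256 MB retained
System.out.println(doublingPeak / (1024 * 1024));     // 384 MB peak
System.out.println(chunkedCapacity / (1024 * 1024));  // 132 MB, last chunk trimmable
```

So for this block, doubling retains 256 MB (with a 384 MB transient peak), while 4 MB chunks consume 132 MB before trimming the final chunk.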
This change is also a prerequisite to being able to cache blocks which are larger than 2GB (although full support for that depends on several other changes which have not been implemented yet).
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org