Posted to issues@flink.apache.org by "Max Michels (JIRA)" <ji...@apache.org> on 2015/01/07 15:32:34 UTC

[jira] [Commented] (FLINK-1320) Add an off-heap variant of the managed memory

    [ https://issues.apache.org/jira/browse/FLINK-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267675#comment-14267675 ] 

Max Michels commented on FLINK-1320:
------------------------------------

I'm working on a pull request which should incorporate most of your suggestions. For now, I think it is a good idea to implement both memory allocation variants (heap and off-heap). The variant used will be determined by a configuration variable, which lets us run further performance and stability tests. If the new off-heap allocation method turns out to be much more beneficial, we could later remove the heap variant.
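
As a rough illustration of the switch (the configuration key and class name below are placeholders, not the final API), the allocation path could look like this:

{code:java}
import java.nio.ByteBuffer;
import org.apache.flink.configuration.Configuration;

public class MemoryAllocationSwitch {

    // hypothetical key; the final name is not decided yet
    private static final String OFF_HEAP_KEY = "taskmanager.memory.off-heap";

    /** Allocates the backing storage for one memory page, on or off the JVM heap. */
    public static ByteBuffer allocatePage(Configuration config, int pageSize) {
        boolean offHeap = config.getBoolean(OFF_HEAP_KEY, false);
        return offHeap
                ? ByteBuffer.allocateDirect(pageSize)   // memory outside the JVM heap
                : ByteBuffer.wrap(new byte[pageSize]);  // regular heap byte[]
    }
}
{code}

Whichever variant the flag selects, the rest of the runtime would keep working against the {{MemorySegment}} abstraction, so the switch stays invisible to the operators.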

> Add an off-heap variant of the managed memory
> ---------------------------------------------
>
>                 Key: FLINK-1320
>                 URL: https://issues.apache.org/jira/browse/FLINK-1320
>             Project: Flink
>          Issue Type: Improvement
>          Components: Local Runtime
>            Reporter: Stephan Ewen
>            Priority: Minor
>
> For (nearly) all memory that Flink accumulates (in the form of sort buffers, hash tables, caching), we use a special way of representing data serialized across a set of memory pages. The big work lies in the way the algorithms are implemented to operate on pages, rather than on objects.
> The core class for the memory is the {{MemorySegment}}, which has all methods to set and get primitive values efficiently. It is a somewhat simpler (and faster) variant of a HeapByteBuffer.
> As such, it should be straightforward to create a version where the memory segment is not backed by a heap byte[], but by memory allocated outside the JVM, similar to the way the NIO DirectByteBuffers or the Netty direct buffers do it.
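> As a rough illustration (the class below is a placeholder, not the proposed API), reading and writing primitives against such off-heap memory could go through {{sun.misc.Unsafe}}, with the address taken from a DirectByteBuffer:
> {code:java}
> import java.lang.reflect.Field;
> import java.nio.ByteBuffer;
> import sun.misc.Unsafe;
>
> /** Sketch only: primitive access at an absolute off-heap address. */
> public class OffHeapAccessSketch {
>
>     private static final Unsafe UNSAFE = getUnsafe();
>
>     public static void main(String[] args) {
>         ByteBuffer direct = ByteBuffer.allocateDirect(4096);
>         long address = ((sun.nio.ch.DirectBuffer) direct).address();
>
>         UNSAFE.putLong(address + 16, 42L);                // set a long at offset 16
>         System.out.println(UNSAFE.getLong(address + 16)); // prints 42
>     }
>
>     private static Unsafe getUnsafe() {
>         try {
>             Field f = Unsafe.class.getDeclaredField("theUnsafe");
>             f.setAccessible(true);
>             return (Unsafe) f.get(null);
>         } catch (Exception e) {
>             throw new RuntimeException("sun.misc.Unsafe is not accessible", e);
>         }
>     }
> }
> {code}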
> This may have multiple advantages:
>   - We reduce the size of the JVM heap (garbage collected) and the number and size of long-lived objects. For large JVM sizes, this may improve performance quite a bit. Ultimately, we would in many cases reduce the JVM heap to 1/3 to 1/2 of its current size and keep the remaining memory outside the JVM.
>   - We save copies when we move memory pages to disk (spilling) or through the network (shuffling / broadcasting / forward piping), as sketched below.
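> For the spilling case, the saving comes from the fact that the JDK copies a heap buffer into a temporary direct buffer before handing it to the OS, while a direct buffer is passed through as-is. A minimal sketch (the class and method names are illustrative only):
> {code:java}
> import java.io.IOException;
> import java.nio.ByteBuffer;
> import java.nio.channels.FileChannel;
>
> public class SpillSketch {
>
>     /** Writes one memory page to the spill channel. */
>     public static void spillPage(FileChannel channel, ByteBuffer page) throws IOException {
>         page.flip();                  // switch the page from filling to draining
>         while (page.hasRemaining()) {
>             channel.write(page);      // no extra heap-to-direct copy if 'page' is direct
>         }
>         page.clear();                 // ready for reuse
>     }
> }
> {code}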
> The changes required to implement this are
>   - Add an {{UnmanagedMemorySegment}} that only stores the memory address as a long, plus the segment size. It is initialized from a DirectByteBuffer; see the sketch after this list.
>   - Allow the MemoryManager to allocate these MemorySegments, instead of the current ones.
>   - Make sure that the startup scripts pick up the mode and configure the heap size and the max direct memory properly.
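> A sketch of the first item above (names are placeholders; details will differ in the actual patch):
> {code:java}
> import java.nio.ByteBuffer;
>
> /** Sketch of an off-heap segment that keeps only the raw address and the size. */
> public class UnmanagedMemorySegment {
>
>     private final ByteBuffer buffer; // kept only so the off-heap memory stays alive
>     private final long address;      // absolute address of the first byte
>     private final int size;          // segment size in bytes
>
>     public UnmanagedMemorySegment(ByteBuffer directBuffer) {
>         if (!directBuffer.isDirect()) {
>             throw new IllegalArgumentException("segment must be backed by a direct buffer");
>         }
>         this.buffer = directBuffer;
>         this.address = ((sun.nio.ch.DirectBuffer) directBuffer).address();
>         this.size = directBuffer.capacity();
>     }
>
>     public long getAddress() {
>         return address;
>     }
>
>     public int size() {
>         return size;
>     }
> }
> {code}
> For the last item, the scripts would have to shrink -Xmx for the smaller heap and set -XX:MaxDirectMemorySize accordingly, since direct memory is limited separately from the heap.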
> Since the MemorySegment is probably the most performance critical class in Flink, we must take care that we do this right. The following are critical considerations:
>   - If we want both solutions (heap and off-heap) to exist side by side (configurable), we must make the base MemorySegment abstract and implement two versions (heap and off-heap); a sketch follows this list.
>   - To get the best performance, we need to make sure that only one implementation class gets loaded (or at least only one is ever used), to ensure optimal JIT de-virtualization and inlining.
>   - We should carefully measure the performance of both variants. From previous micro benchmarks, I remember that individual byte accesses in DirectByteBuffers (off-heap) were slightly slower than on-heap, while larger accesses were equally good or slightly better.
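> A sketch of the abstract split (class names and byte order are illustrative; the real accessors would cover all primitive types):
> {code:java}
> /** Sketch only: the abstract base with one of the two concrete variants. */
> public abstract class AbstractSegmentSketch {
>
>     protected final int size;
>
>     protected AbstractSegmentSketch(int size) {
>         this.size = size;
>     }
>
>     // the performance-critical accessors both variants must implement
>     public abstract long getLong(int index);
>
>     public abstract void putLong(int index, long value);
> }
>
> /** Heap variant backed by a byte[]; an off-heap twin would use the raw address instead. */
> final class HeapSegmentSketch extends AbstractSegmentSketch {
>
>     private final byte[] memory;
>
>     HeapSegmentSketch(byte[] memory) {
>         super(memory.length);
>         this.memory = memory;
>     }
>
>     @Override
>     public long getLong(int index) {
>         long value = 0L;
>         for (int i = 7; i >= 0; i--) {
>             value = (value << 8) | (memory[index + i] & 0xFFL); // little-endian read
>         }
>         return value;
>     }
>
>     @Override
>     public void putLong(int index, long value) {
>         for (int i = 0; i < 8; i++) {
>             memory[index + i] = (byte) (value >>> (8 * i)); // little-endian write
>         }
>     }
> }
> {code}
> If the configuration guarantees that only one of the two subclasses is ever instantiated, the JIT can de-virtualize and inline these calls, which is what the second point above aims at.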


