Posted to issues@flink.apache.org by "Stefan Richter (JIRA)" <ji...@apache.org> on 2017/05/29 12:52:04 UTC

[jira] [Updated] (FLINK-6761) Limitation for maximum state size per key in RocksDB backend

     [ https://issues.apache.org/jira/browse/FLINK-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stefan Richter updated FLINK-6761:
----------------------------------
    Description: 
RocksDB's JNI bridge allows putting and getting only {{byte[]}} as keys and values. 
States that internally use RocksDB's merge operator, e.g. {{ListState}}, can currently merge multiple {{byte[]}} values under one key, which RocksDB internally concatenates into a single value. 
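
For illustration, a minimal stand-alone sketch of that merge behaviour against plain RocksJava (the {{StringAppendOperator}} and the on-disk path are illustrative stand-ins; Flink wires up its own merge operator configuration):

{code:java}
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.StringAppendOperator;

public class MergeGrowthDemo {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options options = new Options()
                .setCreateIfMissing(true)
                .setMergeOperator(new StringAppendOperator());
             RocksDB db = RocksDB.open(options, "/tmp/merge-demo")) {

            byte[] key = "list-state-key".getBytes();
            byte[] element = new byte[1024]; // stands in for one serialized list element

            // Each merge() registers another operand under the same key;
            // RocksDB concatenates all operands (delimiter-separated here)
            // into one ever-growing value.
            for (int i = 0; i < 3; i++) {
                db.merge(key, element);
            }

            // get() materializes the whole concatenated value as a single
            // byte[] - this is the call that cannot work once the value
            // exceeds Integer.MAX_VALUE bytes.
            byte[] merged = db.get(key);
            System.out.println("merged value size: " + merged.length);
        }
    }
}
{code}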

This becomes problematic as soon as the accumulated state size under one key grows beyond {{Integer.MAX_VALUE}} bytes, the upper bound on a Java array's length. Whenever Java code tries to access a state that grew beyond this limit through merging, we encounter an {{ArrayIndexOutOfBoundsException}} at best and a segfault at worst.
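
To make the scale concrete (the 100-byte element size below is an assumption, not a number from this issue):

{code:java}
// Back-of-the-envelope arithmetic with an assumed element size of 100 bytes
// (illustrative only); the threshold scales inversely with element size.
public class StateSizeLimit {
    public static void main(String[] args) {
        long elementSize = 100;            // assumed serialized size per list element
        long limit = Integer.MAX_VALUE;    // 2,147,483,647 bytes
        // ~21.4 million merged elements until the concatenated value can no
        // longer be returned as a single Java byte[].
        System.out.println("elements until overflow: " + (limit / elementSize));
    }
}
{code}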

This behaviour is especially problematic because RocksDB silently stores states that exceed the limit; the code only fails, unexpectedly, on access (e.g. during checkpointing).

I think the only proper solution is for RocksDB's JNI bridge to build on {{(Direct)ByteBuffer}} - which can work around the size limitation - as input and output types, instead of plain {{byte[]}}.
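
As a rough illustration of that direction - the interface below is hypothetical, not an existing RocksJava signature - a chunked read could look like:

{code:java}
import java.nio.ByteBuffer;

// Purely hypothetical sketch of a chunk-capable accessor; no such method
// exists on the RocksJava API as of this issue.
public interface ByteBufferRocksAccess {

    /**
     * Copies up to {@code value.remaining()} bytes of the value stored under
     * {@code key}, starting at {@code valueOffset}, into {@code value}, and
     * returns the total size of the stored value. Because callers can read
     * in chunks via the offset, values larger than Integer.MAX_VALUE bytes
     * never need to be materialized as one Java array.
     */
    long get(ByteBuffer key, long valueOffset, ByteBuffer value);
}
{code}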

  was:
RocksDB's JNI bridge allows putting and getting only `byte[]` as keys and values. 
States that internally use RocksDB's merge operator, e.g. `ListState`, can currently merge multiple `byte[]` values under one key, which RocksDB internally concatenates into a single value. 

This becomes problematic as soon as the accumulated state size under one key grows beyond `Integer.MAX_VALUE` bytes, the upper bound on a Java array's length. Whenever Java code tries to access a state that grew beyond this limit through merging, we encounter an `ArrayIndexOutOfBoundsException` at best and a segfault at worst.

This behaviour is especially problematic because RocksDB silently stores states that exceed the limit; the code only fails, unexpectedly, on access (e.g. during checkpointing).

I think the only proper solution is for RocksDB's JNI bridge to build on `(Direct)ByteBuffer` - which can work around the size limitation - as input and output types, instead of plain `byte[]`.


> Limitation for maximum state size per key in RocksDB backend
> ------------------------------------------------------------
>
>                 Key: FLINK-6761
>                 URL: https://issues.apache.org/jira/browse/FLINK-6761
>             Project: Flink
>          Issue Type: Bug
>          Components: State Backends, Checkpointing
>    Affects Versions: 1.3.0, 1.2.1
>            Reporter: Stefan Richter
>            Priority: Critical
>
> RocksDB's JNI bridge allows putting and getting only {{byte[]}} as keys and values. 
> States that internally use RocksDB's merge operator, e.g. {{ListState}}, can currently merge multiple {{byte[]}} values under one key, which RocksDB internally concatenates into a single value. 
> This becomes problematic as soon as the accumulated state size under one key grows beyond {{Integer.MAX_VALUE}} bytes, the upper bound on a Java array's length. Whenever Java code tries to access a state that grew beyond this limit through merging, we encounter an {{ArrayIndexOutOfBoundsException}} at best and a segfault at worst.
> This behaviour is especially problematic because RocksDB silently stores states that exceed the limit; the code only fails, unexpectedly, on access (e.g. during checkpointing).
> I think the only proper solution is for RocksDB's JNI bridge to build on {{(Direct)ByteBuffer}} - which can work around the size limitation - as input and output types, instead of plain {{byte[]}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)