Posted to dev@flink.apache.org by "Yordan Pavlov (Jira)" <ji...@apache.org> on 2020/05/18 15:33:00 UTC

[jira] [Created] (FLINK-17800) RocksDB optimizeForPointLookup results in missing time windows

Yordan Pavlov created FLINK-17800:
-------------------------------------

             Summary: RocksDB optimizeForPointLookup results in missing time windows
                 Key: FLINK-17800
                 URL: https://issues.apache.org/jira/browse/FLINK-17800
             Project: Flink
          Issue Type: Bug
          Components: Runtime / State Backends
    Affects Versions: 1.10.1, 1.10.0
            Reporter: Yordan Pavlov


+My Setup:+

We have been using the _RocksDB_ option _optimizeForPointLookup_ while running version 1.7 for years. Upon upgrading to Flink 1.10 we started seeing strange behavior: missing time windows in a streaming Flink job. For the purpose of testing I experimented with previous Flink versions (1.8, 1.9, 1.9.3) and none of them showed the problem.

 

A sample of the code demonstrating the problem is here:


{code:java}
val datastream = env
  .addSource(KafkaSource.keyedElements(config.kafkaElements, List(config.kafkaBootstrapServer)))

val result = datastream
  .keyBy(_ => 1)
  .timeWindow(Time.milliseconds(5000))
  // a WindowedStream needs a window function before printing;
  // any one will do, e.g. keep the first element of each window
  .reduce((a, _) => a)
  .print()
{code}
 

 

The source consists of 3 streams (either 3 Kafka partitions or 3 Kafka topics); the elements in each stream have separately increasing timestamps. The timestamps are used as event time and start from 1, increasing by 1 across the streams. The first partition would consist of timestamps 1, 2, 10, 15..., the second of 4, 5, 6, 11..., the third of 3, 7, 8, 9...
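For reproduction purposes, the event-time extraction described above might be sketched as follows. The element type and its {{timestamp}} field are assumptions for illustration; the report does not show the actual schema:

{code:java}
// Hypothetical element type; the actual schema is not part of the report.
case class Element(key: Int, timestamp: Long)

val withTimestamps = datastream
  // Timestamps within each partition are ascending, so the simple
  // ascending-timestamp extractor is sufficient for this reproduction.
  .assignAscendingTimestamps(_.timestamp)
{code}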

 

+What I observe:+

The time windows open as I expect for the first 127 timestamps. Then there is a huge gap with no opened windows; if the source has many elements, the next open window has a timestamp in the thousands. A gap of hundreds of elements is created, with what appear to be 'lost' elements. Those elements are not reported as late (tested with the _sideOutputLateData_ operator). The way we have been using the option is by setting it inside the config like so:

??etherbi.rocksDB.columnOptions.optimizeForPointLookup=268435456??

We have been using it for performance reasons, as we have a huge RocksDB state.
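The {{etherbi.*}} key above is our own configuration wrapper, not a Flink option. For reference, the equivalent effect can be achieved directly through Flink's RocksDB options hook; a minimal sketch, assuming the (deprecated in 1.10) _OptionsFactory_ interface and a checkpoint path of your choosing:

{code:java}
import org.apache.flink.contrib.streaming.state.{OptionsFactory, RocksDBStateBackend}
import org.rocksdb.{ColumnFamilyOptions, DBOptions}

val backend = new RocksDBStateBackend("hdfs:///flink/checkpoints") // path is an example

backend.setOptions(new OptionsFactory {
  override def createDBOptions(currentOptions: DBOptions): DBOptions =
    currentOptions

  override def createColumnOptions(currentOptions: ColumnFamilyOptions): ColumnFamilyOptions =
    // Same value as in the config key above; note that RocksDB interprets
    // the argument as a block cache size, so verify the expected unit.
    currentOptions.optimizeForPointLookup(268435456L)
})

env.setStateBackend(backend)
{code}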



--
This message was sent by Atlassian Jira
(v8.3.4#803005)