Posted to dev@flink.apache.org by "Sayat Satybaldiyev (JIRA)" <ji...@apache.org> on 2018/12/16 11:38:00 UTC

[jira] [Created] (FLINK-11171) Unexpected timestamp deserialization failure in RocksDB state backend

Sayat Satybaldiyev created FLINK-11171:
------------------------------------------

             Summary: Unexpected timestamp deserialization failure in RocksDB state backend
                 Key: FLINK-11171
                 URL: https://issues.apache.org/jira/browse/FLINK-11171
             Project: Flink
          Issue Type: Bug
          Components: State Backends, Checkpointing
    Affects Versions: 1.7.0
            Reporter: Sayat Satybaldiyev


We have a job that joins two data streams via a ProcessFunction and uses ValueState with TTL on the RocksDB state backend. The job constantly fails to checkpoint due to a timestamp deserialization error.

TTL state config:

{code:java}
StateTtlConfig ttlConfig = StateTtlConfig
    .newBuilder(Time.hours(recommendationRetentionHr))
    .neverReturnExpired()
    .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
    .cleanupFullSnapshot()
    .build();
{code}
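
For context, the TTL config above is attached to the join's keyed state roughly as follows. This is a minimal sketch, not the actual job code: the CoProcessFunction, its type parameters, and the state/field names are illustrative; only the enableTimeToLive() wiring reflects how the TTL config reaches the RocksDB-backed ValueState.

{code:java}
// Illustrative sketch of the joining function; names and types are assumptions.
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.CoProcessFunction;
import org.apache.flink.util.Collector;

public class JoinWithTtl extends CoProcessFunction<String, String, String> {

    private final int recommendationRetentionHr;
    private transient ValueState<String> recommendationState;

    public JoinWithTtl(int recommendationRetentionHr) {
        this.recommendationRetentionHr = recommendationRetentionHr;
    }

    @Override
    public void open(Configuration parameters) {
        StateTtlConfig ttlConfig = StateTtlConfig
            .newBuilder(Time.hours(recommendationRetentionHr))
            .neverReturnExpired()
            .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
            .cleanupFullSnapshot()
            .build();

        ValueStateDescriptor<String> descriptor =
            new ValueStateDescriptor<>("recommendation", String.class);
        // TTL must be enabled on the descriptor before the state handle is created.
        descriptor.enableTimeToLive(ttlConfig);
        recommendationState = getRuntimeContext().getState(descriptor);
    }

    @Override
    public void processElement1(String recommendation, Context ctx, Collector<String> out) throws Exception {
        // First stream: remember the latest value per key (writes refresh the TTL: OnCreateAndWrite).
        recommendationState.update(recommendation);
    }

    @Override
    public void processElement2(String click, Context ctx, Collector<String> out) throws Exception {
        // Second stream: join against the stored value if it has not expired yet.
        String recommendation = recommendationState.value();
        if (recommendation != null) {
            out.collect(recommendation + "|" + click);
        }
    }
}
{code}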

Error:
{code:java}
2018-12-16 06:02:12,609 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Triggering checkpoint 31 @ 1544940132568 for job 7825029dc256542aa312c0b68ecf0631.
 2018-12-16 06:22:12,609 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Checkpoint 31 of job 7825029dc256542aa312c0b68ecf0631 expired before completing.
 2018-12-16 06:22:12,637 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Triggering checkpoint 32 @ 1544941332609 for job 7825029dc256542aa312c0b68ecf0631.
 2018-12-16 06:22:12,899 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Decline checkpoint 32 by task 176c8b3c3ff190d183415ab77b89344c of job 7825029dc256542aa312c0b68ecf0631.
 2018-12-16 06:22:12,900 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Discarding checkpoint 32 of job 7825029dc256542aa312c0b68ecf0631.
 java.lang.Exception: Could not materialize checkpoint 32 for operator joined-stream (1/6).
 at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.handleExecutionException(StreamTask.java:942)
 at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:884)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
 Caused by: java.util.concurrent.ExecutionException: org.apache.flink.util.FlinkRuntimeException: Unexpected timestamp deserialization failure
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:192)
 at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:53)
 at org.apache.flink.streaming.api.operators.OperatorSnapshotFinalizer.<init>(OperatorSnapshotFinalizer.java:47)
 at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:853)
 ... 5 more
 Caused by: org.apache.flink.util.FlinkRuntimeException: Unexpected timestamp deserialization failure
 at org.apache.flink.runtime.state.ttl.TtlStateSnapshotTransformer$TtlSerializedValueStateSnapshotTransformer.filterOrTransform(TtlStateSnapshotTransformer.java:94)
 at org.apache.flink.runtime.state.ttl.TtlStateSnapshotTransformer$TtlSerializedValueStateSnapshotTransformer.filterOrTransform(TtlStateSnapshotTransformer.java:79)
 at org.apache.flink.contrib.streaming.state.iterator.RocksTransformingIteratorWrapper.filterOrTransform(RocksTransformingIteratorWrapper.java:70)
 at org.apache.flink.contrib.streaming.state.iterator.RocksTransformingIteratorWrapper.seekToFirst(RocksTransformingIteratorWrapper.java:48)
 at org.apache.flink.contrib.streaming.state.iterator.RocksStatesPerKeyGroupMergeIterator.buildIteratorHeap(RocksStatesPerKeyGroupMergeIterator.java:128)
 at org.apache.flink.contrib.streaming.state.iterator.RocksStatesPerKeyGroupMergeIterator.<init>(RocksStatesPerKeyGroupMergeIterator.java:68)
 at org.apache.flink.contrib.streaming.state.snapshot.RocksFullSnapshotStrategy$SnapshotAsynchronousPartCallable.writeKVStateData(RocksFullSnapshotStrategy.java:312)
 at org.apache.flink.contrib.streaming.state.snapshot.RocksFullSnapshotStrategy$SnapshotAsynchronousPartCallable.writeSnapshotToOutputStream(RocksFullSnapshotStrategy.java:258)
 at org.apache.flink.contrib.streaming.state.snapshot.RocksFullSnapshotStrategy$SnapshotAsynchronousPartCallable.callInternal(RocksFullSnapshotStrategy.java:223)
 at org.apache.flink.contrib.streaming.state.snapshot.RocksFullSnapshotStrategy$SnapshotAsynchronousPartCallable.callInternal(RocksFullSnapshotStrategy.java:176)
 at org.apache.flink.runtime.state.AsyncSnapshotCallable.call(AsyncSnapshotCallable.java:76)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:50)
 ... 7 more
 2018-12-16 06:22:14,248 WARN  org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Received late message for now expired checkpoint attempt 32 from f69eae02946afadeaaefb470472fd36d of job 7825029dc256542aa312c0b68ecf0631.
 2018-12-16 06:22:14,248 WARN  org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Received late message for now expired checkpoint attempt 32 from 4da75b007f58259167868d193208e45e of job 7825029dc256542aa312c0b68ecf0631.
{code}

Error in the Job Manager log, recurring every 20 minutes, which is the checkpointing interval (a sketch of the checkpointing setup follows the log):
{code:java}
2018-12-16 05:43:12,264 WARN  org.apache.hadoop.hdfs.DFSClient                              - DFSOutputStream ResponseProcessor exception for block BP-1761982338-88.99.139.199-1495110347706:blk_1143342640_69606587
 java.io.IOException: Bad response ERROR for block BP-1761982338-88.99.139.199-1495110347706:blk_1143342640_69606587 from datanode DatanodeInfoWithStorage[159.69.65.126:50010,DS-b0de28d5-53a5-41e8-9ff4-698520275b86,DISK]
 at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:883)
 2018-12-16 05:43:12,264 WARN  org.apache.hadoop.hdfs.DFSClient                              - Error Recovery for block BP-1761982338-88.99.139.199-1495110347706:blk_1143342640_69606587 in pipeline DatanodeInfoWithStorage[94.130.13.247:50010,DS-075ddc51-750f-4163-9c6a-1a139a265aa7,DISK], DatanodeInfoWithStorage[xxxx:50010,DS-b17733c5-35d9-47aa-8400-fbd198fcdaa6,DISK], DatanodeInfoWithStorage[xxxx:50010,DS-b0de28d5-53a5-41e8-9ff4-698520275b86,DISK]: datanode 2(DatanodeInfoWithStorage[xxxxxxx:50010,DS-b0de28d5-53a5-41e8-9ff4-698520275b86,DISK]) is bad.
 2018-12-16 05:43:12,267 WARN  org.apache.hadoop.hdfs.DFSClient                              - DataStreamer Exception
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): BP-1761982338-88.99.139.199-1495110347706:blk_1143342640_69606587 does not exist or is not under Constructionnull
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:6683)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:6751)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:930)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:966)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)

at org.apache.hadoop.ipc.Client.call(Client.java:1476)
 at org.apache.hadoop.ipc.Client.call(Client.java:1413)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
 at com.sun.proxy.$Proxy10.updateBlockForPipeline(Unknown Source)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:907)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
 at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
 at com.sun.proxy.$Proxy11.updateBlockForPipeline(Unknown Source)
 at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1290)
 at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:990)
 at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:507)
 2018-12-16 05:43:12,409 WARN  org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory  - Could not close the state stream for hdfs:/user/flink/rocksdb_v2/tracking_clicks/7825029dc256542aa312c0b68ecf0631/chk-30/9f4297ba-6966-487a-8d20-9031c5ba8273.
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): BP-1761982338-xxxxx-1495110347706:blk_1143342640_69606587 does not exist or is not under Constructionnull
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:6683)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:6751)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:930)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:966)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)

at org.apache.hadoop.ipc.Client.call(Client.java:1476)
 at org.apache.hadoop.ipc.Client.call(Client.java:1413)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
 at com.sun.proxy.$Proxy10.updateBlockForPipeline(Unknown Source)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:907)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
 at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
 at com.sun.proxy.$Proxy11.updateBlockForPipeline(Unknown Source)
 at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1290)
 at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:990)
{code}
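
For reference, the checkpointing setup looks roughly like the following. This is a minimal sketch: the 20-minute interval and the HDFS checkpoint directory are taken from the logs above, while the timeout, the non-incremental snapshot flag (suggested by RocksFullSnapshotStrategy in the stack trace), and the job name are assumptions.

{code:java}
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointSetupSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // RocksDB backend writing full (non-incremental) snapshots to HDFS;
        // the path is the checkpoint directory seen in the Job Manager log above.
        env.setStateBackend(
            new RocksDBStateBackend("hdfs:///user/flink/rocksdb_v2/tracking_clicks", false));

        // Checkpoint every 20 minutes; the timeout value is an assumption consistent
        // with checkpoint 31 expiring roughly 20 minutes after it was triggered.
        env.enableCheckpointing(20 * 60 * 1000L);
        env.getCheckpointConfig().setCheckpointTimeout(20 * 60 * 1000L);

        // Placeholder pipeline so the sketch is executable; the real job graph
        // (sources, the joining function, sinks) is omitted here.
        env.fromElements("placeholder").print();

        env.execute("tracking_clicks");
    }
}
{code}
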
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)